
Here’s how to destroy a computer holding government secrets


The Guardian has released a new video showing staff destroying computers holding information leaked by NSA whistleblower Edward Snowden. Under the watchful eye of technicians from GCHQ (Government Communications Headquarters), the British spy agency, editors used drills and angle grinders to obliterate a series of hard drives together with all the data stored on them.

While the video has only just seen the light of day, the events it depicts date from last summer. After a series of tense meetings between 10 Downing Street and the Guardian’s editor Alan Rusbridger, it was decided that the British newspaper would take an axe to its own records rather than face legal action from the government.

However, both sides were fully aware that the data was mirrored elsewhere: "It was purely a symbolic act," said deputy editor Paul Johnson. "We knew that. GCHQ knew that. And the government knew that. It was the most surreal event I have witnessed in British journalism." Subsequent stories were published from the Guardian's offices in the U.S.

You can view the video in full on the Guardian site if you enjoy the sight of power tools ripping through computer hardware. It was posted to promote a new book on the events by Guardian correspondent Luke Harding, but PR stunt or not, it still makes for an interesting watch. A high-tech degausser, which erases hard drive data with intense electromagnetic fields, also makes an appearance.

Material leaked by Edward Snowden to the Guardian was originally kept on four separate laptops that had never been connected to the Internet or any other network. Round-the-clock security guards, multiple passwords, and a ban on electronics around the laptops helped keep the information from spreading further.

David Nield