
US, UK nuclear submarine secrets accidentally published online


A “technical error” has caused Britain’s Ministry of Defence (MoD) to inadvertently publish classified sections of a report containing sensitive information about US Navy and British military nuclear submarines, the BBC reports.

The report, published to Parliament’s website following a Freedom of Information request by anti-nuclear campaigners, reveals how much structural damage British subs can withstand before a full meltdown takes place, as well as how well US vessels can handle a nuclear core failure.

The “schoolboy error,” as the MoD has called it, was due to improper redaction of the PDF document. As British tabloid the Daily Star Sunday, which first reported the gargantuan slip-up, points out, “anyone wanting to read the censored sections just had to copy the text.” This was most likely because whoever attempted to redact the document did so with a digital black-highlighter tool, which merely draws an opaque box over the text in a PDF rather than removing it from the file.
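To see why simply copying the text defeated the “redaction,” consider a minimal sketch in Python using the pypdf library (the filename is hypothetical): a cosmetic black box lives in the page’s drawing layer, while the text itself sits untouched in the underlying content stream, where any extraction tool can read it.

```python
# Minimal sketch: recovering text "hidden" behind a black box in a PDF.
# Assumes a hypothetical file "report.pdf" that was redacted by drawing
# an opaque rectangle over the text rather than deleting the text itself.
from pypdf import PdfReader

reader = PdfReader("report.pdf")
for page in reader.pages:
    # extract_text() reads the page's text layer directly; a rectangle
    # drawn on top is ignored, so the "redacted" words come back intact.
    print(page.extract_text())
```

Proper redaction tools avoid this by deleting the underlying text from the file itself, not just hiding it from view.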

British MP Patrick Mercer, a former Army officer, said the leak could have “potentially catastrophic” results because the revealed information would be “highly interesting” to the UK’s enemies.

Immediately after the Daily Star Sunday’s report, the MoD replaced the over-sharing document with a properly redacted one and thanked the press for pointing out the error.

“The MoD is grateful to the journalist for bringing this matter to our attention,” said an MoD spokesman. “As soon as we were told about this, we took steps to ensure the document was removed from the public domain and replaced by a properly redacted version (PDF). We take nuclear security very seriously and we are doing everything possible to prevent a recurrence of this.”

Obviously, one simple way to avoid such a scenario in the future is to use an actual marker, rather than a digital one, to black out text that should not be shared with the entire world.


Andrew Couts