Skip to main content

FBI warns U.S. firms about ‘destructive’ malware after Sony Pictures hack

Just days after Sony Pictures’ internal computer system was hit by hackers, the FBI has issued a so-called “flash warning” to U.S. businesses warning them to beware of specific malicious software that could cause havoc on their own networks.

The report, issued to firms across the country on Monday evening, reveals some details about the malware and how it was used in a recent attack, but, as is usual with such flash warnings, doesn’t name the specific company involved. However, the nature and timing of the five-page warning suggests it could be linked to the recent high-profile attack on Sony Pictures.

According to Reuters, which managed to obtain a copy of the report, the FBI’s warning to firms explained that the destructive software has the ability to make computers inoperable and shut down networks.

Such reports are issued by the FBI to businesses when it discovers an emerging and potentially damaging cyber threat. The warning gives security experts at the firms a chance to check their systems and protect themselves against a potential attack.

It seems likely the FBI’s warning is linked to the attack that hit Sony Pictures early last week. Employees at the company first became aware of the intrusion when computers across the network began showing the message “Hacked by #GOP,” apparently short for “Guardians of Peace.”

The incident shut down servers and reportedly exposed a large amount of sensitive company data, including a number of unreleased movies that were apparently nabbed in the attack and subsequently posted online.

The source of the data breach isn’t currently known, though it’s been suggested that hackers working on behalf of North Korea could be to blame as it came just a few weeks before the release of The Interview, a Sony-backed movie about a CIA plot to assassinate Kim Jong-un, the North Korean leader.

The regime earlier this year made clear how it felt about the movie, saying its release would be tantamount to “an act of war.”

Trevor Mogg
Contributing Editor
Not so many moons ago, Trevor moved from one tea-loving island nation that drives on the left (Britain) to another (Japan)…
A dangerous new jailbreak for AI chatbots was just discovered
the side of a Microsoft building

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keeps ChatGPT from going full Taye.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.

Read more