New Sony Pictures movies leaked online following recent hack

The hack that hit Sony Pictures last week has reportedly led to the leak of at least five of its latest movies, with copyright-infringing download sites seeing rampant activity in connection with the titles.

Movies nabbed by hackers in the high-profile attack on Sony’s systems include the yet-to-be-released Annie, Mr. Turner, Still Alice, and To Write Love on Her Arms, as well as Fury starring Brad Pitt, Variety reported over the weekend.

“The theft of Sony Pictures Entertainment content is a criminal matter, and we are working closely with law enforcement to address it,” Sony told the entertainment magazine.

Data from piracy-tracking company Excipio reveals that the DVD screener of Fury, which landed in cinemas in October, has so far seen more than 1.2 million downloads via file-sharing sites.

The hack hit Sony’s internal systems at the start of last week, forcing executives and employees at the entertainment giant to suspend all use of online communication tools until further notice.

The intrusion became apparent when computers on the network began displaying the message, “Hacked by #GOP,” apparently short for “Guardians of Peace.”

This was accompanied by threats to reveal “top secrets” of Sony, suggesting the hackers had gotten hold of a large amount of sensitive information belonging to the company.

The source of the security breach isn’t currently known, though Re/code reported in recent days that the company was looking into the possibility that the hackers were working on behalf of North Korea. The attack came just weeks before the release of The Interview, a Sony-backed Seth Rogen flick about a CIA plot to assassinate North Korean leader Kim Jong-un.

As for Sony workers wondering when they’ll be able to start using their internal computer network again, reports over the weekend suggest it could be back up and running sometime today.

Trevor Mogg
Contributing Editor