Google researchers say hacking attacks on journalists are likely state-backed

Morgan Marquis-Boire

A report from a pair of Google security engineers claims that 21 of the world's 25 largest news outlets have been attacked by hackers who were likely either working for governments or acting in support of them, according to Reuters.

Shane Huntley, who released the report at a Black Hat conference in Singapore this week with co-author Morgan Marquis-Boire, says that journalists were “massively over-represented” in the overall pool of people who were victims of such attacks. For example, Huntley mentioned that Chinese hackers penetrated one “major” Western news outlet using a carefully written questionnaire that was emailed to that organization’s staff members.

“If you’re a journalist or a journalistic organization we will see state-sponsored targeting and we see it happening regardless of region, we see it from all over the world both from where the targets are and where the targets are from,” Huntley said.

Part of the problem is the lack of attention paid to security by news organizations. “A lot of news organizations are just waking up to this,” said Marquis-Boire. However, individual journalists are taking steps to protect themselves and their sources, even as their organizations lag behind.

“We’re seeing a definite upswing of individual journalists who recognize this is important,” Marquis-Boire said.

Considering how many people still use passwords as simplistic as “123456,” we’re not terribly surprised that a lack of focus on security lies at the root of the problem.

Mike Epstein
Former Digital Trends Contributor
Michael is a New York-based tech and culture reporter, and a graduate of Northwestern University’s Medill School of…