
London police arrest five possible Anonymous hackers

Five suspected Anonymous members were arrested in London today for violating the Computer Misuse Act. The men range in age from 15 to 26 and face up to 10 years in prison and £5,000 in fines.

According to the Metropolitan Police’s website, the arrests stem from an ongoing investigation into the DDoS attacks that followed the arrest of WikiLeaks founder Julian Assange. Various companies refused to host WikiLeaks, and the site has faced persistent blocks since its release of classified U.S. cables. In response, a group of “hacktivists” calling themselves Anonymous launched repeated attacks on WikiLeaks’ opponents, including Visa, Mastercard, PayPal, and Amazon, with varying degrees of success.

Anonymous claims to be a leaderless organization without an established hierarchy. Authorities worldwide have been investigating the group for its role in the recent wave of Web attacks, which, to date, has led to arrests in the Netherlands.

The group is being targeted stateside as well. Less than a month ago, we learned that the FBI was attempting to trace IP addresses connected to the DDoS attacks and that it had seized several hard drives. Twitter was also subpoenaed to hand over user information related to WikiLeaks, which could pertain to members of Anonymous who used the site to coordinate the attacks.

Anonymous recently announced it would be targeting the Egyptian government for its censorship of social media outlets in an attempt to quell political protests.

Molly McHugh
Former Digital Trends Contributor