Europol busts behemoth botnet, 3.2 million strong

This week, one of the world’s largest active botnets was finally brought down in an operation undertaken by a Europol task force with the help of specialists from Symantec, Microsoft, and Anubis Networks.

Built on the notorious Ramnit malware, the 3.2 million-strong botnet was used for a slew of nefarious activities, including massive spam campaigns, DDoS attacks, and malware distribution across thousands of separate networks.

Though the team behind the bust declined to name the group responsible for Ramnit, it told reporters that servers were seized in four different countries, each operating independently of the others and handling a different piece of the operation that together sustained the crime spree.

Ramnit was versatile, capable of everything from flooding social networks with infected links to planting backdoor trojans on individual systems. The tool was reportedly favored by many high-profile hackers for its modular design, which could be actively updated to counter efforts to halt its spread as it moved from one machine to the next.

When asked about the impact of the takedown, Steve Rye of the UK’s National Crime Agency told reporters that “…as a result of this action, the world is safer from RAMNIT, but it is important that individuals take action now to disinfect their machines, and protect their personal information.”

Chris Stobing
Former Digital Trends Contributor