
FBI tackles Coreflood botnet infecting 2.3 million PCs


The Department of Justice and FBI have scored a big victory against a major international cyber theft ring suspected of stealing more than $100 million.

The thieves used malware called Coreflood to build a network of 2.3 million remotely controlled zombie PCs, also known as a botnet. The botnet harvested banking credentials and other sensitive data, which were then used to steal large sums through wire and bank fraud. The botnet had been growing for roughly a decade.

More than half of those computers were located within the United States, though the culprits are thought to be overseas, possibly in Russia, according to Alan Paller, director of research at the SANS Institute. A Michigan real estate company and a North Carolina investment company each lost more than $100,000, but the full extent of the losses isn’t yet known because of the sheer quantity of data stolen.

The Coreflood botnet was taken down by U.S. government programmers yesterday. The Department of Justice and the FBI took control of five command-and-control servers and seized 29 domain names used to run the botnet. Government programmers then instructed the infected PCs to stop the malware and shut it down.
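In broad strokes, this kind of takedown works by "sinkholing": once the seized domains point at government-controlled servers, infected machines check in with those servers instead of the criminals', and the server can answer each check-in with the bot's own stop command. The sketch below is purely illustrative, assuming a hypothetical plain-text protocol where a bot beacons on connect and honors a STOP reply; Coreflood's actual protocol was proprietary and is not described here.

```python
import socket

# Hypothetical sinkhole server: listens at the address the seized C&C
# domains now resolve to, logs each bot's check-in, and replies with a
# "STOP" command. The beacon/STOP exchange is invented for illustration;
# it is not Coreflood's real protocol.
HOST, PORT = "0.0.0.0", 8080

def run_sinkhole() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        print(f"sinkhole listening on {HOST}:{PORT}")
        while True:
            conn, addr = srv.accept()
            with conn:
                beacon = conn.recv(1024)   # the bot's check-in message
                print(f"check-in from {addr[0]}: {beacon!r}")
                conn.sendall(b"STOP\n")    # tell the bot to shut itself down

if __name__ == "__main__":
    run_sinkhole()
```

In the real operation, identifying which machines checked in also gave investigators the list of infected computers they are now sharing with service providers.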

Those worried about their own computers have little recourse but to wait it out. Government officials are working with Internet service providers to determine which computers have been infected, and the FBI and Department of Justice have stated that law enforcement has no authority to access data on the infected machines it identifies.

The Coreflood takedown comes on the heels of the slightly larger Rustock botnet, said to be responsible for close to half of global spam, which went silent in March.

Jeff Hughes
Former Digital Trends Contributor
I'm a SF Bay Area-based writer/ninja who loves anything geek, tech, comic, social media or gaming-related.