
Google pulls AVG for flawed security extension that exposed user data

Google has discovered that AVG’s free anti-malware tool Web TuneUp put up to nine million Chrome users at risk of exposing their personal data by altering settings in the browser.

As a result, AVG’s tool has been banned from installing automatically when a user installs the company’s anti-virus software. About nine million people currently have the Web TuneUp extension installed in Chrome.

Tavis Ormandy, a Google Project Zero researcher, said the extension leaked browsing history and data online where a knowledgeable attacker could exploit the vulnerability to snoop on what sites a person had logged into. In one example, a malicious actor could hijack the Gmail account of an unsuspecting user or steal passwords.

Ormandy found that the extension was force-installing itself and left users with no means to opt out. “Apologies for my harsh tone, but I’m really not thrilled about this trash being installed for Chrome users,” wrote Ormandy in an email to AVG, describing the extension as “so badly broken.”

“My concern is that your security software is disabling web security for 9 million Chrome users, apparently so that you can hijack search settings and the new tab page,” he said. “I hope the severity of this issue is clear to you, fixing it should be your highest priority.”

According to Ormandy’s correspondence with AVG, the initial patch did not resolve the issue, but a subsequent update released on Tuesday of this week met with his approval. “The vulnerability has been fixed; the fixed version has been published and automatically updated to users,” said AVG in a statement, thanking Google for bringing the issue to its attention.

Regardless, the Web TuneUp extension has still been blocked from auto-installing. AVG has provided no further comment on the matter.

This is AVG’s second run-in this year with security researchers auditing its software. In March, its software was found to contain flawed code that could disable Windows security features. These sorts of issues highlight why users should be especially cautious of software that promises protection, as it could be doing the very opposite.

Jonathan Keane
Former Digital Trends Contributor
Jonathan is a freelance technology journalist living in Dublin, Ireland. He's previously written for publications and sites…