
New browser exploit tracks even the most paranoid web users

guteksk7/Shutterstock
When it comes to tracking your web browsing, webmasters have all sorts of options – many of which web users actively block. But what if a malicious website owner could turn security features against you?

A researcher proved it’s possible to do just that over the weekend.

Most web users are aware that sites can use cookies or browser fingerprinting to track you – it’s why so many users make a habit of deleting cookies, scrambling their user agents, and taking advantage of Incognito Mode.

But in a presentation over the weekend, security researcher Yan Zhu showed the world a new tracking method that gets around even the most paranoid users by exploiting the certificates your browser uses to connect to secure sites.

Don’t believe me? Try Zhu’s site Sniffly out for yourself in Chrome or Firefox, and you’ll probably end up with an accurate list of sites you have and haven’t visited.

icymi, sniffing browser history using HSTS/CSP code + demo is up at https://t.co/iAxVPyOGzv. it's called that b/c i had a cold last week.

— yan (@bcrypt) October 26, 2015

To (dramatically) simplify what’s going on here, the exploit attempts to load various images from encrypted domains, then detects whether your browser can establish a secure connection with those sites. If it can connect, it’s because you have an HSTS pin for the site – so there’s a good chance you’ve visited the site before.
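To make the idea concrete, here is a minimal sketch of the kind of timing check such a probe relies on. This is an illustration only, not Sniffly’s actual code: the function names, the threshold value, and the favicon URL are all hypothetical. The core intuition is that a request to a plain `http://` URL on an HSTS-pinned domain gets rewritten to `https://` inside the browser (and fails almost instantly if a policy blocks it), while a domain with no cached HSTS entry triggers a real network round trip that takes noticeably longer to fail.

```typescript
// Simplified sketch of an HSTS-based history probe (illustrative only;
// names and the threshold are assumptions, not Sniffly's real values).

type Verdict = "likely visited" | "likely not visited";

// Pure decision logic: a near-instant failure suggests the browser
// rewrote http:// to https:// from a cached HSTS entry; a slow failure
// suggests a genuine network attempt, i.e. no HSTS entry was cached.
export function classify(errorAfterMs: number, thresholdMs = 10): Verdict {
  return errorAfterMs < thresholdMs ? "likely visited" : "likely not visited";
}

// Browser-side probe (runs only in a browser; shown for shape, not reuse).
export function probe(domain: string, report: (v: Verdict) => void): void {
  const img = new Image();
  const start = performance.now();
  // The image is requested over http:// on purpose; an HSTS pin forces
  // an internal upgrade to https://, changing how quickly the load fails.
  img.onerror = () => report(classify(performance.now() - start));
  img.src = `http://${domain}/favicon.ico`; // hypothetical probe target
}
```

The interesting part is that nothing here reads your history directly – the verdict is inferred purely from how fast the image load fails, which is why clearing cookies or browsing in Incognito Mode doesn’t necessarily defeat it.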

It’s a simple way to get a quick list of which secure sites you have and haven’t visited. The information collected this way is less reliable, only relates to sites encrypted using HTTPS, and is less specific than other methods – the sites you’ve visited are revealed, not the individual pages. But it’s still noteworthy, because nothing like it has been done before.

You can watch Zhu’s entire presentation, read the slides, or check out Sniffly on GitHub if you want a more complete breakdown of how the exploit works.

Justin Pot
Former Digital Trends Contributor
Justin's always had a passion for trying out new software, asking questions, and explaining things – tech journalism is the…