
New hacking challenge shows Heartbleed is as bad as we thought


You’ve probably been hearing a lot about the Heartbleed bug this week, and it’s now been confirmed that the vulnerability can be used to nab private security keys from a server. That means a rogue site could pose as a genuine one, and neither you nor your browser would be any the wiser.

A quick recap: Heartbleed allows hackers to ping vulnerable servers and read back chunks of memory containing all kinds of sensitive information, including email addresses, passwords, and credit card numbers. At first, there was some debate about whether this information could include private SSL keys, in many ways the most valuable data for a hacker; now we have confirmation that it can.
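To see why a simple “ping” can leak secrets, it helps to know that a TLS heartbeat request includes a payload and a field stating the payload’s length, and vulnerable OpenSSL versions trusted that stated length without checking it. The toy Python sketch below simulates the idea; it is not OpenSSL code, and the message layout, server memory contents, and function names are illustrative assumptions only:

```python
import struct

# Simulated server memory: the heartbeat payload happens to sit next
# to other data still in memory (the "secrets" here are made up).
SERVER_MEMORY = b"ping" + b"user=alice&password=hunter2;session=abc123"

def vulnerable_heartbeat(request: bytes) -> bytes:
    # Toy heartbeat format: 1-byte type, 2-byte claimed payload length,
    # then the payload itself. The Heartbleed bug (CVE-2014-0160) was
    # that the server trusted the claimed length and never compared it
    # to the number of bytes actually received.
    msg_type, claimed_len = struct.unpack(">BH", request[:3])
    # Echo back 'claimed_len' bytes -- an over-read that can spill
    # whatever data sits beyond the real payload.
    return SERVER_MEMORY[:claimed_len]

# Honest request: 4-byte payload "ping", length field says 4.
honest = struct.pack(">BH", 1, 4) + b"ping"
# Malicious request: same 4-byte payload, but the length field claims 100.
evil = struct.pack(">BH", 1, 100) + b"ping"

print(vulnerable_heartbeat(honest))  # just the payload back
print(vulnerable_heartbeat(evil))    # payload plus adjacent "secrets"
```

Repeated a few thousand times against a real server, each over-sized request returns a different slice of memory, which is how researchers eventually fished out fragments of private keys.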

White-hat hackers Fedor Indutny and Ilkka Mattila successfully took on the Heartbleed hacking challenge laid down by Web performance and security company CloudFlare. “We confirmed that both of these individuals have the private key and that it was obtained through Heartbleed exploits,” said CloudFlare.

Having access to these private keys means hackers can return even after the Heartbleed exploit has been closed to steal more information — it’s akin to having the keys to a car rather than having to smash through the window. Only when server security certificates are updated (i.e. the locks are changed) will the bad guys be foiled, and that’s going to take some time.

Big-name companies including Google, Yahoo and Dropbox are scrambling to update their systems to close the Heartbleed loophole, but the danger is far from over. Stay tuned to our lists of apps and websites that are affected for details of how to protect yourself, and follow any prompts you receive to reset your passwords from the online services you use.

[Image courtesy of Heartbleed.com / Karen Blaha]

David Nield