
Mozilla quickly patches security hole on Nobel Prize site

Mozilla might be having trouble with its deadlines for Firefox 4, but when it comes to patching a dangerous hole in its browser, it acted remarkably quickly.

On Tuesday, visitors to the Nobel Prize website were attacked by malware that redirected them to a hacker-controlled Taiwanese site, which attempted to plant a Trojan on the unlucky users’ PCs. But the window for infiltrating vulnerable systems closed quickly, as Mozilla responded with a patch within 48 hours. A simple download of Firefox 3.6.12, available for Mac, Linux, and Windows, now closes the hole the malware exploited.

The now-disabled Trojan worked by installing code on vulnerable systems that hijacked the machine and handed control to the attacker. Mozilla’s e-mail client Thunderbird was also susceptible to the attack, which ultimately proved unreliable and was easily dealt with by Mozilla.

To add controversy to the story, there’s speculation that the zero-day attack had ties to jailed Chinese Nobel Prize winner Liu Xiaobo, a democracy activist imprisoned for his defiance of the Chinese government. Because the attack was traced to a Taiwanese server, some speculate it was aimed at visitors to the site who support Liu. There’s little proof that these claims are anything more than conspiracy theories, however, as critics have pointed out that it was an amateurish attempt at spreading the malware.

Either way, users who have followed Mozilla’s instructions to install the latest, patched version of Firefox can safely visit the Nobel Prize site.

Molly McHugh
Former Digital Trends Contributor