Hacker earns $225,000 at Pwn2Own 2015

The 2015 edition of Pwn2Own is over. Participants discovered an incredible 21 critical bugs, resulting in a combined payout of $557,500.

Almost half of the money went to Jung Hoon Lee, aka lokihardt, who demonstrated a nasty attack against Chrome. His hack started with a buffer overflow race condition in the browser, then broke out of the security sandbox that's supposed to keep exploits from spilling over into Windows by attacking two separate Windows kernel drivers. By the time the dust had settled, Lee had gained full system-level access.
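To make that bug class concrete, here's a minimal, hypothetical C sketch of a race-condition buffer overflow: a size check passes, another thread changes the value, and the now-stale check is trusted anyway. This is purely illustrative of the general pattern described above; Lee's actual Chrome exploit was never published, and every name below is made up.

```c
/*
 * Illustrative sketch only: a generic time-of-check/time-of-use (TOCTOU)
 * race that becomes a buffer overflow. NOT Lee's exploit; all names
 * are hypothetical.
 */
#include <string.h>

#define BUF_SIZE 64

struct shared_msg {
    size_t len;        /* attacker-controlled, mutable from another thread */
    char   data[256];
};

static char dest[BUF_SIZE];

void handle_message(struct shared_msg *msg) {
    /* Time of check: the length looks safe here... */
    if (msg->len <= BUF_SIZE) {
        /* ...but another thread can enlarge msg->len at this instant. */
        /* Time of use: the stale check no longer holds -> overflow.   */
        memcpy(dest, msg->data, msg->len);
    }
}

/* The usual fix: read the shared value once into a local, then check
 * and use only that copy, so check and use can't diverge. */
void handle_message_safe(struct shared_msg *msg) {
    size_t len = msg->len;  /* single read, immune to the race */
    if (len <= BUF_SIZE)
        memcpy(dest, msg->data, len);
}
```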

That was enough to make him $110,000 richer. He earned $75,000 for breaking into Chrome, $25,000 for escalating to a system-wide attack, and $10,000 for proving the attack works against both the stable and beta versions of the browser.

Lee also executed an attack against Internet Explorer 11 that earned him $65,000 and demolished Safari with an exploit and sandbox escape that earned him $50,000. In total he took home $225,000. Not bad for a two-day event!

As impressive as Lee's attacks were, he didn't set the record for the most won by a single competitor. That honor goes to a French firm called VUPEN, which earned $400,000 in 2014 by demonstrating a range of attacks against Chrome, Firefox, Internet Explorer, Adobe Reader, and Adobe Flash that involved 11 zero-day exploits. VUPEN is an organization, though, not an individual; Lee's winnings are the most earned by a single person so far.

Pwn2Own is an annual hacking competition that has run since 2007 and is sponsored by HP's Zero Day Initiative. It's meant to give hackers an incentive to reveal new attacks to software developers before they're used in the wild.

Matthew S. Smith