
Google will pay you $100K if you can pull off the ultimate Chrome hack

Google has doubled the top reward in its bug bounty program for Chrome from $50,000 to $100,000 in the hopes of encouraging more white hat hackers to collaborate on patching bugs and vulnerabilities.

The Chrome Reward Program, launched six years ago, invites hackers to try to compromise the security of Chrome devices and Chrome OS.

The new $100,000 top reward applies only to a “persistent compromise” of a Chromebook in guest mode. The challenge has so far had no winners, but, according to Google, “great research deserves great awards,” and the company hopes the hefty payout will encourage more research into Chromebook security.

Google has also added a new reward for anyone who can bypass Chrome’s Safe Browsing download protection features, with a baseline payout of $500.

Google has been pretty open with its bug bounty program over the years. In 2015, it paid out more than $2 million to security researchers who had discovered and disclosed vulnerabilities in various Google services, and it has paid more than $6 million in total since 2010.

The company runs several different bug bounty programs, including one for Android that pays up to $8,000 for a critical flaw and a wider vulnerability disclosure program covering sites and services like Google.com, YouTube, and Blogger that pays up to $20,000.

Bug bounties are a popular way for tech companies to solicit help from the hacker and security communities in finding dangerous flaws and vulnerabilities that may have gone under the radar. By paying out generous rewards, companies can encourage hackers to disclose bugs privately rather than exploit them or sell them on the dark web.

The method seems to be catching on. Facebook recently paid out $15,000 over a serious bug that left everyone’s profile vulnerable. The Department of Defense has launched its own bug bounty program, Hack the Pentagon, to put its own website to the test.

Jonathan Keane
Former Digital Trends Contributor