Google awards teenager $36,000 as part of its bug bounty program

Google has awarded a Uruguayan teenager for reporting a vulnerability that would have allowed hackers to make changes to the company’s internal systems. This marks the fifth bug that Ezequiel Pereira has submitted to Google’s bug bounty program. It’s also the most valuable, earning him more than $36,000.

Pereira got his first computer at age 10 and has been programming since he took an intro class at age 11. He spent years teaching himself various programming languages and participated in several coding contests, including one that earned him a trip to Google’s headquarters in California.

His drive for bug hunting began when he was younger: he said he quickly found a bug that earned him $500 and has been hooked ever since.

“I found something almost immediately that was worth $500 and it just felt so amazing,” he told CNBC. “So I decided to just keep trying ever since then.”

Pereira found the bug earlier this year and reported it to Google. He only recently received permission to discuss it and how he found it, once Google confirmed that the issue had been resolved.

In June of last year, Pereira discovered a bug that earned him $10,000 and used part of that money to apply for scholarships to U.S. universities. None of the schools he reached out to accepted him, so he is currently studying computer engineering in his hometown of Montevideo. He is hopeful that he’ll be able to use his earnings to fund his education, as he hopes to one day earn a master’s degree in computer security. Apart from his education, he has no major plans for his earnings aside from helping his mother pay the bills.

As of right now, Pereira has only submitted bugs to Google’s bug bounty program, but many tech and video game companies offer similar rewards for discovering and reporting bugs. The companies are hopeful that offering monetary rewards will encourage hackers to report exploits rather than sell them to bad actors.

For his part, Pereira has encouraged his friends and others to get involved with bug hunting. When his friends say they don’t think they have the proper knowledge, Pereira replies that “anyone can learn these things.”

Eric Brackett
Former Digital Trends Contributor
A dangerous new jailbreak for AI chatbots was just discovered
Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
