
Yelp is offering ‘nice’ hackers up to $15,000 to squash its bugs

White-hat hackers, take note: another money-making opportunity has just landed.

Review site Yelp has, perhaps not before time, announced a public bug bounty program with a top payout of $15,000.

Security experts have been invited by Yelp to dig into its range of desktop and mobile sites to uncover weaknesses and flaws that could allow nefarious types to wreak havoc on its vast online business.

Yelp guarantees a minimum payout of $100 for every accepted report, though should you uncover the kind of critical flaw that would ordinarily cause a serious-minded developer to break into a cold sweat at the mere thought of its existence, you could be in line for the top cash award of $15,000. Or something close to it.

The online review giant is running its bug bounty program with HackerOne, a Silicon Valley firm that offers such services. A webpage dedicated to the Yelp program offers updates on payouts, and a quick look shows that in less than 24 hours two hackers have already picked up $100 each for their efforts.

This latest bug-squashing venture is actually an expansion of a private bug bounty program that Yelp launched two years ago. That one helped the company identify and fix more than 100 potential vulnerabilities, but it hopes that taking the program public will help it quickly close down any remaining weaknesses lurking in the depths of its online services.

Aware of the mind-blowing talent of some researchers, Yelp is asking bug hunters to “please be nice to us.” On its HackerOne page, the San Francisco-based company says, “We want you to bring out your big guns, but hold off on actually breaking anything. Please avoid DDoS’ing us or breaking our systems and services while you are testing.”

Yelp has posted an additional article laying out exactly what it wants security researchers to look for, so if you enjoy tinkering under the hood and are up for a challenge, go check it out.

Trevor Mogg
Contributing Editor