
Facebook launches paid program to find glitches

On Friday, social networking giant Facebook announced a program that pays people to find holes in its security system. Compensation starts at $500, and so far no financial ceiling has been set.

Naturally, you must be the first person to report a specific bug; no bounty is paid out twice for the same error. Facebook notes that some who submitted security flaws in the past — and received little compensation beyond maybe a t-shirt — were eventually brought onto the Facebook security team.

“Typically, it’s no longer than a day” to fix a bug, Facebook Chief Security Officer Joe Sullivan told CNET in a conference call.

Only those who legally agree to Facebook’s Responsible Disclosure Policy (which states that they will not publish or otherwise make their findings available) will be allowed to participate. In Facebook’s typical menacing-and-friendly-at-the-same-time sort of way, the company states, “If you give us a reasonable time to respond to your report before making any information public and make a good faith effort to avoid privacy violations, destruction of data and interruption or degradation of our service during your research, we will not bring any lawsuit against you or ask law enforcement to investigate you.”

Facebook has said that it will allow registered researchers, as they’re being called, to set up test accounts so they don’t have to worry about their own accounts when going to work.

There are also exceptions to what Facebook will pay for: security bugs in third-party applications, third-party websites that integrate with Facebook, and Facebook’s corporate infrastructure are excluded, as are denial-of-service vulnerabilities and spam or social-engineering techniques.

With regard to that last exclusion, a lot of Facebook users probably wouldn’t mind if the company eventually opened the floodgates against News Feed spam. We can only hope.

Either way, let the games begin.

Caleb Garling