
Google cracks CAPTCHA with an algorithm that’s 99.8 percent accurate


CAPTCHA, the security tool many websites use to differentiate between humans and bots, isn’t impenetrable after all. Google has created an algorithm that can read and answer CAPTCHA fields with 99.8 percent accuracy, ZDNet reports.

Developed by the Google Street View team, the algorithm was designed to recognize characters on storefronts and street signs in blurry images. It can read 90 percent of street numbers, according to Google. The discovery is great news for Google Maps, which will now be able to mine far more information from that imagery to expand its database.

In a paper presenting the results of their research, the team announced that the same software can read reCAPTCHA answers with 99.8 percent accuracy. Contrary to what you might think, however, the development team claims their work has not jeopardized reCAPTCHA. New “advanced risk analysis techniques” applied to CAPTCHA in the last year allegedly examine what users do before, during, and after engaging the text field, which helps the software determine whether they’re human regardless of whether an answer is right or wrong.

“It’s important to note that simply identifying the text in CAPTCHA puzzles correctly doesn’t mean that reCAPTCHA itself is broken or ineffective,” Google Product Manager Vinay Shet wrote on the company’s online security blog. “On the contrary, these findings have helped us build additional safeguards against bad actors in reCAPTCHA.”

Mike Epstein
Former Digital Trends Contributor
Michael is a New York-based tech and culture reporter, and a graduate of Northwestern University’s Medill School of…