
Hiybbprqag.com now redirects you to Google job listings

The Google versus Bing spat has escalated, and out of it "hiybbprqag" has made its way into our vocabulary. The now-infamous term was originally used by Google to test whether Bing was copying its search results: Google engineers rigged a system of faux results for uncommon or misspelled queries, including user searches for "hiybbprqag."

As the search engines battle it out and the public continues to try to discern who's wronging whom (we ran our own test), "hiybbprqag" is rising to infamy. Stephen Colbert even labeled it "a word meaning you got served."

But amid the war of words the two search engine giants are engaging in, a Taiwanese engineer believed to work for Google is helping his company capitalize on its recent publicity. Chih-Chung Chang has redirected the http://www.hiybbprqag.com URL so that visitors land on the Google Jobs page. According to Search Engine Land, Chang was quick, too: He snatched the domain name on February 1, just as the battle began. That's one way for Google to get some extra attention during its hiring spree.
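Neither Chang nor Google has said how the redirect was set up, but pointing a domain at another page usually comes down to answering requests with an HTTP 301 "moved permanently" response. Here's a minimal, hypothetical Python sketch of that idea (the target URL and port are our own illustrative assumptions, not details from the story):

from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed destination; the story only says visitors land on "the Google Jobs page."
TARGET = "https://www.google.com/jobs"

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 301 tells browsers (and search crawlers) the move is permanent.
        self.send_response(301)
        self.send_header("Location", TARGET)
        self.end_headers()

if __name__ == "__main__":
    # Port 8080 for illustration; a live site would answer on port 80.
    HTTPServer(("", 8080), RedirectHandler).serve_forever()

In practice a registrar's domain-forwarding option or a one-line web server rewrite rule accomplishes the same thing without any custom code.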

This whole thing has taken a turn for the hilarious. You’re up, Bing.

Molly McHugh
Former Digital Trends Contributor