
Google calls out Bing for copying its search results

[Image: spelling — side-by-side search results for a misspelled query]

They say imitation is the sincerest form of flattery, and if that’s the case, Bing must really admire Google. According to Google, the Microsoft search engine (and arguably its closest competitor) has been copying what people search for and select on Google, then using that data to tweak its own algorithm.

“I’ve spent my career in pursuit of a good search engine. I’ve got no problem with a competitor developing an innovative algorithm. But copying is not innovation, in my book,” says Amit Singhal, a Google Search engineer.

Search Engine Land broke the story this morning after following it for months. The site reports that in order to bust Bing, Google executed an undercover “sting operation” last year, after noticing in May that Bing was returning nearly the same results as Google when users entered a misspelled word (see the photo at top). As the investigation continued, Google also noticed a jump in how often it and Bing listed the same page as the top result.

Fairly convinced of what was happening, Google still had to catch Bing in the act. Engineers developed a “one-time code that would allow it to manually rank a page for a certain time.” Google then created faux searches for extremely uncommon terms, and once the experiment went live, it took little more than 10 days for the planted results to start showing up on Bing. Before the experiment, these searches returned next to nothing on either Google or Bing; Google manually configured certain pages to appear for them, and those same pages soon made their way into Bing’s search results.

[Image: testing — results of Google’s sting experiment]

So it seems clear Bing has been directly taking what Google users searched for and clicked on, and using those choices to shape its own search results. Bing has grown its user base by touting itself as a more convenient search engine, and has separated itself from competitors on the strength of its results. “Unlike most search engines, Bing serves up more than long lists of links. We organize our Search results so they’re easy to read and you can make informed choices…faster,” reads its own site.

And the company isn’t denying Google’s accusations. Bing director Stefan Weitz e-mailed Search Engine Land the following statement:

As you might imagine, we use multiple signals and approaches when we think about ranking, but like the rest of the players in this industry, we’re not going to do deep and detailed on how we do it. Clearly, the overarching goal is to do a better job determining the intent of the search, so we can guess at the best and most relevant answer to a given query.

Opt-in programs like the Bing toolbar help us with clickstream data, one of the many input signals we and other search engines use to help rank sites. This “Google experiment” seems like a hack to confuse and manipulate some of these signals.

What Bing is doing (since it’s all but confirmed) isn’t illegal, but it definitely falls into a moral gray area. Still, its users are probably happy with the search engine’s performance, and Google technically can’t do anything about it except pout. “It’s cheating to me because we work incredibly hard and have done so for years, but they just get there based on our hard work. I don’t know how else to call it but plain and simple cheating. Another analogy is that it’s like running a marathon and carrying someone else on your back, who jumps off just before the finish line,” Singhal says.

Molly McHugh
Former Digital Trends Contributor
Before coming to Digital Trends, Molly worked as a freelance writer, occasional photographer, and general technical lackey…