Microsoft to use Bing search data to predict outcomes of reality shows


Search data can be used to detect patterns in a nearly endless array of behaviors across countless subjects. That includes the results of reality TV shows, which Bing will now attempt to predict, according to an official blog post from Microsoft.

Beginning today, Bing will attempt to forecast the results of shows like “The Voice,” “American Idol,” and “Dancing With the Stars” by scanning search data, along with “social input” from Facebook and Twitter. For instance, if you head over to Bing right now and search “American Idol predictions,” as we did, the top of the page will feature a set of forecasts for five singers. We’ll refrain from including any potential Bing-generated spoilers here, but you’re free to check out what the search engine thinks for yourself.

“In broad strokes, we define popularity as the frequency and sentiment of searches combined with social signals and keywords. Placing these signals into our model, we can predict the outcome of an event with high confidence,” the Bing Predictions Team says in its blog post.
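Microsoft doesn’t detail the model beyond that broad-strokes description, but to make the idea concrete, here is a minimal sketch of how a popularity score along those lines might be computed. Every field name, weight, and number below is a hypothetical assumption for illustration, not Microsoft’s actual formula.

```python
# Illustrative sketch only: Microsoft has not published its model.
# All weights, field names, and the scoring formula are hypothetical.

from dataclasses import dataclass

@dataclass
class ContestantSignals:
    search_frequency: float   # normalized share of searches (0..1)
    search_sentiment: float   # mean sentiment of search queries (-1..1)
    social_mentions: float    # normalized share of Facebook/Twitter mentions (0..1)
    social_sentiment: float   # mean sentiment of social posts (-1..1)

def popularity_score(s: ContestantSignals) -> float:
    """Combine search and social signals into one popularity score.

    Mirrors the blog post's definition: the frequency and sentiment of
    searches combined with social signals. The 0.6/0.4 weighting is an
    assumption, not Microsoft's.
    """
    search_component = s.search_frequency * (1.0 + s.search_sentiment)
    social_component = s.social_mentions * (1.0 + s.social_sentiment)
    return 0.6 * search_component + 0.4 * social_component

def predict_ranking(signals: dict[str, ContestantSignals]) -> list[str]:
    """Rank contestants from most to least likely to advance."""
    return sorted(signals, key=lambda name: popularity_score(signals[name]), reverse=True)

# Example usage with made-up numbers:
if __name__ == "__main__":
    field = {
        "Contestant A": ContestantSignals(0.45, 0.30, 0.50, 0.25),
        "Contestant B": ContestantSignals(0.35, 0.10, 0.30, 0.40),
        "Contestant C": ContestantSignals(0.20, -0.05, 0.20, 0.05),
    }
    print(predict_ranking(field))
```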

Microsoft also says that Bing accounts for emotionally driven factors when generating predictions, allegedly correcting for biases like favoritism, which can keep a contestant popular regardless of how he or she performs from one week to the next.

Bing’s prediction service won’t start and end with reality TV shows, though.

“You can expect more from us in this area beyond predictions for voting shows like The Voice, American Idol and Dancing With the Stars,” the Bing Predictions Team says.

We can’t help but wonder what Bing will be attempting to predict next, and how accurate the search engine’s forecasts will be. What’s clear is that we won’t have to wait too long to find out.

Konrad Krawczyk