Microsoft improves Bing’s ability to understand natural conversation in search

Bing just got better at understanding natural conversation when you use it for search. Microsoft made the announcement in an official blog post.

In the post, entitled “Let’s Have a Conversation,” Microsoft notes that people naturally ask follow-up questions when they engage in conversation. For instance, if you asked a history teacher when World War II started, natural follow-ups would be when it ended, how it started, and so on. Microsoft says that Bing is now able to interpret and answer these sorts of questions.

Microsoft’s example involves President Obama. In the post, Microsoft asks Bing “Who is the president of the United States?” After Bing pulls up that information successfully, a screenshot shows a user asking “Who is his wife?” Bing replies with “Michelle Obama,” and even provides the year the two were married.

From there, other images show a user asking Bing two more follow-ups: “How tall is she?” and “Who is her brother?” Bing knocks both questions out of the park.
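To make the mechanics a little more concrete, here is a minimal sketch of the general technique: the engine carries an entity over from the previous turn and substitutes it for pronouns in the next query. This is a toy illustration, not Microsoft’s actual implementation; the ConversationContext class and its methods are hypothetical.

```python
# Toy sketch of conversational query rewriting (hypothetical, not Microsoft's
# actual system): remember the entity surfaced by the previous answer and
# substitute it for pronouns in the follow-up query before searching.

POSSESSIVE_PRONOUNS = {"his", "her", "their", "its"}
SUBJECT_PRONOUNS = {"he", "she", "they", "it", "him"}


class ConversationContext:
    """Tracks the most recent entity so follow-up questions can refer to it."""

    def __init__(self):
        self.last_entity = None

    def remember(self, entity):
        # Called after each answer, e.g. remember("Michelle Obama").
        self.last_entity = entity

    def rewrite(self, query):
        # Replace pronouns with the remembered entity.
        if self.last_entity is None:
            return query
        words = []
        for word in query.split():
            lower = word.lower()
            if lower in POSSESSIVE_PRONOUNS:
                words.append(self.last_entity + "'s")
            elif lower in SUBJECT_PRONOUNS:
                words.append(self.last_entity)
            else:
                words.append(word)
        return " ".join(words)


ctx = ConversationContext()
ctx.remember("Barack Obama")
print(ctx.rewrite("Who is his wife"))   # Who is Barack Obama's wife
ctx.remember("Michelle Obama")
print(ctx.rewrite("How tall is she"))   # How tall is Michelle Obama
```

Real systems go far beyond simple substitution (entity linking, knowledge graphs, full coreference resolution), but carrying conversational state from one query to the next is the core idea the post describes.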

“These improvements build on extensive work we have done to build out the Bing platform including investments in entity and conversational understanding,” Ke says. “This is a long journey, and we expect to deliver a number of additional improvements in the days ahead.”

This may seem like a minor improvement, but the little things add up. Asking a search engine “How tall is she?” is easier than asking “How tall is Michelle Obama?” If you type thousands of characters per day, as many people do, improvements like this can pay off big time in the form of less strain on your digits, palms, and wrists.

Konrad Krawczyk
Former Digital Trends Contributor
Konrad covers desktops, laptops, tablets, sports tech and subjects in between for Digital Trends. Prior to joining DT, he…