
Microsoft may have ignored warnings about Bing Chat’s unhinged responses

Microsoft’s Bing Chat is in a much better place than it was at its February launch, but it’s hard to overlook the issues the GPT-4-powered chatbot had at release. It told us it wanted to be human, after all, and often broke down into unhinged responses. And according to a new report, Microsoft was warned about these types of responses and decided to release Bing Chat anyway.

According to the Wall Street Journal, OpenAI, the company behind ChatGPT and the GPT-4 model powering Bing Chat, warned Microsoft about integrating its early AI model into Bing Chat. Specifically, OpenAI flagged “inaccurate or bizarre” responses, which Microsoft seems to have ignored.

Image: Bing Chat saying it wants to be human. Jacob Roach / Digital Trends

The report describes a unique tension between OpenAI and Microsoft, which have entered into something of an open partnership over the last few years. OpenAI’s models are built on Microsoft hardware (including thousands of Nvidia GPUs), and Microsoft leverages the company’s tech across Bing, Microsoft Office, and Windows itself. In early 2023, Microsoft even invested $10 billion in OpenAI, stopping just short of purchasing the company outright.

Despite this, the report alleges that Microsoft employees are frustrated by restricted access to OpenAI’s models, and that they worry about ChatGPT overshadowing the AI-powered Bing Chat. To make matters worse, the Wall Street Journal reports that OpenAI and Microsoft both sell OpenAI’s technology, leading to situations where customers are dealing with contacts at both companies.

The biggest issue, according to the report, is that Microsoft and OpenAI are trying to make money with a similar product. Because Microsoft backs OpenAI but doesn’t control it, the ChatGPT developer is free to form partnerships with other companies, some of which compete directly with Microsoft’s products.

Based on what we’ve seen, OpenAI’s reported warnings held water. Shortly after releasing Bing Chat, Microsoft limited the number of responses users could receive in a single session, and it has since slowly lifted those restrictions as the GPT-4 model in Bing Chat has been refined. Reports suggest some Microsoft employees often reference “Sydney,” poking fun at the early days of Bing Chat (code-named Sydney) and its responses.
