
Microsoft says bizarre travel article was not created by ‘unsupervised AI’

According to a recent article posted by Microsoft Travel on microsoft.com, attractions worth checking out on a visit to the Canadian capital of Ottawa include the National War Memorial, Parliament Hill, Fairmont Château Laurier, Ottawa Food Bank … hang on, Ottawa Food Bank?

Spotted in recent days by Canada-based tech writer Paris Marx, the article puts Ottawa Food Bank at number 3 in a list of 15 must-see places in the city. And as if that wasn’t bad enough, the accompanying description even suggests visiting it “on an empty stomach.”

The piece was originally thought to have been created by generative artificial intelligence (AI). But Microsoft has since said that “unsupervised AI” was not involved. The entire article has now been taken down, though you can view an archived version of it.

Here’s the food bank description in full:

“Ottawa Food Bank — The organization has been collecting, purchasing, producing, and delivering food to needy people and families in the Ottawa area since 1984. We observe how hunger impacts men, women, and children on a daily basis, and how it may be a barrier to achievement. People who come to us have jobs and families to support, as well as expenses to pay. Life is already difficult enough. Consider going into it on an empty stomach.”

Image: A screenshot of the Microsoft travel article. Credit: Microsoft

Its insertion into the travel article was clearly an error, and an awful one at that, so it seemed probable that the piece had been knocked together using generative AI, a technology that we know Microsoft has a huge interest in.

But a statement from the company claimed the issue was “due to human error,” adding that it was “not published by an unsupervised AI” but instead generated through “a combination of algorithmic techniques with human review, not a large language model or AI system.”

Still, the company appears to have fallen short on two counts. First, it failed to perform proper human checks on the article before posting it, and second, nowhere on the webpage did it say that the content was generated through “algorithmic techniques.”

The mishap demonstrates the continuing need for human oversight when technology is used to create content. Slip-ups can be costly, as evidenced by a recent case in New York City in which a lawyer used the AI-powered ChatGPT chatbot to find examples of legal cases that he then cited in a filing to support a client’s case, only for it to emerge later that ChatGPT had made them all up.

This article has been updated to include details of Microsoft’s statement.

Trevor Mogg
Contributing Editor
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
