
Cortana befuddles car accident witness by calling police station thousands of miles away

Woman accidentally calls police in Massachusetts to report car crash in England
Microsoft’s digital assistant Cortana nearly saved the day when a woman in Barnstaple, England, witnessed a hit-and-run. Unfortunately, Cortana called the police in Barnstable, Massachusetts, thousands of miles away.

The 911 operator and the car accident witness spar for a while, attempting to ascertain where the accident occurred. The witness insists she saw a car clip another vehicle while driving over the line, somewhere between Muddiford and Ilfracombe. The operator asks her to repeat herself, as he does not recognize either place name.

The exchange continues before both parties realize the error: Cortana called Barnstable, Massachusetts instead of Barnstaple, England.

It is a common sort of mistake for the well-meaning but occasionally inept Cortana. In fact, she, or Siri, Alexa, or Google Now, has probably made a similar blunder if you have ever tried to place a call using only your voice.

There are whole subreddits and Tumblr pages dedicated to occasionally hilarious mistakes made by our beloved digital assistants, but this incident highlights an important area where Cortana, Siri and others often fall short: emergency situations.

There is a vigorous campaign to improve the emergency capabilities of smart assistants. Siri, in particular, is the subject of a series of petitions to improve how she handles certain queries regarding domestic violence and sexual assault. To be fair, Siri already surfaces helpful results on some sensitive subjects.

But Siri and Cortana are important tools to have during an emergency, particularly a car accident, when handling your phone might not be possible, so their ability to parse information accurately can be critically important, far more so than simply recognizing which pizza joint you meant to call.

Fortunately, in this case it does not appear anyone was in immediate danger. The Barnstable/Barnstaple mix-up makes for a funny cross-cultural exchange, but it highlights a real weakness that smart assistants still have.

Jayce Wagner