Skip to main content

AI chatbot goes rogue during customer service exchange

International delivery firm DPD is updating its AI-powered chatbot after it gave some unexpected responses during an exchange with a disgruntled customer.

Musician Ashley Beauchamp recently turned to DPD’s customer-service chatbot in a bid to track down a missing package.

After making little progress with the chatbot, he decided to have some fun by asking it to write a poem criticizing the company, which it duly did. He also asked it to swear, and again, it obliged, using the F-word in its response.

Beauchamp shared his frustrations in a post on social media that’s now been viewed nearly two million times.

“Parcel delivery firm DPD have replaced their customer service chat with an AI robot thing,” the musician wrote. “It’s utterly useless at answering any queries, and when asked, it happily produced a poem about how terrible they are as a company. It also swore at me.”

Parcel delivery firm DPD have replaced their customer service chat with an AI robot thing. It’s utterly useless at answering any queries, and when asked, it happily produced a poem about how terrible they are as a company. It also swore at me. 😂 pic.twitter.com/vjWlrIP3wn

— Ashley Beauchamp (@ashbeauchamp) January 18, 2024

Screenshots showed that Beauchamp asked the DPD chatbot to come up with a haiku — a Japanese poem comprising 17 syllables, with five syllables on the first line, followed by seven and five. But the digital assistant couldn’t even get that right, messing up the syllable count, even if the sentiment was on the mark.

The poem said:
“DPD is a useless,
Chatbot that can’t help you.
Don’t bother calling them.”

DPD said that an error had occurred with its AI chatbot following a recent system update, adding that it was working to ensure it doesn’t happen again.

While many companies have deployed chatbots over the years as part of their customer service setup, many have been criticized for their inability to provide meaningful help. But with generative AI gaining huge traction last year with tools like ChatGPT, it’s hoped that this more advanced technology will be able to transform customer-service chatbots into much more effective assistants. But with Beauchamp finding DPD’s AI chatbot “utterly useless” before getting it to swear and criticize the company, the technology clearly still needs some tweaking.

Beauchamp said that he found the incident amusing but added that “these chatbots are supposed to improve our lives, but so often, when poorly implemented, it just leads to a more frustrating, impersonal experience for the user.”

Trevor Mogg
Contributing Editor
Not so many moons ago, Trevor moved from one tea-loving island nation that drives on the left (Britain) to another (Japan)…
A dangerous new jailbreak for AI chatbots was just discovered
the side of a Microsoft building

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keeps ChatGPT from going full Taye.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.

Read more