
Chatbot-generated Schumacher ‘interview’ leads to editor’s dismissal

A magazine editor has learned the hard way about the ethical limits of using generative AI after she was fired for running an “interview” with F1 motor racing legend Michael Schumacher using quotations that were actually from a chatbot.

Seven-time F1 world champion Schumacher has been out of the public eye since 2013 when he sustained severe head injuries in a skiing accident during a vacation in France.

German tabloid magazine Die Aktuelle showcased the article on a recent front page with a photo of the former motor racing champion and the headline “Michael Schumacher, The First Interview, World Sensation,” alongside a much smaller strapline reading: “It sounds deceptively real.”

Only in the article itself did it emerge that the quotations had been generated by Character.ai, an AI chatbot similar to OpenAI’s ChatGPT and Google’s Bard, services that have gained much attention in recent months for their versatility and their impressive ability to converse in a human-like way.

In Die Aktuelle’s “interview,” Schumacher, or in fact the chatbot, talked about his family life and health.

“My wife and my children were a blessing to me and without them I would not have managed it,” the chatbot, speaking as Schumacher, said. “Naturally they are also very sad, how it has all happened.”

Schumacher’s family intends to take legal action against the publication, according to a BBC report.

The magazine’s publisher, Funke, has apologized for running the article.

“Funke apologizes to the Schumacher family for reporting on Michael Schumacher in the latest issue of Die Aktuelle,” it said in a statement.

“As a result of the publication of this article … Die Aktuelle editor-in-chief Anne Hoffmann, who has been responsible for journalism for the newspaper since 2009, will be relieved of her duties as of today.”

Bianca Pohlmann, managing director of Funke magazines, said in the statement: “This tasteless and misleading article should never have appeared. It in no way corresponds to the standards of journalism that we — and our readers — expect from a publisher like Funke.”

Character.ai, launched in September last year, lets you “chat” with celebrities, historical figures, and fictional characters, or even with characters of your own creation.

That may be fine in the privacy of your own home, but taking it a step further and publishing an article based on the chatbot’s responses is clearly a huge risk.

As generative AI continues to improve and edge ever further into our lives, more missteps like this are to be expected. Hopefully, Die Aktuelle’s blunder will prompt publishers to think twice about how they use content created by a chatbot.

Trevor Mogg
Contributing Editor
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
