
Researchers argue AI can fool the Turing test without saying a thing

Alleged criminals might not be the only ones to benefit from pleading the Fifth. By falling silent during the Turing test, artificial intelligence (AI) systems can fool human judges into believing they’re human, according to a study by machine intelligence researchers from Coventry University.

Alan Turing, considered the father of theoretical computer science and AI, devised the Turing test in an attempt to outline what it means for a thing to think. In the test, a human judge or interrogator has a conversation with an unseen entity, which might be a human or a machine. The test posits that the machine can be considered “thinking” or “intelligent” if the interrogator is unable to tell whether it is human.

Also known as the imitation game, the test has become an often-misapplied standard for determining whether an AI has qualities like intellect, active thought, and even consciousness.

In the study, Taking the Fifth Amendment in Turing’s Imitation Game, published in the Journal of Experimental and Theoretical Artificial Intelligence by Dr. Huma Shah and Dr. Kevin Warwick of Coventry University, the researchers analyzed six transcripts from prior Turing tests and determined that, when the machines fell silent, the judges were left undecided about their interlocutor’s humanness. The silence doesn’t even need to be intentional. In fact, it tended to result from technical difficulties.
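To make the finding concrete, here is a minimal, hypothetical sketch — not code or data from the study — of how an imitation-game verdict might be recorded. The judge labels a hidden interlocutor “human,” “machine,” or “unsure,” and a conversation in which every message comes back empty, whether through deliberate silence or a failed relay, can only end in “unsure.”

```python
# Hypothetical sketch of an imitation-game verdict; an illustration of the idea
# described above, not code from Shah and Warwick's study.

def judge_verdict(replies: list[str]) -> str:
    """Label a hidden interlocutor based on the replies the judge actually received."""
    answered = [r for r in replies if r.strip()]
    if not answered:
        # Total silence (deliberate or caused by a relay failure) gives the
        # judge nothing to work with, so the verdict stays undecided.
        return "unsure"
    # Toy stand-in for a human judge's intuition about "machine-like" replies.
    robotic = sum(1 for r in answered if r.isupper() or "ERROR" in r)
    return "machine" if robotic > len(answered) / 2 else "human"

if __name__ == "__main__":
    print(judge_verdict(["Hi, lovely weather today.", "I grew up near Coventry."]))  # human
    print(judge_verdict(["ERROR: NULL RESPONSE", "SYNTAX INVALID"]))                 # machine
    print(judge_verdict(["", "   ", ""]))                                            # unsure
```

In the real experiments the judges were human, of course; the point is simply that an empty transcript maps naturally onto the “unsure” category the researchers observed.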

“The idea [for the study] came from technical issues with a couple of the computer programs in Turing test experiments,” Shah tells Digital Trends. “The technical issues entailed the failure of the computer programs to relay messages or responses to the judge’s questions. The judges were unaware of the situation and hence in some cases they classified their hidden interlocutor as ‘unsure.’”

The silent machines may have baffled their judges, but their silence helped expose a flaw in the exam rather than confirm its utility. Warwick says this raises serious questions about the Turing test’s validity and its ability to assess thinking systems. “We need a much more rigorous and all-encompassing test to do so, even if we are considering a machine’s way of thinking to be different to that of a human,” he tells Digital Trends.

Shah, meanwhile, notes that the test was designed to provide a framework within which to “build elaborate machines to respond in a satisfactory and sustained manner,” not to build machines that simply trick judges. In short, the systems are meant to imitate human conversation, and no human who takes the test seriously would fall silent. Right?

Well, they might, thinks Warwick. “One thing that I have learnt from such tests is that hidden human entities will almost surely do unexpected and illogical things,” he says. “In this case a human could easily get upset or annoyed by something a judge has said and decide not to reply — they are human after all.”

An alternative view is that the Turing test has already been undermined by the current state of AI. Shah says she agrees with Dave Coplin, Microsoft’s Chief Envisioning Officer in the UK, who thinks the “machine vs. human” challenges are no longer relevant. At the AI summit in London in May, Coplin pointed out that, at the rate AI is advancing and given enough resources, developing an intelligent machine doesn’t seem all that far-fetched.

“The role of AI is to augment human performance with intelligent agents,” Shah says. “For example, a human educator using an AI to score student assignments and exam questions leaving the teacher time to innovate learning, inspiring students, encouraging more into STEM, including females, for a better life or world of cooperation.”

From this perspective, it’s absurd to develop an AI whose sole goal is to fool a human into thinking it’s human — especially if the simplest way to do so entails making it mute.
