
IBM’s new supercomputer doesn’t just take orders — it can argue back

IBM supercomputers have already proved their smarts against human opponents, with Deep Blue winning at chess and Watson at Jeopardy, but the conversation around artificial intelligence to date has largely been about machines doing our bidding: telling Siri to turn off the lights or asking Google for the weather. IBM’s latest effort in A.I. supercomputing changes that narrative with Project Debater, a supercomputer that uses artificial intelligence to win debates against humans.

In essence, Project Debater ushers in an era in which A.I. supercomputers can talk back to their human overlords, but IBM researchers promise it won’t lead to a dystopian future where robots rule the world. The purpose of Project Debater, IBM Research Almaden vice president Jeff Welser told The Verge, is to help us understand language. Welser even joked that despite its oratory skills, Debater would make a bad lawyer, so at least some legal jobs are safe for now.

At an event in San Francisco, IBM’s Project Debater supercomputer took on human debaters over modern social issues, such as the benefits of telemedicine and government subsidies for space exploration, and the journalists in the audience reached the same conclusion: IBM’s supercomputer held its own. Despite some momentary glitches in an environment far more ambiguous and less rule-bound than games like chess, publications including USA Today, CNET, and The Verge concluded that Project Debater performed well.

The debate topics were not revealed to either side in advance, to keep the playing field level, and Project Debater went first in every round. Most impressive, Debater was able to understand and rebut its opponent’s presentation in near real time. Although the human debaters generally edged ahead of Project Debater on delivery, the audience favored Debater’s command of the topics, since it could draw on more than 300 million scholarly articles stored and indexed on IBM Cloud.

“We believe that mastering language is a fundamental frontier that A.I. has to cross,” IBM Research director Arvind Krishna said during a presentation that was reported by USA Today. “There’s aspects like speech recognition, speech to text that A.I. already does and does quite well. But that is not the same as listening comprehension or constructing a speech that can either be spoken or written or understanding the nuances of claims, meaning what supports a proposition or what may be against a proposition.”

But it wasn’t all just facts and knowledge. Debater was able to throw in jokes and time their delivery well, CNET reported. “I can’t say it makes my blood boil, because I have no blood, but it seems some people naturally suspect technology because it’s new,” Debater quipped during a round arguing in favor of telemedicine.

For IBM, Project Debater isn’t just about winning arguments. Researchers say the technology could help legislators, lawyers, and business executives weigh complex issues and make more informed decisions. It could even prove useful in weeding out fake news.
