
Jeopardy! IBM Challenge ends with a big win for Watson the AI


After three hard-fought nights of trivia play, Jeopardy!‘s IBM Challenge has come to an end. By the end of the first round, it seemed all but certain that thinking computer Watson would wipe the floor with human competitors Ken Jennings and Brad Rutter, and it did just that last night.

The bloodbath really happened on Tuesday, the end of the two-day first round, when Watson emerged as the victor with more than double the winnings of its fleshsack competitors. Watson had $23,440 banked going into Final Jeopardy last night, a short distance ahead of second-place contender Ken Jennings’ $18,200. Rutter, meanwhile, was left in the dust, grabbing only $5,600 for himself.

In the end, Jennings accepted defeat with a smile and an ominous quip, bidding just $1,000 on the final clue (enough to assure a win over former Million Dollar Challenge competitor Rutter) and writing below the bid, “I for one welcome our new computer overlords.” Ken, man… don’t encourage it!

The damage had already been done; only a massive upswing in last night’s final round could have clinched a win for the humans. The final count for the three-day, two-round IBM Challenge puts Watson on top with $77,147, Jennings in the runner-up spot with $24,000, and Rutter not far behind with $21,600.

Have we seen the birth of Skynet this week, the first indication of a machine-dominated future to come? Probably not. We do know now that machines have faster reflexes than humans; Watson’s key advantage over the three-day tourney seemed to be its ability to ring in ahead of its competitors. We can also applaud IBM’s technical achievement. Sure, this whole tournament feels like, and in many ways is, a product-placement gimmick. But it’s also a remarkable feat of artificial intelligence development: a computer appeared on a quiz show and puzzled out mostly correct responses to clues with an unusual syntax (i.e., not framed as questions).

In unrelated news, a baby named John Connor was born in a small California hospital late yesterday…

Adam Rosenberg
Former Digital Trends Contributor