
Jeopardy! pits man vs. machine, no winners yet

Watch out, world. Skynet could be just around the corner. Chess grandmaster Garry Kasparov lost a match to an IBM supercomputer in 1997, but the chess-playing Deep Blue has nothing on IBM's latest artificial intelligence challenger to human dominance, Watson. Following a January test run, the AI program made its official debut on the popular quiz show Jeopardy! last night, the first of a three-night showdown between the machine and two of the show's most notable former champions, Ken Jennings and Brad Rutter.

At the end of the first round, Rutter and Watson stand tied at $5,000 while Jennings trails with just $2,000. An early success for the machine, though not a perfect performance. The AI software, which is designed to answer Jeopardy! questions in three seconds or less, slipped up on a few questions, notably one concerning J.K. Rowling’s Harry Potter series. All of the facts in the world will do you no good when Alex Trebek is asking you to consider wizard/Muggle relations, apparently.

Watson is housed in no mere desktop PC. Instead, a powerful array of computers running IBM's POWER7 processors provides the lightning-quick response times and the ever-growing store of trivia. IBM has revealed that the AI program runs on a cluster of 90 IBM Power 750 servers, with additional "I/O, network and cluster controller nodes" rounding out the 10 racks that make up the part of the machine you see on the show. There are 2,880 POWER7 processor cores in all, spread across 3.5 GHz octo-core POWER7 chips, along with 16TB of RAM. Forget Jeopardy!… let's see what this thing does with Crysis!
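For a rough sense of how those numbers break down per machine, here is a back-of-the-envelope sketch in Python. The per-server split is inferred purely from the totals IBM has quoted above, not from a spec sheet, so treat it as an estimate.

```python
# Back-of-the-envelope breakdown of the Watson cluster figures quoted above.
# The per-server split is inferred from the article's totals, not an IBM spec sheet.

TOTAL_SERVERS = 90       # IBM Power 750 servers in the cluster
TOTAL_CORES = 2880       # POWER7 processor cores in all
CORES_PER_CHIP = 8       # octo-core POWER7 chips at 3.5 GHz
TOTAL_RAM_TB = 16        # aggregate memory across the cluster

cores_per_server = TOTAL_CORES // TOTAL_SERVERS            # 32 cores per server
chips_per_server = cores_per_server // CORES_PER_CHIP      # 4 POWER7 chips per server
ram_per_server_gb = TOTAL_RAM_TB * 1024 / TOTAL_SERVERS    # roughly 182 GB per server

print(f"{cores_per_server} cores per server "
      f"({chips_per_server} x {CORES_PER_CHIP}-core POWER7 chips), "
      f"~{ram_per_server_gb:.0f} GB of RAM per server")
```

In other words, each Power 750 box would hold about 32 cores and 180-odd gigabytes of memory, which is what lets Watson chew through its question-answering pipeline within the three-second buzzer window.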

Adam Rosenberg
Former Digital Trends Contributor
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
