SXSW 2014: IBM’s Watson stars in the kitchen as well as TV’s Jeopardy

A couple of years back, IBM’s supercomputer, dubbed Watson, dazzled audiences and tore through Jeopardy rounds with ease, dispatching the game show’s uber-champions, including Ken Jennings, who won an unthinkable 74 consecutive games.

At SXSW 2014, though, Watson is showcasing its seemingly limitless potential by conquering an entirely different topic: cooking. At this point, Watson knows roughly 35,000 recipes, but perhaps even more impressive is its ability to employ chemistry to analyze a set of ingredients, work out details like quantities, and conjure up something that humans would enjoy.

Attendees of the annual show, which kicked off on March 7 and will close its doors on March 16, have been able to taste recipes thought up by Watson, which are then put together by Carly DeFilippo and her colleagues from the Institute of Culinary Education. Chefs from ICE take the recipes, cook ’em up, and serve them via a food truck outside of the show. IBM calls the experiment Cognitive Cooking, and hopes it will help people think about the nontraditional applications that computers like Watson could have.

You can learn more about IBM’s Cognitive Cooking initiative here. If you want to follow the discussion on Twitter, check out this page, which is plugged into #IBMFoodTruck.

What do you think? Sound off in the comments below.

Konrad Krawczyk
Former Digital Trends Contributor