
MIT’s new robot can play everyone’s favorite block-stacking game, Jenga


Not content with getting freakishly good at cerebral games like chess and Go, it seems that artificial intelligence is now coming for the kind of fun games we played as kids (and childish adults). With that in mind, researchers from the Massachusetts Institute of Technology (MIT) have developed a robot that uses the latest machine learning and computer vision techniques to play everyone’s favorite tower-toppling game, Jenga.

If it’s been a while since you played Jenga, the game revolves around a wooden tower constructed from 54 blocks. Players take turns removing one block from lower in the tower and placing it on top of the stack. Over time, the tower gets taller and, crucially, more unstable. The result is a game of impressive physical skill for humans — and, now, for robots as well.

MIT’s Jenga-playing bot is equipped with a soft-pronged gripper, force-sensing wrist cuff, and external camera, which it uses to perceive the block-based tower in front of it. When it pushes against a block, the robot takes visual and tactile feedback data from the camera and cuff, and weighs these up against its previous experiences playing the game. Over time, it figures out when to keep pushing and when to try a new block in order to stop the Jenga tower from falling.
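To make that feedback loop concrete, here is a minimal sketch in Python of the kind of decision the robot faces after each push: compare the new force and displacement readings against labelled past pushes, and either keep going or back off. All names, numbers, and the nearest-neighbor comparison are illustrative assumptions, not MIT's actual code or learned model.

```python
# Hypothetical sketch of a push-and-evaluate loop. The robot pushes a block,
# reads force (from the wrist cuff) and displacement (from the camera), and
# compares the reading with past labelled experiences. All values and the
# simple nearest-neighbor rule are assumptions for illustration.

def classify_push(force, displacement, past_outcomes):
    """Return the label of the most similar prior push (squared distance)."""
    nearest = min(
        past_outcomes,
        key=lambda o: (o["force"] - force) ** 2
        + (o["displacement"] - displacement) ** 2,
    )
    return nearest["label"]  # e.g. "safe" or "risky"

def decide(force, displacement, past_outcomes):
    """Keep pushing if the reading resembles past safe pushes;
    otherwise give up on this block and try another."""
    label = classify_push(force, displacement, past_outcomes)
    return "keep pushing" if label == "safe" else "try another block"

# A few made-up past experiences: low force with a steady slide was safe,
# high force with little movement meant a load-bearing block.
history = [
    {"force": 0.2, "displacement": 5.0, "label": "safe"},
    {"force": 0.3, "displacement": 4.0, "label": "safe"},
    {"force": 1.5, "displacement": 0.5, "label": "risky"},
]

print(decide(0.25, 4.5, history))  # prints "keep pushing"
print(decide(1.4, 0.6, history))   # prints "try another block"
```

In the real system the comparison is a learned model over clustered visual and tactile measurements rather than a raw nearest-neighbor lookup, but the shape of the decision — sense, compare to experience, continue or abandon — is the same.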

“Playing the game of Jenga … requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces,” Alberto Rodriguez, assistant professor in the Department of Mechanical Engineering at MIT, said in a statement. “It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks. This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower. The key challenge is to learn from a relatively small number of experiments by exploiting common sense about objects and physics.”

At face value, the idea of a robot whose only mission is to play Jenga doesn’t sound like it has much real-world applicability. But the concept of a robot that can learn about the physical world, from both visual cues and tactile interactions, has immense potential. Who knew a Jenga-playing robot could be so versatile?

Luke Dormehl