
Meet WorldKit, the projector that turns everything into a touchscreen

A demonstration of WorldKit that shows an interactive recipe guide.

When it comes to technological innovation, there are two basic approaches. You can start big, flashy, and expensive, and hope your invention eventually comes down enough in price for the average user to afford it – think of GPS devices, which were the realm of high-budget military agencies long before ordinary civilians could dream of buying one. Or you can set out from the beginning to design something life-changing that everyone can access, rather than just an elite few.


The research team behind WorldKit, a new, experimental technology system, is trying to straddle the gulf between these two extremes. The goal is to transform all of your surroundings into touchscreens, equipping walls, tables, and couches with interactive, intuitive controls. But the team wants to do so without installing oversized iPads into every surface in your home, which could easily run up a six-figure price tag.

So how does the magic happen? With a simple projector – a projector paired with a depth sensor, to be precise. “It’s this interesting space of having projected interfaces on the environment, using your whole world as a sort of gigantic tablet,” said Chris Harrison, a soon-to-be professor in human-computer interaction at Carnegie Mellon University. Robert Xiao, a PhD candidate at Carnegie Mellon and lead researcher on the project, explained that WorldKit uses a depth camera to sense where flat surfaces are in your environment. “We allow a user to basically select a surface on which they can ‘paint’ an interactive object, like a button or sensor,” Xiao said.

We recently chatted with both Harrison and Xiao about their work on the WorldKit project, and learned just how far their imaginations run when it comes to the future of touch technology and ubiquitous computing. Below, we talk about merging the digital and the physical worlds, as well as creative applications for WorldKit that involve really thinking outside the box (or outside the monitor, in this case).

Understanding WorldKit’s workings

We know; the concept of a touchscreen on any surface is a little far out there, so let’s break it down. WorldKit pairs a depth-sensing camera, like the one in the Kinect, with a projector. Programmers then write short scripts on a MacBook Pro in Java, similar to those they might write for an Arduino, telling the system how to respond when someone makes certain gestures in front of the camera. The depth camera interprets the gestures, and the projector responds by displaying the appropriate interface. For instance, if someone makes a circular gesture, the system can respond by projecting a dial where the gesture was made. Then, when someone “adjusts” the dial by gesturing in front of it, the system can change a volume control elsewhere.
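To make that flow concrete, here is a minimal, self-contained Java sketch of the idea: a recognized gesture spawns a projected widget, and manipulating that widget drives a control elsewhere in the room. Every class and method name below is invented for illustration – this is not WorldKit’s actual API, just a toy model of the interaction loop described above.

```java
import java.util.function.IntConsumer;

// Toy model of the gesture-to-widget loop described above. None of these
// classes come from WorldKit itself; they are stand-ins for illustration.
public class GestureToDialSketch {

    enum Gesture { CIRCLE, TAP }

    // A "painted" dial: it remembers which surface it lives on and which
    // real-world control it should drive when adjusted.
    static class Dial {
        private final String surface;
        private final IntConsumer onChange;

        Dial(String surface, IntConsumer onChange) {
            this.surface = surface;
            this.onChange = onChange;
        }

        // Called when the depth camera sees the user "turn" the projected dial.
        void adjust(int value) {
            System.out.println("Dial on the " + surface + " set to " + value);
            onChange.accept(value);
        }
    }

    // The depth camera reports a gesture plus the surface it happened on;
    // a circular gesture is interpreted as "paint a dial here".
    static Dial interpret(Gesture gesture, String surface, IntConsumer control) {
        if (gesture == Gesture.CIRCLE) {
            System.out.println("Projecting a dial onto the " + surface);
            return new Dial(surface, control);
        }
        throw new IllegalArgumentException("No widget bound to gesture " + gesture);
    }

    public static void main(String[] args) {
        // The user draws a circle on the coffee table; bind it to speaker volume.
        Dial volume = interpret(Gesture.CIRCLE, "coffee table",
                level -> System.out.println("Speaker volume -> " + level + "%"));

        // Later, the user gestures in front of the projected dial.
        volume.adjust(40);
    }
}
```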

The brilliance – and the potential frustration – of this system lies in its nearly endless possibilities. Currently, whatever you want WorldKit to do, you must program it to do yourself. Xiao and Harrison expressed hope that one day, once WorldKit reaches the consumer realm, there might be an online forum where people can upload and download programming scripts (much like apps) in order to make their WorldKit system perform certain tasks. However, at the moment, WorldKit remains in an R&D phase in the academic realm, allowing its creators to dream big about what they would like to make it do eventually.

In any case, the easiest way to understand how WorldKit works is to watch a demo video of it in action. In the video, researchers touch various surfaces to “paint” them with light from the projector. Afterward, the WorldKit system uses the selected area to display a chosen interface, such as a menu bar or a sliding lighting-control dial, which can then be manipulated through touch gestures.

Robert Xiao demonstrates how to use WorldKit to create a radial dial interface on any available flat surface – in this case, a table.

Currently, WorldKit’s depth sensor is nothing other than a Kinect – the same one that shipped with the Xbox 360 – that connects to a projector that’s mounted to a ceiling or tripod. While this combo is already sensitive enough to track individual fingers and multi-directional gestures down to the centimeter, it does have one major drawback: size. “Certainly the system as it is right now is kind of big, and we all admit that,” Xiao said.

Lights, user, action: Putting WorldKit to use

But the team has high hopes for the technology on the near horizon. “We’re already seeing cell phones on the market that have projectors built in,” Xiao said. “Maybe the back camera, one day, is a depth sensor … You could have WorldKit on your phone.” Harrison added that WorldKit could allow users to take full advantage of their phones for the first time. “A lot of smartphones you have nowadays are easily powerful enough to be a laptop, they just don’t have screens big enough to do it,” Harrison said. “So with WorldKit, you could have one of these phones be your laptop, and it would just project your desktop onto your actual desk.”


If Harrison and Xiao can imagine the mobile version of WorldKit on a smartphone in five years’ time, they have an even crazier vision for 10 or 15 years down the line. “We could actually put the entire WorldKit setup into something about the size of a lightbulb,” Xiao said. For these researchers, a lightbulb packed full of WorldKit potential has truly revolutionary implications. “We’re looking at that as almost as big as the lighting revolution of the early 1800s,” Xiao added.

The possibilities for WorldKit, as you might imagine, are limitless. So far, Harrison and Xiao’s ideas have included an away-from-office status button – the virtual version of a post-it note – and a set of digital TV controls. “You won’t ever have to find your remote again,” Xiao said.

The team’s already envisioning much more ambitious applications, such as experimental interior design. According to Harrison, you could make your own wallpaper, or change the look of your couch. “With projection, you can do some very clever things that basically alter the world in terms of aesthetics,” Harrison said. “Instead of mood lighting, you could have mood interaction.”

Xiao, meanwhile, fantasized about the system’s gaming potential. “You could augment the floor so that you didn’t want to step on it, and then play a lava game,” he said, describing a game where you have to cross from one end of the floor to the other, using only the tables and chairs. “You can imagine this being a very exciting gaming platform if you want to do something physical, instead of just using a controller.”

Blurring the boundaries between digital and physical

Xiao has good reason to be enthusiastic. He believes WorldKit gets at the heart of one of the biggest goals of computing research. “Eventually we’d like to see computers sort of fade into the background, and just become the way you do things,” he said. “Right now, it’s very explicit whenever you’re operating a computer that you are interacting with a computer.”

Robert Xiao demonstrates how a single WorldKit system can create various interfaces on multiple surfaces at once – in this case, a drop-down menu and volume and lighting controls for watching a movie.

Indeed, part of what makes WorldKit so exciting is that it incorporates real, physical materials into its virtual play. But Harrison is more hesitant to claim that this is always a good thing, especially when it comes to broad, philosophical questions about aesthetics. “In art, there’s a lot that’s nice about having it be rich, and physical, and also enduring,” Harrison argued, talking about digitally “painting” a surface using WorldKit. “So when you go over to the digital domain, are we using some of the things that make art a fundamental part of the human experience? Or are we losing something?”

Google Glass and WorldKit: Seeing vs. touching

There is one realm in which Harrison seems certain that WorldKit’s unique blend of physical and digital properties is at an advantage, and that’s in contrast to Google Glass. While both approaches attempt to augment reality through embedded computing, Harrison believes that Google Glass’s reliance on virtual gestures falls a bit flat.


“The problem with clicking virtual buttons in the air is that’s not really something that humans do,” Harrison said. “We work from tables, we work on walls … that’s something we do on a daily basis … we don’t really claw at the air all that often.” To really understand what he means, just remember when Bluetooth headsets first came out. Not only did everyone look crazy talking to themselves on street corners, but it was also hard not to feel self-conscious starting a conversation into empty air without the physical phone as a prop.

Xiao agreed, emphasizing that WorldKit is able to promote instinctual, unforced interaction by relying on physical objects. “One of the advantages of WorldKit is that all the interactions are out in the world, so you are interacting with something very real and very tangible,” Xiao said. “People are much more willing, much more able, to interact with it in a fluid and natural way.” In this case, perhaps touching – rather than seeing – means believing.

A ray of light: Looking into the future

Like true academics, Xiao and Harrison agreed on one of the future applications they would most like to see from WorldKit in the days to come: “A digital whiteboard,” they chimed simultaneously. Why? Unlike a traditional board, a digital whiteboard would allow computerized collaboration in real time.

Indeed, Xiao and Harrison are no strangers to collaboration – they strongly encourage crowdsourcing of their new technology. Instead of wanting to protect and commercialize WorldKit at this point, they would rather see it developed to its full potential. They are in the process of releasing WorldKit’s source code, and after attending the CHI 2013 Conference on Human Factors in Computing Systems, the “premier international conference on human-computer interaction” held in Paris last April, they’re hoping to get some of the 3,600 other attendees and researchers tinkering with the system soon.

“We’re primarily engineers,” Harrison said. “There are a lot of designers and application builders out there that I’m sure are going to have crazy awesome ideas of what to do with this, [and] just the two of us cannot possibly explore that entire space.”

Researchers in other fields have already started applying WorldKit in ways Xiao and Harrison might never have anticipated. The two are currently collaborating on a study with the Human Engineering Research Labs in Pittsburgh. “They’re primarily concerned with people with cognitive disabilities,” Xiao said. “These are people who may need extra instructions for doing things.”

In the study, cognitively disabled participants are asked to follow a recipe to cook a dish. To help them, WorldKit projects descriptions of the necessary ingredients onto the kitchen table, such as three tomatoes or a cup of water, and doesn’t move on to the next step of the recipe until all the ingredients are physically in place on the table. Essentially, Xiao argued, WorldKit can act as a kind of prosthetic to help the cognitively disabled navigate through daily tasks in their environment.
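As a rough illustration of that gating logic (not the study’s actual software), here is a small Java sketch, assuming the depth camera can report which items it currently sees on the table: each recipe step lists its required ingredients, and the projected instructions advance only once everything required has been detected. The class names and the hard-coded detection are invented for this example.

```java
import java.util.List;
import java.util.Set;

// Toy sketch of the step-gating behavior described above. Item detection is
// faked with a hard-coded set; in the study it would come from WorldKit's
// depth sensing. All names here are invented for illustration.
public class RecipeGuideSketch {

    // One step of the projected recipe: the instruction text plus the
    // ingredients that must be physically on the table before moving on.
    record Step(String instruction, Set<String> required) {}

    static boolean stepComplete(Step step, Set<String> itemsOnTable) {
        return itemsOnTable.containsAll(step.required());
    }

    public static void main(String[] args) {
        List<Step> recipe = List.of(
                new Step("Gather three tomatoes and an onion", Set.of("tomato", "onion")),
                new Step("Add a cup of water", Set.of("water")));

        // Pretend the depth camera currently recognizes these items on the table.
        Set<String> detected = Set.of("tomato", "onion");

        Step current = recipe.get(0);
        if (stepComplete(current, detected)) {
            System.out.println("Step complete. Projecting next instruction: "
                    + recipe.get(1).instruction());
        } else {
            System.out.println("Still waiting for: " + current.required());
        }
    }
}
```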

Ultimately, whether we’re talking about an interactive whiteboard or a digital cooking assistant, the goal of WorldKit is the same: using embedded computing to make the interactions between people and computers as seamless, natural, and effortless as possible. Once that happens – once we are actually able to take advantage of computing everywhere without ever touching a computer – all of our lives have the potential to get better.
