
What would it take to build a Matrix-level simulation of reality?


Released almost exactly 20 years ago, The Matrix has gone on to become a cultural phenomenon well beyond the science fiction genre. While it was generally regarded as pure science fiction at the time, it helped popularize the Simulation Hypothesis: the idea that we’re all living inside a computer simulation.

Rizwan Virk is Executive Director of Play Labs at MIT, a video game program at the Massachusetts Institute of Technology, and a co-founder of and investor in a number of video game startups, including Tapjoy, Telltale Games, and Discord. His new book is The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics, and Eastern Mystics All Agree We Are in a Video Game.

While Nick Bostrom’s 2003 paper popularized the discussion in academia and among scientists, it was Elon Musk’s eye-popping declaration about video games at the 2016 Code Conference that really got many of us in the tech industry to take the idea more seriously. Musk pointed out that, 40 years ago, video games consisted of Pong — basically two rectangles and a dot — while today we have fully 3D MMORPGs and stunningly realistic VR and AR.

As a video game industry insider and technologist, I’ve started to wonder — what would it take to build something like The Matrix: a simulation that’s so realistic that it’s effectively indistinguishable from physical reality?

Clearly, our technology is not quite there yet, but not in the ways you might think. It’s not just a matter of image resolution, pixel density, or visual realism. Rather, it’s about building interface technologies that can deliver full immersion and record our responses in real time.

The road to the Simulation Point

So how far away are we from the Simulation Point, the theoretical point where we’re capable of creating virtual worlds indistinguishable from physical reality? In my book, The Simulation Hypothesis, I lay out the 10 stages of technology that would be required to create an all-encompassing virtual world like the Matrix. Let’s run through this roadmap, and then we can answer that question.

Stage | Technology | Timeframe
0 | Text Adventures | 1970s-1980s
1 | Graphical Arcade Games | 1970s-1980s
2 | Graphical RPG Games | 1980s
3 | 3D Rendered MMORPGs and Virtual Worlds | 1990s-2000s
4 | Immersive Virtual Reality | 2010s-2020s
5 (*) | Photo-realistic Augmented and Mixed Reality | 2020s
6 (*) | Real World Rendering: Light Fields and 3D Printing | 2010s-2020s
7 (*) | Mind Interfaces | 2020s-?
8 (*) | Implanted Memories | 2030s-?
9 (*) | Artificial Intelligence and NPCs | 2020s-2100?
10 (*) | Downloadable Consciousness | 2040s-2100?
11 | The Simulation Point | 2100-?

The Stages on the road to the Simulation Point

Let’s travel down the road any civilization might take to reach the Simulation Point, starting with a brief history of Earth’s video games.

Stages 0-3: From text adventures to MMORPGs

The idea of an explorable “world” inside a computer started with text-based games like Colossal Cave Adventure in the 1970s, and reached its peak with Infocom games like Zork I-III and The Hitchhiker’s Guide to the Galaxy. The first graphical game that was widely available, Pong, led directly to the arcade and home video console craze of the 1980s, with games like Space Invaders and Pac-Man.

World of Warcraft

The introduction of 3D perspective and avatars

It wasn’t until the tools of graphical arcade games were combined with elements of text adventures that we really started down the road to the Simulation Point. These primitive RPGs included King’s Quest, The Legend of Zelda, and more. Although these were simple, 2D, single-player games, they had many of the elements of today’s 3D MMORPGs like World of Warcraft and Fortnite: worlds that are rendered and can be explored, and characters/avatars that can be moved around.

In this sense, Toy Story (1995) and Doom (1993) were landmark events that marked an evolutionary leap forward in 3D graphics and rendering technology. The two sat at opposite ends of the spectrum — rendering a movie like Toy Story took many hours per frame, while Doom’s main achievement was that you could move left and right and the scene would shift in real time. Doom’s chief programmer, John Carmack, would later go on to become the CTO of Oculus, which contributed heavily to the modern virtual reality boom. Today we have millions of players interacting with 3D virtual avatars, and we are well on our way to the Simulation Point.
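To make that gap concrete, here’s a rough back-of-the-envelope sketch in Python; the hours-per-frame and frame-rate figures are illustrative assumptions, not measured numbers:

```python
# Back-of-the-envelope comparison of offline vs. real-time rendering budgets.
# Both figures below are illustrative assumptions.

HOURS_PER_FRAME_OFFLINE = 2           # assumed mid-90s film render time per frame
SECONDS_PER_HOUR = 3600

offline_budget_s = HOURS_PER_FRAME_OFFLINE * SECONDS_PER_HOUR  # 7,200 s per frame

REALTIME_FPS = 35                     # assumed Doom-era frame rate
realtime_budget_s = 1 / REALTIME_FPS  # ~0.029 s per frame

speedup = offline_budget_s / realtime_budget_s
print(f"Offline budget:   {offline_budget_s:,} seconds per frame")
print(f"Real-time budget: {realtime_budget_s * 1000:.1f} ms per frame")
print(f"A real-time engine gets roughly 1/{speedup:,.0f} of the offline time per frame")
```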

Stages 4-5: VR, AR, MR and approaching full immersion

Building on top of 3D MMORPGs, today’s virtual and augmented reality systems are starting to bring science fiction closer to reality. In last year’s Ready Player One, for example, characters could not only experience VR through a headset, but also use haptic gloves, full-body suits, and even omnidirectional treadmills to increase the sense of realism. Here in the real world, these items are already being developed, and in many cases are available on the market today.

VR worlds like the OASIS in Ready Player One. (Warner Bros. Studios)

Stage 6: Building Star Trek’s Replicators and Holodeck

Stage 6 includes 3D printers and light field technology, which represent significant leaps forward in making virtual objects physical. In fact, these technologies are starting to look more like Star Trek’s replicators or its Holodeck than any video game. The basic idea of 3D printing is that almost any physical object can be modeled as information and then “printed” as a series of 3D pixels, or voxels. While today’s 3D printers can generally only print using one type of “ink” (usually a single colored thermoplastic), they have already produced a 1/3-scale model of an Aston Martin and an actual working gun, and recently, an Israeli team was able to use the cells of a living patient to 3D print a small-scale living heart. If this trend continues, pretty soon, like Captain Picard, you’ll be able to say, “Tea. Earl Grey. Hot.” and have it fabricated right before your eyes.
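As a toy illustration of the “object as information” idea, here’s a minimal sketch that models a solid sphere as a grid of voxels; the resolution and shape are arbitrary assumptions chosen for illustration:

```python
import numpy as np

# Model a solid sphere as "information": a 3D grid of filled/empty cells
# that a printer could deposit one horizontal layer at a time.
# The 32-cell resolution is an arbitrary assumption.
N = 32
x, y, z = np.indices((N, N, N))
center, radius = N / 2, N / 3

# A cell is "filled" if it lies inside the sphere.
sphere = (x - center) ** 2 + (y - center) ** 2 + (z - center) ** 2 <= radius ** 2

# A real printer consumes the model slice by slice, just like this loop.
for layer_index, layer in enumerate(sphere):
    print(f"layer {layer_index:2d}: {int(layer.sum())} filled cells")
```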


While today’s AR experiences still rely on a physical headset, there is research going on at BYU and MIT to use light-field technology to simulate how light bounces off objects. This suggests the possibility that, within a decade or two, we will be able to create realistic holograms that look like actual objects without the need for headsets.

Stages 7-8: Mind Interfaces and Implanted Memories

Now let’s move beyond where we are today into more speculative areas. One of the main reasons the Matrix was so convincing to humans like Neo was that its images were beamed directly into their brains. In Neo’s case, this happened via a wire attached to his cerebral cortex, which sent images to his brain and recorded his responses while his body floated in a pod. Basically, the brain was tricked into thinking the experience was real.

To truly build something like this, we will need to bypass today’s VR and AR goggles and interface directly with the brain, both to read our intentions and to project the game world into our minds.

Advances made in the last decade suggest that mind interfaces are not as far off as we might think. Startups in this field include Neurable, which is working on brain-computer interfaces (BCIs) for controlling objects within virtual reality using nothing but your mind. Another startup, Neuralink (funded by Elon Musk), claims to be developing “high bandwidth and safe” brain-machine interfaces that involve implants, based on a concept from science fiction writer Iain M. Banks.


Recently, a team of researchers from the University of Washington and Carnegie Mellon was able to use skull caps and brain waves to send information about how to move a Tetris piece among three players: two who could see the screen and one who couldn’t — effectively an electronic form of telepathy.

In 2011 and 2016, researchers from the University of California, Berkeley were able to reconstruct low-resolution versions of what participants had been watching (movie trailers) by measuring their brain activity. This research suggests that recording our dreams may be possible in the near future. And unlike in the Matrix, where Morpheus’ teammates needed to read the now-famous stream of green symbols to figure out what was going on in a user’s mind inside the simulation, we could just display it on a screen.

So, we are well on the road to being able to read intentions and interpret them. But what about the opposite: broadcasting into the mind?

Experiments done in the 1950s by Wilder Penfield suggest that memories can be triggered inside the brain by electrical signals. But, in what sounds like a scene out of Blade Runner, there are much newer experiments which suggest that memories can also be implanted.

In 2013, a team of researchers at MIT, while researching Alzheimer’s, found that they could implant false memories in the brains of mice, and these memories ended up having the same neural structure as real memories. This was done in a very limited way, but the techniques are promising.

If memories can be falsified, then we may be entering the world that Stephen Hawking warned us about. “The history books and our memories,” he said, “could just be illusions. It is the past that tells us who we are. Without it, we lose our identity.”

Stages 9-10: Artificial, simulated, and downloadable consciousness

A.I. and artificial consciousness are relatively common today — but only in very primitive forms. Take NPCs (non-player characters) from video games, for example. These are artificial beings that can move through virtual worlds and interact with you, but they can’t yet pass the famous Turing Test. Devised by computing pioneer Alan Turing, the test is essentially a game in which an A.I. passes if a human conversing with it cannot reliably tell it apart from another human being.

Even though we don’t fully understand consciousness, A.I. is one of the most rapidly advancing fields in computer science today. Already, A.I. is giving humans serious competition in traditional games like chess and Go. China’s Xinhua news agency recently introduced virtual news anchors that can read the news like real humans. A.I. is generating “deepfake” photographs that are indistinguishable from real ones, and a video recently went viral showing A.I. removing cars from scenes with pretty astonishing results.

One of the leaders of the transhumanist movement, Google futurist Ray Kurzweil, believes that we are approaching the Singularity on two fronts: superintelligent A.I., and the downloading of consciousness to silicon-based devices, preserving our minds forever.


Those who believe this think that all we need to do is duplicate the neurons and neural connections of the brain: on the order of 10¹² neurons linked by 10¹⁵ synapses. While this task seemed insurmountable twenty years ago, teams have already simulated portions of a rat’s brain, which involves a much smaller number of neurons and connections. Kurzweil thinks we’ll be there by 2045.
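For a sense of scale, here’s a hedged back-of-the-envelope estimate of the storage such a copy might need; the bytes-per-synapse figure is purely an assumption for illustration:

```python
# Rough estimate of the storage needed just to record the brain's "wiring
# diagram" at the scale quoted above. Bytes-per-synapse is an assumption.
NEURONS = 10**12
SYNAPSES = 10**15
BYTES_PER_SYNAPSE = 8                 # assume two 4-byte neuron IDs per synapse

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
petabytes = total_bytes / 10**15
print(f"{NEURONS:.0e} neurons, {SYNAPSES:.0e} synapses")
print(f"~{petabytes:,.0f} petabytes just to store the connection list")
```

Even under these generous assumptions, the wiring diagram alone runs to petabytes, before storing a single synaptic weight or simulating anything.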

Others believe consciousness is more complicated, bordering on philosophical and religious territory. Most of the world’s religions, in both their Eastern and Western traditions, already teach of a transmission of consciousness: downloading it at birth and uploading it at the death of the body.

The video game metaphor raises the possibility that there are both PCs (player characters) and NPCs (non-player characters), the latter being purely artificial.
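To make the metaphor concrete, here’s a toy sketch (all names and behaviors here are hypothetical, not drawn from the book): inside the simulation, a PC and an NPC can be identical data structures, differing only in where their next action comes from.

```python
import random

# Toy sketch of the PC/NPC distinction: both are the same data structure
# inside the simulation; only the source of their actions differs.
class Character:
    def __init__(self, name, controller):
        self.name = name
        self.controller = controller  # where this character's "will" comes from

    def act(self):
        return self.controller()

def player_input():
    # A PC's action arrives from outside the simulation (faked here).
    return "explore"

def npc_script():
    # An NPC's action is generated inside the simulation itself.
    return random.choice(["wander", "trade", "idle"])

world = [Character("PlayerOne", player_input), Character("Shopkeeper", npc_script)]
for tick in range(3):
    for character in world:
        print(f"tick {tick}: {character.name} -> {character.act()}")
```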

The Simulation Point and the world as information

Silicon Valley venture capitalist Marc Andreessen famously said that “software is eating the world.” However, part of the reason that I wrote the book about the Simulation Hypothesis is that computer science seems to be providing a new understanding of, and underpinning for, the other sciences.

Once upon a time, physics and biology were thought of as the study of physical objects. Today, physicists and biologists are coming to the conclusion that information is the key to unlocking their sciences. Genes, for example, are nothing if not a way to store information inside biological computers. Physicist John Wheeler, one of the last physicists to work with Albert Einstein, concluded that there was no material world and that everything came down to bits of information, a view he summed up in the phrase “it from bit.”

If everything is information, then our current technology development trends will lead us to the Simulation Point soon. Looking at these stages, many of them will be complete before 2050, but a few, like the downloading of consciousness, may prove more elusive until we understand what consciousness actually is. Even in those instances, my estimate is that within 100-200 years at most, we will have the technical underpinnings required to reach the Simulation Point and build our own version of the Matrix.

In his 2003 paper “Are You Living in a Computer Simulation?”, Oxford philosopher Nick Bostrom argued that if such technology can ever be created, then chances are it has already been created by some advanced civilization somewhere in the universe.

If that’s the case, then who is to say that we aren’t already living inside a giant video game?  As Morpheus said to Neo, “You have been living in a dream world.”
