
Microsoft plans to use Minecraft to test our future robotic overlords

Microsoft has announced plans to turn the hugely popular video game Minecraft into a potent tool for AI researchers. An open-source software platform called AIX, set to be distributed in July, will let the game act as a test bed that an AI agent can be taught to explore.

Teaching an AI something as simple as climbing a hill might seem like a basic research project, but building a robot that can carry out the necessary movements is often prohibitively expensive. By confining that motion to the virtual world of Minecraft, researchers can run similar experiments without the associated costs.

Because Minecraft is a sandbox game, it's well suited to research projects that teach AIs to make decisions about the world around them. In the game, those choices might mean avoiding a fiery death in a pool of lava or staying inside at night to hide from nocturnal enemies, but the fundamentals could have real-world applications.
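To make that concrete, here is a toy illustration of the kind of decision-making experiment described above: a tabular Q-learning agent on a tiny gridworld with "lava" tiles, learning by trial and error to reach safety without stepping into danger. This is a hypothetical sketch for illustration only, not Project AIX's actual API; the grid, rewards, and function names are all assumptions.

```python
# Hypothetical sketch: Q-learning on a toy "avoid the lava" gridworld.
# Not Project AIX's real interface -- purely illustrative.
import random

GRID = [
    "S..L",
    ".L..",
    "...G",
]  # S = start, L = lava (instant death), G = goal, . = safe ground

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right


def step(pos, move):
    """Apply a move, clipping at the grid edges; return (state, reward, done)."""
    r = min(max(pos[0] + move[0], 0), len(GRID) - 1)
    c = min(max(pos[1] + move[1], 0), len(GRID[0]) - 1)
    cell = GRID[r][c]
    if cell == "L":
        return (r, c), -1.0, True   # fiery death
    if cell == "G":
        return (r, c), 1.0, True    # reached safety
    return (r, c), 0.0, False


def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """Learn a Q-table mapping (state, action) -> expected return."""
    q = {}
    for _ in range(episodes):
        pos, done = (0, 0), False
        for _ in range(100):  # cap episode length
            if random.random() < eps:  # explore occasionally
                a = random.randrange(len(ACTIONS))
            else:  # otherwise act greedily on current estimates
                a = max(range(len(ACTIONS)), key=lambda i: q.get((pos, i), 0.0))
            nxt, reward, done = step(pos, ACTIONS[a])
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            q[(pos, a)] = q.get((pos, a), 0.0) + alpha * (
                reward + gamma * best_next - q.get((pos, a), 0.0)
            )
            pos = nxt
            if done:
                break
    return q


def greedy_path(q, max_steps=20):
    """Follow the learned policy from the start; return the path and final reward."""
    pos, path, reward = (0, 0), [(0, 0)], 0.0
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: q.get((pos, i), 0.0))
        pos, reward, done = step(pos, ACTIONS[a])
        path.append(pos)
        if done:
            break
    return path, reward
```

After a couple of thousand simulated episodes, the greedy policy steers the agent around the lava tiles to the goal; in a Minecraft-style test bed the same principle applies, only with a far richer state space than a 3×4 grid.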

In a report from the BBC, project lead Katja Hofmann is quoted as saying that the expected scope of the project "provides a way to take AI from where it is today up to human-level intelligence, which is where we want to be, in several decades' time."

When Microsoft acquired development studio Mojang and its biggest release, Minecraft, for $2.5 billion in 2014, it was immediately clear that the company was after more than just the rights to the game and its significant merchandising revenue potential. For comparison, Disney's purchase of Lucasfilm, which secured both Star Wars and Indiana Jones, closed at $4 billion.

Given that Microsoft made such a significant financial investment, Minecraft was always destined to be more than just a video game product. Between a recent push to use the title in education, its constant presence at HoloLens briefings, and this new application in AI research, it seems the brand is being put to good use.

Brad Jones
Former Digital Trends Contributor