Microsoft releases Project Malmo, its Minecraft-based AI research tool

Microsoft has released Project Malmo, a platform that uses Minecraft as an environment for advanced research into artificial intelligence. The system was previously open only to scientists invited to a private preview, but is now available to everyone via GitHub.

Project Malmo, previously referred to as Project AIX, has been spearheaded by a team working at a Microsoft research lab based in the English city of Cambridge. The platform is intended to facilitate research into general artificial intelligence, rather than systems designed to solve a single problem.

Minecraft is ideally suited to the task, as it’s built upon a sandbox structure where players can go wherever they like and do whatever they want. By placing an AI in that environment and observing how it responds, researchers can learn how to “teach” their creations to handle a wide range of different situations.

Moving from a specific intelligence to a general intelligence is a necessary stepping stone as we attempt to progress from AIs that can play board games to AIs that can play a broader role in society. Even if an AI is only ever tasked with folding laundry or stacking supermarket shelves, it’s crucial that it has a working knowledge of how to behave in the wider world.

One big advantage of Project Malmo over similar systems is that it allows researchers to compare their work against other projects that are using the same environment, according to a blog post announcing its release.

Microsoft hopes that Project Malmo will engage a broad range of users, and that its links to Minecraft will help entice more novice coders. To get started, download the package from its GitHub repository and launch the PC version of the game with the mod installed.
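For a sense of what an experiment looks like in practice, here is a minimal sketch in the style of the Python samples bundled with the release. It assumes the MalmoPython bindings have been built and a modded Minecraft client is already running; the default MissionSpec simply drops an agent into a flat world.

    import time
    import MalmoPython  # Python bindings included in the Project Malmo package

    # Connect to the running, modded Minecraft client and start a default mission.
    agent_host = MalmoPython.AgentHost()
    mission = MalmoPython.MissionSpec()              # default flat-world mission
    mission_record = MalmoPython.MissionRecordSpec() # no recording for this sketch
    agent_host.startMission(mission, mission_record)

    # Wait for Minecraft to report that the mission has begun.
    world_state = agent_host.getWorldState()
    while not world_state.has_mission_begun:
        time.sleep(0.1)
        world_state = agent_host.getWorldState()

    # Act in the sandbox: walk forward until the mission ends.
    while world_state.is_mission_running:
        agent_host.sendCommand("move 1")
        time.sleep(0.5)
        world_state = agent_host.getWorldState()

In a real experiment, the hard-coded "move 1" command would typically be replaced by a learning agent that chooses actions based on the observations carried in each world state.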

Brad Jones
Former Digital Trends Contributor
Brad is an English-born writer currently splitting his time between Edinburgh and Pennsylvania. You can find him on Twitter…