
DeepMind has an AI bot that maneuvers through mazes and grabs objects on its own

DeepMind - Reinforcement Learning with Unsupervised Auxiliary Tasks
Google’s DeepMind released a paper this week called Reinforcement Learning with Unsupervised Auxiliary Tasks, which describes a method for increasing both the learning speed and the final performance of artificial intelligence agents, or bots. The method adds two auxiliary tasks for the AI to perform while it trains, and builds on the standard deep reinforcement learning foundation: a trial-and-error reward/punishment scheme in which the AI learns from its mistakes.
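
The paper combines everything into a single training objective. As a rough illustration only, not DeepMind’s actual code, and with made-up weight values, the overall idea in Python looks something like this:

```python
# Illustrative sketch: the UNREAL objective adds weighted auxiliary-task
# losses on top of the base A3C loss. The names and default weights below
# are assumptions for illustration, not values from the paper.

def unreal_loss(a3c_loss, pixel_control_loss, reward_prediction_loss,
                lambda_pc=0.01, lambda_rp=1.0):
    """Total training loss: base RL loss plus weighted auxiliary losses."""
    return (a3c_loss
            + lambda_pc * pixel_control_loss
            + lambda_rp * reward_prediction_loss)
```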

The first added task for speeding up AI learning is pixel control: the agent learns how its actions change the pixels on the screen. According to DeepMind, this method is similar to how a baby learns to control its hands by moving them and watching those movements. In the case of AI, the bot comes to understand its visual input by learning to control the pixels, which in turn leads to better scores.

“Consider a baby that learns to maximize the cumulative amount of red that it observes. To correctly predict the optimal value, the baby must understand how to increase ‘redness’ by various means, including manipulation (bringing a red object closer to the eyes); locomotion (moving in front of a red object); and communication (crying until the parents bring a red object),” DeepMind’s paper states. “These behaviors are likely to recur for many other goals that the baby may subsequently encounter.”
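
To give a sense of what “controlling the pixels” means in practice, here is a minimal sketch of the kind of pixel-change pseudo-reward such an auxiliary task can optimize. The frame size and cell dimensions are assumptions for illustration, not the paper’s exact setup:

```python
import numpy as np

# Sketch of a pixel-control pseudo-reward: for each small spatial cell of
# the image, reward the agent by the average absolute change in pixel
# intensity between consecutive frames, so actions that visibly change
# parts of the screen earn auxiliary reward. Illustrative, not DeepMind's code.

def pixel_change_reward(frame_t, frame_t1, cell=4):
    """Mean absolute pixel change per (cell x cell) region of the image."""
    diff = np.abs(frame_t1.astype(np.float32) - frame_t.astype(np.float32))
    diff = diff.mean(axis=-1)                      # average over RGB channels
    h, w = diff.shape
    cells = diff[: h - h % cell, : w - w % cell]   # crop to a whole number of cells
    cells = cells.reshape(h // cell, cell, w // cell, cell)
    return cells.mean(axis=(1, 3))                 # one pseudo-reward per cell

# Example: two random 84x84 RGB frames yield a 21x21 grid of pseudo-rewards.
r = pixel_change_reward(np.random.rand(84, 84, 3), np.random.rand(84, 84, 3))
print(r.shape)  # (21, 21)
```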

The second added task trains the AI to predict the immediate reward it will receive based on a brief history of prior actions. To enable this, the team fed the predictor equal amounts of rewarding and non-rewarding histories from the agent’s past experience. The end result is that the AI can discover visual features that are likely to lead to rewards far faster than before.

“To learn more efficiently, our agents use an experience replay mechanism to provide additional updates to the critics. Just as animals dream about positively or negatively rewarding events more frequently, our agents preferentially replay sequences containing rewarding events,” the paper adds.
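
Here is one way that balanced sampling could look in Python. The replay-buffer layout, with each history stored as a list of (observation, reward) steps, and the function name are assumptions for illustration, not the paper’s code:

```python
import random

# Sketch of skewed replay sampling for the reward-prediction task: draw
# rewarding and non-rewarding histories with equal probability so that
# rare reward events are not drowned out during training.

def sample_for_reward_prediction(replay_buffer):
    """Draw a short history, 50/50 between rewarding and non-rewarding endings."""
    rewarding = [h for h in replay_buffer if h[-1][1] != 0]
    non_rewarding = [h for h in replay_buffer if h[-1][1] == 0]
    pool = rewarding if random.random() < 0.5 else non_rewarding
    if not pool:                      # fall back if one class is empty
        pool = rewarding or non_rewarding
    history = random.choice(pool)
    frames = [obs for obs, reward in history[:-1]]  # context before the final step
    target = history[-1][1]                         # immediate reward to predict
    return frames, target
```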

With these two auxiliary tasks added to the previous A3C agent, the resulting agent is what the team calls UNREAL (UNsupervised REinforcement and Auxiliary Learning). The team virtually sat this bot in front of 57 Atari games and a separate Wolfenstein-like Labyrinth game consisting of 13 levels. In all scenarios, the bot was given only the raw RGB image of the screen, so it worked directly from the pixels rather than from any hand-crafted game data. The UNREAL bot earned rewards for tasks ranging from shooting down aliens in Space Invaders to grabbing apples in a 3D maze.

Because the UNREAL bot can control the pixels and predict whether its actions will produce rewards, it’s capable of learning 10 times faster than DeepMind’s previous best agent, A3C. It also achieves better final performance than its predecessor.

“We can now achieve 87 percent of expert human performance averaged across the Labyrinth levels we considered, with super-human performance on a number of them,” the company said. “On Atari, the agent now achieves on average 9x human performance.”

DeepMind is hopeful that the work that went into the UNREAL bot will enable the team to scale up its agents to handle even more complex environments in the near future. Until then, check out the video embedded above showing the AI moving through labyrinths and grabbing apples on its own, without any human intervention.

Kevin Parrish
Former Digital Trends Contributor