
OpenAI bot crushes top players at Dota 2 tournament

Dendi vs. OpenAI at The International 2017
The OpenAI team, supported by tech maven Elon Musk, showcased an AI bot at a tournament in Seattle that decisively beat several of the world’s best Dota 2 players in one-on-one matches. The stunning upset over pro gamer and crowd favorite Danil “Dendi” Ishutin was broadcast live from the stage at The International, a $24 million Dota 2 tournament backed by Valve.

In the first match, the machine-learning algorithm defeated Dendi in ten minutes. Dendi then resigned from the second match, and declined a third. On the OpenAI blog, developers boasted that the bot had previously conquered the top 1v1 player in the world and the top overall player in the world.

Musk’s billion-dollar OpenAI venture has a noble goal — nothing less than saving humanity from the impending apocalypse unleashed by our AI overlords. On a far less grand scale, the OpenAI algorithm for Dota 2 was developed by playing many games against itself, also known as “learned bot behavior,” and then utilizing techniques that could take human players years to master. In a new video, OpenAI detailed some of the rather esoteric strategies used in its demonstration matches, such as last hitting (scoring extra gold by dealing the last blow) and raze dodging (using spell-casting lag to its advantage).


One-on-one matches are far less involved than the standard five-on-five bouts in tournament play, which feature a much wider range of techniques and strategies. Still, it’s an impressive accomplishment, and OpenAI plans to have its bots ready for full five-on-five matches at next year’s International.

Greg Brockman from OpenAI, in a video released before the match, remarked that “Dota is a great test for artificial intelligence,” due to the game’s complexity and open-ended style of play. “Our bot is trained entirely through self-play. It starts out completely random with no knowledge of the world.” The bot then plays against itself for thousands of matches, developing strategies and gaining insight as it goes.
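The self-play idea Brockman describes — start random, play thousands of games against a copy of yourself, and let strategy emerge — can be illustrated with a toy sketch. The example below is purely hypothetical and is not OpenAI’s training code; it uses fictitious self-play on rock-paper-scissors, where an agent that begins with a uniform random prior and repeatedly best-responds to its own past play converges toward the balanced equilibrium strategy.

```python
# Toy self-play sketch (illustrative only; not OpenAI's actual method).
# An agent plays rock-paper-scissors against its own cumulative history:
# each round it best-responds to the empirical frequencies of its past
# actions (fictitious self-play). Starting "completely random with no
# knowledge of the world," its average policy drifts toward the uniform
# Nash equilibrium (1/3, 1/3, 1/3).

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
BEATS = {0: 1, 1: 2, 2: 0}  # BEATS[a] is the action that beats a

def best_response(opponent_counts):
    """Pick the action that beats the opponent's most frequent action."""
    most_common = max(range(ACTIONS), key=lambda a: opponent_counts[a])
    return BEATS[most_common]

def self_play(rounds=10_000):
    counts = [1, 1, 1]  # uniform prior: no knowledge of the game
    for _ in range(rounds):
        action = best_response(counts)  # play against a copy of itself
        counts[action] += 1             # the "opponent" is its own history
    total = sum(counts)
    return [c / total for c in counts]

print(self_play())  # each probability settles near 1/3
```

Real systems replace the count table with a neural network and the best-response step with reinforcement-learning updates, but the loop has the same shape: the agent is both players, and improvement comes only from its own games.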

In an interview with Business Insider, Brockman expressed his hope that their “self-playing” style of machine learning will lead to far greater advances in AI. “At OpenAI, we’re not just about publishing a paper,” he said. “It’s really about building systems and doing something that would have been impossible before.”

Mark Austin
Former Digital Trends Contributor