Timeline Of Games Mastered By Artificial Intelligence


One of the earliest experiments in testing machine intelligence has been to make AI learn and master games played by humans. Games form the perfect test-bed for AI skills. Today, an AI called Pluribus has thoroughly learnt to play Poker, and we can now say that AI has conquered that game as well. Here is a look at the timeline of the games that AI has mastered so far.

1951: The first working game-playing AI programs were written at the University of Manchester. Running on the Ferranti Mark 1 machine, these programs learnt to play Checkers and Chess.


1952: Arthur Samuel of IBM began work on the first game-playing program capable of competing against human players at Checkers.

1955: Two years later, Samuel completed a version of the program that could learn as it played.


1990: Gerald Tesauro of IBM wrote TD-Gammon, a Backgammon-playing program built to demonstrate the power of reinforcement learning. It showed that an AI capable of competing at championship-level backgammon could be built with the hardware and programming techniques available at the time.
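To give a flavour of the idea TD-Gammon was built around, here is a minimal temporal-difference (TD(0)) value update in Python. It is a toy, tabular sketch for illustration only; TD-Gammon itself learnt a neural-network value function rather than a lookup table.

```python
# Toy tabular TD(0) update: nudge the value of a position toward the
# bootstrapped target "reward + gamma * value of the next position".
# Illustration only, not TD-Gammon's actual learner.
def td0_update(value, state, reward, next_state, alpha=0.1, gamma=1.0):
    v_s = value.get(state, 0.0)
    v_next = value.get(next_state, 0.0)
    value[state] = v_s + alpha * (reward + gamma * v_next - v_s)
    return value

V = {"position_A": 0.0, "position_B": 0.5}
td0_update(V, state="position_A", reward=0.0, next_state="position_B")
print(V["position_A"])  # 0.0 + 0.1 * (0.0 + 1.0 * 0.5 - 0.0) = 0.05
```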

1994: A computer program called Chinook competed at Checkers. It beat the second-highest-rated player, Don Lafferty, and won the US National Tournament by a wide margin.

1997: IBM's Deep Blue machine, built to play Chess, defeated Garry Kasparov, the reigning world champion. This was the first time an AI had defeated a world chess champion in match play.

Garry Kasparov playing chess against the IBM Deep Blue.

2007: Checkers was solved. By sifting through roughly 500 billion billion (5 × 10^20) possible positions, a computer program proved that the game is a draw with perfect play, meaning human players cannot beat it.

2011: IBM's supercomputer Watson, equipped with natural language processing capabilities, mastered the quiz game Jeopardy!. It competed against two of the game's champions, and at the end of the exhibition Watson had won $77,147 in prize money, while its two human opponents collected $24,000 and $21,600.

2014: Google DeepMind began work on AlphaGo, a deep-learning system for playing Go that would, within a few years, compete with and beat the game's top champions.

2015: AI began to master not just board games, where the space of moves is constrained and outcomes can be calculated, but also real-time strategy games like Dota 2. OpenAI, the research lab co-founded by Elon Musk, started using reinforcement learning to build an AI capable of playing Dota 2. The same year, Google DeepMind's AlphaGo defeated the three-time European Go champion Fan Hui by 5 games to 0.

2016: DeepMind's AlphaGo proved its skill at Go, one of the most difficult board games, with a vast number of possible moves. It defeated Lee Sedol, one of the world's strongest Go players, winning the match 4 games to 1. The AI had learnt the game by observing thousands of human games and then refining its play through games against itself.

Lee Sedol playing Go against AlphaGo

2017: OpenAI's bot competed at The International, the world's biggest Dota 2 tournament, against professional Dota 2 players. The AI was trained on the 1v1 version of the game, which is considerably simpler than the full five-player team version.

Researchers from Carnegie Mellon University (CMU) built an AI system called Libratus that played Texas Hold 'em Poker against four expert human players. The tournament lasted 20 days and spanned more than 120,000 hands of poker, and Libratus refined its strategy with every game. It individually defeated each of its four human opponents, all top professionals, by a huge margin.

(R) Professional poker player Jason Les plays Texas Hold’em Poker with Libratus. (L) Computer scientist Tuomas Sandholm, one of the bot’s creators.

The same year, DeepMind released an even more capable successor called AlphaGo Zero. The system required zero human involvement: it did not learn by watching humans play or by playing against them, but instead played against itself millions of times to master the board game. In this approach, the AI is given only the basic rules and left to learn on its own. A generalised version, AlphaZero, mastered chess in around four hours of self-play and defeated Stockfish 8, one of the strongest chess engines, winning 28 of their 100 games and drawing the rest.
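In rough pseudocode terms, the self-play idea can be sketched as the loop below. The Game and Agent interfaces here are hypothetical stand-ins used for illustration; DeepMind's real pipeline pairs a deep neural network with Monte Carlo tree search.

```python
# Toy self-play loop: the agent knows only the rules (exposed by the Game API)
# and improves by playing against itself. Hypothetical interfaces, shown to
# illustrate the idea behind AlphaGo Zero rather than its actual pipeline.
def self_play_training(game, agent, num_games=1_000_000):
    for _ in range(num_games):
        state = game.initial_state()
        history = []
        while not game.is_over(state):
            move = agent.choose_move(state)   # current policy picks a move
            history.append((state, move))
            state = game.apply(state, move)
        outcome = game.winner(state)          # +1, -1 or 0, from the rules alone
        agent.learn(history, outcome)         # update the policy from its own games
    return agent
```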

The University of Alberta's DeepStack showcased an AI that could dominate professional Poker players using what its creators described as an artificially intelligent form of intuition.

Maluuba, a deep learning startup acquired by Microsoft, developed an ML method called the Hybrid Reward Architecture (HRA). Applying it to Ms. Pac-Man, the team created more than 150 individual agents, each with its own narrow task, whose outputs were combined to choose moves. With this method, the AI learnt to reach the game's top score of 999,990, considered impossible for any human to achieve.
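Conceptually, HRA splits the game's single reward into many simple components (individual pellets, ghosts, fruit and so on), learns a value estimate for each, and sums them when choosing a move. A toy sketch of that aggregation step, with made-up component estimators, might look like this:

```python
# Toy sketch of HRA-style aggregation: each "agent" scores actions against its
# own narrow reward component, and the controller sums the estimates.
# Made-up interfaces for illustration; not Maluuba's actual implementation.
def hra_choose_action(state, component_q_functions, actions):
    def aggregate_q(action):
        return sum(q(state, action) for q in component_q_functions)
    return max(actions, key=aggregate_q)

# Example with two hand-made component estimators and three possible actions.
pellet_q = lambda state, action: 1.0 if action == "left" else 0.0
ghost_q = lambda state, action: -5.0 if action == "up" else 0.0
print(hra_choose_action({}, [pellet_q, ghost_q], ["left", "right", "up"]))  # "left"
```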

2018: OpenAI returned to The International, this time with a system that had learnt to play as a full team of five. The previous year, its bot had won a 1v1 demonstration game against the professional Dota 2 player and champion Dendi. Although the five-bot team could not win its matches against the professional teams, it displayed an exceptional level of capability.

2019: An AI called Pluribus competed against and beat professional players at six-player Texas Hold'em. This Poker AI could calculate several moves ahead and base its decisions on those projections, and it learnt strategies that human players tend not to adopt.

The Future Of AI In Games

With the advancement of AI in games, it is clear that the technology is capable of beating human champions at far more than board games: it now also wins at real-time strategy games like Dota 2 and imperfect-information games like Poker. It is because of capabilities like these that the prediction that AI will dramatically change the world by 2050 looks set to come true.




The Evolution of OpenAI


It’s easy to quip about AI being the end of humanity, making references to Terminator all the while.

You’ll see that we’re not entirely above such basic humor later on, but for now, let’s introduce the evolution of OpenAI and bring you up to speed on its impact on the Dota 2 pro scene.

OpenAI is an artificial intelligence company founded by Sam Altman and SpaceX and Tesla CEO Elon Musk. Their general mission is to advance AI for the betterment of humanity. A noble aspiration to be sure, but what does this have to do with Dota 2?

We’ve got all the answers right here, so come with us if you want to live (told you…)

The Evolution of OpenAI

OpenAI began life in December of 2015. Their mission is to expand the capabilities of current AI systems and open the door for AI to become an extension of human will. When looking for a testing ground for this new AI system, the developers chose Dota 2.

The OpenAI team chose Dota 2 because of the high degree of complexity, the need for adaptation and the multitude of potential combinations of moves, actions and reactions. In order to prepare itself to take on the pros, OpenAI had to get smart.

To this end, OpenAI uses a reinforcement learning algorithm called Proximal Policy Optimization (PPO). If that sentence made your eyelids feel heavy, don't worry, here's a breakdown.

Image courtesy of OpenAI

PPO is essentially a method of reinforcing AI behavior through trial and error. The AI is presented with a task or a problem and through trial and error it works out the most efficient way to solve it. This allows the developers to introduce hazards and hindrances that the AI must overcome, learning all the while.

Never is this more evident than in this hilarious clip of an AI avatar attempting to reach an orb while being pelted by digital snowballs. The AI will utilize PPO to overcome the challenge and learn skills useful to its current environment. Skills like how to maintain its balance and how to stand up when it gets knocked down.
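For the curious, the heart of PPO is a "clipped" objective that stops any single update from changing the policy's behavior too drastically. Here is a minimal sketch of that loss in Python, written with PyTorch tensors; it illustrates the published idea and is not OpenAI's production training code.

```python
import torch

# Minimal sketch of PPO's clipped surrogate loss. Real training setups add
# value-function and entropy terms, advantage estimation and huge parallel
# rollouts on top of this core idea.
def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    ratio = torch.exp(new_log_probs - old_log_probs)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Take the more pessimistic of the two, then negate so it can be minimized.
    return -torch.min(unclipped, clipped).mean()
```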

Introduction of OpenAI to Dota 2

To start with, OpenAI knew nothing about the game of Dota 2. When starting out the AI doesn’t know about last hitting or even the objective of the game. The AI doesn’t even know it’s playing Dota, it’s just attempting to solve a problem in the most efficient way possible.

OpenAI doesn’t have any concept of a UI, or what a hero or ability looks like. Its thinking is purely mathematical. All OpenAI ‘sees’ is a selection of numbers, with its objective being to simply optimize the numbers in its favor.

What OpenAI knows is that when it moves or casts an ability, some numbers change – numbers related to health, mana, hero position, creep behavior, gold, etc. It doesn't know initially whether the change is good or bad, it just knows something has changed. It then (through a very long process of trial and error) works out if the change is beneficial or detrimental.
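To make that concrete, here is a toy illustration in Python. The feature names and weights are invented for the example and have nothing to do with the bot's real observation space or reward shaping:

```python
# Toy illustration: the agent only sees numbers before and after an action,
# and a shaped reward tells it whether the change was good or bad.
REWARD_WEIGHTS = {"own_health": 0.5, "gold": 0.01, "enemy_tower_health": -0.2}

def shaped_reward(before, after):
    return sum(w * (after[k] - before[k]) for k, w in REWARD_WEIGHTS.items())

before = {"own_health": 580, "gold": 620, "enemy_tower_health": 1300}
after = {"own_health": 560, "gold": 660, "enemy_tower_health": 1250}
print(shaped_reward(before, after))  # 0.5*(-20) + 0.01*40 + (-0.2)*(-50) = 0.4
```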

Image courtesy of Valve

To learn a new patch, the developers take the latest version of OpenAI and drop it into the new patch. It notices that certain numbers change differently to how they did before, or perhaps don’t change at all. OpenAI will modify its behavior accordingly in favor of solving the puzzle (in this case winning a game of Dota 2).

OpenAI’s CTO Greg Brockman explained that the OpenAI bot was able to learn to play the game at a professional level from scratch in the span of just two weeks of real time (336 hours). Even more amazingly, after just 1 hour of training the OpenAI bot is able to crush the in-game bots. The evolution of OpenAI as a competitive gaming opponent is incredible.

OpenAI’s First Live Match

By August of 2017, OpenAI was ready to move into the big league. On August 11th 2017, during The International 2017 (TI7) in Seattle, in front of 20,000 people, OpenAI played a 1v1 mid-only game against the ever-charismatic Danil "Dendi" Ishutin.

Image courtesy of Natus Vincere

The match consisted of a best of 3 series, with both combatants using Shadow Fiend. In a hotly contested first match, OpenAI utilized incredible Shadowraze faking and masterful positioning to secure two kills without reply, winning the first game.

In the second, decisive game, Dendi realized almost immediately that his chances of winning were almost zero. He called GG with less than 90 seconds played.

The Future

The evolution of OpenAI is fascinating to watch. It has gone from a general learning system to mastering a game as complex as Dota 2 in the space of two weeks. It is the tip of the iceberg for OpenAI as a system to enhance and build value for humanity.

AI is not without its risks, as OpenAI themselves have discussed, but with diligent professionals committed to improving AI such that it rivals human performance on almost every intellectual task, the future is certainly exciting for AI.

Tune in later in the week as we explore OpenAI’s recent performances in Dota 2 on a true 5v5 scale. While you’re here, why not check out some of our other great sports and esports articles at The Game Haus?



OpenAI Wants to Make Ultrapowerful AI. But Not in a Bad Way


One Saturday last month, five men ages 19 through 26 strode confidently out of a cloud of magenta smoke in a converted auto showroom in San Francisco. They sat at a line of computer keyboards to loud cheers from a crowd of a few hundred. Ninety minutes of intense mouse-clicking later, the five’s smiles had turned sheepish and the applause consolatory. Team OG, champions at the world’s most lucrative videogame, Dota 2, had lost two consecutive games to a collective of artificial intelligence bots.

The result was notable because complex videogames are mathematically more challenging than cerebral-seeming board games like chess or Go. Yet leaning against a wall backstage, Sam Altman, CEO of OpenAI, the research institute that created the bots, was as relieved as he was celebratory.


“We were all pretty nervous this morning—I thought we had like a 60-40 chance,” said Altman, a compact figure in a white T-shirt and whiter, showy sneakers. He became OpenAI’s CEO in March after stepping down as president of influential startup incubator Y Combinator and had reason to be measured about the day’s win. To succeed in his new job, Altman needs bots to do more than beat humans at videogames—he needs them to be better than people at everything.

OpenAI’s stated mission is to ensure that all of humanity benefits from any future AI that’s capable of outperforming “humans at most economically valuable work.” Such technology, dubbed artificial general intelligence, or AGI, does not seem close, but OpenAI says it and others are making progress. The organization has shown it can produce research on par with the best in the world. It has also been accused of hype and fearmongering by AI experts critical of its fixation on AGI and AI technology’s potential hazards.

Under Altman’s plans, OpenAI’s research—and provocations—would accelerate. Previously chair of the organization, he took over as CEO after helping flip most of the nonprofit’s staff into a new for-profit company, in hopes of tapping investors for the billions he claims he needs to shape the destiny of AI and humanity. Altman says the big tech labs at Alphabet and elsewhere need to be pressured by a peer not driven to maximize shareholder value. “I don’t want a world where a single tech company creates AGI and captures all of the value and makes all of the decisions,” he says.

At an MIT event in late 2014, Tesla CEO Elon Musk described AI research as like “summoning the demon.” In the summer of 2015, he got talking with Altman and a few others over dinner about creating a research lab independent of the tech industry to steer AI in a positive direction. OpenAI was announced late that year, with Altman and Musk as cochairs. Musk left the board early in 2018, citing potential conflicts with his other roles.

In its short life, OpenAI has established itself as a serious venue for AI research. Ilya Sutskever, a cofounder of the organization who left a plum position in Google’s AI group to lead its research, oversees a staff that includes fellow ex-Googlers and alumni of Facebook, Microsoft, and Intel. Their work on topics such as robotics and machine learning has appeared at top peer-reviewed conferences. The group has teamed up with Google parent Alphabet to research AI safety; beating Team OG in Dota 2 earned respect from experts in AI and gaming.

OpenAI’s metamorphosis into a for-profit corporation was driven by a feeling that keeping pace with giants such as Alphabet will require access to ever-growing computing resources. In 2015, OpenAI said it had $1 billion in committed funding from Altman, Musk, LinkedIn cofounder Reid Hoffman, early Facebook investor Peter Thiel, and Amazon. Altman now says a single billion won’t be enough. “The amount of money we needed to be successful in the mission is much more gigantic than I originally thought,” he says.

OpenAI CTO Greg Brockman, center, shakes hands with members of professional e-gaming team OG after they lost two games of Dota 2 to his researchers’ artificial intelligence bots.


IRS filings show that in 2017, when OpenAI showed its first Dota-playing bot, it spent $8 million on cloud computing. Its outlay has likely grown significantly since. In 2018, OpenAI disclosed that a precursor to the system that defeated Team OG tied up more than 120,000 processors rented from Google’s cloud division for weeks. The champion-beating version trained for 10 months, playing the equivalent of 45,000 years of Dota against versions of itself. Asked how much that cost, Greg Brockman, OpenAI’s chief technology officer, says the project required “millions of dollars” but declined to elaborate.
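As a rough sanity check on those figures, compressing 45,000 years of play into roughly 10 months of wall-clock time implies generating experience at around 45,000 / (10/12) ≈ 54,000 times real-time speed, which helps explain why the bill ran to millions of dollars.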

Altman isn’t sure if OpenAI will continue to rely on the cloud services of rivals—he remains open to buying or even designing AI hardware. The organization is keeping close tabs on new chips being developed by Google and a raft of startups to put more punch behind machine learning algorithms.

To raise the funds needed to ensure access to future hardware, Altman has been trying to sell investors on a scheme wild even for Silicon Valley. Sink money into OpenAI, the pitch goes, and the company will pay you back 100-fold—once it invents bots that outperform humans at most economically valuable work.

Altman says delivering that pitch has been “the most interesting fundraising experience of my life—it doesn’t fit anyone’s model.” The strongest interest comes from AI-curious wealthy individuals, he says. Hoffman and VC firm Khosla Ventures have invested in the new, for-profit OpenAI but didn’t respond to requests for comment. No one is told when to expect returns, but betting on OpenAI is not for the impatient. VC firms are informed they’ll have to extend the duration of their funds beyond the industry standard decade. “We tell them upfront, you’re not going to get a return in 10 years,” Altman says.


Even as it tries to line up funding, OpenAI is drawing criticism from some leading AI researchers. In February, OpenAI published details of language processing software that could also generate remarkably fluid text. It let some news outlets—including WIRED—try out the software but said the full package and specifications would be kept private out of concern they could be used maliciously, for example to pollute social networks.

That annoyed some prominent names in AI research, including Facebook’s chief AI scientist Yann LeCun. In public Facebook posts, he defended open publication of AI research and joked that people should stop having babies, since they could one day create fake news. Mark Zuckerberg clicked “like” on the baby joke; LeCun did not respond to a request for comment.

For some, the episode highlighted how OpenAI’s mission leads it to put an ominous spin on work that isn’t radically different from that at other corporate or academic labs. “They’re doing more or less identical research to everyone else but want to raise billions of dollars on it,” says Zachary Lipton, a professor who works on machine learning at Carnegie Mellon University and also says OpenAI has produced some good results. “The only way to do that is to be a little disingenuous.”

Altman concedes that OpenAI may have sounded the alarm too early—but says that’s better than being too late. “The tech industry has not done a good enough job trying to be proactive about how things may be abused,” he says. A Google cloud executive who helps implement the company’s internal AI ethics rules recently spoke in support of OpenAI’s self-censorship.

After the defeated Team OG departed the stage last month to sympathetic acclaim, OpenAI cued up a second experiment designed to demonstrate the congenial side of superhuman AI. Dota experts—and a few novices, including WIRED—played on teams alongside bots.

The AI software unlucky enough to get WIRED as a teammate mostly evinced superhuman indifference to helping a rookie player. It focused instead on winning the game, following instincts honed by months of expensive training.

Narrow hyper-competence is a hallmark of existing AI systems. A WIRED reporter could play Dota badly while taking occasional notes and talking with an OpenAI researcher, before riding a bicycle home in city traffic. Despite millions spent on training, the Dota bots could only play the specific version of the game they were designed for.

There’s little consensus on how to make AI software more flexible, or what components might be needed to make AGI more than a technological fantasy. Even Altman is daunted by the scale of the challenge. “I have days where I’m convinced it’s all going to happen and others where it all feels like a pipe dream,” he says.




OpenAI Five Gets 99.4 Percent Winrate Vs Humans In Dota 2


The humans were given their chance and, at the end of it all, OpenAI Five stands triumphant, having made its mark in Dota 2 history. At the end of a four-day Arena event, OpenAI Five recorded a total of 7,215 wins against 42 losses, for a winrate of 99.4%.
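For reference, the winrate follows directly from those totals: 7,215 + 42 = 7,257 games in all, and 7,215 / 7,257 ≈ 0.9942, which rounds to 99.4%.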

The Arena matches followed the exhibition match in which the Five beat TI8 champions OG 2-0. This was the first time that an AI had worked as a team and defeated a human team.

In the exhibition game, the Five made use of buybacks. In most human tournaments, buybacks are typically not used in the early stages of a game; for the Five, however, the buyback decisions went in its favor, as the AI team managed to keep the pressure on OG. In a statement, OpenAI Chief Technology Officer Greg Brockman said that an AI using buybacks at such an early stage is not surprising, as AI generally favors short-term gains, unlike human-led teams that incorporate long-term planning into their decisions.

Registration for the Arena matches began April 13, with the actual matches played between April 18 and April 21. Over that period, a total of 30,937 human players participated in the OpenAI challenge, and the total time the AI spent playing amounted to 10.7 years.

The performance of the Five was so impressive that it took 459 games before a human-led team recorded a win. In total, 22 teams managed to get a win against the Five.

Of the 42 human wins, 27 came on the Radiant side and 15 on the Dire side. The highest human team-kill count was 57, though the Five managed 68 team kills of its own. The longest game took one hour and 11 minutes, and the shortest lasted 26 minutes and 32 seconds.

It is worth noting that a player with the username “ainodehna” led a team that recorded a 10-win streak against the Five.

It should also be noted that, while this is indeed a big leap for AI, the conditions arguably favored the AI. For example, the hero pool was limited to 17 heroes, since those are the only heroes supported so far, and bans were not allowed. Additional limitations included illusion runes not spawning and players not being able to buy the recipes for certain items, including Helm of the Dominator, Manta Style, and Necronomicon, although they could still buy the component items.

Even with these limitations, it goes to show that AI has indeed come a long way, and many are excited at the prospect of seeing AI play the full game.


Bumble unleashes ML on your privates, humans thrash Dota-2 bots, AI in criminal justice…


Roundup Let’s start the week with some bits and bytes of machine-learning news.

Is this the year of the transformer and attention? OpenAI has been working on two separate AI systems that can predict the next items in a given sequence of patterns. One model, Sparse Transformer, can work with text, images, and sounds, and the other, MuseNet, works with sounds only to produce musical compositions.

The Sparse Transformer system is a deep neural network that makes use of attention, the same technique used in OpenAI's large GPT-2 language model. Attention helps a neural network analyze the overall structure of its input data, so that the model can fill in the blanks given a specific input.

Here, the Sparse Transformer is asked to do things like complete an image when half of it has been blacked out, or listen to a classical music clip and add more notes to finish the composition. You can read all the nitty gritty mathematical details of how it works here.
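For a sense of the building block involved, here is ordinary dense scaled dot-product attention in a few lines of NumPy; the Sparse Transformer's contribution is to make this step cheaper with sparse attention patterns, which this toy sketch does not attempt to reproduce.

```python
import numpy as np

# Dense scaled dot-product attention: every position looks at every other
# position and takes a weighted mix of their values. Toy sketch only.
def attention(queries, keys, values):
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)             # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over positions
    return weights @ values

q = k = v = np.random.randn(8, 16)   # 8 positions, 16-dimensional features
out = attention(q, k, v)             # shape (8, 16)
```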

A similar approach is applied for MuseNet, another deep neural network that can create four minute clips of AI music. You can listen to some of them at the OpenAI site.

Some people don’t find them that musically interesting, whilst others are pretty impressed. Using machines to generate music has definitely been improving, but the songs are pretty hit or miss.

OpenAI’s Dota-2 playing bots can be beaten: While we’re still on the topic of OpenAI, the San Francisco-based research lab released its Dota-2 bots to the public to play against.

The experiment lasted for a few days only, from Thursday 18 April to Sunday 21 April. OpenAI Five played over 7000 games and won 99.4 per cent of the time. Not too shabby. A Dota 2 fan who played against the computer told El Reg it has “really good coordination”.

“It definitely has superhuman response times in some ways, but it’s also surprisingly subpar in efficiency. I’m surprised that it coordinates better than humans, but it can’t beat people 1V1 as easily, compared to 5V5,” said the fan, who wanted to remain anonymous.

“As for teaming up with other humans players, however, it doesn’t seem to adapt to humans, instead the humans adapt to play with it – when the bot is losing, it feels like they have no idea what to do.”

Although it’s difficult to beat the bot, it’s not impossible. The scoreboard shows that there were a few teams who must have learnt how to exploit its weaknesses. The best human team won 10 times in a row, the second best beat the bot nine times in a row, and the third team did it eight times.

No lewd nudes please, this is Bumble: Bumble is cracking down on pics of people’s private parts sent via its dating app with an AI-based tool known as the ‘Private Detector’.

If a dodgy image is sent in a chat, it’s blurred out and the user can decide if he or she wants to reveal, block, or report the filth, according to The Verge.

Bumble is seen as a friendlier option for women as they can choose whether to instigate a conversation or not. Unlike other dating apps like Tinder, however, it allows users to send pictures to one another. Unsurprisingly, some guys like to show their junk off, unsolicited, to prospective romantic or sexual interests.

Not only can these advances be gross, it’s hoped that the Private Detector will also help crack down on fake profiles.

New AI in criminal justice report: The Partnership on AI, a non-profit consortium made up of leading AI organizations in industry and academia, has published a report outlining the risks of using AI in the criminal justice system.

It outlines ten shortcomings of using the technology for pretrial detention, ranging from concerns about the bias and accuracy of such systems to how legal experts should use these kinds of tools.

If this is something you’re interested in then give it a thorough read. The report is written to educate people rather than to implement any strict policies.

“Going forward, we hope that this report sparks a deeper discussion about these concerns with the use of risk assessment tools and spurs collaboration between policymakers, researchers, and civil society groups to accomplish much needed standard-setting and reforms in this space,” it said.

What’s the current health status of the Internet? It seems like a weird question to ask, but folks over at Mozilla have tried to answer it anyway.

The company’s annual Internet Health Report looks at the current trends unfolding in the virtual world. It looks like we’re obsessed with making sure AI is being used responsibly, and have watched with growing alarm as countries like the US and China roll out AI systems, like facial recognition, that raise privacy and security issues for their citizens.

People are also calling for more internet watchdogs to scrutinize big tech companies like Google and Facebook over concerns about how personal data is handled. Elsewhere, oppressive governments are slowing down internet access as a way of exercising control over their people.

You can read the full report here. ®
