“$1,000,000,000,” it read.
The investment from Microsoft, signed early this month and announced Monday, signals a new direction for Altman’s research lab.
In March, Altman stepped down from his daily duties as the head of Y Combinator, the startup “accelerator” that catapulted him into the Silicon Valley elite. Now, at 34, he is the chief executive of OpenAI, the artificial intelligence lab he helped create in 2015 with Elon Musk, the billionaire chief executive of the electric carmaker Tesla.
Musk left the lab last year to concentrate on his own AI ambitions at Tesla. Since then, Altman has remade OpenAI, founded as a nonprofit, into a for-profit company so it could more aggressively pursue financing. Now he has landed a marquee investor to help it chase an outrageously lofty goal.
He and his team of researchers hope to build artificial general intelligence, or AGI, a machine that can do anything the human brain can do.
AGI still has a whiff of science fiction. But in their agreement, Microsoft and OpenAI discuss the possibility with the same matter-of-fact language they might apply to any other technology they hope to build, whether it’s a cloud-computing service or a new kind of robotic arm.
“My goal in running OpenAI is to successfully create broadly beneficial AGI,” Altman said in a recent interview. “And this partnership is the most important milestone so far on that path.”
In recent years, a small but fervent community of artificial intelligence researchers has set its sights on AGI, backed by some of the wealthiest companies in the world. DeepMind, a top lab owned by Google’s parent company, says it is chasing the same goal.
Most experts believe AGI will not arrive for decades or even centuries — if it arrives at all. Even Altman admits OpenAI may never get there. But the race is on nonetheless.
In a joint phone interview with Altman, Microsoft’s chief executive, Satya Nadella, compared AGI to his company’s efforts to build a quantum computer, a machine that could be exponentially faster than today’s computers at certain problems. “Whether it’s our pursuit of quantum computing or it’s a pursuit of AGI, I think you need these high-ambition North Stars,” he said.
Altman’s 100-employee company recently built a system that could beat the world’s best players at a video game called Dota 2. Just a few years ago, this kind of thing did not seem possible.
Dota 2 is a game in which each player must navigate a complex, three-dimensional environment along with several other players, coordinating a careful balance between attack and defense. In other words, it requires old-fashioned teamwork, and that is a difficult skill for machines to master.
OpenAI mastered Dota 2 thanks to a mathematical technique called reinforcement learning, which allows machines to learn tasks by extreme trial and error. By playing the game over and over again, automated pieces of software, called agents, learned which strategies are successful.
The agents learned those skills over the course of several months, racking up more than 45,000 years of game play. That required enormous amounts of raw computing power. OpenAI spent millions of dollars renting access to tens of thousands of computer chips inside cloud computing services run by companies like Google and Amazon.
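The trial-and-error loop behind reinforcement learning can be sketched in a few lines of code. The toy example below is illustrative only, not OpenAI’s Dota 2 system: it uses tabular Q-learning on a made-up five-state “corridor,” where an agent that initially knows nothing learns, purely from repeated play and a reward signal, that moving right pays off.

```python
import random

# Toy "corridor" environment: states 0..4, start at state 0.
# The only reward is for reaching the rightmost state.
N_STATES = 5
ACTIONS = [0, 1]  # 0 = move left, 1 = move right

def step(state, action):
    """Apply an action; reaching the last state ends the episode with reward 1."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: play many episodes, updating value estimates as we go."""
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            next_state, reward, done = step(state, action)
            # Nudge the estimate toward (reward + discounted best future value).
            q[state][action] += alpha * (
                reward + gamma * max(q[next_state]) - q[state][action]
            )
            state = next_state
    return q

q = train()
# Read off the learned policy for non-terminal states: 1 means "move right".
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_STATES - 1)]
```

The Dota 2 agents worked on the same principle, just at vastly larger scale: instead of a five-entry table, a neural network estimated the value of actions, and instead of 500 episodes, training consumed the equivalent of tens of thousands of years of play.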
Eventually, Altman and his colleagues believe, they can build AGI in a similar way. If they can gather enough data to describe everything humans deal with on a daily basis — and if they have enough computing power to analyze all that data — they believe they can rebuild human intelligence.
Altman painted the deal with Microsoft as a step in this direction. As Microsoft invests in OpenAI, the tech giant will also work on building new kinds of computing systems that can help the lab analyze increasingly large amounts of information.
“This is about really having that tight feedback cycle between a high-ambition pursuit of AGI and what is our core business, which is building the world’s computer,” Nadella said.
That work will likely include computer chips designed specifically for training artificial intelligence systems. Like Google, Amazon and dozens of startups across the globe, Microsoft is already exploring this new kind of chip.
Most of that $1 billion, Altman said, will be spent on the computing power OpenAI needs to achieve its ambitions. And under the terms of the new contract, Microsoft will eventually become the lab’s sole source of computing power.
Nadella said Microsoft would not necessarily invest that $1 billion all at once. It could be doled out over the course of a decade or more. Microsoft is investing dollars that will be fed back into its own business, as OpenAI purchases computing power from the software giant, and the collaboration between the two companies could yield a wide array of technologies.
Because AGI is not yet possible, OpenAI is starting with narrower projects. It recently built a system that tries to understand natural language. The technology could feed everything from digital assistants like Alexa and Google Home to software that automatically analyzes documents inside law firms, hospitals and other businesses.
The deal is also a way for these two companies to promote themselves. OpenAI needs computing power to fulfill its ambitions, but it must also attract the world’s leading researchers, which is hard to do in today’s market for talent. Microsoft is competing with Google and Amazon in cloud computing, where AI capabilities are increasingly important.
The question is how seriously we should take the idea of artificial general intelligence. Like others in the tech industry, Altman often talks as if its future is inevitable.
“I think that AGI will be the most important technological development in human history,” he said during the interview with Nadella. Altman alluded to concerns from people like Musk that AGI could spin outside our control. “Figuring out a way to do that is going to be one of the most important societal challenges we face.”
But a game like Dota 2 is a far cry from the complexities of the real world.
Artificial intelligence has improved in significant ways in recent years, thanks to many of the technologies cultivated at places like DeepMind and OpenAI. There are systems that can recognize images, identify spoken words, and translate between languages with an accuracy that was not possible just a few years ago. But this does not mean that AGI is near or even that it is possible.
“We are no closer to AGI than we have ever been,” said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, an influential research lab in Seattle.
Geoffrey Hinton, the Google researcher who recently won the Turing Award — often called the Nobel Prize of computing — for his contributions to artificial intelligence over the past several years, was recently asked about the race to AGI.
“It’s too big a problem,” he said. “I’d much rather focus on something where you can figure out how you might solve it.” The other question with AGI, he added, is: Why do we need it?
c.2019 New York Times News Service