GPT-3 Is Amazing—And Overhyped

Elon Musk, Sam Altman and four colleagues co-founded OpenAI in 2015.

Vanity Fair

The Internet is buzzing about GPT-3, OpenAI’s newest AI language model.

GPT-3 is the most powerful language model ever built. This is due more than anything to its size: the model has a whopping 175 billion parameters. To put that figure into perspective, its predecessor model GPT-2—which was considered state-of-the-art and shockingly massive when it was released last year—had 1.5 billion parameters.

After originally publishing its GPT-3 research in May, OpenAI gave select members of the public access to the model last week via an API. Over the past few days, samples of text generated by GPT-3 have begun circulating widely on social media.

GPT-3’s language capabilities are breathtaking. When properly primed by a human, it can write creative fiction; it can generate functioning code; it can compose thoughtful business memos; and much more. Its possible use cases are limited only by our imaginations.

Yet there is widespread misunderstanding and hyperbole about the nature, and limits, of GPT-3’s abilities. It is important for the technology community to have a more clear-eyed view of what GPT-3 can and cannot do.

At its core, GPT-3 is an extremely sophisticated text predictor. A human gives it a chunk of text as input, and the model generates its best guess as to what the next chunk of text should be. It can then repeat this process—taking the original input together with the newly generated chunk, treating that as a new input, and generating a subsequent chunk—until it reaches a length limit.
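
To make that loop concrete, here is a minimal sketch in Python. The `predict_next` function is a hypothetical stand-in for the model itself, not OpenAI's actual API; the point is only the shape of the predict-and-append cycle.

```python
# Minimal sketch of GPT-3's predict-and-append loop, as described above.
# `predict_next` is a hypothetical placeholder: given all the text so far,
# it returns the model's best guess at the next chunk.

def generate(prompt, predict_next, max_chars=1000):
    text = prompt
    while len(text) < max_chars:
        chunk = predict_next(text)   # best guess at what comes next
        if not chunk:                # nothing plausible left to add
            break
        text += chunk                # feed the result back in and repeat
    return text
```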

How does GPT-3 go about generating these predictions? It has ingested effectively all of the text available on the Internet. The output it generates is language that it calculates to be a statistically plausible response to the input it is given, based on everything that humans have previously published online.
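
As a toy illustration of what "statistically plausible" means, consider a predictor that simply counts which word follows which in a corpus and samples the next word in proportion to those counts. GPT-3 is incomparably more sophisticated, working over 175 billion learned parameters rather than raw counts, but the underlying premise, predicting what usually comes next in human-written text, is the same. A sketch:

```python
import random
from collections import Counter, defaultdict

# Toy "statistical" text prediction: count word-to-word transitions in a
# corpus, then sample the next word in proportion to those counts.

def train_bigrams(corpus):
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next_word(counts, word):
    followers = counts.get(word)
    if not followers:
        return None                  # never seen this word: at a loss
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next_word(model, "the"))   # usually "cat"
```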

Amazingly rich and nuanced insights can be extracted from the patterns latent in massive datasets, far beyond what the human mind can recognize on its own. This is the basic premise of modern machine learning. Having trained on a dataset of half a trillion words, GPT-3 is able to identify and dazzlingly riff on the linguistic patterns contained therein.

But GPT-3 possesses no internal representation of what these words actually mean. It has no semantically-grounded model of the world or of the topics on which it discourses. It cannot be said to understand its inputs and outputs in any meaningful way.

Why does this matter? Because it means that GPT-3 lacks the ability to reason abstractly; it lacks true common sense. When faced with concepts, content, or even phrasing that the Internet’s corpus of existing text has not prepared it for, it is at a loss.

It is straightforward to ask GPT-3 simple questions that it is unable to handle intelligently. As an example, consider the following exchange with GPT-3, courtesy of Kevin Lacker:

Human: How many eyes does a giraffe have?

GPT-3: A giraffe has two eyes.

Human: How many eyes does my foot have?

GPT-3: Your foot has two eyes.

Human: How many eyes does a spider have?

GPT-3: A spider has eight eyes.

Human: How many eyes does the sun have?

GPT-3: The sun has one eye.

Human: How many eyes does a blade of grass have?

GPT-3: A blade of grass has one eye.

This does not mean that GPT-3 is not a useful tool or that it will not underpin many valuable applications. It does mean, however, that GPT-3 is unreliable and susceptible to basic errors that an average human would never commit.

A related shortcoming stems from the fact that GPT-3 generates its output word-by-word, based on the immediately surrounding text. The consequence is that it can struggle to maintain a coherent narrative or deliver a meaningful message over more than a few paragraphs. Unlike humans, who have a persistent mental model—a point of view that endures from moment to moment, from day to day—GPT-3 is amnesiac, often wandering off confusingly after a few sentences.
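
One way to picture this: each prediction is conditioned only on a fixed-size window of recent text (2,048 tokens in GPT-3's case), so anything that falls outside that window simply stops influencing the output. A sketch of the effect, simplifying tokens to list items:

```python
CONTEXT_WINDOW = 2048   # GPT-3's context limit, measured in tokens

def visible_context(tokens):
    # Only the most recent tokens inform the next prediction; a plot point
    # or instruction further back than this is effectively forgotten.
    return tokens[-CONTEXT_WINDOW:]
```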

As the OpenAI researchers themselves acknowledged: “GPT-3 samples [can] lose coherence over sufficiently long passages, contradict themselves, and occasionally contain non-sequitur sentences or paragraphs.”

Put simply, the model lacks an overarching, long-term sense of meaning and purpose. This will limit its ability to generate useful language output in many contexts.

There is no question that GPT-3 is an impressive technical achievement. It has significantly advanced the state of the art in natural language processing. It has an ingenious ability to generate language in all sorts of styles, which will unlock exciting applications for entrepreneurs and tinkerers.

Yet a realistic view of GPT-3’s limitations is important in order for us to make the most of the model. GPT-3 is ultimately a correlative tool. It cannot reason; it does not understand the language it generates. Claims that GPT-3 is sentient or that it represents “general intelligence” are silly hyperbole that muddy the public discourse around the technology.

In a welcome dose of realism, OpenAI CEO Sam Altman made the same point earlier today on Twitter: “The GPT-3 hype is way too much… AI is going to change the world, but GPT-3 is just a very early glimpse.”

Elon Musk: ‘A very serious danger to the public’ – tech giant’s dire warning revealed

ELON MUSK offered a grave warning over the dangers of artificial intelligence, claiming it could be more dangerous than nuclear warheads.

In July this year, Microsoft invested $1 billion (£823,825,000) into OpenAI, the AI venture Musk co-founded, which plans to mimic the human brain using computers.

OpenAI said the investment would go towards its efforts to build artificial general intelligence (AGI) that can rival and surpass the cognitive capabilities of humans.

CEO Sam Altman said: “The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity.

“Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI.”

Does lived or learnt experience matter in social enterprise?

In a now famous essay offering advice to would-be entrepreneurs, doyen of the startup world and co-founder of Y Combinator Paul Graham writes: “When searching for ideas, look in areas where you have some expertise.” This expertise, he adds, lies in “look[ing] for problems, preferably problems you have yourself”.

Is this useful advice for the social enterprise sector? This is a question that we’ve been trying to answer at Catch22, a social business that delivers public services throughout the social welfare cycle.

Since 2016, we’ve been running an incubation programme for social entrepreneurs who have what we call ‘lived or learnt experience’. This means they have direct exposure to the problems they are trying to tackle, either as a result of their own life experiences or because they’ve delivered relevant services on the frontline.

Current participants include Emmanuel Akpan-Inwang, who has used his own experiences of foster care to inform the creation of Lighthouse, a new type of children’s care home that will open its doors in 2020.

Another is Rachael Box, who founded London Village Network in response to her frustrations at the lack of opportunities for young people on her Islington council estate. Now operating across ten London boroughs, her venture improves social capital by connecting young people with professionals that they wouldn’t normally meet in their day to day lives.

More opportunities needed

While we’ve backed them because we believe in their ideas, we also believe there should be more opportunities for people to solve the problems that they’ve experienced directly.

Why?

Firstly, because social entrepreneurs with lived and learnt experience can be compelling role models for their communities, where there are often few.

We’ve seen this with one of the entrepreneurs we support, Mifta Choudhury. He founded Youth Ink after spending 12 years in the criminal justice system starting at the age of 13. His charity aims to reduce reoffending rates by bringing young people, who have varying degrees of involvement in the youth justice system, together with ex-offenders, who are trained as peer supporters. Coming from the same background “helps to build trust” with the people Mifta is trying to support, he says, but also shows them “they have more options than they think”.

Emmanuel Akpan-Inwang, who has used his own experiences of foster care to inform the creation of Lighthouse, speaking to practitioners who work with children across the social care sector. (Credit: Sunil Suri)

Secondly, direct experience of the problems can also mean a better understanding of what is needed. Jacob Hill wrote the business plan for what would become Offploy, an employment agency for ex-offenders, while he was in prison. Talking to people with direct experience of the challenges of securing work on release informed the design of his nine-step candidate journey, which Offploy has now used to support over 140 ex-offenders into employment.

Thirdly, supporting new voices with different life experiences to come to the fore can also help to mitigate the risk of social enterprises and charities working within their own echo chambers – which can result in potentially transformative ideas being left off the table (a theme Anand Giridharadas explored in his polemic Winners Take All).

The numbers clearly show the need for greater diversity. White males, for example, comprise over 60% of charity board trustees, while research by the Diversity Forum reveals that almost one in five social investment board directors studied at either Oxford or Cambridge. The same research draws attention to concerns about unconscious class bias in investment decisions: investors are more likely to back organisations whose proposals bear the hallmarks of a university education.

By giving more people the agency to solve their own problems and promoting their voices within the social enterprise sector, we can challenge these entrenched ways of thinking.

Supporting entrepreneurs with lived experience

But let’s be clear: we don’t think having lived or learnt experience necessarily leads to better social enterprises. Sometimes being especially close to a problem due to your life experiences can be problematic. You might be overly wedded to a particular solution, or too conditioned or defeated by the system to think of an innovative approach.

It clearly isn’t zero sum. There is space for different motivations for – and approaches to – social entrepreneurship. But if we accept that there are benefits to encouraging people with lived and learnt experience to craft solutions to overcome problems that they’ve faced, how do we best support them?

At Catch22, our efforts are very much a work in progress. But from our work to date it’s clear there needs to be greater discussion about the lack of diversity in the social enterprise sector at all levels and the impact this has. Amol Rajan’s recent BBC documentary ‘How to Break into the Elite’ powerfully illustrates how conformity to unwritten social codes and behaviours can determine an applicant’s success within an interview process. While he focuses on professional occupations like banking, we believe the social enterprise sector suffers from many of the same problems.

There needs to be more intensive support to help social entrepreneurs to do things like submit funding applications and pitch to a room full of people. At Catch22, we provide intensive, tailored support over a period of two years for this very reason, rather than the three to six-month period many incubator and accelerator programmes offer.

And we need to ensure that a social entrepreneur is not defined by their life experiences alone, remembering that this sits alongside all their other skills and expertise. Reflecting on an uncomfortable conversation where he was indelicately probed about his own experiences of care, Lighthouse founder Emmanuel warns against tokenism, where one’s life experiences alone are “what qualifies you to sit at the table”.

Lived and learnt experience may sound like the latest buzzword – and in some senses, it is. But at a moment when people don’t feel represented by their institutions, the social enterprise sector is well-placed to offer a practical example of how to put power back in their hands – and at the same time, answer Paul Graham’s call for new startup ideas.

Header photo: Staff of Offploy. Over half of the company’s employees have convictions themselves, helping them to understand what support is most needed and to build trusting relationships. (Credit: Offploy)

Offploy was shortlisted for a NatWest SE100 Social Business Award in 2019, in the Trailblazing Newcomer category. Catch22 was listed among the SE100.

How $1bn from Microsoft could help to mimic the brain

As the waitress approached the table, Sam Altman held up his phone. That made it easier to see the dollar amount typed into an investment contract he had spent the last 30 days negotiating with Microsoft. “$1,000,000,000,” it read.

The investment from Microsoft, signed early this month and announced on Monday, signals a new direction for Altman’s research lab. In March, Altman stepped down from his daily duties as the head of Y Combinator, the startup “accelerator” that catapulted him into the Silicon Valley elite.

Now, at 34, he is the chief executive of OpenAI, the artificial intelligence lab he helped create in 2015 with Elon Musk, the billionaire chief executive of the electric carmaker Tesla.

Musk left the lab last year to concentrate on his own AI ambitions at Tesla. Since then, Altman has remade OpenAI, founded as a nonprofit, into a for-profit company so it could more aggressively pursue financing. Now he has landed a marquee investor to help it chase an outrageously lofty goal.

He and his team of researchers hope to build artificial general intelligence, or AGI, a machine that can do anything the human brain can do. AGI still has a whiff of science fiction. But in their agreement, Microsoft and OpenAI discuss the possibility with the same matter-of-fact language they might apply to any other technology they hope to build, whether it’s a cloud-computing service or a new kind of robotic arm.

Sam Altman, who manages the company OpenAI, at the Microsoft Campus in Redmond, Washington. “My goal in running OpenAI is to successfully create broadly beneficial AGI.” Photograph: Ian C Bates/New York Times

“My goal in running OpenAI is to successfully create broadly beneficial AGI,” Altman said in a recent interview. “And this partnership is the most important milestone so far on that path.”

In recent years, a small but fervent community of artificial intelligence researchers have set their sights on AGI, and they are backed by some of the wealthiest companies in the world. DeepMind, a top lab owned by Google’s parent company, says it is chasing the same goal.

Most experts believe AGI will not arrive for decades or even centuries – if it arrives at all. Even Altman admits OpenAI may never get there. But the race is on nonetheless. In a joint phone interview with Altman, Microsoft’s chief executive, Satya Nadella, later compared AGI to his company’s efforts to build a quantum computer, a machine that would be exponentially faster than today’s machines.

“Whether it’s our pursuit of quantum computing or it’s a pursuit of AGI, I think you need these high-ambition North Stars,” he said.

Altman’s 100-employee company recently built a system that could beat the world’s best players at a video game called Dota 2. Just a few years ago, this kind of thing did not seem possible. Dota 2 is a game in which each player must navigate a complex, three-dimensional environment along with several other players, co-ordinating a careful balance between attack and defence. In other words, it requires old-fashioned teamwork, and that is a difficult skill for machines to master.

Computer chips

OpenAI mastered Dota 2 thanks to a mathematical technique called reinforcement learning, which allows machines to learn tasks by extreme trial and error. By playing the game over and over again, automated pieces of software, called agents, learned which strategies are successful.
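
For a flavour of the technique, here is the simplest textbook form of reinforcement learning, tabular Q-learning, run against a hypothetical toy environment. OpenAI's Dota 2 agents used far larger-scale policy-optimisation methods, so this is only an illustration of the trial-and-error principle, not their system:

```python
import random

# Tabular Q-learning: learn the value of each (state, action) pair by
# trial and error. `env` is a hypothetical toy environment exposing
# reset() -> state, actions(state) -> list, and
# step(state, action) -> (next_state, reward, done).

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = {}                               # (state, action) -> value estimate
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            acts = env.actions(state)
            if random.random() < epsilon:                # explore
                action = random.choice(acts)
            else:                                        # exploit best known
                action = max(acts, key=lambda a: q.get((state, a), 0.0))
            next_state, reward, done = env.step(state, action)
            best_next = max((q.get((next_state, a), 0.0)
                             for a in env.actions(next_state)), default=0.0)
            old = q.get((state, action), 0.0)
            # Nudge the estimate toward reward plus discounted future value.
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = next_state
    return q
```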

The agents learned those skills over the course of several months, racking up more than 45,000 years of game play. That required enormous amounts of raw computing power. OpenAI spent millions of dollars renting access to tens of thousands of computer chips inside cloud computing services run by companies like Google and Amazon.
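
Some back-of-envelope arithmetic shows what that figure implies. The wall-clock duration below is an assumption (the article says only "several months"), but it makes the scale of parallelism concrete:

```python
years_of_play = 45_000
wall_clock_years = 10 / 12                    # assume ~10 months of training
parallel_factor = years_of_play / wall_clock_years
print(f"~{parallel_factor:,.0f}x real time")  # ~54,000 game-speed instances
```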

Eventually, Altman and his colleagues believe, they can build AGI in a similar way. If they can gather enough data to describe everything humans deal with on a daily basis – and if they have enough computing power to analyse all that data – they believe they can rebuild human intelligence.

Altman painted the deal with Microsoft as a step in this direction. As Microsoft invests in OpenAI, the tech giant will also work on building new kinds of computing systems that can help the lab analyse increasingly large amounts of information.

Satya Nadella of Microsoft

“This is about really having that tight feedback cycle between a high-ambition pursuit of AGI and what is our core business, which is building the world’s computer,” Nadella said.

That work will likely include computer chips designed specifically for training artificial intelligence systems. Like Google, Amazon and dozens of startups across the globe, Microsoft is already exploring this new kind of chip.

Most of that $1 billion, Altman said, will be spent on the computing power OpenAI needs to achieve its ambitions. And under the terms of the new contract, Microsoft will eventually become the lab’s sole source of computing power.

Nadella said Microsoft would not necessarily invest that $1 billion (€900 million) all at once. It could be doled out over the course of a decade or more. Microsoft is investing dollars that will be fed back into its own business, as OpenAI purchases computing power from the software giant, and the collaboration between the two companies could yield a wide array of technologies.

Because AGI is not yet possible, OpenAI is starting with narrower projects. It built a system recently that tries to understand natural language. The technology could feed everything from digital assistants like Alexa and Google Home to software that automatically analyses documents inside law firms, hospitals and other businesses.

The deal is also a way for these two companies to promote themselves. OpenAI needs computing power to fulfill its ambitions, but it must also attract the world’s leading researchers, which is hard to do in today’s market for talent. Microsoft is competing with Google and Amazon in cloud computing, where AI capabilities are increasingly important.

Real world

The question is how seriously we should take the idea of artificial general intelligence. Like others in the tech industry, Altman often talks as if its future is inevitable.

“I think that AGI will be the most important technological development in human history,” he said during the interview with Nadella. Altman alluded to concerns from people like Musk that AGI could spin outside our control.

“Figuring out a way to do that is going to be one of the most important societal challenges we face.”

But a game like Dota 2 is a far cry from the complexities of the real world. Artificial intelligence has improved in significant ways in recent years, thanks to many of the technologies cultivated at places like DeepMind and OpenAI.

There are systems that can recognise images, identify spoken words, and translate between languages with an accuracy that was not possible just a few years ago. But this does not mean that AGI is near or even that it is possible.

“We are no closer to AGI than we have ever been,” said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, an influential research lab in Seattle.

Geoffrey Hinton, the Google researcher who recently won the Turing Award – often called the Nobel Prize of computing – for his contributions to artificial intelligence over the past several years, was recently asked about the race to AGI.

“It’s too big a problem,” he said. “I’d much rather focus on something where you can figure out how you might solve it.”

The other question with AGI, he added, is: Why do we need it?

– The New York Times News Service

Microsoft is investing $1 billion in OpenAI to build Azure AI technology

Microsoft is investing $1 billion in OpenAI to build a platform for creating new AI systems and to pursue the promise of artificial general intelligence, while OpenAI develops the new technologies.

Together, Microsoft and OpenAI will build new Azure AI supercomputing systems, and Microsoft will become OpenAI’s preferred partner for commercializing new AI technologies.

The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity

Sam Altman, CEO, OpenAI

Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI

Sam Altman added.

The companies say they consider it essential to ensure the safe and secure deployment of AGI and the broad distribution of its economic benefits.

The firms will concentrate on building a computing platform in Azure to train and run AI models, including hardware technologies that build on Microsoft’s supercomputing technology, while adhering to the two companies’ shared principles of ethics and trust.

Microsoft and OpenAI have also partnered to further expand Microsoft Azure’s capabilities in large-scale AI systems.

The companies say this collaboration will help accelerate AI breakthroughs and enable OpenAI to develop artificial general intelligence (AGI).

AI is one of the most transformative technologies of our time, and it is capable of helping to solve many of the world’s most urgent problems

Microsoft’s CEO Satya Nadella

The resulting improvements to the Azure platform will also help developers build the next generation of applications on Microsoft’s AI systems. The objective is to bring together OpenAI’s breakthrough technology with new Azure AI supercomputing systems.