16 Minutes on the News #37: GPT-3, Beyond the Hype

What’s real, what’s hype when it comes to all the recent buzz around the language model GPT-3? What is “it”, how does it work, where does it fit into …

In this special “2x” explainer episode of 16 Minutes — where we talk about what’s in the news, and where we are on the long arc of various tech trends — we cover all the buzz around GPT-3, the pre-trained machine learning model that’s optimized to do a variety of natural-language processing tasks. The paper about GPT-3 was released in late May, but OpenAI (the AI “research and deployment” company behind it) only recently released private access to its API, or application programming interface, which includes some of the technical achievements behind GPT-3 as well as other models.

It’s a commercial product, built on research; so what does this mean for both startups AND incumbents… and the future of “AI as a service”? And given that we’re seeing all kinds of (cherry-picked!) examples of output from OpenAI’s beta API being shared — from articles, press releases, screenplays, and Shakespearean poetry to business advice, “ask me anything” search, designing webpages, and plug-ins that turn words into code (it even does some arithmetic) — how do we know how good it really is or isn’t? And when we see things like founding principles for a new religion or other experiments being shared virally (like “TikTok videos for nerds”), how do we know the difference between what “looks like” a toy and what “is” a toy (especially given that many innovations start out that way)?

And finally, where are we, really, in terms of natural language processing and progress toward artificial general intelligence? Is it intelligent, does that matter, and how do we know (if not with a Turing test)? What are the broader questions, considerations, and implications for jobs and more? Frank Chen (who’s shared a primer on AI/machine learning/deep learning, as well as resources for getting started in building products with AI inside) explains what “it” actually is and isn’t, and where it fits in the taxonomy of neural networks, deep learning approaches, and more, in conversation with host Sonal Chokshi. Together they help tease apart what’s hype and what’s real here… as is the theme of this show.

Extra Crunch Live: Join our Q&A tomorrow at noon PDT with Y Combinator’s Geoff Ralston

From Airbnb to Zapier, and Coinbase to Instacart, many of the tech world’s most valuable companies spent their earliest days in Y Combinator’s accelerator program.

Steering the ship at Y Combinator today is its president, Geoff Ralston. We’re excited to share that Ralston will be joining us on Extra Crunch Live tomorrow at noon Pacific.

Extra Crunch Live is our virtual speaker series, with each session packed with insight and guidance from the top investors, leaders and founders. This live Q&A is exclusive to Extra Crunch members, so be sure to sign up for a membership here.

Ralston took on the YC president role a little over a year ago, shortly after Sam Altman stepped away to focus on OpenAI.

In the months since, Y Combinator has had to reimagine much about the way it operates; as the pandemic spread around the world, YC (like many organizations) has had to figure out how to work together while far apart. In the earliest weeks of the pandemic, this meant quickly shifting their otherwise in-person demo day online; later, it meant adapting the entire accelerator program to be completely remote.

While still relatively new to the president seat, Ralston is by no means new to YC. He joined the accelerator as a partner in 2012, and his edtech-focused accelerator Imagine K12 was fully merged into YC’s operations in 2016.

A/B testing OpenAI’s GPT-3

This is a friendly competition between human copywriters and copy generated by the new VWO feature powered by OpenAI’s GPT-3 API. In this competition, we will test AI-generated copy for headlines, buttons or product descriptions against existing (or new) human-written copy for participating websites. The tests will be conducted on VWO or any A/B testing platform you are using today.

How Do You Know a Human Wrote This?

OpenAI has given just a few hundred software developers access to GPT-3, and many have been filling Twitter over the last few weeks with …

Another company, Latitude, is using GPT-3 to build realistic, interactive characters in text-adventure games. It works surprisingly well: the software is not only coherent but can also be quite inventive, absurd, and even funny.

Stew Fortier, a writer, created a zany satire using the software as a kind of muse.

Fortier fed GPT-3 a strange prompt: “Below is a transcript from an interview where Barack Obama explained why he was banned from Golden Corral for life.” The system then filled in the rest of the interview, running with the concept that Obama had been banned from an all-you-can-eat buffet.

Obama: Yes. It’s true. I am no longer allowed in Golden Corral.

Interviewer: Is this because of your extensive shrimp-n-crab legs policy?

Obama: Absolutely.

Interviewer: What is your extensive shrimp-n-crab legs policy?

Obama: Oh, well, in brief, they were offering an all-you-can-eat shrimp-n-crab leg buffet, and I did not hesitate. After I ate so much shrimp and crab that my stomach hurt, I would quietly sneak in and throw more shrimp and crab onto my plate. I did this over and over again until I had cleaned out the buffet and was full of shrimp-n-crab.

Yet software like GPT-3 raises the prospect of frightening misuse. If computers can produce large amounts of humanlike text, how will we ever be able to tell humans and machines apart? In a research paper detailing GPT-3’s power, its creators cite a litany of dangers, including “misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting.”

There are other problems. Because it was trained on text found online, it’s likely that GPT-3 mirrors many biases found in society. How can we make sure the text it produces is not racist or sexist? GPT-3 also isn’t good at telling fact from fiction. “I gave it my own original three sentences about whales, and it added original text — and the way I could tell it was original was that it was pretty much dead wrong,” Janelle Shane, who runs a blog called AI Weirdness, told me.

To its credit, OpenAI has put in place many precautions. For now, the company is letting only a small number of people use the system, and it is vetting each application produced with it. The company also prohibits GPT-3 from impersonating humans — that is, all text produced by the software must disclose that it was written by a bot. OpenAI has also invited outside researchers to study the system’s biases, in the hope of mitigating them.

Elon Musk Warns That AI Could Overtake Humanity in 5 Years

Elon Musk is sounding the alarm that there is a strong possibility that humans will be overtaken by artificial intelligence within the next five years.

The billionaire engineer, who co-founded the artificial intelligence research lab OpenAI in 2015 and was an early investor in DeepMind, has often warned in recent years about the species-ending threat posed by advanced AI.

“My assessment about why AI is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are. And this is hubris and obviously false,” Musk told The New York Times.

Musk added that the invaluable experience of working with different types of AI at Tesla has given him the confidence to say “that we’re headed toward a situation where AI is vastly smarter than humans, and I think that time frame is less than five years from now. But that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”

In 2016, the CEO of Tesla warned that human beings could become the equivalent of “house cats” amid the rise of new AI rulers. He has since repeatedly called for more stringent regulations when it comes to next-generation AI technology.

Musk noted that his “top concern” is DeepMind, the highly secretive London-based lab run by Demis Hassabis and owned by Google.

“Just the nature of the AI that they’re building is one that crushes all humans at all games,” he said. “I mean, it’s basically the plotline in ‘WarGames.’”

In the 1983 film “WarGames,” a teen hacker played by Matthew Broderick unwittingly connects to an AI-controlled government supercomputer used to run war simulations. After he starts a game titled “Global Thermonuclear War,” the computer mistakes his simulated Soviet attack for a real threat and moves to launch the nation’s nuclear arsenal in response.

Musk is also busy bringing out new tech advancements via Neuralink, a startup he founded in 2016 to develop “ultra-high bandwidth brain-machine interfaces.” The startup has been able to create flexible threads—thinner than a human hair—that can be injected into the brain to detect neuron activity.

Musk claims Neuralink’s chips will one day be able to stream music directly into a user’s brain, cure addiction and depression, and enable users to compose emails and text messages just by thinking of the words.

Ethen Kim Lieser is a Minneapolis-based science and tech editor who has held posts at Google, The Korea Herald, Lincoln Journal Star, AsianWeek and Arirang TV.
