Imagine you go to a bookstore and notice an exciting cover. You pick up the book, read the summary on the back and the rave reviews. The plot seems intriguing enough, but when you check the author, it says "by AI-something." Would you buy the book, or would you consider it a waste of money? Decisions like that are coming, along with harder questions: who will be responsible for such writing? You may as well decide now whether you would purchase content written by AI, because that is what the future will bring. AI is learning to play with words.
All of us have gotten used to chatbots and their limited capabilities, but it appears those limits are about to be surpassed. Dario Amodei, OpenAI's research director, informs us that the lab has created a language modeling program that is, to say the least, very imaginative. Its latest achievement was generating counterarguments and holding discussions with the researchers.
The program was fed a variety of articles, blogs, websites, and other content from the internet. Surprisingly, it managed to produce an essay worthy of any reputable writing service, and on a particularly challenging topic at that ("Why Recycling Is Bad for the World").
Did the researchers help the program along by providing specific, additional input? They did not. GPT-2, OpenAI's new algorithm, did everything on its own. It excelled in different tests, such as storytelling and predicting the next word in a sentence. Admittedly, it is still far from inventing an utterly gripping story from beginning to end, as it tends to stray off topic, but it has great potential.
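Under the hood, language models are trained on exactly this kind of next-word prediction. As a toy illustration only (nothing like GPT-2's actual architecture, which learns from millions of parameters rather than raw counts), a bigram model simply tallies which word tends to follow which:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the training text."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the sofa"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat": it follows "the" most often
```

GPT-2 generalizes far beyond literal counts, conditioning on long stretches of preceding text, but the training objective is the same idea: guess the next word.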
What sets GPT-2 apart from similar AI programs is its versatility. Typically, such programs are trained for specific domains and can complete only narrow tasks. This language model, however, draws on its training input to handle a wide variety of topics.
What exactly can this AI program do?
For starters, the program could be used to summarize articles or translate text. Chatbots would become more informative and flexible. Ultimately, the program could turn into an excellent personal assistant, summarizing reports for you and sending out the right company or business information. In other words, these programs would be doing much of what a human PA already does.
There are also several distinctly negative possibilities.
At present, this type of AI program works only as well as the material it is fed. If the AI is given offensive or violent content, such as articles degrading particular nations or races (take Hitler's propaganda, for example), GPT-2 will respond in kind and produce vile, offensive text.
The system could be abused for automated and refined trolling.
People would be convinced they were receiving someone's genuine opinions rather than an automated response from a machine. Phishing schemes would also become much easier to pull off, and cybercriminals would certainly be all over this code. The internet could be flooded with artificial content. Do we want that as our future? Remember pornographic deepfakes: a classic example of cutting-edge technology being abused once it was released to the public.
What is to come out of all the new AI code?
As the proponents of AI freely admit themselves, only the future will show what is really in the works. In the meantime, OpenAI will continue to invest resources in its innovative language model and feed it more and more data, hoping for the best. Will we be on the receiving end of "the best"? That remains to be seen.
K&L Gates LLP advised Microsoft Corporation on the deal.
Microsoft Corporation (Nasdaq: "MSFT") completed a US$1 billion equity investment in OpenAI LP to build secure, trustworthy, and ethical artificial intelligence (AI) to serve the public.
Through this partnership, the companies will focus on building a platform that OpenAI will use to create new AI technologies and deliver on the promise of artificial general intelligence.
San Francisco-based OpenAI LP was founded in 2015 as a nonprofit research lab. Since its founding, OpenAI has employed artificial intelligence researchers to make advances in the field, such as teaching a robotic hand to perform human-like tasks entirely in software, cutting down the cost and time to train robots.
The group has also focused on the safety and social implications of AI, researching how computers can generate realistic news stories with little more than headline suggestions and warning researchers to consider how their work and algorithms might be misused by bad actors before publishing them.
The K&L Gates team was led by Seattle corporate partner Annette Becker and included Seattle partners Won-Han Cheng, Carley Andrews, and Mike Gearin; Wilmington, Del., partners Eric Feldman and Lisa Stark; Washington, D.C., partner Barry Hart; and San Francisco partner Rikki Sapolich-Krol, as well as Seattle associates Teresa Teng, Caitlin Velasco, and Andrea Templeton, and Washington, D.C., associate Laura Gregory.
The list of existential threats to mankind on which wealthy philanthropists have focused their attention — catastrophic climate change, pandemics and the like — has a new addition: artificially intelligent machines that turn against their human creators.
Artificial intelligence (AI) could pose a threat “greater than the danger of nuclear warheads, by a lot”, according to Elon Musk, the entrepreneur behind electric car maker Tesla. As the author James Barrat put it, a superhuman intelligence, equipped with the ability to learn but without the ability to empathise, might well be Our Final Invention.
Even if the machines are not going to kill us, there are plenty of reasons to worry AI will be used for ill as well as for good, and that advances in the field are coming faster than our ability to think through the consequences.
Between facial recognition and autonomous drones, AI’s potential impact on warfare is already obvious, stirring employee concern at Google and other pioneers in the field. Faced with an internal revolt, Google last year said it would drop much of its work for the Pentagon and withhold AI technology that could be used for weapons. That may restore harmony at the Googleplex, but it is hardly likely to end the AI arms race. Russian president Vladimir Putin puts it this way: “Whoever becomes the leader in this sphere will become the ruler of the world.”
Other fears include whether AI algorithms are reinforcing racial stereotypes, gender biases and other prejudices as a result of a lack of diversity among scientists in the field — and of what happens to society when robots can do most jobs. All of which is to say promoting debate on the ethics and consequences of AI, and nudging the science, business and regulation of AI in the right direction, seems a worthy use of philanthropic dollars. It is also fascinating — an often underestimated reason for picking a philanthropic cause.
Explaining his gift of $150m to Oxford university, part of which will go to creating an Institute for Ethics in AI, Steve Schwarzman, founder of private equity house Blackstone, told Forbes in June he wanted “to be part of this dialogue, to try and help the system regulate itself so innocent people who’re just living their lives don’t end up disadvantaged. If you start dislocating people, and your tax revenues go down, your social costs go up, your voting patterns change . . . you could endanger the underpinnings of liberal democracies.” Last year he wrote an even bigger cheque to Massachusetts Institute of Technology to create a new centre for AI research.
Those donations together make Schwarzman probably AI’s largest donor. Pierre Omidyar, founder of eBay, Nicolas Berggruen, the financier once known as the “homeless billionaire” before he settled in Los Angeles, and LinkedIn founder Reid Hoffman are among the others. In this field, however, there is a limit to the power of philanthropy.
Hoffman is the biggest backer of OpenAI, a hugely ambitious project founded in San Francisco by Musk — before he left, citing conflicts of interest as Tesla boosts the autonomous capabilities of its cars. OpenAI was set up as a non-profit with the aim of building a neural network the size of a human brain — a so-called artificial general intelligence — making its work open source so that it would be a blueprint for the safe and ethical development of AI.
But that put OpenAI in direct competition with big tech companies, which have the resources to pay for scientific talent and computing power. Last year it switched to a for-profit structure, saying it needed billions of dollars in investment, and this summer it announced it was aligning itself with Microsoft, which is putting in $1bn to help OpenAI pay for computing services from Azure, Microsoft’s cloud.
In the field of AI, charitable dollars seem best channelled to the ethical debate rather than the technology itself. Berggruen, whose ventures include a $1m annual prize for philosophy, sounds like he is having the most fun. The Berggruen Institute’s “Transformations of the Human” project places philosophers and artists in key research sites to foster dialogue with technologists, to “contribute to both human and non-human flourishing”. When it comes to philanthropy, robots may need support, too.
Stephen is reading . . .
Human Compatible, by Stuart Russell. The British AI researcher says we must rethink machine learning to make sure superhuman computers never have power over us. We must make them uncertain what we want, so they have to keep asking.
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.
Last week, artificial intelligence research lab OpenAI decided to release a more expanded version of GPT-2, the controversial text-generating AI model it first introduced in February. At the time, the lab refrained from releasing the full AI model, fearing it would be used for malicious purposes.
Instead, OpenAI opted for a staged release of the AI, starting with a limited model (124 million parameters) and gradually releasing more capable versions. In May, the research lab released the 355-million-parameter version of GPT-2, and last week it finally released the 774-million-parameter model, roughly half the capacity of the full text generator.
“We are considering releasing the 1.5 billion parameter version in the future,” OpenAI researchers wrote in a paper they released last week. “By staggering releases, we allow time for risk analyses and use findings from smaller models to inform the actions taken with larger ones.”
As usual, business and tech publications announced the release of the text-generating AI with click-bait headlines. "Dangerous AI offers to write fake news," wrote the BBC. "OpenAI just released a new version of its fake news-writing AI," read Futurism's headline. Observer put it this way: "OpenAI Can No Longer Hide Its Alarmingly Good Robot 'Fake News' Writer." Other outlets ran similarly sensational headlines and invoked Elon Musk, OpenAI's co-founder, to generate even more hype around the topic.
But while most publications were busy warning about the threat of an AI-triggered fake news apocalypse (and drawing money-making clicks to their websites), they missed the important points the OpenAI researchers raised (and didn't raise) in their GPT-2 paper.
Generating coherent text is not enough to produce fake news
In its paper, OpenAI mentions fake news as just one of the potential malicious uses of its AI. “We chose a staged release process, releasing the smallest model in February, but withholding larger models due to concerns about the potential for misuse, such as generating fake news content, impersonating others in email, or automating abusive social media content production,” the researchers wrote.
But fake news got the greatest media attention, likely because the topic has loomed so large since the 2016 U.S. presidential elections. The authors of the paper discuss the threat thoroughly.
According to the OpenAI researchers, “Humans can be deceived by text generated by GPT-2 and other successful language models, and human detectability will likely become increasingly more difficult.”
The researchers further note that as they increase the size of the AI model, the quality of the text increases. And here’s what they mean by “quality”: “With a human-in-the-loop, GPT-2 can generate outputs that humans find credible.”
The researchers also report that in some experiments, humans found the output of their AI model "credible" about 66 percent of the time, and that samples from the 774-million-parameter GPT-2 model were "statistically" similar to New York Times articles 75 percent of the time.
These are all interesting achievements, but the problem with both the OpenAI paper and the articles covering the text-generating machine learning model is that they assume generating coherent text is enough to produce fake news. "Credible" and "quality" are vague words, and from the text of the paper, the authors seem to assume that readers will find credible anything that is coherent and passes as the writing of an English-speaking human.
Say you find a nondescript piece of paper lying on the floor, which contains a very coherent and eloquent excerpt about a nuclear war between the U.S. and Russia. Would you believe it? Probably not. Coherence is just one of the many requirements of spreading fake news.
The more important factor is trust. If you read the same text on the front page of The New York Times or The Washington Times, you’re more likely to believe it, because you trust them as credible news sources. Even if you see a minor grammatical flaw in the story, you’ll probably dismiss it as a human mistake and still believe what you read.
If you can deceive your readers into trusting you, you won’t even need to be a very good English writer. In fact, the group of Macedonian teens who created a fake news crisis during the 2016 presidential elections didn’t even have proper English skills. The key to their success was websites that looked authentic and trustworthy, in which they published fake stories with sensational headlines that triggered reactions from users across social media and gamed trending algorithms.
Even authentic news websites often get caught up in the propagation of false stories. A recent example is a non-existent flaw in the VLC media player, which was reported by several reputable tech publications and resulted in unwarranted panic around one of the most popular media players. The case neither involved artificial intelligence, nor loads of content. A few, well-placed stories triggered the damage.
Metadata is key to fight AI-generated fake content
While the OpenAI researchers lay out the ongoing cat-and-mouse game between AI techniques for generating and detecting synthetic text, they raise a very important point that has gone mostly unnoticed by the publications that covered the GPT-2 paper.
“Preventing spam, abuse, or disinformation online does not rely entirely on analyzing message content,” the researchers write, adding, “Metadata about text, such as time taken to write a certain amount of text, number of accounts associated with a certain IP, and the social graph of participants in an online platform, can signal malicious activity.”
Most online platforms such as Twitter, Facebook and Amazon already use metadata as cues to discover and fight bot-driven activities. While the method might sound trivial, it would be very effective against an AI model that could create large volumes of coherent text such as tweets or product reviews.
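The kind of metadata checks the researchers describe can be sketched as a few simple heuristics. This is a hypothetical illustration (the `Post` fields, thresholds, and function name are invented for the example), not any real platform's detection system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    account_id: str
    ip: str
    text_len: int          # characters in the post
    seconds_to_write: int  # time between opening the editor and submitting

def flag_suspicious(posts, max_chars_per_sec=20, max_accounts_per_ip=5):
    """Flag accounts via metadata, ignoring the text content entirely."""
    flagged = set()
    # Heuristic 1: writing speed. Long texts posted within seconds
    # suggest machine generation rather than human typing.
    for p in posts:
        if p.seconds_to_write > 0 and p.text_len / p.seconds_to_write > max_chars_per_sec:
            flagged.add(p.account_id)
    # Heuristic 2: account/IP fan-out. Many accounts posting from a
    # single IP address suggests a coordinated bot operation.
    accounts_by_ip = {}
    for p in posts:
        accounts_by_ip.setdefault(p.ip, set()).add(p.account_id)
    for accounts in accounts_by_ip.values():
        if len(accounts) > max_accounts_per_ip:
            flagged.update(accounts)
    return flagged

posts = [
    Post("bot_01", "1.1.1.1", 2000, 10),   # 200 chars/sec: implausibly fast
    Post("human_1", "2.2.2.2", 500, 300),  # ~1.7 chars/sec: plausible typing
]
print(flag_suspicious(posts))  # prints {'bot_01'}
```

Notice that neither heuristic ever reads the text itself, which is exactly why better text generation alone does not defeat this kind of defense.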
A person or entity armed with the most advanced AI-powered text generator would still need to employ thousands of devices with different IPs and in different geographical locations to be able to game trending algorithms in social media or ecommerce platforms. Again, in this case, a well-placed, sensational post by an influencer account can create the kind of organic reaction that no large-scale AI algorithm could synthesize.
Malicious actors are not interested in AI-generated text
The researchers note in their paper, “Our threat monitoring did not find evidence of GPT-2 direct misuse in publicly-accessible forums but we did see evidence of discussion of misuse.”
Those discussions were mostly inspired by the string of sensational articles that followed the initial release of GPT-2. Interestingly, the writers note that most of those discussions had declined by mid-May, a few months after the first version of GPT-2 was made available to the public.
“We believe discussion among these actors was due to media attention following GPT-2’s initial release; during follow-up monitoring there was no indication that these actors had the resources, capabilities, or plans to execute at this time,” the researchers observed.
While it’s easy to spin fantastic stories about the destruction AI models like GPT-2 can cause, the reality is that the people who would want to use it as a weapon know that it takes much more than coherent text to run a successful fake news campaign.