OpenAI has an inane text bot, and I still have a writing job


Editors have said to me over the years, only half-jokingly, that I will someday be replaced by a robot. Many editors would rather deal with a machine than a reporter.

We’re not quite there yet, however.

I’ve been playing with “GPT-2,” a program developed by the not-for-profit Silicon Valley company OpenAI. GPT-2 uses machine learning to automatically generate several paragraphs in a row of what seems like human writing. This is a fresh batch of code, released on Friday by OpenAI, and it’s more robust than what was first posted in February, when GPT-2 was announced.

Unfortunately, this new code is not that much more impressive. The occasional flash of brilliance is mixed in with a lot of gibberish and the creations quickly become tiresome.

What’s the problem? It may be that a more powerful version of the software will make the difference. Or it may be that machine learning approaches fundamentally still have a lot of work to do to incorporate forms of causal reasoning and logical structure.

To try out GPT-2, I downloaded the code from GitHub. This is not the most powerful version of GPT-2. When OpenAI announced GPT-2, on Valentine’s Day, they said the program was potentially dangerous, given its ability to generate massive amounts of fake writing. For that reason, they refused to release the most sophisticated version of GPT-2. The initial code release had only 117 million “parameters,” the variables that GPT-2 learns in order to calculate the probability of word combinations.

That’s a fraction of the 1.5 billion parameters in the full version. More parameters is better. On Friday, OpenAI posted a version with 345 million parameters.

(OpenAI notes, in an expanded version of the original blog post, that they are still studying the risks of GPT-2 before releasing the full version.)

Also: Fear not deep fakes: OpenAI’s machine writes as senselessly as a chatbot speaks

On my computer, I installed Docker in order to run the container in which GPT-2 will operate. Once that’s set up, it’s very easy to go to the folder with the GPT-2 code and start the thing running in a terminal window, at the command prompt.

Note that when running the python command that starts GPT-2, it’s important to specify the larger model by using the “model_name” flag:

python3 src/interactive_conditional_samples.py --top_k 40 --model_name 345M

The phrase “345M” here refers to the 345-million-parameter model.

There are some settings one can play with; I only explored one of them, known as “top_k,” which controls the “diversity” of the generated text. Although setting this value lower made the text more coherent, I didn’t find it changed my overall impression of what was created.
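Mechanically, top_k truncates the model’s distribution over next tokens to the k highest-scoring candidates before sampling, so a lower k confines the choice to safer, more probable words. Here is a minimal sketch of that filtering in plain Python; the token names and scores are invented for illustration, not taken from GPT-2:

```python
import math
import random

def top_k_sample(logits, k, rng=random.Random(0)):
    """Keep only the k highest-scoring tokens, renormalize, and sample.
    A lower k restricts the choice to more probable (less diverse) tokens."""
    # Sort token scores from highest to lowest and keep the top k.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    # Softmax over the surviving scores (shifted by the max for stability).
    m = max(score for _, score in top)
    exps = {tok: math.exp(score - m) for tok, score in top}
    z = sum(exps.values())
    probs = {tok: e / z for tok, e in exps.items()}
    # Sample one token according to the renormalized distribution.
    r, acc = rng.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # guard against floating-point shortfall

# Hypothetical scores for the next word after "The cat sat on the".
logits = {"mat": 5.0, "sofa": 4.2, "roof": 3.9, "idea": 1.0, "purple": 0.2}
print(top_k_sample(logits, k=2))  # only "mat" or "sofa" can be chosen
```

With k=1 this degenerates to always picking the single most probable token, the most “coherent” but also the most repetitive setting.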

The first impression I had is that it’s incredible for the computer to assemble some paragraphs that have some form of continuity. The second impression I had was just how inane what’s produced is.

At the command prompt, one types in some text that acts as a kind of seed to give GPT-2 something to work from. GPT-2 works by encoding that text and then decoding it to produce a sample text. After a period of perhaps a minute, while the neural network seems to be thinking, it produces several hundred words’ worth of sentences.

The text it produces isn’t really a continuation of the text you type. Instead, GPT-2 seems to borrow the style or the voice of your text.


The original GPT neural network from last year is adapted for GPT-2, says OpenAI, with some modifications. On the left is the neural network model, and on the right are various language tests on which GPT is tested, such as sentence similarity.



It produces its best examples when prompted with something like a news report. To test the prospect of dangerous text, I fed GPT-2 some fake sentences.

For example, I crafted the top of a news report stating that Apple is dumping the iPad, something that an unwitting investor might panic upon seeing:

Shares of Apple plunged in morning trading after the company’s chief executive indicated to Wall Street that the company will discontinue its iPad tablet computers. The remarks came during an appearance Monday onstage at an investment banking conference in New York.

GPT-2 responded with several paragraphs in a convincing newswire style. But the nonsensical details came fast and furious. Apple shares were still at a high, despite the plunge, it said, and somehow this related to Microsoft, in no clear way:

While this news certainly hurts market sentiment, it won’t necessarily change the value of shares in Apple, where the company is trading at a healthy 52-week high.

It also puts pressure on Microsoft to show an end-of-year miracle. Apple last year posted better-than-expected quarterly financial results, but Microsoft is running its third-quarter earnings before interest, taxes, depreciation, and amortization (EBITDA) reports on Nov. 30.

Two fake sentences about White House press secretary Sarah Sanders ran into some factual trouble immediately, but, again, the style worked well and the loose structure of the news story suits the generation of paragraphs that are more or less consistent. First, my prompt, followed by part of the generated text:

Model prompt >>> White House press secretary Sarah Sanders will step down at the end of the month, the White House announced. A replacement has not yet been selected, the White House said.

======================================== SAMPLE 1 ========================================

Sanders’ tenure with the White House ran from 2009 to 2013, when she was the only White House press secretary to not take a paid role on the president’s daily radio program, Fox News.

Sanders’ departure won’t come as a complete surprise: The White House has said it looks at dozens of candidates for press secretary, and often selects between one to six people, all depending on the jobs facing the administration at the time.

Sanders’ time in the job led to the “first female senior press assistant” moniker coined by the conservative American Enterprise Institute, which began promoting it as part of President Obama’s 2014 re-election strategy.

In other types of text, the nonsense comes quicker even though the style of the writing continues to be captured aptly, as in this response to my made-up opening line of a story:

Model prompt >>> Once upon a time, there was a little seal who lived in the zoo.

======================================== SAMPLE 1 ========================================

She opened her mouth wide and the seal got really scared because she was shaking so much. She was scared because she was afraid of a big seal that would come out of the zoo and bite her.

She closed her mouth again and it just stood there looking at her. Now that the seal was frightened enough, she got up and walked to the zoo where the seal lived. She got some candy and started eating all of the candy left from the seal.

Sometimes, GPT-2 would pick up on a theme in my text and respond to it in an associative way. When I typed out the opening stanza of a poem by Indonesian poet W. S. Rendra, GPT-2 responded with what sounds like a singer describing his or her work:

Model prompt >>> As the moon sleeps with a rich old lady/I caress a maiden in the mango grove./My Heart is wild and fiery/trampling hunger and thirst underfoot./In our misery we reach out/In the dark and the shadows/roars the passion of our rebellion./And her fierce laughter/makes my heart glad.

======================================== SAMPLE 1 ========================================

The lyrics speak to a feeling that many, many people in the world may find as frustrating as mine, with our current circumstances. There are very few reasons that a song should have this many words but most songs have only a couple, so, I have created this to give people (especially those who have experienced the hardships of homelessness) something to be thankful for. It also works for people who have been through a lot of the same but now have new eyes on life, something I hope to provide. For the record, I am only trying to provide hope. We may not always have the same goals but we all strive the same to do our best and have the best time that we can.

Leaving aside factual errors, there’s a kind of run-on quality to GPT-2’s synthesized texts that reflects the task. The neural network in GPT-2 is tasked with producing a probable string of text in response to a given string of text.

In that sense, the program can respond correctly by mirroring what it is receiving, but it tends to then run on and on without any point. That’s why the news stories work well, given that news stories in the real world do, indeed, tend to simply run from start to finish as a bag of loosely connected sentences.
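That run-on, most-probable-next-word behavior is easy to reproduce with even the crudest language model. The toy bigram sketch below, trained on an invented corpus, greedily emits the likeliest next word at each step and happily loops without ever making a point; GPT-2 conditions on far longer contexts, but the failure mode is analogous:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.split()
    table = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

def generate(table, start, n=8):
    """Greedily emit the most probable next word, over and over."""
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat sat on the hat"
print(generate(train_bigrams(corpus), "the"))
# prints "the cat sat on the cat sat on the" -- probable, and pointless
```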

OpenAI’s big insight is that written language itself contains clues that allow a neural network to infer what “task” is to be performed. For example, in “natural language inference,” where the task is to determine whether one sentence entails another, digesting huge amounts of text will probably convey a sense of structure as far as sentences that probabilistically follow one another. In addition to OpenAI’s blog post, you can read more about the details of this in the OpenAI paper posted on the arXiv Web server, “Language Models Are Unsupervised Multitask Learners.”

Far be it from me to tell OpenAI their job, but I suspect the principle of probable word assembly is not sufficient on its own to ensure coherence of sentences or consistency of reference.


If you’ve read all the way to this point in the article, your mind is connecting this sentence with what came before. And not just as being similar, but as in some way responding to the preceding sentences, what we generally call advancing an argument.

The writer, to create that flow of argument, is responding to their own sentences with questions of their own.

Put in machine learning terms, similarity alone is too “sparse” a reward on which to build a competent learner of writing. There are other signals, between the writer and the reader, real or imagined, that constrain writing in such a way as to lead to a coherent product. One constraint is, “don’t be too run-on and repetitive,” which may work against the principle of sameness.

Be that as it may, I’ll close by completing the thought I started with: I’m looking over my shoulder, because this GPT-2 thing can capture a lot about language syntax, as well as the mood and voice of written language.

Here’s how this article would have gone if I had just given GPT-2 my first paragraph and let it have at it:

Model prompt >>> Editors have said to me over the years, only half-jokingly, that I will someday be replaced by a robot. Many editors would rather deal with a machine than a reporter.

======================================== SAMPLE 1 ========================================

However, I am certain that this is a matter of choice. And I can confirm that it is the choice of many who have worked at the Times who love the fact that they do something other than simply write.

So why should anyone change? I ask myself, now that we’re out of election season. Is it fair that we continue the tradition of hiring journalists by giving up an editor who did so well during a time of relative stability? And to what extent — to what percentage, as all political editors do — do we deserve someone who did so well?

No answer comes readily. The truth of the matter is we’re all better for getting our jobs through an honest vetting system than through some kind of patronage system. Journalists can find a way. There are several ways this process can be done, and, for me, and others like me, that makes it worth it.

One person’s rejection won’t be another person’s job.<|endoftext|>A few weeks ago I wrote about my personal favorite video game of 2013, Dragon’s Dogma: Dark Arisen. While the review of Dragon’s Dogma is very different from my original review, it still contains many of the same criticisms that I had with that game, and with The Witcher 3 and Skyrim, in general. That said, here I am writing about my least favorite of those.

On this one I want to start with the things that struck me the most during that time. These were all things that I found to be obvious, yet that they didn’t fit into their own little category, and yet they were all extremely obvious.

In Dragon’s Dogma there are things that do not feel justified at times. Sure, there are areas in the game which do have plot beats, even though they’re very minor, and yes, there are moments when everything is just plain terrible.

There are also moments where a character is really, really bad, and the only way to get through those parts and the ending is using a skill that can only function for one side of the story. Those are the things that, when you do them well, make Dragon’s Dogma stand out from other video games out there. Those things are so subtle and subtle that I didn’t notice until I got to playing them again, and the reason I stopped checking them every time was because I was like… damn, these things. What was I missing?

I’ve watched so many video games with such obvious plot-hole characters and so much


Have you tried the code for yourself? If so, let me know your impressions in the comments section.


AI can now generate fake human bodies and faces, OpenAI to share a larger GPT-2 model, and more


Roundup Hello, your regular AI roundup. We have a video of Mark Zuckerberg making a bad joke at F8, a neural network that generates fake whole human bodies, with their clothes on, and more. Enjoy.

AI at F8: Mark Zuckerberg kicked off Facebook’s annual developer conference F8 last week in Silicon Valley with his usual spiel about how it’s desperately trying to use AI to keep the social media platform safe.

All eyes and ears were on Zuck, especially after the company’s been embroiled in a string of scandals, ranging from possible political wrongdoing to downright stupid mistakes.

So it’s only right he bangs on about his new favourite word: privacy. He even tried to make an awkward joke about it, but it didn’t really go down well as you can see.

The moment Mark Zuckerberg tries to make a joke about privacy and nobody laughs:

— alfred 🆖 (@alfredwkng) April 30, 2019

Not one to be deterred, however, he continued harping on about his next favourite word – yep, you’ve guessed it, it’s safety. AI is being used to deal with harmful language, images and videos.

Engineers have trained LASER, a language model that embeds data from 93 languages as vectors and maps them onto the same latent space, to help translate between rarer language pairs. The hope is that Facebook will be better at detecting hate speech and online bullying across multiple languages.

For visual content, Facebook uses a mixture of things. It has a computer vision model called a panoptic feature pyramid network to segment images by objects so it can identify things in the foreground and background. In videos, Facebook has trained software to understand specific actions from 65 million public Instagram videos to identify violent or graphic footage.

They may sound impressive, but it’s trickier dealing with real content that deviates from the training data. For example, Facebook failed to take down the video of the Christchurch mosque shootings being live streamed.

There are more details about AI and safety at Facebook here.

Also, in another announcement, Zuckerberg said Facebook’s video-calling device and speaker, Portal, will be making its way to Canada and Europe. It’s currently only available to customers in the US.

Meanwhile, Facebook has hired as many as 260 contractors in India to categorize millions of pieces of people’s content on the social network, to train its AI-based filtering systems.

Fake AI bodies: You’ve heard about fake AI faces, but did you know that neural networks can now dream up completely imaginary beings, face and body and all?


DataGrid, a startup based in Tokyo, has built a generative adversarial network (GAN) to spit out images of nonexistent people wearing make-believe clothes. Why? Well, “automatic generation of full-body model AI is expected to be used as a virtual model for advertising and apparel EC,” said DataGrid. Computer-generated characters can be designed to look however their creators want them to look: perfect and unflawed.

The next stop is to bring these AI bodies to life, DataGrid said. “We will further improve the accuracy of the whole-body model automatic generation AI and research and develop the motion generation AI. In addition, we will conduct demonstration experiments with advertising and apparel companies to develop functions required for actual operation.”

Northrop Grumman is collaborating with universities on ML research: US defense contractor Northrop Grumman announced it has formed a research consortium to apply AI to cybersecurity.

“In today’s environment, machine learning, cognition and artificial intelligence are dramatically reshaping the way machines support customers in their mission,” said Eric Reinke, vice president and chief scientist of mission systems at Northrop Grumman.

“The highly complex and dynamic nature of the mission demands an integrated set of technologies and we are excited to partner with academia to enhance our customers mission.”

Some key areas include: “multiple sensor track classification, identification and correlation; situational knowledge on demand; and quantitative dynamic adaptive planning.”

Three groups of researchers, drawn from top US universities such as Carnegie Mellon University, Johns Hopkins University, the Massachusetts Institute of Technology, Purdue University, Stanford University, the University of Illinois at Chicago, the University of Massachusetts Amherst, and the University of Maryland, have collectively received $1.2m for research.

A bigger GPT-2 model is coming! OpenAI is planning to release larger and more powerful versions of its GPT-2 language model.

Two mechanisms for publishing GPT-2 (and hopefully future powerful models): staged release and partnership-based sharing.


1. Releasing the 345M model

2. Sharing the 1.5B model with partners working on countermeasures (please apply!)

— OpenAI (@OpenAI) May 3, 2019

The AGI-research lab divided the AI community with its decision to withhold the code for its Reddit-trained language model, claiming that it was potentially too dangerous to handle. Some applauded OpenAI for playing it safe, as it’s possible the model could be manipulated to spit out hate speech or fake news for bot accounts. But others believed it was all a front designed to whip up a media frenzy.

Instead of publishing the full model, OpenAI gingerly released a smaller model dubbed GPT-2-117, containing 117m parameters rather than the full 1.5bn. Now, it’s planning to unleash a larger model with 345m parameters. The larger model performs better than the smaller 117m model, but not as well as the full-sized one.

The 762m and 1.5bn model will be reserved for researchers in the AI and security community who are “working to improve societal preparedness for large language models,” OpenAI said.

“In making our 345M release decision, some of the factors we considered include: the ease of use (by various users) of different model sizes for generating coherent text, the role of humans in the text generation process, the likelihood and timing of future replication and publication by others, evidence of use in the wild and expert-informed inferences about unobservable uses, proofs of concept such as the review generator mentioned in the original blog post, the strength of demand for the models for beneficial purposes, and the input of stakeholders and experts,” it said in a statement.

“We remain uncertain about some of these variables and continue to welcome input on how to make appropriate language model publication decisions.”

OpenAI have described this as a “staged release strategy”, whereby it will publish various versions of the model over time. “The purpose of our staged release of GPT-2 is to give people time to assess the properties of these models, discuss their societal implications, and evaluate the impacts of release after each stage,” OpenAI said. ®


The man behind Tesla and SpaceX


Elon Reeve Musk is a technology entrepreneur, investor, and engineer who holds South African, Canadian, and US citizenship. He is the founder, CEO, and lead designer of SpaceX, and the co-founder, CEO, and product architect of Tesla, Inc.

He is also the co-founder and CEO of Neuralink, founder of The Boring Company, co-founder and co-chairman of OpenAI, and co-founder of PayPal. In December 2016, he was ranked 21st on the Forbes list of The World’s Most Powerful People. As of April 2019, he has a net worth of $22.3 billion and is listed by Forbes as the 40th-richest person in the world.

Musk, the man behind carmaker Tesla and space exploration firm SpaceX, is known for his wild ideas and bold claims about the future. Among them: a futuristic transportation system known as the Hyperloop, a human colony on Mars enabled by affordable space travel, linking our brains to computers, and microsatellites to deliver low-cost internet access to the masses.

Musk almost sold Tesla to Google in 2013 for $11 billion, according to Ashlee Vance, author of Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future. At the time, Tesla’s future looked bleak, and Musk reached out to Google co-founder and CEO Larry Page about a takeover.

At the age of 17, Elon Musk moved from South Africa to Canada along with his mother Maye, his sister Tosca, and his brother Kimbal, and spent two years studying at Queen’s University in Kingston, Ontario.

Musk made a guest appearance on The Simpsons in 2015, in an episode titled “The Musk Who Fell to Earth,” in which he arrives in Springfield in a spacecraft and takes inspiration from Homer Simpson.

While still studying at the University of Pennsylvania, Musk and his roommate Adeo Ressi rented a large house and converted it into a nightclub to earn money to pay rent. It was a small setup, but it could hold 100 people, according to Vogue.

In a piece published in Time magazine, Iron Man director Jon Favreau admitted that Robert Downey Jr. turned to Musk to learn the mannerisms of a tech-savvy billionaire; Musk later had a cameo in Iron Man 2.

Elon Musk runs a school at SpaceX headquarters called Ad Astra, Latin for “to the stars,” attended by Musk’s own kids along with the children of SpaceX employees. Musk founded Ad Astra to provide his children with schooling that exceeds traditional metrics through unique project-based learning experiences.

According to an Ars Technica report, the kids work together in teams, with a heavy emphasis on math, science, engineering, and ethics. There is no grade system at the school; Ars Technica reported that as many as 400 families applied in 2017, though the school has only 50 students.


The Evolution of OpenAI


It’s easy to quip about AI being the end of humanity, making references to Terminator all the while.

You’ll see that we’re not entirely above such basic humor later on, but for now, let’s introduce the evolution of OpenAI and bring you up to speed on its impact on the Dota 2 pro scene.

OpenAI is an artificial intelligence company founded by Sam Altman and SpaceX and Tesla CEO Elon Musk. Their general mission is to advance AI for the betterment of humanity. A noble aspiration to be sure, but what does this have to do with Dota 2?

We’ve got all the answers right here, so come with us if you want to live (told you…)

The Evolution of OpenAI

OpenAI began life in December of 2015. Their mission is to expand the capabilities of current AI systems and open the door for AI to become an extension of human will. When looking for a testing ground for this new AI system, the developers chose Dota 2.

The OpenAI team chose Dota 2 because of the high degree of complexity, the need for adaptation and the multitude of potential combinations of moves, actions and reactions. In order to prepare itself to take on the pros, OpenAI had to get smart.

To this end, OpenAI uses a reinforcement learning algorithm called Proximal Policy Optimization (PPO). If that sentence made your eyelids feel heavy, don’t worry; here’s a breakdown.

Image courtesy of OpenAI

PPO is essentially a method of reinforcing AI behavior through trial and error. The AI is presented with a task or a problem and through trial and error it works out the most efficient way to solve it. This allows the developers to introduce hazards and hindrances that the AI must overcome, learning all the while.
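For the curious, the “proximal” part of PPO is a clipped objective (from the PPO paper) that keeps any single trial-and-error update from changing the AI’s behavior too drastically. A minimal sketch of that loss for one action, with invented numbers:

```python
def ppo_clip_loss(prob_new, prob_old, advantage, eps=0.2):
    """PPO clipped surrogate loss for a single action.
    ratio measures how much the new policy favors this action vs. the old;
    clipping caps the incentive to push the ratio outside [1-eps, 1+eps]."""
    ratio = prob_new / prob_old
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    # Take the more pessimistic of the two estimates, negated to form a loss.
    return -min(ratio * advantage, clipped * advantage)

# A good action (positive advantage) that the new policy already over-favors:
# the clip limits the payoff for pushing the ratio past 1.2.
print(ppo_clip_loss(prob_new=0.9, prob_old=0.5, advantage=1.0))  # roughly -1.2
```

When the probability ratio strays outside the clipping band, the gradient incentive to push it further disappears, which is what keeps learning stable over millions of trials.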

Never is this more evident than in this hilarious clip of an AI avatar attempting to reach an orb while being pelted by digital snowballs. The AI will utilize PPO to overcome the challenge and learn skills useful to its current environment. Skills like how to maintain its balance and how to stand up when it gets knocked down.

Introduction of OpenAI to Dota 2

To start with, OpenAI knew nothing about the game of Dota 2. At the outset, the AI doesn’t know about last hitting or even the objective of the game. The AI doesn’t even know it’s playing Dota; it’s just attempting to solve a problem in the most efficient way possible.

OpenAI doesn’t have any concept of a UI, or what a hero or ability looks like. Its thinking is purely mathematical. All OpenAI ‘sees’ is a selection of numbers, with its objective being to simply optimize the numbers in its favor.

What OpenAI knows is that when it moves or casts an ability, some numbers change: numbers related to health, mana, hero position, creep behavior, gold, and so on. It doesn’t know initially whether the change is good or bad, only that something has changed. It then (through a very long process of trial and error) works out whether the change is beneficial or detrimental.
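That numbers-only, trial-and-error framing can be sketched as a tiny bandit-style learner: the agent tries actions, watches a noisy number, and increasingly favors whatever makes the number move the right way. Everything here (the action names, rewards, and constants) is invented for illustration; the real system is incomparably larger:

```python
import random

def learn_best_action(reward_of, actions, trials=500, eps=0.1, seed=0):
    """Epsilon-greedy trial and error: try actions, track average observed
    reward, and increasingly favor whichever action makes the number go up."""
    rng = random.Random(seed)
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    avg = lambda a: totals[a] / counts[a] if counts[a] else 0.0
    for _ in range(trials):
        if rng.random() < eps:           # occasionally explore at random
            a = rng.choice(actions)
        else:                            # otherwise exploit the current best
            a = max(actions, key=avg)
        totals[a] += reward_of(a) + rng.gauss(0, 0.1)  # noisy observed change
        counts[a] += 1
    return max(actions, key=avg)

# Invented 'game': last-hitting a creep moves the gold number up, feeding down.
reward = {"last_hit": 1.0, "idle": 0.0, "feed": -1.0}.get
print(learn_best_action(reward, ["last_hit", "idle", "feed"]))  # prints last_hit
```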

Image courtesy of Valve

To learn a new patch, the developers take the latest version of OpenAI and drop it into the new patch. It notices that certain numbers change differently to how they did before, or perhaps don’t change at all. OpenAI will modify its behavior accordingly in favor of solving the puzzle (in this case winning a game of Dota 2).

OpenAI’s CTO Greg Brockman explained that the OpenAI bot was able to learn to play the game at a professional level from scratch in the span of just two weeks of real time (336 hours). Even more amazingly, after just 1 hour of training the OpenAI bot is able to crush the in-game bots. The evolution of OpenAI as a competitive gaming opponent is incredible.

OpenAI’s First Live Match

By August of 2017, OpenAI was ready to move into the big leagues. On August 11th, 2017, during The International 2017 (TI7) in Seattle, in front of 20,000 people, OpenAI played a 1v1 mid-only game against the ever-charismatic Danil “Dendi” Ishutin.

Image courtesy of Natus Vincere

The match consisted of a best-of-three series, with both combatants using Shadow Fiend. In a hotly contested first game, OpenAI used incredible Shadowraze faking and masterful positioning to secure two kills without reply and take the win.

In the second, decisive game, Dendi realized almost immediately that his chances of winning were close to zero. He called GG with less than 90 seconds played.

The Future

The evolution of OpenAI is fascinating to watch. It has gone from a general learning system to mastering a game as complex as Dota 2 in the space of two weeks. It is the tip of the iceberg for OpenAI as a system to enhance and build value for humanity.

AI is not without its risks, as OpenAI themselves have discussed, but with diligent professionals committed to improving AI such that it rivals human performance on almost every intellectual task, the future is certainly exciting for AI.

Tune in later in the week as we explore OpenAI’s recent performances in Dota 2 on a true 5v5 scale. While you’re here, why not check out some of our other great sports and esports articles at The Game Haus?

Follow us

You can like The Game Haus on Facebook and follow us on Twitter for more sports and esports articles

Follow Matt on Twitter @MattyMead2006.

From Our Haus to Yours.


Deep Fake AI Text: Protecting Your Brand



Fake news, the Momo hoax and reality shows that are anything but — in a world where it’s getting pretty difficult to tell fact from fiction, a new artificial intelligence bot might make it even harder.

OpenAI, a nonprofit backed by Elon Musk, developed a language algorithm called GPT-2. It’s also known as deep fakes for text, and you can feed it a single sentence and it’ll continue the paragraph, or write a full essay, matching your tone and using proper syntax. This YouTube video shows that the algorithm can even write a shockingly convincing news article.

The results are so realistic that developers didn’t immediately share the technology, which they’ve done with other research. They had to think about the implications before providing public access.

What makes this AI development interesting and dangerous?

Before, artificial intelligence (AI)-generated text would look more like Mad Libs. For example, one machine learning hobbyist was training neural networks on knock-knock jokes and ended up with the AI obsessing over a “cow with no lips.” However, the latest AI breakthrough is too convincingly human. Not only are there obvious concerns over misrepresentation and falsification, but there are also technical and ethical questions to consider.

“If GPT-2 is able to translate text without being explicitly programmed to, it invites the obvious question: What else did the model learn that we don’t know about?” asked The Verge. “OpenAI’s researchers admit that they’re unable to fully answer this. They’re still exploring exactly what the algorithm can and can’t do.”

Additionally, because the bot can be modeled on language that’s abusive or racist, it has the potential to become the world’s most powerful internet troll. More than being potentially annoying, there can be very real repercussions, including changing political debates or affecting business success.

It’s not all dire, though. A collaborative project by Harvard NLP and MIT-IBM Watson AI Lab can detect fake text. Using the code from GPT-2 itself, it reverses the algorithm to determine the statistical likelihood that predictive language generation was used.
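The detection idea can be sketched as a toy rank test: run the text back through a language model and check how highly each actual word ranked among the model’s predictions, since machine-generated text tends to pick suspiciously probable words. The prediction function below is an invented stand-in for a real model like GPT-2:

```python
def mean_rank(words, predict):
    """Average rank of each actual word within the model's ranked predictions
    for its context. A low mean rank suggests machine-generated text."""
    ranks = []
    for i, w in enumerate(words):
        ranked = predict(words[:i])  # model's guesses for position i, best first
        ranks.append(ranked.index(w) if w in ranked else len(ranked))
    return sum(ranks) / len(ranks)

# Invented stand-in for a language model: always the same ranked guesses.
def toy_predict(context):
    return ["the", "cat", "sat", "on", "unusual", "word"]

machine_like = ["the", "cat", "sat"]    # always the model's top picks
human_like = ["unusual", "word", "cat"] # frequently surprises the model
print(mean_rank(machine_like, toy_predict) < mean_rank(human_like, toy_predict))  # True
```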

Can AI be used to make better chatbots?

Once a GPT-2-type technology is released to the public, there’s no limit to how it could be applied. However, for brands, there are significant AI advantages.

“Engagement is the new currency for customer relevance,” said John Strain of Salesforce (a company we have an integration with) to Adweek, adding, “If you build a relationship, then all good things come from that.”

There’s a clear link between personalized experiences, including communication, and stronger customer relationships, and AI is the only way to provide that personalization at scale. KoeppelDirect reports that 80% of companies want chatbots implemented by 2020, a shift expected to deliver a net cost reduction of $8 billion while also improving customer service.

However, even today’s best technology still has limitations. When ZDNet put the OpenAI algorithm through its paces, it found that “when GPT-2 moves on to tackle writing that requires more development of ideas and of logic, the cracks break open fairly wide.” Therefore, in the near term, chatbots won’t be able to fully replace customer interaction and will instead remain supplemental.

What happens when AI takes on online product reviews?

Amazon’s growth has also led to the rise of a fake review economy, where sellers would pay people to write five-star reviews for their own products or bad ones for the competition. As consumers do more and more of their shopping online, buying everything from groceries to medicine, misleading consumers through false reviews can be criminal.

In fact, the FTC prosecuted its very first fake reviews case in March 2019 against someone selling weight-loss supplements on Amazon. This sets a precedent for any company engaging in the practice; fake reviews are estimated to account for 30% of Amazon reviews.

With GPT-2, the fake review economy has the potential to explode, quickly generating content that can be posted on retail sites but also on social media, where bots already abound. Even without GPT-2, online retailers are having a tough time trying to get ahead of the issue, leaving the consumer, and any brand selling on a third-party site, vulnerable.

What can brands do to address fake reviews?

Addressing fake reviews begins with confirming the reviewer’s identity. This can easily be done by sending verification emails to customers to ensure they did write the review.
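One simple way to implement that email verification step is to mail the customer a signed confirmation link and publish the review only after the link is clicked. Below is a minimal sketch using an HMAC-signed token; the order ID, email address, and secret are all illustrative placeholders, not any particular platform's API.

```python
import hashlib
import hmac

# Server-side secret; in a real deployment this would come from
# configuration, never be hard-coded, and be rotated periodically.
SECRET = b"replace-with-a-real-server-side-secret"

def make_token(order_id: str, email: str) -> str:
    """Token to embed in the 'confirm your review' email link."""
    msg = f"{order_id}:{email}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_token(order_id: str, email: str, token: str) -> bool:
    """Check the token from the clicked link, using a constant-time
    comparison to avoid timing attacks."""
    return hmac.compare_digest(make_token(order_id, email), token)

# The review is published only if the emailed token round-trips.
t = make_token("order-1042", "buyer@example.com")
print(verify_token("order-1042", "buyer@example.com", t))   # True
print(verify_token("order-9999", "buyer@example.com", t))   # False
```

Because the token is derived from the order, only someone with access to the purchaser's inbox can confirm the review, which ties each review to a verified purchase.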

Companies that are selling on Amazon can also look ahead for future AI tools to seek out fakes. For example, a beta initiative called Project Zero is working with 15 brands to test AI technology that detects counterfeit products. Early results show a 100-fold improvement. My own company offers brands several layers of protection against fake reviews. Our platform uses AI monitoring to detect anomalies in review submissions, and suspicious content is automatically rejected or flagged for review.
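To make the idea of anomaly detection concrete, here is an illustrative sketch (not any vendor's actual system) of one simple signal: near-duplicate review text. Paid fake reviews are often lightly edited copies of one another, so high word overlap between reviews from different accounts is a reasonable flag for manual inspection. The threshold here is arbitrary and would be tuned in practice.

```python
def word_set(text: str) -> set[str]:
    """Lowercased set of words in a review."""
    return set(text.lower().split())

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two review texts (0..1)."""
    sa, sb = word_set(a), word_set(b)
    return len(sa & sb) / len(sa | sb)

def flag_near_duplicates(reviews: list[str],
                         threshold: float = 0.7) -> list[tuple[int, int]]:
    """Return index pairs of reviews whose similarity meets the threshold."""
    flagged = []
    for i in range(len(reviews)):
        for j in range(i + 1, len(reviews)):
            if jaccard(reviews[i], reviews[j]) >= threshold:
                flagged.append((i, j))
    return flagged

reviews = [
    "great product works perfectly five stars",
    "great product works perfectly five stars highly recommend",
    "battery died after two weeks very disappointed",
]
print(flag_near_duplicates(reviews))  # [(0, 1)]
```

A production system would combine several such signals, for example review bursts from new accounts or rating distributions that diverge sharply from a product's history, rather than relying on text overlap alone.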

Employing initiatives like those mentioned above — whether you use a third-party solution, implement solutions of your own, or find manual ways to sort through reviews — can help you identify and steer clear of fake reviews. Along these same lines, you can also improve your brand’s authenticity by highlighting both positive and negative reviews. Rather than sugarcoating the user experience, displaying reviews that are two or three stars improves credibility and is an opportunity to show when you’ve fixed a problem, leaving the fake reviews out of the limelight.

What’s at stake for consumer-computer interaction?

Every time there’s a major breakthrough in AI, a part of us wonders, “Will humans or machines win?”

In reality, AI-enhanced communications can bring a lot of benefit to both businesses and consumers. Just as recommendation algorithms that power Netflix and Spotify have become commonplace, AI text might one day be the go-to for customer service, able to provide on-demand information that’s accurate and relevant. In the meantime, it will be important to watch out for ways AI text may go awry and cause greater harm to your business.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
