Elon Musk’s ‘malicious’ AI too dangerous to release, say developers

An artificial intelligence system developed by Elon Musk’s OpenAI organisation is too dangerous to be released, the group believes.

OpenAI is a non-profit research organisation founded in 2015 with $1bn in backing from Mr Musk and others to promote the development of artificial intelligence technologies that benefit humanity.

The system its researchers have developed, officially called GPT-2, can generate text as it would naturally occur in language and has been released in part.

However, researchers are withholding the fully trained algorithm “due to our concerns about malicious applications of the technology”.

“The model is chameleon-like, it adapts to the style and content of the conditioning text,” the researchers wrote, including a number of examples to show how it worked.

AI can write just like me. Brace for the robot apocalypse

I’ve seen how OpenAI’s GPT2 system can produce a column in my style. We must heed Elon Musk’s warnings of AI doom.

Elon Musk, recently busying himself with calling people “pedo” on Twitter and potentially violating US securities law with what was perhaps just a joke about weed – both perfectly normal activities – is now involved in a move to terrify us all. The non-profit he backs, OpenAI, has developed an AI system so good it had me quaking in my trainers when it was fed an article of mine and wrote an extension of it that was a perfect act of journalistic ventriloquism.

As my colleague Alex Hern wrote yesterday: “The system [GPT2] is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.” GPT2 is so efficient that the full research is not being released publicly yet because of the risk of misuse.

And that’s the thing – this AI has the potential to absolutely devastate. It could exacerbate the already massive problem of fake news and extend the sort of abuse and bigotry that bots have already become capable of doling out on social media (see Microsoft’s AI chatbot, Tay, which pretty quickly started tweeting about Hitler). It will quash the essay-writing market, given it could just knock ‘em out, without an Oxbridge graduate in a studio flat somewhere charging £500. It could inundate you with emails and make it almost impossible to distinguish the real from the auto-generated. An example of the issues involved: in Friday’s print Guardian we ran an article that GPT2 had written itself (it wrote its own made-up quotes; structured its own paragraphs; added its own “facts”) and at present we have not published that piece online, because we couldn’t figure out a way that would nullify the risk of it being taken as real if viewed out of context.

The thing is, Musk has been warning us about how robots and AI will take over the world for ages – and he very much has a point. Though it’s easy to make jokes about his obsession with AI doom, this isn’t just one of his quirks. He has previously said that AI represents our “biggest existential threat” and called its progression “summoning the demon”. The reason he and others support OpenAI (a non-profit, remember) is that he hopes it will be a responsible developer and a counter to corporate or other bad actors (I should mention at this point that Musk’s Tesla is, of course, one of these corporate entities employing AI). Though OpenAI is holding its system back – releasing it for a limited period for journalists to test before rescinding access – it won’t be long before other systems are created. This tech is coming.

Traditional news outlets – Bloomberg and Reuters, for example – already have elements of news pieces written by machine. Both the Washington Post and the Guardian have experimented – earlier this month Guardian Australia published its first automated article written by a text generator called ReporterMate. This sort of reporting will be particularly useful in financial and sports journalism, where facts and figures often play a dominant role. I can vouch for the fact newsrooms have greeted this development with an element of panic, even though the ideal would be to employ these auto-generated pieces to free up time for journalists to work on more analytical and deeply researched stories.

But, oh my God. Seeing GPT2 “write” one of “my” articles was a stomach-dropping moment: a) it turns out I am not the unique genius we all assumed me to be; an actual machine can replicate my tone to a T; b) does anyone have any job openings?

A glimpse of GPT2’s impressiveness is just piling bad news on bad for journalism, which is currently struggling with declining ad revenues (thank you, Google! Thank you, Facebook!); the scourge of fake news and public distrust; increasingly partisan readerships and shifts in consumer behaviour; copyright abuses and internet plagiarism; political attacks (the media is “the enemy of the people”, according to Donald Trump) and, tragically, the frequent imprisonment and killings of journalists. The idea that machines may write us out of business altogether – and write it better than we could ourselves – is not thrilling. The digital layoffs are already happening, the local papers are already closing down. It’s impossible to overstate the importance of a free and fair press.

In a wider context, the startling thing is that once super-intelligent AI has been created and released it is going to be very hard to put it back in the box. Basically, AI could have hugely positive uses and impressive implications (in healthcare, for instance, though it may not be as welcomed in the world of the Chinese game Go), but could also have awful consequences. Take a look at this impressive/horrifying robot built by Boston Dynamics, which keeps me from sleeping at night. We’ve come a long way from Robot Wars.

[Video, 0:45: New dog-like robot from Boston Dynamics can open doors]

The stakes are huge, which is why Musk – again, in one of his more sensible moods – is advocating for greater oversight of companies well on their way in the AI race (Facebook, Amazon and Alphabet’s DeepMind to take just three examples. AND TESLA). Others have also stressed the importance of extensive research into AI before it’s too late: the late Stephen Hawking even said AI could signal “the end of the human race” and an Oxford professor, Nick Bostrom, has said “our fate would be sealed” once malicious machine super-intelligence had spread.

At least as we hurtle towards this cheering apocalypse we’ll have the novels and poetry that GPT2 also proved adept at creating. Now you just need to work out whether it was actually me who wrote this piece.

Hannah Jane Parkinson is a Guardian columnist

AI Text Generator Backed By Elon Musk Can Write Seriously Believable Fake News

An artificial intelligence research group, backed by Elon Musk, has created an algorithm that can generate a convincing news story using just a handful of starting words.

The algorithm was originally designed to answer questions, summarise stories and translate text, but researchers have now found it can also be used to send out fake news to the masses.

It can produce scarily coherent, sophisticated text that looks legitimate but is not accurate.

So basically, if you feed it a fake headline, it will spit out a complete article, fake quotations, statistics and all.

OpenAI has published an example; the first two lines were written by a human:

“A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.”

The rest of the story is completely made up, without any human guidance. Here are a couple of lines:

“The incident occurred on the downtown train line, which runs from Covington and Ashland stations. In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.”

The complete version includes quotes and information that simply isn’t true.

The content looks and feels so real that the not-for-profit has taken the unusual step of not releasing its research publicly, for fear of potential misuse.

“The texts that they are able to generate from prompts are fairly stunning,” Sam Bowman, a computer scientist at New York University, told Bloomberg.

“It’s able to do things that are qualitatively much more sophisticated than anything we’ve seen before,” he said.

This all comes as governments and companies work to stop the spread of fake news.

But we’re being assured the system’s abilities aren’t consistent enough to pose an immediate threat.

Elon Musk co-founded OpenAI in 2015 but stepped down from the board last year.

Jack Dorsey Picks Elon Musk as ‘Most Exciting’ User, Amazon Buys Eero and Prices Spike at Whole Foods (60-Second Video)

  • In a live Twitter interview with Recode editor-at-large Kara Swisher, Twitter CEO Jack Dorsey said he’d give himself a “C” grade as far as his work in responsible tech. He also named Elon Musk as his pick for most exciting, influential person on Twitter. (That sparked controversy, because in the past, the Tesla CEO’s tweets have led to SEC fraud charges and a defamation lawsuit.)
  • Amazon recently bought Eero, a mesh network company that helps consumers expand their Wi-Fi networks to cover their entire homes. Eero’s Alexa compatibility means the companies have been working together for years, but some are worried about what the acquisition could mean for personal data and privacy. In response to customer questions, Eero tweeted that the company “does not track customers’ Internet activity, and this policy will not change with the acquisition.”
  • Prices at Whole Foods are climbing back up after Amazon initially slashed them in 2017, reports The Wall Street Journal. The price increases affected upwards of 550 products.

‘Ofigenno’. Elon Musk tweets in Russian

World-renowned businessman Elon Musk replied in Russian to a tweet by the Kremlin-run TV station NTV.

хаха офигенно

— Elon Musk (@elonmusk) 13 February 2019

“Haha awesome [ofigenno – in Russian],” Musk wrote under the video with the question ‘And how do you like this, @elonmusk?’

NTV had published a video story about a driver from Stavropol Krai who customized his Lada car. The customization was so fine that even the founder of SpaceX saw value in it.

The Internet meme ‘And how do you like this, @elonmusk?’ became very popular in the former Soviet countries. Netizens often use it when showing off stories and photos of devices and inventions on the web.

belsat.eu
