An artificial intelligence system developed by Elon Musk’s OpenAI organisation is too dangerous to be released, the group believes.
OpenAI is a non-profit research organisation founded in 2015 with $1bn in backing from Mr Musk and others to promote the development of artificial intelligence technologies that benefit humanity.
The system its researchers have developed, officially called GPT-2, can generate text that reads as though naturally written, and has so far been released only in part.
However, researchers are withholding the fully trained algorithm “due to our concerns about malicious applications of the technology”.
“The model is chameleon-like: it adapts to the style and content of the conditioning text,” the researchers said, including a number of examples to show how it worked.
To work, the algorithm is fed a passage of text of any length and then generates sentences based on its predictions of what should naturally follow.
This means it appears capable of writing legitimate-looking news articles, raising the risk that people pushing fake news on a given theme could produce their content at scale.
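For readers who want a concrete picture of that prediction loop, the sketch below shows how a comparable result can be reproduced with the smaller model OpenAI did release, loaded through the open-source Hugging Face transformers library; the prompt text and sampling settings here are illustrative assumptions, not anything OpenAI published.

```python
# A minimal sketch of GPT-2-style next-word prediction, assuming the
# publicly released small GPT-2 model and the open-source Hugging Face
# "transformers" library; this is NOT the withheld full model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The "conditioning text": the model adapts to its style and content.
prompt = "Scientists announced today that"  # illustrative prompt, not from OpenAI
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generation repeatedly samples a likely next token and appends it,
# extending the prompt into a plausible-sounding continuation.
output_ids = model.generate(
    input_ids,
    max_length=60,      # stop after roughly 60 tokens in total
    do_sample=True,     # sample rather than always taking the likeliest token
    top_k=40,           # restrict sampling to the 40 likeliest tokens
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Run repeatedly, the same prompt yields a different continuation each time, which is what makes bulk production of themed text cheap.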
OpenAI believes the technology has significant policy implications.
On the positive side, the researchers believe the technology could be used to develop AI writing assistants and dialogue agents – such as conversational interfaces for voice assistants – and to aid language translation and speech recognition.
However, the harms could be significant too. Because the algorithm can copy the style of text it has been trained on, it could be used to impersonate others online and to generate misleading news articles.
It could also automate the production of abusive or fake content to post on social media, as well as spam and phishing content.
“These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns,” say the researchers.
“The public at large will need to become more sceptical of text they find online, just as the ‘deep fakes’ phenomenon calls for more scepticism about images.”
The organisation’s chief technology officer, Greg Brockman, shared one particularly convincing piece of text that OpenAI said the algorithm had produced – and which one of its employees printed out and posted by the office recycling bin.
Previously, Mr Musk has criticised Facebook boss Mark Zuckerberg for having a “limited” understanding of artificial intelligence in a spat over the potential dangers of advances in the field.
Mr Musk, alongside scientists such as Stephen Hawking, warned of the potential moment at which artificial intelligence develops the ability to redesign itself.
They warned that if this happens there could be an intelligence explosion, with the machine rapidly redesigning itself over and over, faster than humankind could keep up.
Many researchers fear that this could potentially lead to human extinction.