Elon Musk: ‘A very serious danger to the public’ – tech giant’s dire warning revealed

ELON MUSK offered a grave warning over the dangers of artificial intelligence, claiming it could be more dangerous than nuclear warheads.

In July this year, Microsoft invested $1 billion (£823,825,000) in OpenAI, the AI venture Musk co-founded, which aims to mimic the human brain using computers.

OpenAI said the investment would go towards its efforts of building artificial general intelligence (AGI) that can rival and surpass the cognitive capabilities of humans.

CEO Sam Altman said: “The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity.

“Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI.”


How AI is Learning to Play with Words

Dario Amodei, OpenAI’s research director, informs us they have created a language modeling program which is very imaginative, to say the least.

Imagine you go to a bookstore and notice an exciting cover. You pick up the book, read the summary on the back, and skim the rave reviews. The plot seems intriguing enough, but when you check the author, it says “by AI-something.” Would you buy the book, or would you think it a waste of money? Those are decisions we will face moving into the future, along with the question of who is responsible for such writing. Either way, it shows how AI is learning to play with words.

You may as well decide now whether you would purchase content written by AI, because that is what the future will bring: AI is learning to play with words.

All of us have gotten used to chatbots and their limited capacity, but it appears their boundaries will be surpassed. Dario Amodei, OpenAI’s research director, informs us they have created a language modeling program which is very imaginative, to say the least. Its latest achievement was creating counterarguments and discussions with the researchers.

The program was fed a variety of articles, blogs, websites, and other content from the internet. Surprisingly, it managed to produce an essay worthy of any reputable writing service, and on a particularly challenging topic, by the way (Why Recycling Is Bad for the World).

Did the researchers do anything to help the program by providing specific, additional input? Certainly not. GPT-2, OpenAI’s new algorithm, did everything on its own. It excelled in different tests, such as storytelling and predicting the next word in a sentence. Admittedly, it is still far from inventing an utterly gripping story from beginning to end, as it tends to stray off topic, but it has great potential.
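
For readers curious to poke at this themselves, the smaller GPT-2 checkpoints that OpenAI has released publicly can be sampled with the open-source Hugging Face transformers library. The snippet below is a minimal sketch, not anything from OpenAI’s own tooling; the prompt and generation settings are arbitrary choices for illustration.

```python
# Minimal sketch: sampling a continuation from the public GPT-2 checkpoint.
# Requires `pip install transformers torch`; "gpt2" is the 124M-parameter release.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Recycling is bad for the world because"
outputs = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)

print(outputs[0]["generated_text"])
```

Running it a few times makes the “tends to stray off topic” observation easy to reproduce, especially with the smaller checkpoint.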

What sets GPT-2 apart from other similar AI programs is its versatility. Typically, such programs are trained for a narrow domain and can complete only specific tasks. This AI language model, however, uses its input to deal successfully with a wide variety of topics.

What exactly can this AI program do?

For starters, the program could be used for summarizing articles or translating. Chatbots would become more informative and flexible. Ultimately, the program could turn into an excellent personal assistant, summarizing reports for you and sending out company or business information. These kinds of programs would more or less be doing what a human PA already does.
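
To make the summarization use case concrete: the GPT-2 paper describes a zero-shot trick of appending “TL;DR:” to an article and letting the model continue. A rough sketch with the public checkpoint and the Hugging Face library might look like this; the article text is a placeholder and the sampling settings are illustrative guesses, not recommended values.

```python
# Sketch of zero-shot summarization by prompting GPT-2 with "TL;DR:".
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

article = "OpenAI released a larger version of its GPT-2 language model last week..."
prompt = article + "\nTL;DR:"

inputs = tokenizer(prompt, return_tensors="pt")
summary_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the generated continuation, skipping the prompt tokens.
print(tokenizer.decode(summary_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```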

There are several very negative possibilities.

At present, how well this type of AI program works largely comes down to the material it is fed. If the AI is given offensive, violent content, or articles degrading particular nations or races (take Hitler’s propaganda, for example), GPT-2 will respond in kind and produce similarly vile and offensive content.

The system could be abused for automated and refined trolling.

People would be convinced they were receiving someone’s genuine opinions rather than an automated response from a machine. Phishing schemes would also become much easier to pull off, and cybercriminals will certainly be all over this kind of AI. The internet could be flooded with artificial content. Do we want that as our future? Remember pornographic deepfakes: a classic example of how cutting-edge technology was abused once it was released to the public.

What will come of all this new AI code?

As the proponents of AI freely admit, only the future will show what is really in the works at present. In the meantime, OpenAI will continue to invest resources in its innovative language model and feed it more and more data, hoping for the best. Will we be on the receiving end of “the best”? That remains to be seen.

Microsoft Corporation’s $1 Billion Equity Investment in OpenAI

San Francisco-based OpenAI LP was founded in 2015 as a nonprofit research lab. Since its founding, OpenAI has employed artificial intelligence …

K&L Gates LLP advised Microsoft Corporation on the deal.

Microsoft Corporation (Nasdaq: “MSFT”) completed a US$1 billion equity investment in OpenAI LP to build secure, trustworthy, and ethical artificial intelligence (AI) to serve the public.

Through this partnership the companies will focus on building a platform that OpenAI will use to create new AI technologies and deliver on the promise of artificial general intelligence.

San Francisco-based OpenAI LP was founded in 2015 as a nonprofit research lab. Since its founding, OpenAI has employed artificial intelligence researchers to make advances in the field, such as teaching a robotic hand to perform human-like tasks entirely in software, cutting down the cost and time to train robots.

The group has also focused on the safety and social implications of AI, researching how computers can generate realistic news stories with little more than headline suggestions and warning researchers to consider how their work and algorithms might be misused by bad actors before publishing them.

The K&L Gates team was led by Seattle corporate partner Annette Becker and included Seattle partners Won-Han Cheng, Carley Andrews, and Mike Gearin; Wilmington (Del.) partners Eric Feldman and Lisa Stark; Washington, D.C., partner Barry Hart; and San Francisco partner Rikki Sapolich-Krol, as well as Seattle associates Teresa Teng, Caitlin Velasco, and Andrea Templeton, and Washington, D.C., associate Laura Gregory.

Involved fees earner: Carley Andrews – K&L Gates; Annette Becker – K&L Gates; Won-Han Cheng – K&L Gates; Eric Feldman – K&L Gates; Michael Gearin – K&L Gates; Laura Gregory – K&L Gates; Barry Hart – K&L Gates; Rikki Sapolich-Krol – K&L Gates; Lisa Stark – K&L Gates; Andrea Templeton – K&L Gates; Teresa Teng – K&L Gates; Caitlin Velasco – K&L Gates;

Law Firms: K&L Gates;

Clients: Microsoft;


Philanthropists should treat AI as an ethical, not a technological, challenge

Hoffman is the biggest backer of OpenAI, a hugely ambitious project founded in San Francisco by Musk — before he left, citing conflicts of interest as …

The list of existential threats to mankind on which wealthy philanthropists have focused their attention — catastrophic climate change, pandemics and the like — has a new addition: artificially intelligent machines that turn against their human creators.

Artificial intelligence (AI) could pose a threat “greater than the danger of nuclear warheads, by a lot”, according to Elon Musk, the entrepreneur behind electric car maker Tesla. As the author James Barrat put it, a superhuman intelligence, equipped with the ability to learn but without the ability to empathise, might well be Our Final Invention.

Even if the machines are not going to kill us, there are plenty of reasons to worry AI will be used for ill as well as for good, and that advances in the field are coming faster than our ability to think through the consequences.

A few notes on OpenAI’s “fake news–writing AI”

Last week, artificial intelligence research lab OpenAI decided to release a more expanded version of GPT-2, the controversial text-generating AI model …

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Last week, artificial intelligence research lab OpenAI decided to release a larger version of GPT-2, the controversial text-generating AI model it first introduced in February. At the time, the lab refrained from releasing the full AI model, fearing it would be used for malicious purposes.

Instead, OpenAI opted for a staged release of the AI, starting with a limited model (124 million parameters) and gradually releasing more capable versions. In May, the research lab released the 355-million-parameter version of GPT-2, and last week it finally released the 774-million-parameter model, roughly half the capacity of the full text generator.

“We are considering releasing the 1.5 billion parameter version in the future,” OpenAI researchers wrote in a paper they released last week. “By staggering releases, we allow time for risk analyses and use findings from smaller models to inform the actions taken with larger ones.”
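
For reference, the staged releases correspond to checkpoints of increasing size that are now mirrored on the Hugging Face model hub. The sketch below assumes the commonly cited mapping of hub names to parameter counts (gpt2 ≈ 124M, gpt2-medium ≈ 355M, gpt2-large ≈ 774M) and simply counts parameters to check it; the downloads are large, so this is purely illustrative.

```python
# Sketch: the staged GPT-2 releases map onto checkpoints on the Hugging Face hub.
# Name -> approximate size, per public hub listings (my assumption, not OpenAI's docs).
from transformers import AutoModelForCausalLM

CHECKPOINTS = {
    "gpt2": "~124M, February release",
    "gpt2-medium": "~355M, May release",
    "gpt2-large": "~774M, August release",
    # "gpt2-xl": "~1.5B, still withheld at the time of the paper",
}

for name, note in CHECKPOINTS.items():
    model = AutoModelForCausalLM.from_pretrained(name)   # downloads the weights
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters ({note})")
```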

As usual, business and tech publications used clickbait headlines to announce the release of the text-generating AI. “Dangerous AI offers to write fake news,” wrote the BBC. “OpenAI just released a new version of its fake news-writing AI,” read Futurism’s headline. Observer described it as such: “OpenAI Can No Longer Hide Its Alarmingly Good Robot ‘Fake News’ Writer.” Other outlets used similarly sensational headlines and made references to Elon Musk, OpenAI’s co-founder, to create even more hype around the topic.

But while most publications were busy warning about the threat of an AI-triggered fake news apocalypse (and drawing money-making clicks to their websites), they missed the important points that the OpenAI researchers raised (and didn’t raise) in their GPT-2 paper.

Generating coherent text is not enough to produce fake news

In its paper, OpenAI mentions fake news as just one of the potential malicious uses of its AI. “We chose a staged release process, releasing the smallest model in February, but withholding larger models due to concerns about the potential for misuse, such as generating fake news content, impersonating others in email, or automating abusive social media content production,” the researchers wrote.

But fake news got the greatest share of the coverage, probably because the topic has loomed so large since the 2016 U.S. presidential election. The authors of the paper discuss the threat thoroughly.

According to the OpenAI researchers, “Humans can be deceived by text generated by GPT-2 and other successful language models, and human detectability will likely become increasingly more difficult.”

The researchers further note that as they increase the size of the AI model, the quality of the text increases. And here’s what they mean by “quality”: “With a human-in-the-loop, GPT-2 can generate outputs that humans find credible.”

The researchers further describe that in some experiments, humans considered the output created by their AI model “credible” about 66 percent of the time. Samples of the 774-million-parameter GPT-2 model were “statistically” similar to New York Times articles 75 percent of the time.

These are all interesting achievements, but the problem with the OpenAI paper, and with the articles covering the text-generating machine learning model, is that they assume generating coherent text is enough to produce fake news. “Credible” and “quality” are vague words, and from the text of the paper, the authors assume readers will consider anything that is coherent and passes as the writing of an English-speaking human to be credible.


Say you find a nondescript piece of paper lying on the floor, which contains a very coherent and eloquent excerpt about a nuclear war between the U.S. and Russia. Would you believe it? Probably not. Coherence is just one of the many requirements of spreading fake news.

The more important factor is trust. If you read the same text on the front page of The New York Times or The Washington Times, you’re more likely to believe it, because you trust them as credible news sources. Even if you see a minor grammatical flaw in the story, you’ll probably dismiss it as a human mistake and still believe what you read.

If you can deceive your readers into trusting you, you won’t even need to be a very good English writer. In fact, the group of Macedonian teens who created a fake news crisis during the 2016 presidential elections didn’t even have proper English skills. The key to their success was websites that looked authentic and trustworthy, in which they published fake stories with sensational headlines that triggered reactions from users across social media and gamed trending algorithms.

Even authentic news websites often get caught up in the propagation of false stories. A recent example is a non-existent flaw in the VLC media player, which was reported by several reputable tech publications and resulted in unwarranted panic around one of the most popular media players. The case neither involved artificial intelligence, nor loads of content. A few, well-placed stories triggered the damage.

And let’s not forget that machine learning algorithms, used in GPT-2 and other state-of-the-art AI models, are just statistical machines. They find correlations between different text excerpts and create new ones that are statistically similar to those. They have no understanding of the complexities and different nuances of human language, not even as much as a six-year-old kid.
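
That “statistical machine” description can be made concrete: at each step the model simply assigns a probability to every possible next token given the text so far, and generation is nothing more than sampling from that distribution. A minimal sketch (the prompt and the number of candidates shown are arbitrary):

```python
# Sketch: GPT-2 only outputs a probability distribution over the next token.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The president announced a new", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]      # scores for the next token only
probs = torch.softmax(logits, dim=-1)

# Show the five most likely continuations and their probabilities.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r:>12}  {p.item():.3f}")
```

There is no model of truth or intent anywhere in that loop, which is the point the paragraph above is making.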

Metadata is key to fighting AI-generated fake content

While the OpenAI researchers lay out the ongoing cat-and-mouse competition between AI techniques that generate synthetic text and those that detect it, they raise a very important point that has gone mostly unnoticed by the publications that covered the GPT-2 paper.

“Preventing spam, abuse, or disinformation online does not rely entirely on analyzing message content,” the researchers write, adding, “Metadata about text, such as time taken to write a certain amount of text, number of accounts associated with a certain IP, and the social graph of participants in an online platform, can signal malicious activity.”

Most online platforms such as Twitter, Facebook and Amazon already use metadata as cues to discover and fight bot-driven activities. While the method might sound trivial, it would be very effective against an AI model that could create large volumes of coherent text such as tweets or product reviews.
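
As a toy illustration of the kind of metadata signal the researchers describe, a platform might flag posts composed implausibly fast or accounts sharing an IP address with many others. The field names and thresholds below are invented for illustration; they are not any real platform’s implementation.

```python
# Toy sketch of metadata-based flagging; fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class PostMetadata:
    chars_written: int          # length of the posted text
    seconds_to_compose: float   # time between opening the editor and posting
    accounts_on_same_ip: int    # other accounts recently seen on this IP

def looks_automated(meta: PostMetadata) -> bool:
    # Humans rarely sustain more than ~10 characters per second of typing.
    typing_rate = meta.chars_written / max(meta.seconds_to_compose, 1e-6)
    too_fast = typing_rate > 10
    # Many accounts behind a single IP is a classic bot-farm signal.
    crowded_ip = meta.accounts_on_same_ip > 20
    return too_fast or crowded_ip

# A 2,000-character post written in three seconds gets flagged regardless of
# how fluent the text itself is.
print(looks_automated(PostMetadata(chars_written=2000,
                                   seconds_to_compose=3.0,
                                   accounts_on_same_ip=1)))
```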

A person or entity armed with the most advanced AI-powered text generator would still need to employ thousands of devices with different IPs and in different geographical locations to be able to game trending algorithms in social media or ecommerce platforms. Again, in this case, a well-placed, sensational post by an influencer account can create the kind of organic reaction that no large-scale AI algorithm could synthesize.

Malicious actors are not interested in AI-generated text

The researchers note in their paper, “Our threat monitoring did not find evidence of GPT-2 direct misuse in publicly-accessible forums but we did see evidence of discussion of misuse.”

Those discussions were mostly inspired by the string of sensational articles that followed the initial release of GPT-2. Interestingly, the researchers note that most of those discussions had died down by mid-May, a few months after the first version of GPT-2 was made available to the public.

“We believe discussion among these actors was due to media attention following GPT-2’s initial release; during follow-up monitoring there was no indication that these actors had the resources, capabilities, or plans to execute at this time,” the researchers observed.

While it’s easy to spin fantastic stories about the destruction AI models like GPT-2 can cause, the reality is that the people who would want to use it as a weapon know that it takes much more than coherent text to run a successful fake news campaign.
