The US has created artificial intelligence with unique properties

Some time ago, the non-profit company OpenAI, backed by Elon Musk, introduced a new development: an artificial intelligence with the ability to write.

Program-generated texts are difficult to distinguish from those written by a human.

Some time ago, the non-profit company OpenAI, backed by Elon Musk, introduced a new development: an artificial intelligence with the ability to write. The system was trained on 40 GB of data, which lets it easily produce a quality article that anticipates how a particular person would write.

According to the developers, their creation can generate sentences of any length without making the errors typical of machine-generated text.

The AI was asked to continue the opening of George Orwell’s novel “1984”. Without hesitation, it produced several chapters very similar in style to Orwell’s. The more input the AI was given, the closer its answers came to Orwell’s style. The system, in other words, knows how to copy other people’s manner of writing.

The developers taught the machine to write comments, both positive and negative, in online discussions, thereby shaping public opinion. Notably, it is impossible to distinguish the machine’s comments from those left by real people.

The model was meant to be handed to programmers for further testing, but that won’t happen: its creators fear their AI could lead the world to unimaginable consequences. Even OpenAI itself does not fully understand how powerful a tool it has created.


This technology could ‘absolutely devastate’ the internet as we know it

A group of scientists backed by Elon Musk have designed a predictive text machine that is so eerily good its creators are worried about releasing it to …

Designed by OpenAI, a non-profit artificial intelligence research organisation backed by the eccentric billionaire, the machine can take a piece of writing and spit out many more paragraphs in the same vein.

Called the GPT-2, it was trained on a dataset of 8 million web pages and is so good at mimicking the style and tone of a piece of writing that it has been described as a text version of deepfakes — the emerging AI-based video technology that can realistically replicate celebrities or world leaders in video footage.

MORE: New technology used to weaponise doctored sex videos

You would be familiar with this sort of thing. Google’s Gmail service has predictive text technology and even offers up a selection of pre-written responses to the emails that you receive. But this is on a whole other level.

OpenAI produced a paper, demonstrating the prowess of its predictive text software, including examples of its work.

In one example, the machine was fed these two sentences:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

And here’s how it continued the story:

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Not bad, right?

It goes on like that for six more paragraphs, complete with fake quotes and a perfectly convincing narrative.

Elon Musk is one of the most notable backers of the OpenAI group and has frequently warned about the dangerous disruption posed by general machine intelligence. Picture: Joshua Lott. Source: AFP

The software has difficulty with “highly technical or esoteric types of content” but otherwise is able to produce “reasonable samples” just over 50 per cent of the time, researchers said.

A couple of journalists from The Guardian were given the chance to take the technology for a spin and were suitably concerned by its power.

“AI can write just like me. Brace for the robot apocalypse,” reads the headline by journalist Hannah Jane Parkinson.

The OpenAI computer was fed an article of hers and “wrote an extension of it that was a perfect act of journalistic ventriloquism”, she said.

“This AI has the potential to absolutely devastate. It could exacerbate the already massive problem of fake news and extend the sort of abuse and bigotry that bots have already become capable of doling out on social media,” she warned.

For decades, machines have struggled with the subtleties and nuance of human language and reproducing realistic imitations. Research like this shows real progress is being made on that front.

But can we handle it?

We’ve trained an unsupervised language model that can generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training: https://t.co/sY30aQM7hU pic.twitter.com/360bGgoea3

— OpenAI (@OpenAI) February 14, 2019

Elon Musk has consistently said artificial intelligence is one of the major potential threats facing the future of humanity. The OpenAI organisation has the goal of developing the technology in a safe and responsible way.

The organisation usually releases the full extent of its research but has withheld the totality of its latest project out of fear it could be abused or misused — a high likelihood given the increased weaponisation of “fake news” thanks to social media.

OpenAI’s policy director, Jack Clark, said the decision not to make GPT-2 publicly available was not about hyping the research.

“The main thing for us here is not enabling malicious or abusive uses of the technology. We’ve published a research paper and small model. Very tough balancing act for us,” he wrote on Twitter.

The organisation said it was concerned such software could be used to generate misleading news articles, impersonate others online, automate abusive or fake content on social media and automate email scams.

Ultimately, it will likely be a question of whether the good outweighs the bad.

“These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations,” the paper’s abstract says.

And on that note, it’s probably time I looked for a new job.


Amazon bets big on new electric car startup Rivian to rival Tesla

Rivian would beat Tesla to market with an electric ute. Tesla boss Elon Musk has been hyping one for years but has yet to release any concrete details …

Electric car startup Rivian has announced it has secured $US700m ($980m) in funding from an Amazon-led group of investors.

The cash injection will help the maker expand and bring its electric ute and SUV to market by 2020.

Rivian turned heads at the Los Angeles motor show in November last year with its futuristic R1T all-electric dual-cab ute concept.

The R1T turned heads when it was revealed late last year. Source: Supplied

Rivian would beat Tesla to market with an electric ute. Tesla boss Elon Musk has been hyping one for years but has yet to release any concrete details. Late last year Musk tweeted that the company may reveal a prototype this year.

I’m dying to make a pickup truck so bad … we might have a prototype to unveil next year

— Elon Musk (@elonmusk) December 11, 2018

However, Rivian has been more forthcoming with details of its future green workhorse.

The R1T will be available in two versions with 135kWh or 180kWh battery packs, with respective theoretical range of 500km and 640km.

All are all-wheel drive — electric motors power each wheel, with torque vectoring to ensure optimum grip.

Power figures vary with specification: the base makes 563kW/1120Nm and the top-spec prioritises range over power, making 522kW/1120Nm.

The Rivian R1T is due to enter production in 2020. Source: Supplied

An 800kg payload and towing capacity of up to 5000kg give the R1T the muscle to back up its claims. It can also handle some of the rough stuff with a wading depth of up to a metre.

Rivian also debuted a R1S seven-seat SUV concept at the LA show. The R1S would have a range of between 330km and 660km depending on battery specification.

The R1S has seven seats and a theoretical range up to 660km. Source: Supplied

The R1S features a genuinely luxurious interior, with a digital instrument display, a larger centrally mounted infotainment screen, leather upholstery and wood veneer.

The boot is located in the front of the vehicle because the electric motors and batteries are located underneath the vehicle.

The R1S interior is dominated by large digital screens and wood panelling. Source: Supplied

Rivian boss RJ Scaringe says the company has “spent years developing the technology to deliver the ideal vehicle for active customers”.

“This means having great driving dynamics on any surface on or off-road, providing cargo solutions to easily store any type of gear … and, very importantly, being capable of driving long distances on a single charge.”

Both vehicles will be priced from $US65,000 ($91,000) after electric vehicle incentives, which range from $US2500 to $US7500 ($3500-$10,500) depending on battery size.

If Rivian can get the two vehicles to market by the proposed 2020 launch date, it will have the market segment to itself — no other maker plans a ute or seven-seat SUV so soon.

Jaguar recently launched its mid-size I-Pace SUV, to compete against Tesla’s Model X. Joining them in the next 12 to 18 months will be the Audi e-tron and Mercedes-Benz EQ C SUVs.

In 2020, Porsche will launch the Taycan electric sports car and Volkswagen will launch the ID hatch.


Fake or F-AI-ke: Elon Musk-founded company shows how artificial intelligence can spin news from …

OpenAI, an artificial intelligence research group co-founded by billionaire Elon Musk (he stepped down from its board last year), has demonstrated a …

OpenAI, an artificial intelligence research group co-founded by billionaire Elon Musk (he stepped down from its board last year), has demonstrated a piece of software that can produce authentic-looking fake news articles after being given just a few pieces of information.

In an example published Thursday by OpenAI, the system was given some sample text: “A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.” From this, the software was able to generate a convincing seven-paragraph news story, including quotes from government officials, with the only caveat being that it was entirely untrue.

“The texts that they are able to generate from prompts are fairly stunning,” said Sam Bowman, a computer scientist at New York University who specializes in natural language processing and who was not involved in the OpenAI project, but was briefed on it. “It’s able to do things that are qualitatively much more sophisticated than anything we’ve seen before.”

OpenAI is aware of the concerns around fake news, said Jack Clark, the firm’s policy director. “One of the not so good purposes would be disinformation because it can produce things that sound coherent but which are not accurate,” he said.

As a precaution, OpenAI decided not to publish or release the most sophisticated versions of its software. It has, however, created a tool that lets policymakers, journalists, writers and artists experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.

The potential for software to near-instantly create fake news articles comes amid global concerns over technology’s role in the spread of disinformation. European regulators have threatened action if tech firms don’t do more to prevent their products helping sway voters, and Facebook has been working since the 2016 U.S. election to try to contain disinformation on its platform.

Clark and Bowman both said that, for now, the system’s abilities are not consistent enough to pose an immediate threat. “This is not a shovel-ready technology today, and that’s a good thing,” Clark said.

Unveiled in a paper and a blog post Thursday, OpenAI’s creation is trained for a task known as language modeling, which involves predicting the next word of a piece of text based on knowledge of all previous words, similar to how auto-complete works when typing an email on a mobile phone. It can also be used for translation, and open-ended question answering.
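The task described above — predicting the next word from all the words before it — can be sketched in miniature with simple bigram counts. This is only an illustration of the idea: GPT-2 performs the same task with a large neural network, and the tiny corpus and function names here are made up, not OpenAI’s.

```python
from collections import Counter, defaultdict

# Language modeling in miniature: predict the next word from bigram
# counts. GPT-2 does the same task with a large neural network; this
# corpus is invented purely for illustration.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" (seen twice, vs. once for the rest)
print(predict_next("sat"))  # → "on"
```

Auto-complete on a phone works on the same principle, just with vastly more data and a model that weighs far more context than a single preceding word.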

One potential use is helping creative writers generate ideas or dialog, Jeff Wu, a researcher at OpenAI who worked on the project, said. Others include checking for grammatical errors in texts, or hunting for bugs in software code. The system could be fine-tuned to summarize text for corporate or government decision makers further in the future, he said.


Elon Musk’s AI company created a fake news generator it’s too scared to make public

Everyone is aware of the problem of fake news online, and now the OpenAI nonprofit backed by Elon Musk has developed an AI system that can …

It’s time for another installment in our ongoing look at the future that will be brought to you thanks to the increasingly worrisome capabilities of artificial intelligence. Everyone is aware of the problem of fake news online, and now the OpenAI nonprofit backed by Elon Musk has developed an AI system that can create such convincing fake news content that the group is too skittish to release it publicly, citing fears of misuse. They’re letting researchers see a small portion of what they’ve done, so they’re not hiding it completely — but, even so, the group’s trepidation here is certainly telling.

“Our model, called GPT-2, was trained simply to predict the next word in 40GB of Internet text,” reads a new OpenAI blog about the effort. “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”

Basically, the GPT-2 system was “trained” by being fed 8 million web pages, until it got to the point where the system could look at a set of text it’s given and predict the words that could come next. Per the OpenAI blog, the model is “chameleon-like — it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing.” Even if you were trying to produce, say, a fake news story.
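As a rough illustration of how a model trained only on next-word prediction can “continue” conditioning text, here is a toy greedy generator built on bigram counts. It is a deliberately simplified stand-in for GPT-2 (which uses a large neural network and 8 million web pages), with a made-up miniature corpus:

```python
from collections import Counter, defaultdict

# Toy stand-in for GPT-2: learn which word tends to follow which,
# then greedily extend a prompt one word at a time. The corpus is
# invented for illustration.
corpus = (
    "the unicorns spoke perfect english . "
    "the unicorns spoke perfect english fluently ."
).split()

model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def continue_text(prompt, max_words=8):
    """Greedily append the most likely next word until stuck or at the cap."""
    words = prompt.split()
    for _ in range(max_words):
        followers = model.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the unicorns", max_words=3))
# → "the unicorns spoke perfect english"
```

Because the continuation is driven entirely by what the model has seen after each word, the output naturally echoes the style and content of its training text — the “chameleon-like” behavior OpenAI describes, here in the simplest possible form.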

Here’s an example: The AI system was given this human-generated text prompt:

“In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.”

From that, the AI system — after 10 tries — continued the “story,” beginning with this AI-generated text:

“The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.” (You can check out the OpenAI blog at the link above to read the rest of the unicorn story that the AI system fleshed out.)

Imagine what such a system could do, say, set loose on a presidential campaign story. The implications of this are why OpenAI says it’s only releasing publicly a very small portion of the GPT-2 sampling code. It’s not releasing any of the dataset, training code, or “GPT-2 model weights.” Again, from the OpenAI blog announcing this: “We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.

“We also think governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems,” the OpenAI blog post concludes.

Image Source: Chris Carlson/AP/Shutterstock
