OpenAI has released the largest version yet of its fake-news-spewing AI


In February OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organization decided not to release it. Some within the AI research community argued it was a smart precaution; others wrote it off as a publicity stunt. The lab itself, a small San Francisco-based for-profit that seeks to create artificial general intelligence, has firmly held that the decision is an important experiment in how to handle high-stakes research.

Now, six months later, OpenAI’s policy team has published a paper examining the impact of the decision so far. Alongside it, the lab has released a version of the model, known as GPT-2, that is half the size of the full one, which has still not been released.

In May, a few months after GPT-2’s debut, OpenAI revised its stance on withholding the full code and adopted what it calls a “staged release”: the staggered publication of incrementally larger versions of the model, building up to the full one. In February it had published a version that was merely 8% of the size of the full model; before the most recent release, it published another that was roughly a quarter of the full size. Throughout this process, it also partnered with selected research institutions to study the full model’s implications.

The report details what OpenAI learned throughout this process. It notes that both the staged release and the research partnership agreements proved to be processes worth replicating in the future. They helped OpenAI better understand and anticipate possible malicious uses of GPT-2, and the research partners were able to better quantify some of the threats that had previously been only speculative. A study conducted by collaborators at Cornell University, for example, found that readers on average believed GPT-2’s outputs to be genuine news articles nearly as often as New York Times ones. Several researchers outside the official partnerships also began tackling the challenge of detecting machine-generated text.

The authors concluded that, after careful monitoring, OpenAI had not yet found any attempts at malicious use but had seen multiple beneficial applications, including code autocompletion, grammar assistance, and question-answering systems for medical assistance. As a result, the lab felt that releasing the most recent code was ultimately more beneficial. Other researchers argue that several successful efforts to replicate GPT-2 have made OpenAI’s withholding of the code moot anyway.

The report has received a mixed response. Some have lauded OpenAI for sparking a discussion and introducing a set of norms that didn’t previously exist. “The staged release of GPT-2 […] was a useful experiment,” says Peter Eckersley, the director of research at the Partnership on AI, of which OpenAI is a member. “Through gathering the AI community to debate these matters, we’ve found there are many subtle pieces that need to be gotten right in deciding when and how to publish research that has a risk of unintended consequences or malicious uses.”

Others, however, have remained critical of OpenAI’s decisions. Vanya Cohen, a recent master’s graduate from Brown University who recreated an open-source version of GPT-2, argues that withholding the model does more to slow down research on countermeasures than it does to prevent replication. “Large language models like GPT-2 are the best currently available tools for identifying fake text generated by these same models,” he says.
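Cohen’s claim can be made concrete with a minimal sketch of the idea: score a passage with a language model and treat unusually predictable text as suspect. This is not OpenAI’s or Cohen’s actual detector; it assumes the Hugging Face transformers library, the small public GPT-2 checkpoint, and a purely illustrative threshold.

```python
# A minimal sketch of perplexity-based detection, assuming the Hugging Face
# `transformers` library and the small public GPT-2 checkpoint. The threshold
# below is illustrative only and would need calibration on real data.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over the tokens
    return float(torch.exp(loss))

def looks_machine_generated(text: str, threshold: float = 30.0) -> bool:
    # Text written by a language model tends to look unusually "predictable"
    # (low perplexity) to a similar model, compared with human news prose.
    return perplexity(text) < threshold
```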

Still others were more measured: “I don’t think a staged release was particularly useful in this case because the work is very easily replicable,” says Chip Huyen, a deep learning engineer at Nvidia. “But it might be useful in the way that it sets a precedent for future projects. People will see staged release as an alternative option.” Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, which also adopted a staged release for its language model Grover, echoes the sentiment: “I applaud their intent to design a thoughtful, gradual release process for AI technology but question whether all the fanfare was warranted.”

Jack Clark, the policy director of OpenAI, places GPT-2 in the context of the organization’s broader mission. “If we are successful as an AI community in being able to build [artificial general intelligence], we will need a huge amount of historical examples from within AI” of how to handle high-stakes research, he says. “But what if there aren’t any historical examples? Well, then you have to generate [your own] evidence—which is what we’re doing.”


Elon Musk: Computers will surpass us ‘in every single way’

In 2015 Musk co-founded OpenAI as an AI research nonprofit. He left the board last year. He said in a tweet earlier this year that OpenAI and Tesla …

He talked about how, in the future, technology from Neuralink, a start-up he co-founded, could give people a way to boost their skills in certain subjects. Neuralink is seeking to draw on AI to augment people’s cognitive capabilities with brain-machine interfaces, though the company is not there yet.

“The first thing we should assume is we are very dumb,” Musk said. “We can definitely make things smarter than ourselves.”

Jack Ma, the Alibaba co-founder who shared the stage with Musk, had a different view, suggesting that a computer has never spawned a human being, or even a mosquito.

“I’ve never in my life, especially [in the] last two years, when people talk about AI, human beings will be controlled by machines,” he said. “I never think about that. It’s impossible.”


Global Computational Biology Market: Trends, Price, Share and Growth Rate from 2018 to 2024


Our latest research report, entitled Computational Biology Market (by application (human body simulation software, preclinical drug development), services (in-house, contract), and end-user (academics and commercial)), provides complete and deep insights into the market dynamics and growth of the computational biology industry. The latest information on market risks, industry chain structure, cost structure, and opportunities is offered in this report. Past, present, and forecast market information supports investment-feasibility analysis by examining the essential growth factors for computational biology. According to the report, the global computational biology market is projected to grow at a CAGR of 21% over the forecast period of 2018-2024.
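For a sense of scale, the sketch below works through what a 21% CAGR implies over the 2018-2024 forecast window; because the excerpt above does not state a base market size, the 2018 figure is purely hypothetical.

```python
# A hypothetical illustration of what a 21% CAGR over 2018-2024 implies.
# The 2018 base value is an arbitrary index, not a figure from the report.
base_2018 = 100.0            # hypothetical 2018 index value
cagr = 0.21                  # compound annual growth rate cited in the report
years = 2024 - 2018          # six compounding periods

value_2024 = base_2018 * (1 + cagr) ** years
print(f"2024 index: {value_2024:.1f} (about {value_2024 / base_2018:.2f}x the 2018 level)")
# A 21% CAGR roughly triples the figure (about 3.14x) over the six-year window.
```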

Ask for a Sample Copy of the Research Report with Table of Contents @ https://www.infiniumglobalresearch.com/reports/sample-request/1799

What is the Market Size and Growth of the Global Computational Biology Market?

Computational biology can be defined as the development and application of data-analytical and theoretical methods, mathematical modeling, and computational simulation techniques for studying biological systems. It is a multifaceted field that combines the principles of applied mathematics, animation, computer science, anatomy, neuroscience, visualization, biophysics, biochemistry, ecology, and genetics. Computational biology encompasses many aspects of bioinformatics and is the science of using biological data to develop algorithms or models to understand biological systems and relationships. A key advantage of computational biology is that it reduces the number of human candidates required to test drugs during the development stage. It is also useful in formulating drugs for the pediatric population and pregnant women.

What are the Main Growth Drivers of the Computational Biology Market?

Some of the major factors driving the growth of the computational biology market are the growing number of clinical studies in pharmacokinetics and pharmacogenomics for drug discovery and development, an upsurge in drug design, personalized medicine, and disease modeling. Furthermore, the growing demand for predictive models and increasing funding from governments and private organizations for R&D in this field are also boosting the growth of the computational biology market.

In addition, advantages offered by computational biology, such as reducing the risks involved in human clinical trials for testing drugs during their development phase, also augment growth in this market. Extensive use of this technology across a large number of academic, industrial, and commercial applications further fuels the market’s growth. On the flip side, a lack of properly trained professionals is hampering the growth of the computational biology market.

Ask for a Discount on the Latest Research Report @ https://www.infiniumglobalresearch.com/reports/request-discount/1799

Which Region is a Market Leader in the Global Computational Biology Market and What are the Key Reasons?

Among regions, North America has emerged as the largest market for computational biology, followed by Europe and the Asia Pacific. In North America, factors such as growing investment in the R&D of novel drugs and disease modeling, along with technological advancements in biological computation, are driving the growth of this market. Growth in the Asia Pacific region is driven by increased expenditure on pharmacogenomics and pharmacokinetics research in clinical studies of newer drugs.

What Segments Make up the Computational Biology Industry?

The report on the global computational biology market covers segments such as applications, services, and end-user. On the basis of applications, the market is categorized into human body simulation software, preclinical drug development, cellular & biological simulation, clinical trials, and drug discovery and disease modeling. On the basis of services, it is categorized into in-house and contract. On the basis of end-user, it is categorized into academics and commercial.

Who are the Key Players in the Computational Biology Market?

The report provides profiles of companies in the global computational biology market such as Chemical Computing Group Inc., Genedata AG, Nimbus Discovery LLC, Simulation Plus Inc., Dassault Systemes, Compugen Ltd., Rosa & Co. LLC, Insilico Biotechnology AG, Schrodinger, and Leadscope Inc.

Browse Detailed TOC, Description, and Companies Mentioned in Report @ https://www.infiniumglobalresearch.com/healthcare-medical-devices/global-computational-biology-market

Reasons to Buy this Report:

  • Comprehensive analysis of global as well as regional markets for computational biology.
  • Complete coverage of all the product type and application segments to analyze trends, developments, and the market size forecast up to 2024.
  • Comprehensive analysis of the companies operating in this market, including analysis of product portfolio, revenue, SWOT analysis, and the latest developments of each company.
  • The Infinium Global Research Growth Matrix presents an analysis of the product segments and geographies that market players should focus on to invest, consolidate, expand and/or diversify.


Hello, This Is Artificial Intelligence. How Can I Help You? Eye on AI


People who call companies to ask questions about their cable bills or complain about their Internet service being out are increasingly talking to artificial intelligence.

Natural language processing, a subset of A.I. that helps computers understand speech, has become good enough that it’s being used to listen and respond to basic customer questions.
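As a rough illustration of the kind of intent routing such systems perform once a caller’s words have been transcribed to text, here is a toy sketch rather than any vendor’s actual product; it assumes the Hugging Face transformers zero-shot-classification pipeline with a public BART model, and the intent labels are invented.

```python
# A toy sketch of call-center intent routing, not any vendor's product.
# Assumes the caller's speech has already been transcribed to text and that
# the Hugging Face `transformers` zero-shot-classification pipeline is
# available; the intent labels below are made up for illustration.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

INTENTS = ["billing question", "service outage", "cancel service", "talk to a human agent"]

def route(utterance: str) -> str:
    """Return the most likely intent so the call can be answered or escalated."""
    result = classifier(utterance, candidate_labels=INTENTS)
    return result["labels"][0]  # labels come back sorted by score, highest first

print(route("My internet has been down since this morning"))   # likely "service outage"
print(route("Why is my cable bill higher this month?"))         # likely "billing question"
```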

Over the past year, Google, Amazon and business software firm Twilio have all ramped up their marketing of A.I.-powered software for call centers. Their sales pitch is that the technology can handle customer service calls more quickly while freeing human agents to handle more complicated questions.

Less discussed publicly, at least, is that A.I. will also likely let companies save money by reducing the number of call center workers that they need.

In fact, customer service, which includes call center technology, is one of the most common arenas for using A.I., tech publisher O’Reilly Media said in a report earlier this year. Only research and development ranked higher.

The path to this point hasn’t exactly been smooth. Three years ago, Microsoft, Facebook, and Google started touting voice-based digital assistants and online chatbots.

They said that their A.I. could handle complex tasks, like reading users’ calendars, identifying their travel schedules, and then proactively booking them hotel rooms for those dates. But the technology failed to live up to the hype.

Since then, A.I. has improved, explained Olive Huang, a vice president of research for analyst firm Gartner. Although it still has room for improvement, the technology is now good enough for some simpler tasks like booking a hotel room when asked.

Companies are increasingly ready to give A.I. another chance, Huang said. That’s particularly true, she said, because customer call volume is rising quickly and increasing staffing to handle it is expensive.

Still, natural language processing has its limits. For instance, the technology often fails to understand people with certain accents.

“My accent has always been impossible for Amazon Alexa,” said Huang, who described her voice in English as a blend of Singaporean Chinese and German.

Additionally, companies are still figuring out how to smoothly transition callers from digital assistants to human operators. People invariably speak differently based on who or what they’re talking to.

“If I know I’m talking to a human, then I will talk like a human,” Huang said. “If I know I’m talking to a virtual agent, then it’s like talking to a five-year old—I will be precise.”

And while some voice technologies are increasingly sounding more human-like by incorporating “umms” and pauses, people can find this “creepy,” she said. When a virtual assistant sounds too human, “then you don’t know how to talk to it,” Huang said.

Jonathan Vanian

@JonathanVanian

jonathan.vanian@fortune.com


A.I. IN THE NEWS

Facial-recognition goes public. Megvii, a Chinese startup specializing in facial-recognition technology, plans to go public in Hong Kong, according to a CNBC report. The company, whose main rival is Chinese tech firm Sensetime, recently raised $750 million in funding at a valuation of over $4 billion, the report said.

Scanning faces in Uganda. Huawei is supplying facial-recognition and other data-crunching technologies to law enforcement in Uganda, The Financial Times reported. A Uganda police spokesperson told the newspaper, “The cameras are already transforming modern day policing in Uganda, with facial recognition and artificial intelligence as part of policing and security.”

DeepMind co-founder is taking a “time out.” DeepMind, the high-profile A.I. research lab that’s part of Google, has placed co-founder Mustafa Suleyman on leave for unspecified reasons, Bloomberg News reported. A spokesperson told the news service that “Mustafa is taking time out right now after 10 hectic years,” but did not say when he would return.

Even Xbox? Microsoft contractors listened to audio recordings of Xbox players in order to use the data to improve Microsoft’s A.I.-powered voice technologies, tech publication Motherboard reported. Several other big tech companies like Amazon and Google have also faced criticism for using contract workers to listen to audio recordings.

AFRICA’S A.I. HOPES

Wim Delva, acting director of the school for data science and computational thinking at Stellenbosch University, in South Africa, writes in Quartz (per The Conversation) about universities debuting data science and A.I. research initiatives in Africa, and how they may differ from projects in other countries. Delva writes: “It is human nature to focus on immediate, locally perceived problems before venturing into fixing more remote ones. So people and organizations from elsewhere in the world may not always identify and try to tackle the African continent’s problems. These issues include improving access and equity in health care; improving road safety and bolstering food security.”

EYE ON A.I. TALENT

Recursion Pharmaceuticals has hired Imran Haque as vice president of data science. Haque, who specializes in machine learning and drug discovery, was previously the chief scientific officer of genomics company Freenome.

Breather, a startup focusing on office rentals, picked Philippe Bouffaut to be chief technology officer. Bouffaut was previously the vice president of products and engineering at public relations software company Cision.

EYE ON A.I. RESEARCH

Electrochemical A.I. action. Researchers from New York University’s school of engineering published a paper about using deep learning to improve the process of electrosynthesis, an environmentally-friendly chemical synthesis technique. Miguel Modestino, an NYU assistant professor and co-author of the paper, said in a statement that his team believes “this is the first time AI has been used to optimize an electrochemical process.”

A.I. to predict ozone concentrations. Researchers from the University of Toronto, Carnegie Mellon University, the Jet Propulsion Laboratory, National Center for Atmospheric Research, and University of Science and Technology of China published a paper about using deep learning to predict ozone concentrations during the summer in the U.S. Although the A.I. system was effective, the researchers said “other modern machine learning algorithms have the potential for even greater gains in performance.”

FORTUNE ON A.I.

Huawei Launches New A.I. Chip As Company Enters ‘Battle Mode’ To Survive – By Eamon Barrett

No Humans Needed: Chinese Company Uses AI to Read the News, Books – By Alyssa Newcomb

How Amazon and Silicon Valley Seduced the Pentagon – By James Bandler, Anjali Tsui, and Doris Burke

BRAIN FOOD

I guess we’ll find out how dangerous this really is. Two young graduate students have created A.I. software that can generate convincing prose, which they said was based on similar technology created by the high-profile OpenAI research group, Wired reported. What’s noteworthy about the research is that OpenAI originally said it wanted to keep the secret sauce behind its technology private because it was worried the technology would be used by bad actors, for instance to create realistic fake news. The graduate students created the language-generating tech to show that it’s possible for many people, not just well-funded research groups, to create these kinds of complicated systems.


How could artificial intelligence serve Nunavummiut?


Artificial Intelligence, or AI, is a growing reality around the world. How do Nunavummiut want AI to serve their needs?

That’s the question Valentine Goddard posed at a workshop on Saturday, Aug. 24, at the Pinnguaq Makerspace building in Iqaluit.

Goddard facilitated a workshop on AI as part of the Nunavut Arts Festival, which took place in the territory’s capital from Aug. 19 to 25.

“What would Nunavut AI look like? We always have to ask why we need it or want to use it,” she told Nunatsiaq News. Goddard is the founder and executive director of AI Impact Alliance, a Montreal-based non-governmental organization that aims for broad engagement on, and the ethical use of, AI.

AI is the use of computers to perform tasks that usually need human intelligence, such as visual perception, decision making, speech recognition and language interpretation.

AI could help Nunavut artists with their “discoverability,” Goddard said, meaning how easily others can find their work online.

Some artists incorporate AI directly into their work, she added. For example, one Japanese dancer attached wires to her back whose signals were translated, through the use of AI, into notes on a piano.

But AI can go far beyond helping artists directly in their work, Goddard said.

“AI can automate repetitive tasks, and change our thought process.”

AI is already used for everything from the delivery of health care services to bridging distant geographical locations to involvement in deciding who gets to immigrate into a country, she said.

But AI can’t be creative: that is still a feat that only humans can accomplish. Humans input data and the AI outputs results, according to its design. How the AI is designed and what its results are used for should include broad public consultation, Goddard said.

For example, in China the national and regional governments are implementing a social credit system.

Nearly 200 million cameras around the country equipped with facial-recognition software track citizens’ movements. Citizens receive scores based on their behaviour, which can result in restrictions, such as whether they are allowed to fly or what schools they can attend.

Similar visual-recognition software is also used for a very different reason: to detect some kinds of cancer, Goddard said.

“That’s why engagement and governance of AI is so important. We live in a democratic country, not an authoritarian country. So we need to decide how AI should be used,” Goddard said.

These kinds of ethical questions will become more important and frequent with the development of AI technology, Goddard said. She referred to an article called “Two-Eyed AI” published by the Canadian Commission for the United Nations Educational, Scientific and Cultural Organization.

The author of that article, Dick Bourgeois-Doyle, promotes “Two-Eyed Seeing” as a way to approach ethical issues around AI.

Two-Eyed Seeing is an approach honed and advocated for by Mi’kmaw Nation elders Albert and Murdena Marshall, Bourgeois-Doyle wrote.

“The Two-Eyed principle asks that we see issues and life from one eye with the best of Indigenous ways of knowing and from the other eye with the best in the Western or mainstream ways of knowing,” he wrote.

This would ensure respectful collaboration, widespread empowerment and a future built on the best of humanity, he argued.

Inuit may recognize some similar values in Inuit Qaujimajatuqangit principles, such as piliriqatigiinniq (working together for a common cause), qanuqtuurniq (being innovative and resourceful), aajiiqatigiinniq (consensus decision making) and inuuqatigiitsiarniq (respecting others, caring for people.)

Goddard said her organization is not trying to bring AI to every Nunavut community.

“Our role is to ask communities, what AI tools could they use, to serve their needs?”

And artists are well-positioned as educators, culture creators and communicators to help shape the conversation around AI, Goddard said.

To join a national conversation on AI, Goddard said people can find “ArtImpact AI” on Facebook.
