Nonprofit OpenAI looks at the bill to craft a Holy Grail AGI, gulps, spawns commercial arm to bag …

Analysis OpenAI, a leading machine-learning lab, has launched for-profit spin-off OpenAI LP – so it can put investors’ cash toward the expensive task of building artificial general intelligence.

The San Francisco-headquartered organisation was founded in late 2015 as a nonprofit, with a mission to build, and encourage the development of, advanced neural network systems that are safe and beneficial to humanity.

It was backed by notable figures including killer-AI-fearing Elon Musk, who has since left the board, and Sam Altman, the former president of Silicon Valley VC firm Y Combinator. Altman stepped down as YC president last week to focus more on OpenAI.

Altman is now CEO of OpenAI LP. Greg Brockman, co-founder and CTO, and Ilya Sutskever, co-founder and chief scientist, are also heading over to the commercial side and keeping their roles in the new organization. OpenAI LP stated clearly that it wants to “raise investment capital and attract employees with startup-like equity.”

There is still a nonprofit wing, imaginatively named OpenAI Nonprofit, though it is a much smaller entity considering most of its hundred or so employees have switched over to the commercial side, OpenAI LP, to reap the benefits of its stock options.

“We’ve experienced firsthand that the most dramatic AI systems use the most computational power in addition to algorithmic innovations, and decided to scale much faster than we’d planned when starting OpenAI,” the lab’s management said in a statement this week. “We’ll need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.”

OpenAI refers to this odd split between OpenAI LP and OpenAI Nonprofit as a “capped-profit” company. The initial round of investors, including LinkedIn cofounder Reid Hoffman and Khosla Ventures, is in line to receive up to 100 times the amount invested from OpenAI LP’s profits, if everything goes to plan. Any excess funds afterwards will be handed over to the non-profit side. In order to pay back these early investors, and then some, OpenAI LP will therefore have to find ways to generate fat profits from its technologies.
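
In rough terms (ignoring employee equity and the timing of payouts, neither of which OpenAI has detailed), the announced cap splits profits as in this hypothetical sketch; the function name and figures are illustrative, not OpenAI's:

```python
def split_returns(investment, total_profit, cap_multiple=100):
    """Split profits between capped first-round investors and the nonprofit.

    Investors collect profits until they have received cap_multiple times
    their original stake; everything beyond that goes to OpenAI Nonprofit.
    """
    investor_cap = investment * cap_multiple
    to_investors = min(total_profit, investor_cap)
    to_nonprofit = total_profit - to_investors
    return to_investors, to_nonprofit

# A $10m stake capped at 100x pays out at most $1bn; the remaining
# $500m of a hypothetical $1.5bn profit flows to the nonprofit.
print(split_returns(10_000_000, 1_500_000_000))  # (1000000000, 500000000)
```

The corollary, of course, is that the nonprofit sees nothing until the for-profit side has generated returns a hundred times over.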

The “capped-profit” model has raised eyebrows. Several machine-learning experts told The Register they were somewhat disappointed by OpenAI’s decision. It once stood out among other AI orgs for its nonprofit status, its focus on developing machine-learning know-how independent of profit and product incentives, and its dedication to open-source research.

Now, for some, it appears to be just another profit-driven Silicon Valley startup stocked with well-paid engineers and boffins.

Conflict of interest?

“Most tech companies have business models that rely on secrecy or monopolistic protections like copyright and patents,” said Daniel Lowd, an associate professor at the department of computer and information science at the University of Oregon in the US.

“These practices are the opposite of openness. Already, there are concerns that companies like Google are patenting machine learning methods and that this could stifle innovation. A profit incentive is a conflict of interest.”

Rachel Thomas, co-founder of fast.ai and an assistant professor at the University of San Francisco’s Data Institute, agreed. “I already see OpenAI as similar to the research labs at major tech companies: they hire people from the same backgrounds; focus on publishing in the same conferences; are primarily interested in resource-intensive academic research.

“There is nothing wrong with any of this, but I don’t really see it as ensuring that AI is beneficial to humanity, nor as democratizing AI, which was their former mission statement. To me, forming OpenAI LP is just one more step to being indistinguishable from any other major tech company doing lots of academic, resource-intensive research.”

OpenAI’s commitment to openness, as its name would suggest, has already been called into question. Last month, it withheld a trained language model known as GPT-2, claiming it was too dangerous to release: the software is capable of translating text, answering questions, and generating prose from writing prompts.

At first glance, its output appears competent; however, it composes words with little or no regard for facts and balance, and descends into nonsense, contradictions, and repetition. It was feared that, if it fell into the wrong hands, the software would be used to churn out masses of fake news articles and product reviews, spam and phishing emails, instant messages, and other text that would fool humans. And thus, OpenAI suppressed it.

Some details about the system were published in a paper, along with a smaller version of the model; however, the full model was withheld. The AI community was split over this decision. One half supported OpenAI’s efforts to avoid causing any harm to society through the dissemination of its technology, while the other half thought it was hypocritical and harmful to research.

Now that OpenAI hopes to generate profits to pay back its investors in spades, will it sell and monetize this powerful GPT-2 software and similar work? We’re assured it has no plans.


OpenAI has tried to quash concerns that it will drift from its primary mission – fulfilling its grandiose humanity-benefiting charter – by insisting the for-profit company is ultimately controlled by the nonprofit’s board. That means whatever the for-profit side wants to do, the nonprofit board will ensure it abides by the charter, which states it will “build safe and beneficial artificial general intelligence,” by having the final say on any major decision.

“The mission comes first even with respect to OpenAI LP’s structure,” it stated. “We may update our implementation as the world changes. Regardless of how the world evolves, we are committed — legally and personally — to our mission.”

Looking for the Holy Grail

That mission is to develop so-called artificial general intelligence (AGI) that helps humanity and is safe. Such a system, seen only in science fiction so far, would be able to do pretty much anything a human could, and more. It is the Holy Grail of computer intelligence.

“OpenAI is an outstanding research organization, so it’ll be disappointing if they won’t be as open in the future,” Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, an AI research lab founded by the late Microsoft co-founder Paul Allen, told The Register. “AGI is still the stuff of science fiction and philosophical speculation. I see no evidence that anyone, OpenAI included, is making tangible progress towards it.”


We note that there may be people on the non-profit board – such as OpenAI LP staffers – overseeing the for-profit arm’s activities while holding a direct financial interest in seeing that arm succeed: OpenAI said “only a minority of board members are allowed to hold financial stakes in the partnership at one time.”

Some may feel this will still hinder the board’s ability to keep the for-profit side on the right side of the charter, putting safety and humanity ahead of money.

However, OpenAI stated that anyone with a conflict of interest will not be allowed to vote: “Only board members without such [financial] stakes can vote on decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict—including any decisions about making payouts to investors and employees.” When The Register asked how many people on the board were working on the for-profit and non-profit sides, we did not get a clear answer.

As mentioned above, OpenAI Nonprofit will only receive leftover profits after the initial investors get a 100x return on their investments. It’s hard to judge how likely it is that OpenAI LP will reach that target. OpenAI didn’t disclose how much the initial investments amounted to, but did tell us that it doesn’t have any immediate plans to make money.

“We don’t plan to monetize GPT-2 or other prior research projects at OpenAI. We’re focused on the mission,” a spokesperson said. “Though, this structure gives us flexibility to do other things if we want — always subject to approval from the board.” ®


AI Research Group OpenAI Plans To Save The World While Chasing Profits

OpenAI has decided to open a new for-profit arm of the famed research think tank in a shocking new twist. OpenAI was in the news recently when Elon Musk left the board of the world-leading research think tank in light of many disagreements. Sam Altman also recently resigned as the president of Y Combinator, a super incubator for startups, to focus on activities at OpenAI. In a sense, with this move, OpenAI transforms into a fancy AI startup with a world-class team.

What started as a nonprofit research lab, co-founded by Musk and Altman in a bid to save the world from the potential dangers of AI and to focus on the safety of artificial intelligence, has now transitioned into a for-profit enterprise. What products OpenAI could build as part of its portfolio remains to be seen. Natural language processing and reinforcement learning are some of the core strengths at OpenAI. The new company may well try selling these AI services to monetise them in the near term.


When the lab was set up, it made headlines for poaching some of the best researchers from leading universities, leaving a gap in the academic world. With world-class researchers and technologists already in place and a market for cutting-edge applications, this for-profit move will push the company to cater to the real-world needs of enterprises while allowing OpenAI to attract the best talent possible. OpenAI has suffered attrition as leading researchers like Pieter Abbeel and Andrej Karpathy left for more commercial roles where AI could be applied to real-world problems. The move towards profits and investors may encourage more world-class researchers and engineers to stay at OpenAI.

Altman’s vision of balancing capitalism with responsibility has resulted in the company trying out the “capped-profit” model. The statement from OpenAI on its blog says: “We’ve created OpenAI LP, a new “capped-profit” company that allows us to rapidly increase our investments in compute and talent while including checks and balances to actualize our mission.” The company pivots to this mode of operation in the hope of actualising its mission in a more fruitful way. OpenAI is focussed on making AGI (artificial general intelligence) safe and ensuring its broad adoption.

Even though the OpenAI statement said that daily work at OpenAI would remain the same, it is doubtful how it would make money without building end-to-end products. OpenAI says: “The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity.”

With a firm focus on profits, the company has become the most probable place where AGI will actually be developed and sold to businesses. With the motivation to raise billions of dollars in funding and make multiple times the funding amount, the company seems poised to serve enterprises that are struggling to get real value for their AI bucks. OpenAI, with its superior machine-learning capabilities, could become a competitor to major AI service companies once it decides which products to focus on.

OpenAI LP’s new structure allows its investors to make at most 100x their investment, while everything above that will go back to the non-profit entity, OpenAI Nonprofit. OpenAI realises that its core mission is unrealistic without a steady flow of huge amounts of capital. OpenAI as a company is late in embracing the AI markets, and building products for them will definitely be tough. OpenAI LP will have a long way to go to cultivate the ability to create market-ready products. Here is where Sam Altman comes in. Altman left his responsibilities at Y Combinator to lead the new OpenAI LP and bring some business to a company that has been in research mode for years.


An AI for generating fake news could also help detect it

Last month OpenAI rather dramatically withheld the release of its newest language model, GPT-2, because it feared it could be used to automate the mass production of misinformation. The decision also accelerated the AI community’s ongoing discussion about how to detect this kind of fake news. In a new experiment, researchers at the MIT-IBM Watson AI Lab and HarvardNLP considered whether the same language models that can write such convincing prose can also spot other model-generated passages.

The idea behind this hypothesis is simple: language models produce sentences by predicting the next word in a sequence of text. So if they can easily predict most of the words in a given passage, it’s likely it was written by one of their own.

The researchers tested their idea by building an interactive tool based on the publicly accessible downgraded version of OpenAI’s GPT-2. When you feed the tool a passage of text, it highlights the words in green, yellow, or red to indicate decreasing ease of predictability; it highlights them in purple if it wouldn’t have predicted them at all. In theory, the higher the fraction of red and purple words, the higher the chance the passage was written by a human; the greater the share of green and yellow words, the more likely it was written by a language model.
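
The mechanics can be illustrated with a toy stand-in. The real tool scores each word with GPT-2’s own probabilities; here a tiny bigram frequency model plays that role, and the function names and corpus are purely illustrative:

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus):
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predictability(model, passage, top_k=2):
    """Fraction of words the model would have ranked among its top-k guesses.

    High scores suggest model-like text (the green/yellow case); low scores
    mean the model would not have predicted these words (red and purple).
    """
    words = passage.split()
    if len(words) < 2:
        return 0.0
    hits = 0
    for prev, nxt in zip(words, words[1:]):
        top_guesses = [w for w, _ in model[prev].most_common(top_k)]
        if nxt in top_guesses:
            hits += 1
    return hits / (len(words) - 1)

model = build_bigram_model("the cat sat on the mat")
print(predictability(model, "the cat sat on the mat"))          # 1.0, looks model-generated
print(predictability(model, "quantum flux reversed polarity"))  # 0.0, looks unfamiliar
```

A passage the model could have produced itself scores near 1.0; text from an unfamiliar distribution scores near 0.0, a point that becomes important below.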

[Figure: A reading comprehension passage from a US standardized test, written by a human. Credit: Hendrik Strobelt and Sebastian Gehrmann]

[Figure: A passage written by OpenAI’s downgraded GPT-2. Credit: Hendrik Strobelt and Sebastian Gehrmann]

Indeed, the researchers found that passages written by the downgraded and full versions of GPT-2 came out almost completely green and yellow, while scientific abstracts written by humans and text from reading comprehension passages in US standardized tests had lots of red and purple.

But not so fast. Janelle Shane, a researcher who runs the popular blog Letting Neural Networks Be Weird and who was not involved in the research, put the tool to a more rigorous test. Rather than just feed it text generated by GPT-2, she fed it passages written by other language models as well, including one trained on Amazon reviews and another trained on Dungeons and Dragons biographies. She found that the tool failed to predict a large chunk of the words in each of these passages, and thus it assumed they were human-written. This points to an important insight: a language model might be good at detecting its own output, but not necessarily the output of others.

This story originally appeared in our AI newsletter The Algorithm.


OpenAI’s Mission to Benefit Humanity Now Includes Seeking Profit

OpenAI, an artificial intelligence research group created by Silicon Valley investors as a non-profit, will now be seeking “capped” profit, according to a blog post on the OpenAI website published Monday.

SpaceX and Tesla CEO Elon Musk, startup accelerator Y Combinator president Sam Altman, and several other Silicon Valley figures launched OpenAI in late 2015 with $1 billion in seed funding and the stated goal of ensuring that AI “benefits all of humanity.” Musk stepped down from OpenAI in February 2018. Since its founding, the group has conducted research with reinforcement learning, robotics, and language.

According to OpenAI, the original nonprofit entity will own a limited partnership called OpenAI LP that’s designed to give a “capped return” to investors and employees and funnel excess funds back to the nonprofit. It’s unclear exactly how OpenAI’s technologies will generate the value needed to provide these returns, but the blog notes that OpenAI is flexible enough to allow for a return “in the long term.”

“The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity,” the blog post reads.

OpenAI’s blog claims that returns for the first round of investors will be capped at 100 times their original investment, and returns to employees will be “negotiated in advance.”

Wired reported that the shift toward seeking a “capped profit” was an effort to compete with other corporate groups in AI research, like Google DeepMind. OpenAI is now an entity that exists in part to enrich venture capitalist investors, with extremely modest financial “caps.”

OpenAI’s shift toward seeking profit in order to both woo and remunerate investors isn’t surprising, considering the venture capitalist resumes of Musk, Altman, Peter Thiel, and others who helped get OpenAI off the ground.

Altman’s venture capital seed funding firm Y Combinator has invested in over 1,400 for-profit companies, including Reddit, Airbnb, Dropbox, and Stripe. Musk has poured billions of his own money into his companies SpaceX and Tesla. Thiel, a Trump supporter, has funded Silicon Valley companies SpaceX and Airbnb, and is a principal investor in Palantir, a data-mining company used by US banks and police departments to create a “digital dragnet” of individuals designed to track and potentially incriminate them. Musk and Thiel co-founded PayPal, and OpenAI backer Reid Hoffman was a PayPal executive before co-founding LinkedIn.

OpenAI claims that its shift to a capped-profit structure is specifically designed to nurture the development of “safe” artificial general intelligence (AGI), that is, human-like intelligence. The blog states, “Our structure gives us flexibility for how to create a return in the long term, but we hope to figure that out only once we’ve created safe AGI.”

Experts have argued that the concept of AGI is based on false or unprovable assumptions about the nature of intelligence, and is a conceptual myth. It’s also worth noting that “artificial intelligence” is often appropriated by startups to elicit hype. In many cases, “AI” is used as a buzzword for standard computer programming. According to a recent survey from London venture capital firm MMC, 40 percent of European startups that are classified as “AI companies” do not even use AI. While OpenAI is working with real AI as the term is commonly understood, the hype about AI will likely play into its financial interests.

OpenAI has some incremental wins under its belt, but it hasn’t accomplished any paradigm-shifting breakthroughs in AI research. In July 2018, OpenAI claimed that its Dactyl robot-hand system achieved dexterity and flexibility nearly, but not quite, comparable to that of a human hand. In August, OpenAI built neural networks that can beat humans at the game Dota 2 by basically cheating.

It’s not clear exactly how or if OpenAI’s “capped profit” structure will change things on a day-to-day level for researchers at the entity. But generally, we’ve never been able to rely on venture capitalists to better humanity.


The AI Nonprofit Elon Musk Founded and Quit Is Now For-Profit

OpenAI, the nonprofit artificial intelligence research organization founded by Elon Musk and Sam Altman — and which Musk later quit — is now legally a for-profit company.

The organization published a blog post on Monday announcing OpenAI LP, a new entity that it’s calling a “capped-profit” company. The company will still focus on developing new technology instead of selling products, according to the blog post, but it wants to make more money while doing so — muddying the future of a group that Musk founded to create AI that would be “beneficial to humanity.”

About Face

With the capped-profit structure that it created, investors can earn up to 100 times their investment but no more than that. The rest of the money that the company generates will go straight to its ongoing nonprofit work, which will continue as an organization called OpenAI Nonprofit.

Musk, who has since left the organization, founded OpenAI with Altman to create ethical, beneficial artificial intelligence to counter the potentially dangerous technology being built elsewhere. However, more recently, the organization built a realistic AI tool that could churn out convincing fake news articles.
