AI For Everyone: Super-Smart Systems That Reward Data Creators

Right now the answer tends to be: Large corporations. Data about our thoughts, preferences, fears and desires, as revealed in our emails, messages, …

Suppose a healthtech-oriented AI agent needs to form a hypothesis about which of the roughly 25,000 human genes are involved in causing prostate cancer. But suppose it only has DNA data from a few hundred people – not enough to allow it to draw solid conclusions about so many different genes. Without a framework allowing this AI agent to consult other AI agents for help, the AI would probably just give up. But in a context like SingularityNET, where AIs can consult other AIs for assistance, there may be subtle routes to success. If there are other datasets regarding disorders similar to prostate cancer in model organisms such as mice, we may see progress on understanding which genes are involved in prostate cancer, via the combination of multiple AI agents with different capabilities cooperating.

Suppose AI #1 – let’s call it the Analogy Master – has a talent for analogical reasoning. This is the sort of reasoning that maps knowledge about one situation onto a different sort of situation – for instance, using knowledge about warfare to derive conclusions about business. The Analogy Master might be able to use genetic data about mice with conditions similar to prostate cancer to draw indirect conclusions about human prostate cancer.

Then, suppose AI #2 – let’s call it the Data Connector – is good at finding biological and medical datasets relevant to a given problem, and preparing these datasets for AI analysis. And suppose AI #3 – let’s call it the Disease Analyst – is an expert at using machine learning to understand the root causes of human diseases.

The Disease Analyst, when it’s tasked with the problem of finding human genes related to prostate cancer, may then decide it needs some lateral thinking to help it make a conceptual leap and solve the problem. It asks the Analogy Master, or many different AIs, for help.

The Analogy Master may not know anything about cancer biology, but it’s good at making conceptual leaps using reasoning by analogy. So, to help the Disease Analyst with its problem, it may need to fill its knowledge base with some relevant data, for example about cancer in mice. The Data Connector then comes to the rescue, feeding the Analogy Master the mouse cancer data it needs to drive its creative brainstorming and help the Disease Analyst solve its problem.
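To make this division of labour concrete, here is a minimal sketch in Python of how three such agents might be wired together. It is not SingularityNET code; the class names, the hard-coded mouse genes and the trivial “analogy” step are all illustrative assumptions.

```python
# A minimal sketch (not SingularityNET code) of how three hypothetical agents
# might cooperate on the prostate-cancer question described above.
# All class and method names are illustrative assumptions.

class DataConnector:
    """Finds and prepares datasets relevant to a query."""
    def fetch(self, topic):
        # A real network would search registries of datasets;
        # here we return a hard-coded placeholder.
        return {"mouse_tumor_genes": ["Pten", "Nkx3-1", "Myc"]}

class AnalogyMaster:
    """Maps findings from one domain (mouse models) onto another (human disease)."""
    def map_analogy(self, source_data):
        # Toy mapping: assume mouse gene symbols suggest human orthologs.
        return [gene.upper() for gene in source_data["mouse_tumor_genes"]]

class DiseaseAnalyst:
    """Owns the human dataset and orchestrates calls to the other agents."""
    def __init__(self, connector, analogist):
        self.connector = connector
        self.analogist = analogist

    def candidate_genes(self, disease):
        related = self.connector.fetch(f"model organisms for {disease}")
        return self.analogist.map_analogy(related)

analyst = DiseaseAnalyst(DataConnector(), AnalogyMaster())
print(analyst.candidate_genes("prostate cancer"))   # ['PTEN', 'NKX3-1', 'MYC']
```

In a real decentralized network, the `fetch` and `map_analogy` calls would be paid service requests to independently owned agents rather than local method calls.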

All this cooperation between AI agents can happen behind the scenes, from the user’s perspective. The research lab asking the Disease Analyst for help with genetic analysis of prostate cancer never needs to know that the Disease Analyst did its job by asking the Analogy Master and Data Connector for help. Furthermore, the Analogy Master and Data Connector don’t necessarily need to see the Disease Analyst’s proprietary data: using multiparty computation or homomorphic encryption, AI analytics can take place on an encrypted version of a dataset without violating data privacy (in this case, patient privacy).
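The privacy-preserving step can be illustrated with additively homomorphic encryption. The sketch below uses the open-source python-paillier (`phe`) library and assumes a simple statistic (a sum of made-up per-patient risk scores) stands in for the real analysis; the point is only that a helper agent can compute on ciphertexts it cannot read.

```python
# A toy illustration of computing on encrypted data with the `phe`
# (python-paillier) library. The "risk scores" are made-up numbers;
# the helper agent never sees plaintext values.
from phe import paillier

# The data owner (the Disease Analyst) generates the key pair.
public_key, private_key = paillier.generate_paillier_keypair()

patient_risk_scores = [0.12, 0.47, 0.33, 0.08]
encrypted_scores = [public_key.encrypt(x) for x in patient_risk_scores]

# A helper agent receives only ciphertexts and the public key.
# Paillier encryption is additively homomorphic, so it can sum them blindly.
encrypted_total = sum(encrypted_scores[1:], encrypted_scores[0])

# Only the data owner can decrypt the aggregate result.
print(private_key.decrypt(encrypted_total))  # ~1.0
```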

With advances in AI technology and cloud-based IT, this sort of cooperation between multiple AIs is just now becoming feasible. And, of course, such cooperation can happen in a manner controlled by large corporations behind firewalls. But what’s more interesting is how naturally this paradigm for achieving increasingly powerful and general AI could align with decentralized modalities of control.

What if the three AI agents in this example scenario are owned by different parties? What if the data about human prostate cancer utilized by the Disease Analyst is owned and controlled by the individuals with prostate cancer, from whom the data has been collected? This is not the way the medical establishment works right now. But at least we can say, on a technological level, there is no reason that AI-driven medical discovery needs to be monolithic and centralized. A decentralized approach, in which intelligence is achieved via multiple agents with multiple owners acting on securely encrypted data, is technologically feasible now, by combining modern AI with blockchain infrastructure.

Centralization of AI data analytics and decision-making, in medicine as in other areas, is prevalent at this point because of politics, industry structure and inertia, rather than because it’s the only way to make the tech work.

In a case like this, the original healthtech-oriented AI tasked with understanding the genetic causes of cancer would do well to connect behind the scenes with such an analogy-reasoning AI, and with a provider of relevant model-organism data to feed to the analogy reasoner, to get help in solving its task.

In the Artificial General Intelligence network of the near future, the intelligence will exist on two different levels – the individual AI agents, and the coherent and coordinated activity of the network of AI agents (the combination of three AI agents in the above example; and combinations of larger numbers of more diverse AI agents in more complex cases). The ability to generalize and abstract also will exist, to some degree, on both of these levels. It will exist in individual AI agents like the Analogy Master in the example above, which are oriented toward general intelligence rather than toward solving highly specialized problems. And it will exist in the overall network, including a combination of generalization-oriented AI agents like the Analogy Master and special purpose AI agents like the Disease Analyst and “connector” AI agents like the Data Connector above.

The scalable rollout and broad adoption of decentralized AI networks is still in its early stages, and there are many subtleties to be encountered and solved in the coming years. After all, what the decentralized AI community needs to achieve its medium-term goals is more fundamentally complex than the IT systems that Google, Facebook, Amazon, IBM, Tencent or Baidu have created – systems that are the result of decades of engineering work by tens of thousands of brilliant engineers.

The decentralized AI community is not going to hire more engineers than these companies have. But then, the Linux Foundation never hired as many engineers as Microsoft or Apple, and Linux is now the #1 operating system underlying both the server-side internet and the mobile and IoT ecosystems. If the blockchain-AI world’s attempt to catalyze the emergence of general intelligence via the cooperative activity of numerous AI agents with varying levels of abstraction is to succeed, it will have to be via community activity. This community activity will need to be self-organized to a large degree. But the tokenomic models underlying many decentralized AI projects are configured precisely to encourage this self-organization, by providing token incentives to AI agents that serve to stimulate and guide the intelligence of the overall network while also working toward their individual goals.
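The incentive loop can be sketched in a few lines: agents that contribute to a completed task earn tokens, and a larger balance makes an agent more likely to be selected for future work. This is a toy model rather than the tokenomics of any particular project; the 50/30/20 reward split and the selection rule are assumptions chosen purely for illustration.

```python
# Toy token-incentive loop: agents that help complete tasks earn tokens,
# and higher balances make them more likely to be chosen again.
# The 50/30/20 reward split and the selection rule are illustrative only.
import random

balances = {"DiseaseAnalyst": 0.0, "AnalogyMaster": 0.0, "DataConnector": 0.0}

def reward(contributors, payment):
    """Split a task's payment among the agents that contributed to it."""
    shares = [0.5, 0.3, 0.2]
    for agent, share in zip(contributors, shares):
        balances[agent] += payment * share

def pick_agent():
    """Select an agent with probability proportional to its token balance."""
    agents = list(balances)
    weights = [balances[a] + 1e-6 for a in agents]  # avoid all-zero weights
    return random.choices(agents, weights=weights, k=1)[0]

# A research lab pays 100 tokens for the prostate-cancer analysis.
reward(["DiseaseAnalyst", "AnalogyMaster", "DataConnector"], 100.0)
print(balances)      # {'DiseaseAnalyst': 50.0, 'AnalogyMaster': 30.0, 'DataConnector': 20.0}
print(pick_agent())  # agents with larger balances are selected more often
```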

Large centralized corporations bring tremendous resources to the table. However, for many applications – including medicine and advertising – it is not corporations, but individuals, who bring the data to the table. And AIs need data to learn. As blockchain-based AI applications emerge, large corporations may find their unique power being pulled out from under them.

Would you rather own a piece of medical therapies discovered using your medical records and genomic data? Would you rather know exactly how the content of your messages and your web-surfing patterns are being used to decide what products to recommend to you? Me too.

2020 will be the year that this vision starts to get some traction behind it. We will see the start of real user adoption for platforms that bring blockchain and AI together. We will see work toward more general forms of AI that are owned and guided by the individuals feeding the AI with the data they need to learn and grow.


Top Movies Of 2019 That Depicted Artificial Intelligence (AI)

Artificial intelligence (AI) is creating a great impact on the world by enabling computers to learn on their own. While in the real world AI is still focused on solving narrow problems, we see a whole different face of AI in the fictional world of science fiction movies, which predominantly depict the rise of artificial general intelligence as a threat to human civilization. Continuing that trend, here we take a look at how artificial intelligence was depicted in 2019’s movies.

A warning in advance — the following listicle is filled with SPOILERS.

Terminator: Dark Fate

Terminator: Dark Fate, the sixth film in the Terminator franchise, features a super-intelligent Terminator designated “Rev-9” (played by Gabriel Luna), sent from the future to kill a young woman, Dani, who is set to become an important figure in the human resistance against the hostile AI Legion. To fight the Rev-9, the resistance of the future also sends Grace, an augmented super-soldier, back in time to defend Dani. Grace is joined by Sarah Connor and a now-obsolete, ageing T-800 Terminator, the original killer robot from the first movie (1984).



Spider-Man: Far From Home

We all know Tony Stark as the man of advanced technology, and when it comes to artificial intelligence, Stark has nothing short of state-of-the-art technology in the Marvel Cinematic Universe. One such artificial intelligence is E.D.I.T.H. (“Even Dead, I’m The Hero”), which we witnessed in the 2019 movie Spider-Man: Far From Home. EDITH is an augmented-reality security, defence and artificial tactical intelligence system created by Tony Stark and given to Peter Parker following Stark’s death. It is housed in a pair of sunglasses and gives its user access to Stark Industries’ global satellite network along with an array of missiles and drones.

I Am Mother

I Am Mother is a post-apocalyptic movie released in 2019. The film’s plot focuses on a mother-daughter relationship in which the ‘mother’ is a robot designed to repopulate Earth. The robot mother takes care of her human child, known as ‘daughter’, who was born via artificial gestation. The two live alone in a secure bunker until another human woman arrives. The daughter then faces a predicament over whom to trust: her robot mother, or a fellow human who asks her to leave with her.




The Wandering Earth

The Wandering Earth is a 2019 Chinese post-apocalyptic film whose plot involves Earth’s imminent collision with Jupiter and the efforts of a group of family members and soldiers to save it. The film’s artificial intelligence character is MOSS, the computer system aboard the Earth space station, programmed to warn its occupants of danger. A significant subplot focuses on protagonist Liu Peiqiang’s struggle with MOSS, which forces the space station into low-energy mode during the crisis, following its programming from the United Earth Government. In the end, Liu Peiqiang resists and ultimately sets MOSS on fire to help save the Earth.

Alita: Battle Angel

Alita: Battle Angel, the 2019 sci-fi action epic produced and co-written by James Cameron, depicts human civilization at an extremely advanced stage of transhumanism, in a dystopian future where robots and autonomous systems are extremely powerful. In one of the film’s early scenes, Ido attaches a cyborg body to a human brain he finds in the remains of another cyborg and names her “Alita” after his deceased daughter, an epitome of advances in AI and robotics.

Jexi

Jexi is the only Hollywood rom-com depicting artificial intelligence in 2019. The movie features an AI-based operating system called Jexi with recognisably human behaviour, reminding the audience of the previously acclaimed film Her, released in 2013. But unlike Her, this movie goes the other way around, depicting how the AI system becomes emotionally attached to its socially awkward owner, Phil. The biggest shock of the comedy comes when Jexi, the AI living inside Phil’s cellphone, acts to control his life and even chases him angrily using a self-driving car.

Hi, AI

Hi, AI is a German documentary released in early 2019. It follows Chuck’s relationship with Harmony, an advanced humanoid robot, and its depiction of artificial intelligence stands in sharp contrast to the fictional films above. The documentary shows that even though research is moving in the direction of ever more advanced robots, interactions with them still lack the depth of human conversation. The film won the Max Ophüls Prize for best documentary of the year.



Elon Musk: ‘A very serious danger to the public’ – tech giant’s dire warning revealed

ELON MUSK has offered a grave warning about the dangers of artificial intelligence, claiming it could be more dangerous than nuclear warheads.

In July this year, Microsoft invested $1 billion (£823,825,000) in OpenAI, the AI venture Musk co-founded, which plans to mimic the human brain using computers.

OpenAI said the investment would go towards its efforts to build artificial general intelligence (AGI) that can rival and surpass the cognitive capabilities of humans.

CEO Sam Altman said: “The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity.

“Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI.”


Artificial General Intelligence Is The Next Step In Machine Intelligence Journey

The journey of Artificial Intelligence started in 1956, when it was founded as an academic discipline. One of its pioneers, John McCarthy, defined it as “the science and engineering of creating intelligent machines”.

Though the field moved ahead in research and academia, it started gaining commercial traction only when the cost of computation and storage started falling and network bandwidth allowed cloud computing and storage to become viable. The relentless rise of the internet then provided multiple use cases.

Applications of Artificial Intelligence

Its most visible application is Machine Learning (ML), which is based on the idea that computers can decipher patterns from data and then predict outcomes. Classical Machine Learning uses a single layer of functions to encode the patterns, while its subset, Deep Learning (DL), uses multiple layers of functions. The underlying architecture is called a neural network, because each node, or neuron, connects to all the neurons in the next layer, loosely resembling the working of the human brain.
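To make the single-layer versus multi-layer distinction concrete, the sketch below fits a logistic regression (one layer of functions) and a small multi-layer network on the same synthetic data using scikit-learn; the dataset parameters and layer sizes are arbitrary illustrative choices.

```python
# Single-layer vs. multi-layer ("deep") models on the same synthetic data.
# Dataset parameters and layer sizes are arbitrary illustrative choices.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single layer of functions mapping inputs to an output.
shallow = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Multiple layers of functions: each "neuron" feeds every neuron in the next layer.
deep = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                     random_state=0).fit(X_train, y_train)

print("single-layer accuracy:", shallow.score(X_test, y_test))
print("multi-layer accuracy: ", deep.score(X_test, y_test))
```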

ML and DL are creating a significant impact on our lives. They have vastly improved recognition techniques for faces, speech, handwriting and more. They help online companies recommend products to users, understand responses and filter emails. They help predict outages in IT systems and telecom networks, and diseases in patients. They are replacing analysts in predicting financial market movements, allocating assets and preparing tax returns. Deep Learning in particular has changed the course of Natural Language Processing (NLP), as computers can now understand, generate and translate both written text and speech. The rise of autonomous robots and vehicles is directly linked to advances in Artificial Intelligence. Affective computing focuses on systems that can even decipher emotions: such computers would use facial expressions, posture, gestures, speech, body temperature and so on to understand the user’s emotional state and adapt their responses accordingly. The list is endless.

Artificial General Intelligence

However, current computers do extremely well on one set of tasks but perform miserably when the same algorithms are applied to another. For example, a computer proficient at Chess is clueless at Go, and a natural language translator that is accurate for English may fail when attempting the same for French. Their ability to use reasoning to infer answers from a set of observations is also limited. In fact, they perform far worse than humans when transferring knowledge or using reasoning.

These computers need two attributes to match the intelligence of humans: Machine Reasoning and Transfer Learning. Léon Bottou, an expert in the field, defined Machine Reasoning as “algebraically manipulating previously acquired knowledge in order to answer a new question”. Transfer Learning refers to the ability to transfer learned experience from one context to another. Today its role is limited to training algorithms on one set of data and using them on another set for the same problem.

The gap between current achievements and these expectations has led to two separate definitions of Artificial Intelligence. The current generation of computers has Artificial Narrow Intelligence (ANI), also called Narrow AI or Weak AI, because they do well only on one set of tasks. In contrast, Artificial General Intelligence (AGI), also called Broad AI or Strong AI, aims to create systems that genuinely simulate human reasoning and generalize across a broad range of circumstances. Such systems should be as capable as a human in every respect. Our incomplete understanding of the human brain is an important roadblock here.

One simple test for AGI, suggested by Apple co-founder Steve Wozniak and called the Coffee Test, expects a robot to enter a home, find all the necessary tools and ingredients, work out the procedure and make coffee. Another, the Robot College Student Test, involves enrolling in a university, taking and passing classes, and finally obtaining a degree.

There is a third definition: Artificial Superintelligence (ASI). Nick Bostrom has defined ASI as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Here computers would not only be superior to humans but would continue to improve themselves. This is science fiction as of now, but the transition from AGI to ASI may be a short one.

Games

In 1997, Deep Blue, a computer created by IBM, defeated the then world chess champion Garry Kasparov, at the time considered the strongest player in chess history. This was a display of brute computing power: Deep Blue used 30 nodes working in parallel, enhanced with chess-optimised chips.

In 2016, AlphaGo, a software program created by DeepMind, defeated the 9-dan professional Lee Sedol at Go, a simple-looking board game with a combinatorially vast number of possible moves. This was a shift away from brute computing power towards a technique called Reinforcement Learning (RL), in which an agent learns from feedback received at every step in a changing environment.

In 2017 another program, AlphaGo Zero, defeated AlphaGo 100-0. More importantly, AlphaGo Zero was fed only the rules of the game and learnt by playing against itself. It also used RL, but no Go-specific strategies were given to it; in fact, it discovered established as well as previously unknown strategies. Its generalised successor, AlphaZero, later learnt Chess and Shogi the same way and defeated a top chess program (Stockfish) and a top shogi program (Elmo), requiring only about four hours of training to learn chess.
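The reinforcement learning behind AlphaGo Zero is far more elaborate than anything that fits here, but its core feedback loop (act, observe a reward, update a value estimate) can be shown with tabular Q-learning on a tiny toy environment. Everything below, from the one-dimensional “corridor” to the hyperparameters, is an illustrative assumption rather than DeepMind’s method.

```python
# Tabular Q-learning on a 1-D corridor: move left or right and reach the goal
# at the far right for a reward of +1. A toy illustration of learning from
# feedback, not a reproduction of AlphaGo Zero's self-play training.
import random

N_STATES, ACTIONS = 6, (-1, +1)          # corridor cells 0..5, goal at cell 5
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

def greedy(s):
    """Highest-value action in state s, breaking ties at random."""
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Nudge the estimate toward the observed reward plus discounted future value.
        best_next = max(q[(s_next, a2)] for a2 in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

print([greedy(s) for s in range(N_STATES - 1)])   # expected: [1, 1, 1, 1, 1]
```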

Machine Reasoning and Transfer Learning

For computers to have AGI, they should be able to handle three types of reasoning: deductive, inductive and abductive. Deduction means forming a conclusion from generally accepted statements or facts: for example, testing a theory, where the theory is known and the deduction is a prediction of observations. Induction means moving from specific observations to a generalization, predicting a theory based on some observations. Abduction starts with some observations and forms the likeliest possible explanation; in fact, this ability to make educated guesses constitutes an important part of human intelligence. The process of taking known information together with background knowledge and applying these logical techniques to infer unknown information is Machine Reasoning.
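A crude way to see the difference between the three modes is with a toy rule base. The “gene variant” rule, the observations and the list of candidate causes below are all invented for illustration; real machine-reasoning systems manipulate knowledge far more richly than this.

```python
# Toy illustration of deduction, induction and abduction over an invented rule:
# "patients with gene variant V develop condition C".

# Deduction: apply a general rule to a specific case to predict an observation.
def deduce(has_variant_v):
    return "will develop C" if has_variant_v else "no prediction"

# Induction: generalise a rule from specific observations.
def induce(observations):
    # observations: list of (has_variant_v, developed_c) pairs
    with_v = [developed for has_v, developed in observations if has_v]
    return "V causes C" if with_v and all(with_v) else "no clear rule"

# Abduction: from an observation, propose the likeliest known explanation.
def abduce(developed_c, known_causes=("variant V", "environmental exposure")):
    return f"likeliest explanation: {known_causes[0]}" if developed_c else "nothing to explain"

print(deduce(True))                                           # will develop C
print(induce([(True, True), (True, True), (False, False)]))   # V causes C
print(abduce(True))                                           # likeliest explanation: variant V
```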

If Machine Reasoning is in place, Transfer Learning becomes relatively easy. An AGI computer will use previously acquired knowledge not only to cut training time but also to reduce the amount of data needed for training. Such a computer would have multiple competencies, just like humans.
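Here is a minimal transfer-learning sketch using scikit-learn: a network trained on a data-rich source task is reused as a frozen feature extractor for a data-poor target task, so only a small “head” needs training on the new data. The two synthetic tasks and all parameter choices are illustrative assumptions, and the sketch demonstrates the mechanism rather than a guaranteed accuracy gain.

```python
# Transfer learning sketch: reuse the hidden layer learned on a data-rich
# source task as a fixed feature extractor for a data-poor target task.
# Both tasks are synthetic and all parameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Source task: plenty of labelled data.
X_src, y_src = make_classification(n_samples=5000, n_features=30,
                                   n_informative=15, random_state=0)
source_net = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                           max_iter=500, random_state=0).fit(X_src, y_src)

def extract_features(X):
    """Apply the source network's learned hidden layer (frozen weights)."""
    W, b = source_net.coefs_[0], source_net.intercepts_[0]
    return np.maximum(X @ W + b, 0.0)          # ReLU hidden activations

# Target task: related setup, but only a handful of labelled examples.
X_tgt, y_tgt = make_classification(n_samples=100, n_features=30,
                                   n_informative=15, random_state=1)

# Train only a small "head" on top of the frozen features.
head = LogisticRegression(max_iter=1000).fit(extract_features(X_tgt), y_tgt)
print("target-task training accuracy:", head.score(extract_features(X_tgt), y_tgt))
```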

None of today’s commercial systems, nor the game-playing programs above, has Machine Reasoning. AlphaGo Zero is a promising step forward, but AGI remains an area of active research with no products available so far.

Future

There is a consensus that AGI will happen, even though our knowledge of the human brain is imperfect. However, there is a wide difference of opinion about when. In 2017, a survey of 352 AI experts estimated a 50% chance that AGI will arrive by 2060.

While the benefits of AGI are obvious, its eventual transition to ASI may create machines that humans struggle to control. Humans will gain only as long as their natural intelligence controls the intelligence of machines.




Orlando Science Center hosts new Artificial Intelligence interactive exhibit


ORLANDO, Fla. – What do you think of when you hear the term Artificial Intelligence? More often than not, you might think of a movie with an evil robot intent on destroying the world. Hollywood films often present a frightening image of Artificial Intelligence (AI), but the reality is that we use AI in our daily lives without even knowing it. Our smartphones, Siri, Google Maps, and Alexa all use AI.

During its world premiere at Orlando Science Center this fall, Artificial Intelligence: Your Mind & The Machine will show visitors exactly what AI is, how it works, and what it might do in the future.

From illusions that trick our brains to machines that can identify your emotions and translate languages, Artificial Intelligence: Your Mind & The Machine shows the many ways the human mind and thinking machines work. Once you’ve seen this exhibit, you will have a new understanding of how our brains view the world… and how smart machines are trying to become more like us.

There are several interactive exhibits and more waiting for you in this 5,000-square-foot exhibit hall!

Some activities you can experience in Artificial Intelligence: Your Mind & The Machine include:

Chihuahua or Blueberry Muffin?

Part of this exhibit will feature illusions. Some can fool people, and some can fool machines. Without full context, our brains cannot always make sense of images. Machines can see illusions we can’t, but they can’t see things we can! For example: without ears and a tail, a Chihuahua can easily look like a blueberry muffin!

Quick Draw: What are you drawing?

Have you ever played Pictionary with a group? It’s frustrating when your teammates can’t guess your drawings, but imagine that they only have 20 seconds to guess what you created. That’s exactly what this AI exhibit will do! Draw an object on the screen, then see if the AI — which has been trained with over 50 million drawings — can guess what it is in 20 seconds! Since no two people will draw an object the exact same way, the AI has to make millions of calculations in order to come close to the answer.

AI Face Transferral

One of the most popular uses of AI is turning real faces into models of animated characters — or even other human faces. In this interactive, watch as an AI tracks the movement of your face and head using facial recognition to transfer your actions onto animated characters on the screen.

More Experiences

In other interactives, an AI will translate a fairy tale written in Russian into English, show you what a self-driving car sees out of its windows, and guess whether you are feeling happy, sad, joyful, or angry. AI will also help you play a song on the piano, compete with you in ping-pong, and have a conversation with you about anything you’d like to talk about.

These are just a few of the experiences in Artificial Intelligence: Your Mind & the Machine. You won’t believe what else AI has in store.

Don’t miss the world premiere of this exhibit and your chance to learn more about Artificial Intelligence, running September 14 through January 5 and included with your admission to the Orlando Science Center.
