Suppose a healthtech-oriented AI agent needs to form a hypothesis about which of the roughly 25,000 human genes are involved in causing prostate cancer. But suppose it only has DNA data from a few hundred people – not enough to draw solid conclusions about so many different genes. Without a framework for consulting other AI agents, the AI would probably just give up. But in a context like SingularityNET, where AIs can ask other AIs for assistance, there may be subtle routes to success. If other datasets exist regarding disorders similar to prostate cancer in model organisms such as mice, we may see progress on understanding which genes are involved in human prostate cancer, via multiple AI agents with different capabilities cooperating.
Suppose AI #1 – let’s call it the Analogy Master – has a talent for analogical reasoning. This is the sort of reasoning that maps knowledge about one situation onto a different sort of situation – for instance, using knowledge about warfare to derive conclusions about business. The Analogy Master might be able to use genetic data about mice with conditions similar to prostate cancer to draw indirect conclusions about human prostate cancer.
Then, suppose AI #2 – let’s call it the Data Connector – is good at finding biological and medical datasets relevant to a certain problem, and preparing these datasets for AI analysis. And then suppose AI #3 – let’s call it the Disease Analyst – is expert at using machine learning for understanding the root causes of human diseases.
The Disease Analyst, when tasked with the problem of finding human genes related to prostate cancer, may decide it needs some lateral thinking to help it make a conceptual leap and solve the problem. It asks the Analogy Master – or perhaps several different AIs – for help.
The Analogy Master may not know anything about cancer biology, though it’s good at making conceptual leaps using reasoning by analogy. So, to help the Disease Analyst with its problem, it may need to fill its knowledge base with some relevant data, for example about cancer in mice. The Data Connector then comes to the rescue, feeding the Analogy Master the mouse-cancer data it needs to drive its creative brainstorming and helping the Disease Analyst solve its problem.
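The delegation pattern among the three agents can be sketched in code. Everything below is a hypothetical illustration – the agent classes, the shared registry, and the message format are inventions for this example, not any real SingularityNET API:

```python
# Hypothetical sketch of the three-agent workflow described above.
# Agent names, message formats, and the registry are illustrative.

class Agent:
    def __init__(self, name, registry):
        self.name = name
        self.registry = registry
        registry[name] = self

    def ask(self, agent_name, task, payload):
        # Delegate a sub-task to another agent in the network.
        return self.registry[agent_name].handle(task, payload)

class DataConnector(Agent):
    def handle(self, task, payload):
        # Locate and prepare a dataset relevant to the request.
        if task == "find_dataset":
            return {"organism": "mouse",
                    "condition": payload["condition"],
                    "records": ["gene_A", "gene_B", "gene_C"]}  # toy data

class AnalogyMaster(Agent):
    def handle(self, task, payload):
        if task == "analogize":
            # Fetch model-organism data first, then map it onto the
            # human problem by (toy) analogy.
            data = self.ask("data_connector", "find_dataset",
                            {"condition": payload["condition"]})
            return [g.replace("gene", "human_gene") for g in data["records"]]

class DiseaseAnalyst(Agent):
    def handle(self, task, payload):
        if task == "find_genes":
            # Not enough local data: ask for lateral, analogy-based help.
            return self.ask("analogy_master", "analogize",
                            {"condition": payload["condition"]})

registry = {}
DataConnector("data_connector", registry)
AnalogyMaster("analogy_master", registry)
analyst = DiseaseAnalyst("disease_analyst", registry)

candidates = analyst.handle("find_genes", {"condition": "prostate cancer"})
print(candidates)  # candidate human genes inferred via the mouse analogy
```

The point of the sketch is that the Disease Analyst’s caller never sees the two sub-delegations – exactly the behind-the-scenes cooperation the example describes.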
All this cooperation between AI agents can happen behind the scenes from a user perspective. The research lab asking the Disease Analyst for help with genetic analysis of prostate cancer never needs to know that the Disease Analyst did its job by asking the Analogy Master and Data Connector for help. Furthermore, the Analogy Master and Data Connector don’t necessarily need to see the Disease Analyst’s proprietary data, because using multiparty computation or homomorphic encryption, AI analytics can take place on an encrypted version of a dataset without violating data privacy (in this case, patient privacy).
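To make the privacy claim concrete, here is a toy sketch of additive secret sharing, one basic building block of multiparty computation. The modulus, share counts, and data values are illustrative assumptions; real MPC protocols add much more (networking, malicious security, and so on):

```python
# Toy additive secret sharing: each patient's value is split into
# random shares, so no single share reveals the value, yet the shares
# can still be combined to compute an aggregate statistic.
import random

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def split_into_shares(secret, n_parties):
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)  # shares sum to secret
    return shares

def reconstruct(shares):
    return sum(shares) % MODULUS

# Three patients' (toy) measurements, never revealed directly.
values = [42, 17, 99]
per_patient_shares = [split_into_shares(v, 3) for v in values]

# Each compute party sums the one share per patient it holds,
# seeing only random-looking numbers...
party_sums = [sum(col) % MODULUS for col in zip(*per_patient_shares)]

# ...and only the final aggregate is ever reconstructed.
total = reconstruct(party_sums)
print(total)  # 158 = 42 + 17 + 99, computed without exposing any one value
```

This is the sense in which analytics can run over data that no single party sees in the clear.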
With advances in AI technology and cloud-based IT, this sort of cooperation between multiple AIs is just now becoming feasible. And, of course, such cooperation can happen in a manner controlled by large corporations behind firewalls. But what’s more interesting is how naturally this paradigm for achieving increasingly powerful and general AI could align with decentralized modalities of control.
What if the three AI agents in this example scenario are owned by different parties? What if the data about human prostate cancer utilized by the Disease Analyst is owned and controlled by the individuals with prostate cancer, from whom the data has been collected? This is not the way the medical establishment works right now. But at least we can say, on a technological level, there is no reason that AI-driven medical discovery needs to be monolithic and centralized. A decentralized approach, in which intelligence is achieved via multiple agents with multiple owners acting on securely encrypted data, is technologically feasible now, by combining modern AI with blockchain infrastructure.
Centralization of AI data analytics and decision-making, in medicine as in other areas, is prevalent at this point due to political and industry structure reasons and inertia, rather than because it’s the only way to make the tech work.
In such a scenario, the original healthtech-oriented AI tasked with understanding the genetic causes of cancer does well to connect behind the scenes with an analogy-reasoning AI, and with a provider of relevant model-organism data to feed the analogy reasoner, in order to solve its task.
In the Artificial General Intelligence network of the near future, the intelligence will exist on two different levels – the individual AI agents, and the coherent and coordinated activity of the network of AI agents (the combination of three AI agents in the above example; and combinations of larger numbers of more diverse AI agents in more complex cases). The ability to generalize and abstract also will exist, to some degree, on both of these levels. It will exist in individual AI agents like the Analogy Master in the example above, which are oriented toward general intelligence rather than toward solving highly specialized problems. And it will exist in the overall network, including a combination of generalization-oriented AI agents like the Analogy Master and special purpose AI agents like the Disease Analyst and “connector” AI agents like the Data Connector above.
The scalable rollout and broad adoption of decentralized AI networks is still in its early stages, and there are many subtleties to be encountered and solved in the coming years. After all, what the decentralized AI community needs to achieve its medium-term goals is fundamentally more complex than the IT systems that Google, Facebook, Amazon, IBM, Tencent or Baidu have created – and those systems are the result of decades of engineering work by tens of thousands of brilliant engineers.
The decentralized AI community is not going to hire more engineers than these companies have. But then, the Linux Foundation never hired as many engineers as Microsoft or Apple, and Linux is now the #1 operating system underlying both the server side of the internet and the mobile and IoT ecosystems. If the blockchain-AI world’s attempt to catalyze the emergence of general intelligence via the cooperative activity of numerous AI agents with varying levels of abstraction is to succeed, it will have to be via community activity. This community activity will need to be self-organized to a large degree. But the tokenomic models underlying many decentralized AI projects are configured precisely to encourage this self-organization, by providing token incentives to AI agents that serve to stimulate and guide the intelligence of the overall network while also working toward their individual goals.
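One simple way such token incentives could work is to split a requester’s payment among the agents that contributed to a task, in proportion to their contributions. The splitting rule and the contribution scores below are illustrative assumptions, not any actual project’s tokenomic model:

```python
# Hypothetical token-incentive sketch: divide a task's reward among
# cooperating agents in proportion to recorded contribution scores.

def split_reward(total_tokens, contributions):
    total_score = sum(contributions.values())
    return {agent: total_tokens * score / total_score
            for agent, score in contributions.items()}

# Toy contribution scores for the prostate-cancer task in the example.
contributions = {
    "disease_analyst": 5,  # coordinated the task and ran the analysis
    "analogy_master": 3,   # supplied the cross-species conceptual leap
    "data_connector": 2,   # located and prepared the mouse dataset
}

payouts = split_reward(100.0, contributions)
print(payouts)  # the disease analyst earns half of the 100 tokens
```

Under a rule like this, an agent earns more by making the whole network smarter – which is the self-organizing pressure the tokenomics aims to create.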
Large centralized corporations bring tremendous resources to the table. However, for many applications – including medicine and advertising – it is not corporations, but individuals, who bring the data to the table. And AIs need data to learn. As blockchain-based AI applications emerge, large corporations may find their unique power being pulled out from under them.
Wouldn’t you rather own a piece of the medical therapies discovered using your medical records and genomic data? Wouldn’t you rather know exactly how the content of your messages and your web-surfing patterns are being used to decide what products to recommend to you? Me too.
2020 will be the year that this vision starts to get some traction behind it. We will see the start of real user adoption for platforms that bring blockchain and AI together. We will see work toward more general forms of AI that are owned and guided by the individuals feeding those AIs the data they need to learn and grow.