Growing Champions: How hope, fear and courage need to be part of your AI strategy

We are embarking on an exciting new economic era, one characterised by widespread adoption of artificial intelligence (AI). But what does that mean for your business? The truth is, as a society, we use the umbrella term AI to cover a broad range of technologies. Much of the excitement today revolves around the concept of Deep Learning — utilising large amounts of data to produce increasingly powerful predictive and analytical models.

Successfully embedding this opportunity in your organisation, however, demands something every bit as complex as artificial intelligence — the emotional intelligence to guide it. That key component is touched on in the MIT Sloan Management Review and Boston Consulting Group’s report, Artificial Intelligence in Business Gets Real.

If you want AI to get real in your business, you need to recognise the emotional resonance such technology has with human counterparts who form the foundation of your organisation’s success.

AI is an emotional opportunity

AI is fundamental to the future of the world economy, so much so it took centre stage at this year’s annual meeting of the World Economic Forum (WEF) in Davos. With the heavy snow of this iconic Swiss town pressing against the buildings, thought leaders and industry experts from around the world wrestled with how the AI revolution could be empowered to unlock its greatest value for humanity yet.

Like any truly human consideration, I would argue that emotion will be key to unlocking the greatest value of AI for business and society. Hope, fear and courage will play a fundamental role in the future of AI.

Hope: There’s no doubt that the opportunity AI represents is staggering. The MIT and BCG report highlights the potential of this technology to reshape entire business ecosystems, with over 91% of respondents predicting new business value from AI implementation in the coming five years. An overwhelming 82% of those surveyed believed AI would also help improve organisational efficiency.

Yet, so often, we speak of AI in terms of this “opportunity” to be realised. But what opportunity embodies at the most elementary level is hope. The revolutionary nature of AI functionality offers powerful solutions to some of the world’s most pressing challenges. It is providing the tools to revolutionise clean energy materials vital for tackling climate change. It’s offering insight to help us address global gender pay gaps in business. It’s unlocking new capabilities in the fight against money laundering. These are human concerns, concerns with direct and material impacts upon business, and a powerful representation of that thread of hope that runs through the core of this transformation.

Fear: Any period of transition is accompanied by an inevitable fear of change. The AI revolution is no different, and recognising this challenge is a crucial component in any enterprise being prepared to tackle it.

In Artificial Intelligence in Business Gets Real, we reveal that 47% of workers believe their workforce will be reduced in the coming five years due to AI. Yet the intensity of these concerns diverges starkly throughout an organisation, with low-level operational and clerical staff far more likely than C-suite executives to worry about this transition. These fears in many ways hark back to sensational calls of “the robots coming for our jobs”, with echoes of the first, machine-driven industrial revolution travelling down the ages. It is a fear that is deep-rooted, and cannot be ignored.

Fear is a two-sided coin when it comes to AI. It’s not just fear of what implementation may bring, but fear of the consequences that improper implementation may have. There have been a number of high-profile horror stories of AI gone “rogue”; indeed, Sylvain Duranton, senior partner and managing director of BCG Gamma, explored how AI can wreak havoc if unchecked by humans in a recent piece. With the correct processes, AI can provide an unparalleled opportunity for business, but fears around poor implementation have been heightened by cases of reputational harm and financial loss.

Courage: In the conflict between fear and opportunity, courage may well be the key to steering our emerging AI revolution towards its full potential.

Enterprises with the courage to tackle the fear of displacement and lost employment lay the groundwork for their own success. Commitment to retraining and reskilling provides not only the opportunity to benefit from enhanced talent within your organisation, but also an avenue to tackle anxiety emerging from perceptions of roles becoming obsolete.

AI also has the potential to eliminate menial tasks from employees’ workloads, leaving them free to engage in more value-generating and rewarding activities. Courage then requires industry to be open about the transformation of employment that AI could bring, while highlighting the benefits it will unlock along the way.

This focus on courage is also at the forefront of surmounting fears around improper AI implementation. Let us be honest — fear can be an important emotion. But it is one that should add caution to our actions, not inhibit them altogether. We must have the courage to transform that fear into a guiding force which provides valuable lessons to steer AI implementation.

Capitalising on courage

The benefits of courage are already being realised by early adopters within the AI space — those pioneers who have successfully undertaken pilot cases, developed AI expertise, and have extensively adopted AI within their organisations.

Indonesian tech unicorn Go-Jek is utilising machine learning technologies to optimise its dynamic pricing models so as to better respond to driver behaviour, customer behaviour, and even the weather. This has helped the company reduce infrastructure costs while enhancing its customer service. Malaysia’s Grab is following a similar route, utilising Microsoft’s Azure platform to enhance user recommendations and improve facial recognition for identification. Both these use cases highlight the hope that AI can deliver improved processes to support the human workforce, and the courage to embrace new opportunities within innovative business frameworks. That agility will be key to ensuring business-wide AI success.

Such market-leading enterprises are now pushing ahead with deeper commitments to the AI opportunity. Ninety percent of pioneers surveyed as part of the BCG and MIT report already have an AI strategy in place, and 72% are focused on revenue increases from AI in the next five years.

What do you need to move forward with your own AI opportunity? Here are four fundamental questions to start.

1. Do you have data?

2. Can you use that data?

3. Is the data organised in a usable way?

4. Who owns the rights to this data?

The courage of early adopters raises important questions for those lagging behind. AI is at the forefront of a technological revolution set to transform global business opportunity. In this emotionally charged environment, where hope of opportunity and fear of change meet, do you have the courage to keep up?


Ching Fong Ong is a senior partner and managing director at Boston Consulting Group, Kuala Lumpur.


Significance of AI to Businesses in Today’s Economy

“Emerging technology” is a term very much thrown around these days, especially as most of the business sector looks not only to cut expenditure but also to improve the quality of its offerings. Artificial Intelligence (AI) is a leading emerging technology whose significance is growing by the day. A lot of companies are evaluating ways of leveraging the technology, if not just thinking about it.

Integrating algorithms into the business model

Interestingly, nowhere does AI seem as crucial as in the entrepreneurial sector. Unlike established businesses, the small businesses that entrepreneurs run lack stable economies of scale. That is to say, these businesses are not equipped to deal with certain shocks, especially from the external environment. Further, they do not have the financial muscle to sufficiently deal with certain aspects of business, like hiring the best talent available.

In this light, it is imperative that entrepreneurs learn to improvise and make technology work for them. AI is doing just that! A lot of entrepreneurs are trying to figure out ways of integrating algorithms into their business model. Notably, this not only helps a small business maintain a lean staff but also helps it maximize output.

AI, alongside other emerging technological concepts like Machine Learning and Deep Learning, is defining much of the world’s business environment. However, it is clear that these new concepts are still a little too technical for business owners who would like a go at them. In spite of that, there is an evident thirst for know-how on making the technology work for businesses.

“Entrepreneurs should have clear priorities and a clear framework on how the technology will be helpful.”

Vinita Bansal, social entrepreneur at Speaker City

Clarity of purpose will unlock the true potential of AI

Vinita Bansal is a social entrepreneur who runs Speaker City, a public speaking startup in India. Bansal is among the new-age entrepreneurs transforming the business environment, not just in India but globally. She finds it a blessing that AI came up at a time when she could make it work for her. Nonetheless, Bansal cautions that AI is beneficial only if it aligns with your business model.

Regarding entrepreneurs who would like to integrate the new technology into their businesses, Bansal says, “Entrepreneurs should have clear priorities and a clear framework on how the technology will be helpful.” In this sense, business owners must be clear about the outcomes they desire from their business as a result of integrating AI.

“AI has great potential in terms of increasing the productivity of businesses, especially in the entrepreneurial sense,” she says. “However, knowing how best to deploy the technology will determine if it delivers the potential optimally. Basically, entrepreneurs should first figure out how the technology will grasp the concepts they want to implement.”

Mitigating job displacement

Interestingly, the intrigue does not end there. Evaluating the potential of this emerging technology raises an issue that is becoming a vital subject of debate. It is apparent that AI boosts efficiency and productivity, and hence helps businesses substantially cut operational costs. However, it is also true that AI takes over more and more jobs that would otherwise keep the world’s workforce employed.

The manufacturing industry is at the center of this debate, given the number of people that depend on these jobs for their livelihood. Tripti Gupta owns AADYA Fashions, a fashion house that sources materials and manufactures clothes in a variety of designs. Gupta unequivocally says that AI will revolutionize the labor market. For manufacturers like herself, it will be untenable to settle for human plant operators who are expensive to maintain and prone to errors. Instead, she is more likely to go for intelligent machines that can accomplish her objectives quickly, cheaply and with precision.

“The business sector should begin looking for ways to mitigate job displacement as a result of the adoption of AI before it becomes a menace and disrupts the business environment,” Gupta concludes.

Jean-Francois Gagné, CEO at the AI-based software company Element AI, told Via News that where AI is concerned, “the job displacement topic is a fair one.” In a five- to ten-year time span, we can expect “AI taking on more the responsibility on regular mundane tasks and even some cognitive tasks,” Gagné concludes.

Jean-Francois Gagné, CEO at Element AI, an artificial intelligence software company. Photo by: Via News.

As for businesses wanting to implement AI capabilities, Jean-Francois Gagné, explains that “the first thing to do is have a clear idea of the objective. The second step is understanding how to get an AI system to learn about the context and the signal leading to the desired outcome.”



Myths of Big Data, Analytics & AI

In this special guest feature, Nikhil Bhatia, Director of Product Management at Riversand Technologies, addresses some of the common myths and misconceptions around the areas of Big Data, Analytics & AI and presents a pragmatic approach and some best practices to apply these technologies in today’s competitive world. Nikhil is one of the key leads for conceptualization, design and development of the Riversand Data Platform and Apps which enable customers to discover, manage, analyze and govern Master Data in their organization using Big Data, Cloud, Analytics and AI technologies. In addition, he is also involved in strategy, business development, sales and marketing functions. His educational background includes an MBA from one of India’s leading business schools and degrees in Information Technology and IPR Law.

With the confluence of growth in data, the computing power to process that data and the democratization of AI technologies in the cloud, any organization can avail itself of the benefits of Big Data, Analytics and AI to improve its business outcomes. But these should not be considered a “magic” solution that can solve any business problem an organization might have. This article addresses some of the common myths and misconceptions around these areas and presents a pragmatic approach and some best practices to apply Analytics & AI in today’s competitive world.

Myth: BigData Technologies are better and cheaper than traditional technologies

Although the allure of BigData is that data too large or complex for traditional systems can be managed using commodity hardware and open-source (hence cheaper) technologies, the reality is far from this myth. It takes tremendous effort, skill and resources to really operationalize open-source BigData technologies to solve real-world problems. Organizations should also understand that BigData technologies are not for solving all kinds of problems. Because they are built for large scale, these technologies can’t really handle smaller data sets. For some problems a smaller data set is enough, so applying BigData technologies wouldn’t be appropriate or necessary.

Myth: Every data problem can be solved just by using Analytics or AI

The best value from Analytics and AI can be realized after framing the right problem. The business value of the problem has to be understood and directly related to cost or revenue for the organization. Typically, a problem which requires a significant amount of time and effort by the organization interpreting information to gain knowledge is a prime candidate to generate value using Analytics and AI. That being said, sometimes the simple answer may still be to change a process or way of working which reduces the information itself rather than automate the interpretation of it. Say an organization is collecting invoices or negotiating promotional terms through email and wants to automate the reconciliation process. It might actually be better and easier to implement a new collaboration tool to raise and manage invoices or promotions rather than to implement an AI solution to comb through the emails and automatically interpret this information.

Myth: The better technology you use, the better the value of AI you will realize

There are three major components to AI: the data, a mathematical model, and the software used to generate and run that model. The way AI works is that, by running data through the software, a model is discovered and evolved. The software for AI today is not as packaged as traditional software, and hence there is a plethora of tools and frameworks, both open source and paid, built by software giants such as Google and Microsoft as well as by startups, that can be used to develop AI models. Hence the main goal in sight should always be to create a transferable, packaged model that solves a specific business problem, not the choice of technology.
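The idea that the deliverable is a transferable, packaged model rather than a particular framework can be sketched in plain Python. The single-feature perceptron below is purely illustrative (no specific vendor tooling is implied); the point is that the artefact worth keeping is the small, serialisable model that the software discovers from the data:

```python
import pickle

def train_perceptron(data, labels, epochs=20, lr=0.1):
    """Discover a model by repeatedly running the data through
    the software: a minimal single-feature perceptron."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x
            b += lr * (y - pred)
    return {"w": w, "b": b}   # the transferable, packaged model

def predict(model, x):
    return 1 if model["w"] * x + model["b"] > 0 else 0

model = train_perceptron([0.0, 1.0, 2.0, 3.0], [0, 0, 1, 1])

# The artefact worth shipping is the fitted model, not the
# training code or framework: it can be serialised, moved and
# reloaded independently of how it was produced.
blob = pickle.dumps(model)
restored = pickle.loads(blob)
```

Any real project would use a mature framework rather than this toy, but the separation holds: the training software is interchangeable, while the packaged model is what carries the business value.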

Myth: AI and Analytics in and of themselves generate value

It is always AI and Analytics plus “something” which provides this value to an organization. Say a retail organization uses Analytics and AI to discover bottlenecks in the approval process for products it wants to sell online. That is useful information and insight, but the final solution, whether a better workflow, increased resources or increased automation in the approval process, is what is going to provide the actual value. In addition, value from AI and Analytics is not created on day one and may feel underwhelming at the beginning. Any value that is generated will typically be greater than what the organization has today, so it’s a start. These solutions get better with time and usage, and hence such initiatives require patience and executive sponsorship. AI is basically math done a different way, so the right problem and the right expectations are important. Instead of chasing a breakthrough, the organization should focus on solving practical day-to-day problems.

In conclusion, there is a misconception in the market that anybody and everybody can take a bunch of open-source tools and create an AI solution which will provide immense value and completely change the way an organization operates. It’s great that people are talking about and getting excited about the potential of AI. However, the reality is that operationalizing AI without a comprehensive data platform is going to be nearly impossible. Organizations must have a data platform foundation which can scale, is hybrid in nature and has the ability to consume all kinds and volumes of data.

One of the biggest issues is that even though the core processing technologies for AI and Analytics may be fast and scalable, the rest of the pipeline that consumes and moves data remains slow, and this bottleneck prevents results in real time. Some 80% of the work in Analytics and AI is around collecting, cleansing, preparing and munging data. Unless the same data stores and pipelines are used by everybody in the organization, the multiple, separate efforts undertaken by different teams for different purposes are just a waste of time and resources. Not to mention that the organization will never realize the true benefit and value that AI can provide when done properly. Organizations need to map out their data strategy first (and ensure they have a solid data foundation) before they embark on the promises of BigData, AI and Analytics.
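To make the “80% of the work” point concrete, here is a minimal, hypothetical sketch of the cleansing and preparation step described above; the field names are invented for illustration, but this is exactly the kind of logic that should live in one shared pipeline rather than be rebuilt separately by every team:

```python
def cleanse(records):
    """Minimal data-preparation sketch: trim whitespace, normalise
    case, drop records missing the key field, and de-duplicate."""
    seen, clean = set(), []
    for rec in records:
        sku = (rec.get("sku") or "").strip().upper()
        if not sku:        # missing key field: drop the record
            continue
        if sku in seen:    # duplicate: keep only the first copy
            continue
        seen.add(sku)
        clean.append({"sku": sku, "name": (rec.get("name") or "").strip()})
    return clean

raw = [
    {"sku": " ab-1 ", "name": "Widget"},
    {"sku": "AB-1", "name": "Widget"},    # duplicate after normalisation
    {"sku": None, "name": "Orphan row"},  # unusable: no key
]
print(cleanse(raw))  # [{'sku': 'AB-1', 'name': 'Widget'}]
```

Trivial as it looks, multiplying such rules across hundreds of fields and sources is where most Analytics and AI effort actually goes, which is why a shared data platform pays for itself.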



Tackling artificial intelligence using architecture

Artificial intelligence (‘AI’) is sneaking more and more into our daily activities. Anyone using Google, Facebook or a Microsoft product knows this. It’s far from perfect, but it’s improving at a quick pace. Not every enterprise is adopting AI at the same pace, however. Has your organization started looking into AI yet? Do you have any clue how to tackle and implement AI in your organization? How should your enterprise and business architects examine AI? Where should they start? This article will try to answer these questions using a wealth management example.

What is artificial intelligence?

The first mention of artificial intelligence was about 60 years ago, and AI has been defined in several ways. The 10-minute video below, “What Is Artificial Intelligence Exactly?,” explains AI very well and elaborates on a few definitions:

I also find Wikipedia’s definition very appropriate:

“Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.”

Much of the recent enthusiasm about AI has been the consequence of developments in deep learning, which is based on learning data representations with neural networks, as opposed to task-specific algorithms. Deep learning can be supervised, semi-supervised or unsupervised. Deep learning networks can now easily have over ten layers, with simulated neurons running into the millions, as mentioned in “The promise and challenge of the age of artificial intelligence.”
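For readers wanting a concrete picture, a deep network of the kind described is just many layers of weighted sums followed by non-linearities. This pure-Python sketch uses arbitrary illustrative weights; a real network learns them from data and stacks far more, and far wider, layers:

```python
def relu(v):
    """The non-linearity applied after each layer."""
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    """One dense layer: each neuron takes a weighted sum of all inputs."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """A 'deep' network is simply many such layers applied in sequence."""
    for weights, biases in layers:
        x = relu(layer(x, weights, biases))
    return x

# Two tiny layers with made-up weights; deep networks can have
# over ten layers and millions of simulated neurons.
net = [
    ([[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]),  # 2 inputs -> 2 neurons
    ([[1.0, 1.0]], [0.0]),                    # 2 neurons -> 1 output
]
print(forward([1.0, 2.0], net))  # approximately [0.9]
```

Training, i.e. adjusting those weights from data, is where supervised, semi-supervised and unsupervised approaches differ; the forward pass above is common to all of them.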

The deployment challenge

Not everyone has the deep pockets and the technical know-how of a Google, Facebook or Microsoft. Artificial intelligence will most likely provide value, but its development, implementation and practical use are, and will remain, a real challenge for most enterprises, not to mention most public organizations. Technical know-how and resources are scarce. Securing the rights to, accessing and then analysing existing data will continue to be an issue in some circumstances. Finally, positive results from concrete artificial intelligence initiatives may take longer to materialize than anticipated.

As mentioned by Andrew Ng, founder of Google Brain, in a recent article in Forbes:

Artificial intelligence, sandboxing and certification in Malta

Sunday, November 18, 2018, 10:42 by Ian Gauci

Barely hours after the new legislative framework comprising three Acts – the Virtual Financial Assets Act (VFAA), the Malta Digital Innovation Authority Act (MDIAA) and the Innovative Technology Arrangements and Services Act (ITASA) – came into force, Malta appointed a task force to deal with Artificial Intelligence and implement a national AI strategy.

The Innovative Technology Arrangements and Services Act regulates smart contracts, Decentralised Autonomous Organisations (DAOs) and elements of distributed or decentralised ledger technologies, a popular example of which is the blockchain. These can be voluntarily submitted for recognition to the MDIA. Prior to this stage, such innovative technology arrangements must be reviewed by a systems auditor, one of the services outlined as an Innovative Service under the ITASA.

The systems auditor must review the innovative technology arrangement based on recognised standards, in line with quality and ethical regulations and based on five key principles – security, process integrity, availability, confidentiality and protection of personal data. These have been reinforced by guidelines issued by the MDIA in conjunction with the provisions of the ITASA. These guidelines will also be further amplified very soon to cater for enhanced elements of systems audit in instances that would merit a deeper audit and analysis in critical areas of activity.

The MDIA here makes sure that the blueprint of an innovative technology, and thus its functionality, meets the desired outcomes. It is making sure that the technology and the algorithm (code) can be trusted to achieve the desired outcome. The MDIA is also establishing criteria for dressing up the code, culminating in certification, which could ultimately also be used by courts in cases of software/code liability.

The coming into force of the new legal framework sees not the end of a journey, but the beginning of an immense chapter, which can also be extended to AI, with some minor amendments in the law to cater for non-distributed ledger technology (DLT) elements in the definition of innovative technology arrangements.

The origins of AI can be traced back to the 18th century, from Thomas Bayes and George Boole to Charles Babbage, who designed the first programmable mechanical computer. Alan Turing, in his classic essay ‘Computing Machinery and Intelligence’, also imagined the possibility of computers created for simulating intelligence. John McCarthy in 1956 coined the definition of AI as ‘the science and engineering of making intelligent machines’. An AI essentially is built on software, thus composed of algorithms and code, and can also follow the same path of certification under the remit of the MDIA.

In this instance, given the intelligence and automated output of the code, the MDIA might request enhanced system audits as well as more structured certification criteria based on a unique and novel regulatory sandbox for certification. The Regulatory Sandbox here can be used to develop an environment in which AI and its underlying logic and code are able to function according to pre-determined functional outputs in a testing environment.

This testing silo will have a controlled environment, with a limited number of participants, within the predetermined implementation periods. The Regulatory Sandbox will not only look at the social and economic viability of the AI/code being proposed for certification but also at how this fits in with current enterprise or societal use and the eventual changes that would need to be made. It can also be used to verify the level of adaptability and adherence to principles like the Asilomar AI Principles or other principles which the MDIA would want to apply.

The House of Lords in the UK earlier this year did suggest an enhanced set of ethical principles in their report. These principles are aimed at the safe creation, use and existence of AI and include among others: Transparency (ascertaining the cause if an AI system causes harm); Value Alignment (aligning the AI system’s goals with human values); and Recursive Self-Improvement (subjecting AI systems with abilities to self-replicate to strict safety and control measures).

One could also visualise this sandbox and the system auditors functioning as a small portal into AI’s so-called black box problem, and thus the ability to assess functionality and code response before the AI is deployed and thus certified.

If Malta wants to shape the future of AI, then it should focus on building certainty around black box environments

This is not a novel concept in the industry. Apple, for example, uses a sandbox approach for the Mac OS’s graphical interface as well as for its apps to protect systems and users by limiting the privileges of an app to its intended functionality, increasing the difficulty for malicious software to compromise the user’s systems. In the case of certain AI it is envisaged that, aside from verifying concrete properties of the code, there would also need to be the creation of a safe layer within the same sandbox which makes sure that the code interacts and functions correctly.

The AI sandbox would thus need to be modelled according to the particular use of the AI and its functionality blueprint, creating an operational environment based on the blueprint and architecture, with the execution, operation and processes of the functions and thus the emanating certification criteria. These would be tested in a controlled environment to make sure the AI, and hence the code, has the required qualities to be deployed and used.

This would entail creating clear sandbox criteria consisting of the development hardware, software/code, data, tools, interfaces, and policies necessary for starting an analytical operational deep learning practice in pre-determined environments. These pre-determined environments would need to have sufficient data and modular guard rails with inbuilt adversarial models. This would hypothetically allow the algorithms to be let loose and thus react to unexpected outcomes, including facing unexpected opposition.

The sandbox in this instance would be able to meter the resultant reactions due to the intelligent and autonomous nature of the code. It is anticipated that such a sandbox would also need an element of proportionality and flexibility so that, as far as possible, it does not limit the use of some AI technologies, such as neural networks, as that might stifle technological innovation.

If Malta wants to shape the future of AI, then it should focus on building certainty around black box environments. It should stimulate creators and inventors with opportunities to build and deploy cutting-edge code within sound certification criteria and parameters.

Ultimately it should lead to an environment that develops an ethical framework for inclusive and diverse AI with voluntary certification criteria to avoid obscure and unwarranted outcomes such as that of the HAL 9000 AI in 2001: A Space Odyssey.

Ian Gauci is a partner at GTG Advocates and Caledo. He lectures on Legal Futures and Technology at the University of Malta.

This article is intended for general information purposes and does not constitute legal advice.

Acknowledgment

Thanks to Prof. Gordon Pace for his guidance and valuable inputs.
