Significance of AI to Businesses in Today’s Economy

“Emerging technology” is a term thrown around a great deal these days, especially as much of the business sector looks not only to cut expenditure but also to improve the quality of its offerings. Artificial Intelligence (AI) is a leading emerging technology whose significance grows by the day. A lot of companies are evaluating ways of leveraging the technology, or at least thinking about it.

Integrating algorithms into the business model

Interestingly, nowhere does AI seem as crucial as in the entrepreneurial sector. Unlike established businesses, the small businesses that entrepreneurs run lack stable economies of scale. That is to say, they are not positioned to absorb certain shocks, especially from the external environment. Further, they do not have the financial muscle to handle certain aspects of business, such as hiring the best available talent.

In this light, it is imperative that entrepreneurs learn to improvise and make technology work for them. AI is doing just that: a lot of entrepreneurs are trying to figure out ways of integrating algorithms into their business models. Notably, this not only helps a small business maintain a lean staff but also helps it maximize output.

AI, alongside related emerging concepts like Machine Learning and Deep Learning, is defining much of the world’s business environment. However, these concepts remain somewhat technical for business owners who would like a go at them. In spite of that, there is an evident thirst for the know-how to make the technology work for businesses.

“Entrepreneurs should have clear priorities and a clear framework on how the technology will be helpful.”

Vinita Bansal, social entrepreneur at Speaker City

Clarity of purpose will unlock the true potential of AI

Vinita Bansal is a social entrepreneur who runs Speaker City, a public-speaking startup in India. Bansal is among the new-age entrepreneurs who are transforming the business environment, not just in India but globally. She finds it a blessing that AI came up at a time when she could make it work for her. Nonetheless, Bansal cautions that AI is beneficial only if it aligns with your business model.

Regarding entrepreneurs who would like to integrate the new technology into their businesses, Bansal says, “Entrepreneurs should have clear priorities and a clear framework on how the technology will be helpful.” In this sense, business owners must be clear about the outcomes they desire from their business as a result of integrating AI.

“AI has great potential in terms of increasing the productivity of businesses, especially in the entrepreneurial sense,” she says. “However, knowing how best to deploy the technology will determine if it delivers the potential optimally. Basically, entrepreneurs should first figure out how the technology will grasp the concepts they want to implement.”

Mitigating job displacement

Interestingly, the intrigue does not end there. Evaluating the potential of the emerging technology points to what is becoming a vital subject of debate. It is apparent that AI boosts efficiency and productivity, and hence helps businesses substantially cut operational costs. However, it is also true that AI is taking over more and more jobs that would otherwise keep the world’s workforce employed.

The manufacturing industry is at the center of this debate, given the number of people who depend on its jobs for their livelihood. Tripti Gupta owns AADYA Fashions, a fashion house that sources materials and manufactures clothes in a variety of designs. Gupta unequivocally says that AI will revolutionize the labor market. For manufacturers like herself, it will be untenable to settle for human plant operators who are expensive to maintain and prone to errors. Instead, she is more likely to opt for intelligent machines that can accomplish her objectives quickly, cheaply and flawlessly.

“The business sector should begin looking for ways to mitigate job displacement as a result of the adoption of AI before it becomes a menace and disrupts the business environment,” Gupta concludes.

Jean-Francois Gagné, CEO at the AI-based software company Element AI, told Via News that where AI is concerned, “the job displacement topic is a fair one.” In a five-to-ten-year time span, we can expect “AI taking on more of the responsibility for regular mundane tasks and even some cognitive tasks,” Gagné concludes.

Jean-Francois Gagné, CEO at Element AI, an artificial intelligence software company. Photo by: Via News.

As for businesses wanting to implement AI capabilities, Jean-Francois Gagné explains that “the first thing to do is have a clear idea of the objective. The second step is understanding how to get an AI system to learn about the context and the signal leading to the desired outcome.”

Myths of Big Data, Analytics & AI

In this special guest feature, Nikhil Bhatia, Director of Product Management at Riversand Technologies, addresses some of the common myths and misconceptions around the areas of Big Data, Analytics & AI and presents a pragmatic approach and some best practices to apply these technologies in today’s competitive world. Nikhil is one of the key leads for conceptualization, design and development of the Riversand Data Platform and Apps which enable customers to discover, manage, analyze and govern Master Data in their organization using Big Data, Cloud, Analytics and AI technologies. In addition, he is also involved in strategy, business development, sales and marketing functions. His educational background includes an MBA from one of India’s leading business schools and degrees in Information Technology and IPR Law.

With the confluence of growth in data, the computing power to process that data and the democratization of AI technologies in the cloud, any organization can avail itself of the benefits of Big Data, Analytics and AI to improve its business outcomes. But these should not be considered a “magic” solution that can solve any business problem an organization might have. This article addresses some of the common myths and misconceptions around these areas and presents a pragmatic approach and some best practices for applying Analytics & AI in today’s competitive world.

Myth: Big Data technologies are better and cheaper than traditional technologies

The allure of Big Data is that data too large or complex for traditional tools can be managed using commodity hardware and open-source (hence cheaper) technologies, but the reality is far from this myth. It takes tremendous effort, skill and resources to truly operationalize open-source Big Data technologies to solve real-world problems. Organizations should also understand that Big Data technologies are not for solving every kind of problem. Because they are built for large scale, these technologies can’t really handle smaller data sets. For some problems a smaller data set is enough, so applying Big Data technologies wouldn’t be appropriate or necessary.

Myth: Every data problem can be solved just by using Analytics or AI

The best value from Analytics and AI can be realized after framing the right problem. The business value of the problem has to be understood and directly related to cost or revenue for the organization. Typically, a problem which requires a significant amount of time and effort by the organization interpreting information to gain knowledge is a prime candidate to generate value using Analytics and AI. That being said, sometimes the simple answer may still be to change a process or way of working which reduces the information itself rather than automate the interpretation of it. Say an organization is collecting invoices or negotiating promotional terms through email and wants to automate the reconciliation process. It might actually be better and easier to implement a new collaboration tool to raise and manage invoices or promotions rather than to implement an AI solution to comb through the emails and automatically interpret this information.

Myth: The better the technology you use, the more value you will realize from AI

There are three major components to AI: the data, the model (typically mathematical) and the software used to generate and run the model. AI works by running data through the software so that a model can be discovered and evolved. The software for AI today is not as packaged as traditional software, so there is a plethora of tools and frameworks for developing AI models: open source and paid, built by software giants such as Google and Microsoft as well as by startups. Hence the main goal in sight should always be a transferable, packaged model that solves a specific business problem, not the technology used to build it.
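As an illustrative sketch (not drawn from the article), the three components can be separated in a few lines of Python. The data, the threshold-learning routine and the pickled artifact here are all hypothetical stand-ins, but they show the point: what you ship is the packaged model, not the tooling.

```python
import pickle

# Component 1: the data -- labeled examples (value, label).
data = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]

# Component 2: the software that discovers a model from the data.
def fit_threshold(examples):
    """Learn a single cut-off that separates the two labels."""
    lo = max(x for x, y in examples if y == 0)
    hi = min(x for x, y in examples if y == 1)
    return {"threshold": (lo + hi) / 2}

# Component 3: the model itself -- a transferable, packaged artifact.
model = fit_threshold(data)
packaged = pickle.dumps(model)      # this artifact is what gets shipped

restored = pickle.loads(packaged)
predict = lambda x: int(x > restored["threshold"])
print(predict(3.0), predict(8.5))   # -> 0 1
```

The serialized blob can move between systems regardless of which framework produced it, which is the sense in which the model, not the technology, is the deliverable.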

Myth: AI and Analytics in and of themselves generate value

It is always AI and Analytics plus “something” that provides this value to an organization. Say a retail organization uses Analytics and AI to discover bottlenecks in the approval process for products it wants to sell online. That is useful information and insight, but the final solution, whether a better workflow, more resources or more automation in the approval process, is what provides the actual value. In addition, value from AI and Analytics is not created on day one and may feel underwhelming at the beginning. Any value that is generated will typically be greater than what the organization has today, so it’s a start. These solutions get better with time and usage, so such initiatives require patience and executive sponsorship. AI is basically math done a different way, so the right problem and the right expectations are important. Instead of chasing a breakthrough, the organization should focus on solving practical day-to-day problems.

In conclusion, there is a misconception in the market that anybody and everybody can take a bunch of open-source tools and create an AI solution that will provide immense value and completely change the way an organization operates. It’s great that people are talking about and getting excited about the potential of AI. However, the reality is that operationalizing AI without a comprehensive data platform is nearly impossible. Organizations must have a data platform foundation that can scale, is hybrid in nature and can consume all kinds and volumes of data.

One of the biggest issues is that even though the core processing technologies for AI and Analytics may be fast and scalable, the rest of the pipeline that consumes and moves data remains slow, and this bottleneck prevents results from arriving in real time. Around 80% of the work in Analytics and AI is collecting, cleansing, preparing and munging data. Unless the same data stores and pipelines are used by everybody in the organization, the multiple, separate efforts undertaken by different teams for different purposes are just a waste of time and resources, and the organization will never realize the true benefit and value that AI can provide when done properly.

Organizations need to map out their data strategy first (and ensure they have a solid data foundation) before they embark on the promises of Big Data, AI and Analytics.
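A minimal Python sketch of that “80%”: the `cleanse` helper below is hypothetical, standing in for the kind of shared data-preparation step the author argues every team should reuse from a common pipeline rather than rebuild separately.

```python
def cleanse(records):
    """Trim whitespace, normalize case, drop empties, de-duplicate.

    A stand-in for one shared cleansing step in a common pipeline.
    """
    seen, out = set(), []
    for r in records:
        r = r.strip().lower()
        if r and r not in seen:
            seen.add(r)
            out.append(r)
    return out

raw = ["  Alice ", "BOB", "alice", "", "bob  "]
print(cleanse(raw))  # -> ['alice', 'bob']
```

When every team calls the same `cleanse` against the same data store, the duplicated prep work the article warns about disappears, and downstream models all see consistent input.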


Tackling artificial intelligence using architecture

Artificial intelligence (‘AI’) is creeping more and more into our daily activities. Anyone using Google, Facebook or a Microsoft product knows this. It’s far from perfect, but it’s improving at a quick pace. Not every enterprise is adopting AI at the same pace. Has your organization started looking into AI yet? Do you have any idea how to tackle and implement AI in your organization? How should your enterprise and business architects examine AI? Where should they start? This article will try to answer these questions using a wealth management example.

What is artificial intelligence?

The first mention of artificial intelligence was about 60 years ago, and AI has been defined in several ways. The 10-minute video below, “What Is Artificial Intelligence Exactly?,” explains AI very well and elaborates on a few definitions:

I also find Wikipedia’s definition very appropriate:

“Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science AI research is defined as the study of ‘intelligent agents’: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.”

Much of the recent enthusiasm about AI has been the consequence of developments in deep learning, which is based on learning data representations, typically with neural networks, as opposed to task-specific algorithms. Deep learning can be supervised, semi-supervised or unsupervised. Deep learning networks can now easily have over ten layers, with simulated neurons running into the millions, as mentioned in “The promise and challenge of the age of artificial intelligence.”
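To make “over ten layers” concrete, here is a hedged sketch in pure Python: a forward pass through twelve stacked fully connected layers with random, untrained weights and ReLU activations. The layer widths and inputs are arbitrary illustrations, not anything from the cited report.

```python
import random

random.seed(0)

def relu(v):
    """Rectified linear unit applied element-wise."""
    return [max(0.0, x) for x in v]

def layer(inputs, n_out):
    """One fully connected layer with random (untrained) weights."""
    n_in = len(inputs)
    return relu([
        sum(inputs[i] * random.uniform(-1, 1) for i in range(n_in))
        for _ in range(n_out)
    ])

# A forward pass through 12 stacked layers -- "deep" by the
# over-ten-layers yardstick. Width 4 is arbitrary for illustration.
activations = [0.5, -0.2, 0.1]
for _ in range(12):
    activations = layer(activations, 4)

print(len(activations))  # -> 4
```

In a real network the weights would be learned from data rather than drawn at random, and the neuron counts would run into the millions; the structure of the computation, layer after layer, is the same.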

The deployment challenge

Not everyone has the deep pockets and technical know-how of a Google, Facebook or Microsoft. Artificial intelligence will most likely provide value, but its development, implementation and practical use are, and will remain, a real challenge for most enterprises, not to mention most public organizations. Technical know-how and resources are scarce. Getting the rights to, accessing, and then analysing existing collected data will continue to be an issue in some circumstances. Finally, positive results from concrete artificial intelligence initiatives may take longer to materialize than anticipated.

As mentioned by Andrew Ng, founder of Google Brain, in a recent article in Forbes:

Artificial intelligence, sandboxing and certification in Malta

Sunday, November 18, 2018, 10:42 by Ian Gauci

Barely hours after the new legislative framework comprising three Acts – the Virtual Financial Assets Act (VFAA), the Malta Digital Innovation Authority Act (MDIAA) and the Innovative Technology Arrangements and Services Act (ITASA) – came into force, Malta appointed a task force to deal with Artificial Intelligence and implement a national AI strategy.

The Innovative Technology Arrangements and Services Act regulates smart contracts, Decentralised Autonomous Organisations (DAOs) and elements of distributed or decentralised ledger technologies, a popular example of which is the blockchain. These can be voluntarily submitted for recognition to the MDIA. Prior to this stage, such innovative technology arrangements must be reviewed by a systems auditor, one of the services outlined as an Innovative Service under the ITASA.

The systems auditor must review the innovative technology arrangement based on recognised standards, in line with quality and ethical regulations and based on five key principles: security, process integrity, availability, confidentiality and protection of personal data. These have been reinforced by guidelines issued by the MDIA in conjunction with the provisions of the ITASA. These guidelines will also be further amplified very soon to cater for enhanced elements of systems audit in instances that would merit a deeper audit and analysis in critical areas of activity.

The MDIA here makes sure that the blueprint of an innovative technology, and thus its functionality, meets the desired outcomes. It is making sure that the technology and the algorithm (code) can be trusted to achieve the desired outcome. The MDIA is also establishing criteria for dressing up the code, culminating in certification, which could ultimately also be used by courts in cases of software/code liability.

The coming into force of the new legal framework sees not the end of a journey but the beginning of an immense chapter, which can also be extended to AI, with some minor amendments in the law to cater for non-distributed ledger technology (DLT) elements in the definition of innovative technology arrangements.

The origins of AI can be traced back to the 18th and 19th centuries, from Thomas Bayes and George Boole to Charles Babbage, who designed the first mechanical computer. Alan Turing, in his classic essay ‘Computing Machinery and Intelligence’, also imagined the possibility of computers created for simulating intelligence. John McCarthy in 1956 coined the definition of AI as ‘the science and engineering of making intelligent machines’. An AI is essentially built on software, and thus composed of algorithms and code, and can follow the same path of certification under the remit of the MDIA.

In this instance, given the intelligence and automated output of the code, the MDIA might request enhanced system audits as well as more structured certification criteria based on a unique and novel regulatory sandbox for certification. The Regulatory Sandbox here can be used to develop an environment in which AI and its underlying logic and code are able to function according to pre-determined functional outputs in a testing environment.

This testing silo will have a controlled environment, with a limited number of participants, within the predetermined implementation periods. The Regulatory Sandbox will not only look at the social and economic viability of the AI/code being proposed for certification but also at how this fits in with current enterprise or societal use and the eventual changes that would need to be made. It can also be used to verify the level of adaptability and adherence to principles like the Asilomar AI Principles or other principles which the MDIA would want to apply.

The House of Lords in the UK earlier this year suggested an enhanced set of ethical principles in its report. These principles are aimed at the safe creation, use and existence of AI and include, among others: Transparency (ascertaining the cause if an AI system causes harm); Value Alignment (aligning the AI system’s goals with human values); and Recursive Self-Improvement (subjecting AI systems with abilities to self-replicate to strict safety and control measures).

One could also visualise this sandbox and the system auditors as a small portal into AI’s so-called black box problem: the ability to assess functionality and code response before the code is deployed and thus certified.

This is not a novel concept in the industry. Apple, for example, uses a sandbox approach for the Mac OS’s graphical interface as well as for its apps, protecting systems and users by limiting an app’s privileges to its intended functionality and increasing the difficulty for malicious software to compromise the user’s systems. In the case of certain AI, it is envisaged that aside from verifying concrete properties of the code, there would also need to be a safe layer created within the same sandbox which makes sure that the code interacts and functions correctly.

The AI sandbox would thus need to be modelled according to the particular use of the AI and its functionality blueprint, creating an operational environment based on the blueprint and architecture, with the execution, operation and processes of the functions and thus the emanating certification criteria. These would be tested in a controlled environment to make sure the AI, and hence the code, has the required qualities to be deployed and used.

This would entail creating clear sandbox criteria consisting of the development hardware, software/code, data, tools, interfaces, and policies necessary for starting an analytical operational deep learning practice in pre-determined environments. These pre-determined environments would need to have sufficient data and modular guard rails with inbuilt adversarial models. This would hypothetically allow the algorithms to be let loose and thus react to unexpected outcomes, including facing unexpected opposition.

The sandbox in this instance would be able to meter the resultant reactions due to the intelligent and autonomous nature of the code. It is anticipated that such a sandbox would also need an element of proportionality and flexibility so that, as far as possible, it avoids limiting the use of some AI technologies, such as neural networks, as that might stifle technological innovation.

If Malta wants to shape the future of AI, then it should focus on building certainty around black box environments. It should stimulate creators and inventors with opportunities to build and deploy cutting-edge code within sound certification criteria and parameters.

Ultimately it should lead to an environment that develops an ethical framework for inclusive and diverse AI, with voluntary certification criteria to avoid obscure and unwarranted outcomes such as that of the HAL 9000 in 2001: A Space Odyssey.

Ian Gauci is a partner at GTG Advocates and Caledo. He lectures on Legal Futures and Technology at the University of Malta.

This article is intended for general information purposes and does not constitute legal advice.


Thanks to Prof. Gordon Pace for his guidance and valuable inputs.

Book Highlights MindMeld: CEO and Artificial Intelligence – Afterward Future Thoughts

By Thomas B. Cross @techtionary

The following is an excerpt from “Afterward: Future Thoughts” of the book MindMeld: CEO & AI Merging of Mental & Metal, available now from iBooks on iPhone, iPad, iPod touch, and Mac.

Book Review – “As the CEO of an energy industrial company and actively involved in CEO Leadership Forums, I have been following AI for more than a decade. Indeed, the promises for improving many technical tasks are interesting, yet in reality they often prove more complex to manage than proposed. MindMeld was very profound in proposing that AI starts not at the bottom of the organization but with CXO decision-making, and it is worth reading by anyone in or rising to the boardroom.” George B.

For interviews, professional guidance, product/market research or evaluations, articles, speeches or presentations as well as CEO Executive Seminar on AI, please contact

Here are some highlights from Future Thoughts:

There are still many key issues requiring further research. Even with the explosive pace of technology, human endeavors in artificial intelligence are still very primitive. The pursuit of human-made systems remains, for the most part, an extension of machines rather than an extension of human knowledge processing. If this approach continues, there will be vastly sophisticated high-speed processing devices that will provide elegant simulations of real-world conditions. However, these conditions will parallel the real world rather than interact with it.

The development of new technology continues to outstrip society’s ability to absorb its effects, much less its potential consequences. Technology is like a rubber band stretched far ahead. While society is anchored in the past and holds the rubber band back, technological change stretches it forward. The rubber band keeps getting stretched further until, at some point, it will break, splintering society into widely diverse factions. Everything from terrorism to AIDS is being impacted by the increasing rate of communication and the knowledge of underlying issues.

With the acceleration of technology, the concept of artificial intelligence becomes an even more perplexing enigma. There are business problems that have always existed: labor, working conditions, inventory, supply, and finance. Now there are new issues, such as environmental impact, career planning, virtual everything, and productivity, and other new problems on the horizon that merge with the old ones. Together with the needs of competitive advantage, new market demands, and global distribution, networks provide little opportunity for human-made systems to be up-to-the-moment. This doesn’t make the study of artificial intelligence worthless. AI permits organizations to exist where none were possible before.
Much as one 100-horsepower motor took the place of 100 workers in textile mills a century ago, AI will become the intelligent worker of the future. Some of the major technologies that offer the greatest potential are:

1 – Neural networks. Consider the subtle but truly amazing ability of the mind, upon a second’s glance, to recognize a face unseen for ten years and, at the same moment, forget where the car keys are. Automated neural networks offer nearly all industries and humankind new intelligent tools that become increasingly more human with use. This is partially because most business is based on human interaction and human thinking processes. Systems that begin to learn about humans in humanlike ways, rather than humans adapting to machinelike ways, offer an incredible business opportunity.

2 – Visualization systems. Visualization systems are those systems that convey and process information graphically. Rather than machines that see, these are complex machines that can potentially understand doodles, notes, ideas, and images as humans do. Humans interact with one another and their world in a nearly totally sensory way and, for the most part, in a totally visual way. Images are formed, sequences are organized, events are cataloged, and life-spans are archived, frame after frame. Visualization systems process information and provide, like pages in a book, frames or windows of events, concepts, and emotional situations.

With advances in image processing and storage technology, within a few years people will be able to start recording important events in their lives at a phenomenally low cost. The ability to record every moment is less critical in itself than the ability to use this information as part of a person’s expertise or skill, not unlike the skill of a successful CEO or manager whose ability and related compensation come from their knowledge, manipulation, and organization of certain facts to the financial benefit of an organization. By capturing personal information and organizing it in a certain way, individuals will be able to market their own automated data bases to organizations, much like “human” software programs.

Throughout history, the power of information has derived from, among other things, its ability to move from one point to another (its portability) and its usability when it arrives. Employment potential has also meant being in the right place with the right skill at the right time. The greatest problem with the demise of the industrial era is that in an information society skills can rarely be transmitted from one generation to the next. New skills are required, and people’s ability to organize information and themselves is critical to their overall success.
The development of systems and software that allow people to build their own personal data bases of information and offer them as marketable information will be as important in the near future as the latest generation of hybrid seed was to a farmer one hundred years ago. Visualization systems are not robot vision devices that can navigate through a maze; they are systems that interact with humans and with the world in a visual way. More than intelligent graphics, these systems convey knowledge and interact with other visual systems via visual language, operating in much the same way as a speaker who might begin a presentation by saying, “Let me tell you a story.”

3 – Idiot systems (dumb expert systems). The problem with most expert systems today is that they either require so much expertise that it takes an expert to use one, or they solve problems within too narrow a scope to be useful in the real world. What is really needed is an ignorant machine, or idiot system (IS). An IS knows nothing but is willing to learn about anything; it is a system that does not pretend to be the expert but makes a good apprentice. Such a system can tolerate enormous amounts of ambiguity, logic jumps, and gut reactions. It’s more like a pencil than a smart pad.

