Date: 2022-01-25 20:28:13
With increased literacy and understanding – and increasingly publicized failures – the curtain is falling on the public’s perception of artificial intelligence as a mysterious, unmanageable or independently evolving intelligence. We begin a new episode, in which we – as individuals and corporate entities – firmly acknowledge and embrace our inalienable role as the architects of AI’s future.
Embracing that agency, the human factor will become central to how and where enterprises think about deploying AI. In that vein, here are four human-centric predictions for what to expect for AI in 2022.
As AI capabilities inexorably evolve, organizations continue to rethink employee and customer engagement. As a result, a focus on human-centric design or human experience (HX) will gain ground in all domains.
HX incorporates traditional elements of decision intelligence, CX and UX practices with design thinking. However, HX further expands the field of view to encompass both the human (individual) and humanity (society at large).
Informed by lessons learned in the digital realm and myriad high-profile AI failures, an HX orientation also – as artfully articulated by strategist Kate O’Neill – requires consideration not just of potential failures but also of wild success, moving strategic tools such as scenario planning beyond the boardroom.
The skeptical will point to HX as a rebranding of existing concepts. They are right, in part. However, this interpretation misses the mindset shift and complexity inherent in bringing these disciplines together to effect long-term transformational change: change in which technology is deployed not for use by or on behalf of passive recipients (aka users and clients, respectively) but for humans who, individually and collectively, are actors in their own right, with all the agency and engagement that entails.
While AI will continue to be deployed to identify patterns and to surface previously unknown connections, the need for knowledgeable humans to make true sense of the algorithm’s outputs will be acknowledged. There will be a heightened appreciation for the power of AI, but more importantly the very real limitations of AI algorithms – best demonstrated by recent adventures with GPT-3.
This will direct organizations away from using AI to automate knowledge work in favor of using AI to expand the knowledge available to the expert. As such, automation of tasks with a high degree of variability or contextual nuance will shift toward supplementation. Examples range from healthcare practitioners to paralegals to production-line engineers. Less context-dependent, invariable and rote tasks will continue to be automated at pace.
In a related but separate trend, improved AI literacy across the enterprise will result in the continued evolution toward collaborative, multi-disciplinary teaming models. This will result in an increased focus on identifying potential pitfalls, instituting operational guardrails and test-driving AI-enabled processes in parallel with existing processes.
This will go a long way toward establishing realistic expectations by allowing AI models – and the humans wielding them – to learn and improve over time, unlike today, when decision makers often expect AI to outperform existing systems (human or analytical) with minimal flaws straight out of the training gate.
This will result in more robust, resilient and responsible AI solutions. It will also, perversely and positively, result in more intelligent decisions about when these solutions don’t make the grade.
As the legal and compliance environment heats up, AI ethics initiatives may experience a pregnant pause as organizations assess the direction, weight and viability of emerging regulations.
During this time, entities without discrete responsible AI programs already in place will take a pragmatic approach, exercising existing governance and risk management practices as a first line of defense. These may include data governance, data quality, cybersecurity, risk management and audit practices. Explicit tactics will be influenced by industry and maturity level, leveraging established bioethics and safety assessments in healthcare, model auditing and compliance in financial services, and safety engineering practices in manufacturing.
On the human front, organizations will begin to balance risk and compliance-led approaches with rights-based responses and corporate responsibility initiatives aligned with emerging ESG priorities.
About the Author:
Kimberly Nevala is the Strategic Advisor for AI at SAS