In a purchase market, servicers need a proactive retention strategy

Intelligence is the difference between facts and knowledge, between information and understanding. Although artificial intelligence and machine learning are popular buzzwords in the mortgage industry today, solutions genuinely built on machine intelligence are still a rarity in our space. Fortunately for lenders and servicers, Quantarium is bucking that trend.

The company approaches real estate from a distinctly scientific perspective, reflecting its founders’ backgrounds in high-end technology, quantum physics and computational genetics. As co-founder and Quantarium CEO Clement Ifrim explained, the company’s understanding of the market and its development of solutions is different from other companies not just in degree, but in kind.

“The word quanta describes the smallest entity involved in any physical interaction. Similarly, our initiative from the first days of our company was to use all this data — the smallest pieces of information — meshed with machine intelligence to provide the most accurate solutions on the market,” Ifrim said.

Quantarium’s robust artificial intelligence is a result of the methods the company has leveraged from the beginning, applying machine learning techniques such as neural networks, evolutionary programming and genetic algorithms to understand both individual assets and local market dynamics within the housing market. The company’s ambitious goal was to discover the DNA of real estate by mapping out its chromosomes — the up to hundreds of property characteristics of each asset — and determining how their proportional usefulness, along with the local market environment, defines each asset’s expressed phenotype and, ultimately, its estimated value.
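The genetic-algorithm idea can be made concrete with a toy sketch. This is not Quantarium's model (the property "chromosomes," sale prices, and parameters below are all invented for illustration), but it shows the mechanics: a population of candidate weight vectors for a simple linear valuation is repeatedly selected, crossed over, and mutated until it fits a few known sale prices.

```python
import random

random.seed(42)

# Toy property "chromosomes": (square_feet / 1000, bedrooms, lot_acres),
# with sale prices (in $1000s) generated from hidden weights (200, 30, 100).
PROPERTIES = [(1.2, 2, 0.10), (2.5, 4, 0.25), (1.8, 3, 0.15), (3.0, 5, 0.50)]
PRICES = [310.0, 645.0, 465.0, 800.0]

def predict(weights, prop):
    return sum(w * x for w, x in zip(weights, prop))

def error(weights):
    """Sum of squared valuation errors across the toy portfolio."""
    return sum((predict(weights, p) - y) ** 2 for p, y in zip(PROPERTIES, PRICES))

def evolve(pop_size=60, generations=200, mutation=0.3):
    pop = [[random.uniform(0, 300) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)                  # fittest (lowest error) first
        survivors = pop[: pop_size // 2]     # truncation selection keeps the best
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 3)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.5:        # occasional gaussian mutation
                i = random.randrange(3)
                child[i] += random.gauss(0, mutation * 100)
            children.append(child)
        pop = survivors + children
    return min(pop, key=error)

best = evolve()
print(round(error(best), 1))
```

Because selection is elitist, the best candidate's error never increases from one generation to the next, which is what gives the "constantly evolving and learning" flavor described above.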

Using this approach, Quantarium can credibly claim true AI, as its models behave like living systems.

“Our valuation system is constantly evolving and learning. Combined with our unparalleled data lake, our valuations get more accurate with each new run. Similarly, with our portfolio services, we are generating predictive models that get better every single day,” Ifrim said.

Quantarium’s offerings include its automated valuation model, QVM, valuation services, portfolio services, and a data and search platform, which was developed by mathematicians, scientists and computer architects — many of whom formerly worked on Microsoft’s enterprise services.

Romi Mahajan, the commercial head of the company, said, “Quantarium is a ‘meaningful AI’ company — we use AI not for some abstruse solution in search of a problem but instead to create markets and enhancements in the world’s largest asset class, namely residential real estate.”

For servicers in this low-volume purchase market, Quantarium’s portfolio services are especially critical as part of a proactive strategy to retain customers. For the last several years, servicers could count on moving borrowers into refinance loans and having enough new customers that they weren’t concerned about retention. Today, servicers need portfolio solutions that identify ongoing opportunities to engage homeowners as they contemplate selling, refinancing or taking cash out — before they start looking.

With Quantarium, servicers can leverage automated monitoring to see market activity on any loan in their portfolio, giving them the ability to contact borrowers with new offers. They can also gain insight into a borrower’s current status — such as whether the borrower has paid off a loan and still lives in the house, sold the property or refinanced with another lender. The service also surfaces new lien activity.

Additionally, Quantarium’s best-in-class portfolio services can identify borrowers who are likely to list their property or refinance.

“We can look at a borrower and say, ‘Do we think this person because of age, job, age of children, etc. is a good cash-out candidate? Or maybe a consolidation candidate?’ Servicers can take a segment of their portfolio and send a very specific message to that group,” Ifrim said.

Brian Mushaney, Quantarium’s vice president of business development, commented further, “Leveraging AI models allows us to move beyond simple loan-loss analysis, so our customers can better understand attrition from their lost customers’ loan-purpose perspective and respond accordingly.”

Servicers and investors can also use Quantarium’s geocentric property reports to spot neighborhood trends in sales prices and demography, view comprehensive transaction history and get sales comps.

“We provide servicers with a statistical analysis of what’s happening in a borrower’s neighborhood, so they can stay in meaningful contact with the borrower, informed by both their individual and market context, all the time. They might see that homes in a particular micro market have increased beyond certain thresholds, and now they can let the borrower know that changing asset and market circumstances have made them candidates for products within the servicer’s portfolio that can optimize the borrower’s position,” said Malcolm Cannon, Quantarium’s chief operating officer.

Quantarium leverages data from more than a dozen data services, but it also creates new data all the time, using machine learning methods such as computer vision to add property characteristics and then understand how they impact quality and value, expanding the reach of its intelligent systems.

“We begin by sitting down with our customers to find out where their pain points are, then we get our AI scientists together to figure out the solution. If you have a question, we have the data to answer it,” Ifrim said.

“We’re so excited about the potency of what Quantarium does — it’s unparalleled,” Cannon said. “We are working with the best tech companies in the world to stay on top of innovation and provide the best possible services for our clients.”


Some of the Latest Trends in Artificial Intelligence

We’re into our second year of publishing a “Global AI Race” series of articles on artificial intelligence startups from around the world and it continues to pose a challenge. We use an objective measure of “total funding taken in so far,” and that excludes any firms that choose not to disclose funding or are bootstrapped. We search for various categorizations like “artificial intelligence” or “deep learning,” which means we’ll miss any firms that haven’t chosen those categories in their Crunchbase profile. But the ones we worry about the most are the firms we might include in one of our “top AI startups” lists that don’t actually do AI. It’s a huge problem, one highlighted recently by a European venture capital firm, MMC Ventures, which surveyed 2,830 European startups classified as AI companies and found that 44% of them were incorrectly labeled “AI startups.”

Still, the fact that there are 1,580 AI startups across Europe means that we’re reaching a tipping point, or what MMC Ventures calls “a divergence.” Working together with Barclays, they produced a 149-page report titled “The State of AI: Divergence 2019” which takes a holistic look at AI across the globe finding a growing division between leaders and laggards. We pored through every page of that report to extract some of the latest trends in artificial intelligence that you might find insightful.

Artificial Intelligence and Adoption

“In 2019, AI ‘crosses the chasm’ from early adopters to the early majority,” says the report. As an investor, you should now be taking note of companies that don’t use artificial intelligence, since they’re quickly going to become laggards. For many of the companies adopting artificial intelligence, it may seem all new and shiny. Truth be told, AI has been around for decades, with “seven false dawns” taking place between 1965 and now.

Artificial intelligence may be “the fastest paradigm shift in technology history.” In just three years, the number of enterprises with “AI initiatives” rose from 1 in 25 to 1 in 3. One in ten enterprises use more than ten AI applications, and the most popular use cases are chatbots (26% of enterprises), process automation solutions (26%), and fraud analytics (21%). Nearly half of all companies prefer to buy AI solutions from third parties as opposed to building their own.

Globally, China leads the charge, with twice as many Asian firms adopting AI as North American firms. The report points out some interesting high-level reasons why China has become a global leader:

  • Data Availability – China has more permissive policies than Europe regarding use of personal data.
  • Less siloed data within companies – According to MIT Sloan Management Review, 78% of leading Chinese companies maintain their corporate data in a centralized data lake, compared with 37% of European and 43% of US companies.
  • Legacy technology – Chinese companies typically have fewer legacy applications and processes to deal with.

It’s not surprising that two out of three reasons involve data. The best AI algorithms are the ones with exclusive access to high-quality data sets. With that said, some of the developments being made in artificial intelligence hardware and technologies are of equal importance.

Artificial Intelligence Technologies

When it comes to understanding the underlying technology behind artificial intelligence, most of us can get by with the very basics. For the people who are building these algorithms, it’s a different story. They’re highly paid and highly educated. Salaries for AI engineers average $224,000 at the 20 highest-paying companies, and 60% of AI developers have a Master’s or Doctoral degree. Demand for talent has been so high that even academics are being pulled into the corporate world. According to The Economist, between 2006 and 2014, the proportion of AI research publications including an author with corporate affiliation increased from approximately 2% to nearly 40%.

There are more than 15 approaches to machine learning, and trying to understand even one would probably take as much time as studying for the CFA, with about the same benefits at the end. Instead, you just need to know about the latest trends in machine learning so you can throw them around at your next board meeting to demonstrate what a thought leader you are. Here are some trends to watch when it comes to how artificial intelligence technology is developing.

AI Software Trends

  • Generative Adversarial Networks (GANs) – Have you seen the realistic photos of people created by AI? They were actually created using GANs. As the name implies, the method pits two networks – a ‘generator’ and a ‘discriminator’ – against each other to create increasingly lifelike media. GANs can be used for a wide variety of other applications such as data normalization, network security, and system training. For an excellent detailed description of GANs, see pages 80 and 81 of the report.

  • Reinforcement Learning (RL) – The AI algorithm is presented with a goal and experiments through trial and error, being rewarded for progress toward that goal. It requires no human intervention and is great for applications that lack training data. It’s how Google’s DeepMind mastered Go.

  • Transfer Learning (TL) – Emerging method that uses skills learned from a previous problem, and applies them to a different but related challenge. Interest in TL has grown seven-fold in the last two years. 2018 was a breakthrough year for the use of TL in Natural Language Processing (NLP).
  • Artificial General Intelligence (AGI) – It’s not in the report, but it’s something we thought should be in your vocabulary. It refers to the point where artificial intelligence becomes as smart as a human, with intelligence that’s broad and adaptable. It’s what we’re ultimately trying to accomplish as we adopt techniques like RL and TL.
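The reinforcement-learning loop described above (act, observe a reward, improve) can be sketched with tabular Q-learning on a tiny corridor world. This is a toy illustration, not DeepMind's method; the environment and hyperparameters are invented for the example.

```python
import random

random.seed(0)

N_STATES, GOAL = 5, 4            # a 5-cell corridor; the reward waits at cell 4
ACTIONS = (-1, +1)               # index 0 = step left, index 1 = step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action_idx):
    nxt = min(max(state + ACTIONS[action_idx], 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):                         # episodes of trial and error
    s, done = 0, False
    while not done:
        if random.random() < eps:            # explore
            a = random.randrange(2)
        else:                                # exploit (ties broken toward "right"
            a = 1 if Q[s][1] >= Q[s][0] else 0  # to keep the toy fast)
        nxt, r, done = step(s, a)
        # Nudge Q toward the observed reward plus discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

# After training, the greedy policy walks straight to the goal.
s, steps = 0, 0
while s != GOAL:
    a = 1 if Q[s][1] >= Q[s][0] else 0
    s, _, _ = step(s, a)
    steps += 1
print(steps)  # 4 steps: right, right, right, right
```

No labeled training data appears anywhere; the reward signal alone shapes the Q-table, which is exactly why RL suits applications that lack training data.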

It’s to be expected that a technology being adopted so quickly will also be developing as fast. The report also makes mention of how quantum computing will impact AI, something that we talked about in our article on Artificial Intelligence (AI) and Quantum Computing. Let’s look at how AI is being applied in the real world.

Artificial Intelligence Applications

Teachers and managers will be somewhat reassured to find out that their jobs will be “more resilient to AI in the medium term” while others won’t fare as well. The report talks about how AI will have the greatest impact on job functions where a majority of time is spent collecting and making sense of data.

Based on the above, it’s not a surprise to learn that the sector with the highest adoption of AI is insurance. Nearly half of all insurance companies have deployed AI or plan to in the next 12 months. (Check out our piece on How Technology Will Affect Big Insurance Companies.)

Insurance is a good example of an industry where AI brings both threats and opportunities. One threat is the autonomous vehicle which impacts the 42% of global premiums that come from car insurance. According to Autonomous Research, UK car insurance premiums are expected to fall by as much as 63%, causing profits for insurers to fall by 81%. These losses might be partially offset by improved fraud detection. According to the Association of British Insurers, fraud costs UK insurers around $1.7 billion a year, something they currently spend around $265 million annually trying to prevent.

It’s easy to see how fraud detection can impact the bottom line, but what about chatbots? They’re the most pervasive “application of AI” with more than 20% of enterprises now using them.

If you plug a chatbot into your corporate site, does that mean you can now say your firm is “using AI?” The question to ask is how much the chatbots are impacting the bottom line. What’s incredible to see in the above chart is how drastically these answers have changed in just one year. Of the 2,600 firms surveyed in 2018, 72% didn’t use any AI applications. Ask roughly the same number of companies the same question one year later, and the situation has changed dramatically. The number of use cases for AI has expanded rapidly, and the report cites 31 core use cases across eight sectors.

Seems like it was just yesterday when we were looking at the promise of AI applied to medical imaging. Today, 40% of healthcare providers use AI-powered computer-assisted diagnostics.

AI is about data, and loads of data will be generated by the Internet of Things (IoT). Out of all sectors, the utility sector has climbed on board with IoT the most. According to Gartner, 67% of all utility companies now use IoT technologies such as sensors – all of which generate loads of big data that can then be fed to AI algorithms. Across all sectors, we’ll see a move towards “X-as-a-Service” business models that serve the preference of enterprises to buy rather than build. These business models also serve to centralize data so that AI algorithms become better and faster through economies of scale.

European Artificial Intelligence Startups

Since the report is written by a European VC, it’s no surprise to see that they’ve carved out the European AI startup space by country and presented it in this handy chart.

We’ve covered a number of these AI startups in our “Global AI Race” series, and were surprised to see that about a third of Europe’s AI startups can be found in the UK. Zee Germans and the French have developed their own AI ecosystems and the Spaniards should be noted for punching above their weight. Nine in ten of Europe’s AI startups are business-to-business (B2B) which means they sell solutions to enterprises as opposed to consumers. That’s changing now, as a quarter of new AI startups are focused on selling solutions to consumers.

Six in ten European AI startups are early stage (Angel- or Seed-stage funding). Here’s a good quote from the report on how early-stage AI companies can make for compelling acquisitions as larger firms look to acquire or risk being passed by:

AI startups are valuable suppliers – an ‘on-ramp’ to AI – for companies that embrace them, while disrupting those that do not.

Alongside these success stories will be high-profile failures as well. Then we get back to thinking about all the startups that are incorrectly classified, whether intentional or not. Today, one in twelve European startups put artificial intelligence at the core of their value proposition. There’s good reason for that too. Forbes published an article on this topic titled Nearly Half Of All ‘AI Startups’ Are Cashing In On Hype which explains why:

Startups labelled as being in AI attract 15% to 50% more funding than other technology firms.

That’s why it’s so important for investors to do their own due diligence prior to investing in companies that claim to be using AI. That’s exactly what the firm that produced this insightful report excels at.

Conclusion

MMC Ventures states that AI is a core area of research, conviction, and investment for their firm. In the last several years they’ve made 20 investments, comprising 50% of the capital they have invested, into many of the UK’s most promising AI companies. We’re going to look at some of those companies in coming articles and deep dive into what’s happening in the UK AI startup world.

As investors, we expect the impact of artificial intelligence on the bottom line should be reflected in earnings over the coming years. In some cases like the insurance industry, this impact could be negative. When the dust settles, artificial intelligence will become something that companies can no longer use to obtain a competitive advantage because everyone else is doing the same thing. At what point does AI move from disruptive technology to commodity?



HR Departments Turn to AI-Enabled Recruiting in Race for Talent

Artificial intelligence is helping companies across industries answer human resources-related questions, automate some HR tasks and suggest jobs to …

“It’s pervasive in all aspects of how we think about people, getting the right people on the right projects and building careers,” said Jeff Wong, chief innovation officer for Ernst & Young, which brands itself as EY.

EY in 2017 launched an AI-powered chatbot named “Goldie” that has answered more than 2.2 million questions for employees across 138 countries to date. Now, the company, which hires about 65,000 people annually, is considering ways to use artificial intelligence to help human resources staff select qualified candidates. “We’re trying to be particularly thoughtful about how we apply AI in this particular circumstance,” Mr. Wong said. Emphasis on diversity, inclusion and fairness are requirements for such an AI system, he said.


Eventually, AI could offer personalized recommendations for training programs that could be useful for career development, Mr. Wong said. It could also suggest which employees should be assigned to specific teams, in order to get the highest-performing teams possible, he said.

About 23% of organizations using some artificial intelligence said they were doing so in the human resources and recruiting domain, according to a 2018 Gartner Inc. study of about 850 respondents in the U.S. and Canada.

The use of AI in human resources is becoming more important as companies across all sectors compete against each other for talent, executives say. By 2030, there will be an estimated global talent shortage of more than 85.2 million people, which could result in $8.45 trillion worth of unrealized annual revenue, according to a 2018 study by executive-search firm Korn Ferry.

LinkedIn Corp., owned by Microsoft Corp., offers AI-based services for business clients, including one that launched last fall for companies looking to make strategic business decisions that involve hiring. For example, the software can show how competitive the technology talent pool is in a specific location where a company is considering opening an office.

The data used to develop its AI algorithms comes from profiles of its more than 610 million members and tens of thousands of skills and titles. LinkedIn also uses behavioral data, such as specific jobs that a candidate applies for, to learn about their interests.

But its AI capabilities aren’t meant to replace human staff, said John Jersin, vice president of product management at LinkedIn Talent Solutions.

“There should always be a human in the loop when there’s important decisions about hiring being made,” Mr. Jersin said. Computers might be able to get to an answer faster, with less effort or cost, he said. “But it’s not necessarily going to be the best or fair answer,” he said. The company works to address fairness in its AI services, he said. For example, its recruiting software contains a feature in which each page of candidate search results accurately reflects the gender distribution for that specific job and location.

Citizens Bank NA, a subsidiary of Citizens Financial Group, launched an AI-powered career coach for about 1,500 employees named “Myca” in a pilot program last year. Myca, short for “my career,” suggests new jobs, training, videos to watch and periodicals to read based on employees’ career interests.

The chatbot, developed with International Business Machines Corp., is being tested partly as a way to fill training gaps and match employees with open positions they aspire to, said Kristi Robinson, executive vice president and head of talent acquisition at the bank.

Myca is expected to launch to about 18,000 employees at the end of this year or early next. “AI is absolutely reinventing recruiting in a very exciting type of way,” said Ms. Robinson, who has been in the recruiting industry for 20 years. “It’s the biggest game-changer I’ve seen in a long time.”

IBM is using its own artificial intelligence platform tailored for human resources functions, called Watson Candidate Assistant, to infer specific skill-sets from the roles on a candidate’s resume. If a resume lists work on an advertising campaign over the past year, Watson can infer that the work involved digital marketing skills, for example. The technology then presents the candidate with several job opportunities that they might be qualified for based on their skills. “It gives you the opportunity to apply for jobs that you’ve probably never thought of,” said Amy Wright, managing partner of IBM Talent & Transformation.
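IBM's actual system is proprietary, but the inference idea described above can be sketched as a toy rule-based version: map resume phrases to implied skills, then score jobs by skill overlap. All phrase rules, job titles, and data below are invented for illustration.

```python
# Hypothetical phrase -> skill rules; a real system would learn these from data.
SKILL_RULES = {
    "advertising campaign": {"digital marketing", "copywriting"},
    "sales dashboard": {"data analysis", "sql"},
    "mobile app": {"software engineering"},
}

# Invented job postings, each with its required skills.
JOBS = {
    "Marketing Analyst": {"digital marketing", "data analysis"},
    "Backend Developer": {"software engineering", "sql"},
}

def infer_skills(resume_text):
    """Infer skills a resume implies but may never state outright."""
    text = resume_text.lower()
    skills = set()
    for phrase, inferred in SKILL_RULES.items():
        if phrase in text:
            skills |= inferred
    return skills

def rank_jobs(resume_text):
    skills = infer_skills(resume_text)
    # Score each job by the fraction of its required skills the candidate has.
    scored = [(len(req & skills) / len(req), title) for title, req in JOBS.items()]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(rank_jobs("Led an advertising campaign and built a sales dashboard."))
# → ['Marketing Analyst', 'Backend Developer']
```

Note that the candidate never typed "SQL" or "digital marketing," yet both jobs surface, which mirrors the "jobs you've probably never thought of" effect described above.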

The use of artificial intelligence in the human resources division has been growing over the past five years at IBM, she said. “It’s really infiltrated every function in HR,” she said.

Watson learns from the 3 million job applications IBM receives annually. The technology knows the specifics of IBM’s jobs in 170 countries, understands the skills needed for each job and maps that to the data coming in from job applicants, according to a spokeswoman for the company.

Another AI-based tool, Watson Recruitment, helps prioritize resumes for recruiters without considering personal details like age or gender.

The technology frees up recruiters to spend more time with candidates and chasing down referrals, Ms. Wright said. “The bottom-line business results are strengthened,” she said.

Even more powerful in the future, Ms. Wright said, is the combination of AI and blockchain, a ledger system where data could be encrypted and unchangeable. Blockchain could ensure that a candidate’s job history is pre-confirmed, and an AI system could find and offer jobs to the right candidates without their applying, she said.

Write to Sara Castellanos at sara.castellanos@wsj.com


Intel offers AI breakthrough in quantum computing

We don’t know why deep learning forms of neural networks achieve great success on many tasks; the discipline has a paucity of theory to explain its empirical successes. As Facebook’s Yann LeCun has said, deep learning is like the steam engine, which preceded the underlying theory of thermodynamics by many years.

But some deep thinkers have been plugging away at the matter of theory for several years now.

On Wednesday, one such group presented a proof of deep learning’s superior ability to simulate the computations involved in quantum computing. According to these thinkers, the redundancy of information that happens in two of the most successful neural network types, convolutional neural nets, or CNNs, and recurrent neural networks, or RNNs, makes all the difference.

Amnon Shashua, who is the president and chief executive of Mobileye, the autonomous driving technology company bought by chip giant Intel last year for $14.1 billion, presented the findings on Wednesday at a conference in Washington, D.C. hosted by The National Academy of Science called the Science of Deep Learning.

In addition to being a senior vice president at Intel, Shashua is a professor of computer science at the Hebrew University in Jerusalem, and the paper is co-authored with colleagues from there, Yoav Levine, the lead author, Or Sharir, and with Nadav Cohen of Princeton University’s Institute for Advanced Study.

The report, “Quantum Entanglement in Deep Learning Architectures,” was published this week in the prestigious journal Physical Review Letters.

The work amounts to both a proof of certain problems deep learning can excel at, and at the same time a proposal for a promising way forward in quantum computing.

The team of Amnon Shashua and colleagues created a “CAC,” or, “convolutional arithmetic circuit,” which replicates the re-use of information in a traditional CNN, while making it work with the “Tensor Network” models commonly used in physics.

(Image: Mobileye)

In quantum computing, the problem is somewhat the reverse of deep learning: lots of compelling theory, but as yet few working examples of the real thing. For many years, Shashua and his colleagues, and others, have pondered how to simulate quantum computing of the so-called many-body problem.

Physicist Richard Mattuck has defined the many-body problem as “the study of the effects of interaction between bodies on the behaviour of a many-body system,” where bodies have to do with electrons, atoms, molecules, or various other entities.

What Shashua and team found, and what they say they’ve proven, is that CNNs and RNNs are better than traditional machine learning approaches such as the “Restricted Boltzmann Machine,” a neural network approach developed in the 1980s that has been a mainstay of physics research, especially quantum theory simulation.

“Deep learning architectures,” they write, “in the form of deep convolutional and recurrent networks, can efficiently represent highly entangled quantum systems.”

Entanglements are correlations between those interactions of bodies that occur in quantum systems. Actual quantum computing has the great advantage of being able to compute entanglements with terrific efficiency. To simulate that through conventional electronic computing can be extremely difficult, even intractable.

“Our work quantifies the power of deep learning for highly entangled wave function representations,” they write, “theoretically motivating a shift towards the employment of state-of-the-art deep learning architectures in many-body physics research.”

The authors took a version of the recurrent neural net, or “RNN,” and modified it by adding data reuse to a “recurrent arithmetic circuit,” or RAC.

(Image: Mobileye)

The authors pursued the matter by taking CNNs and RNNs and applying to them “extensions” they have devised. They refer to this as a simple “trick,” one that involves the redundancy mentioned earlier. It turns out, they write, that the structure of CNNs and RNNs involves an essential “reuse” of information.

In the case of CNNs, the “kernel,” the sliding window that is run across an image, overlaps at each moment, so that parts of the image are ingested to the CNN multiple times. In the case of RNNs, the recurrent use of information at each layer of the network is a similar kind of reuse, in that case for sequential data points.
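The reuse in the CNN case is easy to quantify. The toy function below slides a kernel across a 1D input and counts how many windows read each element; with stride 1, interior elements are ingested once per overlapping window, while a stride equal to the kernel width eliminates the overlap entirely.

```python
def window_coverage(length, kernel, stride=1):
    """For each input position, count how many sliding windows include it."""
    counts = [0] * length
    for start in range(0, length - kernel + 1, stride):
        for i in range(start, start + kernel):
            counts[i] += 1
    return counts

# A 3-wide kernel over 8 inputs: interior elements are read 3 times each.
print(window_coverage(8, 3))            # → [1, 2, 3, 3, 3, 3, 2, 1]
# With stride equal to the kernel width, no element is reused (and the
# trailing positions are never covered by a full window).
print(window_coverage(8, 3, stride=3))  # → [1, 1, 1, 1, 1, 1, 0, 0]
```

That repeated ingestion of the same inputs is the "reuse" the authors argue gives overlapping convolutions their extra expressive power.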

In both cases, “this architectural trait […] was shown to yield an exponential enhancement in network expressivity despite admitting a mere linear growth in the amount of parameters and in computational cost.” In other words, CNNs and RNNs, by virtue of redundancy achieved via stacking many layers, have a more efficient “representation” of things in computing terms.

For example, a traditional “fully-connected” neural network (what the authors term a “veteran” neural network) requires computing time that scales as the square of the number of bodies being represented. An RBM, they write, is better, with compute time that scales linearly in the number of bodies. But CNNs and RNNs can be better still, with required compute time that scales as the square root of the number of bodies.

Those properties “indicate a significant advantage in modeling volume-law entanglement scaling of deep-convolutional networks relative to competing veteran neural-network based approaches,” they write. “Practically, overlapping-convolutional networks […] can support the entanglement of any 2D system of interest up to sizes 100 × 100, which are unattainable by competing intractable approaches.”

To make that work, the authors had to use their “trick”: The traditional way of representing quantum computation, a “Tensor Network,” doesn’t support the reuse of information. So, the authors created modified versions of the CNN and the RNN. The first is called a “convolutional arithmetic circuit,” or CAC. It’s an approach they’ve been developing in work of recent years, here brought to greater fruition. The trick is “duplication of the input data itself” in the CAC, which effectively replicates the reuse seen in the overlapping of the CNN. In the case of the RNN, they created a “recurrent arithmetic circuit,” in which they duplicate the input information.

“Importantly, since the output vector of each layer of the deep RAC at every time step is used twice (as an input of the next layer up, but also as a hidden vector for the next time-step), there is an inherent reuse of data during network computation,” they write. “Therefore, we duplicate the inputs as in the overlapping-convolutional network case, and obtain the TN of the deep RAC.”

The results of all this are two-fold: proofs for deep learning, and a way forward for quantum simulations. The formal proofs of the efficiency of CACs and RACs, included in supplementary material, amount to a proof that deep learning approaches can tackle quantum entanglement more efficiently.

They end on the hopeful note that their findings “can help bring quantum many-body physics and state-of-the-art machine learning approaches one step closer together.”

Both quantum computing and deep learning may never again be the same. How much progress do you think deep learning will make on the theory side? Talk Back and Let Me Know.



Transformational Artificial Intelligence: Prioritizing AI in Healthcare While Maintaining Legal …

Thursday, March 14, 2019

In February 2019, President Trump signed an executive order titled “Maintaining American Leadership in Artificial Intelligence,” also known as the American AI Initiative, that aims to increase the use of artificial intelligence (AI) nationwide. The executive order identifies various federal AI-related policies, principles, objectives, and goals, including: increased federal investment in AI research and development, better education of workers relating to AI, promotion of national trust in AI systems, an emphasis on improved access to the cloud computing services and data needed to build AI systems, the creation of technical and regulatory standards relating to AI, and the promotion of AI-related cooperation with foreign powers.

According to Michael Kratsios, deputy assistant to the president for technology policy, the executive order, and the policies underlying it, are designed to “prepar[e] America’s workforce for [the] jobs of today and tomorrow.”

AI Use Cases in Healthcare

As the executive order underscores, AI impacts every sector of the American workforce, especially healthcare. The healthcare industry is using AI to improve quality of care as well as drive down costs. AI growth in this sector is underscored by a CB Insights study showing that since 2013, healthcare AI startups have raised approximately $4.3 billion in funding for AI development, research, and production.

Innovative AI developments in healthcare include the following:

  • Diagnostic research and development. The ability of AI to identify disease-related risks is quickly developing. For example, one technology company has developed an artificial neural network (a computing system inspired by the biological neural network that involves various machine learning algorithms working together to process complicated data inputs) that uses retinal images to assist in the identification of cardiovascular risk factors. Similarly, Stanford University researchers have developed an algorithm to assist in the identification of skin cancer using neural networks.

  • Do-it-yourself diagnostics. Smartphones, wearables, and other connected personal devices will continue to become resources for at-home diagnostics, sometimes eliminating the need to go to a doctor’s office. For example, technology companies have developed apps that use image recognition algorithms to identify skin cancer risks and to diagnose urinary tract infections.

  • AI and medical records. While many large health systems already use electronic medical records, the medical records ecosystem continues to evolve. Various companies have developed and now offer programs that analyze unstructured patient medical records by using AI tools like machine learning (a type of AI that involves algorithms that can learn from data without relying on rules-based programming) and natural language processing (a type of AI in which computers can understand and interpret human language) to deliver meaningful and searchable data, such as diagnoses, treatments, dosages, symptoms, etc.

Key Questions to Ask to Evaluate Legal Compliance

AI developments are likely to accelerate over the next decade. As AI expands into modern workplaces—healthcare and otherwise—employers may want to consider the following questions to ensure legal and regulatory compliance from a labor and employment perspective:

  1. Through the technology, is data being collected, stored, or transmitted? As the healthcare examples discussed above highlight, many AI systems collect, store, and/or transmit enormous amounts of data—often sensitive data. Various international, federal, and state rules and common law govern the collection, storage, and movement of data, as well as privacy rights. This area of the law is evolving, so employers may want to carefully review their obligations and stay up to date.

  2. Is the technology changing employees’ terms and conditions of employment? AI is changing employees’ working conditions, from minor workflow alterations to more significant changes like the displacement of employees through layoffs or reductions in force. In a unionized workforce, many changes to the terms and conditions of employment are subject to the collective bargaining process. Moreover, regardless of whether a union is in place, changes to employees’ working conditions may implicate other state and federal laws like the Worker Adjustment and Retraining Notification Act of 1988, which mandates notification obligations before certain types of workplace employee reductions, and relevant discrimination statutes, such as the Age Discrimination in Employment Act of 1967.

  3. Is the technology changing the physical working environment? Under the Occupational Safety and Health Act, employers have a legal duty to maintain a safe workplace. The Occupational Safety and Health Administration has developed specific standards for employers utilizing robotics to ensure that the technology is safe for employees. Depending on the nature and function of the technology at issue, various additional federal and state workplace safety laws may also be implicated. Employers may want to be mindful of these rules and ensure compliance with them.

  4. Is the technology affecting employment-related decision-making? Employers are increasingly using AI to analyze job applicants and make day-to-day employment-related decisions. For example, some employers are using AI-powered software programs to auto-screen resumes as a traditional recruiter would, and others are using AI recruiting assistants to communicate with applicants through messaging apps. The information used to structure an AI algorithm could be unintentionally biased, which could potentially lead to discrimination claims by employees and/or applicants. If employers are using AI either directly or indirectly to make employment-related decisions, they may want to evaluate employment discrimination risks and mitigate against them, if possible, by, for example, understanding the data used to build out and/or train the AI at issue and regularly auditing decisions made through the use of AI.

Because of the pace of AI development and the prioritization of its growth, employers may want to continue harnessing the opportunities AI presents while staying mindful of legal and regulatory compliance issues.
