5 Things Business Leaders Must Know About Adopting AI at Scale

Despite growing awareness of AI's importance and growth potential, most AI implementations fail in production.

By Roey Mechrez Edited by Amanda Breen

Opinions expressed by Entrepreneur contributors are their own.

As part of my job, I meet on a daily basis with enterprise leaders who are tackling the challenge of implementing AI in their business. These are typically executives in charge of their organization's AI transformation, or business managers who wish to gain a competitive edge by improving quality, shortening delivery cycles and automating processes. These business leaders have a solid understanding of how AI can serve their business, how to start the AI-implementation process and which machine-learning application fits their specific business needs. Despite their understanding of AI and its potential, most managers seem to lack understanding of the key technical areas involved in adopting AI at scale.

Managers who strive to overcome these blind spots, which currently derail the successful implementation of AI projects in production, should address the following five questions.

What data goes into the model?

If you have a basic understanding of deep learning, you probably know that it's based on an algorithm that takes input data samples and produces an output in the form of classification, prediction, detection and more. During the training phase of the model, historical data (whether labeled or unlabeled) is used. Once trained, the model will be able to deal with data similar to the samples it was trained with. This model may keep running smoothly in a controlled lab environment, but it is locked within the "convex hull" of the training data. If, for some reason, the model is fed with data that is outside the scope of the training data distribution, it will fail miserably. Unfortunately, this is what often happens in real-life production environments.

The ability to process data that deviates from the boundaries of the sterile training environment is determined by how robust and stable the AI system is. Enterprises that deploy systems with low robustness and stability will inevitably find themselves facing a "garbage in, garbage out" situation in how their data is analyzed and processed.
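To make this concrete, here is a minimal, purely illustrative Python sketch of one simple way to flag inputs that fall far outside the training distribution using per-feature z-scores. The function names and the threshold are assumptions, not a description of any particular product; real systems typically use far more sophisticated out-of-distribution detectors.

```python
import numpy as np

def fit_training_stats(train_features: np.ndarray):
    """Record per-feature mean and standard deviation of the training data."""
    return train_features.mean(axis=0), train_features.std(axis=0) + 1e-9

def is_out_of_distribution(sample: np.ndarray, mean: np.ndarray, std: np.ndarray,
                           z_threshold: float = 4.0) -> bool:
    """Flag a sample whose features lie far outside what the model saw in training."""
    z_scores = np.abs((sample - mean) / std)
    return bool(z_scores.max() > z_threshold)

# Usage: route flagged samples away from blind automation instead of trusting the output.
train = np.random.normal(0.0, 1.0, size=(10_000, 8))   # stand-in for historical training data
mean, std = fit_training_stats(train)
print(is_out_of_distribution(np.zeros(8), mean, std))        # False: looks like the training data
print(is_out_of_distribution(np.full(8, 25.0), mean, std))   # True: far outside anything seen before
```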

Related: When Should You Not Invest in AI?

What are the model boundaries?

With the understanding that the model is highly coupled with the training data that feeds it, we would like to know when the model is right and when it's wrong. Building trustworthy human-machine collaboration is vital for success in AI adoption. The first step is to control the model's uncertainty for each given sample. Take an example in which the AI application is automating a mission-critical operation that requires very high accuracy (for example, claims processing for an insurance company, quality control on an airliner assembly line or fraud detection in a big bank). Considering how sensitive the output is in these use cases, the required accuracy cannot be achieved with AI automation alone. Complex, rare cases must be passed to a human expert for final judgment. That's the essence of setting a boundary for the AI system. The huge flow of data that comes into the model must be divided into two categories: a fully automated bucket and a semi-automated bucket.

The ability to split the data into these two buckets is based on uncertainty estimation: For each sample of data (case and input), the model needs to generate not just a prediction output, but also a confidence score of this prediction. This score is compared against a pre-set threshold that governs how data is split between the fully automatic path and the human-in-the-loop path.
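As an illustration of that two-bucket split, the following Python sketch routes each prediction based on its confidence score. The model interface, threshold value and field names are illustrative assumptions rather than any specific product's API.

```python
# Illustrative sketch only: the model interface, threshold and field names are assumptions.
CONFIDENCE_THRESHOLD = 0.95  # pre-set threshold agreed with the business owner

class DummyModel:
    """Stand-in for a trained classifier that also reports a confidence score."""
    def predict_with_confidence(self, sample):
        score = 0.99 if sum(sample) > 0 else 0.60
        return ("approve" if score > 0.5 else "reject"), score

def route_prediction(model, sample):
    """Return the prediction plus the path it should take: automated or human review."""
    label, confidence = model.predict_with_confidence(sample)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"label": label, "confidence": confidence, "path": "fully_automated"}
    # Complex or rare cases (low confidence) go to a human expert for final judgment.
    return {"label": label, "confidence": confidence, "path": "human_in_the_loop"}

model = DummyModel()
print(route_prediction(model, [0.2, 0.4]))    # high confidence -> fully automated
print(route_prediction(model, [-0.5, -0.1]))  # low confidence -> human in the loop
```

In practice, the threshold itself is tuned against the accuracy and workload targets of the business rather than fixed up front.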

Related: Here's What AI Will Never Be Able to Do

When should the model be retrained?

The first day in production is the worst day. From that point on, the model needs to be constantly improved through ongoing feedback. How is that feedback loop built? Following the above example, the data that is passed to a human expert, the data with low confidence scores and the data that falls outside the training distribution should all be used to improve the model.

There are three main scenarios in which AI models should be retrained with feedback mechanisms:

  1. Insufficient or unrepresentative data. If the training data does not cover the distribution of the production data well, you will need to improve the model over time with additional data to achieve better generalization.

  2. Adversarial environments. Some models are prone to external hacks and attacks (such as in the case of fraud detection and anti-money laundering systems). In these cases, the model must be improved over time to ensure it's one step ahead of the fraudsters, who may invest plenty of resources to break into it.

  3. Dynamic environments. Data is constantly changing, even in seemingly stable and traditional businesses. In most cases, keeping a solution sustainable requires taking new data into consideration.

In simple terms, AI models are not evergreen by nature; they must be nurtured, improved and fine-tuned over time. Having these feedback mechanisms in production is essential to sustainable AI and to the adoption of AI at scale.
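As a rough illustration of such a feedback mechanism, the following Python sketch accumulates human corrections and low-confidence cases and decides when retraining is warranted. All names, counts and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    """Illustrative retraining trigger; all counts and thresholds are assumptions."""
    min_corrections: int = 500        # retrain once enough human-corrected labels exist
    max_low_conf_rate: float = 0.15   # ...or when too much traffic needs human review
    min_traffic: int = 1_000          # don't judge the low-confidence rate on too little traffic
    corrections: list = field(default_factory=list)
    total_seen: int = 0
    low_confidence_seen: int = 0

    def record(self, sample, model_label, confidence, human_label=None):
        """Log one production prediction, plus the human verdict when one exists."""
        self.total_seen += 1
        if confidence < 0.95:
            self.low_confidence_seen += 1
        if human_label is not None and human_label != model_label:
            self.corrections.append((sample, human_label))

    def should_retrain(self) -> bool:
        low_conf_rate = self.low_confidence_seen / max(self.total_seen, 1)
        enough_traffic = self.total_seen >= self.min_traffic
        return (len(self.corrections) >= self.min_corrections
                or (enough_traffic and low_conf_rate > self.max_low_conf_rate))

loop = FeedbackLoop()
loop.record(sample={"amount": 50}, model_label="approve", confidence=0.80, human_label="reject")
print(loop.should_retrain())  # False until enough corrections or low-confidence traffic accumulate
```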

Related: How Entrepreneurs Can Use AI to Boost Their Business

How do you detect when the model goes off the rails?

By now, you understand the complexity of the different production and operational elements of AI, which are at the core of adopting AI at scale. In light of these complexities, it is crucial to be able to monitor the system, understand what goes on under the hood, surface insights, detect data drift (changes in the data distribution) and maintain a general view of the system's health. A common industry rule of thumb is that for every $1 you spend developing an algorithm, you will spend $100 to deploy and support it. Given the wealth of academic research and open-source frameworks (like PyTorch and TensorFlow), the process of building AI solutions is becoming democratized. Productizing AI at scale, on the other hand, is something only a few companies can achieve and master.

There's a common saying about deep learning: "When it fails, it fails silently." AI systems are fail-silent systems, for the most part. Advanced monitoring and observability mechanisms can shift them into fail-safe systems.
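As one illustration of how such monitoring can surface silent failures, the following Python sketch compares a feature's production distribution against its training distribution using a two-sample Kolmogorov-Smirnov test via SciPy. The threshold and variable names are illustrative assumptions; real monitoring stacks track many features, rolling windows and metrics.

```python
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

def detect_drift(training_feature: np.ndarray, production_feature: np.ndarray,
                 p_value_threshold: float = 0.01) -> bool:
    """Alert when a production feature's distribution no longer matches training."""
    _statistic, p_value = ks_2samp(training_feature, production_feature)
    return p_value < p_value_threshold  # a small p-value suggests the distributions differ

# Usage: run per feature on a rolling window of production traffic and raise an alert on drift.
train = np.random.normal(0.0, 1.0, 50_000)
prod_ok = np.random.normal(0.0, 1.0, 5_000)
prod_shifted = np.random.normal(0.8, 1.0, 5_000)   # simulated drift
print(detect_drift(train, prod_ok))        # typically False: same distribution
print(detect_drift(train, prod_shifted))   # True: the distribution has moved
```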

How do you build a responsible AI product?

The fifth element is the most complex to master. Given the latest advancements in AI regulation, particularly in the E.U., building responsible AI systems is becoming a necessity, not just for the sake of compliance, but to ensure companies conduct themselves in an ethical and responsible way. Fairness, trust, mitigating bias, explainability (the ability to explain the rationale behind decisions made by AI), and result repeatability and traceability are all key components of a responsible, real-world AI system. Companies that adopt AI at scale should have an ethics committee that can gauge the ongoing usage of the AI system and make sure it's "doing good."
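As one small, purely illustrative example of the traceability and repeatability side of this, the following Python sketch logs each decision with the model version, a fingerprint of the input, the confidence score and the explanation that accompanied it. The field names and model identifier are assumptions, not any company's actual scheme.

```python
import hashlib
import json
import time

MODEL_VERSION = "risk-model-2021.04"  # illustrative identifier, not a real product version

def audit_record(sample: dict, label: str, confidence: float, explanation: dict) -> str:
    """Capture everything needed to reproduce and explain a single decision later."""
    record = {
        "timestamp": time.time(),
        "model_version": MODEL_VERSION,
        "input_fingerprint": hashlib.sha256(
            json.dumps(sample, sort_keys=True).encode()).hexdigest(),
        "label": label,
        "confidence": confidence,
        "explanation": explanation,  # e.g. the top features that drove the decision
    }
    return json.dumps(record)  # in practice, append to a tamper-evident store

print(audit_record({"amount": 1200, "country": "DE"}, "flag_for_review", 0.91,
                   {"top_features": ["amount", "country"]}))
```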

AI should be used responsibly not because regulation demands it, but because it's the right thing to do as a community, as humans. Fairness is a value, and as people who care about our values, we need to incorporate them into our day-to-day development work and strategy.

Adopting AI at scale requires a lot of effort, but it is a massively rewarding process. Market trends indicate that 2021 will be a pivotal year for AI. The right people, partners and mindset can help make the leap from the lab to full-scale production. Business leaders who acquire a deep understanding of the technical and operational aspects of AI will have a head start in the race to adopt AI at scale.

Roey Mechrez

CEO and Co-founder of BeyondMinds

Roey Mechrez is the CEO and co-founder of BeyondMinds. As a leading AI pioneer and global visionary, he is passionate about fostering a data-driven culture, using AI as a transformational catalyst to address complex regulatory, operational and business-intelligence challenges.
