We have observed mounting peer pressure as businesses race to adopt artificial intelligence, often as a safeguard against being left behind or losing competitive advantage. The reality is a little different from what we often perceive when we read about AI. Tracking surveys that have polled decision makers in Europe, Asia, and the US over the last six years, we notice a considerable gap between the ambition to adopt AI and the action to implement such projects: the AI adoption gap. This seems paradoxical at first, as year after year business leaders rave about the potential of AI and expect that their business will start using AI in the next 3-5 years. Depending on the survey, AI ambition (expected adoption in the next 3-5 years) has risen from 53% in 2018 to over 85% in 2024*. Business leaders around the world seem excited about the potential of AI and expect to invest in the technology in the near future.

At the same time, when these leaders are asked about actual adoption of AI in their business today, reality looks a little more sombre. Whilst actual adoption rates have climbed from 8% in 2018 to around 22% in 2024, compared with AI ambitions we notice a stagnant adoption gap. The AI adoption gap, the difference between AI ambition and AI adoption, has been hovering around 50% over the last six years. In other words, and pretty much constant for several years now, significantly more businesses talk about adopting AI than actually invest in the technology.
The AI adoption gap might well be one symptom of the current hype around the technology. We believe this phenomenon is symptomatic of the confusion persisting in boardrooms. Driven by news that everyone around them is doing something with AI (or at least claiming the ambition to do so in the near future), leaders are left in a state of mild panic about being left behind if they don't take action. At the same time, the "Why" and the business case for AI remain obscure to them, because the drive for AI adoption stems from external peer pressure rather than from internal reflection on business requirements and the supply chain the business operates in.
This sombre reality of actual AI saturation in the market is often overlooked amid the current hype and receives much less attention in the press. If you haven't started your AI journey yet, you are not alone. The list of inhibitors that hold businesses back from adopting AI is long. We have summarised some of these inhibitors in a recent article in MHD magazine (pp. 58-59).
The question is: how can these inhibitors be overcome? To answer this question, a business should first establish why AI should be added to its portfolio of capabilities and what value the technology could add. We can assist your business in this process using our proprietary methodology, developed from research covering hundreds of companies that have successfully implemented AI solutions in supply chain management. This research has shown that drivers for AI adoption (we call them catalysts) can be grouped into three main clusters: data specific, process specific, or supply chain specific. Using our research-based methodology and an interview format, we can identify the drivers (the "Why") for AI adoption that are specific to your business. We can then link these to performance metrics to help you create a business case for AI, or identify gaps in your processes or technology stack that act as major inhibitors and might need to be remediated first.
*MHI/Deloitte Annual Industry Report 2018 - 2024, available online at https://og.mhi.org/publications/report
Part of the confusion that leaves many decision makers struggling to identify drivers for AI adoption and define clear use cases lies in the definition of the term artificial intelligence (AI) itself. AI is not a specific product or technology but rather a marketing phrase for a wide range of technologies today, many of them with a doubtful claim to the label "intelligent". Not everything that is labelled AI lives up to the expectations. But what can we expect when we read or hear "AI"?
Before we answer this question, we should take a short look at the history of AI to give some context to the developments that resulted in the paradigm-shifting impact the technology has on our society and industries today. The term AI was coined by John McCarthy in 1956. Together with Marvin Minsky, Claude Shannon and Nathaniel Rochester, McCarthy had organised the Dartmouth Summer Research Project on Artificial Intelligence, the first conference dedicated to AI, that same year. It brought together the most prominent experts in mathematics and computer science at the time. McCarthy is noteworthy not only for his vision and contribution in coining the term AI but even more so for developing the high-level programming language LISP, which was used for the first attempts to program AI models. The concept of intelligent machines had been introduced a few years earlier by Alan Turing in his landmark 1950 paper "Computing Machinery and Intelligence". Turing was one of the leading mathematicians of his time, achieving fame for cracking the Enigma code used by Nazi Germany during WW2 to encrypt its communications. Cracking the code gave the Allied forces a considerable advantage and is credited as a pivotal contribution to the outcome of the war. In his 1950 article, Turing introduced a conversational test, which he called the "imitation game", designed to determine whether a machine was able to think and thus could be considered intelligent. The Turing Test is still used as a benchmark for intelligence in AI models today.
The 1950s were a busy decade for AI. In 1959, A.L. Samuel introduced the first algorithm that allowed computers to learn, starting the field of machine learning. For the first time, computers were able to perform tasks they had not explicitly been programmed for. Machine learning algorithms allow computers to learn from data and thus acquire the knowledge to perform new tasks; in other words, computers that, in a sense, write their own programming. This was the dawn of a new era of intelligent machines. In the early 1960s a subfield of machine learning was developed that became generally referred to as deep learning. Deep learning algorithms rely on artificial neurons, loosely copying the way the human brain learns and processes information. These so-called artificial neural networks are organised in layers, with each neuron passing information to the neurons in the neighbouring layers, creating a web of connected pieces of information, very much like the human brain. Conceptually these algorithms worked, but limitations in processing power, data, and the capacity to store and transmit this data quickly proved to be critical barriers to applying deep artificial neural networks to real-world problems. So in the late 1960s, scientists came to the conclusion that deep learning machines, whilst fascinating conceptually, could not practicably be built at scale. Interest in AI (and with it the funding for research) faded over the next few decades.
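For readers who like to see the idea in code, the toy example below is a minimal sketch of an artificial neural network in Python (using numpy): a few example inputs flow through two layers of connected "neurons", and the connection weights are nudged repeatedly until the network's outputs move towards the target values. The data, layer sizes and learning rate are arbitrary choices for illustration, not any particular historical algorithm.

```python
import numpy as np

# A minimal sketch of a tiny artificial neural network: two layers of
# "neurons", each neuron connected to every neuron in the next layer by
# a weight that is adjusted as the network learns from examples.
rng = np.random.default_rng(0)

X = rng.random((4, 3))                  # 4 example inputs with 3 features each
y = np.array([[0.], [1.], [1.], [0.]])  # target outputs to learn (made up)

W1 = rng.random((3, 5))                 # weights: input layer -> hidden layer
W2 = rng.random((5, 1))                 # weights: hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                   # simple learning loop
    hidden = sigmoid(X @ W1)            # signals flowing through the hidden layer
    output = sigmoid(hidden @ W2)       # network's current predictions
    error = y - output                  # how far off the network still is
    # propagate the error backwards and nudge the connection weights
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ d_output * 0.5
    W1 += X.T @ d_hidden * 0.5

print(np.round(output, 2))              # predictions move towards the targets
```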
It wasn't until the 1990s that AI made headlines again. In 1997, IBM's Deep Blue, a purpose-built chess computer relying on massive brute-force search rather than on learning, became the first machine to beat a reigning world champion at the game, none other than Garry Kasparov. AI was back in the spotlight. In 2017, a paper by a group of Google researchers led by A. Vaswani ("Attention Is All You Need") opened the next chapter for AI: the era of transformer models. Generative pre-trained transformers (GPT) are a new generation of algorithms that power most of the generative AI applications we see today. The recent hype around intelligent machines, powered by the development of transformer models, has also created considerable confusion though. Not everything that is labelled AI can live up to the expectations. It is therefore helpful to categorise AI applications into four clusters: descriptive, predictive, prescriptive and cognitive AI. Each cluster has different features and capabilities and may be useful in different applications.
Descriptive or diagnostic AI applications capture and process information, for example for image recognition or anomaly detection. These applications learn from large amounts of structured data (data that has been labelled by humans, like "this image shows a dog"). The application finds patterns in this data, which enables it to analyse new data sets and provide the user with information about them. Applications of descriptive AI are common these days, for example the photo app on your mobile phone that allows you to search for all photos that contain a dog or a specific person (provided you have trained the app sufficiently). In manufacturing, diagnostic AI is widely used for quality control, for example scanning every part at the end of the production line to detect defects. In medicine, diagnostic AI applications have revolutionised radiology. Having learned from millions of images, advanced systems can spot or predict conditions like cancer much earlier and with higher accuracy than human radiologists.
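As an illustration of the principle of learning from labelled examples, the short Python sketch below trains an off-the-shelf classifier (scikit-learn's logistic regression on its bundled handwritten-digit images) and then lets it describe images it has never seen. It is a toy stand-in for the far larger models used in photo apps or radiology, chosen purely because it is small and runnable.

```python
# Learning from labelled examples: the principle behind descriptive/diagnostic AI.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

digits = load_digits()   # small 8x8 images of handwritten digits, labelled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Scale the pixel values, then fit a simple classifier to the labelled examples.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# The trained model can now describe images it has never seen before.
print("accuracy on unseen images:", round(model.score(X_test, y_test), 3))
```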
Descriptive AI has its roots in natural language processing (NLP), the ability of machines to understand and interpret human language. All voice-to-text and text-to-voice applications are powered by NLP models that have learned from millions of structured data samples in order to understand human language. Today a specific form of NLP model, called conversational AI, has become very popular in the form of chatbots or automated service agents. This field is evolving rapidly, and caution is required as not all of these agents actually constitute conversational AI. Many chatbots are simple pre-programmed decision-tree models that can only handle a certain number of pre-defined scenarios; as such, they don't deserve the label "intelligent" because they do not learn.
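To make the contrast concrete, here is a deliberately simple sketch of such a scripted bot: a hard-coded set of keyword rules that covers a handful of scenarios and learns nothing from the conversations it has. The keywords and replies are invented for illustration.

```python
# A scripted "chatbot": pre-programmed rules, no learning involved.
RULES = {
    "order":  "Please enter your order number and I will check its status.",
    "return": "Returns can be lodged within 30 days via the returns portal.",
    "hours":  "Our support team is available Monday to Friday, 9am to 5pm.",
}

def scripted_bot(message: str) -> str:
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    # Anything outside the pre-defined scenarios falls through to a human.
    return "Sorry, I did not understand that. Let me connect you to an agent."

print(scripted_bot("Where is my order?"))      # handled by a canned reply
print(scripted_bot("Can you reroute my truck?"))  # outside the script
```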
Using machine learning, predictive AI applications analyse and process information to anticipate future events. Also learning from structured data, these applications find patterns that allow them to make predictions about the future. They are widely used in forecasting and demand planning, for example to determine the required inventory for a particular distribution centre or retail outlet. They can also be used in asset management to predict maintenance cycles, allowing downtime to be planned for servicing machines or mobile assets before unplanned outages occur.
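A minimal sketch of the idea, with invented numbers: fit a simple model to past weekly demand and project it forward as the basis for an inventory decision. Real demand-planning systems use far richer data and models; this is only meant to show the predictive principle.

```python
# Predictive AI in miniature: learn a demand pattern, then project it forward.
import numpy as np
from sklearn.linear_model import LinearRegression

weeks = np.arange(1, 13).reshape(-1, 1)              # past 12 weeks
demand = np.array([120, 125, 130, 128, 140, 138,
                   150, 149, 160, 158, 170, 172])    # units shipped per week (made up)

model = LinearRegression().fit(weeks, demand)        # learn the demand trend

next_weeks = np.arange(13, 17).reshape(-1, 1)        # 4 weeks ahead
forecast = model.predict(next_weeks)
print("forecast units per week:", forecast.round())  # basis for inventory levels
```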
Prescriptive AI covers optimisation models that can make recommendations and automate processes. These applications go beyond purely predictive ones by making recommendations or presenting several options to the user. Such a model would not only recommend how much inventory to keep in a particular retail outlet but also where within the outlet to display it to maximise sales. Omni-channel prescriptive optimisation models can also recommend how to allocate advertising spend across different social media channels to maximise the impact of each marketing dollar spent.
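As a toy illustration of the prescriptive step, the sketch below uses a linear programme (via scipy) to allocate a fixed advertising budget across channels so as to maximise an assumed impact per dollar. The channel names, impact figures and budget limits are invented; a real model would be calibrated to your own data.

```python
# Prescriptive optimisation in miniature: recommend how to allocate a budget.
from scipy.optimize import linprog

channels = ["social_a", "social_b", "search"]
impact_per_dollar = [1.8, 1.4, 1.1]      # assumed return per dollar spent

# linprog minimises, so negate the impact to maximise it instead.
c = [-i for i in impact_per_dollar]

A_ub = [[1, 1, 1]]                       # total spend across all channels
b_ub = [100_000]                         # overall budget
bounds = [(0, 60_000)] * 3               # cap per channel

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
for name, spend in zip(channels, result.x):
    print(f"recommended spend on {name}: ${spend:,.0f}")
```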
Finally, cognitive AI covers smart applications that act autonomously or generate new content that did not exist before. These applications typically learn from large amounts of unstructured data (data that is not labelled) to find answers and take action. Cognitive AI is sometimes called artificial general intelligence (AGI) or "strong AI". Significant progress has been made in this area of AI research over the last few years, especially around generative AI and transformer models, as discussed above. Many researchers argue, though, that truly intelligent machines and AGI remain elusive. GPT models rely on the same principles that underlie predictive AI models, that is, learning from large amounts of data to make predictions. In other words, GPT models are not intelligent or able to reason like a human; they are just very good at making predictions that give us the impression they are. For now, it seems we are safe from machines taking over the world.
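To illustrate what "learning from data to make predictions" means at its simplest, the sketch below counts which word follows which in a tiny made-up text and then "predicts" the next word by picking the most frequent continuation. GPT-style transformer models are vastly more sophisticated, but the underlying principle, predicting the next token from patterns learned in data, is the same.

```python
# Next-word prediction in miniature: learn continuations from example text.
from collections import Counter, defaultdict

corpus = (
    "the truck leaves the depot the truck arrives at the warehouse "
    "the order leaves the warehouse"
).split()

# Count, for every word, which words follow it in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    # Return the continuation observed most often for this word.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))     # the word most often seen after "the" in the corpus
print(predict_next("truck"))   # the continuation observed most often for "truck"
```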