Recession on the horizon? Based on our own research and research by Bain & Company, Harvard Business Review, Deloitte, Gartner and McKinsey, we formulate 7 actions to accelerate your profitability during and straight after a recession. Figure 1 shows how big the difference is between winners and losers. This does not only apply to EBIT; after a recession, winning companies also make significant strides in market share.
THE 7 MOST IMPORTANT ACTIONS TO BE AMONGST THE WINNERS
The key to success is preparation. Although ‘preparation’ is actually the wrong choice of words: winners are winners because they structurally sail close to the wind and have a clear vision. They are proactive, fast and decisive. They are financially prudent, so they can absorb setbacks and seize opportunities as they arise. The seven actions below clearly indicate what that means in concrete terms.
1. CLEAR VISION AND ORGANISATIONAL ALIGNMENT
What will your business look like in three to five years? And in one year? What are the ‘vital few’ strategic initiatives and what is the path from strategy to concrete actions? Not only your leadership team needs to be committed and aligned; this applies to your entire organisation. Strategy Deployment is a powerful tool to maintain alignment and focus, monitor progress against plan and make rapid and appropriate adjustments when conditions change.
2. UNDERSTAND YOUR STRATEGIC AND FINANCIAL POSITION
Mapping out your plans depends on your strategic and financial position (see figure 2).
3. FREE UP FINANCIAL RESOURCES
The focus is on aligning your spending with your vision and strategic initiatives; not blunt cost cutting. Zero-based Alignment is a good way to select and make lean those activities that are fully aligned. Non-aligned activities are stopped. The financial resources you free up can strengthen your balance sheet and/or support your investment agenda.
Currently we face high inflation. Supply chain problems and capacity bottlenecks are responsible for some of it, but their effect will fade. Another cause is the sharp rise in energy costs as a result of the conflict in Ukraine and the resulting economic sanctions. In time, part of these costs will come back down, but no longer to the old level. Costs will remain structurally higher due to the urgency of the climate-change-driven energy transition. Furthermore, too much money is in circulation, and its effect on inflation will also last longer.
The current high inflation can turn margins negative very quickly. Speed and flexibility are called for, and selling prices have to go up. Raising prices in one go is difficult; it is better to do this in regular small steps. The possibilities depend on the strength of your brand and the market your company operates in. Make sure you retain the right customers in the process.
4. RETAIN YOUR CUSTOMERS
Retaining customers is much cheaper than acquiring new ones. The margin impact is significant. Explore ways to help your customers through the economic downturn and particularly in the early upturn when the opportunities start to arise. Winners have already created the “currency” to invest. Just make sure you target the right customers.
5. PLAN FOR VARIOUS SCENARIOS
No one knows when and how a downturn will fully unfold and when the economy will start growing again. The winners have developed different scenarios, and they know how they should act in each scenario. This allows them to act quickly and decisively.
6. ACT QUICKLY AND DECISIVELY
Winning companies act quickly and decisively, in the downturn and particularly in the early upswing when the opportunities begin to emerge. They have already unlocked the financial resources to invest.
7. EMBRACE TECHNOLOGY
Not all companies have been equally aggressive in adopting new technologies. There are many opportunities here for improving efficiency or generating more value and thereby gaining a competitive advantage.
To emphasise the importance of technology even more, Figure 3 shows the development of total shareholder return before and after the recession of 2009/2010. It is clear to see how winners break away from the rest.
Harvard Business Review found that 70% of companies failed to regain their pre-recession growth rate in the 3 years following the recession. Only 5% of companies manage to develop a growth rate that is consistently above that of their competitors (quarter-over-quarter simultaneous growth of sales and profit margin).
Digital leaders are 3x more likely to achieve revenue and margin growth that exceeds the industry average!
Artificial Intelligence is hot. We can hardly do anything without coming into contact, consciously or unconsciously, with forms of Artificial Intelligence. And it is becoming increasingly important. This article is an introduction to the field of Artificial Intelligence. It starts with a definition and then explores the different sub-specialties, complete with description and some applications.
WHAT IS ARTIFICIAL INTELLIGENCE?
Artificial Intelligence (AI) uses computers and machines to imitate people’s problem-solving and decision-making skills. One of the leading textbooks in the field of AI is Artificial Intelligence: A Modern Approach (link resides outside Axisto) by Stuart Russell and Peter Norvig. In it they elaborate four possible goals or definitions of AI.
Human approach:
Systems that think like people
Systems that behave like people
Rational approach:
Systems that think rationally
Systems that act rationally
Artificial intelligence plays a growing role in the (Industrial) Internet of Things ((I)IoT), among other areas, where (I)IoT platform software can provide integrated AI capabilities.
SUB-SPECIALTIES WITHIN ARTIFICIAL INTELLIGENCE
There are several subspecialties that belong to the domain of Artificial Intelligence. While there is some interdependence between many of these specialties, each has unique characteristics that contribute to the overarching theme of AI. The Intelligent Automation Network (link resides outside Axisto) distinguishes seven subspecialties, figure 1.
Each subspecialty is further explained below.
MACHINE LEARNING
Machine learning is the field that focuses on using data and algorithms to enable computers to imitate the way humans learn, without being explicitly programmed, while gradually improving accuracy. The article “Axisto – an introduction to Machine Learning” takes a closer look at this specialty.
MACHINE LEARNING AND PREDICTIVE ANALYTICS
Predictive analytics and machine learning go hand in hand. Predictive analytics encompasses a variety of statistical techniques, including machine learning algorithms. Statistical techniques analyse current and historical facts to make predictions about future or otherwise unknown events. These predictive analytics models can be trained over time to respond to new data.
The defining functional aspect of these engineering approaches is that predictive analytics provides a predictive score (a probability) for each “individual” (customer, employee, patient, product SKU, vehicle, part, machine, or other organisational unit) in order to determine, inform or influence organisational processes involving large numbers of “individuals”. Applications can be found in, for example, marketing, credit risk assessment, fraud detection, manufacturing, healthcare and government activities, including law enforcement.
Unlike other Business Intelligence (BI) technologies, predictive analytics is forward-looking. Past events are used to anticipate the future. Often the unknown event of significance lies in the future, but predictive analytics can be applied to any type of “unknown”, be it past, present or future. Examples include identifying suspects after a crime has been committed, or detecting credit card fraud as it occurs. The core of predictive analytics is capturing relationships between explanatory variables and the predicted variables from past events, and exploiting them to predict the unknown outcome. Of course, the accuracy and usefulness of the results strongly depend on the level of data analysis and the quality of the assumptions.
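As a minimal sketch of how such a predictive score per “individual” can be produced, the example below trains a simple model on historical data and outputs a probability for each new case. It assumes Python with pandas and scikit-learn installed; the customer attributes and churn outcome are invented purely for illustration.

```python
# Minimal sketch: a predictive score (probability) per "individual".
# Assumes pandas and scikit-learn; all column names and values are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Historical data: explanatory variables plus the known outcome (e.g. churned yes/no).
history = pd.DataFrame({
    "orders_last_year": [12, 3, 7, 1, 25, 2, 9, 4],
    "avg_order_value":  [250, 80, 150, 40, 600, 30, 210, 90],
    "complaints":       [0, 2, 1, 3, 0, 4, 0, 2],
    "churned":          [0, 1, 0, 1, 0, 1, 0, 1],
})

X = history[["orders_last_year", "avg_order_value", "complaints"]]
y = history["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# A probability per individual: the predictive score used to inform the process.
scores = model.predict_proba(X_test)[:, 1]
print(scores)
```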
Machine Learning and predictive analytics can make a significant contribution to any organisation, but implementation without thinking about how they fit into day-to-day operations will severely limit their ability to deliver relevant insights.
To extract value from predictive analytics and machine learning, it’s not just the architecture that needs to be in place to support these solutions. High-quality data must also be available to nurture them and help them learn. Data preparation and quality are important factors for predictive analytics. Input data can span multiple platforms and contain multiple big data sources. To be usable, this data must be centralised, unified and in a coherent format.
To this end, organisations must develop a robust approach to monitor data governance and ensure that only high-quality data is captured and stored. Furthermore, existing processes need to be adapted to include predictive analytics and machine learning as this will enable organisations to improve efficiency at every point in the business. Finally, they need to know what problems they want to solve in order to determine the best and most appropriate model.
NATURAL LANGUAGE PROCESSING (NLP)
Natural language processing is the ability of a computer program to understand human language as it is spoken and written – also known as natural language. NLP is a way for computers to analyse and extract meaning from human language so that they can perform tasks such as translation, sentiment analysis, and speech recognition.
This is difficult, as it involves a lot of unstructured data. The style in which people speak and write (“tone of voice”) is unique to individuals and is constantly evolving to reflect popular language use. Understanding context is also a problem – something that requires semantic analysis from machine learning. Natural Language Understanding (NLU) is a branch of NLP that picks up these nuances through machine reading comprehension rather than simply understanding the literal meanings. The purpose of NLP and NLU is to help computers understand human language well enough that they can converse naturally.
All these functions get better the more we write, speak and talk to computers: they are constantly learning. A good example of this iterative learning is a feature like Google Translate that uses a system called Google Neural Machine Translation (GNMT). GNMT is a system that works with a large artificial neural network to translate more smoothly and accurately. Instead of translating one piece of text at a time, GNMT tries to translate entire sentences. Because it searches millions of examples, GNMT uses a broader context to derive the most relevant translation.
The following is a selection of tasks in natural language processing (NLP). Some of these tasks have direct real-world applications, while others more often serve as sub-tasks used to solve larger tasks.
Optical Character Recognition (OCR)
Determining the text associated with a given image representing printed text.
Speech Recognition
Determining the textual representation of speech on the basis of a sound fragment of one or more speaking persons. This is the opposite of text-to-speech and is an extremely difficult problem. In natural speech, there are hardly any pauses between consecutive words, so speech segmentation is a necessary subtask of speech recognition (see ‘Word Segmentation’ below). In most spoken languages, the sounds representing successive letters merge into one another in a process called coarticulation, so the conversion of the analogue signal to discrete characters can be a very difficult process. Since people speak the same language with different accents, the speech recognition software must also be able to recognise a wide variety of inputs as identical to each other in terms of their textual equivalents.
Text-to-Speech
The elements of a given text are transformed and a spoken representation is produced. Text-to-speech can be used to help the visually impaired.
Word Segmentation (Tokenization)
Splitting a piece of continuous text into individual words. For a language like English, this is quite trivial, as words are usually separated by spaces. However, some written languages such as Chinese, Japanese and Thai do not mark word boundaries in such a way, and in those languages text segmentation is an important task that requires knowledge of the vocabulary and morphology of the words in the language. Word segmentation is also applied in, for example, preparing text for data mining.
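A minimal sketch of word segmentation for a space-delimited language such as English is shown below; languages such as Chinese or Japanese would instead need dictionary- or model-based segmenters. The regular expression is purely illustrative.

```python
import re

text = "Splitting a piece of continuous text into individual words is quite trivial in English."

# Naive whitespace segmentation: adequate for space-delimited languages such as English.
print(text.split())

# A slightly more careful tokenizer that also separates punctuation from words.
print(re.findall(r"[A-Za-z0-9]+|[^\sA-Za-z0-9]", text))
```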
Document AI
A Document AI platform sits on top of NLP technology, allowing users with no previous experience of artificial intelligence, machine learning or NLP to quickly train a computer to extract the specific data they need from different document types. NLP-powered Document AI enables non-technical teams, such as lawyers, business analysts and accountants, to quickly access information hidden in documents.
Grammatical Error Correction
Grammatical error detection and correction involves a wide range of problems at all levels of linguistic analysis (phonology/orthography, morphology, syntax, semantics, pragmatics). Grammatical error correction has a major impact because it affects hundreds of millions of people who use or learn a second language. With the development of powerful neural language models such as GPT-2, it can be regarded as largely solved since 2019 in terms of spelling, morphology, syntax and certain aspects of semantics. Various commercial applications are available in the market.
Machine Translation
Automatically translating text from one human language to another is one of the most difficult problems: many different kinds of knowledge are required to do it properly, such as grammar, semantics and real-world facts.
Natural Language Generation (NLG)
Converting information from computer databases or semantic intent into human readable language.
Natural Language Understanding (NLU)
NLU concerns the understanding of human language, such as Dutch, English and French, which allows computers to understand commands without the formalised syntax of computer languages. NLU also allows computers to communicate back to people in their own language. The main goal of NLU is to create chat and voice-enabled bots that can communicate with the public unsupervised: answering questions and determining the answer to a question posed in human language. Typical questions have a specific correct answer, such as “What is the capital of Finland?”, but sometimes open questions are also considered (such as “What is the meaning of life?”).

How does understanding natural language work? NLU analyses data to determine its meaning by using algorithms to reduce human speech to a structured ontology, a data model made up of semantic and pragmatic definitions. Two fundamental concepts of NLU are intent recognition and entity recognition. Intent recognition is the process of identifying user sentiment in input text and determining its purpose. This is the first and most important part of NLU, as it captures the meaning of the text. Entity recognition is a specific type of NLU that focuses on identifying the entities in a message and then extracting key information about those entities. There are two types of entities: named entities and numeric entities. Named entities are grouped into categories, such as people, businesses and locations. Numeric entities are recognised as numbers, currency and percentages.
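To make intent and entity recognition concrete, here is a deliberately simplified, rule-based sketch. Real NLU systems use trained models rather than hand-written rules; the intents, patterns and location list below are invented purely for illustration.

```python
import re

# Hypothetical intents and entity lists, purely for illustration.
INTENT_PATTERNS = {
    "ask_capital": re.compile(r"\bcapital of\b", re.IGNORECASE),
    "greeting":    re.compile(r"\b(hello|hi|good morning)\b", re.IGNORECASE),
}
KNOWN_LOCATIONS = {"Finland", "France", "Netherlands"}

def understand(utterance: str) -> dict:
    """Return a crude intent plus any recognised named/numeric entities."""
    intent = next((name for name, pattern in INTENT_PATTERNS.items()
                   if pattern.search(utterance)), "unknown")
    named = [word.strip("?.,") for word in utterance.split()
             if word.strip("?.,") in KNOWN_LOCATIONS]
    numeric = re.findall(r"\d+(?:\.\d+)?%?", utterance)
    return {"intent": intent, "named_entities": named, "numeric_entities": numeric}

print(understand("What is the capital of Finland?"))
# {'intent': 'ask_capital', 'named_entities': ['Finland'], 'numeric_entities': []}
```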
Text-to-picture generation
Given a textual description, generating an image that matches the description.
Natural language processing – understanding people – is key to AI justifying its claim to intelligence. New deep learning models are constantly improving the performance of AI in Turing tests. Google’s Director of Engineering, Ray Kurzweil, predicts AI will “reach human levels of intelligence by 2029” (link resides outside Axisto).
By the way, what people say is sometimes very different from what people do. Understanding human nature is by no means easy. More intelligent AIs expand the perspective of artificial consciousness, opening up a new field of philosophical and applied research.
SPEECH
Speech recognition is also known as automatic speech recognition (ASR), computer speech recognition or speech-to-text. It is a capability that uses natural language processing (NLP) to convert human speech into a written format. Many mobile devices incorporate speech recognition into their systems to perform voice searches, e.g. Siri from Apple.
An important area of speech in AI is speech-to-text, the process of converting audio and speech into written text. It can help visually or physically impaired users and can promote safety through hands-free operation. Speech-to-text systems rely on machine learning algorithms that learn from large datasets of human voice samples to reach adequate usability. Speech-to-text has value for businesses because it can help transcribe video or phone calls. Text-to-speech converts written text into audio that sounds like natural speech. These technologies can be used to help people with speech disorders. Polly from Amazon is an example of a technology that uses deep learning to synthesise human-sounding speech for purposes such as e-learning and telephony.
Speech recognition is a task in which speech is received by a system through a microphone and checked against a large vocabulary using pattern recognition. When a word or phrase is recognised, the system responds with the corresponding verbal response or performs a specific task. Examples of speech recognition include Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana and Google’s Google Assistant. These products must be able to recognise a user’s speech input and assign the correct speech output or action. Even more sophisticated are attempts to generate speech from brain waves for those who cannot speak or have lost the ability to speak.
EXPERT SYSTEMS
An expert system uses a knowledge base about its application domain and an inference engine to solve problems that normally require human intelligence. An inference engine is the part of the system that applies logical rules to the knowledge base to derive new information. Examples of expert system applications include financial management, business planning, credit authorisation, computer installation design and airline planning. For example, an expert traffic management system can help design smart cities by acting as a “human operator” to relay traffic feedback for appropriate routes.
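A toy illustration of the idea follows: a hand-written knowledge base of facts and IF-THEN rules, and a forward-chaining inference engine that keeps deriving new facts until nothing more can be concluded. Real expert systems are far richer; the traffic rules below are invented.

```python
# Minimal forward-chaining inference engine over IF-THEN rules (illustrative only).
facts = {"heavy_traffic_on_A1", "accident_reported"}

rules = [
    ({"heavy_traffic_on_A1"},                     "expect_delay_on_A1"),
    ({"expect_delay_on_A1", "accident_reported"}, "reroute_via_A2"),
]

# Keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # includes the derived recommendation 'reroute_via_A2'
```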
A limitation of expert systems is that they lack the common sense people have, such as an understanding of the limits of their skills and how their recommendations fit into the bigger picture. They lack the self-awareness of people. Expert systems are not a substitute for decision makers because they lack human capabilities, but they can dramatically ease the human work required to solve a problem.
PLANNING, SCHEDULING AND OPTIMISATION
AI planning is the task of determining how a system can best achieve its goals. It is choosing sequential actions that have a high probability of changing the state of the environment incrementally in order to achieve a goal. These types of solutions are often complex. In dynamic environments with constant change, they require frequent trial-and-error iteration to fine-tune.
Planning is the making of schedules, or temporal assignments of activities to resources, taking into account goals and constraints. It determines the sequence and timing of the actions an algorithm generates. These actions are typically performed by intelligent agents, autonomous robots and unmanned vehicles. When designed properly, such plans can solve organisational scheduling problems in a cost-effective way. Optimisation can be achieved by using one of the most popular ML and deep learning optimisation strategies: gradient descent. This is used to train a machine learning model by changing its parameters iteratively to minimise a particular function towards a local minimum.
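As a concrete illustration of gradient descent, the sketch below minimises a simple quadratic function by repeatedly stepping against its gradient. The function, starting point and learning rate are chosen purely for illustration.

```python
# Gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
def gradient(x: float) -> float:
    return 2.0 * (x - 3.0)   # derivative of (x - 3)^2

x = 0.0                      # initial parameter value
learning_rate = 0.1

for step in range(100):
    x -= learning_rate * gradient(x)   # move against the gradient

print(round(x, 4))  # converges towards 3.0
```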
ROBOTICS
Artificial Intelligence is at one end of the Intelligent Automation spectrum, while Robotic Process Automation (RPA), software robots that mimic human actions, is at the other end. One is concerned with replicating how people think and learn, while the other is concerned with replicating how people do things. Robotics develops complex sensor-motor functions that enable machines to adapt to their environment. Robots can sense the environment using computer vision.
The main idea of robotics is to make robots as autonomous as possible through learning. Although robots do not achieve human-like intelligence, there are still many successful examples of robots performing autonomous tasks such as carrying boxes and picking up and putting down objects. Some robots can learn decision making by associating an action with a desired outcome. Kismet, a robot at M.I.T.’s Artificial Intelligence Lab, learns to recognise both body language and voice and to respond appropriately. This MIT video (link resides outside Axisto) gives a good impression.
COMPUTER VISION
Computer vision is an area of AI that trains computers to capture and interpret information from image and video data. By applying machine learning (ML) models to images, computers can classify and respond to objects, such as facial recognition to unlock a smartphone or approve intended actions. When computer vision is coupled with Deep Learning, it combines the best of both worlds: optimised performance combined with accuracy and versatility. Deep Learning offers IoT developers greater accuracy in object classification.
Machine vision goes one step further by combining computer vision algorithms with image registration systems to better control robots. An example of computer vision is a computer that can “see” the unique series of stripes on a universal product code, scan it and recognise it as a unique identifier. Optical Character Recognition (OCR) uses image recognition of letters to decipher printed paper records and/or handwriting, despite the wide variety of fonts and handwriting variations.
WHAT IS MACHINE LEARNING?
This article introduces machine learning and directly related concepts.
Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed. It is a subset of artificial intelligence (AI) and computer science that focuses on the use of data and algorithms to imitate the way humans learn, gradually improving its accuracy as it does so. By using statistical learning (link resides outside Axisto) and optimisation methods, computers can analyse datasets and identify patterns in the data. Machine learning techniques leverage data mining to identify historic trends that inform future models.
According to the University of California, Berkeley, the typical supervised machine learning algorithm consists of three main components:
A decision process: A recipe of calculations or other steps that takes in the data and returns a guess at the kind of pattern in the data that the algorithm is looking to find.
An error function: A method of measuring how good the guess was by comparing it to known examples (when they are available). Did the decision process get it right? If not, how do you quantify how bad the miss was?
An updating or optimisation process: The algorithm looks at the miss and then updates how the decision process comes to the final decision, so that the miss will not be as great the next time. The sketch below puts these three components together for a one-parameter model.
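A minimal sketch, assuming Python; the data points, model form and learning rate are invented purely to show how the decision process, error function and update step interact.

```python
# Decision process, error function and update step for a one-parameter model y = w * x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (x, known answer) pairs

w = 0.0              # initial guess for the parameter
learning_rate = 0.01

for epoch in range(200):
    for x, y_true in data:
        y_guess = w * x                    # 1. decision process: make a guess
        error = y_guess - y_true           # 2. error function: how bad was the miss?
        w -= learning_rate * error * x     # 3. update: adjust so the next miss is smaller

print(round(w, 3))  # approaches roughly 2.0
```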
Machine learning is a key component in the growing field of data science. Using statistical methods, algorithms are trained to make classifications or predictions and uncover key insights from data.
HOW DOES A MACHINE LEARNING ALGORITHM LEARN?
The technology company Nvidia (link resides outside Axisto) distinguishes four learning models that are defined by the level of human intervention:
Supervised learning: If you are learning a task under supervision, someone is with you, prompting you and judging whether you’re getting the right answer. Supervised learning is similar in that it uses a full set of labelled* data to train an algorithm.
Unsupervised learning: In unsupervised learning, a deep learning model is handed a dataset without explicit instructions on what to do with it. The training dataset is a collection of examples without a specific desired outcome or correct answer. The neural network then attempts to automatically find structure in the data by extracting useful features and analysing how the data is organised. It learns by looking for patterns.
Semi-supervised learning: Semi-supervised learning is, for the most part, just what it sounds like: a training dataset with both labelled and unlabelled data. This method is particularly useful in situations where extracting relevant features from the data is difficult or where labelling examples is a time-intensive task for experts.
Reinforcement learning: In this kind of machine learning, AI agents are trying to find the optimal way to accomplish a particular goal or improve the performance of a specific task. If the agent takes action that moves the outcome towards the goal, it receives a reward. To make its choices, the agent relies both on learnings from past feedback and on exploration of new tactics that may present a larger payoff. The overall aim is to predict the best next step that will earn the biggest final reward. Just as the best next move in a chess game may not help you eventually win the game, the best next move the agent can make may not result in the best final result. Instead, the agent considers the long-term strategy to maximise the cumulative reward. It is an iterative process: the more rounds of feedback, the better the agent’s strategy becomes. This technique is especially useful for training robots to make a series of decisions for tasks such as steering an autonomous vehicle or managing inventory in a warehouse.
* Fully labelled means that each example in the training dataset is tagged with the answer the algorithm should produce on its own. So a labelled dataset of flower images would tell the model which photos were of roses, daisies and daffodils. When shown a new image, the model compares it to the training examples to predict the correct label.
In all four learning models, the algorithm learns from datasets based on human rules or knowledge. The sketch below illustrates the reward-driven loop of reinforcement learning in its simplest form.
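This is a minimal epsilon-greedy sketch for a two-action problem: the agent balances exploring new actions with exploiting the action it currently estimates to be best, and updates its estimates from the rewards it receives. The reward probabilities are invented; real applications such as vehicle control use far richer state representations.

```python
import random

random.seed(0)
# Hypothetical problem: two actions with unknown reward probabilities.
TRUE_REWARD_PROB = {"action_a": 0.3, "action_b": 0.7}

value_estimate = {"action_a": 0.0, "action_b": 0.0}
counts = {"action_a": 0, "action_b": 0}
epsilon = 0.1   # fraction of the time the agent explores a random action

for step in range(1000):
    if random.random() < epsilon:
        action = random.choice(list(value_estimate))          # explore
    else:
        action = max(value_estimate, key=value_estimate.get)  # exploit best estimate
    reward = 1.0 if random.random() < TRUE_REWARD_PROB[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate towards the observed reward.
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]

print(value_estimate)  # estimates approach the true reward probabilities
```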
In the domain of artificial intelligence, you will come across the terms machine learning (ML), deep learning (DL) and neural networks (artificial neural networks – ANN). Artificial intelligence and machine learning are often used interchangeably, as are machine learning and deep learning. But, in fact, these terms are progressive subsets within the larger AI domain, as illustrated in Figure 1.
Therefore, when discussing machine learning, we must also consider deep learning and artificial neural networks.
THE DIFFERENCE BETWEEN MACHINE LEARNING AND DEEP LEARNING IS THE WAY AN ALGORITHM LEARNS
Unlike machine learning, deep learning does not require human intervention to process data. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required, which means it can be used for larger data sets.
“Non-deep” machine learning is more dependent on human intervention for the learning process to happen because human experts must first determine the set of features so that the algorithm can understand the differences between data inputs, and this usually requires more structured data for the learning process.
“Deep” machine learning can leverage labelled datasets, also known as supervised learning, to inform its algorithm. However, it does not necessarily require a labelled dataset. It can ingest unstructured data in its raw form (e.g., text and images), and it can automatically determine the set of features that distinguishes between different categories of data. Figure 2 illustrates the difference between machine learning and deep learning.
Deep learning uses multiple layers to progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human, such as digits or letters or faces.
In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image-recognition application, the raw input may be a matrix of pixels. The first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognise that the image contains a face. Importantly, a deep learning process can learn on its own which features to place optimally in which level. This does not fully eliminate the need for manual tuning; for example, varying the number of layers and the layer sizes can provide different degrees of abstraction. The word “deep” in “deep learning” refers to the number of layers through which the data is transformed.
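A minimal sketch of what “deep” means in practice: a stack of layers that the input is transformed through. It assumes PyTorch is installed; the layer sizes and the random “image” are arbitrary and purely illustrative.

```python
import torch
from torch import nn

# "Deep" refers to the number of layers the data is transformed through.
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),   # lower layers: low-level features (e.g. edges)
    nn.Linear(128, 64),  nn.ReLU(),   # middle layers: combinations of edges
    nn.Linear(64, 10),                # top layer: task-level concepts (e.g. digits 0-9)
)

fake_image = torch.randn(1, 784)      # a flattened 28x28 "image" of random pixels
print(model(fake_image).shape)        # torch.Size([1, 10])
```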
NEURAL NETWORKS
An artificial neural network (ANN) is a computer system designed to work by classifying information in the same way a human brain does, while still retaining the innate advantages they hold over us, such as speed, accuracy and lack of bias. For example, it can be taught to recognise images and classify these according to elements they contain. Essentially, it works on a system of probability – based on data fed to it, it can make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables “learning” – by sensing or being told whether its decisions are right or wrong, it modifies the approach it takes in the future.
Artificial neural networks learn multiple levels of detail, or representations, of the data. Through these different layers, information passes from low-level parameters to higher-level parameters. These different levels correspond to various levels of data abstraction, leading to learning and recognition.

An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal from one neuron to another. The receiving (postsynaptic) neuron can process the signal(s) and then signal to the neurons connected to it downstream. Neurons may have a state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal they send downstream. Typically, neurons are organised in layers, as illustrated in Figure 3. Different layers can perform various kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times.
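The signal flow between layers can be written out directly. The sketch below pushes one input through two layers of weighted connections with a sigmoid activation; the weights are random, purely to illustrate the mechanics rather than a trained network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes signals into the 0-1 range

rng = np.random.default_rng(0)
x = rng.random(3)                 # input layer: three "neurons"
W1 = rng.random((3, 4))           # weights between input and hidden layer
W2 = rng.random((4, 2))           # weights between hidden and output layer

hidden = sigmoid(x @ W1)          # each hidden neuron sums its weighted inputs
output = sigmoid(hidden @ W2)     # signals travel onward to the output layer
print(output)                     # two values between 0 and 1
```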
USES OF MACHINE LEARNING
There are many applications for machine learning; it is one of the three key elements of Intelligent Automation and an autonomous operating model within Industry 4.0. Computer programs can read text and work out whether the writer was making a complaint or offering congratulations. They can listen to a piece of music, decide whether it is likely to make someone happy or sad, and find other pieces of music to match the mood. In some cases, they can even compose their own music that either expresses the same themes or is likely to be appreciated by admirers of the original piece.
Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Although this number is several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognising faces, playing “Go”).
INVESTING IN INDUSTRY 4.0 TECHNOLOGIES YIELDS SIGNIFICANT BENEFITS
In 2018 the World Economic Forum (WEF) launched an initiative, Shaping the Future of Advanced Manufacturing and Production, to demonstrate the true potential of Industry 4.0 technologies to transform the very nature of manufacturing. Learnings from 69 frontrunner companies, with 450 use cases in action, reveal that organisations investing in Industry 4.0 technology are realising significant improvements in productivity, sustainability, operating cost, customisation and speed to market.
Here are just a few numbers from the 450 use cases: labour productivity up by 32% to 86%, order lead times down by 29% to 82%, field quality up 32%, manufacturing costs down 33%, OEE up 27%, new product design lead time down 50%.
Additionally, frontrunner companies showed that by investing in Industry 4.0 technologies they can solve business problems while simultaneously reducing environmental detractors such as waste, consumption and emissions. While the greatest environmental benefits come from core green sustainability initiatives (such as commitments to renewable energy), Industry 4.0 use cases have shown significant environmental impact as well, reducing energy consumption by more than one-third and water use by more than one-quarter.
Of the 69 frontrunners within the WEF initiative that exist to date across the globe, 64% have been able to drive growth by adopting Industry 4.0 solutions. In all those cases, with little to no capital expenditure, they were able to unlock capacity and grow by coupling some of the technology solutions with a much more flexible production system. The business case is big and the payback period is short, both for large companies and for SMEs.
HOWEVER, MOST COMPANIES STRUGGLE TO IMPLEMENT
Most companies struggle to start and scale an Industry 4.0 transformation because they lack people with the right skills and knowledge, and because of a limited understanding of the technology and vendor landscape. On average, 72% of companies don’t get beyond the pilot phase.
The Axisto Industry 4.0 Maturity Assessment (AIMA) enables manufacturing companies to understand where they stand and to design an implementation roadmap that helps them start their Industry 4.0 implementation journey or progress to the next level. AIMA assesses your operations along eight elements, as shown in Figure 1.
In total, the eight elements are made up of 33 categories (see Figure 2), and each category spans the four fundamental building blocks of Industry 4.0: processes, technology, people and competencies, and organisation.
HOW AIMA SUPPORTS YOU ON YOUR INDUSTRY 4.0 IMPLEMENTATION JOURNEY
AIMA helps you:
build knowledge
tear down interdepartmental walls and create strategic alignment
understand where your operations stand – what is strong and must be maintained and what needs to be improved
understand what your key areas are and what you need to focus on.
AIMA helps you establish a company-specific interpretation of key principles and concepts. It creates an improved case for change and provides more momentum to implement the change.
HOW AIMA WORKS
AIMA consists of four steps:
Preparation – get to know the members of the leadership team and understand the vision and strategy, how the team views market developments, challenges and opportunities, how the company is developing within this context, and the expectations for the coming days.
The first workshop day – identification of and alignment on the case for change: an introduction to Industry 4.0, exploring how it affects the strategy (and its execution), testing the extent of alignment within the leadership team and identifying, or checking whether there is, a case for change.
The second workshop day – the Industry 4.0 Maturity Assessment: assessing operations using a selection from the AIMA categories, prioritising the KPIs and identifying the focus areas.
The third workshop day – design of the implementation roadmap: sequence of steps that address processes, technology, people & capabilities and organisation, identification of risks and design of a risk mitigation plan.
Focusing on these areas will accelerate performance improvements in operations. AIMA provides the insights, designs an implementation roadmap and is a strategic tool to regularly assess progress and refine your roadmap based on new insights. Starting at the operations leadership level allows us to create an overall framework. AIMA is then deployed at the next level down, in the respective factories. Again, we begin with preparation, followed by three workshop days, now with the factory leadership team:
Preparation – get to know the members of the factory leadership team and understand the factory vision and strategy, how the team views market developments, challenges and opportunities, how the factory is developing within this context, and the expectations for the coming days.
The first workshop day – identification of and alignment on the case for change: an introduction to Industry 4.0, exploring how it affects the strategy (and its execution), testing the extent of alignment within the factory leadership team and identifying, or checking whether there is, a case for change.
The second workshop day – the Industry 4.0 Maturity Assessment: assessing operations using a selection from the AIMA categories, prioritising the KPIs and identifying the focus areas.
The third workshop day – design of the implementation roadmap: prioritisation of factory KPIs and the identification of focus areas, sequence of steps that address processes, technology, people & capabilities and organisation, identification of risks and design of a risk mitigation plan.
Making improvements in these focus areas will make the biggest impact on the factory’s performance within the overall framework. Leveraging this cascaded approach creates the biggest wins for the whole business rather than just a sub-optimisation of an individual factory.
AIMA OUTCOMES FOR YOUR ORGANISATION
AIMA provides four key outcomes:
Understanding of Industry 4.0, its key principles and concepts, and how they affect strategy (execution)
Alignment within the operations leadership team and factory leadership teams
Understanding of your Industry 4.0 maturity level / readiness
Priority of focus areas to create short-term business value within a long-term context
PUT YOUR PEOPLE AT THE CENTRE OF YOUR INDUSTRY 4.0 IMPLEMENTATION
AIMA will generate initial momentum. However, it is worth noting that any Industry 4.0 implementation will only be successful if you put your people at the centre of it.
The biggest challenge for a company is not choosing the right technology, but the lack of digital culture and skills in the organisation. Investing in the right technologies is important – but success or failure does not ultimately depend on specific sensors, algorithms or analysis programs.
Axisto was founded in 2006 to help companies accelerate their operational performance – fast, measurable and lasting. We have executed more than 150 projects across Europe.
We have concrete on-the-ground experience, which is why our approach is practical and pragmatic. We combine subject-matter expertise with excellent change management skills.
We see change through and do whatever it takes to make our clients successful.
THE GOAL OF USING INTELLIGENT AUTOMATION
The goal of using Intelligent Automation (IA) is to achieve better business outcomes by streamlining and scaling decision making across the business. IA adds value by increasing process speed, reducing costs, improving compliance and quality, increasing process resilience and optimising decision results. Ultimately, it improves customer and employee satisfaction, improves cash flow and EBITDA, and decreases working capital.
WHAT IS INTELLIGENT AUTOMATION?
IA is a concept leveraging a new generation of software-based automation. It combines methods and technologies to execute business processes automatically on behalf of knowledge workers. This automation is achieved by mimicking the capabilities that knowledge workers use in performing their work activities (e.g., language, vision, execution and thinking & learning). IA effectively creates a software-based digital workforce that enables synergies by working hand-in-hand with the human workforce.
On the simpler end of the spectrum, IA helps perform repetitive, low-value-add and tedious work activities such as reconciling data or digitising and processing paper invoices. On the other end, IA augments workers by providing them with superhuman capabilities, for example the ability to analyse millions of data points from various sources in a few minutes and generate insights from them.
THREE KEY COMPONENTS OF INTELLIGENT AUTOMATION
IA consists of three key components:
Business Process Management with Process Mining to provide greater agility and consistency to business processes.
Robotic Process Automation (RPA). Robotic process automation uses software robots, or bots, to complete repetitive manual tasks. RPA is the gateway to artificial intelligence and can leverage insights from AI to handle more complex tasks and use cases.
Artificial Intelligence. By using machine learning and complex algorithms to analyse structured and unstructured data, businesses can develop a knowledge base and formulate predictions based on that data. This is the decision engine of IA.
WHERE AND HOW TO START WITH INTELLIGENT AUTOMATION?
Implementing Intelligent Automation might come across as a daunting endeavour, but it doesn’t need to be. Like any business leader, you will have a keen eye on accelerating operations performance, which in essence means improving the behaviour and outcomes of your business processes. Process Mining is a perfect tool to help you with that.
Process Mining is a data-driven analysis technique, i.e., analysis software, to objectively analyse and monitor business processes. It does this based on transactional data that is recorded in a company’s business information systems. The analysis software is system agnostic and doesn’t need any adaptation of your systems. Process Mining provides fact-based insight into how processes run in daily reality: all process variants (you will be surprised how many variations of one process there actually are in your business) and where the key problems and opportunities lie to improve process efficiency and effectiveness.
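As a simplified illustration of the idea, the sketch below reconstructs process variants from a transactional event log using pandas. The column names, cases and activities are invented; dedicated process mining tools do this, and much more, out of the box.

```python
import pandas as pd

# Hypothetical event log as it might be extracted from a business information system.
events = pd.DataFrame({
    "case_id":   ["PO-1", "PO-1", "PO-1", "PO-2", "PO-2", "PO-2", "PO-2"],
    "activity":  ["Create PO", "Approve PO", "Receive goods",
                  "Create PO", "Change price", "Approve PO", "Receive goods"],
    "timestamp": pd.to_datetime([
        "2023-01-02 09:00", "2023-01-02 11:00", "2023-01-05 14:00",
        "2023-01-03 10:00", "2023-01-03 15:00", "2023-01-04 09:00", "2023-01-08 16:00",
    ]),
})

# Reconstruct each case's path through the process and count how often each variant occurs.
variants = (events.sort_values("timestamp")
                  .groupby("case_id")["activity"]
                  .apply(lambda a: " -> ".join(a))
                  .value_counts())
print(variants)
```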
Process Mining is also an excellent way to prepare for the introduction of Robotic Process Automation, which could be the most relevant next step on your IA journey. Process Mining can be used purely as an analysis tool, but it can also be installed permanently to constantly monitor the performance of, and the issues in, the processes. It is a non-intimidating approach and a gradual implementation of Intelligent Automation.
THE IMPORTANCE OF A COMPANY-WIDE VISION AND SHARED ROADMAP
However, at some point, rather sooner than later, it is important to establish and communicate a comprehensive, company-wide vision for what you want Intelligent Automation to achieve: how automation will deliver value and boost competitive advantage. You need a shared roadmap for a successful implementation that covers processes, technology (including legacy systems), people & competencies and organisation.
Such a shared Intelligent Automation/Industry 4.0 Roadmap ensures a consistent, thoughtful approach to selecting, developing, applying, and evolving the IA/I4.0 structure to achieve the intended impact. The Axisto Industry 4.0 Maturity Assessment (AIMA) is an effective way to create such a shared implementation roadmap.
THE CRUX TO SUCCESS LIES IN A WIDE RANGE OF PEOPLE-ORIENTED FACTORS
Importantly, the biggest challenge for a company is not choosing the right technology, but the lack of digital culture and skills in the organisation. Investing in the right technologies is important – but success or failure does not ultimately depend on specific sensors, algorithms or analysis programs. The implementation and scaling of Intelligent Automation/Industry 4.0 requires a fundamental shift in mindset and behaviours at all levels in the organisation. The crux to success lies in a wide range of people-oriented factors.
“Tell me where you spend your money and I will tell you what your strategy is.” There is probably no better sentence to describe the potential difference between an intended strategy and a de facto strategy. Zero-based budgeting (ZBB) is a powerful approach to accelerate growth, create value and make your strategy happen.
WHAT IS ZERO-BASED BUDGETING?
ZBB starts from a blank sheet of paper, not from last year’s budget. On a very granular level, you start by determining what resources the various business units require to deliver the strategic goals. You then address individual cost categories across all business units and justify all expenditure. In ZBB the baseline is not last year’s budget, but “zero”.
ZBB was introduced in the 1960s and was slow to gain traction. It had a brief spell of popularity and then sank into obscurity. Now, supported by advances in digitisation, it is on the rise again. But it is no longer just being used in the consumer packaged goods industry, nor focused only on sales and general administrative expenditure. It has begun to spread across industries and functions, and rightfully so, because ZBB is appropriate for any industry and all functions: procurement, supply chain, sales and marketing, service and support, and others.
ZERO-BASED BUDGETING IS NOT JUST A COST-CONTROL TOOL
Many companies use it as a cost-control tool. However, this is vastly underestimating its real power. When used in a strategic context, ZBB can reconfigure cost structures, free up investment funds and accelerate growth. Successful companies start with a solid “What by How” objective that gives the company direction. The related goals then lead to questions about which investments are necessary and what the total cost structure needs to be to enable these investments. This way, ZBB is tightly integrated with the company’s strategy. It addresses both the cost discipline and the investments and opportunities that drive growth. However, using ZBB as a one-time exercise won’t cut it.
ZERO-BASED BUDGETING TRANSFORMS YOUR BUSINESS
ZBB is not a one-time exercise; it is a way of doing business and part of the DNA of an organisation. Its implementation not only redesigns your processes, policies and systems, but also instils new mindsets and behaviours. ZBB establishes clear cost accountability and disciplines to reduce and permanently eliminate costs that add little or no value. At the same time, it establishes a clear accountability to maximise the added value of the right expenditure. ZBB challenges companies to operate more efficiently and effectively across functions, geographies, divisions and business units to grow the top line and margin. It drives people to make conscious, strategic decisions and to get the right things done.
ZERO-BASED BUDGETING IN GOOD TIMES AND IN BAD TIMES – MAINTAIN STRATEGIC MOMENTUM
During a recession – and more so just afterwards – successful companies grow their EBIT whereas others stall. So why do some companies win while others lose? The common denominator with the winners is that they maintain a strict cost discipline and fund their growth levers in both the high and low phases of the economic cycle. They maintain strategic momentum regardless of market conditions.
We know that the total shareholder return a company achieves is mainly determined by its margin. The companies that generate a significantly higher long-term value grow their EBIT most and implement the required change during economic highs – i.e., pre-emptively. So the earlier a company transforms, the better its future performance.
AND WHAT ABOUT LEAN SIX SIGMA (LSS)?
Lean is often talked about as being an extensive toolbox. This misses the point. Lean is all about mindset and behaviours – it’s about strict cost discipline and a fast cash conversion cycle. Lean originated at Toyota when it was rebuilding its business just after World War II. The company was cash-strapped – as were its customers.
The whole concept of flow within Toyota’s way of working was, and still is, to ensure a fast cash conversion cycle and eliminate low value-added costs. What’s more, they approached everything from the customer’s point of view – what is the customer willing to pay for? Everything else is waste. Having a fast cash conversion cycle creates the opportunity to grow faster. And that is what they did.
Similarly, Six Sigma is often talked about as being an extensive toolbox. But Six Sigma is also all about mindset and behaviours – one of relentlessly eliminating variation. Six Sigma was developed at Motorola in the late 1980s. The company was crippled by the cost of poor quality, which drained their margins and eroded their revenue. For the company to have a viable future, it had to drive down variation.
SO HOW ARE ZERO-BASED BUDGETING AND LEAN SIX SIGMA RELATED?
Zero-based budgeting is the overarching approach to drive the short- and long-term success of a company. From a business strategy point of view, first the “What by How” objective is set and then the top goals and targets are set. ZBB views the company as a whole from the highest level, informed by its purpose, vision and ambition. It affects every aspect of a company: the operating model including the organisation structure and policies. ZBB thrives on the right mindset and behaviours that are incorporated in the DNA of the organisation.
The mindset and behaviours behind Lean Six Sigma (LSS) fit fully with the mindset and behaviours behind zero-based budgeting. ZBB will steer the selection of tools from the LSS toolbox that best contribute to the business needs in the company’s drive to deliver on its vision and ambition – in the same way that Toyota and Motorola developed and acquired skills and tools that were in line with their business needs and informed by their mindset.
Disciplined cash and working capital management drives good operational and financial performance. However, performance in order-to-cash, inventory management and procure-to-pay slumped over the five years prior to the COVID outbreak. A closer analysis reveals that inventory optimisation poses the biggest challenge to companies, in both volatile and non-volatile markets. More cash, lower inventory, better service: good inventory management is the key.
DELIVER DOUBLE DIGIT INVENTORY REDUCTIONS AND MAINTAIN OR IMPROVE SERVICE LEVELS
Decades of experience have taught us that going straight for the inventories themselves is both the quickest and the surest way of delivering a high-performing supply chain. Inventory sits right at the heart of your supply chain and is both a symptom and a cause of your supply chain performance. Getting inventory right keeps your customers happy, increases flow, reduces cost and waste, and frees up cash.
At Axisto, we combine the practical business focus of management consulting with the high-speed analytical capability of advanced information technology. We rapidly distil practical insights from data in Enterprise Resource Planning (ERP) systems. Our people concentrate on the human challenges of implementing and sustaining resilient and lean supply chains.
Our unique approach to supply chain puts inventory optimisation front and centre. This allows us to help deliver double-digit reductions in inventory while maintaining or improving service levels – at speed and in a lower-risk manner than traditional approaches.
OUR INVENTORY MANAGEMENT PROPOSITIONS
Axisto provides three inventory management propositions: inventory optimisation programmes, inventory analytics and inventory maturity assessments.
Our starting point with most clients is a quick scan. On the basis of just 3 standard reports from your ERP system, we quantify improvement potential item by item as well as overall. The output is both an immediate high-level quantification of improvement potential and the basis of a road map to deliver sustainable improvements quickly.
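To give a flavour of what an item-by-item quantification can look like, here is a heavily simplified sketch. The target of 30 days of cover, the column names and all figures are invented; a real quick scan works from actual ERP extracts and richer logic.

```python
import pandas as pd

# Hypothetical extract combining stock on hand and average daily demand per item.
items = pd.DataFrame({
    "sku":          ["A100", "B200", "C300"],
    "on_hand_qty":  [1200, 90, 4000],
    "daily_demand": [10.0, 3.0, 25.0],
    "unit_cost":    [4.5, 12.0, 1.2],
})

TARGET_DAYS_OF_COVER = 30   # illustrative target only

items["days_of_cover"] = items["on_hand_qty"] / items["daily_demand"]
items["excess_qty"] = (items["on_hand_qty"]
                       - TARGET_DAYS_OF_COVER * items["daily_demand"]).clip(lower=0)
items["excess_value"] = items["excess_qty"] * items["unit_cost"]

print(items[["sku", "days_of_cover", "excess_value"]])   # potential item by item
print("Total improvement potential:", items["excess_value"].sum())
```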
INVENTORY OPTIMISATION PROGRAMMES
We provide expert analytics and effective change management backed up by a clearly measurable business case. Improvements to inventory positions of 20% or more, sometimes much more, are usually achievable within the first year, at a high return on investment.
INVENTORY ANALYTICS
Do you find it difficult to really understand what your inventory data is telling you, or what you should do about it? Do you have optimisation tools that are difficult to use or which give results you know to be wrong, but you’re not sure why? With the proprietary technology that we use, we provide clients with rapid actionable insights into their inventory data.
In addition, we help clients with a range of targeted analytical exercises, ranging from strategic inventory positioning (where in your supply chain should you hold inventory?) through to setting inventory policies for items that are hard to optimise, such as spare parts, or make to order products.
INVENTORY MATURITY ASSESSMENTS
Inventory is influenced by almost every aspect of your business. Therefore, it can be hard to know at an enterprise level where the biggest opportunities for further improvement are, or how you compare to your competitors.
Axisto can take the temperature of your inventory management. We combine a granular, bottom-up quantitative assessment of your potential for improvement with a qualitative overview of your people, processes and systems, including relevant benchmarks, to give you actionable insights into where to find the next step change in your performance journey.
A CASE
CHALLENGE
A medium-sized industrial manufacturing firm with a strong market position and profitability had historically paid little attention to inventory. As a consequence, inventory was gradually increasing. It was time to act.
RESULTS
Inventory was reduced by more than 50% from the initial baseline over a period of 3 years, while service levels were maintained or improved. Improvements in the underlying data led to a better understanding of how and why to act – inventory management capability was significantly developed within the client’s teams.
SOME QUOTES
“We finally have full transparency of what we have, so we can make fact-based decisions on a weekly basis.” – Automotive manufacturer
“Since starting a programme, we have reduced our inventories by over 50%.” – Industrial manufacturer
“The results are exceptional and have made a major difference to our cash flow.” – Global manufacturing company
“The inventory programme brought a wide range of process issues into sharp focus, with an impact much broader than just inventory.” – Market-leading manufacturer
These days, customers expect shorter fulfilment timeframes and have a lower tolerance for late or incomplete deliveries. At the same time, supply chain leaders face growing costs and volatility. Process mining creates value in the supply chain by creating transparency and visibility across the chain and by providing decision proposals, with their trade-offs, for real-time optimisation of flows.
FULL TRANSPARENCY
Instead of working with the designed process flow or the process flow as depicted in the ERP system, process mining monitors the actual process at whatever granularity you want: the end-2-end process, procure-2-pay, manufacturing, inventory management or accounts payable, for a specific type of product, supplier, customer, individual order or individual SKU. Process mining monitors compliance, conformance, and cooperation between departments or between clients, your own departments and suppliers.
VISIBILITY ACROSS THE SUPPLY CHAIN
Dashboards are created to suit your requirements. These are flexible and can easily be altered whenever your needs change and/or bottlenecks shift. They create real-time insight into the process flow. At any time, you know how much revenue is at stake because of inventory issues, what the root causes are, which decisions you can take, and what their effects and trade-offs will be.
If supplier reliability is not at the target level at the highest reporting level, you can easily drill down in real time to a specific supplier and a particular SKU to discover what is causing the problem. Suppliers could also be held to the best-practice service level of competing suppliers.
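The kind of drill-down described above can be expressed very simply over delivery data. This sketch, with invented column names and figures, computes on-time rates first per supplier and then per supplier and SKU.

```python
import pandas as pd

# Hypothetical delivery records: one row per order line received.
deliveries = pd.DataFrame({
    "supplier": ["S1", "S1", "S1", "S2", "S2", "S2"],
    "sku":      ["X", "X", "Y", "X", "Y", "Y"],
    "on_time":  [True, False, True, True, True, False],
})

# Top level: reliability per supplier.
print(deliveries.groupby("supplier")["on_time"].mean())

# Drill-down: reliability per supplier and SKU to locate the problem.
print(deliveries.groupby(["supplier", "sku"])["on_time"].mean())
```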
MAKING INFORMED DECISIONS AND TAKING THE RIGHT ACTIONS
The interactive reports highlight gaps between actual and target values and give details of the discrepancies (figure A). By clicking on one of the highlighted issues, you can assign an appropriate action to a specific person (figure B), or this can even be done automatically when a discrepancy is detected. Direct communication with respect to the action is facilitated in real time (figure C).
HOW PROCESS MINING CREATES VALUE IN THE SUPPLY CHAIN – WRAP UP
Process mining is an effective tool to optimise end-2-end supply chain flows in terms of margin, working capital, inventory level and profile, cash, order cycle times, supplier reliability, customer service levels, sustainability, risk, predictability, etc. Because process mining monitors the actual process flows in real time, it creates full transparency and therefore adds significant value on top of classic BI suites. It can be integrated with existing BI applications to enhance reporting and decision-making. We consider process mining to be a core element of Industry 4.0.
THIS INTERVIEW WAS PUBLISHED BY THE GUARDIAN
Zoë Corbyn
Sun 6 Jun 2021 09.00 BST
The AI researcher on how natural resources and human labour drive machine learning and the regressive stereotypes that are baked into its algorithms
Kate Crawford studies the social and political implications of artificial intelligence. She is a research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research. Her new book, Atlas of AI, looks at what it takes to make AI and what’s at stake as it reshapes our world.
You’ve written a book critical of AI but you work for a company that is among the leaders in its deployment. How do you square that circle? I work in the research wing of Microsoft, which is a distinct organisation, separate from product development. Unusually, over its 30-year history, it has hired social scientists to look critically at how technologies are being built. Being on the inside, we are often able to see downsides early before systems are widely deployed. My book did not go through any pre-publication review – Microsoft Research does not require that – and my lab leaders support asking hard questions, even if the answers involve a critical assessment of current technological practices.
What’s the aim of the book? We are commonly presented with this vision of AI that is abstract and immaterial. I wanted to show how AI is made in a wider sense – its natural resource costs, its labour processes, and its classificatory logics. To observe that in action I went to locations including mines to see the extraction necessary from the Earth’s crust and an Amazon fulfilment centre to see the physical and psychological toll on workers of being under an algorithmic management system. My hope is that, by showing how AI systems work – by laying bare the structures of production and the material realities – we will have a more accurate account of the impacts, and it will invite more people into the conversation. These systems are being rolled out across a multitude of sectors without strong regulation, consent or democratic debate.
What should people know about how AI products are made? We aren’t used to thinking about these systems in terms of the environmental costs. But saying, “Hey, Alexa, order me some toilet rolls,” invokes into being this chain of extraction, which goes all around the planet… We’ve got a long way to go before this is green technology. Also, systems might seem automated but when we pull away the curtain we see large amounts of low paid labour, everything from crowd work categorising data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous.
Unfortunately the politics of classification has become baked into the substrates of AI
Problems of bias have been well documented in AI technology. Can more data solve that? Bias is too narrow a term for the sorts of problems we’re talking about. Time and again, we see these systems producing errors – women offered less credit by credit-worthiness algorithms, black faces mislabelled – and the response has been: “We just need more data.” But I’ve tried to look at these deeper logics of classification and you start to see forms of discrimination, not just when systems are applied, but in how they are built and trained to see the world. There are training datasets used for machine learning software that casually categorise people into just one of two genders; that label people according to their skin colour into one of five racial categories; and that attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past and unfortunately the politics of classification has become baked into the substrates of AI.
You single out ImageNet, a large, publicly available training dataset for object recognition… Consisting of around 14m images in more than 20,000 categories, ImageNet is one of the most significant training datasets in the history of machine learning. It is used to test the efficiency of object recognition algorithms. It was launched in 2009 by a set of Stanford researchers who scraped enormous amounts of images from the web and had crowd workers label them according to the nouns from WordNet, a lexical database that was created in the 1980s.
Beginning in 2017, I did a project with artist Trevor Paglen to look at how people were being labelled. We found horrifying classificatory terms that were misogynist, racist, ableist, and judgmental in the extreme. Pictures of people were being matched to words like kleptomaniac, alcoholic, bad person, closet queen, call girl, slut, drug addict and far more I cannot say here. ImageNet has now removed many of the obviously problematic people categories – certainly an improvement – however, the problem persists because these training sets still circulate on torrent sites.
And we could only study ImageNet because it is public. There are huge training datasets held by tech companies that are completely secret. They have pillaged images we have uploaded to photo-sharing services and social media platforms and turned them into private systems.
You debunk the use of AI for emotion recognition but you work for a company that sells AI emotion recognition technology. Should AI be used for emotion detection? The idea that you can see from somebody’s face what they are feeling is deeply flawed. I don’t think that’s possible. I have argued that it is one of the most urgently needed domains for regulation. Most emotion recognition systems today are based on a line of thinking in psychology developed in the 1970s – most notably by Paul Ekman – that says there are six universal emotions that we all show in our faces that can be read using the right techniques. But from the beginning there was pushback and more recent work shows there is no reliable correlation between expressions on the face and what we are actually feeling. And yet we have tech companies saying emotions can be extracted simply by looking at video of people’s faces. We’re even seeing it built into car software systems.
What do you mean when you say we need to focus less on the ethics of AI and more on power? Ethics are necessary, but not sufficient. More helpful are questions such as, who benefits and who is harmed by this AI system? And does it put power in the hands of the already powerful? What we see time and again, from facial recognition to tracking and surveillance in workplaces, is these systems are empowering already powerful institutions – corporations, militaries and police.
What’s needed to make things better? Much stronger regulatory regimes and greater rigour and responsibility around how training datasets are constructed. We also need different voices in these debates – including people who are seeing and living with the downsides of these systems. And we need a renewed politics of refusal that challenges the narrative that just because a technology can be built it should be deployed.
Any optimism? Things are afoot that give me hope. This April, the EU produced the first draft omnibus regulations for AI. Australia has also just released new guidelines for regulating AI. There are holes that need to be patched – but we are now starting to realise that these tools need much stronger guardrails. And giving me as much optimism as the progress on regulation is the work of activists agitating for change.
The AI ethics researcher Timnit Gebru was forced out of Google late last year after executives criticised her research. What’s the future for industry-led critique? Google’s treatment of Timnit has sent shockwaves through both industry and academic circles. The good news is that we haven’t seen silence; instead, Timnit and other powerful voices have continued to speak out and push for a more just approach to designing and deploying technical systems. One key element is to ensure researchers within industry can publish without corporate interference, and to foster the same academic freedom that universities seek to provide.
Atlas of AI by Kate Crawford is published by Yale University Press (£20).