Apr 8, 2021 | Leadership, Management, Operations
Have you ever started to read a book, and decided to skip chapter 1?
I guess few ever have.
Skip chapter one, and you miss the foundation of what is to come. It is the first impression, and it creates the context in which the book is set, whether it is fiction or non-fiction.
Why then do most businesses and their advisors not read chapter 1 of the business improvement handbook?
I know they do not, simply because Cash is such a low priority in these conversations. It is left behind by management clichés and fluffy words about visions and missions.
These things are all important, but in the absence of cash, beyond reach.
How much cash does it take to run your business?
How long is your cash conversion cycle?
What are the sources of the cash you are using?
What are the trends in your free cash flow?
These should be chapter 1 of the business improvement handbook.
When you know the answers, you can move on: first to the things you can do better to free up more cash, then to the operational, customer, and strategic challenges you face, knowing how much cash you have at your disposal to address them.
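As a sketch of the second question, the cash conversion cycle can be estimated from three standard working capital ratios. The figures below are entirely hypothetical, purely to show the arithmetic:

```python
# Cash conversion cycle (CCC) sketch with hypothetical figures.
# CCC = DIO + DSO - DPO: days inventory outstanding, plus days sales
# outstanding, minus days payables outstanding.

def days_inventory(inventory, cogs, period=365):
    """Average days stock sits before being sold."""
    return inventory / cogs * period

def days_receivables(receivables, revenue, period=365):
    """Average days customers take to pay you."""
    return receivables / revenue * period

def days_payables(payables, cogs, period=365):
    """Average days you take to pay suppliers."""
    return payables / cogs * period

# Hypothetical annual figures for a small manufacturer
dio = days_inventory(inventory=250_000, cogs=1_500_000)
dso = days_receivables(receivables=300_000, revenue=2_000_000)
dpo = days_payables(payables=180_000, cogs=1_500_000)

ccc = dio + dso - dpo
print(f"Cash conversion cycle: {ccc:.0f} days")
```

In this invented case the business funds roughly 72 days of operations from its own cash: every day shaved off inventory or receivables is cash freed up.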
Let me know when you need some experienced assistance sorting all this out.
Mar 8, 2021 | Analytics, Governance, Operations
One of the significant problems in making any change is the articulation of the need to change, and the outcomes that are expected as a result.
Overcome those two, and change is suddenly easier, albeit still really hard.
The first hurdle is the articulation.
To communicate complex ideas and have them generally understood, you do not use technical, academic jargon backed by data; you use stories and metaphors in a way that connects with the audience.
Communicating Industry 4.0 is just such a complex challenge.
What is it, how will it affect me, why should I be interested?
Answering these questions is a core foundation of gaining acceptance, followed by action that becomes automatic as it gets buried in the auto-response system.
Remember the last time you put your hand onto a hot stove.
Before you felt anything, you had reacted by pulling your hand away, a totally unconscious, instantaneous action. Then it started to hurt like hell.
Think about the processes involved in this.
First: the ‘data’ that indicated the stove was hot was collected by the nerves in your fingers and hand.
Second: the ‘data’ is sent for processing to your brain, the CPU between your ears. This processing concludes your hand is in danger of being burnt.
Third: That conclusion is sent to the muscles that control where your hand is, with firm instructions to remove it immediately.
Fourth: Your hand is pulled back out of danger.
Fifth: It starts to hurt like hell, and the memory of that hurt is stored deep in your personal CPU for future reference should your hand stray again.
The astonishing thing is that the first four happen without thought, instantaneously, and the fifth is a long term ‘frame’ through which you unconsciously ‘feel’ the hurt and approach the stove warily. It is a neural network that collaborates, communicates, drives action, and learns.
Industry 4.0, or more specifically Factory 4.0, is similarly a set of tools that collects, analyses, and acts on data without direction, learns from the experience, adds to the auto-response 'memory bank', and adjusts based on the 'learning' that occurs as data on outcomes is collected. The system becomes more Automatic than Artificial.
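The five steps above can be sketched as a minimal sense-process-act-learn loop. Everything here, the class name, the thresholds, the temperatures, is invented to mirror the hot-stove reflex, not a real control system:

```python
# Minimal sense -> process -> act -> learn loop, mirroring the
# hot-stove reflex. All names and thresholds are invented.

class AutoResponder:
    def __init__(self, threshold=100.0):
        self.threshold = threshold  # current 'pain' threshold
        self.memory = []            # stored outcomes (the 'learning')

    def sense(self, reading):
        """Step 1: collect the data (the nerves in the fingers)."""
        return reading

    def process(self, reading):
        """Step 2: decide whether we are in danger (the CPU)."""
        return reading > self.threshold

    def act(self, danger):
        """Steps 3-4: instruct the 'muscles'."""
        return "withdraw" if danger else "continue"

    def learn(self, reading, action):
        """Step 5: store the outcome, adjust future behaviour."""
        self.memory.append((reading, action))
        if action == "withdraw":
            # Become warier: react a little earlier next time.
            self.threshold *= 0.95

responder = AutoResponder()
for temp in [40, 120, 95]:
    reading = responder.sense(temp)
    action = responder.act(responder.process(reading))
    responder.learn(reading, action)
```

After one 'burn' the responder lowers its threshold, so it approaches the stove more warily next time, which is the learning loop the post describes.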
Header cartoon credit: Tom Gauld in ‘New Scientist’ magazine.
Oct 16, 2020 | Analytics, Management, Operations
Certainty in forecasting is the holy grail: being certain of the future means success. However, the only thing we know for certain about the future is that it will not be the same as the past, or the present.
Quantifying uncertainty appears to be an oxymoron, but reducing the degree of uncertainty would be a really useful competitive outcome.
When you explicitly set about quantifying the degree of uncertainty, or risk, in a decision, you create a culture where people look for numbers not just supporting their position, but those that may lead to an alternative conclusion. This transparency of the forecasts that underpin resource allocation decisions is enormously valuable.
How do you go about this?
- Start at the top. Like everything, behaviour in an enterprise is modelled on behaviour at the top. If you want those in an enterprise to take data seriously, those at the top need to not just take it seriously, but be seen to be doing just that.
- Make data widely available, and subject to detailed examination and analysis. In other words, ‘Democratise’ it, and ensure that all voices with a view based on the numbers are heard.
- Ensure data is used to show all sides of a question. Data that emphasises one part of a debate at the expense of another will lead to bias. Data itself is not biased, but people usually are, so in the absence of an explicit determination to find data and opinion that runs counter to an existing position, bias will intrude.
- Educate stakeholders in their understanding of the sources and relative value of data.
- Build models with care, and ensure they are tested against outcomes forecast, and continuously improved.
- Choose performance measures with care, make sure there are no vanity or one sided measures included, and that they reflect outcomes rather than activities.
- Explicitly review the causes of variances between forecasts and actual outcomes. This review process, and the understanding that evolves from it, will improve the accuracy of forecasts over time.
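A hedged sketch of what that last review might compute: forecast bias (are we systematically high or low?) and mean absolute percentage error (how big are the misses, regardless of direction?). The figures are invented:

```python
# Forecast-vs-actual review sketch; the data below is invented.

forecasts = [100, 120, 90, 110, 105]
actuals   = [ 95, 115, 92, 100, 108]

# Bias: positive means we systematically over-forecast.
bias = sum(f - a for f, a in zip(forecasts, actuals)) / len(forecasts)

# MAPE: the average size of the error, regardless of direction.
mape = sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

print(f"Bias: {bias:+.1f} units, MAPE: {mape:.1%}")
```

A persistent positive bias is not just a number; it is a cause worth reviewing, such as sales padding forecasts to guarantee stock availability.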
Data is agnostic; the process of turning it into knowledge is not. Ensure that the knowledge you use to inform forecasts of the future is based on agnostic analysis, uninfluenced by biases of any sort. This is a really tough cultural objective, as human beings are inherently biased; bias is a cognitive tool that enables us to function by freeing up 'head space', reducing the risk of being overwhelmed.
Consistent forecast accuracy is virtually impossible, but being consistently more accurate than your competition, while very tough, is not. Forecast accuracy is therefore a source of significant competitive advantage.
Header cartoon courtesy Scott Adams and his side-kick, Dilbert.
Sep 23, 2020 | Analytics, Management, Operations
When you want superior performance, implement a number of key cross functional metrics.
Gaining agreement on a set of metrics that genuinely track a project's cross functional performance is not a simple task. KPI's are usually focussed on functional performance, whereas optimal performance requires that cross functional dependencies are reflected in the KPI's put in place.
The standard response of functional management to such an idea is that if they cannot control a process, how can they be held accountable for its performance?
Getting past this reasonable question requires agreement across three domains, and collaboration around the tactical implementation of a process improvement.
Let us use a reduction of Working Capital requirements as an example, requiring 4 steps.
Agreement on strategic objectives, and accompanying KPI’s.
The strategic objective becomes making the enterprise more resilient, and therefore able to adjust to unforeseen shocks. One of the strategies agreed is the reduction of Working capital. There are many parts that make up working capital, inventory being a major one in a manufacturing environment. As the joint objective is to make the enterprise more resilient, it is agreed that Inventory levels must be reduced.
Agreement on what ‘success’ looks like.
The absence of an outcome that signals success means that any improvement will do. There are numerous measures that can be applied: how much, when, what outcomes, compliance to standards, variation from the mean, and many others. In this case, a reduction of inventory levels by 15% without compromising customer service is the agreed metric of success. Agreement across functions that this is a sensible measure will deliver the opportunity for cross functional alignment, and will contribute to delivering the strategic objective of resilience.
Agreeing on tactical diagnostics.
Tactical diagnostics are aimed at tracking and optimising the short term performance detail of the components of the agreed objective. Which parts of a project are working as expected, and which are not. You can make the changes in these on the run, experiment, learn, adjust. It is usually not necessary to have these on the high level dashboard, they are for the teams and individuals responsible for the execution of a strategy to determine the best way of doing them. What is critical at the tactical level, is that those involved clearly understand the wider objective, and their role in achieving it.
Application of the diagnostics.
As the old saying goes, ‘what gets measured, gets done’. In this case, to reduce inventory without compromising customer service, requires the co-ordination of many moving parts, some of which will need some sort of a scoreboard to track progress on the tactical improvements. For example, transparency of raw materials inventory and incoming delivery schedules to those doing production planning, matching production to real demand, improving forecast accuracy, managing DIFOT levels, levelling production flow between work stations, and many others. These should be made visual to the teams engaged in the work, at the place where the work gets done.
For all this to work, the KPI’s need to be simple, visual, apparent to everyone, and as far as possible dependently cross functional. In other words, build mutual KPI’s that reflect both sides of a challenge.
For example, stock availability and inventory levels. Generally those responsible for selling do some of the forecasting, so they always want inventory, manufactured yesterday, to be available when a customer needs it. As a result of uncertainty, they tend to over forecast to ensure stock availability when an order arrives. By contrast, Operations tends to like to do long runs of products to satisfy productivity KPI’s, so you end up running out of stock of the fast movers, while having too much stock of the slow lines.
The solution is to make the sales people responsible for inventory levels, and the operations people responsible for stock availability. In that way, they collaborate to achieve the optimum mix of production and inventory. This mutuality ensures functional collaboration at the tactical level, leading to making decisions for which they are jointly accountable.
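A sketch of what such a mutual scoreboard might compute, with invented SKU names and figures: sales are judged on inventory days of cover, operations on stock availability, so each side feels the other's trade-off:

```python
# Mutual-KPI scoreboard sketch; SKU names and figures are invented.

skus = {
    # sku: (units_on_hand, avg_daily_demand, orders_filled, orders_received)
    "fast-mover": (400, 80, 188, 200),
    "slow-line":  (900, 10, 50, 50),
}

for sku, (on_hand, daily_demand, filled, received) in skus.items():
    inventory_days = on_hand / daily_demand   # sales' mutual KPI
    availability = filled / received          # operations' mutual KPI
    print(f"{sku}: {inventory_days:.0f} days cover, "
          f"{availability:.0%} availability")
```

In this invented data the classic pattern is visible: the fast mover has thin cover and missed orders, while the slow line carries 90 days of stock, exactly the imbalance the mutual KPI's are designed to surface and fix.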
You are in effect, forcing cross functional collaboration where it does not naturally exist in a traditional top down management model.
None of this is easy. If it was, everybody would be doing it. That is the reason you should be on this journey, it is hard, and so delivers competitive sustainability.
Aug 3, 2020 | Leadership, Operations
I love SMART goals: they provide a road map, discipline, and a definition of what success looks like. Over the years they have proved to be very useful.
However, as I get wiser, I realise there is one vital element missing from SMART goals:
Compounding.
Compounding is, as Einstein reputedly noted, the most powerful force in the universe. To compound, you do little things that build on each other over time, becoming more powerful at a geometric rate.
The benefit of compounding is that you learn as you go, it is learning oriented, whereas SMART is by definition, goal oriented, it has an end point.
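The arithmetic of compounding is simple but easily underestimated. A minimal sketch, assuming a purely illustrative 1% improvement each week:

```python
# Compounding small gains: 1% better each week for a year.
# The 1% figure is purely illustrative.

rate = 0.01
weeks = 52
compounded = (1 + rate) ** weeks
print(f"After {weeks} weeks: {compounded:.2f}x the starting level")
```

The same 1% taken as isolated, non-compounding gains would add only 52% over the year; compounded, it delivers roughly 68%, and the gap widens every year after that.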
The obvious solution to this dilemma is to make every project a series of goal oriented components that, together and compounding, deliver continuously improving outcomes. This sort of view forces you to be ambidextrous in the way you look at performance.
On one hand, you are down in the weeds working with the detail, while on the other hand, there is the really important helicopter view that is able to make the compounding impact of all those tiny improvements obvious over time.
At its core, this is what lean thinking is all about, continuous improvement that delivers over time.
Jun 15, 2020 | Management, Operations
Metrics at their best deliver game changing insight and wisdom. At their worst, they are misleading, irrelevant, and a pain in the arse to collect.
So, what are the two characteristics that make a great metric?
The metric is a leading indicator.
A Leading indicator is a reliable measure of what will happen.
For example, if you have the data that shows that for every lead you generate, you convert 5% at an average purchase price of $50, and those customers buy twice a year for an average lifetime of 3 years, you can calculate with some confidence what each lead is worth to you. In this case: 100 leads X 5% X $50 X twice a year X 3 years = $1,500, or $15 per lead.
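That worked example, verified as code, using the same conversion and purchase figures quoted above:

```python
# Value of a batch of 100 leads, using the figures from the example.
leads = 100
conversion = 0.05        # 5% of leads become customers
avg_purchase = 50        # dollars per purchase
purchases_per_year = 2
lifetime_years = 3

batch_value = (leads * conversion * avg_purchase
               * purchases_per_year * lifetime_years)
value_per_lead = batch_value / leads
print(f"100 leads are worth ${batch_value:.0f}, or ${value_per_lead:.0f} each")
```

Knowing each lead is worth $15 turns lead generation spend from a leap of faith into a simple comparison against cost per lead.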
The metric is causal.
The most common mistake I see, is metrics that confuse cause with correlation. There are many things that correlate, despite the fact that there is no relationship between them. One does not cause the other.
For example, there is a correlation between ice cream sales and drownings, which on a graph look near identical, but there is no causation between the two. Look deeper, and you might see that on sunny days, more people eat ice cream, and more people also go to the beach, swim, and therefore risk drowning. There is also a close correlation between ice cream consumption and shark attacks. This second correlation would also suffer from very 'thin' data, which makes any sort of causal relationship even further from the truth. However, a glance at a graph, which takes on some credibility simply because someone has actually created it, would suggest there is some causation.
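A hedged simulation of that hidden confounder, with entirely invented numbers: temperature drives both ice cream sales and the number of swimmers, so the two series correlate strongly even though neither causes the other:

```python
import random

# Spurious correlation sketch: hot days drive both ice cream sales
# and beach swimmers (and hence drowning risk). All figures invented.
random.seed(42)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

temps = [random.uniform(10, 40) for _ in range(365)]       # the confounder
ice_cream = [5 * t + random.gauss(0, 20) for t in temps]   # driven by temp
swimmers  = [12 * t + random.gauss(0, 50) for t in temps]  # also driven by temp

r = pearson(ice_cream, swimmers)
print(f"Ice cream vs swimmers correlation: r = {r:.2f}")
```

The correlation comes out high, yet by construction there is no causal link between the two series; remove the temperature driver and it vanishes.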
For a metric to be of any real use, it has to be the catalyst that changes behaviour, and delivers a predictable result. It is not always easy to sort the causal from the correlative. When you need some experienced wisdom, give me a call.