Aug 24, 2020 | Leadership, Management
There is rich wisdom to be found in fiction, although you sometimes have to look hard for it. Some writers have used fiction to deliver timeless messages.
For example, Sir Arthur Conan Doyle had his protagonist Sherlock Holmes utter some really meaningful lines. Amongst these is the classic: ‘It is a capital mistake to theorise before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.’
This is confirmation bias at work. We see things that confirm what we already believe much more often and clearly than we see things that may erode or contravene our existing beliefs.
In digging for the facts and data, you need to be able to ask smart questions, in some sort of order, to give ‘shape’ to the way a problem is perceived.
- How and why is this issue a problem? Assemble observations, informal information, and input from customers, line workers, and wherever else the problem may be seen, to establish that there really is a problem, not just someone having a moan.
- When does the problem show itself? Under what circumstances is the problem seen? Are there patterns of behaviour or circumstances that seem to be correlated? Is there any foundation for causation?
- Where is the problem showing up? This goes a step deeper to start defining the location of the problem, and the impact it may have.
- What are the impacts of the problem? What are the financial, cultural, value chain, and customer impacts of the problem?
- What is the priority in allocating resources to solve the problem? There are always more problems than there are resources to address them, and as a result, only a few get the attention they deserve. Make sure those limited resources are allocated in the best possible way.
- What return is delivered by solving the problem? This is far more than a financial calculation; it needs to include an assessment of how the transaction costs may be moved around. What is the impact on workflow and stakeholder engagement as people see problems being identified and removed?
- What other problems are uncovered by the consideration of the first one? Looking at a problem always uncovers others. Often in the process of understanding the problem, others that are the root causes show themselves for the first time. The ‘5 why’ tool is invaluable in understanding the root causes of problems, and should be in every manager’s toolbox.
Going back to Sherlock, one of his extremely useful observations captured the essence of Occam’s razor: ‘When you have eliminated the impossible, whatever remains, however improbable, must be the truth.’
It is our job as leaders to get at the truth, and to communicate that truth widely, in a manner that is clearly understood and able to be acted on. So the essential lesson is to ask good questions.
Aug 20, 2020 | Governance, Management
The term ‘After Action Review’ emerged from the US military, which formalised it after facing a range of disasters in the field, from Vietnam to the Middle East. Finally, it became obvious they were repeating the same mistakes, consistently.
They should have asked an accountant earlier.
Standard good management practice after a capital expenditure project has been to review the actual outcomes against those expected in the plan. Variations from the plan need to be understood, to ensure errors in judgement are not repeated.
In my experience, it rarely happens well enough; too much corporate politics and ego are involved. However, the idea is not a new one; it just makes absolute sense, which is why you should build it into the performance management culture of your business.
There are five simple questions. The first is easy: that is the plan. The following three are where the gold of improved performance hides, when you dig hard enough and ensure the lessons are well learned. The last drives future action.
- What did you plan to make happen?
- What actually happened?
- What caused the difference?
- What can we learn?
- What specific changes will we make next time?
Such a process, embedded in your performance management culture, will deliver guaranteed results. ‘Rinse and repeat’ the question process after every project. No matter how small the project may appear to be, an AAR should be automatic, simply a standard part of the process. After a while, it will become second nature to observe the things that may cause the unexpected, plan for them, and take steps to remove them before they occur.
Therein hides one of the secrets of continuous improvement in profitability.
Aug 14, 2020 | Analytics, Management
You will not hear the term ‘normalising’ the P&L very often. When you do, it is often an indication that the business is in a frame of mind open to change.
It is a common starting point of valuing a business, a process that has two basic buckets:
- Financial value. This is where any valuation process will start, with the numbers.
- Strategic value. Far more qualitative than the numbers, a potential buyer will set out to put a value on such things as market share, customer profiles, geographic location, cultural fit, and so on.
Valuing a business is a complex exercise, particularly valuing the contents of the ‘strategic bucket’.
Creating a financial value is much better understood, and almost always starts at the same place: EBITDA. Earnings Before Interest, Tax, Depreciation, and Amortisation.
EBITDA is a construction from the Profit and Loss account, which reflects the trading results. Usually the P&L is completed on a monthly basis, and so long as the classification of expenses remains consistent, it can be used for comparisons over time to give a good picture of trends.
However, the P&L can also be the repository of all sorts of costs and activities that bear little relationship to the competitive trading health that determines the value of a business. Therefore, an exercise to arrive at a value will seek to remove or add back items, so that the result more accurately reflects trading health. The usual term is to ‘normalise’ the P&L.
This is particularly relevant in the sale process of a private company, less subject to the rigours of governance that apply to listed companies with professional rather than family management.
The common items to be ‘normalised’ I have seen are:
Related party revenue or expenses. Purchases from, or sales to, another business related in some way to the one being investigated, at prices above or below market value. A common practice is for the owner of a private business to have their superannuation fund own the premises from which the business operates. The premises are then leased back to the operating business at a rate not reflective of competitive market value.
Owner bonuses and benefits. Often the owner of a private business will pay themselves and family members more than the market value of their contribution to the business. It also works in reverse: owners are sometimes the worst paid staff members, working longer hours than anyone else, just to keep the wheels turning over. These anomalies need to be ‘normalised’.
Support of redundant assets. Every business has redundant assets that would be jettisoned by a new owner. This stretches from old inventory still carried on the books, to premises not utilised, to the country retreat used occasionally for a sales conference, but usually for the summer holidays of the owner. These do not realistically impact the performance of the business, and a new owner, unencumbered by the past, and by costs not associated with the trading position of the business, will remove them from the P&L.
Asset and expense recognition. Treating an expense as an asset, ‘capitalising’ it, is a common practice that boosts short-term profitability by moving items from the P&L to the balance sheet. While this practice is subject to the scrutiny of tax and accounting rules and independent audit, it is pretty common, particularly in the treatment of repairs and maintenance. As with many items, the accounting treatment can be used both ways to ‘manage’ short-term profitability.
One time costs. Items such as litigation, insurance claim recoveries, one-off professional fees, even charitable donations, that are not a normal part of trading operations need to be identified and ‘normalised’ to build the picture of repeatable trading outcomes.
Inventories. Every business has inventories; for many it is a significant item. Manufacturing businesses have physical inventories in raw materials, work in progress, and finished goods, while service providers have projects in various stages of completion. The method of valuing inventories is subject to all sorts of shenanigans, and the amount of inventory to mismanagement, sloppy processes, and a host of other curses. Aggressive and consistent inventory valuation is a vital part of understanding the working capital needs of a business, and it is often the most contested piece in the valuation puzzle.
When you have all that out of the way, you should be able to calculate a reliable figure for the free cash flow generated, or consumed, by the business: a further vital number, and one upon which many acquisition and divestment decisions have been taken.
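The arithmetic of a normalisation is trivial; the judgement lies in deciding which adjustments are legitimate. As a purely illustrative sketch (the figures and adjustment labels below are invented, not drawn from any real engagement):

```python
# A minimal sketch of a P&L 'normalisation', using invented numbers.
reported_ebitda = 850_000

# Add-backs: costs sitting in the P&L that a new owner would not carry
add_backs = {
    "owner salary above market rate": 120_000,
    "one-off litigation costs": 45_000,
    "holding costs of redundant assets": 30_000,
}

# Deductions: benefits in the P&L that would not survive a change of ownership
deductions = {
    "below-market rent on related-party premises": 60_000,
    "market salary for unpaid owner hours": 80_000,
}

normalised_ebitda = reported_ebitda + sum(add_backs.values()) - sum(deductions.values())
print(f"Normalised EBITDA: ${normalised_ebitda:,.0f}")  # $905,000 with these numbers
```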
As a consultant looking to help businesses improve their financial and strategic performance, I often quietly do a ‘normalisation’ exercise on a client’s P&L. This process almost always offers up those difficult questions that need to be asked and answered before an improvement process can be truly effective.
Header cartoon credit: Dilbert and Scott Adams again capture the idea.
Aug 10, 2020 | Analytics, Management, Strategy
Data is inherently tactical, just numbers without intelligence. It takes structure, capability development, and governance to turn it into a useable asset that adds value. In the absence of a structure that is designed to enable the identification, analysis, and leveraging of that data, and to turn it into useable intelligence, it will remain just data.
To go about that task, ask yourself a number of questions:
What are the data flows?
Through the enterprise, who uses the data, how do they use it, and to what outcome?
Where do the interconnections occur, and to what extent are they compounding positively? Data can also compound negatively, usually because it reinforces an existing confirmation bias that is flawed.
Data is functionally agnostic; it should be readily available to all, and the outcomes of its use transparent, so they can be built upon and compounded.
Who ‘owns’ the data?
Too many times I see the IT department generating data and keeping it to themselves. Similarly, the finance department is guilty, as are all functions. This is usually not malicious; it just reflects a lack of cross-functional collaboration. It is becoming more common that marketing is driving a large part of the data agenda, enabled by digital tools, but few marketers have the capability to do it effectively.
Often, there is an expectation that ‘digitisation’ of the enterprise will change the way data is used. Not so. It is no more than putting a new coat of paint on the building: unless the internal structures are changed as well, nothing really changes, and you just get a few press releases and nice photos for the annual report.
What data is used?
Piles of data are generated, often collated and distributed, or made available, but never put to productive use. Usually the missing ingredient is curiosity. Those who are curious approach the data with a ‘why’ and ‘what if’ attitude; they ask questions that identify holes in the data, drive them to be filled, and seek out new sources.
Where does the data add competitive value?
Competitive value is a two-sided coin. On one side is the need to keep up with what your competition is doing, to leverage the opportunities for productivity, and not fall behind in your customers’ eyes. On the other is to find ways for data, and more specifically the knowledge that comes from analysing data, to give you a competitive edge. If a proposed investment does not do at least one of these two things, why would you proceed?
How well do the data outcomes reflect alignment with strategy?
When the data, and the analysis that goes with it, lead to conclusions that are inconsistent with or divergent from the stated strategy, you must question the data, the analysis, and the strategy. In these circumstances, it makes sense to deploy the scientific method: create a hypothesis, test it, collect more data, and rinse and repeat until you have alignment between the strategy and its supporting data.
Where are you on the digital adoption curve?
Data is just another asset; it requires explicit actions to build the capabilities necessary to generate, use, and fund it. Explicit policies and priorities have to be given to the investments in data development and the capabilities required, or it will not happen. There needs to be a clear picture of the structure of the data domains, from engineering to finance, marketing, and sales, and they need to be prioritised and organised to deliver the best return in the long term.
The tools being used to accumulate, process, and analyse data are just tools, no different to the hammer that drives a nail. It is how we use them that makes the difference. Tools everyone should have are those that ensure the data is both clean and robust. Decisions based on data that fails either of these ‘sanitary’ tests will be sub-optimal at best.
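As an illustration of what such a ‘sanitary’ test might look like in practice (the checks below are a minimal set of my own choosing, not a standard), a few lines of code can flag data that should not yet be trusted:

```python
import pandas as pd

def sanity_check(df: pd.DataFrame) -> dict:
    """Basic cleanliness checks to run before any analysis is trusted."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values_by_column": df.isna().sum().to_dict(),
        "constant_columns": [c for c in df.columns if df[c].nunique(dropna=True) <= 1],
    }

# Example: run the checks over whatever extract lands on your desk
# report = sanity_check(pd.read_csv("sales_extract.csv"))
```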
We have entered the digital world. Data and its organisation, funding, leveraging and governance are rapidly becoming the key to competitive survival.
How well are you, and your enterprise placed?
Header cartoon: courtesy Tom Gauld at tomgauld.com.
Jun 15, 2020 | Management, Operations
Metrics at their best deliver game-changing insight and wisdom. At their worst, they are misleading, irrelevant, and a pain in the arse to collect.
So, what are the two characteristics that make a great metric?
The metric is a leading indicator.
A leading indicator is a reliable measure of what will happen.
For example, if the data shows that you convert 5% of the leads you generate, at an average purchase price of $50, and those customers buy twice a year for an average lifetime of 3 years, you can calculate with some confidence what each lead is worth to you. In this case: 100 leads x 5% x $50 x twice a year x 3 years = $1,500, or $15 per lead.
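That arithmetic is simple enough to capture in a few lines of code; the figures used below are just the ones from the example above:

```python
def value_per_lead(conversion_rate, avg_purchase, purchases_per_year, lifetime_years):
    """Expected lifetime revenue generated by a single lead."""
    return conversion_rate * avg_purchase * purchases_per_year * lifetime_years

# The example above: 5% conversion, $50 per purchase, twice a year, for 3 years
print(value_per_lead(0.05, 50, 2, 3))  # 15.0 -> $15 per lead, or $1,500 per 100 leads
```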
The metric is causal.
The most common mistake I see is metrics that confuse cause with correlation. There are many things that correlate despite there being no relationship between them. One does not cause the other.
For example, there is a correlation between ice cream sales and drownings which, on a graph, looks almost identical, but there is no causation between the two. Look deeper, and you might see that on sunny days more people eat ice cream, and more people also go to the beach, swim, and therefore risk drowning. There is also a close correlation between ice cream consumption and shark attacks. This second correlation also suffers from very ‘thin’ data, which makes any sort of causal relationship even further from the truth. However, a glance at a graph, which takes on some credibility simply because someone has created a graph, would suggest there is some causation.
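A quick, entirely synthetic sketch shows how easily a hidden common driver produces that kind of convincing-looking correlation; both series below are generated from temperature, and neither causes the other:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
temperature = rng.uniform(10, 40, size=365)                        # the hidden common driver
ice_cream_sales = 20 * temperature + rng.normal(0, 50, size=365)   # rises on hot days
drownings = 0.3 * temperature + rng.normal(0, 2, size=365)         # also rises on hot days

# Strongly correlated, despite neither variable causing the other
print(np.corrcoef(ice_cream_sales, drownings)[0, 1])
```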
For a metric to be of any real use, it has to be the catalyst that changes behaviour, and delivers a predictable result. It is not always easy to sort the causal from the correlative. When you need some experienced wisdom, give me a call.
Jun 12, 2020 | Governance, Management
Preparing forecasts is an integral part of most jobs these days, even if it is just how much available capacity there might be on the machine tomorrow, and how best to fill it.
Most forecasting I see is based on the financials, and uses one of two methods. The first is the ‘spreadsheet method’, where 4.5% is added across the board, and bingo, a forecast. Easy. The second method is driven by numbers of a different sort: the Net Present Value equation, the present discounted value of forecast future cash flows, which is more often than not driven by spreadsheets with hurdles imposed.
Neither is much good by themselves.
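For what it is worth, the NPV arithmetic itself is trivial; the hard part is the quality of the forecast cash flows and the discount rate fed into it. A minimal sketch, using an invented project:

```python
def npv(rate, cash_flows):
    """Present value of a series of cash flows, discounted at 'rate'.
    cash_flows[0] is the initial outlay (usually negative), cash_flows[t] the forecast for period t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# An invented project: $100k outlay, $40k a year back for 3 years, 10% discount rate
print(round(npv(0.10, [-100_000, 40_000, 40_000, 40_000]), 2))  # -525.92: just misses the hurdle
```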
More recently, a range of pretty sophisticated modelling tools have become generally and cheaply available, which, despite their sophistication, still have to be fed data and assumptions to spit out an answer.
Effective forecasting takes in a range of qualitative factors, some of which can be massaged into the algorithms, with the caveat that they then have some sort of relative weight applied.
- A realistic assessment of the resources required to reach an objective. Of increasing importance in this calculation are the capabilities of the people required to deliver the outcome.
- An assessment of the strategic, competitive, and regulatory environment in which the forecast lives. Generating forecasts without due consideration of a range of factors external to the business, over which it has little if any control, becomes little more than wishful thinking.
- An assessment of the impact the successful initiatives will have on the external environment, particularly competitors. I see way too many forecasts that ignore the simple fact that competitors will not sit still while you eat their lunch. Failure to adequately anticipate and accommodate their reactions in the tactics to be deployed, and their forecast outcomes, is just plain dumb.
- A continuous and rolling After Action Review process. This process ensures the impact of tactical actions can be assessed, and the lessons applied to following forecasts. A forecast should be a ‘living’ document, something that accommodates, adjusts and builds on the facts and changing circumstances as they emerge.
Back in the day when I ran large marketing departments, forecasting was a key part of any project plan. When product managers came to me with their forecasts, I was not so much concerned with the numbers, as I was with the assumptions included and the relative weights of those assumptions. I also insisted that any forecast had three components, a best case, worst case, and forecast case, prepared separately, with different weights allocated to the variables. This gave us a range with which to work, and importantly, ensured some thought had been put into the implications of the ‘pear-shaped’ outcome.
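As a stripped-down illustration of how three separately prepared cases might be combined into a single weighted expectation and a working range (the weights and revenue figures below are invented purely for illustration):

```python
# Hypothetical cases; in practice each is prepared separately, with its own assumptions and weights
scenarios = {
    "worst case":    {"weight": 0.2, "revenue": 500_000},
    "forecast case": {"weight": 0.6, "revenue": 1_000_000},
    "best case":     {"weight": 0.2, "revenue": 1_600_000},
}

expected_revenue = sum(s["weight"] * s["revenue"] for s in scenarios.values())
low, high = scenarios["worst case"]["revenue"], scenarios["best case"]["revenue"]

print(f"Weighted expectation: ${expected_revenue:,.0f}, range ${low:,.0f} to ${high:,.0f}")
# -> Weighted expectation: $1,020,000, range $500,000 to $1,600,000
```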
The disturbing thing was always how inaccurate our forecasts were, no matter how hard we worked. This does not mean we did not try hard enough, simply that telling the future is a challenging task, not to be undertaken lightly.
Once again, my thanks for the header to Scott Adams and Dilbert.