Nov 9, 2020 | Analytics, Sales
Getting people to change their minds is tough.
Just look at the divisions in the US over the last months in the lead up to the presidential (have you ever seen anything less presidential?) election.
Most people set out to change somebody's mind by telling them they are wrong, and then presenting the 'right' answer.
This rarely works well, because the natural reaction is to push back and defend the existing position.
Instead, you have to find the things that they want that are consistent with the position you hold, and deliver those to them.
A former client had been successful for a long time selling the manufacture of large capital items to its natural Australian customer base. Over time, their market share had diminished as lower priced overseas competitors ate away at their base. Their focus was on price, and they cut corners in all sorts of little ways in an attempt to remain competitive, and the erosion continued.

The turnaround came when they moved their focus from the procurement functions to engineering, giving the engineering staff of their customers the ammunition to argue the case that in fact, the more expensive invoice cost of their gear was better value than the 'cheaper' items sourced offshore.

They created what they called 'the TLC index': Total Lifetime Cost. This was a calculation based on data and case studies over a considerable period that included scheduled maintenance and replacement parts, reliability data, the costs of downtime caused by offshore supply chain delays, ease of access to those who did the design and fabrication of the component parts, and several other items.
The invoice price on the purchase order was then shown in an entirely different light.
They did not tell their customers that the cheaper offshore item was an inferior product. Instead, they focussed on the things the engineering staff were concerned about, and gave them the ammunition to carry the argument based on the value of efficiency and reliability over the life of the items, rather than just the invoice price of the initial procurement.
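By way of illustration only, a total lifetime cost comparison might be sketched roughly as below. The factor names and figures are hypothetical, not the client's actual TLC model; the point is simply that the invoice price becomes one line in a much larger equation.

```python
# A minimal, hypothetical sketch of a 'Total Lifetime Cost' comparison.
# All factor names and figures are illustrative assumptions, not the client's actual TLC index.

def total_lifetime_cost(invoice_price, annual_maintenance, annual_downtime_cost,
                        annual_replacement_parts, service_life_years):
    """Sum the purchase price and the recurring costs over the expected service life."""
    recurring = annual_maintenance + annual_downtime_cost + annual_replacement_parts
    return invoice_price + recurring * service_life_years

# Local supplier: higher invoice price, lower recurring costs (illustrative figures only)
local = total_lifetime_cost(invoice_price=500_000, annual_maintenance=20_000,
                            annual_downtime_cost=10_000, annual_replacement_parts=5_000,
                            service_life_years=15)

# Offshore supplier: lower invoice price, higher recurring costs (illustrative figures only)
offshore = total_lifetime_cost(invoice_price=400_000, annual_maintenance=35_000,
                               annual_downtime_cost=40_000, annual_replacement_parts=15_000,
                               service_life_years=15)

print(f"Local TLC:    ${local:,.0f}")     # $1,025,000
print(f"Offshore TLC: ${offshore:,.0f}")  # $1,750,000
```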
It is easier in most cases to focus on price. However, demonstrating that price is only one part of the value equation delivers better results over time.
Dilbert again demonstrates insight: thanks to Scott Adams.
Oct 19, 2020 | Analytics, Marketing
How do you identify those who might emerge as competitors?
Might as well ask what the weather will be like next week, or next year. Just looking out the window will not help much, as things tend to change pretty quickly.
So it is with identifying potential competitors and the new value propositions they offer. However, like forecasting the weather, there are indicators, models that can be applied that will at least throw some light on the question.
It seems to me that there are three perspectives to this question, best described by the usual ‘think outside the box’ metaphor.
- Competitors from inside the current box
- Competitors from outside the current box
- Competitors from outside the postcode.
Leaving aside the great benefit of hindsight, it is generally pretty easy to see potential competitors from inside your current box. You are already rubbing up against them in some way, technically or geographically; you might have common customers for products with slightly different characteristics or value propositions. There is always the possibility of someone in an adjacent market expanding their control of the supply chain vertically, which will bring them into competition with you.

There are now many digital tools that will assist the process, from Google keyword searches to explicitly looking at where similar offerings are emerging via social media. In addition, these often obvious emerging competitors are usually the ones your sales force are always on about, as they are the dog that ate their homework, often by making cut-price offerings of stripped-down products to your customers.
Identifying potential competitors from outside the box is a bit harder. It is also increasingly important, as the barriers to entry to many industries have been blown away.
To do this well requires focussed critical thinking, and an understanding of the drivers of purchase by your customers, the evolution of those purchase drivers, and the current sources of purchase friction encountered by customers. This takes some commitment to longer-term strategic thinking.
Identifying those potential competitors from outside the postcode in a disciplined manner is extraordinarily hard, and beyond the resources of any SME I have ever seen. It not only requires the sort of time for deep thinking and scenario analysis available only to large organisations, it also requires a very 'absorbent' innovation culture, one that accepts the inevitability of major disruption, explicitly sets about discovering what those sources of disruption might be, and is then prepared to disrupt itself.
This sort of out-of-the-postcode thinking opens up your mind to potential competitors. It also opens your mind to potential sources of future growth. As a result, an explicit and disciplined ideation process should be a part of every strategic exercise and review.
These three perspectives all have in common the requirement that any enterprise, to be successful competitively over the long term, needs to 'understand itself' and be very sensitive to changes in its operating and competitive environment. Then, it has to be able to respond, which is often the hardest bit because it demands change. In the case of out-of-the-postcode opportunities, radical change and a high level of risk tolerance are required. As Sun Tzu is recorded as having said: 'know others and know thyself, and you will not be endangered by innumerable battles'.
The header illustration comes from the extensive StrategyAudit toolbox.
Oct 16, 2020 | Analytics, Management, Operations
Certainty in forecasting is the holy grail: being certain of the future means success. However, the only thing we know for certain about the future is that it will not be the same as the past, or the present.
Quantifying uncertainty appears to be an oxymoron, but reducing the degree of uncertainty would be a really useful competitive outcome.
When you explicitly set about quantifying the degree of uncertainty, or risk, in a decision, you create a culture where people look for numbers that not only support their position, but that may also lead to an alternative conclusion. This transparency in the forecasts that underpin resource allocation decisions is enormously valuable.
How do you go about this?
- Start at the top. Like everything, behaviour in an enterprise is modelled on behaviour at the top. If you want those in an enterprise to take data seriously, those at the top need to not just take it seriously, but be seen to be doing just that.
- Make data widely available, and subject to detailed examination and analysis. In other words, ‘Democratise’ it, and ensure that all voices with a view based on the numbers are heard.
- Ensure data is used to show all sides of a question. Data that emphasises one part of a debate at the expense of the others will lead to bias. Data itself is not biased, but people usually are, and without an explicit determination to find data and opinion that runs counter to an existing position, bias will intrude.
- Educate stakeholders in their understanding of the sources and relative value of data.
- Build models with care, ensure they are tested against the outcomes they forecast, and continuously improve them.
- Choose performance measures with care: make sure there are no vanity or one-sided measures included, and that they reflect outcomes rather than activities.
- Explicitly review the causes of variances between forecasts and actual outcomes. This review process, and the understanding that evolves from it, will improve the accuracy of forecasts over time; a minimal sketch of such a review follows this list.
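As a minimal sketch of what such a variance review might look like in practice, the figures below are hypothetical, and bias and MAPE are just two common choices of metric rather than a prescribed method:

```python
# A minimal, illustrative sketch of a forecast-versus-actual variance review.
# The figures are hypothetical; bias and MAPE are common metrics, not the only options.

forecasts = [120, 150, 100, 180]   # units forecast per period (hypothetical)
actuals   = [110, 160,  90, 150]   # units actually sold per period (hypothetical)

errors = [f - a for f, a in zip(forecasts, actuals)]

# Bias: a consistently positive figure suggests systematic over-forecasting.
bias = sum(errors) / len(errors)

# MAPE: the average size of the error, irrespective of direction.
mape = sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals) * 100

print(f"Bias: {bias:+.1f} units per period")
print(f"MAPE: {mape:.1f}%")
```

Reviewing why the larger variances occurred, rather than just noting them, is where the improvement in future forecasts comes from.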
Data is agnostic; the process of turning it into knowledge is not. Ensure that the knowledge you use to inform forecasts of the future is based on agnostic analysis, uninfluenced by biases of any sort. This is a really tough cultural objective, as human beings are inherently biased; bias is a cognitive tool that enables us to function by freeing up 'head space', reducing the risk of being overwhelmed.
Consistent forecast accuracy is virtually impossible, but being consistently more accurate than your competition, while very tough, is not. Forecast accuracy is therefore a source of significant competitive advantage.
Header cartoon courtesy Scott Adams and his side-kick, Dilbert.
Oct 12, 2020 | Analytics, Marketing
It is easy to define the value of a piece of machinery. It is the revenue generated by the machine, divided by the costs to generate that revenue.
Accounting with the benefit of hindsight is easy. It is not so easy when forecasting what the future value may be: forecasting when the impact of the many relevant variables can only be estimated is an exercise in fortune telling. Quantifying these relative unknowns to allocate a numerical 'value' becomes a task with several parts:
- Defining the factors that may impact the calculation
- Allocating a relative weight to all the identified factors
- Determining the ‘base’ figure from which to build the numbers that enable a calculation.
- Repeating the above process for all the costs involved.
The calculation is then easier:
Value = (weighted benefit 1 × weighted benefit 2 × weighted benefit 3) ÷ (weighted cost 1 × weighted cost 2 × weighted cost 3)
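As a rough sketch of that calculation, the factor names and weights below are hypothetical placeholders for whatever your own analysis identifies:

```python
# A minimal, hypothetical sketch of the weighted value calculation above.
# Factor names and weights are placeholders; the real work is in identifying
# the factors and agreeing their relative weights.
from math import prod

weighted_benefits = {
    "revenue_uplift": 1.4,   # illustrative weighted benefit scores
    "reliability": 1.2,
    "resale_value": 1.1,
}

weighted_costs = {
    "purchase_price": 1.3,   # illustrative weighted cost scores
    "maintenance": 1.1,
    "downtime": 1.05,
}

value = prod(weighted_benefits.values()) / prod(weighted_costs.values())
print(f"Relative value score: {value:.2f}")  # above 1 suggests benefits outweigh costs
```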
It becomes way harder when setting out to value an intangible asset, such as a brand. For example, consider a pair of sunglasses purchased in a general retailer for a fraction of the price of an almost identical pair, differing only in brand, sold through a specialist optical retailer. To many, the more expensive branded glasses represent value for a range of emotional reasons; to others, they would be a rip-off.
Early on, and subject to continuing evolution based on experience and research, you need to be able to identify the factors that add value for a target customer, and their relative contribution to the end result.
Always, the complicating factor is context.
I need a new computer. This one is getting a bit old, and while it still does the job well, at some point something will reach the end of its life, and 'poof', gone. At that point the context changes, as does the value equation.
What was something needed but not urgent, that had a calculable value, suddenly becomes a whole new game, as I need the new computer: Now!
A whole different value equation!
The variables may be the same, but the relative weights have changed dramatically, determined by context.
Sep 23, 2020 | Analytics, Management, Operations
When you want superior performance, implement a number of key cross functional metrics.
Gaining agreement on a set of metrics that genuinely track a project's cross functional performance is not a simple task. KPIs are usually focussed on functional performance, whereas optimal performance requires that cross functional dependencies are reflected in the KPIs put in place.
The standard response of functional management to such an idea is that if they cannot control a process, how can they be held accountable for its performance?
Getting past this reasonable question requires agreement across three domains, and collaboration around the tactical implementation of a process improvement.
Let us use a reduction of working capital requirements as an example, requiring four steps.
Agreement on strategic objectives, and accompanying KPIs.
The strategic objective becomes making the enterprise more resilient, and therefore able to adjust to unforeseen shocks. One of the strategies agreed is the reduction of working capital. There are many parts that make up working capital, inventory being a major one in a manufacturing environment. As the joint objective is to make the enterprise more resilient, it is agreed that inventory levels must be reduced.
Agreement on what ‘success’ looks like.
The absence of an outcome that signals success means that any improvement will do. There are numerous measures that can be applied: how much, when, what outcomes, compliance to standards, variation from the mean, and many others. In this case, a reduction of inventory levels by 15% without compromising customer service is the agreed metric of success. Agreement across functions that this is a sensible measure will deliver the opportunity for cross functional alignment, and will contribute to delivering the strategic objective of resilience.
Agreeing on tactical diagnostics.
Tactical diagnostics are aimed at tracking and optimising the short term performance detail of the components of the agreed objective. Which parts of a project are working as expected, and which are not. You can make the changes in these on the run, experiment, learn, adjust. It is usually not necessary to have these on the high level dashboard, they are for the teams and individuals responsible for the execution of a strategy to determine the best way of doing them. What is critical at the tactical level, is that those involved clearly understand the wider objective, and their role in achieving it.
Application of the diagnostics.
As the old saying goes, 'what gets measured, gets done'. In this case, reducing inventory without compromising customer service requires the co-ordination of many moving parts, some of which will need some sort of a scoreboard to track progress on the tactical improvements. For example: transparency of raw materials inventory and incoming delivery schedules for those doing production planning, matching production to real demand, improving forecast accuracy, managing DIFOT (Delivery In Full, On Time) levels, levelling production flow between work stations, and many others. These should be made visible to the teams engaged in the work, at the place where the work gets done.
For all this to work, the KPIs need to be simple, visual, apparent to everyone, and as far as possible mutually dependent across functions. In other words, build mutual KPIs that reflect both sides of a challenge.
For example, stock availability and inventory levels. Generally, those responsible for selling do some of the forecasting, and they always want inventory, manufactured yesterday, to be available when a customer needs it. As a result of uncertainty, they tend to over-forecast to ensure stock availability when an order arrives. By contrast, operations tends to like long runs of product to satisfy productivity KPIs, so you end up running out of stock of the fast movers while having too much stock of the slow lines.
The solution is to make the sales people responsible for inventory levels, and the operations people responsible for stock availability. In that way, they collaborate to achieve the optimum mix of production and inventory. This mutuality ensures functional collaboration at the tactical level, leading to decisions for which they are jointly accountable.
You are, in effect, forcing cross functional collaboration where it does not naturally exist in a traditional top-down management model.
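As a minimal sketch of how such mutual KPIs might appear on a shared scoreboard, the figures and targets below are hypothetical:

```python
# A minimal, hypothetical sketch of a mutual KPI scoreboard:
# sales owns the inventory figure, operations owns stock availability,
# so each function is measured on the consequences of the other's decisions.

orders = [
    # (units_ordered, units_delivered_in_full_on_time) - hypothetical figures
    (100, 100), (80, 75), (120, 120), (60, 60),
]

inventory_units = 900       # current units on hand (hypothetical)
avg_daily_demand = 30       # units per day (hypothetical)
target_days_cover = 25      # agreed target after the inventory reduction (hypothetical)

# Stock availability (DIFOT): owned by operations
difot = sum(d for _, d in orders) / sum(o for o, _ in orders) * 100

# Days of inventory cover: owned by sales
days_cover = inventory_units / avg_daily_demand

print(f"DIFOT: {difot:.1f}% (hypothetical target: 98%+)")
print(f"Inventory cover: {days_cover:.0f} days (hypothetical target: {target_days_cover} days)")
```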
None of this is easy. If it was, everybody would be doing it. That is the reason you should be on this journey, it is hard, and so delivers competitive sustainability.
Sep 7, 2020 | Analytics, Management, Marketing
As a marketer, I want data to better understand the risks and impact of investments in marketing. I am a true believer in data, which also means that the limitations of data are factored into my thinking.
The nonsense pushed around for decades, that by default human beings respond to stimuli in a binary way, is increasingly being recognised for the bunkum it is. Marketing effectiveness is not as easily subject to risk analysis and probability-based reasoning as most, including myself, would like to be the case.
Data that represents what has happened in the past might be objectively true, but as we see every day, can easily be interpreted and presented differently to deliver the message the carrier wants to be heard.
If we can do it with real data collected from past activities, imagine the vagaries that can be built into the data that is supposed to be telling us what will happen!
The selling point of all the digital data around is that it is both accurate and actionable. Tactically this is partly true; strategically it ranks with the fortune teller at the local fete as a base from which to make long-term choices.
The two fundamental challenges in calculating an objective assessment of the impact of a marketing investment are:
Attribution.
Attribution is a particularly difficult and often overlooked problem. Is that purchase because of the anonymous display ads on Google, the annoying branded email that follows you around for weeks after a casual search, the fact that the delivery truck that went past your door was clean, the TV advertising, or that the packaging looked good on a supermarket shelf? All these factors play a role in creating a successful marketing investment, but how do you sort out their relative weights with one-dimensional data?
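To make the difficulty concrete, here is a minimal sketch of one naive approach, linear attribution, which simply splits credit equally across touchpoints. The touchpoint names and revenue figure are hypothetical, and the even split is exactly the assumption the attribution problem calls into question.

```python
# A minimal sketch of naive linear (equal-credit) attribution.
# Touchpoints and revenue are hypothetical. Splitting credit evenly is easy to
# compute, but it assumes every touchpoint mattered equally - precisely the
# assumption the attribution problem calls into question.
from collections import defaultdict

# Touchpoints recorded before a hypothetical $300 purchase
touchpoints = ["display_ad", "branded_email", "tv_campaign", "packaging_on_shelf"]
revenue = 300.0

credit = defaultdict(float)
for channel in touchpoints:
    credit[channel] += revenue / len(touchpoints)

for channel, amount in credit.items():
    print(f"{channel}: ${amount:.2f}")   # each touchpoint gets $75.00 of 'credit'
```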
The unpredictability of human behaviour.
Then you have the fact that people simply do not act rationally, or always in their own best interests, the two foundational assumptions of classical economics. They act on a range of impulses and learned behaviours that have little to do with rational economics, and everything to do with psychology. We are only just beginning to understand the impact of psychology on an individual's decision making.
Between them, these two factors make assessment of marketing effectiveness an elusive target. It is best served by a combination of data and intelligent hindsight, mixed with a high degree of qualitative sensitivity to the drivers in the market, and instinct. These characteristics are only gathered with deep experience: years down in the marketing weeds, learning by doing. They do not come from a textbook, an online course, or a few years of following instructions.