11 ways to uncover the lies in data

Data does not have an agenda, and it does not lie, but it rarely shows the whole story.

Think of the data that would be gathered and analysed after the announcement that a chunk of native forest was being opened up for logging. The botanists would have one set of data and analysis of the impact, the accountants another, the entomologists another, those concerned with native animal habitat another, and so on. None are wrong, but all are incomplete without the input of the others.

Corporate use of data does have an agenda: performance, and unfortunately, often personal advancement. Similarly, data delivered as fact by a politician has an agenda: getting elected.

The data does not have an agenda; those who use it often do.

Bias in data can be conscious as well as unconscious. Someone has to decide what data is collected, what hypotheses to test, and how it is to be used. All can be shaped to meet a predetermined outcome.

When making a major decision we all look for the data that will give us confidence in our choice.

However, we are all also familiar with the nagging feeling that the data we are looking at is nothing short of bullshit.

So how can you tell?

Here are 11 simple tests to apply.

  • Where did the data come from? Organisations, geographies, people, all make a difference.
  • Was the collection method designed by someone with a vested interest in the outcome?
  • What are the gaps in the data? These can easily be created by the manner in which questions are asked, or often, not asked.
  • What assumptions were made in assembling and analysing the data? No data survives the filtering imposed by the assumptions in the assembly and analysis processes.
  • What statistical measures have been applied? The number of initial data points, upper and lower control limits, confidence levels: all statistical tools readily available, but too often dismissed by non-statisticians and those running an agenda.
  • Be wary of creative articulation (see the sketch after this list). Percentages are regularly thrown about as ‘proof’ of something. A 50% increase in accidents in your suburb in the past year may mean there were 3 compared to 2 last year. Similarly, averages are often misleading: we expect the mean to be close to the median (the middle point in a range), but often it is not.
  • Who gains or loses from the outcome? Just look at the current political ‘debate’ in this country for ample evidence of this. There are no laws about truth in advertising for political ads, so the numbers quoted are heavily edited or, it would seem, often just made up.
  • Is the data describing just correlation, or is it truly causation? Correlation is often passed off as causation to make a case: for example, the compelling case put forward by The Economist a while ago ‘proving’ that intelligence increased with consumption of ice cream.
  • What are the alternative explanations of the conclusions articulated, and what are we not being told?
  • Is the data giving you the answer to the question being asked, or to some other question? And, how well is the question reflected in the answer?
  • Has anyone with an established perspective opposite to the outcome of the data had a critical look at it? This is often a good way of finding the holes in the collection and analysis.
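
To illustrate the ‘creative articulation’ point above, here is a minimal sketch with made-up numbers, showing how a small base inflates a percentage, and how a single outlier drags the mean away from the median:

```python
import statistics

# A '50% increase in accidents' can rest on a tiny base:
# 2 accidents last year, 3 this year.
last_year, this_year = 2, 3
increase = (this_year - last_year) / last_year
print(f"Accidents up {increase:.0%}")  # Accidents up 50%

# Hypothetical suburb incomes: one outlier drags the mean
# well away from the median.
incomes = [52_000, 55_000, 58_000, 61_000, 64_000, 1_500_000]
print(f"Mean:   ${statistics.mean(incomes):,.0f}")    # ~$298,333
print(f"Median: ${statistics.median(incomes):,.0f}")  # $59,500
```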

While statistics can be made to lie, they will also deliver transparency when you understand the basic measures. People will often tell you what they think you want or need to hear, and when it is backed by data, it becomes more credible, particularly if it confirms an already established point of view.

Finally, if it seems too good to be true, there is a fair chance that it is. Our instincts are usually pretty good, so follow them until proven otherwise.

I am by no means a data nerd, but I do believe that good data can make our collective lives better by improving decision making, and removing just a little of the bullshit sprayed at us so regularly and methodically by everyone with a cause.

Data does not lie, people using data can, and do.

The header cartoon is from David Somerville’s Random Blather blog, an extension of Hugh McLeod’s original.

How to assess the value of information

The term ‘monetisation’ is thrown around like confetti at a wedding. It almost always refers, in one way or another, to the process of squeezing money out of information of some sort. The real key to monetisation success is to identify who may have value created for them by access to, and use of, the information and the outcomes it can bring.

Think about the differences between an X-ray and a CT scan.

An X-ray is a flat, two-dimensional ‘picture’, and you only see the bones with any clarity. It is the ‘first port of call’ in a diagnosis, offering a limited view of the location and orientation of a skeletal injury. By contrast, a CT (computed tomography) scan is multidimensional. You see not just the bones but the soft tissue as well, and you can see it from a variety of perspectives. It is a far more complete picture.

This is a fine analogy for the value of information.

Financial information is just like an X-ray. It cannot tell you much beyond a flat, single-perspective analysis of the current situation, and it is incomplete. A strategic analysis of information is more like a CT scan: you can see a dramatically increased information set, and examine it from a range of perspectives. This depth of information can deliver understanding and insight about the connections and interrelationships that exist.

An X-ray capability is relatively simple and cheap, whereas a CT scan is more complex and requires a far greater commitment of resources to deliver that far more detailed picture. CT scanning equipment costs in the region of 3 times as much as an X-ray set-up, and in use delivers perhaps 100 times the radiation of an X-ray: not something to be undertaken without due consideration. A CT scan also requires far better trained staff than an X-ray, generating greater operational and fixed costs.

So it is with the information you gather and analyse in your business.

Information is the currency of success these days. Various studies identify in excess of 80% of the market valuation of listed companies as coming from intangible assets. Considering this fact, it makes sense to have an information strategy.

Leaving the IT department to develop such a strategy fails to recognise the importance of information as an essential foundation of success.

Our standard accounting processes include an asset register, on which all assets are recorded, often down to the pens and pencils in the stationery cupboard, but I have yet to see one that puts a rationally articulated value on the information held in the data files of an enterprise. Is it just because it is hard to do, or is it because there is no place for it on our balance sheets and in our statutory accounts?

There appear to me to be 4 parameters for considering the value of your data.

  • Leading indicator: a source of information about what may happen
  • Lagging indicator: a record of what has happened
  • Improvement focus: strengthening management discipline
  • Value focus: creating new value for stakeholders

Creating metrics for each of these is challenging.

Metrics are usually financial, and even then usually one-dimensional, based entirely on the costs incurred as recorded. This data is available in some form in every business, but only tells part of the story. There are opportunities to record and measure costs in other ways.

Elsewhere I have considered the 5 types of cost in every business: direct, indirect, opportunity, transaction, and shortcut costs, and noted the challenges of putting numbers to some of them.

Financial data can also be ‘fattened up’ by consideration of several other parameters:

  • The value of the information, understood as the costs that would be incurred if it was suddenly unavailable
  • What someone else might pay for it, particularly a competitor
  • The extent to which the information contributes to the bottom line

For example, these metrics could be considered in the context of the value of the customer and lead information in your CRM: how much does that information deliver to margins? These days that is often a readily available metric.
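
As a rough sketch only, with invented numbers and a deliberately simple rule (take the highest defensible figure across the three views), this is one way those parameters might be laid side by side:

```python
# Hypothetical valuation of the customer and lead data in a CRM.
# A sketch, not an accounting standard: every figure is invented.

# 1. Replacement view: costs incurred if the data suddenly vanished
#    (lost deals while rebuilding, plus the rebuild effort itself).
cost_if_unavailable = 250_000

# 2. Market view: what someone else, particularly a competitor,
#    might pay for it.
estimated_market_price = 120_000

# 3. Income view: annual margin attributable to the data, over an
#    assumed useful life.
attributed_annual_margin = 90_000
years_of_useful_life = 3
income_value = attributed_annual_margin * years_of_useful_life

# Working value: the highest defensible of the three views.
working_value = max(cost_if_unavailable,
                    estimated_market_price,
                    income_value)
print(f"Working value of CRM data: ${working_value:,}")  # $270,000
```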

The additional valuation parameters are strategic:

  • How complete is the information in delivering a picture of how the competitive environment in which you operate is evolving?
  • How does that information inform your strategic decision-making, and what would be the cost of not having it, measured against the 5 costs above?
  • How does the information articulate the key drivers of performance?
  • How well does the information contribute to the strategic outcomes being sought?

Tackling this challenge of quantifying intangibles, recognising the truth of Peter Drucker’s throwaway that ‘what gets measured gets done’, is not easy, and not cheap. But like the difference between an X-ray and a CT scan, the results are worth the effort when done well.

Header photo from: https://www.oceantomo.com/2015/03/04/2015-intangible-asset-market-value-study/. This is the second time I have used this graphic to make a crucial point about the value of intangibles in your business.

The problems with Google

Google is a wonderful tool: ask it any question, and the answer will come back, or at least a million references that may give an answer will come back.

That is terrific, except for a few flaws, minor or major depending on your mindset, such as:

Confirmation.

The certainty you get from Dr. Google tends to confirm the things you may already know; at least it confirms the path you are on, by giving you easy answers to the question you face. As kids, in PG (pre-Google) days, we had to go looking for the answer, feed our curiosity, critically review the few sources available, and in the process, stumble across other information that might just be a useful addition to the path we were on. Some would point out that Google, in delivering millions of references, does the same thing. However, we mostly only look at the first page, which best reflects what others asking similar questions have opened. Confirmation bias at work, silently, in the background.

No challenge

We do not have to work to find information, we just have to ask.

What if we do not know what to ask?

It seems to me we have lost the itch of curiosity: the urge to seek out things that are different and divergent, and to entertain a different perspective.

There is also another side to it. If you go to a library and find a book you really like, the book next to it will be like the one you love, but just a bit different; that is the way libraries are organised, by topic. Google is not. It does not necessarily give you the thing most like what you are seeking; it just gives you a lead on the things that other people have sought by asking similar questions.

Currency

Google assumes that the newest stuff is the most useful. Often this is the case, but equally often not. Increasingly, the current stuff is just Google-fodder: crap, of little value.

Dr Google removes the random.

As I get older, it seems I have become more curious. As a result I seem to collect random facts, stories, reports, and pieces of information. Sometimes they get used quickly; often they sit on the metaphorical shelf for ages until a use emerges, or they get merged with another thought at another time. Google facilitates this collection of random curiosities, but does not encourage it: everything you ask for is there when you ask, but you never know what it is that you have not asked. The beauty of having a ‘library’ of trivia is that at some point, that piece of trivia, that random fact or report, will add enormously to whatever it is you are doing. Google will never know this, so you need to collect the disconnected random facts like squirrels keeping nuts for winter.

Serendipity cannot be digital

Serendipity comes about from unexpected outcomes, things that go against the common understanding, the tenuous thread you see between two logically disconnected facts. Making these connections requires a multidimensional ‘intelligence,’ not one dependent on a logical algorithm, no matter how ‘smart’ it might be.

Google has not only become the default for the world, it is becoming the primary source, along perhaps with its digital stablemate Facebook, which is, if anything, better than Google at eliminating the instinctive drive for creativity and curiosity.

How do we encourage critical thinking when there are only two sources of information?

 

Believe what they do, not what they say.

Last week I was reminded, again, to take what people tell you with a grain of salt, and to watch closely what they do, rather than believing what they say.

I watched as the CEO of a significant business took a decision that was in direct conflict with the values he regularly espouses to staff and customers, in the interests of short term cost mitigation.

He did not seem to accept the inconsistency when it was pointed out.

In the early 70s, as a student, I did a couple of holiday stints as a door-to-door market researcher. In one project, we were banging on doors and asking which brand of cigarette was smoked (in those days, smoking was widespread). When the answer was one of a couple of premium brands, we had to persuade the respondent to show us the packets in the house, and half the time it was one of the cheaper brands.

Had we accepted what they said, rather than confirming with what they did, the research results would have been even more rubbish than they were.

Putting yourself in the shoes of a research respondent is really hard. It requires empathy, close observation, robust but sensitive questioning, and savvy choices about who you talk to if the results are to be reliable. It also offers the opportunity to gather insights into behaviour that enable better product and service design, uncovering unstated or unrecognised problems.

I hesitate to mention that we are about to go into an election campaign; the reality is we are already there, with the welter of blather, tired clichés and bullshit about to overwhelm us, again. As a community, we should really point out the truth of this post’s headline to all who want our votes.

Illustration credit: Tom Gauld from Instagram.

How to build a hierarchy of performance measures.

Corporate KPIs should be evolved as a hierarchy that measures the cause and effect relationships through an organisation, and be largely agnostic to the individual. After they are in place, you can develop the KPIs for a role to be filled, for which an individual allocated to that position has responsibility.

There are 4 levels in most organisations that I see.

Measures of sustainability.

These measures are connected to the purpose of the enterprise; they answer the question: how do you know if you are successful? Sustainability is used in its broadest sense: commercial, cultural, and ecological. In effect, these measures are harbingers of future success as well as records of current performance. Most organisational KPIs that I see are all about financial success, which is critical, but is an outcome of success in other areas, not in itself a driver of success.

Measures of strategic success.

These measures are directly related to the strategic priorities set. As strategy is about choices, the performance measures should reflect the quality of the choices made, and progress towards the agreed objective. Some will be financial (ROI, shareholder value), but the most effective ones will be about customer churn, geographic footprint, innovation, and customer satisfaction, reflecting the strategic resource allocation decisions made to prioritise activities.

Process measures.

Process measures are those tactical measures that reflect the performance of the processes in the business that deliver value to customers, and feed the measures of strategic success. These will vary widely depending on the type of business, but logically they include things like customer satisfaction, delivery performance, lead conversion, revenue, customer profitability, and so on. They tend to be the measures most appropriately reviewed on a shorter time scale than those above.

Operational KPIs.

Operational measures should deliver a picture of how the individual cogs in the wheel are operating. They should be directed at the items that are at the root of process productivity and efficiency: measures such as machine availability, lost time injuries, rework, inventory turns, daily output to plan, and so on.

Together these measures should offer a complete picture of the way the separate parts of the organisation mesh together to deliver the enterprise purpose, the ‘Why’ you are here.

Ensuring measures are transparent across and through the organisation gives them ‘life’ beyond the dry review process.

Financial measures play a role at each level. However, because financial information is generally easier to gather and more commonly understood, financial measures have become the default, and for many the only measures used, to their detriment. They also fail the test of telling you why an outcome occurred; they just tell you that it did.

Mapping the cause and effect chains summarised as KPIs is always a useful exercise. Many people learn and understand visually, particularly when they have a role in the process mapping, and such an exercise enables the connection of KPIs throughout an enterprise to be made. Experience shows it is a great way of generating the strategic alignment and buy-in so hard to find in most businesses.
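
As a starting point for such a mapping exercise, here is a minimal sketch in code, with illustrative measure names not drawn from any real business, showing one way to represent the cause and effect links between the four levels:

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str
    level: str  # operational / process / strategic / sustainability
    feeds: list["Measure"] = field(default_factory=list)  # cause -> effect

# A single illustrative cause-and-effect chain, bottom up.
availability = Measure("Machine availability", "operational")
delivery = Measure("Delivery in full, on time", "process")
churn = Measure("Customer churn", "strategic")
margin = Measure("Sustainable margin", "sustainability")

availability.feeds.append(delivery)  # reliable machines -> reliable delivery
delivery.feeds.append(churn)         # reliable delivery -> lower churn
churn.feeds.append(margin)           # lower churn -> sustainable margin

def trace(measure: Measure, depth: int = 0) -> None:
    """Print the chain from a root cause up to the enterprise outcome."""
    print("  " * depth + f"{measure.name} ({measure.level})")
    for downstream in measure.feeds:
        trace(downstream, depth + 1)

trace(availability)
```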

What do marketing to supermarkets and pharmaceutical research have in common?

Quantifying the ROI of marketing investments remains the single most challenging task of marketers. While marketing costs are seen as a variable expense, stuck in the monthly P&L, marketing will remain hostage to the whims of expediency, corporate politics, and short term thinking. The real KPI of marketing investment should be the sustainable margin delivered over a considerable time, just as it would be for an investment in machinery.

The obvious problem is that you can measure the output and productivity improvements associated with a piece of machinery; the numbers become available with use, although they are all in the past. Marketing investment is all about influencing the future, and measurement, even with the benefit of hindsight, is very hard, and useful only as a learning tool.

Is there something we marketers can learn from elsewhere?

The Kaplan-Meier curve is a basic concept used all the time by medical and pharmaceutical researchers. For example, if they are testing a new drug for, say, patients with diagnosed terminal prostate cancer, they plot on a daily curve the lifespan of those on the test drug, and those on the placebo.

Assuming there are 100 patients in the trial, at day 1 all 100 are alive; then you plot the numbers who remain alive daily with, and without, the drug. If the plot line of those on the drug stays above the line of those without, you can infer the outcome of longer life, and you have some numbers to support the conclusion. If the line of those on the drug dips below the placebo line, you are killing patients. Lines that stay together indicate the drug has no impact.

Simple idea, widely used in medical research.
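
To make the mechanics concrete, here is a minimal hand-rolled sketch of the Kaplan-Meier product-limit estimate, using invented trial data rather than a statistics library:

```python
import numpy as np

def kaplan_meier(durations, events):
    """Kaplan-Meier survival estimate (product-limit method).

    durations: days to death, or to end of follow-up, per patient
    events:    1 if the patient died, 0 if censored (still alive)
    """
    durations = np.asarray(durations)
    events = np.asarray(events)
    survival = 1.0
    curve = []
    for t in np.sort(np.unique(durations[events == 1])):
        at_risk = np.sum(durations >= t)               # still in trial at t
        deaths = np.sum((durations == t) & (events == 1))
        survival *= 1 - deaths / at_risk               # product-limit step
        curve.append((int(t), round(survival, 3)))
    return curve

# Invented 5-patient arms: days survived, and whether death was observed.
drug = kaplan_meier([90, 200, 365, 365, 365], [1, 1, 0, 0, 0])
placebo = kaplan_meier([60, 120, 150, 210, 300], [1, 1, 1, 1, 1])
print("Drug arm:   ", drug)     # [(90, 0.8), (200, 0.6)]
print("Placebo arm:", placebo)  # falls to (300, 0.0): all patients died
```

The drug arm’s curve staying above the placebo arm’s is the signal of longer life described above.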

For years I have watched suppliers to supermarkets being screwed by those supermarkets, progressively shifting funds away from advertising aimed at brand building, which delivers margins over time to the brand owners and, despite protestations to the contrary, indirectly to the retailers. This reallocation of advertising money into in-store promotional activity, which feeds retailers’ working capital, margins, and profitability, has been a huge mistake.

It has seen the demise of some great brands. To be fair, however, consumers have benefitted from cheaper prices, at the expense of choice.

A few weeks ago, the recently merged Kraft Heinz business announced a disastrous profit result. This came about as the brand advertising that gave consumers confidence in the brands was progressively redirected to the price promotion that is the primary competitive tool of supermarkets. Meanwhile, those same retailers have introduced house brands that look very similar, and that trade off the value proposition developed by Heinz and Kraft over many years.

The same thing has happened in Australia, perhaps more so given the concentration of supermarket retailing.

I was around as a junior product manager in the early days of Meadow Lea brand building, at what was then Vegetable Oils Pty Ltd, a long gone business, swallowed up by corporate stupidity.

‘You ought to be congratulated’ is one of the great propositions of Australian brand building. In a hugely crowded margarine market, Meadow Lea held, at its height, a 23% market share at premium prices, four times that of its closest rival. This was a direct outcome of a good product, great advertising, and a brand that delivered.

I had a look in a supermarket yesterday, and had trouble finding anything labelled Meadow Lea.

What happened?

Retailer power happened, combined with a lack of understanding by retailers of the power of great brand-building consumer propositions. Meadow Lea was squeezed by retailers for more and more promotional dollars that ended up being funded by reductions in brand advertising and building activity, with the end result that the brand in effect no longer exists.

It has become nothing more than a label!

I wonder where the next market-building initiative will come from.

Certainly not from the manufacturers, as they know that as soon as they create a market, the retailers will undermine it with cheap versions, so there is no reward for the risks involved in the necessary innovation.

Back to where I started. I do not have the data for this, but I bet that applying a Kaplan-Meier analysis to the margin delivered by Meadow Lea over time, both to the current owners of the brand and to the retailers, would show that the reallocation of brand-building activity to the low prices demanded by retailers has hurt everybody concerned, including consumers.

Image credit: Wikipedia