The importance of what we cannot measure

Psychology drives our behaviour, yet we struggle miserably to forecast its impact. As a result, we cannot predict behaviour with any real accuracy, except with the benefit of hindsight, or across an average, and then only if we ask the right questions.

There are five important psychological factors that profoundly impact the sorts of decisions, big and small, we make every day.

Status, Certainty, Autonomy, Relatedness, Fairness.

Psychologists put them together into the ‘SCARF’ model as they set about understanding the drivers of behaviour, which centre on ‘away’ movements to minimise threats, and ‘towards’ movements to maximise rewards.

Status. We all know it is important. It is how Mercedes manages to squeeze four or five times as much money out of buyers as a perfectly adequate, reliable, bells-and-whistles Korean or Chinese alternative would. It is why people pay tens of thousands for a watch, assume crushing debt to put a luxury car in the drive, and why Louis Vuitton is the world’s most valuable luxury brand.

Certainty. Uber nailed this one. The time we spend waiting for a taxi feels different to the time we spend waiting for an Uber, even when the Uber wait is longer. That is because we wait with certainty: we know when the Uber will arrive, we know where it is right now, and we can walk out of the building as it pulls up, which adds a feeling of status to the equation. By contrast, call a taxi and you wait, uncertain when it will turn up.

Autonomy. We all like to feel we are making our own decisions, even when we are not. We love that feeling of freedom, even when it is an illusion, or plays out inside a tiny arena of personal space.

Relatedness. Human beings are social animals: we like to feel that others are aware of us, and concerned with our needs, views, and ideas. It is like being in a book club: there are psychological rewards to being in a group that values your presence. We also need the group for protection, as it is the outliers that become a lion’s breakfast.

Fairness. We instantly rate things on a fairness scale, and we like to be seen as fair, even when we are diddling the books. Is it fair that the bloke next door who does the same job gets paid $20k more?

None of these things appear in economic models.

It was Einstein (amongst others) who said, ‘not everything that matters can be measured, and not everything that can be measured matters’.

The single biggest challenge in marketing analytics?

Almost every so-called marketing guru, yours truly included, will bang on about calculating an ROI on your investment in marketing.

Marketing, like any other investment, should seek a return, and there should be accountability for those numbers.

Almost nobody will disagree.

The challenge is how you do it.

How do you attribute an outcome to any specific activity or individually weighted group of activities?

On paper the formula is simple: the sales, or margin, returned by an activity, measured against the amount spent on it.

Pretty easy in the case of a piece of machinery; another matter entirely for anything beyond a specific tactical action, such as an ad on Facebook or Google, where the response can be counted.
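
For that countable case the sums are trivial. Here is a minimal sketch in Python, with entirely hypothetical figures, of the calculation:

```python
# ROI for a directly trackable campaign. All figures are hypothetical.
spend = 5_000.0              # cost of the Facebook/Google campaign
attributed_sales = 18_000.0  # revenue from responses we could actually count
margin_rate = 0.40           # gross margin earned on those sales

margin = attributed_sales * margin_rate
roi = (margin - spend) / spend
print(f"Margin returned: ${margin:,.0f}, ROI: {roi:.0%}")  # ROI: 44%
```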

In the case of broader marketing investment, how do you allocate the sales outcome to that activity?

When a sale is generated, was it because of the activity we are calculating for, or was it the phone call from the sales rep, the attractive copy on the website, the clean delivery truck, or the referral from some other satisfied customer?

How can we tell?
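
We cannot, at least not precisely. The usual workaround is an attribution ‘rule’, and the rule you pick changes the answer. A minimal sketch, using a hypothetical customer journey, of how two common heuristics disagree:

```python
# Two common attribution heuristics applied to one hypothetical sale.
# Neither is 'correct'; they are conventions, and they disagree.
journey = ["facebook_ad", "sales_rep_call", "website_copy", "referral"]
revenue = 1_000.0

# Last-touch: the final touchpoint gets all the credit
last_touch = {t: 0.0 for t in journey}
last_touch[journey[-1]] = revenue

# Linear: credit is spread evenly across every touchpoint
linear = {t: revenue / len(journey) for t in journey}

for t in journey:
    print(f"{t}: last-touch ${last_touch[t]:.0f}, linear ${linear[t]:.0f}")
```

Swap the rule and the apparent ‘ROI’ of each channel changes, while the underlying reality does not.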

When some analytics nerd cracks the code on attribution, they will become history’s fastest billionaire.

So, when some fast talker promises that world market domination will result from investing in their new ‘thing’, run as fast as you can, unless they can prove they are the one who cracked the attribution code, which I do not expect any time soon.

A marketer’s explanation of 6 sigma

6 sigma is a statistical toolbox designed to assist process improvement. It was originally developed by Motorola in the 1980s as they struggled with quality problems in their booming, but now extinct, mobile phone business. The tools seek to identify and remove the causes of variability, and of the resulting defects, in manufacturing processes. Statistics are used to identify problems, formulate remedial action, then track the impact of improvements as they are implemented.

In simple terms, 6 sigma compliance means there are fewer than 3.4 defects per million opportunities for a defect to occur. This can apply to a specific machine or action, or to a whole production line. Clearly the latter creates many more opportunities for error, and is therefore harder to keep under the 3.4 defects/million opportunities benchmark.
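
The arithmetic behind that benchmark is straightforward. A minimal Python sketch, with hypothetical numbers, converting a defect count into defects per million opportunities (DPMO) and the corresponding sigma level (including the conventional 1.5-sigma shift):

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Short-term sigma level, including the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# Hypothetical run: 7 defects across 2,000 units, 10 opportunities per unit
d = dpmo(defects=7, units=2_000, opportunities_per_unit=10)
print(f"DPMO: {d:.0f}, sigma level: {sigma_level(d):.2f}")  # DPMO: 350, ~4.9 sigma
```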

Improvement projects are run to a proven statistical ‘recipe’ going by the acronym of DMAIC.

Define. Using statistics, define the problem and the deliverables of the project.

Measure. By collecting data, you measure the ‘current state’ of the process or activity. This is the starting point from which the improvements will be measured.

Analyse. By analysing the data from each point of input, usually by experimentation, you isolate the cause-and-effect chains in the activity. This identifies the root causes of the variation being investigated.

Improve. Removal of the causes of variation will result in improved performance. The improvements require that changes be made and that the improved processes become the Standard Operating Procedures (SOP).

Control. Control is the continuing monitoring of the improved process to ensure that there is no ‘back-sliding’.

When engaged in a 6 sigma type project, I like to combine it with the SMART methodology in each component of the improvement process. This enables proactive project management of the components of the process.

6 sigma is often confused or conflated with ‘Lean’ methodology. They use a similar toolset while coming at problems from different perspectives: Lean attacks waste, while 6 sigma attacks variation. In my view, and some disagree, they are highly complementary.

A marketer’s explanation of ‘Box Score’

To improve performance, the key challenge is to identify the drivers of outcomes in real time, and to enable the changes that will improve performance.

The ‘box score’ is a term hijacked from the recording of individual performances in team sports by a few accountants seeking to capture real-time operational data. The term originated with baseball, but all team sports have a system that in some way records individual performances, which taken together are the source of team performance.

In a commercial operational context, the collection of metrics plays the same role: capturing the real performance of one part of a process, then adding through to the totals for the whole ‘team’. It is a more accurate and responsive way of tracking the costs incurred in an operational situation, specifically a manufacturing process, than the favoured standard costing system.

Typically, standard cost systems, while better than nothing, fail to reflect the actual costs incurred by a process. They are ‘lazy’, displaying the averages of past calculations, and as we know, averages hide all sorts of misdemeanours, errors, and potentially valuable outliers.

Sometimes these systems also add a component to the cost of each unit of production noted as ‘overhead absorption’. This compounds the inaccuracy and inflexibility of the standard costing system, making it even more misleading, and resulting in poor data upon which to base decisions.

Accounting has only two functions. The first is reporting to outside stakeholders. That has become a formulaic process, with a template and rules about how things must be treated, to ensure you are always able to compare apples with apples across industries.

The second function is to provide the information necessary to improve the quality of management decisions. The two are not connected except at the base level: the originating data.

This is where the ‘box score’ approach adds huge value: it captures the actual cost of a process.

A well-thought-out standard cost of goods sold (COGS) calculation typically includes calculations for the cost of packaging, the materials used in manufacturing, and the labour consumed by the process. The calculation assumes standards for all three, then throws out variances from the standard to be investigated. Standards would typically be updated regularly to accommodate variances that appear intractable. Changes such as labour rates, machine throughput, and price changes in materials should be included in updated standards, but often they are not, and when they are, it is after the fact, and as averages.

A ‘box score’ by contrast captures the actual cost in real time, or close to it, so that more informed management decisions can be made.
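
To make the contrast concrete, here is a minimal sketch, with hypothetical figures, comparing a production run’s actual per-unit costs against the standards a conventional system would have applied:

```python
# Standard cost vs actual cost captured from the line. Hypothetical figures.
standard = {"materials": 4.20, "labour": 1.80, "packaging": 0.55}  # $ per unit

# Actuals captured in (near) real time for one production run
run = {"units": 10_000, "materials": 44_500.0, "labour": 17_200.0, "packaging": 5_900.0}

for item in ("materials", "labour", "packaging"):
    actual_per_unit = run[item] / run["units"]
    variance = actual_per_unit - standard[item]
    flag = "over" if variance > 0 else "under"
    print(f"{item}: actual ${actual_per_unit:.3f}/unit, "
          f"{flag} standard by ${abs(variance):.3f}/unit")
```

A standard cost system would surface these variances weeks later, and as averages; captured per run, they can be acted on while the run is still underway.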

Thirty years ago, I ran an experiment in a factory I was managing, the objective of which was to identify the exact cost of the products running through a line. To collect the data, a host of people needed to stand around with clipboards, stopwatches, and calculators. At the time it was called Activity Based Costing (ABC). The result was good, but the improvements the information generated did not cover the cost of the investment needed to gather it.

These days, with the digital tools available, there is little excuse not to make the small investment required to measure the real throughput and resources consumed, and so obtain better information for informed decisions. The options for collecting real-time data are numerous and cheap, and in modern machinery they are simply part of the control mechanisms. These devices can collect data and dump it into anything from Excel to advanced SCADA systems, enabling the data to be analysed and investigated, and the outcomes recorded and leveraged for improvement.

Managing operations using the actual costs captured and reflected in a ‘Box Score’ manner enables more accurate and immediate decisions to be taken at the point of causation. It is no different to a cricket captain taking a bowler off because the batsman is belting him out of the park. When you can see what is happening in real time, you can do something about it.

Header: courtesy Wikipedia. The scorecard in the header is from day 1 of the 1994 Ashes Test in Brisbane. It progressively captures the day’s play as it happened: a ‘box score’.

How your data is giving you the wrong answers.

The old adage that you can find data to support almost any proposition, no matter how wild, has never been truer than it is today.

We have the sight of politicians on the one hand telling us the science is wrong when it reflects the looming catastrophe of climate change, while at the same time lauding the science behind the world’s response to the Covid pandemic, which delivered new vaccines in record time.

The contradiction is extreme, however, there is always data to ‘prove’ whatever point is required.

Following are some of the common ways data is manipulated to mislead, misinform, and bamboozle the unwary.

  • Confusing correlation with causation. This is very common, and I have written about it on several occasions. Just because the graphs of ice cream sales and shark attacks mirror each other does not mean one caused the other.
  • The Cobra effect. This refers to the unintended negative consequences that arise from an incentive designed to deliver a benefit. The name comes from an effort by the British Raj to reduce the number of cobras, and the associated deaths, in Delhi by offering a bounty on each dead cobra. Entrepreneurial Indians started breeding them for the bounty. The identical thing happened when the French wanted to reduce the rat population of French Indochina: they put a bounty on rats’ tails, and enterprising Vietnamese caught rats, took their tails, and released them to breed further.
  • Cherry Picking. Finding results, no matter how obscure, that support your position, and excluding any data that might point out the error. This is the favourite political ploy, having a great run currently.
  • Sampling bias. Basing conclusions on data drawn from an unrepresentative sample. It is often challenging to select a sample that delivers reliable conclusions, and often much too easy to select one that delivers a predetermined outcome. Again, a favoured political strategy.
  • Misunderstanding probability. Often called the gambler’s fallacy, this leads you to conclude that after a run of five heads in a two-up game, the next throw must be tails. Each throw is a discrete 50/50 probability, no matter what the previous throws have been. Poker machine venues rely for their profits on players’ increasing belief that the ‘next one’ will be the ‘jackpot’ after a run of ‘bad ones’.
  • The Hawthorne effect. The name comes from a series of experiments in the 1920s at the Hawthorne Works, a US factory producing electrical relays. Lighting levels were altered to observe the impact on worker productivity, and the researchers concluded that productivity improved when lighting was increased, although it later dropped back. The lighting explanation was later disproved, when psychologists recognised that people’s behaviour changes when they are, or believe they are, being observed. This can be a nasty trap for the inexperienced researcher conducting qualitative research.
  • Gerrymandering. Normally this refers to the manipulation of geographic boundaries, usually electoral ones. It can equally describe the boundaries set around which source data is included in a sample: ‘fitting’ the data to deliver the desired outcome. The term originated in Boston in 1812, when Governor Elbridge Gerry signed a bill creating a highly partisan district said to resemble a salamander. The National Party held government in QLD for 32 years until 1989 on the back of a massive gerrymander in their favour, perhaps better remembered as a ‘Bjelkemander’.
  • Publication bias. Interesting or somehow sensational research is more likely to be published and shared than more mundane studies. In this day of social media, this becomes compounded by the ‘echo chamber’ of social platforms.
  • Simpson’s paradox. This describes the situation where a trend evident in several data sets is eliminated or reversed when the data is combined. An example might be the current debate about university admissions favouring males over females: take subsets of the data for different faculties and this may be true, but combine the faculties and the numbers will be virtually even, perhaps even favouring females. This was demonstrated in a study of admissions to UC Berkeley in 1973, and is a regular feature of misleading political commentary. A worked example follows this list.
  • McNamara fallacy. This comes about when reliance is placed solely on data in extraordinarily complex situations, ignoring the ‘big picture’ and assuming rationality will prevail. The name refers to Robert McNamara, US Secretary of Defence under Presidents Kennedy and Johnson, whose faith in the numbers unintentionally helped lead the US into the disaster that was Vietnam, a mistake he later acknowledged.
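
To see Simpson’s paradox at work, here is a minimal sketch with hypothetical admission numbers: each faculty admits women at an equal or higher rate, yet the combined figures appear to favour men, purely because of where the applications are concentrated.

```python
# Simpson's paradox with hypothetical admission numbers:
# (applicants, admitted) per gender, per faculty.
faculties = {
    "Engineering": {"men": (900, 540), "women": (100, 65)},   # 60% vs 65%
    "Arts":        {"men": (100, 15),  "women": (900, 160)},  # 15% vs 18%
}

totals = {"men": [0, 0], "women": [0, 0]}
for name, data in faculties.items():
    for g in ("men", "women"):
        apps, admitted = data[g]
        totals[g][0] += apps
        totals[g][1] += admitted
        print(f"{name}, {g}: {admitted / apps:.0%} admitted")

for g in ("men", "women"):
    apps, admitted = totals[g]
    print(f"Overall, {g}: {admitted / apps:.0%} admitted")  # aggregate favours men
```

Within each faculty women do as well or better, yet the aggregate tells the opposite story: the paradox in miniature.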

Data is an essential ingredient in making your case, as numbers convey rationality and truth. When listening to a case being made to you, though, be very careful: numbers have an uncanny ability to lie. To protect yourself, test what you are being told against at least some of these ten traps.

Header illustration credit: Smithsonian. The drawing is of the electoral district created by Massachusetts Governor Elbridge Gerry in 1812 to ‘steal’ an election.

The digital unicorns’ growth secret

Question: What has enabled the geometric growth rates of Facebook, Google, Amazon, Atlassian, and other digital unicorns?

Answer: Wide and deep feedback from the market enabling them to aggressively focus resources on areas that deliver the best returns.

Technology is only the tool that has enabled this unprecedented level of feedback in real time. It is the feedback itself that has been the driver of growth.

It has always been so.

Finding ways to build an understanding of the drivers of superior performance has forever been the goal of intelligent marketers, and of management more generally. The experts in the pre-digital age were the direct response advertisers, who could determine quickly which version of a magazine or TV ad caused the phones to ring and the coupons to be redeemed. Post-digital, it is those who collate and analyse the response in just the same way, except it can now be done in real time.

They have become the masters of absorbing market and customer feedback, then being able to evolve rapidly and continuously by leveraging that knowledge on an ongoing basis.

Amazon started off as a bookseller; the plan was never to become the master of retail. That mastery evolved as they quickly noted what worked and doubled down on it, continuously. Meanwhile, they were prepared to invest in the adjacencies that emerged, several of which, such as AWS, have become monster businesses.

That process continues, even as Jeff Bezos invests in blue sky projects like satellite internet, drone delivery, electric cars, and space vehicles.

The hardest part is building the initial momentum. Once you have it, that momentum will drive other ‘flywheels’ becoming a virtuous cycle that is almost self-perpetuating.

Header credit: Scribbled ‘Flywheel’ diagram by Jeff Bezos on a restaurant napkin in 2001. It is driven by input metrics, specifically market feedback.