11 research traps novice marketers stumble into, regularly.

The implication of the word ‘research’ is that you are setting out to understand something. All too often over the years, I have observed situations where that is not the case.

Market research can be a money trap, consuming resources with little or no payback. It can also be a huge capability to be leveraged for great benefit when done well.

The challenge is that it draws on a set of interrelated disciplines, from statistics, psychology, and behavioural economics to science, and therefore requires a wide breadth of skill and acquired wisdom to be useful.

Doing commercially productive market research is a bit like learning to swim. No matter how much you read about it, study wise texts, and observe others, until you get into the pool and immerse yourself, you will never really understand it.

Some things are relatively easy to research. Usually they are adverse outcomes that have happened, and have been quantified. The research is aimed at understanding the drivers of those adverse outcomes. More challenging is research that seeks to put a shape around the future. If this happens, what then?

Most material published on the topic is about the techniques, the templates to use. They are very useful, but fail to accommodate the realities that intrude in real commercial situations and impact the research outcome.

Following are some of the hard-won lessons from doing marketing and market research over the last 40 years. The tools have changed dramatically in the last decade; the principles remain unchanged.

Not understanding the ‘scientific method’.

Most are familiar with ‘the scientific method’: identify a problem, form a hypothesis, test that hypothesis, adjust the hypothesis based on the results, rinse and repeat. However, most do not recognise that the foundation of the scientific method is setting out to disprove an idea. This objective to disprove a proposition ensures that all relevant information is made available: all contrary data, opinions, and untested ideas are brought to the table for examination. It often happens that information that may be relevant is not considered. Ego, confirmation bias, existing standard procedures, and plain lack of critical thinking cloud the process. Over the years I have seen piles of research that set out to prove a theory, and did so by, usually unconsciously, excluding data that might not confirm the proposition.
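
The discipline of trying to disprove, rather than confirm, can be made concrete with a simple significance test. Below is a minimal sketch in Python of a permutation test on invented A/B data (the shopper counts and conversion figures are hypothetical, purely for illustration): we assume the null hypothesis that two pack designs perform identically, then see how hard the data makes that position to sustain.

```python
import random

# Hypothetical A/B data: did a new pack design lift conversions?
# The scientific method asks us to try to DISPROVE the lift, i.e.
# to test the null hypothesis that both designs perform the same.
control = [1] * 30 + [0] * 170   # 200 shoppers, 30 bought (15%)
variant = [1] * 42 + [0] * 158   # 200 shoppers, 42 bought (21%)

observed_diff = sum(variant) / len(variant) - sum(control) / len(control)

# Permutation test: if the designs were truly identical, the group
# labels would be arbitrary, so shuffle them repeatedly and count how
# often chance alone produces a lift at least as large as observed.
random.seed(42)
pooled = control + variant
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    if sum(pooled[200:]) / 200 - sum(pooled[:200]) / 200 >= observed_diff:
        extreme += 1

p_value = extreme / trials
print(f"observed lift: {observed_diff:.3f}, one-sided p ~ {p_value:.3f}")
```

A small p-value means chance alone rarely produces a lift this big, so the ‘no difference’ position becomes hard to hold; a large one means you have failed to rule chance out, however attractive the new design looked in the meeting room.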

Failure to identify the problem/opportunity.

Useful research depends on providing answers to problems, or offering insight into the scale and location of opportunities. Without a clear objective for the research, you cannot reasonably expect any value to be delivered.

Asking poor questions.

Not all questions are created equal. Asking good questions implies that enough work has been done to identify what is, and what is not, a good question. Also important is the manner in which the questions are asked. It is easy to generate different responses with seemingly subtle variations in wording. E.g. ‘How big do you like the fruit pieces to be in your brand of yoghurt?’ This implies that the fruit in yoghurt is in pieces, and ignores the possibility that the fruit may be added as a puree. Those who prefer a homogeneous product using puree are thus precluded from giving an accurate response. Such a question would be relevant to a marketer of fruited yoghurt seeking a point of differentiation, which would influence the choice of ingredients and processing equipment.

Less than rigorous & neutral data collection & analysis.

We all know numbers can lie; we see it every day. Numbers can be used to support any proposition you choose when managed to that end. The absence of rigour in research methodology and analysis will lead to flawed conclusions every time.

Not knowing what you will do with the outcomes

In the absence of a clear use for the research, why do it? The answer is usually found amongst ego, the seeking of validation for a currently expressed position, or a crutch to avoid making a decision. How often have we heard the phrase: ‘More research is required’?

Selective use of results.

Selective use of research outcomes is standard practice in many places. The parts of the research that support a proposition are used in the presentation of a position, and any parts that do not are ignored. You see it all the time in the political discourse in this country: politicians of differing parties taking the same research reports and claiming opposite conclusions. Exactly the same process exists in corporate bureaucracies.

Lack of understanding of the techniques

You do not have to be a statistician to understand the outcomes of data analysis. However, you do need to understand what the terms mean, and the implications they carry. This applies from sampling techniques to the tools of statistical analysis and the presentation of results. You must understand the principles sufficiently well to be able to ask informed questions, and to recognise gobbledygook when it comes back to you.

Not considering Anthropology & Context

Anthropology might seem a bit misplaced in market research, as it is the study of behaviour in varying cultural settings. However, consider how different the answer to a question about your work might be if asked while sitting at your desk absorbed by a task, compared to when asked the same question while on holiday. Same question, different context.

These days we are often allocated to teams at work that are set up to solve problems, generate ideas, or just manage workflow. How different are our reactions inside those groups from those in the groups we choose to belong to outside the work context, and how differently do we behave?

Conducting research in the absence of such considerations can generate misleading outcomes. E.g. researching a new piece of packaging around a group discussion table will evoke responses, and a conclusion. How different might the reactions of those same people be when confronted by the new pack while shopping in a supermarket?

Failure to understand the drivers of Behaviour

Psychology plays a huge role in the development and reporting of research. Our brains are hard wired to reduce cognitive load, so we can be easily tempted to accept a conclusion not supported by the research. It is relatively easy to persuade others of the veracity of a conclusion simply by the manner in which it is presented. E.g. which milk is better for you: (a) one that contains 3% fat, or (b) one that is 97% fat free? They are identical products, but in research, a significant majority will answer (b): 97% fat free.

Similarly, in a qualitative group discussion, a proposition seemingly supported by most around the table can gather overwhelming support, irrespective of its accuracy. This outcome has been repeated endlessly in first-year psychology experiments based on Solomon Asch’s 1951 experiment examining the power of a group to influence the expressed opinion of an individual.

What people say they do and what they actually do can be very different.

When you ask questions, they are answered from within the existing frame of reference of those being questioned. Their ‘mental models’ dominate how they see things. Henry Ford was right when he quipped that he would not consult customers on what they wanted, because he already knew: a faster horse. Steve Jobs expressed exactly the same opinion in different words on several occasions, and was also proven correct.

Too much research is aimed at connecting the future dots to give a sense of certainty about the future, just to make people feel more comfortable. If we could tell the future accurately, we would all be at the local casino for a few nights until we got banned for winning too much.

Respecting the status quo too much

We humans are keen to retain the status quo, simply because it has been proven to work, and change involves risk. We are hard wired to avoid risk, a function of evolutionary psychology, when taking risks often meant you became breakfast for something nasty. The promise of a reward must be many times stronger than the downside of a behaviour before most of us are prepared to entertain the risk.

Presenting a research finding that is inconsistent with the well-known view of the Managing Director is a risky undertaking that is often avoided. This is commonly called a HiPPO (Highest Paid Person’s Opinion) and is pervasive. It is particularly challenging when the person concerned (often a bloke) is repeating the opinion of someone else. In consumer products, this is often his partner.

Poor presentation of results & Conclusions.

The errors I have seen in presentations are myriad. However, the worst are:

  • Lack of clarity and simplicity in the conclusions, which limits useability.
  • They do not answer the question. Generally this is because the question was ambiguous, unnecessary, or stated a proposition someone wanted verified.
  • Death by PowerPoint.

 

Every research project can be placed somewhere on the matrix in the header. The further right and higher you go, the greater the degree of uncertainty involved. In the bottom left quadrant, you are seeking answers that are quantifiable: things that have happened, that you are seeking to understand. The top right quadrant is the future, and contains things we do not know much, if anything, about. Often we do not even see them. Research that puts numbers against hypotheses that fall into this quadrant should not be believed. At best it is an estimate of a probability; at worst, just a WAG (Wild Arsed Guess). What is important in these circumstances is that you understand the risks. Remember that old cliché: ‘plan for the worst, hope for the best’.

 

 

A marketer’s explanation of ‘normalising’ your P&L.

You will not hear the term ‘normalising’ the P&L very often. When you do, it is often an indication that the business is in a frame of mind open to change.

It is a common starting point of valuing a business, a process that has two basic buckets:

  • Financial value. This is where any valuation process will start, with the numbers.
  • Strategic value. Far more qualitative than the numbers, a potential buyer will set out to put a value on such things as market share, customer profiles, geographic location, cultural fit, and so on.

Valuing a business is a complex exercise, particularly valuing the contents of the ‘strategic bucket’.

Creating a financial value is much better understood, and almost always starts at the same place:   EBITDA. Earnings Before Interest, Tax, Depreciation, and Amortisation.

EBITDA is a construction from the Profit and Loss account, which reflects the trading results. Usually the P&L is completed on a monthly basis, and so long as the classification of expenses remains consistent, it can be used for comparisons over time to give a good picture of trends.

However, the P&L can also be the repository of all sorts of costs and activities that bear little relationship to the competitive trading health that determines the value of a business. Therefore, an exercise to arrive at a value will seek to remove or add back items so that the P&L reflects that trading health more accurately. The usual term is to ‘normalise’ the P&L.

This is particularly relevant in the sale process of a private company, which is less subject to the rigours of governance that apply to listed companies with professional rather than family management.

The common items I have seen ‘normalised’ are:

Related party revenue or expenses. Purchases from, or sales to, another business related in some way to the one being investigated, that are above or below market value. A common practice is for the owner of a private business to have their superannuation fund own the premises from which the business operates. The premises are then leased back to the operating business at a rate not reflective of competitive market value.

Owner bonuses and benefits. Often the owner of a private business will pay themselves and family members more than the market value of their contribution to the business. It also works in reverse: owners are sometimes the worst paid staff members, working longer hours than anyone else, just to keep the wheels turning over. These anomalies need to be ‘normalised’.

Support of redundant assets. Every business has redundant assets that would be jettisoned by a new owner. This stretches from old inventory still carried on the books, to premises not utilised, to the country retreat used occasionally for a sales conference, but usually for the summer holidays of the owner. These do not realistically impact the performance of the business, and a new owner, unencumbered by the past, and by costs not associated with the trading position of the business, will remove them from the P&L.

Asset and expense recognition. Treating an expense as an asset, ‘capitalising’ it, is a common practice that will boost short term profitability by moving items from the P&L to the balance sheet. While this practice is subject to the scrutiny of tax and accounting rules and independent audit, it is pretty common, particularly in the treatment of repairs and maintenance. As with many items, the accounting treatment can be used both ways to ‘manage’ short term profitability.

One time costs. Items such as litigation, insurance claim recoveries, one-off professional fees, even charitable donations,  that are not a normal part of trading operations need to be identified and ‘normalised’ to build the picture of repeatable trading outcomes.

Inventories. Every business has inventories; for many it is a significant item. Manufacturing businesses hold physical inventories of raw materials, work in progress, and finished goods, while service providers have projects in various stages of completion. The method of valuing inventories is subject to all sorts of shenanigans, and the amount of inventory to mismanagement, sloppy processes, and a host of other curses. Rigorous and consistent inventory valuation is a vital part of understanding the working capital needs of a business, and it is often the most contested piece in the valuation puzzle.
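
Once the items above have been identified, the arithmetic of ‘normalising’ is straightforward: start with reported EBITDA and add back or deduct each adjustment. Below is a minimal sketch in Python; every figure and adjustment is invented purely for illustration, each corresponding to one of the common items discussed above.

```python
# Hypothetical figures for a small private business, in $'000.
# All numbers are invented for illustration only.
reported_ebitda = 850

adjustments = {
    # Rent paid to the owner's super fund above market: add back the excess.
    "related-party rent above market": +120,
    # Owner paid below a market salary for the role: deduct the gap.
    "owner salary below market rate": -90,
    # Holding costs of redundant assets a buyer would jettison.
    "redundant asset holding costs": +45,
    # One-off litigation cost, not part of repeatable trading.
    "one-time litigation cost": +60,
    # Repairs capitalised to the balance sheet that belong in the P&L.
    "capitalised repairs expensed back": -35,
}

normalised_ebitda = reported_ebitda + sum(adjustments.values())

print(f"Reported EBITDA:   {reported_ebitda}")
for item, amount in adjustments.items():
    print(f"  {item:<35} {amount:+d}")
print(f"Normalised EBITDA: {normalised_ebitda}")
```

The signs matter: an owner paying themselves too little flatters the reported result, so that adjustment reduces normalised EBITDA, while excess related-party rent depresses the result and is added back.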

When you have all that out of the way, you should be able to calculate a reliable figure for the free cash flow generated, or consumed, by the business: a further vital number, and one upon which many acquisition/divestment decisions have been taken.

As a consultant looking to help businesses improve their financial and strategic performance, I often quietly do a ‘normalisation’ exercise on a client’s P&L. This process almost always throws up those difficult questions that need to be asked and answered before an improvement process can be truly effective.

Header cartoon credit: Dilbert and Scott Adams again capture the idea.

6 questions to assess: ‘How strategic is your data?’

Data is inherently tactical, just numbers without intelligence. It takes structure, capability development, and governance to turn it into a useable asset that adds value. In the absence of a structure that is designed to enable the identification, analysis, and leveraging of that data, and to turn it into useable intelligence, it will remain just data.

To go about that task, ask yourself a number of questions:

What are the data flows?

Through the enterprise, who uses the data, how do they use it, and to what outcome?

Where are the interconnections that occur, and to what extent are they compounding positively? Data can also compound negatively, usually because it reinforces an existing confirmation bias that is flawed.

Data is functionally agnostic. It should be readily available to all, and the outcomes of its use transparent, so they can be built upon and compounded.

Who ‘owns’ the data?

Too many times I see the IT department generating data, and keeping it to themselves. Similarly, the finance department is guilty, as are all functions. This is usually not malicious; it just reflects a lack of cross functional collaboration. It is becoming more common for marketing to drive a large part of the data agenda, enabled by digital tools, but few marketers have the capability to do it effectively.

Often, there is an expectation that ‘digitisation’ of the enterprise will change the way data is used. Not so. Unless the internal structures are changed as well, it is no more than putting a new coat of paint on the building: nothing really changes, you just get a few press releases and nice photos for the annual report.

What data is used?

Piles of data are generated, often collated, and distributed, or made available, but never put to productive use. Usually the missing ingredient is curiosity. Those who are curious approach the data with a ‘why’ and ‘what if’ attitude; they ask questions that identify holes in the data, drive those holes to be filled, and seek out new sources.

Where does the data add competitive value? Competitive value is a two-sided coin. On one side is the need to keep up with what your competition is doing, to leverage the opportunities for productivity and not fall behind in your customers’ eyes. On the other is finding ways for data, and more specifically the knowledge that comes from analysing data, to give you a competitive edge. If a proposed investment does not do at least one of these two things, why would you proceed?

How well do the data outcomes reflect alignment with strategy?

When the data, and the analysis that goes with it, lead to conclusions that are inconsistent with or divergent from the stated strategy, you must question the data, the analysis, and the strategy. In these circumstances, it makes sense to deploy the scientific method: create a hypothesis, test it, collect more data, and rinse and repeat until you have alignment between the strategy and its supporting data.

Where are you on the digital adoption curve?

Data is just another asset: it requires explicit actions to build the capabilities necessary to generate, use, and fund it. Explicit policies and priorities have to be given to the investments in data development and the capabilities required, or it will not happen. There needs to be a clear picture of the structure of the data domains, from engineering and finance to marketing and sales, and they need to be prioritised and organised to deliver the best return in the long term.

The tools being used to accumulate, process, and analyse data are just tools, no different to the hammer that drives a nail. It is how we use them that makes the difference. Tools everyone should have are those that ensure the data is both clean and robust. Decisions based on data that fails either of these ‘sanitary’ tests will be sub-optimal at best.

We have entered the digital world. Data and its organisation, funding, leveraging and governance are rapidly becoming the key to competitive survival.

How well are you, and your enterprise placed?

Header cartoon: courtesy Tom Gauld at tomgauld.com.

The 8 benefits of ‘Numerical Ambidexterity’ to marketers


Being a useful marketer has many foundations, most of them untouched in the course of a marketing degree.

One of the ‘must have’ but seemingly rare skills amongst most so-called marketers I see is a relationship with numbers.

In a seeming paradox, I do not like numbers as I usually meet them: piles of them squeezed onto dense spreadsheets, with little thought or imagination beyond getting as much data as possible assembled in one place. This drives me nuts.

On the other hand, I love numbers for what they can tell me once the data has been cleaned and organised in a way that enables smart, curious questions to be asked, then answered. Data that moves towards knowledge, then becomes a source of insight, is essential to success. It also clearly demonstrates the holes in the data, and your ability to address the challenges presented.

Analytical skill is a foundation of successful marketing.

Typically, marketing is seen as a creative exercise. I think this is why many marketers appear almost innumerate, and why the accountants and engineers who run many organisations have little time for those supposed to be running marketing. They love numbers, and assume anyone who does not is an idiot.

Well used, numbers tell a story, and marketing is all about stories. However, stories that do not have some sort of quantitative foundation are commonly called fairy tales. Children love fairy tales, but the accountant in the corner office making the resource allocation decisions, thinks they are for his grandchildren only.

Being analytical is way more than just having the numbers. It requires that they are turned from just numbers into actionable insights, which generate further numbers to be understood and used to gain leverage for the investments being made. It does not matter whether the investment is in brand building or in buying a new machine; both are investments upon which a return should be expected.

We are not generally taught to have this sort of intimacy with numbers.  We are not taught that they are key enablers of critical thinking, curiosity, and creativity.

A hypothesis without the means to test and validate it is, at best, a nice idea.

I managed to pass (just) a reasonably high level of maths at the HSC, almost 50 years ago. I passed purely because I worked at remembering the formulas and the circumstances where they worked. I never had the slightest idea of where this gobbledygook might be useful, so by the time I had recovered from the post-exam hangover I had forgotten everything. The absence of that key item, understanding, is why many of us shy away from numbers: we were never taught where and why they might be useful. We had formulas jammed down our young throats, and hated it, a dislike that coloured the rest of our lives.

Get over it, and allow numbers to speak to you, to help you understand the stories they are hiding.

  • Look for and identify the trends and patterns in the data, and when there is an anomaly, be able to ask and answer the simple question: why?
  • Find the gems of truth hidden amongst the averages we always seem to be fed.
  • Understand what ‘normal’ looks like, so you can see the bits sticking out, and again find out the ‘why’.
  • Find the boundaries of an idea, circumstance, impact, and potential.
  • Discover variances, and use the boundaries of those variances to improve performance over time. This is the core technique of continuous improvement in factories; engineers love it, and I have found it just as useful in many other circumstances.
  • Numbers enable some sort of quantitative boundary to be thrown around uncertainty, particularly useful at the moment. By testing the numbers, then revising and retesting, you can progressively increase the level of certainty, reducing risk.
  • Enable yourself to use perhaps the oldest and most useful tool in the marketer’s arsenal, the 80/20 rule, courtesy of Italian mathematician Vilfredo Pareto. In 45 years of commercial life, this simple technique has been used over, and over, and over again to uncover many ‘whys’.
  • Understanding the data enables you to be ‘numerically ambidextrous’: you can zoom out to see the whole picture, then zoom in to see the details of anything that looks different, interesting, or is a hole in the data that might lead to an insight.
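
The 80/20 rule in the list above is simple to apply in code. Below is a minimal sketch in Python on an invented customer revenue table (all names and figures are hypothetical): rank the customers, then walk down the list until 80% of revenue is accounted for.

```python
# Hypothetical revenue by customer in $'000, invented for illustration:
# the skewed distribution is the pattern Pareto observed.
revenue = {
    "cust_A": 420, "cust_B": 310, "cust_C": 95, "cust_D": 60,
    "cust_E": 40, "cust_F": 25, "cust_G": 20, "cust_H": 15,
    "cust_I": 10, "cust_J": 5,
}

total = sum(revenue.values())
ranked = sorted(revenue.items(), key=lambda kv: kv[1], reverse=True)

# Accumulate revenue from the top until 80% of the total is covered.
cumulative = 0
top = []
for name, rev in ranked:
    cumulative += rev
    top.append(name)
    if cumulative / total >= 0.80:
        break

share = len(top) / len(revenue)
print(f"{len(top)} of {len(revenue)} customers ({share:.0%}) "
      f"deliver {cumulative / total:.0%} of revenue")
```

Run against real sales, cost, or complaint data, the same few lines routinely surface the handful of customers, SKUs, or problems that deserve most of the attention, and prompt exactly the ‘why’ questions the bullet points above describe.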

All these skills are just as useful to a marketer as they are to an accountant or engineer. When you have them, your credibility with those in the corner office will soar.

 

 

Why ‘SMART’ does not work well for digital


Marketing has moved significantly into the digital domain, online. It appears to make sense, as it appears ‘SMART’ (Specific, Measurable,  Achievable, Realistic and Time driven).

The engineers and accountants amongst us warm to this sort of seemingly measurable expenditure: they can look at a dashboard of quantitative outcomes, and feel good that they are not wasting money.

However, a closer look might give them pause.

Specific.

Yes, you can have a specific, focused activity that either happened or did not, and people can be held accountable for it.

Measurable.

Yes, you can measure an activity done online, so long as you are prepared to discount the bots and fakery hiding in the digital supply chain. The ad did appear, we got X 000 likes and Y 00 email addresses when they downloaded the clickbait, and sales reps are now chasing them as qualified leads. Hopefully a few of them actually are, but we may never really know.

Achievable.

Yes, the goal of getting likes and qualified leads has been achieved.

Accountable.

Again, you know the intern in the marketing department was accountable for ensuring that there were X entries in the Twitter feed and Y postings on Facebook and Instagram, and that the agency supplied a white paper a week as clickbait.

Timely.

Again, yes: the boss wanted this all done by the end of the month, and it was. Hooray!

The problem with all of this is that we are measuring the wrong things. They are all about activity, nothing about outcomes. When we understand and can quantify the cause and effect links between activity and outcomes, a really tough problem, SMART goals may become useful.

‘Digital marketing’ has replaced using ‘digital’ as a tool of marketing. Those amongst us who do not understand the wide impact of ‘marketing’ have it all the wrong way around. They have been seduced by the shiny new thing that appears to be useful, and sometimes it is, but rarely as a standalone strategy; it is by its nature a short term tactic only.

How much Marketing Automation is good automation?


One of the questions occupying my newly monastic mind over the past few weeks has been: ‘What changes can we expect in the revenue generation processes as a result of the Bug?’

In the lead up to this crisis, I have been considering how automated everything was becoming, at the expense of humanity.

There is an inherent conflict between the centralising force of ‘Martech’ (marketing technology) automated decision making, and the front line sales function. Martech investment requires decisions to buy and install various combinations of software that automate a selling relationship. The decentralised nature of the sales front line does not benefit from such automation, as people still prefer to buy from people, particularly where the investment is large, or there is an emotional element to the purchase.

To my mind, it has become too clinical and automated in most large businesses. This creates opportunities for smaller businesses whose niche is perhaps more clearly defined, and who lack the resources and capability to leverage an integrated ‘Martech stack’.

The Bug has brought the question to the fore.

On one hand, we are now compelled by circumstances to interact using digital tools, but there is a steep learning curve for many, and SMEs are rapidly discovering their capability shortcomings. On the other, human contact will become more valuable than ever, and those same SMEs may be better placed than most large companies to be ‘human’.

Where on the scale does your business fit?