Algorithm groundhog day

In 1911, Frederick Winslow Taylor published his book ‘The Principles of Scientific Management’, which used logic and maths to describe the pathway to efficiency. It shaped the practice of management for the next 80 years, until, slowly, we realised that not all behaviour is rational, or able to be broken down in a binary way. In fact, most of our behaviour is not binary in the way envisaged by Taylor. It is shaped by the forces that have driven our evolutionary success, best described by Daniel Kahneman in his great book, ‘Thinking, Fast and Slow’.

Increasingly, I am seeing and reading material that reflects the explosive growth of AI, and the impact it will have on our lives, both working and private. I cannot help but wonder if this is another manifestation of the same mistake Taylor made.

Artificial intelligence will be a huge boon for all sorts of tasks. It is far better than we mere humans at all sorts of things, but it cannot, at least yet, reflect the nuances of human behaviour, and our reactions to the things that make us a successful species.

How will AI deliver us the elements of pride and accountability we have in a complex job, when that job is broken down into a series of sequential tasks to be done by the ‘recipe’ without variation? Where does the insight and creativity that comes from doing such a job emerge when it is being done by the numbers generated by a machine?

My mother-in-law used to do paintings by the numbers. Her unit had quite a number of ‘originals’ by famous artists, all done by the numbers, with great care and attention, and the application of considerable skill in attending to the minutest details. However, they were not the originals, not even great copies of the originals; they were paintings by the numbers.

Algorithms are great, but not at everything.

8 things to remember about that hugely persuasive data.

‘In God we trust, all others bring data’. This statement is generally attributed to W. Edwards Deming, way back in the 1950s.

It is as true now as it was then, with some pretty significant caveats. 

You need to know the provenance of any data you choose to use. Data is just data: it can be managed, coalesced, manipulated, misrepresented, and outright lied about, so be careful.

Some things to remember.

  • Data is mostly history, and the future is rarely the same as the past. Data that is really a forecast should not be called data; it is a ‘best guess’, ‘wishful thinking’, ‘what the boss told me to say’, ‘I need this to keep my job’, or a thousand other things.
  • Data is always incomplete, no matter how complete you think it is. There is always some missing context that could give it greater, or even a different, meaning. Part of the challenge is being able to make a decision without being overwhelmed and developing a form of ‘common sense blindness’ caused by a tsunami of data.
  • Data provenance is always useful to know. What is the source of the data? When it comes from customers, it may be more useful than when it comes from a supplier trying to sell you something. When you see claims like ‘This lotion has been scientifically proven to cure male pattern baldness in 92.5% of cases’, you know the ‘scientific test’ was done on 10 hairy blokes from the local footy side.
  • Data is objective, but the analysis of data is not. It is subject to a host of human emotions and contexts, and can be interpreted in a number of ways depending on the mood, experience, domain knowledge, and a host of other attributes of the analyser.
  • Data ages quickly. What is pretty right now might not be so right in a year’s time.
  • The world is full of conflicting data; the challenge is to know which pieces to believe and use, and which to discard. Anticipating the actions of your competitor with the same set of data is a very useful exercise. Put yourself in their position and ask yourself: ‘What would I do now?’
  • Data can distract. We are visual animals, and visuals are a powerful way of communicating, so be sure that what is being communicated by those fancy graphs is actually what the data says.
  • Data should be able to tell us what is causation, and what is simply random correlation. These two may be the most confused of states, and are certainly amongst the most used red herrings (the sketch below illustrates the difference).
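
To illustrate that last point, here is a minimal sketch in Python, with entirely invented numbers, of how two measures can be strongly correlated while neither causes the other. The variables (temperature, ice cream sales, test scores) are hypothetical: a hidden confounder drives both series.

```python
# A sketch of correlation without causation, using invented numbers.
# A hidden confounder (temperature) drives both series, so they
# correlate strongly even though neither causes the other.
import random

random.seed(42)

temperature = [random.uniform(10, 35) for _ in range(100)]  # the confounder
ice_cream_sales = [2.0 * t + random.gauss(0, 3) for t in temperature]
test_scores = [0.5 * t + random.gauss(0, 2) for t in temperature]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Prints a correlation close to 1.0: 'ice cream improves test scores'
# is the red herring; temperature explains both.
print(f"correlation: {pearson(ice_cream_sales, test_scores):.2f}")
```

Any pair of measures sharing a strong common driver will show this pattern, no matter how unrelated they really are.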

Data should be one of the foundations of all our decision making, and we rarely have all we need. Therefore we are forced to make often difficult choices with limited data, implement those decisions, measure the impacts, and adjust tactically as we learn. It pays to understand what you are relying on when you make those choices.

Cartoon header: courtesy www.XKCD.com

Where is the demarcation between Accountability, Responsibility and Authority?

The words ‘Accountability’, ‘Responsibility’, and ‘Authority’ are often mixed up, used inconsistently, and often treated as synonyms.

How often have I heard someone say they have accountability, but not the responsibility, as well as the opposite?

In any organisation, the ‘language’ used has to be crystal clear. Without clarity, ambiguity and finger-pointing creep in.

Let’s put this one to bed.

In my world, the demarcation between these words is very clear.

Accountability.

The clue is in the word: ‘count’. The person with accountability is the one keeping track of progress, counting it. They may not have the power to make all the decisions; their role is to be the one who gives voice to issues as they arise, and this should be independent of the role the person plays in the organisational hierarchy. In former marketing management roles I held product managers accountable for the margins of the products for which they held responsibility. They did not set final prices, nor did they control the promotional spend or COGS, but they were accountable for margins, and it was their role to monitor, communicate, and persuade, to deliver both the percentage and dollar outcomes.

Responsibility.

Anyone who is in a position to ‘respond’ carries responsibility. An individual does not have to carry either accountability for outcomes, or the authority to make decisions to be responsible for actions taken, most particularly their own. It is in this area of responsibility that the cultural aspects of an enterprise are felt most keenly. When those without any institutional power feel attachment to an outcome, and act accordingly, they are exhibiting a level of responsibility, and it is a powerful marker to a positive, productive culture. 

Authority.

This belongs to the person who has the final say, the power of veto. Authority can be delegated, even to the lower levels in an enterprise. On a production line where there is an ‘Andon’ cord in place, workers carry the authority to stop the line when they see a quality fault, rather than allowing it to proceed further down the line.

The larger an organisation becomes, the more nuanced and ambiguous these definitions can become as people interpret their position and role, and that of others, slightly differently.

A regular and blatant misuse of the word authority occurs when it is used to point at someone who is expected to be an expert. The word sometimes carries the article ‘an’ in front of it, becoming ‘an authority’, as in the header illustration. The doctor was used in the header ad because he was seen as ‘an authority’, and therefore had an opinion that should count, but had no authority over the actions of an individual.

As a further example, in most organisations, the CFO is accountable for the cash. They literally count it, report on it, and recommend actions that impact on it. The CEO retains authority over the cash, as they have the final say in how it is managed and allocated, and everyone in the organisation has a responsibility to ensure that cash is spent wisely, with appropriate governance and reporting.

Having clarity around these definitions, and a culture that respects and responds to them, is crucial to any improvement process.

11 ways to uncover the lies in data

Data does not have an agenda, it does not lie, but it rarely shows the whole story.

Think of the data that would be gathered and analysed after the announcement that a chunk of native forest was to be opened up for logging. The botanists would have one set of data and analysis of the impact, the accountants another, the entomologists another, those concerned with native animal habitat another, and so on. None are wrong, but all are incomplete without the input of the others.

Corporate use of data does have an agenda: performance, and unfortunately, often personal advancement. Similarly, data delivered as fact by a politician has an agenda: getting elected.

The data does not have an agenda, those who use it often do.

Bias in data can be conscious, as well as unconscious. Someone has to decide what data is collected, what hypotheses to test, and how it is to be used. All can be shaped to meet a predetermined outcome.

When making a major decision we all look for the data that will give us confidence in our choice.

However, we are all also familiar with the nagging feeling that the data we are looking at is nothing short of bullshit.

So how can you tell?

Here are 11 simple tests to apply.

  • Where did the data come from? Organisations, geographies, people, all make a difference.
  • Was the collection method designed by someone with a vested interest in the outcome?
  • What are the gaps in the data? These can easily be created by the manner in which questions are asked, or often, not asked.
  • What assumptions were made in assembling and analysing the data? No data survives the filtering imposed by the assumptions in the assembly and analysis processes.
  • What statistical measures have been applied? The number of initial data points, upper and lower control limits, confidence levels: all the statistical tools are available, but too often they are dismissed by non-statisticians and those running an agenda.
  • Be wary of creative articulation. Percentages are regularly thrown about as ‘proof’ of something: a 50% increase in accidents in your suburb in the past year may mean there were 3 compared to 2 last year. Similarly, averages are often misleading. We expect the mean to be close to the median (the middle point in a range), but often it is not (see the sketch after this list).
  • Who gains or loses from the outcome? Just look at the current political ‘debate’ in this country for ample evidence of this. There are no laws about truth in advertising for political ads, so the numbers quoted are heavily edited or, it would seem, often just made up.
  • Is the data describing just correlation, or is it truly causation? This is often used to make a case: for example, the compelling case put forward by The Economist a while ago ‘proving’ that intelligence increases with consumption of ice cream.
  • What are the alternative explanations of the conclusions articulated, and what are we not being told?
  • Is the data giving you the answer to the question being asked, or to some other question? And, how well is the question reflected in the answer?
  • Has anyone with an established perspective opposite to the outcome of the data had a critical look at it? This is often a good way of finding the holes in the collection and analysis.
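
To make the ‘creative articulation’ point concrete, here is a minimal sketch in Python with wholly invented numbers: a headline-grabbing percentage built on a tiny base, and a mean dragged away from the median by a single outlier. Nothing here comes from a real dataset.

```python
# A sketch, with invented numbers, of two ways figures mislead:
# a dramatic percentage off a tiny base, and a mean pulled away
# from the median by one extreme value.
from statistics import mean, median

# '50% increase in accidents' can mean 2 accidents became 3.
last_year, this_year = 2, 3
increase = (this_year - last_year) / last_year
print(f"Accidents up {increase:.0%}: {last_year} -> {this_year}")

# Nine modest incomes and one very large one: the mean looks healthy
# (85.1), while the median (50.5) tells the truer 'typical' story.
incomes = [52, 48, 55, 50, 47, 53, 49, 51, 46, 400]  # $000s, invented
print(f"Mean income:   {mean(incomes):.1f}k")
print(f"Median income: {median(incomes):.1f}k")
```

The same two checks, the base behind a percentage and mean versus median, are worth running on any figure quoted as ‘proof’.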

While statistics can be made to lie, they will also deliver transparency when you understand the basic measures. People will often tell you what they think you want or need to hear, and when it is backed by data, it becomes more credible, particularly if it confirms an already established point of view.

Finally, if it seems too good to be true, there is a fair chance that it is. Our instincts are usually pretty good, so follow them until proved otherwise.

I am by no means a data nerd, but I do believe that good data can make our collective lives better by improving decision making, and removing just a little of the bullshit sprayed at us so regularly and methodically by everyone with a cause.

Data does not lie, people using data can, and do.

The header cartoon is from David Somerville’s Random Blather blog, an extension of Hugh McLeod’s original.

How to assess the value of information

The term ‘monetisation’ is thrown around like confetti at a wedding. It almost always refers, in one way or another, to the process of squeezing money out of information of some sort. The real key to monetisation success is to identify for whom value may be created by access to, and use of, the information and the outcomes it can bring.

Think about the differences between an X-ray and a CT scan.

An X-ray is a one dimensional ‘picture’, and you only see the bones with any clarity. It is the ‘first port of call’ in a diagnosis, offering a limited view of the location and orientation of a skeletal injury. By contrast, a CT (computed tomography) scan is multidimensional. You see not just the bones, but the soft tissue as well, and you can see it from a variety of perspectives. It is a far more complete picture.

This is a fine analogy for the value of information.

Financial information is just like an X-ray. It cannot tell you much beyond a one dimensional analysis of a current situation, and it is incomplete. A strategic analysis of information is more like a CT scan: you can see a dramatically increased information set, and examine it from a range of perspectives. This depth of information can deliver understanding and insight about the connections and interrelationships that exist.

An X-ray capability is relatively simple and cheap, whereas a CT scan is more complex and requires a far greater commitment of resources to deliver that far more detailed picture. CT scanning equipment costs in the region of 3 times as much as an X-ray set-up, and in use delivers perhaps 100 times the radiation of an X-ray. Not something to be undertaken without due consideration. A CT scan also requires far better-trained staff than an X-ray, generating greater operational and fixed costs.

So it is with the information you gather and analyse in your business.

Information is the currency of success these days. Various studies identify in excess of 80% of the market valuation of listed companies as coming from intangible assets. Considering this fact, it makes sense to have an information strategy.

Leaving the IT department to develop such a strategy fails to recognise the importance of information as an essential foundation of success.

Our standard accounting processes include an asset register, on which all assets are recorded, often down to the pens and pencils in the stationery cupboard, but I have yet to see one that puts a rationally articulated value on the information held in the data files of an enterprise. Is it just because it is hard to do, or is it because there is no place for it on our balance sheets and in our statutory accounts?

There appear to me to be 4 parameters for considering the value of your data:

  • Leading indicator: a source of information about what may happen
  • Lagging indicator: a record of what has happened
  • Focus is improvement of management discipline
  • Focus is creating new value for stakeholders

Creating metrics for each of these is challenging.

Metrics are usually financial, and then usually only one dimensional, based entirely on the costs incurred as recorded. This data is available in some form in every business, but only tells a part of the story. There are opportunities to record and measure costs in other ways.

Elsewhere I have considered the 5 types of cost in every business: direct, indirect, opportunity, transaction, and short-cut costs, and noted the challenges of putting numbers to some of them.

Financial data can also be ‘fattened up’ by consideration of several other parameters:

  • The value of the information, understood via the costs that would be incurred if it was suddenly unavailable.
  • What someone else might pay for it, particularly a competitor.
  • The extent to which the information contributes to the bottom line.

For example, these metrics could be considered in the context of the value of the customer and lead information in your CRM: how much does that information deliver to margins? These days, that is often a readily available metric.
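
As a rough illustration, here is a minimal sketch in Python, with wholly invented figures, of how those three parameters might be laid side by side for the customer information in a CRM. The combination rule is a deliberate simplification, not a valuation method.

```python
# A sketch, with invented figures, of the three 'fattening' parameters
# applied to the information held in a CRM. Every number is hypothetical.
replacement_cost = 250_000     # cost to rebuild the data if it vanished tomorrow
market_value = 100_000         # what someone else, e.g. a competitor, might pay
margin_contribution = 180_000  # annual margin traceable to CRM-driven sales

# One naive way to combine the three views: the information is worth at
# least the most valuable use you can demonstrate for it.
indicative_value = max(replacement_cost, market_value, margin_contribution)
print(f"Indicative value of CRM information: ${indicative_value:,}")
```

Crude as it is, putting even hypothetical numbers against each parameter forces the conversation about why that information is absent from the asset register.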

The additional valuation parameters are strategic:

  • How complete is the information in delivering a picture of how the competitive environment in which you compete is evolving?
  • How does that information inform your strategic decision-making, and what would be the costs of not having it, measured against the 5 costs noted above?
  • How does the information articulate the key drivers of performance?
  • How well does the information contribute to the strategic outcomes being sought?

Tackling this challenge of quantifying intangibles, recognising the truth of Peter Drucker’s throwaway line that ‘what gets measured gets done’, is not easy, and not cheap, but like the difference between an X-ray and a CT scan, the results are worth the effort when done well.

Header photo from: https://www.oceantomo.com/2015/03/04/2015-intangible-asset-market-value-study/. This is the second time I have used this graphic to make a crucial point about the value of intangibles in your business.

The problems with Google

Google is a wonderful tool: ask it any question, and the answer will come back, or at least a million references that may contain an answer.

That is terrific, except for a few minor, or major flaws, depending on your mindset, such as:

Confirmation.

The certainty you get from Dr. Google tends to confirm the things you may already know; at the least, it confirms the path you are on by giving you easy answers to the question you face. As kids, in PG (pre-Google) days, we had to go looking for the answer, feed our curiosity, critically review the few sources available, and in the process stumble across other information that might just be a useful addition to the path we were on. Some would point out that Google, in delivering millions of references, does the same thing. However, we mostly only look at the first page, which best reflects what others asking similar questions have opened. Confirmation bias at work, silently, in the background.

No challenge

We do not have to work to find information, we just have to ask.

What if we do not know what to ask?

It seems to me we have lost the itch that is curiosity: the drive to see things that are different, divergent, and offer another perspective.

There is also another side to it. If we go to a library and find a book we really like, the book next to it will be like the one we love, but just a bit different; that is the way libraries are organised, by topic. Google is not. It does not necessarily give you the thing most like what you are seeking; it just gives you a lead on the things other people have found by asking similar questions.

Currency

Google assumes that the newest stuff is the most useful. Often this is the case, but equally often not. Increasingly, the current stuff is just Google-fodder: crap, of little value.

Dr Google removes the random.

As I get older, it seems I have become more curious. As a result I seem to collect random facts, stories, reports, and pieces of information. Sometimes they get used quickly; often they sit on the metaphorical shelf for ages until a use emerges, or one gets merged with another thought at another time. This collection of random curiosities is facilitated by Google, but not encouraged: everything you ask for is there when you ask, but you never know what it is that you have not asked. The beauty of having a ‘library’ of trivia is that at some point, that piece of trivia, that random fact or report, will add enormously to whatever it is you are doing. Google will never know this, so you need to collect disconnected random facts like squirrels keeping nuts for winter.

Serendipity cannot be digital

Serendipity comes about from unexpected outcomes, things that go against the common understanding, the tenuous thread you see between two logically disconnected facts. Making these connections requires a multidimensional ‘intelligence,’ not one dependent on a logical algorithm, no matter how ‘smart’ it might be.

Google has not only become the default for the world, it is becoming the primary source, along perhaps with its digital stablemate Facebook, which is, if anything, even better than Google at eliminating the instinctive drive for creativity and curiosity.

How do we encourage critical thinking when there are only two sources of information?