To successfully innovate, ask better questions.


Innovation sessions typically involve an expensive consultant with some sort of manicured track record exhorting the group: ‘Be creative, let your mind wander, nothing is silly, think outside the box’.

That does not work very well, except of course for the consultant.

What is usually missing from these sessions is diversity. Not of gender, but of expertise, training, experience, and knowledge gained in seemingly unrelated areas.

Pose a difficult problem to an accountant, and you will usually get a numerical answer. Pose the same question to an environmentalist, and you will get a different, but entirely valid answer. People see problems and their potential solutions through the perspective of their training, domain knowledge and experience.

Imagine you are running an innovation session for Australia’s new space agency. Chances are you will have 25 rocket scientists in the room, all applying their skills and knowledge to the problem to be solved. Would you rather add another rocket scientist to that group, which may not add much to the 25 already there, or a biologist, musician, or surgeon, any of whom may know nothing about rocket science, but just may have a solution to your problem that comes from an entirely different field?

The best solutions to really difficult problems are more likely to come from asking better questions of different people, than from just asking more of the same ones directed to the same people.

Header cartoon credit: the great Gary Larson with thanks.


The classic disruption timeline


As a kid in the sixties, some of my friends had extensive record collections, mostly albums, but also singles of the ‘hits’ from albums. The Beatles dominated: Sgt Pepper’s Lonely Hearts Club Band sold millions of copies on release in 1967, and was still selling millions into the 70s.

In 1963 Philips introduced the compact cassette. It was portable, and offered fast forward and rewind. I can remember carefully taping favourite songs from the radio to make personalised ‘playlists’. Sales built rapidly, then took off when the Sony Walkman was introduced in the early 80s.

Meanwhile, Philips had been developing the CD, born in their labs in 1974. By 2000, the CD accounted for 96% of all sales of recorded music.

Again, parallel development was happening: the digital audio format called MP3 took off in the late 90s. It enabled music to be converted into a digital file that could be shared. Up popped Napster and similar sites, from which you could download music for free. In breach of copyright, but free.

Meanwhile, Apple had made MP3 players sexy by putting ‘a thousand songs in your pocket’ with the iPod. The music industry, tightly held by a small number of large corporations, sued, and won, but it was a pyrrhic victory, as Pandora’s music box had been opened. As a side note, the sight of an industry body suing to ensure that its product was not distributed is a touch unusual.

Then along came Apple, again, with iTunes and its multifunctional devices we still call phones, followed by more streaming devices and services. Spotify changed the face of the industry, again, and the fight became the more traditional marketing fight for your attention, and money.

You can follow a similar path with the development of the movie industry, motor cars, aeroplanes, computers, electricity, and many others.

The point is, the seeds of destruction are planted well before the visible disruption occurs. The timelines we typically think about when considering disruptive innovations are much longer when you step back and look at the lead-up changes that prepared the ground for the disruption.

What is happening in your industry that could bite you on the arse?


The case for doing something boring. Wool.


All the recent focus of industry development, control of IP, and sovereign manufacturing has been on high tech.

Should we, or perhaps why don’t we, look to areas where we have dropped the ball in the past, but still have the opportunity to shape world markets, build capability, and diversify our economy?

Should we be looking at some of the obvious, but perhaps boring, stuff that can make a significant difference, and where we already have a huge head start?

This race towards the newest shiny thing is fun, generates a lot of press releases, is exciting, attracts attention, as well as capital and competition, but is it the whole game?

In years gone past, Australia supplied a huge percentage of the world’s wool.

We grew it, processed it through the many stages to the production of yarn, and exported the highly value-added product to the world.

No more.

We have been supplanted as the number one producer by, you guessed it, China. We proudly, for now, occupy second place in the production stakes. China is also the biggest importer of Australian greasy wool, which it then processes, capturing the huge value-add that the processing stages contribute.

I do not have all the numbers, but the current mean fibre diameter of the Australian clip is 20.8 microns (AWPFC numbers), which is significantly less than the average of other major producers. At the extreme, production of wool at 13-15 microns is very small, requiring very considerable skill, animal husbandry, and investment in genetics. However, that investment is returned with huge price premiums paid by high-end fashion manufacturers. That fine wool sells at auction for up to, and sometimes more than, $150/kilo: 15 times the average.

Australia’s share of world fine wool production is upward of 80%.

Why is it beyond our capability to capitalise on such a premium position, based as it is on 150 years of experience, a continuing production advantage in the preferred raw product, and many millions of dollars on R&D?

Australian Wool Innovation has been pissing around for the 30 years I have been watching, from time to time dipping a toe into the water. They have wasted growers’ money and matched funding from the public purse, while failing to build a sustainable industry value chain that strengthens Australia’s competitive position. Making excuses, and generally having a fine old time, has been the outcome of their efforts.

Having just read the latest strategic plan I can find, that sorry situation is not going to change.

As part of the National Reconstruction Fund, should we revisit old friends like wool, which, despite the best efforts of the last 40 years, we have failed to kill off? Surely that level of resilience warrants some examination, and consideration of rebuilding the supply chains that delivered many of the foundations of the prosperity we still enjoy. Such an effort would tick 5 of the 8 priority areas nominated in the reconstruction fund legislation.

13 years ago, in a post, I asked ‘Where next for wool?’ The question needs to be asked again, and this time we should be expecting some sensible answers.

The header graph is the average price of greasy wool over time. You can see the impact of the wool industry pricing model that ended in tears in July 1995, leaving a huge inventory of unsold wool that screwed the market for a decade. As with all averages, the graph hides the huge opportunity that has been facing us for years, which we continue to ignore. 


The simple choice marketers must make.


When building a marketing plan, one of the key choices that must be made early is a deceptively simple one that most fail to recognise.

  • Are you setting out to serve existing demand?
  • Are you setting out to generate new demand?

Ninety-nine times out of a hundred, when I look at a marketing program, I have no idea which the marketer has chosen. Usually this is because they have jumped the early and challenging question of translating the strategic objectives back into the planning of marketing activities. Marketing is simply the means by which the strategic objective is translated into a series of actions which are communicated to those who might be interested in paying you to consume your product.

There is of course a third option: attempt to do both. However, to do both effectively requires a specific strategic choice. Allowing the ‘do both’ option to become the default of not deciding where the priority lies is commercial cowardice. It leads in most cases to sub-optimal allocation of the limited available resources.

Header cartoon credit: Dilbert demonstrates that choices must be made for clarity.


Plans never reflect what happens, so why bother?


Commercial success, that which delivers more than a wage, comes from only two places:

    • Critical thinking
    • The ability and willingness to be a bit different, experiment and embrace risk.

Why is it, then, that the web is full of ‘7 point plans to…’ templates, designed to remove the need to think, and assuring us that if we follow the plan, all will be well?

I have been as guilty as most, reducing some of what I publish on this blog to lists of sequential actions. This sort of headline increases readership of a post significantly; people want packaged solutions that promise an answer to a complex problem but remove the need to think.

I have been as seduced as anyone by the vanity of page views.

The important part of any plan, from the most complex to the mundane list of what you must do today, is that it is the result of critical thought.

What is important vs urgent?

Is this the best use of that absolutely finite resource: Your time?

How will this impact on those around me?

General Eisenhower made the observation that ‘plans are worthless, but planning is everything’. Eisenhower further noted that emergencies are, by definition, unexpected, and will therefore never unfold the way you planned for them.

Noted philosopher Michael Tyson’s contribution is perhaps the best known: ‘Everybody has a plan until they get punched in the mouth’.

Besides, without a plan, and associated goals, how will you ever know how you are performing?

The act of planning should be an act of critical and creative thinking, not filling in a formulaic set of generic questions.


Header credit: Scott Adams with an early question from Dilbert.


‘Is this current explosion of AI real and lasting, or just another tech bubble?’


There is no way around the fact that AI is now with us, and evolving at exponential rates. The unanswered question is ‘so what?’

There are two extreme schools of thought, and everything in between.

On one hand we have those who are extremely wary:

  • It will replace jobs, creating an unemployed under-class.
  • It will take away people’s rights to privacy, choice, and freedom, creating risk from baddies.
  • The buggers will take over, and we become the slaves of some dystopian, ‘terminator’-style thinking machines.

On the other hand, there are those who see:

  • Huge commercial and community benefits from the automation and efficiency AI brings.
  • Every platform change in the last 200 years, from coal to electricity, horses to cars, vacuum tubes to integrated circuits, PC networks to the cloud, has delivered huge benefit. Why not again?
  • The risks are manageable, and smaller than the benefits that will flow. Besides, it is now an unstoppable force, so choices are limited.

Let’s first have some context.

We have been imagining artificial helpers from our earliest times, seeking assistance, advice and guidance from all manner of sources. The beguilingly named Ada Lovelace, daughter of Lord Byron, wrote what is seen as the first ‘software’ for Babbage’s machine in around 1840, with Babbage taking the credit. In 1943, the first paper associating the neural networks in our brains with electrical circuits was published. In 1950, 73 years ago, Alan Turing wrote a paper called ‘Computing Machinery and Intelligence’ which posed the ‘Turing test’. This remains the central question of AI: ‘Can machines think?’

The term AI emerged from a 1956 workshop held at Dartmouth College, seen as the birth of modern AI, which kicked off research in many corners of the scientific world. Google, Microsoft, Amazon, Apple, academic scientists, and many startups such as DeepMind, now part of Google, and OpenAI, the designer of ChatGPT and DALL-E, significantly funded by Microsoft, have been working on this since the 90s. The ‘T’ in ChatGPT stands for ‘Transformer’, a technology breakthrough published by Google.

This long scientific road led to an inflection point last November, when OpenAI let ChatGPT out into the wild to see what would happen, and to take the strategic ‘first mover’ advantage.

What AI is: the application of maths and software that ‘teaches’ computers to synthesise information and generate output. It is controlled by people, although even the scientists are not always sure what goes on inside the software black box.

What AI is not: Killer software and robots that spring to life and take over by killing and/or subjugating people.

How does it work? Statistics and probability, combined with huge computing power.

The probability of a ‘u’ following a ‘q’ in English is very high, the probability of that q being followed by any other letter is very low. The probability of that ‘u’ being followed by an ‘e’ is higher than it being followed by a ‘z’. And so it goes, letter by letter, word by word, progressively taking on the context in which those letters, words, sets of words, and sentences are reflected, such that the difference between a ‘party’ in the sense of a happy event, versus a ‘party’ in the political sense is clear.
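The letter-by-letter intuition above can be sketched in a few lines of Python. This is a toy character-bigram counter, not how production systems work (they use transformer networks over tokens, with billions of parameters), and the corpus is an invented example, but the principle of predicting the most probable next item is the same:

```python
from collections import Counter, defaultdict

def bigram_probabilities(corpus: str) -> dict:
    """Estimate P(next character | current character) from raw counts."""
    counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        counts[current][following] += 1
    # Normalise each character's follower counts into probabilities.
    return {
        ch: {f: n / sum(followers.values()) for f, n in followers.items()}
        for ch, followers in counts.items()
    }

# Tiny illustrative corpus: in English, 'q' is almost always followed by 'u'.
corpus = "the queen quickly queried the quiet quorum"
probs = bigram_probabilities(corpus)
print(probs["q"])  # {'u': 1.0} -- every 'q' in this corpus is followed by 'u'
```

Scale that idea up from characters to words and sentences, train it on much of the internet, and you have the statistical machinery the paragraph above describes.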

Having sorted all that out, what are the things we should be thinking about?

  • AI as an augmenter. A tool that can assist us to outcomes that are smarter, quicker, and more comprehensive than we might have reached on our own. The role of humans will not be eliminated, but it will be changed.
  • AI as a broker. AI stands between us and an outcome we may not know how to reach, but which AI can facilitate. You want to write some code? Now you do not have to be a coding whizz; AI can do it for you quickly, and with reasonable levels of success.
  • AI as a magnifier. Every kid can have an AI tutor, every doctor an AI coach, every scientist an AI collaborator. This can lead to productivity growth, scientific breakthroughs, creative boundaries being busted, and fewer deaths in wars. The downside is also magnified; there is always a flip side to be managed.
  • Should we be concerned with ‘synthetic empathy’? We humans are social animals. What impact will this accelerating trend towards isolation from physical contact and interaction have on our collective psyche?
  • Blue vs white collar displacement. Every platform change in our economies over the last 250 years has displaced blue collar workers in favour of white collar, so-called ‘knowledge workers’. This one is different: it is the white collar knowledge workers, those who shuffle stuff around, who are in the gun. There is no AI or robotics that can replace Albert the plumber or Steve the sparkie. AI will change the support mechanisms they use, but will not change the simple act of fixing the leak in your bathroom or installing that extra powerpoint in the kitchen.
  • Regulation. How can we regulate, and indeed should we? It is remarkably difficult to regulate something that does not yet exist. We failed to regulate social media, despite recognising, with the benefit of hindsight, the damage it can do. Compared to AI, regulating social media would be easy, and we failed to get even that done. The problem is how to craft regulations that do anything beyond catching the silly stuff, when it is in the outliers, the things we see only with hindsight, that the real danger hides.

To answer the question posed in the header, it is my view that AI is an enormous avalanche of technical, cultural and digital change. We need to either get with the program, or get out of the way. If it is the latter, you will be consigning yourself to irrelevance.

This is not to imply it is all good.

AI does not have goals, it is not alive, it is just your toaster on steroids, so you can control it. AI is a tool, like any other, which can be used for good and bad, but indifference will lead to whacking your thumb with the hammer. The other thing about tools is that over time, they build equality and productivity.

However, the potential downsides are huge, and the opportunities for evil have never been greater. As the avalanche will not be stopped, you have to be in front of it to see and prepare for the pitfalls before you trip over them and are consumed.

Suck it up and enjoy the benefits!

Header cartoon credit: XKCD comic from the scary mind of Randall Munroe