The 74-year journey of AI

We’re all familiar with the standard XY graph. It shows us a point in two dimensions.

AI does a similar thing, except that it works in millions, and more recently trillions, of dimensions.

Those dimensions are defined by the words we write into the instructions, built upon the base of raw data to which the machine has access.
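
To make the contrast concrete, here is a minimal sketch in Python; the vector size is illustrative, not any particular model’s.

    import random

    # A point on a standard XY graph: 2 dimensions.
    point = (3.0, 4.0)

    # A language model represents a word as a vector with hundreds or
    # thousands of dimensions; 768 is a common size for smaller models.
    embedding_size = 768
    word_vector = [random.random() for _ in range(embedding_size)]

    # 'Meaning' is a position in this high-dimensional space: words used
    # in similar contexts end up close together.
    print(len(point), len(word_vector))   # 2 vs 768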

The output from AI is a function of the data the particular AI tool has been ‘trained’ on and can access in responding to the instructions given.

Every letter, word, and sentence generated is a probability estimate: given everything that has come before, the model predicts what the next word, sentence, paragraph, chapter, and so on, is most likely to be.
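
As a toy illustration of that idea, here is a bigram model in Python: it estimates the probability of the next word from the single word before it. Real systems condition on vastly longer contexts, but the principle is the same. The corpus here is invented for the example.

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ate the cream".split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    # Probability estimates for the word after 'the'.
    counts = following["the"]
    total = sum(counts.values())
    for word, count in counts.most_common():
        print(f"P({word} | the) = {count / total:.2f}")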

Generative pre-training of digital models goes back to the 1990s. Usually it was just called ‘machine learning’, a label that plays down the ability of machines to identify patterns in data and generate further data points that fit those patterns. The revolution came with the word ‘transformer’, the T in ChatGPT. It came from the seminal AI paper written inside Google in 2017, ‘Attention is all you need’.

The simple way to think about a transformer is to imagine a digital version of a neural network similar to the one that drives our brains. We make connections based on the combination of what we see, hear, and read, with our own domain knowledge, history, and attitudes acting as guardrails. A machine simulates that through its access to all the data it has been ‘trained’ on, applying the instructions we give it to assemble from that data the best answer to the question asked.
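
The mechanism behind that paper’s title, ‘attention’, can be sketched in a few lines of Python with NumPy: each word scores its relevance to every other word, the scores become weights, and the output is a weighted blend. This is a toy rendering of the idea, not a production implementation.

    import numpy as np

    def attention(queries, keys, values):
        # Scaled dot-product attention, the core of the 2017 paper:
        # score every query against every key, turn scores into weights
        # with a softmax, then blend the values by those weights.
        scores = queries @ keys.T / np.sqrt(keys.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ values

    # Three 'words', each a 4-dimensional vector (toy sizes).
    rng = np.random.default_rng(0)
    x = rng.standard_normal((3, 4))
    print(attention(x, x, x))   # self-attention: each word attends to all three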

The very first paper on AI, written by Alan Turing in 1950, was entitled ‘Computing Machinery and Intelligence’. He speculated on the possibility of creating machines that think, introducing the concept of what is now known as the ‘Turing Test’.

The original idea that drove the development of the transformer model at Google was a desire to build superior machine translation; the 2017 paper demonstrated it on translation tasks. When that was achieved, suddenly the other capabilities became evident.

Google then started thinking about the ramifications of releasing the tool, and hesitated, while Microsoft, which had also been investing heavily through OpenAI (an outfit that started as a non-profit), beat them to release, forcing Google to follow quickly, stumbling along the way.

Since the release of ChatGPT on November 30, 2022, AI has become an avalanche of tools rapidly expanding to change the way we think about work, education, and the future.

 

Header cartoon credit: Tom Gauld in New Scientist.

 

The fundamental management distinction: Principle or Convention?

My time is spent assisting SMEs to improve their performance. This covers their strategic, marketing, and operational performance. Deliberately, I initially try to downplay financial performance as the primary measure, as financial results are the outcomes of a host of other choices made throughout every business.

It is those choices around focus and resource allocation that need to be examined.

Unfortunately, the financial outcomes are the easiest to measure, so they dominate in every business I have ever seen.

When a business is profitable, even if that profit is less than the cost of capital (a business earning $800k on $10 million of capital that costs 10% is going backwards by $200k a year), management is usually locked into current ways of thinking. Even when a business is marginal or even unprofitable, it is hard to drive change in the absence of a real catalyst, such as a creditor threatening to call in the receivers, or a keystone customer going elsewhere.

People are subject to their own experience and biases, and to those they see and read about in others: convention in the wider context, the status quo in their own environment.

Availability bias drives them to put undue weight on the familiar, while dismissing other, and especially contrary, information.

Confirmation bias makes us unconsciously seek information that confirms what we already believe, while obscuring the contrary.

Between them, these two forces of human psychology cement in place the status quo, irrespective of how poor that may be.

Distinguishing between convention and principle is tough, as you need to set aside these natural biases that exist in all of us. We must reduce everything back to first principles, which is incredibly hard, as we are not ‘wired’ that way.

The late Daniel Kahneman articulated these problems in his book ‘Thinking, Fast and Slow’, based on the data he gathered with colleague Amos Tversky in the seventies. That data interrogated the way we make decisions by experimentation, which enables others to quantitatively test the conclusions, rather than relying on opinion.

That work opened a whole new field of research we now call ‘Behavioural Economics’ and won Kahneman the Nobel prize. Sadly, however, while many have read about and understand at a macro level the biases we all feel, it remains challenging to make that key distinction between convention, the way we do it, the way it has always been done, and the underlying principles that should drive the choices we make.

As Richard Feynman put it: “The first principle is that you must not fool yourself—and you are the easiest person to fool. So, you have to be very careful about that.”

How do we prepare for AI roles that do not exist?

Most BBQ conversations about the future of AI end up as a discussion about jobs being replaced, new jobs created, the balance between the two, and the pain of those being replaced by machines.

It is difficult to forecast what those new jobs will be: we have not seen them before, and the circumstances by which they will be created are still evolving.

Eighteen months ago, a new job emerged that now appears to be everywhere.

‘Prompt engineer’.

Yesterday, it seems, there was no such thing as a ‘prompt engineer’. Nobody envisaged such a job; nobody considered the capabilities or training necessary to become effective at it. Now, if you put the term into a search engine, there are millions of responses, and thousands of websites, guides, and courses have popped up from nowhere. They promise riches for those who are skilled ‘prompt engineers’, and training for those who want to hop onto the gravy train.

What is the skill set required to be a prompt engineer?

There are no traditional education courses available. Do you need to be an engineer, a copywriter, a marketer, a mathematician?

This uncertainty makes recruiting extremely difficult. The usual guardrails of qualifications and past experience necessary to fill a role are useless.

How do you know if the 20-year-old with no life experience and limited formal education might be an effective and productive prompt engineer?

How many job descriptions will emerge over the next couple of years that are currently not even under any sort of consideration?

The old recruiting rules no longer apply. We need to hire for curiosity, intellectual agility, and some form of conceptual capability for which I have no word.

The challenging task faced by businesses is how they adjust the mix of capabilities to accommodate this new reality.

Do they proactively seek to build the skills of existing employees, which requires investment? Do they clean house and start again, losing corporate memory and costing a fortune? Do they try to find some middle path?

Where and how do you find the personnel capable of building for a future that is undefined?


Are we in an AI bubble?

Two years ago, Nvidia was a stock few outside the gaming world had heard of. Now, it has a market valuation of US$2.7 trillion. Since the beginning of this year, Google, Amazon, and Microsoft have invested $30 billion in AI infrastructure and seen their market valuations accelerate, and hundreds of AI start-ups appear every week.

Everybody is barking up the same tree: AI, AI, AI…

Warren Buffett, the most successful investor ever, is famous for saying he would not invest in anything he did not understand.

He conceded many opportunities have passed him by, but he gets many right. Berkshire is among the biggest investors in Apple, a $200 billion holding at current market value that cost a small fraction of that amount.

Does anyone really understand AI?

Are we able to forecast its impact on communities and society?

We failed miserably with social media; why should AI be any different?

Even the experts cannot agree on some simple parameters. Should there be regulatory controls? Should the infrastructure be considered a ‘public utility’? When, if ever, will sentience be achieved?

Bubbles burst, and many investors get cleaned out, but when you look in detail, there are always elements of the bubble that remain and prosper.

The 2000 dot-com bubble burst, and many lost fortunes. However, a number of businesses that at the time looked wildly overvalued are now dominating the leaderboards: Apple, Amazon, and Google, for example.

The tech was transformative, and at any transformative point there are cracks that many do not see, so they stumble. From the rubble there always emerge some winners, often unexpected and unforecastable.

Is AI just another bubble, or is it as transformative as the printing press, steam, electricity, and the internet?

Header cartoon courtesy of an AI tool.

Treat ‘prompt engineering’ as you would a ten-year-old.

Management is all over the place, scrambling to ‘get AI’.

A common failure of that scramble is an old reality: rubbish in, rubbish out.

Outcome quality depends on two factors:

  • Data quality. The quality, depth, and breadth of the data used to generate the outcome are dictated by the databases on which the system was trained.
  • Instructions given. The instructions you give the machine drive the type of data it draws on, and the weight it gives to each piece, in its response.

AI is a ‘machine’, an electronic warehouse of information that it makes available on request.

They are machines, not people. They cannot ‘think’; they do as instructed, using predetermined ‘training’ to prepare an answer.

Most people are radically unprepared for the changes coming.

The best-known problem-solving metaphor has always been Einstein’s.

He observed that if he had an hour to solve a life-defining problem, the first 50 minutes would be spent defining the problem; the rest is just maths.

It is identical in the deployment of AI.

It seems to me that when a system ‘hallucinates’, it is a sign that it has been inadequately briefed. Think about the briefing as you would think about explaining something to an intelligent 10-year-old!

Keep it simple.

Explicitly define what information is to be used.

Explicitly define the objective to which the information will be directed.

Explicitly give any contextual information that may be helpful.

Explicitly define the range of outcomes you might be looking for.
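
Those four ‘explicitly’ steps can be wired into a simple, reusable briefing template. The sketch below is a Python illustration of the discipline; the function and field names are mine, not any particular AI tool’s API, and the example inputs are invented.

    def build_brief(information, objective, context, outcomes):
        # Assemble the four explicit elements into one plain-language brief,
        # laid out as you would lay a task out for an intelligent 10-year-old.
        return "\n".join([
            f"Use only this information: {information}",
            f"The objective is: {objective}",
            f"Helpful context: {context}",
            f"Present the answer as: {outcomes}",
        ])

    print(build_brief(
        information="the attached FY24 sales figures by region",
        objective="identify the three regions with declining margins",
        context="we are a mid-sized food manufacturer selling to supermarkets",
        outcomes="a short table followed by three dot-point recommendations",
    ))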

The key to leveraging the speed and depth of data made available by AI systems is in the preparation of the data and the matching of that data to the problem being addressed.

If you use this simple process, the one you should have practised on your children, you can dodge the expensive and largely useless ‘prompt engineering’ courses, books, and gurus that have sprung up like mushrooms after rain. They are there to drain your pockets by offering seemingly easy solutions to difficult challenges.

There is no such thing as an easy solution that negates the necessity to ‘do the work’.

 

Header credit: DALL-E.

Cash flow as the lifeblood is only half a metaphor.

Cash flow is often described as the lifeblood of a business.

While it is correct, it leaves a lot on the table.

If cash flow is the lifeblood, you also need a heart to pump it around the body. The leaner and more efficient the body in which the heart resides, the easier it is to pump, reducing the stress on the mechanism, reducing risk.

Similarly, to be effective, blood requires oxygen to be attracted and distributed through the system.

Oxygen is what keeps everything working; it is the source of the power required to run the system, without which the system rapidly grinds to a halt.

In a business context, the oxygen is the input of information, the lungs and heart are the analysis and leveraging of that information, and the culture of the organisation is the body that holds it all together.

You go to the doctor to get a physical; where do you go to get a ‘commercial’?

An accountant will give you part of the picture, based on the books.

A ‘lean’ expert might offer many insights into the operational processes, particularly in a factory, while also offering cultural observations.

A ‘Six Sigma’ expert will deliver an arithmetic analysis of the efficiency of each part of a process.

A marketing expert (if you can find a bullshit-free one) will give you opinions based often on questionable and partial information, and usually biased towards their particular view of the role of marketing.

A sales expert will opine that everything else will be OK if you just get more leads for them to convert, and here is how!

The point is that each will give you a picture of your business as they see it, based on their experience, training, predisposition, domain knowledge, and their own assessment of WIIFM: what’s in it for me.

Finding someone who ties all that together, and offers a complete, unbiased, and expert picture is a challenge.