Jul 15, 2024 | AI, Governance, Leadership
In a world dominated by discussion of AI, of electrification to ‘save the planet’, and of their impact on white-collar and service jobs, the public seems to miss something fundamental.
All this scaling of electrification to replace fossil fuel, power the new world of AI, and maintain our standard of living, requires massive infrastructure renewal.
Construction of that essential electricity infrastructure requires many skilled people in many functions: from design through fabrication and installation, to operational management and maintenance, people are required. It also requires ‘satellite infrastructure’: the roads, bridges, drivers, trucks, and so on.
None of the benefits of economy-wide electrification and AI can be delivered in the absence of investment in the hard assets.
Luckily, investment in infrastructure, hard as it may be to fund in the face of competing and increasing demands on public funds, is a gift we give to our descendants.
I have been highly critical of choices made over the last 35 years that have gutted our investment in infrastructure, science, education, and practical training. Much of what is left has been outsourced to profit-making enterprises which ultimately charge more for less.
That is the way monopoly pricing works.
When governments outsource natural monopolies, fat profits to a few emerge very quickly at the long-term expense of the community.
Our investment in the technology to mitigate the impact of climate change is inherently in the interests of our descendants. Not just because we leave them a planet in better shape than it is heading currently, but because we leave them with the infrastructure that has enabled that climate technology to be deployed.
Why are we dancing around short-term partisan fairy tales, procrastinating, and ultimately, delivering sub-standard outcomes to our grandchildren?
Header illustration via Gemini.ai
Jul 2, 2024 | AI, Change, Strategy
AI is the latest new shiny thing in everybody’s sightline.
It seems to me that AI has two faces, a bit like the Roman god Janus.
On one hand we have the large language models, or Generative Pre-trained Transformers; on the other we have the tools that can be built by just about anyone to do a specific task, or range of tasks, using those GPTs.
The former requires huge ongoing capital investment in the technology, and in the infrastructure necessary for operations. Only a few companies are in a position to make those investments: Microsoft, Amazon, Meta, Apple, and perhaps a few others should they choose to do so. (In former days, governments might have considered investing in such fundamental infrastructure, as they did in roads, power generation, and water.)
At the other end of the scale are the tools which anybody could build using the technology provided by the owners of the core technology and infrastructure.
These are entirely different.
Imagine if Thomas Edison and Nikola Tesla between them had managed to be the only ones in a position to generate electricity, selling that energy to anybody who had a use for it: from powering factories, to powering the Internet, to home appliances.
That is the situation we now have with those few who own access to the technology and anybody else who chooses to build on top of it.
The business models that will enable both to grow and prosper are as yet unclear, but are becoming clearer every day.
For example, Apple has spent billions developing the technology behind Siri and Vision Pro, neither of which has evolved into a winning position. In early June (2024) Apple and OpenAI did a deal to incorporate ChatGPT into the Apple operating system.
It is a strategic master stroke.
Apple will build a giant toll booth into the hyper-loyal and generally cashed-up Apple user base. Going one step further, they have branded it ‘Apple Intelligence’. In effect, they have created an ‘AI house-brand’. Others commit to the investment, and Apple charges for access to their user base, at almost no marginal cost.
Down the track, Apple will conduct an auction amongst the few suppliers of AI technology and infrastructure for that access to their user base. To wrangle an old metaphor, they stopped digging for gold, and started selling shovels.
Masterstroke.
It means they can move their focus from the core GPT technology, to providing elegant tools to users of the Apple ecosystem, and charge for the access.
What will be important in the future is not just the foundation technology, which will be in a few hands, but the task specific tools that are built on top of the technology, leveraging its power.
Jun 24, 2024 | AI, Change
We’re all familiar with the standard XY graph. It shows us a point in two dimensions.
AI does a similar thing except that it has millions, and more recently, trillions, of dimensions.
Those dimensions are defined by the words we write into the instructions, built upon the base of raw data to which the machine has access.
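To make the analogy concrete, here is a toy sketch of words as points in a many-dimensional space. The four-dimensional vectors are invented for illustration, not taken from any real model; production models use hundreds or thousands of dimensions, but the principle is the same: words whose vectors point in similar directions carry similar meanings.

```python
import math

# A point on a standard XY graph lives in 2 dimensions.
point = (3.0, 4.0)

# A language model represents each word as a point in many dimensions.
# These 4-dimensional vectors are made up purely for illustration.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.7, 0.9, 0.2],
    "apple": [0.1, 0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Similarity of direction: nearer 1.0 means more related meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In this toy space, 'king' sits nearer 'queen' than 'apple'.
print(cosine(embeddings["king"], embeddings["queen"]))
print(cosine(embeddings["king"], embeddings["apple"]))
```

Scaling the same arithmetic from four dimensions to thousands is what lets a model place every word, and every instruction we write, somewhere meaningful in that space.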
The output from AI is a function of the data that the particular AI tool has been ‘trained’ on and accesses to respond to the instructions given.
Every letter, word, and sentence generated is a probability estimate, given what has been said previously, of what the next word, sentence, paragraph, chapter, and so on, will be.
Generative pre-training of digital models goes back to the 1990s. Usually it was just called ‘machine learning’, a label that plays down the ability of machines to identify patterns in data and generate further data-points that fit those patterns. The revolution came with the word ‘transformer’, the T in ChatGPT, which came from the seminal AI paper written inside Google in 2017, ‘Attention is all you need’.
The simple way to think about a transformer, is to imagine a digital version of a neural network similar to the one that drives our brains. We make connections, based on the combination of what we see, hear, and read, with our own domain knowledge history and attitudes acting as guardrails. A machine simulates that by its access to all the data it has been ‘trained on’, and applies the instructions we give it to then assemble from the data the best answer to the question asked.
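The next-word probability idea above can be sketched with a deliberately crude model that uses only one word of context, counted from a made-up ten-word corpus. A real transformer attends to its entire context window rather than a single preceding word, but the underlying idea, estimating the probability of what comes next from patterns in the training data, is the same:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the model's training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a crude, one-word-of-context
# stand-in for what a transformer does with its whole context window.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Probability estimate for the next word, given the previous one."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# 'cat' follows 'the' twice out of four occurrences: probability 0.5.
print(next_word_probs("the"))
print(next_word_probs("cat"))
```

Swap the single-word lookup for attention over thousands of preceding tokens, and counts over ten words for weights learned from trillions, and you have the shape of a GPT.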
The very first paper on AI, written by Alan Turing in 1950, was entitled ‘Computing machinery and intelligence’. He speculated on the possibility of creating machines that think, introducing the concept of what is now known as the ‘Turing Test’.
The original idea that drove the development of the transformer model by Google was a desire to build a superior search capability. When that was achieved, suddenly the other capabilities became evident.
Google then started thinking about the ramifications of releasing the tool, and hesitated, while Microsoft, which had also been investing heavily through OpenAI (which started as a non-profit), beat them to a release date, forcing Google to follow quickly, stumbling along the way.
Since the release of ChatGPT on November 30, 2022, AI has become an avalanche of tools rapidly expanding to change the way we think about work, education, and the future.
Header cartoon credit: Tom Gauld in New Scientist.
Jun 3, 2024 | AI, Change, Leadership
My time is spent assisting SMEs to improve their performance: strategic, marketing, and operational. Deliberately, I initially try to downplay financial performance as the primary measure, as financial results are outcomes of a host of other choices made throughout every business.
It is those choices around focus, and resource allocation that need to be examined.
Unfortunately, the financial outcomes are the easiest to measure, so dominate in every business I have ever seen.
When a business is profitable, even if that profit is less than the cost of capital, management is usually locked into current ways of thinking. Even when a business is marginal or unprofitable, it is hard to drive change in the absence of a real catalyst, such as a creditor threatening to call in the receivers, or a keystone customer going elsewhere.
People are subject to their own experience and biases, and to those they see and read about in others.
Convention in a wider context, status quo in their own environment.
Availability bias drives them to put undue weight on the familiar, while dismissing other, and especially contrary, information.
Confirmation bias makes us unconsciously seek information that confirms what we already believe, while obscuring the contrary.
Between them, these two forces of human psychology cement in the status quo, irrespective of how poor that may be.
Distinguishing between convention and principle is tough, as you need to set aside these natural biases that exist in all of us. We must reduce everything back to first principles, which is incredibly hard, as we are not ‘wired’ that way.
The late Daniel Kahneman articulated these problems in his book ‘Thinking, Fast and Slow’, based on the data he gathered with colleague Amos Tversky in the seventies. That data interrogated the way we make decisions by experimentation, which enables others to quantitatively test the conclusions, rather than relying on opinion.
That work opened a whole new field of research we now call ‘Behavioural Economics’ and won Kahneman the Nobel Prize. Sadly, however, while many have read about and understand at a macro level these biases we all feel, it remains challenging to make that key distinction between convention, the way we do it, the way it has always been done, and the underlying principles that should drive the choices we make.
As Richard Feynman put it: “The first principle is that you must not fool yourself—and you are the easiest person to fool. So, you have to be very careful about that.”
May 29, 2024 | AI, Governance
Most BBQ conversations about the future of AI end up as a discussion about jobs being replaced, new jobs created, the balance between the two, and the pain of those being replaced by machines.
It is difficult to forecast what those new jobs will be: we have not seen them before, and the circumstances by which they will be created are still evolving.
Eighteen months ago, a new job emerged that now appears to be everywhere.
‘Prompt engineer’.
Yesterday, it seems, there was no such thing as a ‘prompt engineer’. Nobody envisaged such a job; nobody considered the capabilities or training necessary to become an effective prompt engineer. Now, if you put the term into a search engine there are millions of responses: thousands of websites, guides, and courses have popped up from nowhere, promising riches for those who are skilled ‘prompt engineers’, and training for those who hop onto the gravy train.
What is the skill set required to be a prompt engineer?
There are no traditional education courses available. Do you need to be an engineer, a copywriter, a marketer, a mathematician?
This uncertainty makes recruiting extremely difficult. The usual guardrails of qualifications and past experience necessary to fill a role are useless.
How do you know if the 20-year-old with no life experience and limited formal education might be an effective and productive prompt engineer?
How many job descriptions will emerge over the next couple of years that are currently not even under any sort of consideration?
Recruiting rules no longer play a role. We need to hire for curiosity, intellectual agility, and some form of conceptual capability for which I have no word.
The challenging task faced by businesses is how they adjust the mix of capabilities to accommodate this new reality.
Do they proactively seek to build the skills of existing employees which requires investment? Do they clean house and start again, losing corporate memory and costing a fortune? Do they try and find some middle path?
Where and how do you find the personnel capable of building for a future that is undefined?
May 27, 2024 | AI, Governance
Two years ago, Nvidia was a stock nobody had heard of. Now it has a market valuation of US$2.7 trillion. Since the beginning of this year, Google, Amazon, and Microsoft have invested $30 billion in AI infrastructure and seen their market valuations accelerate, and there are hundreds of AI start-ups every week.
Everybody is barking up the same tree: AI, AI, AI…..
Warren Buffett, the most successful investor ever, is famous for saying he would not invest in anything he did not understand.
He has conceded many opportunities have passed him by, but he gets many right. Berkshire is the single biggest investor in Apple: a $200 billion holding at current market value that cost a small fraction of that amount.
Does anyone really understand AI?
Are we able to forecast its impact on communities and society?
We failed miserably with social media; why should AI be any different?
Even the experts cannot agree on some simple parameters. Should there be regulatory controls? Should the infrastructure be considered a ‘public utility’? When, if ever, will sentience be achieved?
Bubbles burst, and many investors get cleaned out, but when you look in detail, there are always elements of the bubble that remain, and prosper.
The 2000 dot-com bubble burst, and many lost fortunes. However, a number of businesses that looked wildly overvalued at the time now dominate the leaderboards: Apple, Amazon, and Google, for example.
The tech was transformative, and at any transformative point there are cracks that many do not see, so they stumble. From the rubble there always emerge some winners, often unexpected and unforecastable.
Is AI just another bubble, or is it as transformative as the printing press, steam, electricity, and the internet?
Header cartoon courtesy of an AI tool.