Where is the AI business model hiding?

AI is the newest, shiniest thing we have seen since, well, perhaps ever, at least in the speed with which it has overtaken public consciousness.

ChatGPT was released to the ‘wild’ in November 2022. In commercial terms, yesterday.

In that time, it has come to dominate discussion, business planning, and questions of capability, and it has profoundly changed the face of stock markets.

An amazing outcome for a technology without a business model.

The committed AI infrastructure spending over the next year by the big 5 LLM builders, OpenAI, Amazon, Microsoft, Meta, and Google, is over $200 billion. Depending on your sources, this might vary a bit, but it may even be on the low side. It does not count the billions being spent by everybody else, largely on setting about leveraging the ‘infrastructure’ delivered by the LLM builders.

Again, depending on your sources, the revenues being generated over the next year by AI suppliers, of both the infrastructure and the tools rapidly becoming available, are probably $20 billion.

Nowhere in history has there been a tsunami of investment of this size and speed in the absence of a solid business model. There is no clear way forward to generating a return on that investment.

This is the equivalent of a gold rush, except that in a gold rush, if you were the lucky one to find those elusive nuggets, you had some idea what they were worth, and an established way of monetising the metal.

Nothing of the sort exists with AI.

I have done plenty of capital proposals in my time, some with forecasts that bordered on the wildly optimistic because I believed a change of some sort would be generated by the object of that capex. In my wildest dreams, I have never proposed anything like the ratio of capex to current revenue exhibited by this investment.

There is confusion around the term ‘trillion’. Historically, the US and UK definitions differed, the UK version being 10^18, one million times larger than the US trillion of 10^12, or one million million. I explain this for clarity and comparative purposes.

On current stock market valuations (August 2024), Nvidia, a business few had heard of a year ago, is the most valuable company on earth, with a valuation of US$3.2 trillion. It trades places regularly with Apple for the No. 1 spot. Currently Apple is number 2, also at a rounded US$3.2 trillion, but a few tens of millions behind Nvidia. Microsoft is third at US$3.1 trillion, followed by Amazon at US$1.9 trillion, and Meta at US$1.3 trillion. The comparison I wanted to highlight is with the GDP of Australia, at US$1.7 trillion. Australian GDP is just over half the market valuation of Nvidia or Apple, a sobering thought.

An investment of $200 billion against current revenues of $20 billion is simply the biggest financial gamble in history, by orders of magnitude.
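For clarity, the simple arithmetic behind those comparisons looks like this (a sketch using the figures quoted above, not independently verified):

```python
# Rough arithmetic behind the figures quoted in the text (not independently verified).
capex = 200e9           # committed AI infrastructure spend over the next year, US$
revenue = 20e9          # estimated AI revenues over the same period, US$
print(f"Capex to revenue ratio: {capex / revenue:.0f}x")  # -> 10x

nvidia_value = 3.2e12   # Nvidia market valuation as quoted, US$
australia_gdp = 1.7e12  # Australian GDP as quoted, US$
print(f"Australian GDP vs Nvidia valuation: {australia_gdp / nvidia_value:.0%}")  # -> ~53%
```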

The people running these massive businesses are not stupid. They are betting their companies (and they are ‘their’ companies, as control is in very few hands) on massive returns, which means in turn that the fabric of everything we see and do must change, very quickly. The business models will change, and they will not just be everybody subscribing for modest monthly amounts to the latest LLM. Whole new industries will have to be ‘invented’, with successful business models in place, for there to be a return on the capex being deployed.

The windows of opportunity that will open, and close just as quickly, over the next decade are immense.

No wonder there is a gold rush; it is just the location of the gold that is still in question.

 

 

Synthetic Data: A Game Changer for Small Business.

AI promises a multitude of productivity benefits for all enterprises.

For the thousands of SMEs competing with much larger rivals, AI offers the potential for easily accessible, reliable, and credible data on an unprecedented scale.

One such opportunity lies in market research, which has often been out of reach for SMEs due to its high cost.

AI systems are sophisticated probability machines. Given a base to ‘learn’ from and a set of instructions, AI can predict the next letter, word, sentence, illustration, piece of code, or conclusion. Feed it the right data to learn from, prompt that ‘learning’ with instructions, and the probability machine goes to work.
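For the technically inclined, here is a minimal sketch of what that ‘probability machine’ does at each step: it turns scores over candidate words into probabilities and samples the next one. The vocabulary and scores below are invented purely for illustration.

```python
# Minimal sketch of 'next word' prediction: the model scores every candidate
# continuation, converts the scores to probabilities, and samples one.
# Vocabulary and scores are invented for illustration only.
import math
import random

vocab = ["market", "research", "data", "insight"]
logits = [2.1, 0.4, 1.7, 0.9]  # hypothetical raw scores from a trained model

# Softmax turns raw scores into probabilities that sum to 1
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Sample the next word in proportion to its probability
next_word = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 2) for w, p in zip(vocab, probs)}, "->", next_word)
```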

‘Synthetic data’ is the analysed outcome of a well-articulated AI search for relevant data from publicly available sources, potentially enhanced by data from a company’s own resources.

For instance, an FMCG supplier might need ‘attitude and usage’ research to support ranging of a new product in major retailers. Traditionally, they might spend $100-200k on a combined qualitative and quantitative market research project, which could take several months to complete.

Way out of the reach of most SMEs.

Alternatively, they could invest $15-25k in an AI application to scan social media, relevant publicly available statistics, and their own sales and scan data. This AI-generated ‘synthetic data’ might not be quite as accurate as a well-designed and executed market research study. However, it could be produced quickly, relatively cheaply, and be sufficiently accurate to provide compelling market insights and consumer behaviour forecasts.
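As a purely hypothetical illustration of the idea, a ‘synthetic data’ exercise of this kind might blend public sentiment signals with a company’s own scan data into a rough demand index. Every category, weight, and number in this sketch is invented.

```python
# Hypothetical sketch: blending public sentiment with internal retail scan data
# to produce a rough demand index for a new FMCG product.
# All categories, weights, and figures are invented for illustration.

public_sentiment = {"positive": 0.62, "neutral": 0.28, "negative": 0.10}  # from social scanning
weekly_scan_units = [1200, 1350, 1500, 1480]  # the company's own scan data

# Net sentiment: share of positive mentions minus share of negative mentions
net_sentiment = public_sentiment["positive"] - public_sentiment["negative"]

# Sales momentum: average week-on-week growth in scan units
growth_rates = [(b - a) / a for a, b in zip(weekly_scan_units, weekly_scan_units[1:])]
avg_growth = sum(growth_rates) / len(growth_rates)

# A crude demand index weighting sentiment and sales momentum equally
demand_index = 0.5 * net_sentiment + 0.5 * avg_growth
print(f"Net sentiment: {net_sentiment:.2f}, average weekly growth: {avg_growth:.1%}")
print(f"Demand index: {demand_index:.2f}")
```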

Suddenly, opportunities previously out of reach for SMEs can be leveraged. Combined with their shorter decision cycles and less risk-averse nature, SMEs now have the potential to haul back some of the ground they have lost to deeper-pocketed large businesses.

Header illustration is via a free AI tool. It took less than 30 seconds to brief and deliver.

 

 

5 ways to discriminate between the guru and the copy-cat?

Increasingly, we must distinguish between ‘content’ created by some AI tool, masquerading as thought leadership and advice, and the genuine output of experts seeking to inform, encourage debate and deepen the pool of knowledge.

As I read and hear the superficial nonsense spread around as serious advice, I am constantly reminded of the story Charlie Munger often told of Max Planck and his chauffeur.

Doctor Planck had been touring Europe giving the same lecture on quantum mechanics to scientific audiences. His constant chauffeur had heard the presentation many times and had learnt it by heart. One night in Munich, he suggested that he give the lecture while Doctor Planck, acting as the chauffeur, sat in the audience, resting.

After a well-received presentation, a professor asked a question, to which the chauffeur responded, ‘I am surprised that in an advanced city like Munich I get such an elementary question. I am going to ask my chauffeur to respond’.

It is hard at a superficial level to tell the difference between a genuine expert, and someone who has just learned the lines.

To tell the difference between those two, you must:

  • Dig deeper to determine the depth of knowledge and where it came from. Personal stories and anecdotes are always a good marker of originality.
  • Understand how the information adjusts to different circumstances and contexts. An inability to articulate the ‘edge’ situations offers insight into the depth of thinking that has occurred.
  • Look for the sources of the information being delivered. Peer-reviewed papers and research are always better than some random YouTube channel curated for numbers to generate ad revenue.
  • Consider the ‘tone of voice’ in which the commentary is delivered. AI-generated material will be generic, bland, average. By contrast, genuine originality will always display the verbal, written, and presentation characteristics of the originator.
  • Challenge the ‘expert’ to break down the complexity of the idea into simple terms that a 10-year-old would understand.

These will indicate to you the degree of understanding from first principles, the building blocks of knowledge, that the ‘Guru’ has.

The header is a photo of Max Planck in his study, without his chauffeur.

 

 

 

The ultimate ‘AI machine’ between our ears.

Our brains work on 3 levels.

At the most basic level is the ‘reptilian brain’. This is the ancient wiring we have in common with every other animal. It monitors and manages the automatic things that must happen for life: our instincts, temperature control, heart rate, respiration, reproductive drives, everything necessary for the survival of the animal.

Next is the limbic system. This manages our emotional lives, fear, arousal, memories; it is where we store our beliefs. In effect, it provides the framework through which we look to make sense of the world.

Finally, the neocortex, the newest part of our brain, is what differentiates us from other animals. It is where we make choices, and it controls our language, imagination, and self-awareness.

This three-part picture is a metaphor. The parts of the brain do not act independently, but in an entirely integrated manner, each having an impact on the others, and receiving input from the others.

Consider which parts of this complex, interconnected, and interdependent neural system are replaceable by AI. There are not all that many of them, beyond the extrapolation of language and imagery from what is in the past.

Despite the hype, we have a long way to go before artificial sentience will be achieved, if it is possible. (Expert opinion varies from ‘Within the decade’ to ‘Never’).

However, who cares?

The productivity gains from AI are present in some form in every current job, and the numbers of new jobs that will emerge are huge. Nobody had conceived of the job of ‘prompt engineer’ 3 years ago!

These new jobs, in combination with the renewal of those currently available, will deliver satisfaction, and a standard of living our kids will thank us for.

Sadly, there is always a flip side. In this case it is the dark downsides we all see emerging from social media, which will also be put on steroids, and the social dislocation that will occur to those on the sharp end of the changes in jobs.

How we manage that balance will be the challenge of the 2030s.

 

Image by Canva.com

 

Neglect to Necessity: Infrastructure is a gift.

In a world dominated by discussion of AI, of electrification to ‘save the planet’, and of the impact of both on white-collar and service jobs, the public seems to miss something fundamental.

All this scaling of electrification to replace fossil fuel, power the new world of AI, and maintain our standard of living, requires massive infrastructure renewal.

Construction of that essential electricity infrastructure requires many skilled people in many functions. From design through fabrication and installation to operational management and maintenance, people are required. It also requires ‘satellite infrastructure’: the roads, bridges, drivers, trucks, and so on.

None of the benefits of economy wide electrification and AI can be delivered in the absence of investment in the hard assets.

Luckily, investment in infrastructure, hard as it may be to fund in the face of competing and increasing demands on public funds, is a gift we give to our descendants.

I have been highly critical of choices made over the last 35 years which have gutted our investment in infrastructure, science, education, and practical training. Much of what is left has been outsourced to profit-making enterprises which ultimately charge more for less.

That is the way monopoly pricing works.

When governments outsource natural monopolies, fat profits to a few emerge very quickly at the long-term expense of the community.

Our investment in the technology to mitigate the impact of climate change is inherently in the interests of our descendants. Not just because we leave them a planet in better shape than it is heading currently, but because we leave them with the infrastructure that has enabled that climate technology to be deployed.

Why are we dancing around short-term partisan fairy tales, procrastinating, and ultimately, delivering sub-standard outcomes to our grandchildren?

Header illustration via Gemini.ai

 

The two separate faces of AI.

AI is the latest new shiny thing in everybody’s sightline.

It seems to me that AI has two faces, a bit like the Roman god Janus.

On one hand we have the large language models, or Generative Pre-trained Transformers, and on the other we have the tools that can be built by just about anyone to do a specific task, or range of tasks, using the GPTs.

The former requires huge ongoing capital investment in the technology and infrastructure necessary for operations. There are only a few companies in a position to make those investments: Microsoft, Amazon, Meta, Apple, and perhaps a few others should they choose to do so. (In former days, governments might have considered investing in such fundamental infrastructure, as they did in roads, power generation, and water infrastructure.)

At the other end of the scale are the tools which anybody could build using the technology provided by the owners of the core technology and infrastructure.

These are entirely different.

Imagine if Thomas Edison and Nikola Tesla between them had managed to be the only ones in a position to generate electricity. They sold that energy to anybody who had a use for it, from powering factories, to powering the Internet, to home appliances.

That is the situation we now have with those few who own access to the technology and anybody else who chooses to build on top of it.
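To make that distinction concrete, here is a minimal sketch of the sort of task-specific tool anybody might build on top of a GPT. It assumes the OpenAI Python client with an API key already configured; the model name, prompt, and function are illustrative only, not a recommendation.

```python
# Minimal sketch of a task-specific 'tool' built on top of someone else's GPT.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set in
# the environment; the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

def summarise_customer_feedback(feedback: str) -> str:
    """Distil raw customer feedback into three short, plain-English insights."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any available model would do
        messages=[
            {"role": "system", "content": "Distil customer feedback into three short insights."},
            {"role": "user", "content": feedback},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarise_customer_feedback("The new pack size is great, but it sells out too quickly."))
```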

The business models that will enable both to grow and prosper are as yet unclear, but they are becoming clearer every day.

For example, Apple has spent billions developing the technology behind Siri and Vision Pro, neither of which has evolved into a winning position. In early June (2024) Apple and OpenAI did a deal to incorporate ChatGPT into the Apple operating system.

It is a strategic master stroke.

Apple will build a giant toll booth into its hyper-loyal and generally cashed-up user base. Going one step further, it has branded it ‘Apple Intelligence’. In effect, Apple has created an ‘AI house-brand’. Others commit to the investment, and Apple charges for access to its user base, at almost no marginal cost.

Down the track, Apple will conduct an auction amongst the few suppliers of AI technology and infrastructure for that access to their user base. To wrangle an old metaphor, they stopped digging for gold, and started selling shovels.

Masterstroke.

It means they can move their focus from the core GPT technology, to providing elegant tools to users of the Apple ecosystem, and charge for the access.

What will be important in the future is not just the foundation technology, which will be in a few hands, but the task specific tools that are built on top of the technology, leveraging its power.