The market seems to be content, for now at least, to keep betting big on AI.
While some companies integral to the AI boom, like Nvidia, Oracle and CoreWeave, have seen their value fall from the highs of mid-2025, the US stock market remains dominated by investment in AI.
Of the S&P 500 index of leading companies, 75% of returns come from 41 AI stocks. The "magnificent seven" big tech companies - Nvidia, Microsoft, Amazon, Google, Meta, Apple and Tesla - account for 37% of the S&P's performance.
Such dominance, based almost exclusively on building one kind of AI - Large Language Models - is sustaining fears of an AI bubble.
Nonsense, according to the AI titans.
"We are long, long away from that," Jensen Huang, CEO of AI chip-maker Nvidia and the world’s first $5trn company, told Sky News last month.
Not everyone shares that confidence.
So much confidence in one way of making AI - which so far hasn't delivered profits anywhere close to the level of spending - must be testing the nerve of investors wondering where their returns will come from.
The consequences of the bubble bursting could be dire.
"If a few venture capitalists get wiped out, nobody's gonna be really that sad," said Gary Marcus, AI scientist and emeritus professor at New York University.
But with a large part of US economic growth this year down to investment in AI, the "blast radius" could be much greater, said Marcus.
"In the worst case, what happens is the whole economy falls apart, basically. Banks aren't liquid, we have bailouts, and taxpayers have to pay for it."
Could that happen?
Well there are some ominous signs.
By one estimate, Microsoft, Amazon, Google, Meta and Oracle are expected to spend around $1trn on AI by 2026.
OpenAI, maker of the breakthrough Large Language Model chatbot ChatGPT, is committing to spend $1.4trn over the coming three years.
But what are investors in those companies getting in return? So far, not very much.
Take OpenAI: it's expected to make little more than $20bn in revenue in 2025 - a lot of money, but nothing like enough to sustain spending of $1.4trn.
The size of the AI boom - or bubble depending on your view - comes down to the way it’s being built.
Computer cities
The AI revolution arrived in early 2023 when OpenAI released GPT-4.
The model represented a mind-blowing improvement in natural language, computer coding and image generation that grew almost entirely out of one advance: scale.
GPT-4 required 3,000 to 10,000 times more computing power - or "compute" - than the earlier GPT-2.

To make it smarter, it was also far bigger and trained on far more data. GPT-2 had 1.5 billion "parameters" - the internal settings a model learns during training - compared with a rumoured 1.8 trillion for GPT-4, which was trained on essentially all the text, image and video data on the internet.
The leap in performance was so great that many believed "Artificial General Intelligence", or AGI - AI that rivals humans on most tasks - would come from simply repeating that trick.
And that's what's been happening. Demand for cutting-edge GPU chips to train AI soared - and with it the share price of Nvidia, which makes them.
The bulldozers then moved in to build the next generation of mega-data centres to run the chips and make the next generations of AI.
And they moved fast.
Stargate, announced in January by Donald Trump, OpenAI's Sam Altman and other partners, already has two vast data centre buildings in operation.
By mid-2026 the complex in central Texas is expected to cover an area the size of Manhattan's Central Park.
And already, it’s beginning to look like small fry.
Meta’s $27bn Hyperion data centre being built in Louisiana is closer to the size of Manhattan itself.
The data centre is expected to consume twice as much power as the nearby city of New Orleans.
The rampant increase in power demand is putting a major squeeze on America’s power grid with some data centres having to wait years for grid connections.
A problem for some - but not, say optimists, for firms like Microsoft, Meta and Google, whose pockets are deep enough to build their own power stations.
Once these vast AI brains are built and switched on however, will they print money?
Stale Chips
Unlike other expensive infrastructure like roads, rail or power networks, AI data centres are expected to need constant upgrades.
Investors have good estimates for "depreciation curves" of various types of infrastructure asset. But not so for cutting-edge purpose-built AI data centres which barely existed five years ago.
Nvidia, the leading maker of AI chips, has been releasing new, more powerful processors every year or so. It says its latest chips will run for three to six years.
But there are doubts.
Fund manager Michael Burry, immortalised in the movie The Big Short for predicting America's sub-prime crash, recently announced he was betting against AI stocks.

His reasoning: AI chips will need replacing every three years - and, given the race between rivals for the latest chips, perhaps faster than that.
The cooling, switching and wiring systems of data centres also wear down over time and are likely to need replacing within 10 years.
A few months ago, The Economist estimated that if AI chips alone lose their edge every three years, it would reduce the combined value of the five big tech companies by $780bn.

If chips depreciate over two years instead, that number rises to $1.6trn.
Factor in that depreciation and it further widens the already colossal gap between their AI spending and likely revenues.
By one estimate, big tech firms will need to see $2trn in annual revenue by 2030 to justify their AI costs.
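The arithmetic behind those depreciation fears can be sketched with a simple straight-line calculation. The figures below are illustrative only - the $100bn fleet cost is a hypothetical round number, not a reported one - but they show why halving a chip's assumed useful life doubles the annual charge against profits.

```python
def annual_depreciation(cost_bn: float, useful_life_years: int) -> float:
    """Straight-line depreciation: the asset's cost spread evenly
    over its assumed useful life, in $bn per year."""
    return cost_bn / useful_life_years

# Hypothetical $100bn fleet of AI chips, depreciated over the
# lifespans discussed above: Nvidia's claimed six years, Burry's
# three, and the more pessimistic two.
fleet_cost = 100.0

for life in (6, 3, 2):
    charge = annual_depreciation(fleet_cost, life)
    print(f"{life}-year life -> ${charge:.1f}bn depreciation per year")
```

Moving from a six-year to a two-year life triples the annual charge - which is why the assumed replacement cycle swings valuation estimates by hundreds of billions of dollars.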
Are people buying it?
And then there’s the question of where the profits are to justify the massive AI investments.
AI adoption is undoubtedly on the rise.
You only have to skim your social media to witness the rise of AI-generated text, images and videos.
Kids are using it for homework, their parents for research, or help composing letters and reports.
But beyond casual use and fantastical cat videos, are people actually profiting from it - and therefore likely to pay enough for it to satisfy trillion-dollar investments?
There are early signs current AI could revolutionise some markets, like software and drug development, creative industries and online shopping.

And by some measures the future looks promising: OpenAI claims to have 800 million "weekly active users" across its products, double the figure in February.
However, only 5% of those are paying subscribers.
And when you look at adoption by businesses - where the real money is for Big Tech - things don’t look much better.
According to the US Census Bureau, at the start of 2025 between 8% and 12% of companies said they were using AI to produce goods and services.
For larger companies - with more money to spend on AI perhaps - adoption grew to 14% in June but has fallen to 12% in recent months.
According to analysis by McKinsey, the vast majority of companies are still in the pilot stage of AI rollout or looking at how to scale their use.
In a way, this makes total sense. Generative AI is a new technology, with even the companies building it still trying to figure out what it's best for.
But how long will shareholders be prepared to wait before profits come even close to paying off the investments they’ve made?
Especially, when confidence in the idea that current AI models will only get better is beginning to falter.
Is scaling failing?
Large Language Models are undoubtedly improving.
According to industry "benchmarks" - technical tests that evaluate AI's ability to perform complex maths, coding or research tasks - performance is tracking the scale of computing power being added, currently doubling every six months or so.
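Doubling every six months compounds quickly, which is what makes the scaling bet so expensive. A one-line sketch of that growth curve, using only the article's rough six-month figure:

```python
def compute_growth(months: float, doubling_period_months: float = 6.0) -> float:
    """Growth factor after `months`, assuming the quantity doubles
    every `doubling_period_months` (the article's rough figure)."""
    return 2 ** (months / doubling_period_months)

print(compute_growth(12))  # one year: 4x
print(compute_growth(36))  # three years: 64x
```

Sustaining benchmark progress this way implies roughly quadrupling compute every year - 64 times more within three years.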
But on real-world tasks, the evidence is less strong.
LLMs work by making statistical predictions of what an answer should be, based on their training data, without understanding what that data "means".
They struggle with tasks that involve understanding how the world works and learning from it.
Their architecture has no long-term memory that would let them learn which types of data are important and which are not - something human brains do without having to be told.

For that reason, while they make huge improvements on certain tasks, they consistently make the same kinds of mistakes and fail at the same kinds of tasks.
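The "statistical prediction" idea can be shown with a toy example. Real LLMs use neural networks trained on trillions of words, but this minimal bigram model - counting which word follows which in a made-up training text - illustrates the same principle: the prediction comes purely from patterns in the data, with no model of what the words mean.

```python
from collections import Counter, defaultdict

# Hypothetical training text for illustration.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it followed "the" most often
```

The model "knows" that "cat" tends to follow "the" only because of the counts - it has no idea what a cat is, which is the gap critics say scaling alone cannot close.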
"Is the belief that if you just 100x the scale, everything would be transformed? I don't think that's true," Ilya Sutskever, a co-founder of OpenAI, told the Dwarkesh Podcast last month.
The AI scientist, who helped pioneer ChatGPT before leaving OpenAI, predicted: "It's back to the age of research again, just with big computers."
Will those who've taken big bets with AI be satisfied with modest future improvements, while they wait for potential customers to figure out how to make AI work for them?
"It's really just a scaling hypothesis, a guess that this might work. It's not really working," said Prof Marcus.
"So you're spending trillions of dollars, profits are negligible and depreciation is high. It does not make sense. And so then it's a question of when the market realises that."
(c) Sky News 2025: Is the AI bubble about to burst? If so the consequences could be dire