Every major technology conference these days carries an odd tension. On one side of the room, venture capitalists discuss with almost religious fervor how artificial intelligence will transform civilization within a generation. On the quieter side, researchers and economists sort through data that tells a more complicated story. Both groups are examining the same technology, and they are reaching quite different conclusions.
The figures under discussion are almost beyond belief. OpenAI has committed to buying $300 billion of computing power from Oracle over the next five years. Nvidia, the chip company supplying the engines of this entire gold rush, briefly surpassed a $5 trillion valuation, a number that would have been unthinkable ten years ago. Alphabet’s stock doubled in just seven months, lifting the company’s valuation to roughly $3.5 trillion. Meta, Microsoft, Amazon, and Google are expected to spend about $560 billion on AI-related capital expenditure by the end of this year. These are not merely substantial wagers; they are bets on the direction of civilization.
| Item | Detail |
|---|---|
| Primary Subject | Global AI investment boom and emerging market concerns |
| Key Company | Alphabet Inc. (Google’s parent), OpenAI, Nvidia, Microsoft, Meta, Amazon |
| Sundar Pichai (Alphabet CEO) | Warned of “elements of irrationality” in the AI boom; stated no company is immune if the bubble bursts |
| Alphabet Valuation | ~$3.5 trillion (shares doubled in seven months as of late 2025) |
| Nvidia Valuation | Reached $5 trillion — first company to do so |
| OpenAI Committed Spend | $300 billion in computing power with Oracle over five years |
| AI Capital Expenditure (2025 est.) | $61.9 billion by businesses; over 1.1% of U.S. GDP growth driven by AI spending |
| ROI Reality Check | 95% of 52 organizations studied showed zero return on investment despite $30–40 billion spent |
| AI Project Abandonment Rate | 42% of pilot projects abandoned by end of 2024 (up from 17% in 2023) |
| Historical Parallel | The dot-com boom and bust of the late 1990s |
| Reference | BBC News — Google boss on AI irrationality |
However, something does not add up. According to McKinsey & Company research, nearly eight out of ten businesses now use generative AI, yet roughly as many report no discernible effect on their profitability. A study that tracked more than 300 publicly disclosed AI initiatives found that ninety-five percent failed to increase profits. And Gartner, the research firm that charts the adoption of new technologies, says artificial intelligence has entered what its Hype Cycle calls the “trough of disillusionment.” That is not the headline of a skeptic’s newsletter. It comes from one of the world’s best-known technology advisory firms.
It is hard to ignore how much this resembles a story we have heard before. In the late 1990s, internet companies commanded exorbitant valuations on the promise of a technology that would transform everything. Eventually, the technology did transform everything. It was not the idea that collapsed; it was the belief that the payoff was imminent. In a candid interview with the BBC at Google’s California headquarters, Alphabet CEO Sundar Pichai made precisely this point.
Discussing the current AI boom, he chose his words carefully, saying that the industry can “overshoot” in investment cycles like this one. His company has much to gain from optimism and much to lose from a correction. He called the rise of AI “an extraordinary moment” but conceded that there are “elements of irrationality” at work, adding that no business, his own included, would be immune if something went wrong.
Candor like that, from someone in Pichai’s position, is worth sitting with. This is not a gloomy hedge fund manager talking his book. The head of one of the world’s most significant AI companies is echoing the tone of Alan Greenspan’s famous 1996 warning about “irrational exuberance” in stock markets, a warning that came four full years before the dot-com crash actually arrived.
The mechanics of a potential unwinding are also coming into focus. A foundational tenet of the AI industry, often called the scaling laws, holds that performance improves predictably as models are made larger, fed more data, and given more processing power. For several years this held true. GPT-3 had 175 billion parameters; GPT-4, estimated at around 1.8 trillion, was roughly ten times larger and clearly more capable. Benchmark scores climbed. Boardrooms grew enthusiastic. Money poured in.
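For readers curious about what “predictably” means here, the original scaling-law papers (Kaplan et al., 2020) fit a power law relating a model’s test loss $L$ (lower is better) to its parameter count $N$. The form and constant below follow their published fit and should be read as an illustration, not a law of nature:

$$
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076
$$

The small exponent is the whole story of diminishing returns: since $10^{0.076} \approx 1.19$, a tenfold increase in model size cuts loss by only about 16 percent on this fit, and each further tenfold jump buys the same modest fraction at vastly greater cost.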
But the gains appear to be diminishing. The most recent models deliver far less dramatic leaps at far greater expense. In a 2024 survey of 475 AI researchers, more than three-quarters said it was “unlikely” or “very unlikely” that current methods could produce a system matching human intelligence. In the words of Gary Marcus, a prominent AI researcher and longtime skeptic, these systems are “giant statistical machines” that learn correlations rather than understanding. They hallucinate facts. They struggle with anything significantly different from their training data. And, most importantly, the industry may be running out of that data. Earlier this year, Elon Musk asserted that “the cumulative sum of human knowledge has been exhausted in AI training.” Musk is hardly the most measured of pundits, but bluntness has its uses.
Investors, by and large, have responded by continuing to spend. That makes a certain sense, even if it unnerves onlookers. Alphabet, Microsoft, and Nvidia are not dot-com-era startups; they earn hundreds of billions of dollars in real revenue. Goldman Sachs, hardly a firm known for talking down markets, says the industry is “not in a bubble” yet. The qualifier does a lot of work in that sentence. Since 2023, more than half of the S&P 500’s growth has come from the seven big tech firms most closely associated with AI. Whatever one thinks of the valuations, these are real companies.
Yet the gap between ambition and evidence is becoming unsettling. The research organization METR ran a careful study in which experienced software developers completed coding tasks both with and without AI assistance. Coding, according to AI’s proponents, should be its strongest domain. Before the experiment, experts forecast a productivity gain of nearly 40%. Afterward, the developers themselves believed they had worked about 20% faster. When the researchers actually measured output, developers using AI had finished their tasks roughly 20% more slowly. One participant compared the experience to the “digital equivalent of shoulder-surfing an overconfident junior developer.”
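To see the size of that miss, take the study’s rounded percentages at face value, read a 40% productivity gain as 40% more output per hour, and let $t_0$ be the time a task takes without AI:

$$
t_{\text{forecast}} = \frac{t_0}{1.4} \approx 0.71\,t_0, \qquad t_{\text{measured}} \approx 1.20\,t_0, \qquad \frac{t_{\text{measured}}}{t_{\text{forecast}}} \approx 1.7
$$

On those figures, tasks took roughly 70% longer than the experts had predicted, a miss large enough to swallow the entire promised benefit and then some.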
This may simply be the awkward early phase of what economists call the productivity J-curve: the period in which a new technology creates friction before it creates value. Electricity did not revolutionize factory production until the 1910s, when Henry Ford reorganized manufacturing around it, some thirty years after electric power first became widely available. Some serious economists argue that AI’s productivity boost is likewise just a few years away. It is a plausible argument. It may well prove correct.
The uncomfortable truth, however, which serious analysts are beginning to say out loud, is that trillions of dollars are being committed on a timeline that remains entirely speculative. Jamie Dimon of JPMorgan says some of the capital invested in AI will “probably be lost.” Sam Altman himself has said that during this phase “people will overinvest and lose money.” Jeff Bezos calls the current moment “kind of an industrial bubble.” These are not outsiders. They are the people closest to the action, publicly acknowledging what many in the industry say only in private.
Whether this ends in a gradual recalibration or something sharper may come down to how long patience lasts. For now, the machinery keeps running. Data centers are expanding fast enough to strain electricity grids. Chip orders are backlogged. Hiring continues. The factories, in other words, are humming. But one question is becoming harder to put off: when do the expenditures have to start showing up in the results?