AI is headed for its dot-com moment. Its society-bending potential is real, but AI’s emerging reliability gaps are so significant – and the costs to cover for them so great – that its payoffs will be smaller and take years longer than any hype-mode AI CEO would have us believe.
That will predictably send markets toward a dot-com-style nosedive as the gap between expectations and reality becomes more visible and harder to ignore. OpenAI’s revenue slowdown is just the first pin to prick the bubble.
The tech works, but…
The 77% crash that wrecked the NASDAQ so badly it took fifteen years for the index to get back to its pre-crash level didn't happen because the technology poised to launch a wild economic revolution was overhyped. The early internet vision was on target, and the tech was real.
But when the true scale of the capital, time, and infrastructure required to make the internet's promise a reality became evident (and proved to be wildly divergent from what the market had priced in), the correction was brutal.
It makes sense that both dot-com and AI CEOs would sell the dream and suggest its realization is right around the corner. That’s what it takes to raise the mind-boggling amounts of capital required, and at very high valuations for their companies.
So much capital was chasing the dot-com dream that investors convinced themselves the wildly expensive fiber optic and last-mile infrastructure buildouts required could be financed fully and executed quickly.
When that proved untrue, the crazy "do we really need that?" startups hit the wall hard. But so did the 2000-era version of our current "Magnificent Seven" stocks. Intel and Oracle, for instance, both dropped 80% from their dot-com boom peaks, as did the shares of most other major dot-com infrastructure companies.
Computational intelligence doesn’t mean operational readiness
The same dynamic is taking shape now. The AI market has been pricing LLMs' computational intelligence as a proxy for plug-and-play operational readiness. But those are two wildly different things – and creating that operational readiness is painfully expensive.
What's been promoted to a bedazzled world is the idea of AI as an all-powerful computational replacement for human capability. But in real-life operations using LLMs, the errors and problems start to appear, and they are numerous and troublesome.
AI's reliability struggles have been euphemistically lumped under the term "hallucinations" almost from the beginning. But that distracts from AI's real business issues, because "hallucinations" implies a mostly functional machine with occasional crazy and easily spotted errors you can catch, correct, and move on from.
That’s not the operational problem, though. The problem is that AI systems fail quietly, at random, and in non-obvious ways that the LLMs themselves are good at covering up.
Humans in the AI loop are one of the flies in the ointment
The result is that in most critical operations, a human must be in the loop every single time, destroying the illusion of brilliant automation and severely damaging AI deployment economics.
I ran into a small version of this recently when conducting a legal-sector visibility study: I needed AI to simply count how many times each law firm was mentioned in the responses to a series of buyer-style prompts. In one case the AI credited a tiny firm with roughly 25 "best in the nation" appearances when the real figure was one.
Turns out I was the idiot: My software chief later explained to me that LLMs are bad at counting because they’re probabilistic pattern‑matchers over text, not deterministic software processing symbols like numbers or characters.
I didn’t know that – and the fact that AI can’t count wasn’t mentioned by Dario Amodei when he was spouting off about half of all entry-level white-collar jobs being eaten by AI within a few years. I guess he should have said, “White-collar jobs with no counting involved.”
And yes, you can write a Python script to do the counting, and yes, AI can help you write it. But that’s the point: LLMs do fewer things well than the hype-cyclers let on. Building those bridges, setting up those safeguards, and keeping humans in the loop eats up a giant share of the promised economics.
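For the curious, here is roughly what that script looks like: a minimal, illustrative Python sketch (the firm names and sample responses below are made up) that does deterministically what the LLM kept getting wrong.

```python
# Minimal sketch of the deterministic counting task the LLM flubbed.
# Firm names and sample responses are hypothetical placeholders.
import re
from collections import Counter

FIRM_NAMES = ["Smith & Lowe", "Harper Grant LLP"]  # hypothetical firms

def count_mentions(responses, firm_names):
    """Count case-insensitive mentions of each firm across all responses."""
    counts = Counter({firm: 0 for firm in firm_names})
    for text in responses:
        for firm in firm_names:
            counts[firm] += len(re.findall(re.escape(firm), text, flags=re.IGNORECASE))
    return counts

if __name__ == "__main__":
    # In the real study these were saved LLM answers to buyer-style prompts.
    sample_responses = [
        "For complex litigation, Harper Grant LLP is often named best in the nation.",
        "Smith & Lowe and Harper Grant LLP both appear on regional shortlists.",
    ]
    for firm, n in count_mentions(sample_responses, FIRM_NAMES).items():
        print(f"{firm}: {n} mention(s)")
```

It works, and it gives the same answer every time you run it. But every patch like this is one more piece of software somebody has to specify, build, test, and maintain.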
That in turn degrades financial projections, which in turn shakes market confidence, which in turn slows down AI infrastructure deployments and software upgrades, which in turn makes the whole process longer and more expensive and…bursts the bubble.
Can it be fixed? Sure – with more compute, more software tools, more model development, and more time. All of which pound away at ROI.
Mainstream buyers haven’t fully processed these challenges
For many enterprise buyers, particularly the sub-Fortune 500 market, AI is being treated as a single category. There's no separation of language models from agent frameworks, retrieval systems, rules-based automation, or traditional software.
The result is that organizations are deploying expensive probabilistic tools in situations where cheaper, deterministic ones would outperform them. Meaning, roughly, that they're using a flighty, error-prone LLM application where a spreadsheet and an intern would be better.
But markets are valuing AI as if raw foundation models can perform enterprise work directly, at scale. The "SaaS-pocalypse" thesis assumes that LLMs which overcount law-firm mentions in a document by 25x will shortly help companies write their own ERP software.
That's fantasy.
In practice, most production deployments require significant infrastructure layered on top — guardrails, monitoring, retrieval systems, human review, permissions, orchestration, logging, fallback logic, custom integrations. That infrastructure is expensive.
And in many cases those failsafe systems are more accurately described as the actual product than the LLMs running underneath them.
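To make that concrete, here is a rough, simplified sketch of the scaffolding that tends to accumulate around a single production LLM call: schema validation, retries, logging, and a human-review fallback. Every name here (call_model, send_to_human_review) is a hypothetical stand-in rather than any vendor's API; the point is how little of the code is the model and how much is the safety net.

```python
# Simplified sketch of the wrapper logic around one production LLM call.
# call_model() and send_to_human_review() are hypothetical stand-ins.
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-wrapper")

MAX_RETRIES = 2

def call_model(prompt: str) -> str:
    """Hypothetical model call; in practice this would hit a vendor API."""
    raise NotImplementedError

def send_to_human_review(prompt: str, raw_output: str) -> None:
    """Hypothetical queue for manual review when automation can't be trusted."""
    log.warning("Escalating to human review: %s", prompt[:60])

def validated_answer(prompt: str) -> dict | None:
    """Call the model, enforce a minimal JSON guardrail, retry, then fall back to a human."""
    raw = ""
    for attempt in range(1, MAX_RETRIES + 1):
        raw = call_model(prompt)
        log.info("Attempt %d returned %d characters", attempt, len(raw))
        try:
            data = json.loads(raw)
            if isinstance(data, dict) and "answer" in data:  # minimal schema check
                return data
        except json.JSONDecodeError:
            pass  # malformed output; try again
    send_to_human_review(prompt, raw)  # automation gave up; a person takes over
    return None
```

Multiply that pattern across retrieval, permissions, orchestration, and audit logging, and it becomes clear why the failsafe layer, not the model, is often the real product.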
Back to the dot-com future
Pets.com wasn't wrong that people would eventually buy things online. It was wrong about the infrastructure required, the timeline for adoption, the level of competition, the reaction of incumbents, and what the unit economics would look like in the interim.
The benefit of the AI hype cycle is that it significantly reduced the cost of capital for all AI players, justified massive infrastructure spending, and financed a long experimental runway. But as reality sets in – a learning process I see unfolding now online and at trade shows – the cost of that capital’s going to head back up.
It’s not what the market has priced in. Even the “but the software improves itself” party line won’t cure this expectations chasm.
None of this means the technology fails. AI will over time become deeply embedded across industries, just as the internet became the infrastructure for everything. But there is a large gap between "This matters enormously over time," and "This justifies current valuations."
The TLDR: AI requires cheap, sustained, hype-fueled capital to become what people currently assume it already is. Stand by for the AI market bust.