A widely read piece in Futurism, picked up here at RealClearMarkets, declared on April 24 that "the horrible economics of AI are starting to come crashing down." The article concludes that AI firms will not bring in enough money to cover what they cost to run, and that if they fail, the broader economy goes with them.
The article points to specific evidence, starting with pricing plans that don’t cover costs. One case illustrates the point. On April 4, Anthropic stopped allowing Pro and Max subscribers to run high-volume third-party agents on flat-rate plans, shifting that traffic to usage-based billing after some users generated ten to fifty times their covered compute costs.
That’s real. But it doesn’t mean what the article claims. The argument makes three errors—and misses what may be the bigger risk in the AI buildout: what’s funding it. Let’s take a look.
A Pricing Reset, Not a Pricing Crisis
Anthropic and OpenAI are not collapsing. They are repricing, and the cause is straightforward. Their subscription plans were priced for people typing into a chat box for a few minutes at a time. A new wave of agent tools now lets people hand off a job and walk away while the software runs on its own for hours.
Anthropic’s April 4 move shows the pattern. A flat-rate plan can’t support users running agents around the clock; usage-based billing can. OpenAI is making a similar adjustment from a different angle, testing ads to generate revenue from users who don’t currently pay.
None of this signals a broken business model. It signals a pricing mistake being corrected. Some users were costing Anthropic and OpenAI ten to fifty times more than their flat fees covered. Usage-based billing brings price back in line with cost and ends what had become an unsustainable subsidy for the heaviest users. The fix did not come earlier because it did not have to: as long as investors were covering the losses, mispriced plans could stand. Now that capital is tighter, the companies need to show they can make money on their own, and that pressure made this the moment to act.
The 2 Trillion Number Doesn't Hold Up
The Futurism piece, drawing on Gartner, argues that if today’s costs and pricing held, AI companies would need close to 2 trillion dollars in annual revenue by 2029 to justify the trillions being poured into data centers. Hitting that target, on current pricing, would mean selling 50,000 to 100,000 times today’s volume of AI “tokens” -- the units providers use to meter usage, roughly one per word in or out. On its face, that sounds impossible.
But that back‑of‑the‑envelope rests on two big assumptions that are already starting to fail.
The first is that what providers earn on each unit of AI work stays roughly where it is today. In reality, most forecasters expect the cost of running an AI request to fall sharply over the rest of this decade, as chips get faster, software gets more efficient, and more work shifts from distant data centers to personal devices. If each request gets much cheaper to provide, the same amount of capital can be supported with less revenue than today’s static math implies, even if competition forces prices down as well.
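To see why the first assumption matters, here is a minimal sketch of the margin arithmetic. The gross-profit target and both margins are assumptions chosen for illustration, not figures from the article; the point is only that capital is serviced out of gross profit, so cheaper-to-serve requests shrink the revenue needed to support the same buildout.

```python
# Illustrative only: the revenue needed to justify a capital outlay depends on
# gross margin, not on revenue alone. Both margins below are assumptions for
# the sketch, not figures from the article.

def revenue_needed(target_gross_profit: float, gross_margin: float) -> float:
    """Revenue required to produce a given gross profit at a given margin."""
    return target_gross_profit / gross_margin

target = 400e9  # assumed annual gross profit needed to service the buildout

low_margin_rev = revenue_needed(target, 0.20)   # at today's serving costs
high_margin_rev = revenue_needed(target, 0.60)  # if cost per request falls sharply

# Same capital supported with roughly a third of the revenue.
print(round(low_margin_rev / high_margin_rev))  # 3
```

Under these assumed numbers, a tripling of gross margin cuts the required revenue by two-thirds, which is why a static revenue target overstates the problem if serving costs keep falling.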
The second is that AI will still be sold by the token in 2030. The fixation on tokens has already produced a workplace fad dubbed “tokenmaxxing,” built on the simple idea that more tokens must mean more productivity. Vendors are pushing in a different direction. Salesforce is experimenting with pricing tied to tasks completed, and HubSpot’s chief executive put it more bluntly: outcome‑maxxing beats tokenmaxxing. Tokens may still be counted behind the scenes, but customers are increasingly paying for tasks completed, seats licensed, or outcomes delivered.
The data back up that shift: tokens track usage, not value. In early 2026, Jellyfish analyzed 12,000 developers across 200 companies and found that heavy users paid almost ten times more for AI tools while getting only about twice as much work out the door. That is a falling return on spend, and no CFO will keep funding it. CFOs pay for outcomes, not for ever‑larger token bills.
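The Jellyfish finding can be restated as a cost-per-output ratio. The tenfold spend and twofold output multiples are as reported; the baseline of 1.0 is a normalization for the sketch, not data.

```python
# The Jellyfish finding as a return-on-spend ratio. The 10x and 2x multiples
# are the reported figures; the baseline of 1.0 is a normalization, not data.

baseline_spend, baseline_output = 1.0, 1.0
heavy_spend, heavy_output = 10.0, 2.0  # ~10x the spend, ~2x the shipped work

cost_per_unit_baseline = baseline_spend / baseline_output
cost_per_unit_heavy = heavy_spend / heavy_output

print(cost_per_unit_heavy / cost_per_unit_baseline)  # 5.0
```

In other words, each unit of shipped work costs roughly five times as much at the heavy-usage end, which is the falling return on spend the text describes.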
The 2‑trillion‑dollar calculation is built on inputs that are already obsolete. The number is not a forecast. It is the past projected forward.
Firm Trouble Is Not Economy Trouble
The third miss is the leap from sector to economy. Even granting the article's premise -- that AI firms could face revenue shortfalls -- nothing in the case it presents shows how losses at a few large AI companies would cascade to take the rest of the economy down with them.
For AI-firm losses to trigger an economy-wide downturn, three conditions would have to hold. The losses would have to be (1) large enough to impair major financial institutions, (2) spread widely enough across the credit system to make borrowing harder, and (3) tied to household or corporate balance sheets that would multiply the damage.
None of the three conditions holds. AI debt is held mostly by a small number of large technology firms, private credit funds, insurance companies, and pension portfolios. Those holdings are not yet large enough to seize up the broader credit system, nor are they broadly distributed -- neither embedded in retail bank balance sheets the way mortgage debt was in 2008, nor woven through bond indices and ETFs at a scale that would amplify a correction. Households are not leveraged to AI the way they were leveraged to housing.
A correction in the AI sector would be painful for investors, devastating for overleveraged firms, and disruptive for the venture and equity markets behind it. However, that is not the same as the catastrophic, economy-wide outcome the article reaches for. Futurism uses the language of systemic crisis without identifying a systemic mechanism.
The Real Risk Is in the Plumbing
The harder question sits beneath the surface debate. While attention is fixed on token pricing and revenue forecasts, far less scrutiny is being given to the financing structure behind the data center buildout.
The borrowing has surged. The five largest cloud platforms (Amazon, Alphabet, Meta, Microsoft, and Oracle) issued just over $120 billion in new bonds in 2025, up from $40 billion in 2020 and roughly four times the prior five-year average. Morgan Stanley estimates AI infrastructure capital spending will reach roughly $2.9 trillion cumulatively by 2028, with annual spending running at about three times pre-AI levels.
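The issuance figures above imply a baseline worth making explicit. The dollar amounts are the article's; the only addition here is the division.

```python
# Implied prior issuance from the figures above: 2025 issuance was roughly
# four times the prior five-year average, so that average follows by division.
# The inputs are the article's reported figures; the arithmetic is the only addition.

issuance_2025 = 120e9        # new bonds from the five largest cloud platforms, 2025
multiple_of_prior_avg = 4    # "roughly four times the prior five-year average"

prior_avg = issuance_2025 / multiple_of_prior_avg
print(prior_avg / 1e9)  # 30.0 -- roughly $30B per year before the AI buildout
```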
Look at how this debt is structured -- the plumbing of the AI build‑out -- and four risks come into focus.
Risk 1: The debt outlasts the asset cycle.
Oracle has struck a five‑year cloud deal with OpenAI worth up to 300 billion dollars and is issuing long‑dated bonds to help fund the build‑out. The chips those bonds help pay for wear out or become obsolete in three to five years, but some of the debt will not mature until the 2060s. Because this is general corporate debt, lenders have a claim on Oracle the company, not on the hardware itself. If the chip cycle gets harder to sustain, the bond payments keep coming whether the cash flow does or not.
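The maturity mismatch is easy to put in rough numbers. The tenor and chip life below are assumptions drawn from the ranges in the text (bonds maturing in the 2060s, chips obsolete in three to five years), not disclosed terms.

```python
# The maturity mismatch in rough numbers. Tenor and chip life are assumptions
# drawn from the ranges in the text (bonds into the 2060s, chips obsolete in
# three to five years), not disclosed bond terms.

bond_tenor_years = 40
chip_life_years = 4

refresh_cycles = bond_tenor_years // chip_life_years
print(refresh_cycles)  # 10
```

On those assumptions, the hardware must be bought again roughly ten times before the debt that financed the first generation matures, and each refresh has to be funded out of cash flow the original bonds assumed would be there.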
Risk 2: A few customers carry the load.
Analysts at D.A. Davidson say the OpenAI deal could represent something like half of Oracle’s promised future revenue. Even allowing for accounting quirks and the fact that contracts can be renegotiated, that is a lot of dependence on one customer. If OpenAI stumbles, the damage would not stop at one sales line. It would make lenders question how safe Oracle’s debt really is and could make investors more cautious about buying any bonds tied to the broader AI build‑out.
Risk 3: Some debt is tied to a single project.
Meta has gone a step beyond the usual corporate bond. Through a joint venture arranged by Morgan Stanley with Blue Owl Capital, it raised about 27 billion dollars in private debt to finance one data‑center campus in Louisiana. That debt is ringfenced, which means lenders can only look to the cash flows and assets of that campus for repayment; they have no direct claim on Meta’s other businesses. The structure is safer for Meta than issuing a normal corporate bond, but it is riskier for the lenders. If the campus does not generate the expected revenue, the losses sit inside that project, unless Meta chooses to step in for reputational or strategic reasons.
Risk 4: Borrowing costs are flagging the risk.
The market is already charging extra to fund these structures. Meta’s project‑level debt carries roughly twice the interest rate the firm pays on its ordinary corporate bonds. That tells you that even sophisticated lenders see AI infrastructure projects as riskier than the company’s general credit. Oracle’s debt sends a similar signal: in late 2025, investors briefly demanded much higher interest to hold its bonds as questions mounted about how it would fund its AI expansion, then relaxed after the company laid out a new multibillion‑dollar funding plan.
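What "roughly twice the interest rate" costs in dollars can be sketched directly. The $27 billion principal is from the article; both rate levels are assumptions for illustration, since only the roughly 2x ratio is reported.

```python
# Dollar cost of the risk premium on Meta's project-level debt. The $27B
# principal is from the article; both rate levels are assumptions -- only
# the ~2x ratio between them is reported.

principal = 27e9
corporate_rate = 0.05              # assumed rate on ordinary corporate bonds
project_rate = 2 * corporate_rate  # the reported ~2x spread

extra_annual_interest = principal * (project_rate - corporate_rate)
print(extra_annual_interest / 1e9)  # 1.35
```

At these assumed rates, the premium alone runs over a billion dollars a year on a single campus, which is the market's price for carrying project-level rather than corporate-level risk.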
In other words, credit markets are already treating AI build‑out debt as a distinct, higher‑risk exposure, even after you allow for the usual noise from interest‑rate moves and sector mood swings. That is the plumbing speaking: funding terms, not token prices, are where the risk first shows up.
None of the above risks guarantees a crisis, but together they define the kind of trouble the plumbing can cause. If too much AI debt is written on optimistic assumptions about demand and productivity, a rethink in credit markets would show up as higher borrowing costs, cancelled or delayed projects, and losses for the investors who funded the build‑out. That is painful for lenders and for overextended firms, and it could chill investment in other risky sectors, even if it never rises to the level of a 2008‑type shock.
Whether the financing holds depends on what AI actually delivers. If AI generates the revenue and productivity gains the spending assumes, the debt gets paid. If it does not, the losses hit the lenders first. The question then will not be whether tokens were the right unit. It will be whether the world misjudged AI’s real contribution to productivity.
Look at the Plumbing
AI may well change every facet of modern life, but that is no guarantee to the lenders underwriting 40‑year bonds for five‑year chips. The “horrible economics” of AI aren’t found in the price of a subscription. They are written into the terms of the debt, and they will determine whether the AI build‑out quietly pays for itself or forces credit markets to rethink how much of this risk they really want to hold.
Watch the plumbing for the early warning.