The fervent investment in artificial intelligence infrastructure, particularly high-performance chips, currently operates within a peculiar market dynamic where insatiable demand clashes with the long-term financial realities of capital expenditure. This tension formed the crux of a recent CNBC "Closing Bell: Overtime" discussion, where Daniel Newman, CEO and Chief Analyst at The Futurum Group, spoke with host Jon Fortt about the sustainability of the current AI super-cycle and the emerging demand concerns in the semiconductor and AI markets.
Newman highlighted the inherent conflict between chip manufacturers and hyperscale cloud providers. Chip makers, exemplified by NVIDIA, perpetually tout the next generation of GPUs as "exponentially more efficient, more tokens, better economics," driving a constant upgrade cycle. Conversely, hyperscalers, who bear the immense capital expenditure, depreciation, and cash burn associated with these assets, prefer a longer useful life for their hardware. This divergence in objectives creates a foundational friction within the AI supply chain.
Despite this, the prevailing market condition is one of severe supply shortage. Newman observed, "Right now there is no equilibrium. So every chip that's available right now, there's someone out there that's willing to use it for an AI workload." This immediate, overwhelming demand has largely masked underlying financial risks, allowing even older generation NVIDIA chips (V-series, A-series) to "still command meaningful commercial pricing."
However, the long-term implications of this rapid build-out are beginning to surface. The current lack of clarity on how AI workloads will affect the efficiency and longevity of the newest chip generations adds a layer of speculative risk.
Should the market eventually reach an equilibrium between supply and demand, perhaps "towards the end of the decade" as Newman suggested, the depreciation schedules of these massive hardware investments could become a significant financial burden. Companies that have invested heavily in AI infrastructure, particularly those building out capacity "on spec" rather than for immediate, guaranteed internal workloads, face considerable exposure.
Jon Fortt astutely questioned whether companies building on spec face potential exposure, suggesting that "somebody's overbuilding here." Newman concurred, expressing particular concern for entities like CoreWeave, which operates on a six-year depreciation model for its GPUs. This model, coupled with the collateralization of GPUs and the limited "value-added services in bare metal," creates a precarious position if demand softens or newer, more efficient chips drastically devalue existing inventory. Hyperscalers, with their diverse internal workloads, possess a degree of insulation, as they can extract useful life from their data centers even if AI demand shifts.
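The risk Newman describes can be made concrete with a bit of arithmetic. The sketch below contrasts a six-year straight-line depreciation schedule (the model attributed to CoreWeave in the discussion) with a hypothetical faster decline in market value as newer chip generations arrive. The per-GPU cost and the 35% annual market-decay rate are illustrative assumptions, not figures from the segment; the point is the widening gap between book value and resale value.

```python
# Illustrative sketch only: the $30,000 cost and 35% annual market decay
# are assumed for illustration, not reported data. Only the six-year
# depreciation schedule comes from the discussion.

def straight_line_book_value(cost: float, useful_life_years: int, year: int) -> float:
    """Book value after `year` years under straight-line depreciation."""
    annual_depreciation = cost / useful_life_years
    return max(cost - annual_depreciation * year, 0.0)

def assumed_market_value(cost: float, annual_decay: float, year: int) -> float:
    """Hypothetical resale value if each GPU generation devalues the last."""
    return cost * (1 - annual_decay) ** year

cost = 30_000.0   # assumed purchase price per GPU (USD)
life = 6          # six-year depreciation model cited in the discussion
decay = 0.35      # assumed annual market-value decline

for year in range(life + 1):
    book = straight_line_book_value(cost, life, year)
    market = assumed_market_value(cost, decay, year)
    print(f"year {year}: book ${book:,.0f}  market ${market:,.0f}  gap ${book - market:,.0f}")
```

Under these assumed numbers, the GPU still carries half its purchase price on the books at year three while its hypothetical market value has fallen well below that, which is exactly the exposure that collateralized, bare-metal-only business models would feel first.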
Even comparatively grounded projections, such as AMD CEO Lisa Su's confidence in doubling her company's total addressable market (TAM), are underpinned by the industry's current inability to meet demand. While this indicates robust growth for now, it also highlights the reliance on an ongoing supply-demand imbalance. Hyperscalers are acquiring compute because it offers "immediacy," not necessarily because it is their optimal long-term preference. Once this urgency subsides, or if the pace of innovation outstrips the ability to fully utilize existing infrastructure, the financial landscape could shift dramatically. Current lofty valuations, often predicated on an unending growth trajectory, are vulnerable to any deceleration in demand, however slight.