CoreWeave CEO Mike Intrator, speaking live from Davos on CNBC’s Squawk Box, delivered a sharp assessment of the current AI infrastructure boom, describing the demand as a “violent change and violent demand.” This characterization cuts through the typical cautious corporate rhetoric, underscoring the unprecedented speed and magnitude of the foundational buildout currently reshaping the technology landscape. Intrator spoke with the CNBC anchors about the state of AI and cloud computing demand, the future of AI infrastructure buildout, and what the return on investment (ROI) looks like for the massive capital being deployed.
Intrator emphasized that the current demand surge for specialized compute resources is not a cyclical phenomenon but a sustained, structural shift driven by the foundational embedding of artificial intelligence into nearly every aspect of commerce and life. He noted that CoreWeave serves the entire AI ecosystem, from hyperscalers like Microsoft and Meta—with whom CoreWeave has massive infrastructure contracts—to silicon providers like NVIDIA, down to smaller AI labs and university computer science students. This wide-ranging client base confirms that the demand for high-performance compute is broad-based and deeply integrated across the economy.
One of the core insights Intrator shared was the long-term dividend AI is expected to pay. When asked about the potential ROI of the staggering amounts of money being spent on infrastructure, Intrator asserted: “I think in five or ten years, you’re going to be in a world where artificial intelligence is embedded into absolutely everything we do. And it will continue to pay dividends for the next 100 years.” This bold statement frames the current infrastructure investment—which includes CoreWeave’s multi-billion dollar contracts with OpenAI, Meta, and NVIDIA—not as speculative spending, but as the foundational capital expenditure for a century of economic and technological transformation.
The discussion quickly moved to addressing the inherent risks associated with such rapid, capital-intensive growth. One of the CNBC anchors raised three specific concerns: independent AI companies running out of capital, the risk of a major technological breakthrough that renders current infrastructure obsolete (a “DeepSeek” moment), and the challenge of chip depreciation schedules in a market where new hardware iterations arrive constantly. Intrator tackled these concerns head-on, noting that while industry consolidation and failure are inevitable in any new business wave—“there will be companies that shut down, there will be companies that go bankrupt”—CoreWeave manages this risk by focusing on a diversified portfolio of clients and securing long-term contracts with creditworthy counterparties.
Regarding the risk of technological obsolescence and accelerated depreciation, Intrator offered a crucial counterpoint grounded in market behavior rather than accounting theory. He argued that the depreciation curve is defined by what clients are willing to pay for today: if clients are willing to sign a contract that runs five or six years, “They are telling us that the compute has value to them over five or six years.” He further explained that new technology doesn't necessarily eliminate the value of the old; rather, existing infrastructure is repurposed. When a client upgrades to the latest NVIDIA H200 or GB200 chips for bleeding-edge model training, the older A100s or H100s are shifted to other functions where compute is still required, such as inference or less demanding training workloads. This operational flexibility, combined with sustained demand for compute across all generations of hardware, mitigates the risk of sudden, total technological depreciation.
Intrator stressed that the sheer velocity of the buildout is historically unprecedented. The pace at which the base load infrastructure is being constructed was “not even considered” possible in prior technological revolutions. This rapid deployment is a direct response to the "violent demand" from companies desperate for the compute necessary to remain competitive in the AI race. The scarcity of high-end GPUs and the subsequent scramble for capacity highlight the bottleneck in the system—the physical limits of building, powering, and deploying these dense data centers. CoreWeave’s business model, focused on acquiring and deploying these specialized chips efficiently, positions it directly at the heart of this constrained yet explosive market.
The conversation highlighted that the current market dynamics are driven by a desperate need for scale. Companies that require massive infrastructure to train and run their models are entering into long-term contracts to secure capacity now, insulating CoreWeave from the shorter-term financial volatility of individual startups. The strategic importance of securing this infrastructure is evident in the scale of CoreWeave's publicized deals, such as the $22 billion OpenAI contract, the $14.2 billion deal with Meta, and the $6.3 billion NVIDIA capacity commitment through 2032. These commitments illustrate a fundamental belief among the market's heavyweights that compute capacity is the new strategic resource, and securing it warrants multi-year, multi-billion dollar expenditures. This demand transcends the typical ebb and flow of the tech cycle, signaling a permanent shift in how computing resources are valued and deployed.