The fear that OpenAI is deliberately orchestrating a "too big to fail" scenario within the burgeoning AI sector is a significant concern for investors and policymakers alike. This provocative thesis, articulated by Sam Lessin, General Partner at Slow Ventures, on CNBC's 'The Exchange', suggests a calculated maneuver to embed AI so deeply into the global economic fabric that its failure would necessitate a government backstop, echoing the financial crisis of 2008. Lessin spoke with Jon Fortt about the rapid acceleration of AI development, its economic implications, and the potential social dangers of a concentrated, government-supported technological boom.
Lessin highlights the "fast and furious game" currently underway across the AI sector, characterized by a pervasive narrative of "setting expectations at infinity." This isn't merely organic market enthusiasm; it's a strategic alignment of incentives. The United States, grappling with an "enormous debt crisis at the federal level," urgently seeks accelerated GDP growth. Lessin posits that the AI narrative, championed by figures like Sam Altman, has been "slotted in beautifully" to serve this national imperative. It offers a compelling story of innovation and productivity that promises to uplift the economy, thereby creating a shared interest among stakeholders—from Wall Street to global investors—in ensuring the sector's unfettered success.
The recent governance turmoil at OpenAI, particularly the brief ousting and subsequent reinstatement of Sam Altman, served as a stark illustration of this underlying dynamic. Lessin interprets the company's perceived search for a government backstop, or indeed, the broader push for partnerships that entangle major players, as a deliberate strategy. "They know what they are doing is unsustainable, but if they build partnerships where everyone ties hands together and jumps, they can't fail," Lessin asserts, outlining the fundamental premise of a "too big to fail" play. This approach aims to distribute the risk across a wide array of powerful entities, making individual failure a systemic threat.
The parallels to the 2008 financial crisis, when government intervention was deemed necessary to prevent catastrophic economic collapse, are made explicit in the conversation. Fortt draws the comparison directly, noting that the projected economic impact of AI over the next five to ten years could be "on par" with the scale of the financial crisis. Lessin unequivocally agrees that governments "should be thinking about it," underscoring the potential for AI to become a critical, systemic industry demanding state protection.
However, this narrative of inevitable growth and essential government support carries significant social risks. Lessin voices deep concern over the potential for "massive inequality growth" stemming from such a concentrated boom. He observes the public anxiety that "once again, we're going to pay for all this so that a small number of technologists get extremely wealthy." This sentiment reflects a broader societal unease about who truly benefits from these technological leaps and who bears the ultimate cost if the bubble bursts or the promised growth fails to materialize equitably.
Related Reading
- OpenAI's Trillion-Dollar Bet on AI Dominance
- OpenAI’s IPO Horizon: A Strategic Pause in the AI Race
- AI's Menacing Blob: Market Uncertainty Amidst Data Center Expansion and Government Inaction
The shift in AI from a niche, "frontier type investing environment" to a "front of mind" mainstream phenomenon has fundamentally altered the risk landscape for venture capitalists and private equity investors. The collective rush into AI, driven by the perception that "everyone else is doing them," creates a dangerous herd mentality. This rapid mainstreaming, Lessin warns, "is usually a recipe for failure and for disaster."
While embracing AI for new companies is now almost a given—"if you're starting a company or building something and you're not using AI, that's crazy"—widespread adoption does not guarantee the 4% or 7% GDP growth figures many are pushing for. The magnitude of current investment, Lessin cautions, may not be aligned with the actual returns or broader economic benefits. This gap between expectation and reality, coupled with the rising social stakes, makes the current AI investment climate particularly precarious.