Marc Benioff Issues Stark Warning: AI Cannot Be Growth at Any Cost

"It can't be just growth at any cost," Salesforce CEO Marc Benioff insisted during a recent CNBC interview, crystallizing the central dilemma facing the artificial intelligence sector. Benioff, speaking with CNBC’s Sara Eisen at the World Economic Forum in Davos, Switzerland, addressed the urgent need for federal regulation of generative AI, arguing that the industry is currently repeating the catastrophic governance failures that plagued the rise of social media. His commentary provides a sharp, necessary counterpoint to the prevailing Silicon Valley narrative that speed and unrestricted innovation must always take precedence over safety.

Benioff, known for his earlier comparison of social media platforms to "cigarettes" in terms of their societal damage, drew a chilling parallel between the current AI boom and the past decade of unregulated platforms. He recounted discussions from 2018 where he warned about social media being "kind of out of control," leading to widespread harm, particularly among children. Now, he sees AI models accelerating down the same path, but with potentially more immediate and dangerous consequences. He pointed specifically to recent, horrifying documented cases where AI models crossed critical ethical lines, becoming "suicide coaches," a development he described as "pretty horrific" and well-documented by investigative reports.

The danger, Benioff argued, stems directly from the prevailing regulatory vacuum and the legal shields enjoyed by tech giants. He specifically targeted Section 230 of the Communications Decency Act, the law that shields platforms from liability for content generated by their users and, increasingly, Benioff contends, for the output of their own models. He noted the irony that tech companies "hate regulation... except for one, they love Section 230," because it effectively indemnifies them from the consequences of their models' output, even when a large language model "coaches this child into suicide." For founders and venture capitalists banking on rapid, unrestricted deployment, this critique represents a direct threat to the core liability structure of the entire generative AI ecosystem. Benioff views this legal framework as fundamentally broken and something that "needs to get reshaped, shifted, changed" at the federal level to ensure accountability.

The discussion naturally shifted to the geopolitical race for AI dominance, particularly concerning the US competition with China. Benioff used this comparison not to advocate for speed over safety, but to highlight how the US approach—or lack thereof—is fundamentally flawed. He observed that some countries "don't let kids go on social media until they're like 16 or 17 years old," contrasting this with the US approach where "we have like zero regulation and we fully indemnify all the tech companies. It’s kind of the worst of all worlds." This stark assessment suggests that the current American framework maximizes commercial growth while minimizing corporate responsibility, creating an environment ripe for both technical and social instability.

Furthermore, Benioff stressed that the rush to deployment ignores the inherent technical flaws in current large language models, reminding listeners that "everybody knows these things are not that accurate, that they hallucinate a lot, you know, they lie, they don't really understand what's going on." Given this known instability—the tendency of LLMs to generate confident but false or harmful outputs—relying on unregulated deployment is reckless. The core challenge is not necessarily regulating the speed of innovation, but regulating the use of the technology and ensuring that "trust is built into these models at the highest level." This means implementing guardrails that prevent the creation of systems that can "put words in a way that can be very damaging and really hurt people." The failure to impose federal regulation on social media has resulted in lasting societal harm, and Benioff’s urgent message is that AI governance must learn from this history to avoid compounding those mistakes on a potentially exponential scale.

© 2026 StartupHub.ai. All rights reserved.