AI Safety Fears: A Skeptic's View

Bloomberg's Parmy Olson discusses the debate around AI risks, questioning the focus on existential threats and highlighting the need for practical regulation.


In the fast-evolving world of artificial intelligence, the debate around potential risks and the need for regulation is more heated than ever. On Bloomberg's 'Here's Why' podcast, Bloomberg Opinion columnist Parmy Olson sat down with a prominent AI researcher to discuss the current state of AI development and its implications.

Olson, known for her insightful analysis of the tech industry, brought a critical perspective to the conversation. She highlighted a perceived disconnect between the dire warnings issued by some AI leaders about existential risks and the actual, more immediate challenges faced by the industry and society.


The Role of AI Leaders

Olson pointed out that many AI leaders, while publicly expressing concerns about AI's potential to pose existential threats, also seem to be strategically marketing their work. She suggested that this framing might be a way to control the narrative and influence regulatory approaches. "I think what's happening is that some companies are trying to get ahead of regulation," Olson stated. "They're saying, 'We're going to have these existential risks, so you need to let us lead the way in terms of how this is regulated.'"

The full discussion can be found on Bloomberg Podcast's YouTube channel.

Here’s Why Some AI Might Be Too Dangerous To Release | Here's Why - Bloomberg Podcast

Pace of Development vs. Security

A key concern raised was the sheer speed at which AI capabilities are advancing. Olson noted that the pace of development often outstrips the ability of security teams and regulatory bodies to establish and enforce effective safeguards. This rapid evolution creates a dynamic in which the potential for misuse or unintended consequences is constantly growing, and the conversation touched on the possibility that AI developers are inadvertently creating vulnerabilities that malicious actors could exploit.

Critique of Regulatory Approaches

Olson expressed skepticism about the broad strokes of some proposed AI regulations, particularly those originating from entities like the European Union and the United States. She argued that these regulations, while well-intentioned, are often too vague and could stifle innovation without effectively addressing the core risks. "The problem with a lot of the regulatory frameworks we're seeing is that they're too broad," Olson explained. "They're not specific enough about what constitutes a risk and how to mitigate it." She contrasted this with the approach taken by some organizations that have, in the past, delayed the release of powerful AI models due to safety concerns, only to find that the predicted catastrophic outcomes did not materialize.

The 'Fear' Narrative

Olson suggested that the emphasis on existential risks might be a form of strategic communication rather than a purely objective assessment of current dangers. She pointed to a report by the UK's AI Safety Institute, which found that many AI models were not yet capable of causing widespread societal disruption. This, she argued, implies that the current focus on doomsday scenarios might be overblown. "I think there's a tendency for people who are very close to the technology to talk about these extreme risks," Olson commented. "But we need to balance that with a realistic assessment of what's actually happening on the ground." She emphasized that while AI does present risks, the immediate concerns for many businesses, especially smaller ones, are more about practical applications and competitive advantage.

Focus on Practical Risks

The discussion also highlighted the potential for AI to disrupt labor markets, a more tangible and immediate concern for many. Olson cited a report suggesting that a significant portion of entry-level white-collar jobs could be automated in the coming years. This stark projection suggests that policymakers and companies should focus on managing these near-term economic impacts, rather than solely on hypothetical existential threats. The conversation concluded with an acknowledgment that while the promise of AI is immense, a measured and pragmatic approach to its development and regulation is crucial.
