"Ninety-seven percent of organizations have either experienced an AI-related breach or lack proper AI access controls." This stark finding from IBM's 2025 Cost of a Data Breach Report underscores a pervasive vulnerability in the burgeoning AI landscape, demanding immediate attention from startup founders, venture capitalists, and seasoned AI professionals alike.
In a recent interview, Tim Hwang, Host of Mixture of Experts, spoke with Suja Visweswaran, Vice President of Security and Runtime Products at IBM, about the alarming statistics emerging from IBM's latest report, specifically concerning the state of AI security and governance. The discussion illuminated the critical challenges enterprises face as they integrate artificial intelligence, often without adequate safeguards.
Visweswaran described the 97% figure as "shocking and also kind of expected." The rapid proliferation of AI tools and models, particularly generative AI, without commensurate security frameworks creates fertile ground for new vulnerabilities. The inherent complexity of AI systems, with their vast data dependencies and intricate model interactions, means that traditional cybersecurity measures often fall short. For VCs assessing AI startups, this highlights the imperative of scrutinizing a company's security posture and governance strategy as rigorously as its technological innovation.
A compelling analogy offered by Visweswaran likened AI security to public health: "It's like COVID. You have vaccines to help you, but you still have to wash your hands and keep things clean." This illustrates that while advanced AI security solutions are emerging, the fundamental principles of cybersecurity hygiene remain paramount. Founders building AI-driven products must embed security from inception, recognizing that cutting-edge AI cannot compensate for a lack of basic, robust governance.
The core issue, as Visweswaran pointed out, is a glaring deficiency in foundational AI governance. She noted that "63% of organizations don't have enough AI governance policies" to effectively manage these new risks. This lack of structured oversight means that the very systems designed to enhance efficiency and innovation are simultaneously creating significant attack surfaces. Without clear policies for data access, model training, deployment, and ongoing monitoring, vulnerabilities are not merely potential but inherent. This governance gap represents a critical blind spot for many enterprises, from nascent startups to established corporations.
These basic hygiene failures are not theoretical. They are actively being "exposed and exploited very much in this AI era." The risk extends beyond data breaches to include model manipulation, intellectual property theft, and the propagation of biased or erroneous outputs, all of which can have profound financial and reputational consequences.
The IBM report serves as a critical wake-up call for the enterprise. Widespread adoption of AI without adequate governance and fundamental security practices is not just a strategic misstep; it is an existential risk that demands immediate, comprehensive attention from leadership across all sectors. Organizations must prioritize developing and enforcing robust AI governance frameworks to mitigate these escalating threats and secure their AI-powered futures.