The transformative power of artificial intelligence is undeniable, yet its ascent is shadowed by inherent risks that demand a concerted, dual-pronged approach to governance and security. As Jeff Crume, a Distinguished Engineer at IBM, articulately explains in his presentation on "Security & AI Governance," the promise of AI can only be fully realized when organizations diligently address its potential for harm, whether through accidental missteps or malicious intent. Crume’s core argument centers on the idea that AI’s greatness is inextricably linked to its risk, necessitating robust frameworks that are both complementary and distinct in their focus.
Crume highlights a sobering statistic from the 2025 IBM Cost of a Data Breach Report, revealing that a staggering 63% of organizations currently operate without an AI governance policy. This oversight creates a significant gap, leaving systems vulnerable to doing "the wrong thing, give incorrect answers, and expose the organization to reputational and business damage." To mitigate these pervasive threats, a strong governance framework, typically overseen by a Chief Risk Officer (CRO), must ensure responsible and explainable AI. This entails guaranteeing that AI systems operate ethically, avoid bias, and are transparent in their decision-making, supported by clear documentation and source attribution.
Parallel to governance, robust AI security, often championed by a Chief Information Security Officer (CISO), is crucial. While governance guards against self-inflicted wounds—unintentional misalignments, policy violations, or ethical lapses stemming from poorly trained models or compromised data lineage—security directly confronts external and internal threats. These include vulnerabilities that attackers might exploit, the risks posed by "shadow AI" instances created without proper authorization, and the potential for data leaks. In essence, governance focuses on ensuring the AI *does what it should*, while security ensures the AI *doesn't do what it shouldn't*, especially when under duress from intentional actors.
The potential damage from unmanaged AI risks is multifaceted. From a governance perspective, the concerns range from AI generating "HAP" (hate, abuse, profanity) or exhibiting biases, to suffering from model drift where its performance degrades over time. Intellectual property (IP) risks also loom large, encompassing both the unauthorized use of copyrighted material in training data and the potential for an AI system to inadvertently leak proprietary information. These issues can severely erode an organization’s reputation and lead to significant legal and financial repercussions.
Security risks, on the other hand, are often framed within the classic CIA triad: Confidentiality, Integrity, and Availability. A breach of confidentiality could see an AI exfiltrate sensitive data, while integrity issues might involve models being manipulated or "poisoned" with malicious inputs, leading to incorrect or harmful outputs. Availability concerns focus on preventing denial-of-service attacks that could render critical AI systems unusable. These intentional threats underscore the need for constant vigilance and proactive defense mechanisms.
To counter these risks, Crume advocates for a comprehensive set of controls. Governance requires clear rules, well-defined policies, and established accountability structures to ensure that everyone understands their roles and responsibilities. Security, conversely, emphasizes prevention, detection, and response—proactive measures to harden AI systems against attacks, mechanisms to identify breaches when they occur, and protocols for swift and effective incident management. Without such controls, organizations are essentially navigating a minefield blindfolded.
Beyond high-level policies, specific controls must be applied directly to AI models. Governance dictates proper model training, meticulous tracking of data lineage, and clear acceptable use policies to guide employee interaction with AI. It also involves vigilant IP risk management to prevent legal entanglements from improper data sourcing. Security focuses on technical safeguards like protecting against prompt injections, preventing unauthorized access to AI systems, conducting rigorous penetration testing, and implementing continuous posture management to identify and rectify misconfigurations. These measures are vital to protect the integrity and reliability of the AI at its operational core.
Ultimately, Crume proposes an integrated AI Risk Solution Framework that layers protections. At the center is the AI itself, surrounded by a governance layer encompassing lifecycle governance, discovery and management of AI use cases, model management, risk management, monitoring/performance controls, and compliance. This governance layer is then enveloped by a security layer, which includes capabilities like discovering shadow AI, AI security posture management (AISPM), AI firewalls with guardrails for exfiltration control, penetration testing, and a comprehensive threat monitor and dashboard. This holistic framework is designed to provide a much stronger defense than isolated efforts. As Crume succinctly puts it, "AI (Governance + Security) = ↓ Risk," emphasizing that only through this combined, layered approach can organizations truly lower their AI risk profile and unlock the full, trustworthy potential of artificial intelligence.

