“With great power comes great risk.” This foundational insight from Jeff Crume, PhD, Distinguished Engineer at IBM, underscores the critical need for robust risk management as artificial intelligence permeates every industry. In a recent presentation, Crume elucidated the NIST AI Risk Management Framework, offering a structured approach to fostering trustworthy AI systems. His analysis provides a vital blueprint for founders, venture capitalists, and AI professionals navigating this transformative landscape.
The core premise is that for AI to be truly trustworthy, it must possess several key characteristics. NIST defines these as validity and reliability, ensuring the AI’s outputs are accurate and make logical sense. An AI must also be safe, preventing harm to human life, property, or the environment. Furthermore, security and resilience are paramount; as Crume notes, “bad guys will try to break it,” necessitating defenses against attacks that cause unavailability, leak data, or poison models with adversarial inputs.
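To make the checklist nature of these characteristics concrete, the sketch below models a simple trustworthiness assessment. This is purely illustrative: the class name, characteristic labels, and pass/fail structure are assumptions for this example, not part of the NIST AI Risk Management Framework itself.

```python
from dataclasses import dataclass, field

# Illustrative only: names and structure here are assumptions,
# not terminology defined by the NIST AI RMF.
@dataclass
class TrustworthinessAssessment:
    """Tracks findings against the trustworthy-AI characteristics above."""

    # The three characteristics discussed in this section.
    CHARACTERISTICS = ("valid_and_reliable", "safe", "secure_and_resilient")

    findings: dict = field(default_factory=dict)

    def record(self, characteristic: str, passed: bool, note: str = "") -> None:
        """Record whether the system meets one characteristic."""
        if characteristic not in self.CHARACTERISTICS:
            raise ValueError(f"unknown characteristic: {characteristic}")
        self.findings[characteristic] = {"passed": passed, "note": note}

    def is_trustworthy(self) -> bool:
        # Flagged trustworthy only if every characteristic has been
        # assessed and passed -- one gap is enough to fail the whole system.
        return all(
            self.findings.get(c, {}).get("passed", False)
            for c in self.CHARACTERISTICS
        )


assessment = TrustworthinessAssessment()
assessment.record("valid_and_reliable", True, "outputs match ground truth")
assessment.record("safe", True, "no harm pathways identified")
assessment.record("secure_and_resilient", False, "vulnerable to data poisoning")
print(assessment.is_trustworthy())  # False until the security gap is closed
```

The all-or-nothing check mirrors the framework’s premise: a system that is accurate but insecure, or secure but unsafe, is not trustworthy.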
