AI Vulnerabilities: The "Shift Left" Debate

IBM Security experts discuss how AI can be both a shield and a sword in cybersecurity, exploring new vulnerabilities and the need for adaptive defenses.

Perplexity Comet, agentic blabbering, and the shift-left failure — IBM on YouTube

In a recent discussion on the IBM Security Intelligence podcast, security experts Sridhar Muppidi, CTO of IBM Security, and Claire Nuñez, Creative Director at X-Force Cyber Range, along with host Matt Kosinski, delved into the complex relationship between artificial intelligence and cybersecurity. The conversation highlighted how the increasing sophistication of AI, particularly in code generation and analysis, presents both opportunities and significant challenges for securing legacy systems and modern applications.

The Panelists

Sridhar Muppidi, an IBM Fellow and CTO of IBM Security, brings extensive experience in cybersecurity strategy and technology leadership. His role at IBM Security places him at the forefront of developing solutions to protect organizations from evolving threats, leveraging his deep understanding of enterprise security challenges and emerging technologies.

Claire Nuñez, Creative Director at X-Force Cyber Range, contributes her expertise in offensive security operations and the practical application of cybersecurity principles. Her work at the Cyber Range likely involves simulating real-world attack scenarios and developing training methodologies to equip security professionals with the skills needed to defend against sophisticated threats.

The full discussion can be found on IBM's YouTube channel.


Matt Kosinski, the host and a representative of Security Intelligence, guides the discussion, bringing his journalistic insight to bear on the critical issues facing the cybersecurity landscape.

AI's Double-Edged Sword in Security

The core of the discussion revolved around the dual nature of AI in cybersecurity. While AI can be a powerful tool for defense, it can also be weaponized by malicious actors. Muppidi and Nuñez explored how AI models, trained on vast datasets of code, can inadvertently or intentionally introduce vulnerabilities, or conversely, be used to find and exploit existing ones.

A key point raised was the potential for AI to resurrect old vulnerabilities. Muppidi explained, "We're seeing a lot of the things that we've been trying to secure against for years, for decades even, that are now being unearthed or re-examined through the lens of AI." This suggests that while AI can automate detection and analysis, it can also be used to automate the exploitation of long-standing, perhaps overlooked, security flaws.

Nuñez drew an analogy between AI agents and a "teenager with a credit card," highlighting the potential for unintended consequences and misuse. She emphasized that while AI can be trained to avoid certain actions, its probabilistic nature means that security measures need to be robust and adaptable. "You can tell it to not do something, but it doesn't really know why it shouldn't do something," Nuñez stated, pointing to the need for careful control and oversight.
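Nuñez's point, that a prompt can tell an agent not to do something without the agent understanding why, suggests enforcing restrictions deterministically outside the model rather than relying on instructions alone. The sketch below illustrates that idea with a simple allowlist around tool calls; the names `ALLOWED_TOOLS`, `execute_tool`, and `guarded_call` are hypothetical, not part of any real agent framework discussed in the episode.

```python
# Hypothetical sketch: rather than asking the model to avoid a tool,
# block disallowed tool calls in code before they ever execute.

ALLOWED_TOOLS = {"search_docs", "summarize"}

def execute_tool(name: str, args: dict) -> str:
    # Stand-in for real tool dispatch.
    return f"ran {name} with {args}"

def guarded_call(name: str, args: dict) -> str:
    """Reject any tool call outside the allowlist before it runs."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{name}' is not permitted")
    return execute_tool(name, args)
```

The design choice is the point: a deterministic check cannot be talked out of its policy, whereas a probabilistic model can.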

The "Shift Left" Dilemma

The conversation touched upon the established cybersecurity principle of "shift left," which advocates for integrating security considerations early in the development lifecycle. Kosinski posed a critical question: "Did shift left fail?" He elaborated on the idea that if AI can easily discover and exploit vulnerabilities in even well-established codebases, the traditional "shift left" approach might be insufficient.

Muppidi responded by clarifying that "shift left" itself hasn't failed, but its implementation needs to evolve. He suggested that the focus should be on not just finding vulnerabilities but also on understanding how AI can be used to fix them. "We have to think about AI as something that we need to train in a different way," Muppidi asserted. This implies a need for a more proactive and adaptable security posture, where AI is integrated into the entire development and security lifecycle, rather than being an afterthought.

The "Agentic Blabbering" and Target Shifts

A particularly thought-provoking concept introduced was what Nuñez described as "agentic blabbering," where AI agents, through their reasoning processes, might inadvertently reveal vulnerabilities or sensitive information. This can occur when AI models, attempting to fulfill a request, generate responses that expose underlying flaws or data that should remain protected.

This concept ties into the idea of a "target shift" in cyberattacks, where the focus is moving from exploiting human vulnerabilities (social engineering) to exploiting AI agent vulnerabilities. As Nuñez explained, "We're seeing more and more that AI is being used to kind of probe for vulnerabilities in the backend, and then use whatever that information is, to then go ahead and manipulate the AI." This highlights a new frontier in cyber warfare, where AI itself becomes both the target and the tool.
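One common defensive response to this target shift is to treat content retrieved from untrusted backends as data, never as instructions, and to screen it before it reaches an agent's context. The sketch below shows a minimal version of that screening; the pattern list is purely illustrative, and real prompt-injection detection is considerably harder than simple pattern matching.

```python
import re

# Hypothetical sketch: flag instruction-like phrasing in untrusted text
# before it is interpolated into an agent's prompt. The patterns here are
# illustrative examples, not a complete or robust detection rule set.
SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def flag_untrusted(text: str) -> bool:
    """Return True if the text contains instruction-like phrasing."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPECT_PATTERNS)
```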

The discussion also touched on how AI can be trained to identify and exploit vulnerabilities in legacy systems. Muppidi stated, "The attack surface is massive, and many of our legacy systems are still on older versions of code that we have not been able to update or patch." He noted that AI's ability to analyze vast amounts of code quickly can uncover vulnerabilities that might have been missed by human analysts.

Mitigation and Future Strategies

Addressing these challenges requires a multi-faceted approach. Muppidi emphasized the need for organizations to treat AI models as they would any other critical asset, implementing robust security measures and continuous monitoring. He stressed the importance of "plenty of observability" and "rigorous testing."

Nuñez added that organizations need to move beyond a simple "trust but verify" mentality when it comes to AI. Instead, a "zero trust" approach is essential, where AI systems are continuously monitored and their outputs are validated. She suggested that organizations should aim to "mitigate risk" and "address vulnerabilities" in AI systems by implementing strong governance, clear access controls, and continuous monitoring.
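The "validate outputs before acting on them" principle can be sketched concretely: parse an agent's response, check it against an expected shape and policy, and reject anything out of bounds. This is a minimal illustration under assumed conventions; the JSON shape, `REQUIRED_KEYS`, and `ALLOWED_ACTIONS` are hypothetical, not anything prescribed in the discussion.

```python
import json

# Hypothetical sketch of zero-trust output handling: the model's raw
# output is parsed and policy-checked instead of being executed on trust.
REQUIRED_KEYS = {"action", "target"}
ALLOWED_ACTIONS = {"read", "list"}

def validate_output(raw: str) -> dict:
    """Parse a model's JSON output and reject anything out of policy."""
    data = json.loads(raw)  # malformed output raises here
    if not REQUIRED_KEYS <= data.keys():
        raise ValueError("missing required fields")
    if data["action"] not in ALLOWED_ACTIONS:
        raise ValueError(f"action '{data['action']}' not allowed")
    return data
```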

The panelists agreed that the security landscape is rapidly evolving, and organizations must remain vigilant and adaptable. The key takeaway is that AI is not a silver bullet for security but rather a powerful tool that requires careful management, continuous learning, and a proactive approach to identifying and mitigating risks.

The conversation underscored the critical need for organizations to understand the potential downsides of AI in security, not just the benefits. As AI becomes more integrated into our digital infrastructure, the strategies for defending against threats must also evolve to encompass the unique challenges that AI presents.