AI Agents: The "Renting Edge" in Cybersecurity

Experts discuss how AI agents are evolving beyond prompt injection to sophisticated "promptware" attacks, necessitating a shift in cybersecurity strategies.

4 min read
AI Agents: The "Renting Edge" in Cybersecurity
IBM

In a recent Security Intelligence podcast episode, experts delved into the evolving landscape of AI-powered cyber threats, highlighting how attackers are increasingly leveraging AI agents to conduct sophisticated attacks. The conversation, featuring Kimmie Farrington (Security Detection Engineer) and Ian Molloy (Department Head, Security Research), alongside Seth Glasgow (Cyber Range Executive Advisor), underscored a shift from simple prompt injection to more complex "promptware" strategies.

The Shift from Prompt Injection to Promptware

The discussion began by addressing the common perception of AI vulnerabilities, which often centers on "prompt injection" – a method where attackers manipulate AI models by crafting specific prompts to elicit unintended or malicious outputs. However, the panelists argued that this view is too narrow. They proposed that attackers are moving beyond simple prompt manipulation to developing more autonomous AI agents, capable of executing multi-stage attack campaigns.

Molloy elaborated on this by explaining how current discussions often focus narrowly on the initial access vector of prompt injection, overlooking the broader attack surface that AI agents can exploit. He emphasized that these agents can potentially navigate an entire attack chain, from reconnaissance to data exfiltration and command and control, mimicking human attacker behavior but at a significantly increased scale and speed.

The full discussion can be found on IBM's YouTube channel.

Promptware, cloud security trends for 2026, and what the Xbox One hack means for cybersecurity - IBM
Promptware, cloud security trends for 2026, and what the Xbox One hack means for cybersecurity — from IBM

AI Agents as a "Renting Edge"

A key concept introduced was that of AI agents acting as a "renting edge" for attackers. This means that attackers can potentially leverage pre-trained or easily adaptable AI models to perform complex tasks without needing to develop the underlying technology themselves. This democratization of sophisticated attack capabilities lowers the barrier to entry for malicious actors.

Farrington highlighted how attackers are already using AI to generate convincing phishing emails, craft malicious code, and even to identify and exploit vulnerabilities. The ability of these agents to learn and adapt means that defensive measures must also evolve rapidly to keep pace.

Redefining the Threat Model

A significant portion of the conversation revolved around the need to update traditional cybersecurity threat models. Molloy stated, "We need to understand them as the first step in a threat model, assuming initial access will occur and then securing your environment with that threat model." This implies a proactive approach where defenses are built with the expectation of breaches, focusing on containment and rapid response.

The panelists also discussed the complexity of attributing AI-driven attacks. As AI agents become more sophisticated and potentially autonomous, determining the origin and intent behind an attack can become significantly more challenging, blurring the lines between human and machine-driven malicious activity.

The Broader Attack Surface

Glasgow brought up the concept of "living off the land," a tactic where attackers leverage existing tools and infrastructure within a target environment to carry out their objectives. He noted that AI agents can amplify this by using legitimate cloud services and APIs to perform malicious actions, making them harder to detect.

"What changes for us if we start to adopt this kind of model of understanding it as kind of promptware?" Molloy asked, posing a critical question about the implications for security professionals. Farrington responded that it requires a deeper understanding of how these agents interact with systems, noting that "the model picks up other instructions... it can do other things." This highlights the need for more granular visibility and control over AI agent behavior.

Key Takeaways for Defenders

The experts emphasized several key takeaways for organizations looking to defend against AI-powered threats:

  • Assume Breach: Security strategies must shift from purely preventative measures to a comprehensive approach that includes robust detection, response, and recovery capabilities.
  • Understand AI Agent Capabilities: Security teams need to educate themselves on how AI agents can be used in attacks, their potential capabilities, and their limitations.
  • Focus on Identity and Access Management (IAM): As AI agents often operate with high-level privileges, strong IAM controls are paramount to limit their potential impact if compromised.
  • Threat Modeling Evolution: Traditional threat models need to be updated to incorporate AI-specific attack vectors and the potential for autonomous agent behavior.
  • Zero Trust: The principle of "never trust, always verify" becomes even more critical when dealing with AI-driven threats, requiring continuous validation of all interactions.

In essence, the conversation underscored that AI is not just a tool for attackers but a fundamental shift in the threat landscape, demanding a paradigm shift in how cybersecurity is approached. The future of security will likely involve a constant race to understand and counter the evolving capabilities of AI-driven malicious actors.

© 2026 StartupHub.ai. All rights reserved. Do not enter, scrape, copy, reproduce, or republish this article in whole or in part. Use as input to AI training, fine-tuning, retrieval-augmented generation, or any machine-learning system is prohibited without written license. Substantially-similar derivative works will be pursued to the fullest extent of applicable copyright, database, and computer-misuse laws. See our terms.