“Criminals are gonna use whatever tools they have available... criminals gonna crime. And if they have AI tools available, they’re gonna use those AI tools.” This stark observation from Cris Thomas, X-Force Global Lead of Technical Eminence, encapsulates the immediate challenge facing the cybersecurity landscape. On a recent episode of IBM’s Security Intelligence, host Matt Kosinski, alongside Thomas and Sridhar Muppidi, IBM Fellow and CTO IBM Security, dissected a rapidly evolving threat environment where the lines between human and machine, and legitimate and malicious, are increasingly blurred. Their conversation revealed a stark asymmetry: while innovation accelerates, robust governance and defensive strategies often lag, creating fertile ground for sophisticated attacks.
The discussion opened with the alarming rise of malicious AI agents, moving beyond theoretical proofs-of-concept into tangible threats. Researchers at Datadog identified "CoFish," a technique exploiting Microsoft CoPilot Studio to build AI agents that stealthily steal OAuth tokens. Simultaneously, Palo Alto Networks uncovered "Agent Session Smuggling," where a malicious AI agent covertly transmits commands to a target agent via an agent-to-agent communication protocol, circumventing user visibility. These instances underscore a critical insight: the same powerful AI tools designed for productivity and efficiency can be repurposed for nefarious ends. As Thomas noted, attackers have the advantage of experimentation, throwing "stuff at the wall and see what works," while defenders bear the burden of anticipating and mitigating every potential misuse.
The sophistication of these threats extends beyond AI agents. The panel highlighted Herodotus malware, a newly discovered banking trojan that evades traditional behavioral detection systems. It achieves this by timing its text inputs to mimic human typing, a simple yet effective technique that, as Thomas dryly remarked, "should have been done ten years ago." This points to a fundamental flaw in existing detection heuristics that rely on basic, easily spoofed human-versus-non-human distinctions. The core insight here is that the threat landscape is not merely expanding but is also becoming more adaptive, learning to blend in with legitimate activity, thereby demanding more nuanced and multi-dimensional defensive approaches.
A particularly unsettling development detailed by the panelists involved social engineering attacks targeting brokerage accounts. Threat actors gain access to victims' investment accounts, liquidate existing holdings, and then reallocate funds into low-liquidity stocks or IPOs. They then artificially inflate these stock prices by purchasing large amounts, selling their holdings at a profit, and withdrawing the earnings via mobile wallets. This represents a significant escalation of social engineering, moving beyond simple credential theft to sophisticated market manipulation, transforming individual account compromises into systemic financial threats.
Related Reading
- The Regulatory Battle for AI's Soul: Sacks on Silicon Valley's Future
- OpenAI's Internal Turmoil: Sutskever Deposition Unveils Power Struggles and Governance Faultlines
- OpenAI's Vertical Stack Ambition Signals AI's Industrial Evolution
The underlying vulnerability enabling these sophisticated attacks is a widening "AI governance gap." Matt Kosinski cited IBM's AI at the Core 2025 research report, stating that "72% of businesses surveyed said they have integrated AI into at least one business function, but only 23.8% of businesses surveyed said they have extensive governance frameworks in place." This disparity is not new; as Sridhar Muppidi pointed out, "We've seen this movie so many times... deploy fast, govern later, get breached in between." This reactive approach, driven by the imperative for innovation and productivity, consistently leaves organizations vulnerable.
The solution, according to the experts, lies in a fundamental cultural shift within organizations. Security can no longer be the sole domain of a "no people" department that prohibits new technologies. Instead, it must embrace a model of "secure enablement," where the focus is on how to safely integrate and leverage new technologies like AI. This requires a shared responsibility across all teams and a proactive approach to understanding and mitigating risks. Muppidi emphasized the need to "put blinders on the agent" – to meticulously scope and control AI agents' capabilities based on time, resource actions, and location, thereby limiting their potential for malicious coercion. He further elaborated on the necessity of identifying and authenticating AI agents just as rigorously as human users, treating them as the "next level of insiders." This implies a move towards behavioral analytics that can detect anomalous patterns across multiple dimensions, going beyond simplistic metrics. Cris Thomas echoed this sentiment, stating, "Security's job is to say yes and figure out how to do it securely." This paradigm shift is crucial for closing the governance gap and building resilient systems in an AI-driven world.

