The shift from conversational AI to "agentic" artificial intelligence, where systems actively perform tasks rather than merely chat, introduces a profound new layer of operational risk. As Sabrina Kopecki, an AI Technical Specialist at IBM, articulated in her recent presentation, the danger is no longer just "what it thinks, it's what it does," a concern amplified when these autonomous "doers" operate within the elusive realm of Shadow AI. Kopecki’s insights underscore the urgent need for a cohesive security and governance framework, one that ensures fast-moving AI systems remain stable, accountable, and, crucially, safe.
Kopecki’s presentation, "Agentic AI Meets Shadow AI: Zero Trust Security for AI Automation," highlighted the critical intersection of these emerging AI paradigms with established security principles. She explained that Agentic AI, by its very nature, interacts directly with business systems, booking appointments, filing tickets, updating records, and interacting with APIs. This capability, while powerful for automation, also means that unchecked AI agents can execute actions with significant real-world consequences, demanding a robust oversight mechanism.
The core challenge, as Kopecki meticulously laid out, lies with "Shadow AI." This refers to unofficial AI solutions spun up by teams to expedite tasks, often bypassing formal IT channels. These clandestine helpers, she noted, often emerge "with no tickets, no approvals, and no paper trail," starting innocently as small scripts or models linked to SaaS tools, only to quickly escalate. They might begin interacting with sensitive customer data, calling third-party APIs, or writing to critical systems without official tracking.
This unofficial deployment creates several critical vulnerabilities. Firstly, Shadow AI is inherently difficult to detect. If an organization is unaware of an AI agent's existence, it cannot possibly secure it, leaving a significant blind spot in the enterprise's security posture. Secondly, these untracked agents are prone to data leakage, often copying and pasting information or utilizing "loose keys" like unprotected passwords, allowing private data to slip out.
Furthermore, Shadow AI presents considerable compliance challenges. Organizations must demonstrate adherence to regulatory standards, but without a clear record of an AI agent's actions, proving compliance becomes impossible. Fourth, these agents often operate with excessive permissions, granted for convenience without proper segmentation. This broad access means that a compromised agent could potentially open every digital "door" within a system. Lastly, the lack of visibility leads to messy incident responses; when an untracked agent malfunctions, identifying its origin, scope of impact, and ownership becomes a protracted and damaging exercise.
To counteract these pervasive risks, Kopecki advocated for a unified "air traffic control for AI," moving beyond disparate checklists to a single, integrated control plane for Agentic AI. This holistic approach centers on a continuous loop of "Discover, Assess, Govern, Secure, and Audit." The initial step, Discover, involves automatically identifying all AI agents, both sanctioned and shadow, across repositories, cloud projects, and embedded systems, bringing them under centralized oversight.
Once discovered, these agents must be Assessed. This involves rigorous stress testing through automated red teaming to proactively identify vulnerabilities like prompt injection, data leakage, tool misuse, and brittle configurations before malicious actors exploit them. Following assessment, Governance is critical, enforcing runtime policies such as least-privileged access, establishing guardrails on inputs and outputs, and actively monitoring for risky data movements. This entire process must be tied to a single, actionable risk register that both security and governance teams can leverage.
The Secure phase focuses on implementing automated logging and controls to generate irrefutable evidence for every AI action. This auditability, Kopecki emphasized, is not merely a compliance checkbox but a fundamental requirement for building trust and enabling rapid, safe AI scaling. The final step, Audit, completes the loop, using these logs to continuously verify adherence to policies and identify areas for improvement. This continuous cycle ensures that AI systems evolve with inherent security and accountability.
Kopecki illustrated this framework with two compelling use cases. In healthcare, an Agentic AI could quietly draft clinical notes during a patient-clinician conversation, cross-referencing facts against the patient's chart, flagging discrepancies for human review, and pre-staging follow-up appointments and referrals—all with minimal permissions and robust audit trails. This allows clinicians to focus on patient interaction, while the agent handles administrative burdens, enhancing both care quality and efficiency. Similarly, in the public sector, an AI agent could assist a citizen with filing taxes and renewing a fishing license simultaneously. The agent would confirm identity, retrieve only necessary records with consent, prepare summaries, and initiate payments, all while logging every action and decision for full traceability. This streamlines citizen services, reduces fraud, and builds public trust by providing clear, auditable processes.
Ultimately, Kopecki concluded, the goal is not flashy demos but rather the safe and predictable operation of AI that delivers tangible value without inadvertently creating future crises. Agentic AI is no longer just about conversation; it’s about action – clicking buttons, moving data, and even spending money. If these actions cannot be seen, tested, controlled, and proven, organizations are "flying fast in fog." Therefore, visibility is not optional; it is oxygen. By diligently implementing a unified security and governance framework, grounded in Zero Trust principles, organizations can foster a secure environment where Agentic AI can truly thrive, earning trust from patients, citizens, and stakeholders alike.



