As AI systems grow in sophistication, particularly with the rise of generative AI and agentic models, the foundational challenge of identity propagation transforms from a mere technical hurdle into a critical security imperative. In a recent presentation, Grant Miller, a Distinguished Engineer at IBM, shed light on the evolving complexities and strategic approaches to securely managing identity across multi-hop, multi-agent AI environments.
Miller highlighted that while traditional identity propagation patterns—such as direct user-to-application connections or trusted assertions via an Identity Provider (IdP)—suffice for simpler architectures, they falter in the intricate, multi-node flows characteristic of modern agentic systems. "Organizations are embracing Gen AI and RAG models and agentic systems. With that, we're starting to see a lot of challenges pop up," he noted, emphasizing the shift from simple user-to-database interactions to complex chains involving chatbots, routers, and multiple agents.
The fundamental shift from human-driven transactions to autonomous agentic flows demands a re-evaluation of security paradigms. Legacy identity models are simply inadequate for ensuring trust and integrity in these dynamic, distributed AI environments.
The core vulnerability in these complex flows emerges when an unauthorized entity attempts to impersonate a legitimate user. This concern intensifies as identity moves across various agents and even organizational boundaries. Miller succinctly posed the critical question: "How do they trust that the identity that is coming through the system is actually the identity that is supposed to be used?" This points to the need for robust mechanisms that verify identity at every stage, not just at the initial authentication point.
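To make the "verify at every stage" idea concrete, the minimal sketch below shows how a single hop might validate an incoming JWT access token before acting on it. It assumes a JWT-based token and the PyJWT library; the issuer, JWKS endpoint, and audience value are illustrative placeholders rather than details from Miller's talk.

```python
# Sketch: per-hop validation of an incoming access token before an agent acts on it.
# Assumes a JWT access token and PyJWT. The issuer URL, JWKS endpoint, and audience
# value ("orders-agent") are hypothetical, not taken from the presentation.
import jwt  # PyJWT

ISSUER = "https://idp.example.com"                           # assumed trusted IdP issuer
JWKS_URL = "https://idp.example.com/.well-known/jwks.json"   # assumed JWKS endpoint
EXPECTED_AUDIENCE = "orders-agent"                           # this node's own identifier

jwks_client = jwt.PyJWKClient(JWKS_URL)

def validate_incoming_token(token: str) -> dict:
    """Reject the request unless the token was minted by the trusted IdP for this node."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,   # token must be addressed to this specific hop
        issuer=ISSUER,                # and issued by the IdP this node trusts
    )
    return claims  # downstream logic can inspect subject, scope, and other claims
```

Repeating a check like this at every node, rather than trusting whatever arrives from the previous hop, is what prevents an impersonated identity from riding through the chain unchallenged.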
To counter these emerging threats, Miller outlined several strategic imperatives. First, he advocated strict adherence to industry standards such as OAuth2 and OpenID Connect (OIDC): "[What] we really want to look at is... we really want to use OAuth2 and OIDC as our standards." These provide a common, well-understood framework for authentication and authorization. Second, he called for a "token exchange" at each hop within an agentic flow, so that every node validates the incoming token against the expected flow and intended audience before passing anything downstream, blocking unauthorized injections or impersonation. Third, leveraging context, scope, and audience within tokens gives granular control over what an agent is permitted to do, narrowing privileges to only what a specific transaction strictly requires. Connecting nodes via APIs further centralizes and secures these exchanges behind API gateways, offloading that burden from individual developers.
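As an illustration of what a per-hop token exchange might look like in practice, the sketch below uses the standard OAuth 2.0 Token Exchange grant (RFC 8693) to trade an incoming token for a new one narrowed to the next agent's audience and scope. The token endpoint, client credentials, audience, and scope values are hypothetical; Miller did not prescribe a specific implementation.

```python
# Sketch: OAuth 2.0 Token Exchange (RFC 8693) performed at a hop boundary.
# The incoming token is traded for a new token scoped and audienced for the next
# agent only. Endpoint URL, client credentials, audience, and scope are assumptions.
import requests

TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"  # assumed IdP token endpoint

def exchange_for_downstream_token(incoming_token: str) -> str:
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": incoming_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "audience": "inventory-agent",   # the next hop, and only the next hop
            "scope": "inventory:read",       # narrowed to what this transaction needs
        },
        auth=("router-agent", "router-agent-secret"),  # this node's own client credentials
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]
```

Because the exchanged token names a single downstream audience and a reduced scope, a compromised or misbehaving agent further along the chain cannot replay it against other services, which is the privilege-narrowing effect Miller described.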
Ultimately, the success of enterprise AI adoption hinges on the ability to establish and maintain transitive trust across increasingly complex and distributed systems. Robust monitoring and continuous adaptation of these strategies will be key to navigating the evolving security landscape of agentic AI.

