IBM Field CTO on AI Runtime Security

IBM Field CTO Tyler Lynch discusses securing AI agents, emphasizing dynamic credentials and OAuth 2 for safe resource access.

Tyler Lynch, Field CTO at IBM, explaining AI agent runtime security with a diagram.
What is Agentic Security Runtime? Securing AI Agents — IBM on YouTube

In a recent IBM Think series video, Tyler Lynch, Field CTO at IBM, delves into the critical topic of securing AI agents. As organizations increasingly adopt AI technologies, the challenge of ensuring these agents operate securely and responsibly becomes paramount. Lynch highlights that while AI agents are powerful tools, their utility is often unlocked through external connections to various data sources and services, which in turn creates new security considerations.

Who Is Tyler Lynch?

Tyler Lynch serves as the Field CTO for IBM, a role that places him at the forefront of technological innovation and client engagement. With extensive experience in enterprise technology solutions, Lynch's expertise lies in understanding and articulating the practical applications and security implications of emerging technologies, particularly within the cloud and AI domains. His position allows him to bridge the gap between complex technical concepts and real-world business needs, making him a key voice in discussions around AI adoption and security.

The Architecture of AI Agents and Their Security Needs

Lynch begins by illustrating the typical architecture of an AI agent. He explains that an AI agent, often a Python, TypeScript, or Java application, rarely operates in isolation. Instead, it connects to external resources such as databases, Large Language Model (LLM) providers, and Software-as-a-Service (SaaS) applications like Salesforce. These connections are fundamental for the agent to perform its intended functions and deliver value.

The full discussion can be found on IBM's YouTube channel.


The core security challenge arises from managing access to these external resources. Traditionally, applications relied on credentials hard-coded into the source, a practice Lynch flags as a significant security risk. Whether they are database passwords or API keys for LLMs, hard-coded credentials are static and easily compromised if the code is exposed, and they lack the dynamic, granular control modern security paradigms require.
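The contrast Lynch draws can be seen in a minimal Python sketch. The function and variable names here are illustrative, not from the video; the point is that the secret lives outside the codebase and can be rotated without a code change:

```python
import os

# Anti-pattern: a static secret baked into the source. Anyone who can
# read the code (or the repository history) can read the credential.
HARDCODED_DB_PASSWORD = "s3cr3t-password"  # never do this

def get_db_password() -> str:
    """Fetch the credential from the environment at runtime instead.

    The secret is injected by the deployment environment (for example a
    secrets manager or the container orchestrator), so it never appears
    in source control and can be rotated independently of the code.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set; refusing to fall back to a default")
    return password
```

Environment variables are only the simplest step up from hard-coding; the dynamic, session-bound credentials Lynch describes go further still.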

The Imperative for Dynamic and Session-Bound Credentials

To address these vulnerabilities, Lynch advocates for a shift towards dynamic credentials that are session-bound and automatically revoked. He introduces the concept of "Non-Human Identity" (NHI) for AI agents, emphasizing that these agents should not be treated as anonymous entities. Instead, their access should be managed based on their specific needs and the context of their operations.

Lynch explains that the goal is to grant an AI agent only the necessary permissions for a specific task or session. This principle of least privilege is crucial in minimizing the potential damage if an agent's credentials are compromised. He draws a parallel to how human users are authenticated and authorized, suggesting that AI agents should undergo a similar rigorous process.
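Least privilege can be modeled as a token that carries only the scopes one task requires, with everything else denied by default. This in-memory Python sketch is illustrative (the class, agent, and scope names are assumptions, not from the video):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ScopedToken:
    """A credential that grants only the scopes needed for one task."""
    agent_id: str
    scopes: frozenset = field(default_factory=frozenset)

    def allows(self, scope: str) -> bool:
        # Anything not explicitly granted is denied.
        return scope in self.scopes

def issue_token_for_task(agent_id: str, task_scopes: set) -> ScopedToken:
    """Issue a credential limited to the scopes a single task requires."""
    return ScopedToken(agent_id=agent_id, scopes=frozenset(task_scopes))

# An agent summarizing CRM data gets read access to one system only;
# if this token leaks, it cannot write data or touch other systems.
token = issue_token_for_task("crm-summarizer", {"salesforce:read"})
```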

Implementing Secure AI Agent Access: Dynamic Credentials and OAuth 2

Lynch proposes a model where AI agents dynamically obtain credentials for the resources they need to access. This means that instead of having static, long-lived credentials embedded in the code, the agent requests temporary credentials for each session or specific task. This approach significantly enhances security by reducing the attack surface and limiting the lifespan of any potentially compromised credentials.

A key enabler for this dynamic credential management is the OAuth 2 framework. Lynch illustrates how an AI agent can use OAuth 2 to obtain access tokens for various services. This involves the agent presenting its identity to an authorization server, which then issues tokens that grant specific permissions for a limited time. This process ensures that credentials are not hardcoded and are managed securely by a dedicated authorization system.
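For a non-interactive agent, this maps most directly onto the OAuth 2 client credentials grant (RFC 6749, section 4.4). The sketch below only builds the form body an agent would POST over TLS to the authorization server's token endpoint; the client values are placeholders, and the video does not prescribe a specific grant type:

```python
from urllib.parse import urlencode

def build_client_credentials_request(client_id: str, client_secret: str, scope: str) -> dict:
    """Construct the form body for an OAuth 2 client credentials token request.

    In a real deployment this body is POSTed to the authorization server's
    token endpoint, which responds with a short-lived access token scoped
    to the permissions requested.
    """
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }

# Hypothetical values for an agent that needs read access to a CRM.
request_body = build_client_credentials_request(
    client_id="ai-agent-42",
    client_secret="fetched-from-a-vault-not-hardcoded",
    scope="crm.read",
)
encoded = urlencode(request_body)  # application/x-www-form-urlencoded payload
```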

Leveraging OAuth 2 for AI Agent Security

Lynch further elaborates on how OAuth 2 can be integrated into the AI agent's workflow. When an AI agent needs to access a resource, such as a database or an LLM API, it can initiate an OAuth 2 flow. This typically involves the agent presenting its identity, which might be a client ID and secret, or a more advanced mechanism like a certificate. The authorization server then validates the agent's identity and issues an access token, which the agent uses to authenticate its requests to the protected resource.
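Once issued, the access token accompanies each call to the protected resource as a bearer credential in the `Authorization` header, per RFC 6750. A minimal sketch (the token value is truncated for illustration):

```python
def bearer_headers(access_token: str) -> dict:
    """Build HTTP headers that authenticate a request with a bearer token.

    The token is short-lived and never stored in the agent's source code;
    the resource server validates it on every call.
    """
    return {
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/json",
    }

headers = bearer_headers("eyJhbGciOi...")  # placeholder, not a real token
```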

He highlights that this approach ensures that the credentials are not stored within the AI application itself, but rather managed by a centralized identity and access management system. This separation of concerns is a fundamental security best practice.

The Role of CIBA in Enhancing AI Agent Security

Beyond standard OAuth 2 flows, Lynch introduces CIBA (Client-Initiated Backchannel Authentication) as a more advanced method for securing AI agent interactions, particularly for sensitive operations. With CIBA, the agent initiates an authentication request over a backchannel, and the user approves it out of band, without a browser redirect for every request.

Lynch uses the example of an AI agent needing to offboard an employee. This is a sensitive operation that requires strong assurance of the agent's identity and authorization. In this scenario, the AI agent would use CIBA to interact with an identity provider. The identity provider then prompts the user (e.g., via their phone) to authorize the specific action the AI agent intends to perform. This user confirmation, coupled with the agent's verified identity, provides a robust security layer for high-stakes operations.
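The shape of that flow can be modeled in a few lines: the agent opens a backchannel request, the request stays pending until a human approves it on a separate device, and only then is a token released. This in-memory Python sketch is purely illustrative and elides the real OpenID CIBA wire protocol (notification delivery, polling intervals, signed requests):

```python
import uuid

class BackchannelAuthServer:
    """Toy model of a CIBA-style authorization flow."""

    def __init__(self):
        self._requests = {}  # auth_req_id -> (agent_id, action, status)

    def initiate(self, agent_id: str, action: str) -> str:
        """Agent opens a backchannel request; the user is notified out of band."""
        auth_req_id = str(uuid.uuid4())
        self._requests[auth_req_id] = (agent_id, action, "pending")
        return auth_req_id

    def user_approves(self, auth_req_id: str) -> None:
        """Simulates the user confirming the action on their own device."""
        agent_id, action, _ = self._requests[auth_req_id]
        self._requests[auth_req_id] = (agent_id, action, "approved")

    def poll_token(self, auth_req_id: str):
        """Agent polls for the result; a token is released only after approval."""
        _, _, status = self._requests[auth_req_id]
        return f"token-{auth_req_id}" if status == "approved" else None

# An agent requests authorization for a sensitive offboarding action.
server = BackchannelAuthServer()
req = server.initiate("hr-agent", "offboard employee")
before_approval = server.poll_token(req)  # no token: user has not approved yet
server.user_approves(req)
after_approval = server.poll_token(req)   # token issued only after consent
```

The key property is that the agent cannot obtain the credential for the sensitive action on its own; issuance is gated on an explicit human decision.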

He explains that this process ensures that the AI agent is not acting autonomously with sensitive data but is doing so with explicit, context-aware user consent. This client-initiated backchannel authentication mechanism is crucial for maintaining security and accountability when AI agents perform actions that have significant real-world implications.

The Importance of Dynamic Credential Revocation

A critical aspect of this security model is the ability to dynamically revoke credentials. Lynch emphasizes that once a session ends or a task is completed, the temporary credentials issued to the AI agent should be automatically invalidated. This prevents the credentials from being reused maliciously after their intended purpose has been fulfilled.

He states, "We create dynamic credentials that are time-bound and automatically revoked at the end of the session." This practice ensures that the AI agent's access is always limited to the current context, thereby minimizing the risk of unauthorized access or data breaches.
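Session-bound revocation can be sketched as a credential carrying both an expiry and an explicit revoke step, so access ends at the earlier of the two. The names here are illustrative:

```python
import time

class SessionCredential:
    """A credential that is valid only for a bounded session."""

    def __init__(self, ttl_seconds: float):
        # Time-bound: the credential expires on its own after the TTL.
        self.expires_at = time.monotonic() + ttl_seconds
        self.revoked = False

    def revoke(self) -> None:
        """Invalidate the credential the moment the session or task ends."""
        self.revoked = True

    def is_valid(self) -> bool:
        return not self.revoked and time.monotonic() < self.expires_at

cred = SessionCredential(ttl_seconds=60.0)
valid_during_session = cred.is_valid()  # valid while the session is live
cred.revoke()                           # session ends: invalidate immediately
valid_after_session = cred.is_valid()   # invalid once revoked
```

Even if revocation is never called, the TTL bounds the window in which a leaked credential is usable.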

The Future of AI Agent Security

Lynch concludes by stressing that building secure AI agents requires a fundamental shift in how access is managed. By moving away from hard-coded credentials and embracing dynamic, session-bound credentials authenticated through standards like OAuth 2 and CIBA, organizations can significantly enhance the security posture of their AI deployments. This approach not only protects sensitive data and resources but also ensures that AI agents operate within defined security boundaries, fostering trust and enabling responsible AI adoption.