IBM CTO on AI Agent Identity and Access Management

IBM CTO Grant Miller presents a four-step model for maturing AI agent identity and access management, from 'ad hoc' to 'adaptive' controls.

Grant Miller, IBM Distinguished Engineer and CTO, presents a model for AI agent identity and access management.
Image credit: IBM

Grant Miller, a Distinguished Engineer and CTO at IBM, recently shared a crucial framework for managing identity and access for AI agents. In a video presentation, Miller detailed a four-step model designed to mature how organizations approach AI identity and access management (IAM). This approach is vital as AI agents become more integrated into business processes, necessitating robust security and control measures.

Grant Miller's Expertise

Grant Miller brings extensive experience to the forefront of technology strategy. As a Distinguished Engineer and CTO at IBM, he is instrumental in shaping IBM's technological direction, particularly in areas like artificial intelligence, cloud computing, and security. His role involves understanding complex technical challenges and developing practical solutions for enterprises.

The Four-Step AI IAM Maturity Model

Miller introduced a four-step model to help organizations assess and improve their AI agent IAM capabilities. The model progresses from a basic, 'ad hoc' state to a highly sophisticated 'adaptive' stage.

The full discussion can be found on IBM's YouTube channel.

IAM for AI: 4 Steps to Secure and Futureproof Agentic Systems - IBM

Step 1: Ad Hoc Identity

At the most basic level, Miller described the 'ad hoc' stage. In this phase, AI agents are often built without a clear identity or robust access controls. They might be given minimal credentials to connect to systems, but there is little oversight and no structured approach to their identity. This stage is common in early AI development, when organizations are still experimenting with agents.
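The anti-pattern is easy to picture in code. The Python sketch below is illustrative only: the environment variable name, endpoint, and use of the `requests` library are assumptions, not details from Miller's talk. The point is that a single shared key means nothing in the request identifies which agent is acting.

```python
import os

import requests  # assumes the 'requests' library is installed

# Ad hoc pattern: every agent shares one generic API key, so nothing in the
# request ties the call to a specific agent. (Variable name is hypothetical.)
SHARED_API_KEY = os.environ["GENERIC_API_KEY"]

def fetch_report(url: str) -> dict:
    """Any agent can call any endpoint: no per-agent identity, no scoped
    privileges, and no audit trail linking the action to an agent."""
    response = requests.get(url, headers={"Authorization": f"Bearer {SHARED_API_KEY}"})
    response.raise_for_status()
    return response.json()
```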

Step 2: Foundational Identity

The second step, 'foundational identity,' involves assigning basic identities to AI agents. This means moving beyond simply using generic API keys. Miller explained that organizations start assigning specific, non-human identities to agents. This allows for better tracking and the ability to assign basic privileges. For example, an agent might be given the identity 'AI-Reporting-Agent-1' and granted read-only access to specific data sources. This stage also begins to introduce basic delegation, where one agent might act on behalf of another or perform tasks under the authority of a human identity.
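As a rough illustration of what a foundational identity registry might look like, here is a minimal Python sketch. The class, registry entries, scope names, and delegation field are assumptions made for illustration; they are not something IBM prescribes.

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A named, non-human identity with a coarse set of privileges."""
    agent_id: str                      # e.g. the article's 'AI-Reporting-Agent-1'
    scopes: set[str] = field(default_factory=set)  # coarse privileges, e.g. {'reports:read'}
    on_behalf_of: str | None = None    # basic delegation: the human or agent it acts for

# Hypothetical registry: each agent gets its own identity instead of a shared key.
REGISTRY = {
    "AI-Reporting-Agent-1": AgentIdentity(
        agent_id="AI-Reporting-Agent-1",
        scopes={"reports:read"},           # read-only access to reporting data
        on_behalf_of="alice@example.com",  # the human whose authority it acts under
    ),
}

def is_allowed(agent_id: str, required_scope: str) -> bool:
    """Coarse check: the agent exists and holds the scope it is asking to use."""
    identity = REGISTRY.get(agent_id)
    return identity is not None and required_scope in identity.scopes
```

Even a simple registry like this makes it possible to track which agent did what and to grant privileges per agent rather than per shared key.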

Step 3: Enhanced Identity

The 'enhanced' stage builds on the foundational steps by introducing more granular, context-aware controls. Miller highlighted that in this phase, organizations strive to assign identities that reflect the specific tasks and data an agent needs to access. This means applying the principle of least privilege, so agents receive only the minimum access necessary to perform their functions. The stage also emphasizes the use of systems like SIEM (Security Information and Event Management) for auditing and compliance, ensuring that agent actions are logged and can be reviewed. The goal is to know who or what is performing actions, and to what extent.
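One way to picture this stage is a fine-grained authorization check that emits a structured audit record for every decision, which a SIEM could then ingest. The sketch below is a simplified assumption of such a check; the policy entries, resource names, and log fields are hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")  # in practice this feed would be shipped to a SIEM

# Hypothetical least-privilege policy: each agent is granted only the specific
# actions it needs on the specific resources it needs them for.
POLICY = {
    "AI-Reporting-Agent-1": {("read", "sales_db.quarterly_summary")},
}

def authorize(agent_id: str, action: str, resource: str) -> bool:
    """Check fine-grained access and log an audit record for every decision."""
    allowed = (action, resource) in POLICY.get(agent_id, set())
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "resource": resource,
        "decision": "allow" if allowed else "deny",
    }))
    return allowed
```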

Step 4: Adaptive Identity

The most mature stage, 'adaptive identity,' focuses on continuous authorization and real-time risk assessment. Miller explained that in this phase, AI agents are treated more like dynamic entities within the system. Their access and permissions are not static but can change based on the context of their actions, the sensitivity of the data they are accessing, and real-time risk signals. This includes concepts like risk-based re-authorization and real-time revocation of credentials if suspicious activity is detected. Essentially, the system continuously monitors and verifies the agent's trustworthiness and adjusts its access accordingly.
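A rough sketch of what continuous, risk-based authorization could look like follows. The risk formula, thresholds, and signal names are invented for illustration and are not from the presentation; they simply show access being decided per request rather than fixed at credential issuance.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    """Signals evaluated on every request, not just when credentials are issued."""
    data_sensitivity: float   # 0.0 (public) .. 1.0 (highly sensitive)
    anomaly_score: float      # 0.0 (normal behaviour) .. 1.0 (highly suspicious)

REVOKED: set[str] = set()     # agents whose credentials have been pulled in real time

def continuous_authorize(agent_id: str, ctx: AccessContext) -> str:
    """Return 'allow', 'reauthorize', or 'deny' based on real-time risk signals.
    (Weights and thresholds are illustrative assumptions.)"""
    if agent_id in REVOKED:
        return "deny"
    if ctx.anomaly_score > 0.9:
        REVOKED.add(agent_id)     # real-time revocation on clearly suspicious activity
        return "deny"
    risk = 0.6 * ctx.anomaly_score + 0.4 * ctx.data_sensitivity
    if risk > 0.5:
        return "reauthorize"      # risk-based re-authorization, e.g. step-up or human approval
    return "allow"
```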

Key Risks and Mitigation

Miller emphasized that moving through these maturity stages helps mitigate significant risks associated with AI agents. A primary concern is accountability: when AI agents perform actions, it is crucial to know which agent performed them and why. The 'ad hoc' stage, with its lack of clear identities, makes accountability difficult. By assigning identities and logging actions, organizations can trace operations back to specific agents.

Another critical risk is preventing abuse. Without proper controls, AI agents with excessive privileges can be exploited, either maliciously or unintentionally, to access sensitive data or perform unauthorized actions. Miller stressed the importance of the 'least privilege' principle and fine-grained, contextual access, as seen in the 'enhanced' and 'adaptive' stages, to prevent such misuse.

He also touched on the challenges of non-human identity management. Unlike human users, who have known attributes and regular review cycles, AI agents can be numerous and ephemeral, and their access needs can change rapidly. The model aims to provide a structured way to manage these characteristics, ensuring that agents have the access they need to perform their tasks without compromising security.

By adopting a phased approach to AI IAM maturity, organizations can systematically build more secure and manageable AI deployments, ensuring that AI agents operate responsibly and within defined boundaries.
