The National Institute of Standards and Technology (NIST) has launched the AI Agent Standards Initiative, a program designed to ensure autonomous AI agents can operate securely and interoperate across digital systems. The initiative seeks to foster industry-led standards and protocols, aiming to build public trust and accelerate the adoption of agentic AI.
AI agents, capable of tasks like coding, email management, and online shopping, promise significant productivity gains. However, their real-world utility hinges on their ability to interact reliably with external systems and data. This initiative aims to address concerns about fragmentation and adoption barriers.
Pillars of the Initiative
The AI Agent Standards Initiative will advance along three main pillars:
- Facilitating industry-led development of agent standards and U.S. leadership in international bodies.
- Fostering community-led open-source protocol development for agents.
- Advancing research in AI agent security and identity to promote trusted adoption.
NIST will solicit public input through Requests for Information (RFIs) and listening sessions. Responses to an RFI on AI Agent Security are due March 9, and a concept paper on AI Agent Identity and Authorization is open for comments until April 2.
In April, NIST's Center for AI Standards and Innovation (CAISI) will also host sector-specific listening sessions focusing on healthcare, finance, and education to identify barriers to AI adoption. These sessions will inform concrete projects aimed at spurring confident adoption.
The initiative underscores NIST's commitment to supporting the development of a trusted and interoperable AI agent ecosystem. Previously, CAISI issued an RFI seeking insights on securing AI agent systems, highlighting risks from adversarial data and model vulnerabilities. NIST's National Cybersecurity Center of Excellence (NCCoE) is also exploring standards-based approaches for identifying and authorizing AI agent actions, recognizing the growing autonomy of these systems and the risks that autonomy introduces.