The Agentic Standard is Here: MCP’s Journey from Hackathon to Linux Foundation

The path to truly capable, enterprise-grade AI agents requires interoperability and standardization, compelling even fierce competitors to collaborate on the foundational infrastructure. The recent announcement that Anthropic’s Model Context Protocol (MCP)—a simple, open standard for connecting AI applications to data and tools—is joining the newly formed Agentic AI Foundation (AAIF) under the Linux Foundation marks a pivotal moment in the industry’s standardization effort. David Soria Parra (MCP lead, Anthropic) recently sat down with Swyx and Alessio Fanelli of Latent Space, alongside Nick Cooper (OpenAI), Brad Howes (Block / Goose), and Jim Zemlin (Linux Foundation CEO), to discuss the protocol’s explosive one-year journey, its rapid enterprise adoption, and the surprising collaborative effort behind establishing a neutral ground for agentic systems.

What began as a local-only experiment quickly scaled into a de facto standard. Soria Parra noted that the protocol saw "crazy adoption" initially through early builders and then hit a major inflection point around April, when leaders from Microsoft, Google, and OpenAI publicly endorsed its use. This rapid timeline underscores the urgent need for a common communication layer that connects large language models (LLMs) to external tools and data sources. The evolution was swift, moving from simple local `stdio` servers to a remote Streamable HTTP transport capable of handling complex, long-running sessions.
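Both transports carry the same JSON-RPC 2.0 messages; only the framing changes (newline-delimited JSON over stdio, HTTP request/response bodies for the Streamable HTTP transport). A minimal sketch of an MCP `tools/call` request, with an illustrative tool name and arguments:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tool-call request as a JSON-RPC 2.0 message.

    The same payload travels over stdio (one message per line) or over
    the Streamable HTTP transport (as an HTTP request body).
    """
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # MCP's standard method for invoking a tool
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(message)

# "search_docs" is an illustrative tool name, not part of the protocol.
wire = make_tool_call(1, "search_docs", {"query": "MCP transports"})
print(json.loads(wire)["method"])  # tools/call
```

The key point is that the protocol layer stays identical across transports, which is what let the ecosystem move from local experiments to remote servers without breaking clients.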

Enterprise adoption quickly forced the protocol to mature beyond initial assumptions. The key friction point centered on robust authentication, necessitating a pivot from the initial "March spec" to a more resilient June iteration that cleanly separated resource servers from identity providers.

This separation was critical because, as Soria Parra learned through early implementation, combining the authentication server and the resource server into a single MCP component proved "unusable" for large organizations. Enterprises require authentication against centralized identity providers (IDPs) like Okta or Auth0. The June specification fixed this by adhering strictly to OAuth 2.1 principles, recognizing that the authentication step must be handled by external, enterprise-grade infrastructure, leaving the MCP server to focus purely on resource provision and agent communication.
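In practice, the separation means the MCP server never issues tokens itself: on an unauthorized request it points the client at an external identity provider via protected-resource metadata (in the style of RFC 9728), and otherwise just serves resources. A sketch of that flow, with placeholder URLs:

```python
# The MCP server acts purely as an OAuth 2.1 resource server. Token
# issuance lives at an external IdP (e.g. an Okta or Auth0 tenant).
# All URLs below are illustrative placeholders.
PROTECTED_RESOURCE_METADATA = {
    "resource": "https://mcp.example.com",
    "authorization_servers": ["https://idp.example.com"],
}

def handle_request(has_valid_token: bool) -> dict:
    if not has_valid_token:
        # 401 + WWW-Authenticate tells the client where to discover
        # the authorization server; the MCP server delegates identity.
        return {
            "status": 401,
            "headers": {
                "WWW-Authenticate": 'Bearer resource_metadata='
                '"https://mcp.example.com/.well-known/oauth-protected-resource"'
            },
        }
    # With a valid token, the server focuses purely on resource provision.
    return {"status": 200, "body": "resource served"}
```

This is why the combined design was "unusable" at scale: an enterprise can swap the IdP behind `authorization_servers` without touching the MCP server at all.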

The introduction of "Tasks" as a new primitive addresses the limitations of basic tool calling for increasingly complex agent workflows. Tasks allow for long-running, asynchronous operations, enabling agents to handle deep research or multi-step processes that persist beyond a single API call. Soria Parra described the intent: to enable "maybe even agent-to-agent communication," recognizing that true productivity gains come from autonomous, persistent agents that work while the user sleeps. Unlike a transient tool call, a Task acts as a durable container for complex, multi-step work.
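The contrast with a transient tool call can be sketched as a small state machine that accumulates steps and outlives any single request. The status names and fields here are assumptions for illustration, not the spec's exact schema:

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class Task:
    """Illustrative container for long-running, multi-step agent work.

    A tool call returns once and is gone; a Task has an identity that a
    client can poll across sessions (e.g. overnight deep research).
    """
    task_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    status: str = "working"  # illustrative: working -> completed / failed
    steps: list = field(default_factory=list)
    result: object = None

    def record_step(self, note: str) -> None:
        self.steps.append(note)

    def complete(self, result) -> None:
        self.status = "completed"
        self.result = result

# The agent works while the user is away; the client checks in later.
task = Task()
task.record_step("gathered sources")
task.record_step("drafted summary")
task.complete({"summary": "deep research finished overnight"})
print(task.status)  # completed
```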

The formation of the Agentic AI Foundation (AAIF) under the Linux Foundation provides a crucial governance structure designed to ensure neutrality in a highly competitive space. The foundation’s mandate is to prevent vendor lock-in while curating meaningful, high-quality projects. Jim Zemlin, CEO of the Linux Foundation, remarked that he has "never seen this much day-one inbound interest in 22 years," signaling the industry's consensus on the necessity of this neutral layer.

The foundation is built on the principle of minimal governance overhead for the core protocol while encouraging rapid community development around tooling and applications. The technical steering committee, where Soria Parra continues to shepherd the protocol, functions as a mechanism for consensus-building, rather than unilateral decision-making. The challenge now lies in creating a robust ecosystem that balances openness with security—a dilemma particularly acute in the context of registries. The concept of an "npm for agents" is compelling, but the reality demands trust levels, code signing, and curated sub-registries (like Smithery or GitHub) to ensure compliance in highly regulated sectors such as financial services and healthcare.
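How trust levels, signing, and curated sub-registries might compose can be sketched as a filter over registry entries. Every field name below is invented for illustration; this is not the MCP registry's actual schema:

```python
# Hypothetical registry entry. A public registry holds everything; a
# regulated deployment (finance, healthcare) admits only entries that
# meet its trust bar, forming a curated sub-registry.
ENTRY = {
    "name": "example/payments-server",
    "version": "1.2.0",
    "artifact_sha256": "<sha256-of-package>",  # integrity hash placeholder
    "signature": {"signer": "example-org", "scheme": "code-signing"},
    "trust_level": "org-verified",  # vs. "community", "unreviewed"
}

def admitted(entry: dict, allowed_levels: set) -> bool:
    """Return True if the entry clears this deployment's trust policy."""
    return entry["trust_level"] in allowed_levels

print(admitted(ENTRY, {"org-verified"}))  # True
print(admitted(ENTRY, {"community"}))     # False
```

The design point is that openness and compliance need not conflict: the same entry can live in the open registry while stricter consumers see only the signed, verified subset.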

The collaboration between Anthropic, OpenAI, and Block in donating key projects like MCP and Block’s Goose coding agent to the AAIF highlights a shared understanding that foundational standards benefit the entire ecosystem, even if the participating companies are fierce competitors in the model space. This agreement focuses on technical principles over rigid roadmaps, ensuring that the core mechanisms remain stable and composable. Soria Parra noted that Anthropic itself dogfoods MCP heavily, using internal gateways and custom servers for applications like Slack summaries and employee surveys, demonstrating that the protocol was born from the practical need to scale developer tooling faster than the company grew.

Soria Parra clarified the relationship between connectivity protocols like MCP and model optimization techniques like "code mode" or skills. MCP serves purely as the communication layer, facilitating actions in the outside world. Model training, conversely, is an optimization layer that helps the model decide which actions to take and how to call them most efficiently. "I see it purely as an optimization," Soria Parra stated, emphasizing that the underlying primitives of the protocol remain stable even as models become exponentially more capable at utilizing them. This distinction is vital for understanding why competitors can agree on the communication standard while maintaining proprietary advantages in model performance. The fundamental goal of the AAIF and MCP is not to dictate model behavior, but to provide a robust, scalable, and secure interface for models to interact with the world, fostering a shared platform economy for agentic AI.