"MCP is useful as a common language for declaring tools for AI agents," stated Jack Wotherspoon, a Python Developer Advocate at Google Cloud, during a recent Google Cloud Tech interview. This insight encapsulates the essence of the Model Context Protocol (MCP), an open standard developed by Anthropic, designed to simplify how large language models (LLMs) and AI agents interact with external systems. Wotherspoon, speaking with Cloud Developer Advocate Martin Omander, illuminated how MCP, particularly when deployed on serverless platforms like Google Cloud Run, offers a streamlined approach to building more capable and autonomous AI agents.
The discussion, part of Google Cloud's "Serverless Expeditions" series, centered on a practical demonstration of integrating MCP tools with Google Cloud Run. Wotherspoon explained that MCP standardizes the provision of context to LLMs, primarily through "tools" that empower agents to perform real-world actions. These actions can range from calling external APIs and querying databases to executing custom code, essentially extending the agent's capabilities beyond its core language model.
The utility of MCP extends to two primary audiences. For API providers, it offers a method to package existing APIs into MCP tools, making them inherently more discoverable and usable by AI agents. Conversely, developers building AI-assisted IDEs or agentic chatbots find MCP invaluable as it simplifies the process of enabling their agents to leverage these external APIs or perform actions on their behalf. This symbiotic relationship fosters a richer ecosystem of AI-driven applications.
Consider Alice, a provider of sprinkler systems, who has an API to generate installation quotes. She deploys an MCP server on Google Cloud Run, encapsulating her API's functionality within an MCP tool. David, a lawn care provider, then develops an AI chatbot using Google's Agent Development Kit (ADK), also deployed on Cloud Run. David's chatbot can now integrate Alice's MCP tool. Eve, a customer, interacts with David's chatbot, asking for lawn care tips or, crucially, "What would a new sprinkler system cost?"
David's chatbot, powered by the ADK, intelligently recognizes the customer's intent to get a quote. Instead of David having to manually code complex API calls, the agent simply points to Alice's MCP server URL. The agent then dynamically understands the `calculate_quote` tool, its required inputs (square meters, country code), and its expected output. This abstraction is a core insight: "He just pointed the agent to Alice's MCP server URL, and it figured out the rest." The chatbot prompts Eve for the necessary details, then calls Alice's MCP tool, which in turn interacts with Alice's underlying API to retrieve a quote. The result is a seamless, conversational experience for the end-user, powered by interoperable AI tools.
The technical implementation for Alice's MCP server, as demonstrated, is remarkably straightforward. Using the FastMCP Python library, she simply imports `FastMCP` and decorates her `calculate_quote` function with `@mcp.tool`. That one decorator is all it takes to expose the function over the MCP protocol, making it accessible to AI agents. The function itself contains the business logic for calculating the quote based on country-specific pricing and lawn size.
For David's agent, the integration is equally elegant. The agent is configured with an `MCPToolset` that specifies the URL of Alice's MCP server. This single URL is sufficient for the agent to discover all available tools and their descriptions, eliminating the need for David to hardcode input/output parameters or detailed API specifications. This highlights a second core insight: MCP provides a high-level, semantic description of tools, abstracting away low-level implementation details that are typically found in specifications like OpenAPI. While OpenAPI offers a "low-level blueprint" including HTTP status codes, MCP focuses on the functional aspect—what the tool *does* and what arguments it *needs*—making it more directly consumable by AI agents.
The choice of Google Cloud Run for deploying both Alice's MCP server and David's agentic chatbot underscores a third critical insight: serverless platforms are ideal for AI infrastructure. Cloud Run handles the underlying infrastructure, scaling, and configuration, allowing developers to focus solely on their code. This "hands-off" approach accelerates development cycles and reduces operational overhead, a significant advantage in the fast-paced AI landscape. Furthermore, for secure service-to-service communication, MCP supports standard authentication methods like OAuth, and Cloud Run's native OIDC token integration provides a robust mechanism for agent-to-tool authentication. Even existing APIs can be easily exposed via MCP by generating an MCP server from an OpenAPI specification or by writing a small proxy using FastMCP.
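The agent-to-tool authentication described above can be sketched with the google-auth library: the caller mints an OIDC identity token whose audience is the target Cloud Run service and sends it as a bearer token. The server URL here is hypothetical, and the token fetch itself requires Application Default Credentials (such as the service account attached to the calling Cloud Run service), so it only succeeds inside a GCP environment.

```python
import google.auth.transport.requests
import google.oauth2.id_token


def bearer_header(token: str) -> dict:
    """Authorization header that Cloud Run's IAM layer checks."""
    return {"Authorization": f"Bearer {token}"}


def id_token_for(audience: str) -> str:
    """Mint an OIDC identity token for a target Cloud Run service.

    Uses Application Default Credentials, e.g. the service account
    attached to the calling service.
    """
    request = google.auth.transport.requests.Request()
    return google.oauth2.id_token.fetch_id_token(request, audience)


if __name__ == "__main__":
    # Hypothetical URL of the private MCP server being called.
    audience = "https://alice-mcp.a.run.app"
    headers = bearer_header(id_token_for(audience))
```

With Cloud Run's "require authentication" setting enabled on Alice's service, only callers presenting a valid token for that audience (and holding the invoker role) get through.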

