The era of AI agents orchestrating multi-step workflows across disparate systems is here, but it’s a governance nightmare. Databricks is stepping in with an expanded AI Gateway, aiming to bring order to the chaos of agentic AI. This updated platform provides a unified governance layer, tackling the critical need for control and auditability in increasingly complex AI deployments.
Agentic AI, where models interact with tools, APIs, and other systems to complete tasks, presents significant challenges. Traditional governance tools, built for siloed applications, fall short. Databricks' approach aims to span the full lifecycle of an agent's actions, from LLM access to external system interactions.
Unified Control for Complex Workflows
The core of the update is extending AI Gateway to manage how LLMs interact with tools like APIs and coding assistants. This includes new support for governing MCP (Model Context Protocol) usage, allowing organizations to dictate which agents can access which external systems and to monitor the data those interactions touch. This move is a significant step toward comprehensive LLM guardrails, going beyond simple LLM access control to encompass the entire agentic ecosystem.
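To make the idea concrete, the kind of policy a gateway layer enforces for MCP tool calls can be sketched as an allowlist plus an audit trail. This is a minimal illustrative sketch, not Databricks' actual API; all names here are hypothetical.

```python
# Hypothetical sketch of gateway-style governance for MCP tool calls:
# an allowlist decides which agents may invoke which tools, and every
# decision is recorded for auditability. Not Databricks' actual API.
from dataclasses import dataclass, field

@dataclass
class MCPPolicy:
    # Maps agent id -> set of MCP tools it may invoke.
    allowed_tools: dict[str, set[str]] = field(default_factory=dict)
    audit_log: list[tuple[str, str, bool]] = field(default_factory=list)

    def authorize(self, agent_id: str, tool: str) -> bool:
        allowed = tool in self.allowed_tools.get(agent_id, set())
        # Record every decision, permitted or denied.
        self.audit_log.append((agent_id, tool, allowed))
        return allowed

policy = MCPPolicy(allowed_tools={"sales-agent": {"crm.lookup", "email.send"}})
policy.authorize("sales-agent", "crm.lookup")      # permitted
policy.authorize("sales-agent", "payments.refund")  # denied and logged
```

Centralizing the check like this is what gives operators one place to both restrict and observe agent behavior, rather than trusting each agent to police itself.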
Databricks is enabling users to set up LLM endpoints and MCP servers in seconds, supporting a range of models from Anthropic, OpenAI, Google, and open-source options. The key benefit is consistent policy application across providers, eliminating the need for duplicate configurations.
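In practice, a provider-agnostic endpoint boils down to one definition that names the model and its provider, so the same guardrail policies attach regardless of vendor. The sketch below builds such a payload; the field names are assumptions loosely modeled on Databricks' serving API, not a verified schema.

```python
# Illustrative payload for a gateway endpoint fronting an external
# provider. Field names are assumptions, not a verified schema.
def make_endpoint_config(name: str, provider: str, model: str) -> dict:
    return {
        "name": name,
        "config": {
            "served_entities": [
                {
                    "external_model": {
                        "name": model,          # provider's model id
                        "provider": provider,   # e.g. anthropic, openai
                        "task": "llm/v1/chat",
                    }
                }
            ]
        },
    }

# Same shape for any provider, so policies apply uniformly and no
# per-provider duplicate configuration is needed.
anthropic_ep = make_endpoint_config("support-chat", "anthropic", "claude-sonnet")
openai_ep = make_endpoint_config("support-chat-alt", "openai", "gpt-4o")
```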
Fine-Grained Permissions and Guardrails
Preventing unwanted actions is paramount. AI Gateway introduces fine-grained access control for tools, supporting on-behalf-of user execution for MCP calls. This ensures agents operate with the same permissions as the requesting user, preventing unauthorized data access.
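The on-behalf-of model can be illustrated with a small sketch (all names hypothetical): rather than the agent using its own broad service identity, each tool call is authorized against the entitlements of the user who made the request.

```python
# Hypothetical sketch of on-behalf-of execution: an MCP tool call is
# checked against the requesting user's grants, not a service identity.
USER_GRANTS = {
    "alice": {"sales_db.read"},
    "bob": {"sales_db.read", "hr_db.read"},
}

def run_tool_on_behalf_of(user: str, required_grant: str, action):
    # Deny the call unless the *user* holds the required grant.
    if required_grant not in USER_GRANTS.get(user, set()):
        raise PermissionError(f"{user} lacks {required_grant}")
    return action()

# An agent acting for Alice can read sales data...
rows = run_tool_on_behalf_of("alice", "sales_db.read", lambda: ["row1"])

# ...but cannot reach HR data Alice herself could not access.
denied = False
try:
    run_tool_on_behalf_of("alice", "hr_db.read", lambda: ["secret"])
except PermissionError:
    denied = True
```

The design point is that the agent never becomes a privilege-escalation path: its reach is always capped by the requesting user's own permissions.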