Cloudflare is betting big on the rise of AI agents, officially unveiling its vision for the agentic cloud during its recent Agents Week 2026. The company launched a raft of new services and features designed to provide the infrastructure for agents to operate at scale, addressing demands of AI workloads that differ significantly from those of traditional applications.
The company’s announcement highlights the explosive growth of agents in tasks ranging from coding and customer support to complex research. As CTO Dane Knecht and VP of Product Rita Kozlov noted, the potential scale is enormous, requiring compute capacity for tens of millions of simultaneous sessions.
The traditional cloud’s one-app-serves-many-users model is insufficient for this new paradigm. Cloudflare’s strategy centers on re-architecting its platform, originally built on its Workers serverless compute, to accommodate these new demands. This includes providing flexible compute environments, robust security, essential agent toolkits, and a streamlined path from prototype to production.
Compute for Agents
Addressing the diverse needs of AI agents, Cloudflare introduced new compute environments. These range from full operating system capabilities for complex package installations to lightweight isolates that can start in milliseconds.
Artifacts offers Git-compatible, versioned storage for code and data, supporting tens of millions of repositories. Cloudflare Sandboxes provide persistent, isolated environments akin to actual computers for AI agents, complete with shells and file systems.
To enhance security for Sandboxes, Outbound Workers act as programmable, zero-trust egress proxies, enabling dynamic security policies and credential injection without exposing sensitive tokens. Durable Objects in Dynamic Workers allow AI-generated applications to have their own isolated SQLite databases, facilitating stateful code execution.
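Cloudflare has not published the Outbound Workers API, but the credential-injection pattern it describes can be sketched generically: the sandboxed agent issues plain requests, and a proxy layer decides per-host whether to allow egress and attaches server-side credentials the agent never sees. The policy table, hostnames, and header choice below are assumptions for illustration only.

```javascript
// Illustrative zero-trust egress sketch, not the Outbound Workers API.
// Allowed egress hosts map to server-side credentials; the sandboxed agent
// never holds these tokens, the proxy attaches them on the way out.
const EGRESS_POLICY = new Map([
  ["api.example.com", { allow: true, token: "srv-token-123" }],
  ["internal.example.com", { allow: false }],
]);

// Rewrite an outbound request: block disallowed hosts, inject credentials
// for allowed ones. Returns null when the egress policy denies the request.
function applyEgressPolicy(request) {
  const url = new URL(request.url);
  const policy = EGRESS_POLICY.get(url.hostname);
  if (!policy || !policy.allow) return null; // deny by default
  const headers = new Headers(request.headers);
  headers.set("Authorization", `Bearer ${policy.token}`);
  return new Request(request, { headers });
}
```

The key property is that a prompt-injected agent cannot exfiltrate a token it never possessed: the secret exists only on the proxy side of the boundary.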
Cloudflare Workflows, the durable execution engine for multi-step applications, has been re-architected to support 50,000 concurrent executions, scaling for durable background agents.
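Workflows expresses a multi-step application as named, retryable steps whose results survive restarts. The sketch below imitates that shape with a local stand-in for the step runner so it runs anywhere; the retry loop and in-memory memoization are simplifications of the production engine, not its implementation.

```javascript
// Minimal stand-in for a durable-execution step runner, in the spirit of
// Workflows-style step.do(name, fn): each named step is retried on failure,
// and its result is memoized so a replay skips already-completed work.
function makeStepRunner({ maxAttempts = 3 } = {}) {
  const completed = new Map(); // step name -> cached result
  return {
    async do(name, fn) {
      if (completed.has(name)) return completed.get(name); // replay: skip
      let lastError;
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          const result = await fn();
          completed.set(name, result);
          return result;
        } catch (err) {
          lastError = err;
        }
      }
      throw lastError;
    },
  };
}

// A two-step background-agent task written against the runner.
async function runAgentTask(step) {
  const plan = await step.do("plan", async () => ["fetch", "summarize"]);
  return step.do("execute", async () => `ran ${plan.length} actions`);
}
```

Because completed steps are memoized, re-running the task after a crash resumes from the last finished step rather than repeating side effects, which is what makes this model suitable for long-lived background agents.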
Security as Default
Security is paramount as agents gain autonomy and access to internal networks. Cloudflare Mesh offers secure, private network access for users, nodes, and AI agents, integrating with Workers VPC for scoped access to private resources.
Managed OAuth for Access simplifies agent integration with internal applications by adopting RFC 9728 for secure authentication. Cloudflare is also enhancing security for non-human identities with scannable API tokens, improved OAuth visibility, and resource-scoped permissions, promoting a least-privilege architecture.
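RFC 9728 (OAuth 2.0 Protected Resource Metadata) lets an agent discover, from the protected resource itself, which authorization server and scopes to use: the resource serves a JSON document at the well-known path `/.well-known/oauth-protected-resource`. The hostnames and scope names below are illustrative, not Cloudflare's.

```json
{
  "resource": "https://app.internal.example.com",
  "authorization_servers": ["https://auth.example.com"],
  "scopes_supported": ["tickets:read", "tickets:write"],
  "bearer_methods_supported": ["header"]
}
```

With this metadata in place, an agent that hits the application unauthenticated can bootstrap the full OAuth flow automatically instead of being hand-configured per application.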
The company shared its internal strategy for governing these agents, emphasizing a reference architecture for enterprise deployments using Access, AI Gateway, and MCP server portals.
The Agent Toolbox
Capable agents require models, tools, and context. Cloudflare announced primitives for inference, search, memory, voice, email, and browser interaction.
Project Think previews the next generation of the Agents SDK, aiming for a platform where agents can think, act, and persist. An experimental voice pipeline enables real-time voice interactions for agents.
The Cloudflare Email Service, now in public beta, allows agents to send, receive, and process emails natively. Cloudflare’s AI platform is evolving into a unified inference layer supporting over 14 model providers, with new features including Workers bindings for third-party models and multimodal capabilities.
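One concrete use of native email processing is triaging inbound mail to the right agent before any model is invoked. The routing rules, mailbox names, and agent names below are assumptions sketching that glue logic; they are not part of the Email Service API.

```javascript
// Illustrative triage for agent-handled inbound mail: pick an agent and a
// priority from the destination address and subject line. The rules and
// agent names here are invented for illustration.
function routeInboundEmail({ to, subject }) {
  if (to.startsWith("support@")) {
    return /refund|billing/i.test(subject)
      ? { agent: "billing-agent", priority: "high" }
      : { agent: "support-agent", priority: "normal" };
  }
  return { agent: "triage-agent", priority: "low" };
}
```
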
To run large language models efficiently, Cloudflare developed a custom technology stack and an inference-time compression system called Unweight, reducing model footprint by up to 22% for faster, cheaper inference.
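Unweight's internals are not public, so as a generic illustration of how inference-time compression shrinks a model's footprint, the sketch below applies symmetric int8 quantization to a weight tensor, trading a small amount of precision for a roughly 4x smaller representation per tensor. This is a standard technique used for illustration; the announcement's 22% figure refers to Unweight's overall result, not this method.

```javascript
// Symmetric int8 quantization: scale weights into [-127, 127] by the
// tensor's absolute maximum, storing one byte per weight plus one scale.
function quantizeInt8(weights) {
  const absMax = weights.reduce((m, w) => Math.max(m, Math.abs(w)), 0);
  const scale = absMax / 127 || 1; // avoid divide-by-zero for all-zero tensors
  const q = Int8Array.from(weights, (w) => Math.round(w / scale));
  return { q, scale };
}

// Recover approximate float weights at inference time.
function dequantizeInt8({ q, scale }) {
  return Float32Array.from(q, (v) => v * scale);
}
```
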
Agent Memory provides AI agents with persistent memory, enabling them to learn and recall information. AI Search offers a dedicated search primitive for agents, supporting dynamic instance creation and hybrid retrieval.
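Hybrid retrieval generally means blending a lexical signal with a vector-similarity signal so that exact keyword matches and semantic neighbors both surface. The sketch below shows that idea in its simplest form; the scoring functions and the fifty-fifty weighting are assumptions for illustration, not AI Search's actual ranking.

```javascript
// Fraction of a document's words that appear in the query terms.
function keywordScore(query, doc) {
  const terms = new Set(query.toLowerCase().split(/\s+/));
  const words = doc.text.toLowerCase().split(/\s+/);
  const hits = words.filter((w) => terms.has(w)).length;
  return words.length ? hits / words.length : 0;
}

// Cosine similarity between two embedding vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Blend both signals and rank documents by the combined score.
function hybridSearch(query, queryVec, docs, alpha = 0.5) {
  return docs
    .map((doc) => ({
      doc,
      score: alpha * keywordScore(query, doc) +
             (1 - alpha) * cosine(queryVec, doc.vec),
    }))
    .sort((a, b) => b.score - a.score);
}
```
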
Browser Run, formerly Browser Rendering, now offers Live View, Human in the Loop, and enhanced concurrency for AI agents interacting with web content. This comprehensive suite of tools forms the foundation for robust agentic cloud infrastructure.
Prototype to Production and the Agentic Web
Cloudflare is streamlining the developer experience with a unified CLI, cf, and Local Explorer for debugging. Agent Lee, an in-dashboard agent, shifts the interface paradigm to prompt-based interaction for stack management.
Flagship, a native feature flag service, offers sub-millisecond evaluation by leveraging KV and Durable Objects. Integrations with PlanetScale allow for deploying Postgres and MySQL databases with Cloudflare Workers.
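Flagship's evaluation API is not detailed in the announcement, but sub-millisecond flag evaluation typically works by hashing the user ID locally rather than calling out to a service: a deterministic hash buckets each user into or out of a percentage rollout. The flag shape and hash choice below are assumptions illustrating that pattern.

```javascript
// Deterministic bucket in [0, 100) via a 32-bit FNV-1a hash of the input.
function hashToBucket(str) {
  let h = 2166136261; // FNV-1a offset basis
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 16777619); // FNV prime
  }
  return (h >>> 0) % 100;
}

// A user sees the flag when their bucket falls under the rollout percentage.
// Hashing flag key + user ID keeps rollouts independent across flags.
function evaluateFlag(flag, userId) {
  if (!flag.enabled) return false;
  return hashToBucket(`${flag.key}:${userId}`) < flag.rolloutPercent;
}
```

Because the hash is deterministic, a given user gets a stable answer across requests without any stored assignment, which is what makes purely local, sub-millisecond evaluation possible.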
The Cloudflare Registrar API, now in beta, enables developers and agents to register domains directly from their workflow.
As agents proliferate, the web itself needs to adapt. Cloudflare introduced the Agent Readiness score to help site owners assess their agent compatibility. Redirects for AI Training allows content owners to direct verified crawlers to canonical pages without origin changes.
Cloudflare also detailed network performance updates, including a migration to a Rust-based architecture called FL2, and previewed support for shared compression dictionaries to improve page load times.
This week's launches lay the groundwork for Cloudflare’s vision of the agentic cloud, with development continuing as the needs of AI agents, and the infrastructure that powers them, evolve.
