Cloudflare Shrinks Web Bloat with Shared Dictionaries

Cloudflare rolls out shared compression dictionaries to slash web asset transfer sizes, improve load times, and combat increasing data bloat.

Shared dictionaries enable faster web loading by sending only the differences in updated assets. Image: Cloudflare

The web is getting heavier, and agents are making it download more often. Cloudflare is introducing shared compression dictionaries, a technology designed to combat this escalating data bloat. By allowing browsers to leverage previously cached content as a reference, these dictionaries significantly shrink asset transfers, leading to faster page loads and less wasted bandwidth, especially for repeat visitors or those on slower connections.

Web pages have grown by 6-9% annually for a decade, fueled by richer frameworks and media. This trend shows no sign of slowing. Simultaneously, the rise of agents – from crawlers to AI development tools – is dramatically increasing how often web pages are requested and rebuilt. Agentic actors accounted for nearly 10% of Cloudflare's network requests in March 2026, a 60% year-over-year surge. This dual pressure of larger files and more frequent fetches strains traditional caching mechanisms.

The Problem: More Deploys, Less Caching

AI-assisted development accelerates product velocity, enabling teams to ship more frequently. However, this rapid iteration often breaks conventional caching. Even a minor one-line code change can lead to a new file name, forcing browsers to re-download entire JavaScript bundles. Traditional compression algorithms like Gzip or Brotli reduce the size of individual files but cannot account for the 95% of content a client might already have cached. This redundancy translates to wasted bandwidth and CPU cycles, with hardware increasingly becoming a bottleneck.

What Are Shared Dictionaries?

A shared compression dictionary acts as a cheat sheet between server and client. Instead of compressing a response from scratch, the server can inform the client, "you already have this part," and transmit only the new or modified sections. The client uses its cached reference to reconstruct the full file. This principle of compressing against known content is key to modern efficiency.

While Brotli includes a standard dictionary for common web patterns and Zstandard can be trained on custom content samples, Gzip has neither: it builds its reference window dynamically within each stream. Shared dictionaries take this a step further by using the previously cached version of a resource as the dictionary itself. For a JavaScript bundle with a single-line change, this can reduce a 500KB transfer down to a few kilobytes.
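The effect is easy to demonstrate with Python's standard zlib module, which supports preset dictionaries at the DEFLATE level. This is a simplified stand-in for the dcb/dcz formats the browser feature actually uses, but the principle is identical: compressing a new version against the old one collapses the transfer to roughly the size of the change.

```python
import random
import zlib

# Previous version of an asset, already cached by the client.
# Word soup makes it realistic: compressible, but not trivially repetitive.
rng = random.Random(0)
words = ["function", "return", "const", "render", "props", "state",
         "import", "export", "await", "fetch", "module", "handler"]
old = (" ".join(rng.choice(words) for _ in range(2000))).encode()

# New version: the same file plus a small change.
new = old + b"\nexport const VERSION = '2.0.1';\n"

# Baseline: compress the new version from scratch.
plain = zlib.compress(new, 9)

# Delta-style: compress the new version with the old one as a preset dictionary.
comp = zlib.compressobj(level=9, zdict=old)
delta = comp.compress(new) + comp.flush()

# The client reconstructs the full file from its cached copy plus the tiny delta.
decomp = zlib.decompressobj(zdict=old)
assert decomp.decompress(delta) == new

print(f"full: {len(new)}B  compressed: {len(plain)}B  delta: {len(delta)}B")
```

The delta is a fraction of the conventionally compressed size because the compressor can back-reference the entire old version instead of rediscovering redundancy from scratch.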


Delta Compression: Sending Only the Diff

Delta compression is the mechanism that turns a browser's existing cached file into a dictionary. The server initially serves a resource with a Use-As-Dictionary header, signaling the browser to retain it. On subsequent requests, the browser sends an Available-Dictionary header, indicating what it has. The server then compresses the new version against the old one, sending only the difference.

This is particularly effective for versioned assets like JavaScript bundles, CSS files, and framework updates. Each new version is compressed against the previous one, creating a persistent chain of small diffs rather than full file re-downloads. Savings compound across the entire release history.
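A toy origin handler makes the exchange concrete. Everything below is a simplified sketch: the real Available-Dictionary header carries a structured SHA-256 digest, the real dcb/dcz encodings are Brotli- and Zstandard-based rather than the zlib stand-in used here, and a production server would verify hashes and negotiate Accept-Encoding.

```python
import hashlib
import zlib

# Previously served versions, keyed by the hash the browser echoes back
# in Available-Dictionary (header format simplified for the sketch).
dictionary_store: dict[str, bytes] = {}

def serve(path: str, body: bytes,
          request_headers: dict[str, str]) -> tuple[dict[str, str], bytes]:
    """Advertise the response as a future dictionary and, when the client
    already holds an older version, send only a delta against it."""
    response_headers = {
        # Ask the browser to retain this response as a dictionary for this path.
        "Use-As-Dictionary": f'match="{path}"',
    }
    advertised = request_headers.get("Available-Dictionary")
    old = dictionary_store.get(advertised) if advertised else None
    if old is not None:
        # Delta-compress the new version against the cached old version.
        comp = zlib.compressobj(zdict=old)
        payload = comp.compress(body) + comp.flush()
        response_headers["Content-Encoding"] = "dcz"  # zlib stands in for real dcz
    else:
        payload = body  # graceful fallback: full response for first-time clients
    dictionary_store[hashlib.sha256(body).hexdigest()] = body
    return response_headers, payload
```

Each release is stored as the dictionary for the next, so repeat visitors walk the chain of small diffs the article describes instead of re-downloading full bundles.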

Why the Wait? Past Challenges and New Standards

Previous attempts at shared dictionaries, like Google's Shared Dictionary Compression for HTTP (SDCH) in 2008, faced significant hurdles. Security vulnerabilities such as CRIME and BREACH allowed attackers to infer sensitive data by observing compression size changes. Architectural issues, including conflicts with the Same-Origin Policy and CORS, further hindered adoption.

Chrome ultimately removed SDCH support in 2017. However, the modern standard, RFC 9842: Compression Dictionary Transport, addresses these shortcomings. It mandates that dictionaries are only usable on responses from the same origin, mitigating side-channel attack vectors. While Chrome and Edge now support this, Firefox is still implementing it, and broad cross-browser adoption is ongoing.

Implementing shared dictionaries remains complex. Origins must manage dictionary generation, header serving, and on-the-fly delta compression, with graceful fallbacks. Caching also becomes intricate, as each dictionary version creates a separate cache variant, potentially increasing storage and reducing hit rates mid-deployment. This complexity is precisely why edge solutions are ideal.

Cloudflare's Phased Rollout

Cloudflare is enabling shared dictionary support across its platform in three phases, recognizing the complexities involved. This approach aims to make the technology accessible and easy to implement for everyone.

Phase 1: Passthrough Support

Currently in development, Phase 1 focuses on passthrough support. Cloudflare will forward necessary headers (Use-As-Dictionary, Available-Dictionary) and encodings (dcb, dcz) without modification. Cache keys will be extended to vary on Available-Dictionary and Accept-Encoding for correct caching.

This phase targets customers who manage their own dictionaries at the origin. An open beta is scheduled for April 30, 2026. Requirements include a Cloudflare zone with the feature enabled, an origin serving dictionary-compressed responses with correct headers, and visitor browsers supporting dcb/dcz and sending Available-Dictionary (currently Chrome 130+ and Edge 130+, with Firefox in progress).

Internal testing showed dramatic results: a 272KB JavaScript bundle, compressed with Gzip to 92.1KB, dropped to just 2.6KB with shared dictionary compression (dcz) against the previous version. This represents a 97% reduction over the Gzip-compressed asset. Download times improved by up to 89% on cache hits.

Phase 2: Edge Management

In Phase 2, Cloudflare will manage the dictionary process. Customers will designate assets as dictionaries via rules, and Cloudflare will handle injecting headers, storing dictionary bytes, delta-compressing new versions, and serving the correct variants. The origin will serve standard responses, offloading complexity to the edge. A live demo, "Can I Compress (with Dictionaries)?", illustrates this by reducing a 94KB bundle with minor changes to approximately 159 bytes.

Phase 3: Automatic Generation

The final phase aims for fully automatic dictionary generation. Cloudflare will identify versioned resources across its network based on traffic patterns – where successive responses share significant content but differ in hash. It will then automatically store previous versions as dictionaries and compress subsequent ones. This eliminates customer configuration and maintenance, offering significant performance and bandwidth benefits for all users.

This automatic generation is technically challenging, requiring careful handling of private data and identification of optimal candidates. Cloudflare's network visibility, edge storage capabilities, and RUM data provide the necessary components to make this feasible.

© 2026 StartupHub.ai. All rights reserved.