Network Protocols & Compression
HTTP/2: The Current Baseline
HTTP/2 (standardized 2015 as RFC 7540, since revised as RFC 9113) remains the minimum acceptable protocol for any performance-conscious site. It introduced multiplexing (multiple concurrent requests over a single TCP connection), header compression (HPACK), and server push (now deprecated). If your site still serves over HTTP/1.1, upgrading to HTTP/2 is likely the single largest infrastructure improvement you can make.
Key HTTP/2 features relevant to performance: multiplexing eliminates the need for domain sharding and sprite sheets — the browser can send all requests over one connection without head-of-line blocking at the HTTP layer. Header compression (HPACK) reduces redundant header data, which is especially impactful for sites making many small API requests. Stream prioritization lets the browser signal which resources matter most, though the original complex tree-based priority model was inconsistently implemented and has been replaced by the simpler Extensible Priorities scheme (RFC 9218) used in HTTP/3.
HTTP/2 requires HTTPS (TLS) in practice — while the spec allows plaintext HTTP/2, no browser implements it. This is a feature, not a limitation: HTTPS is a hard requirement for modern web features (service workers, Brotli compression, many APIs) and a Google ranking signal.
Server push was HTTP/2’s most hyped feature but has been deprecated; Chrome removed support in 2022. (Pushes were typically triggered by Link: </style.css>; rel=preload response headers, with the nopush parameter available to opt a resource out.) The replacement is 103 Early Hints (covered in Section 17), which is simpler, more cache-friendly, and avoids the over-pushing problems that plagued server push.
HTTP/3 and QUIC: The Future is UDP
HTTP/3 (RFC 9114, finalized June 2022) is the most significant transport-layer change in the web’s history. It replaces TCP entirely with QUIC (RFC 9000), a transport protocol built on UDP that integrates TLS 1.3 directly at the transport layer. As of 2025–2026, HTTP/3 is enabled by default in browsers covering 95%+ of users, adopted by 34% of the top 10 million websites, and served by Google, Meta, Cloudflare, Akamai, and Shopify across their entire infrastructure.
Why QUIC matters — the key improvements over TCP:
No head-of-line blocking: HTTP/2’s fatal flaw is that while it multiplexes at the HTTP layer, it still runs over a single TCP connection. If one TCP packet is lost, all streams stall waiting for retransmission — the TCP layer can’t distinguish which stream the lost packet belonged to. QUIC provides independent streams at the transport layer, so packet loss in one stream doesn’t affect others. On lossy mobile networks, this is transformative.
Faster connection establishment: A new TCP + TLS 1.3 connection requires 2–3 round trips (TCP SYN/SYN-ACK + TLS handshake). QUIC combines transport and encryption into a single 1-RTT handshake. For returning visitors, 0-RTT resumption allows data to be sent immediately with zero round trips — the connection is essentially instant.
Connection migration: TCP connections are identified by the (source IP, source port, dest IP, dest port) tuple. Change any of these (switching from WiFi to cellular) and the connection breaks, requiring a full reconnection. QUIC connections use a connection ID, allowing seamless migration between networks without interruption — critical for mobile users.
Built-in encryption: Every QUIC connection is encrypted with TLS 1.3. There is no unencrypted HTTP/3 — security is mandatory, not optional.
Simpler prioritization: HTTP/3 uses the Extensible Priorities scheme (RFC 9218) with urgency levels and incremental hints, replacing HTTP/2’s complex and inconsistently implemented tree-based priority model.
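The round-trip arithmetic behind these handshake claims can be made concrete with a back-of-envelope sketch. This is illustrative only: real connections also pay DNS lookup and server processing time, and the RTT value below is an assumption, not a measurement.

```python
# Round trips spent before the first HTTP request byte can be sent.
# TCP handshake costs 1 RTT; TLS 1.2 adds 2 more, TLS 1.3 adds 1; QUIC
# folds transport and TLS 1.3 into one RTT, or zero on 0-RTT resumption.
SETUP_RTTS = {
    "TCP + TLS 1.2": 1 + 2,
    "TCP + TLS 1.3": 1 + 1,
    "QUIC (new connection)": 1,
    "QUIC (0-RTT resumption)": 0,
}

def setup_ms(rtt_ms: float) -> dict[str, float]:
    """Connection-setup latency for a given network round-trip time."""
    return {name: rtts * rtt_ms for name, rtts in SETUP_RTTS.items()}

for name, ms in setup_ms(80).items():  # 80 ms: a plausible mobile RTT
    print(f"{name:26s} {ms:5.0f} ms")
```

At an 80 ms RTT, the gap between the legacy stack and a resumed QUIC connection is the difference between 240 ms and 0 ms of pure waiting.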
When HTTP/3 helps most: mobile users on unstable networks, high-latency connections (rural, intercontinental), websites with many parallel resource requests (e-commerce product pages), returning visitors (0-RTT). When gains are minimal: low-latency wired connections with no packet loss, sites with few resources.
How to enable: most CDNs (Cloudflare, Akamai, Fastly, AWS CloudFront) support HTTP/3 out of the box, often behind a single toggle. For self-hosted setups, NGINX has supported HTTP/3 since 1.25.0 (May 2023), LiteSpeed has enabled it by default since 6.0.2, and Caddy since 2.6.0. The server advertises HTTP/3 availability via the Alt-Svc HTTP header; browsers negotiate the upgrade automatically.
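As a sketch of what the browser does with that header, here is a deliberately simplified Alt-Svc check; advertises_h3 is a hypothetical helper, and real parsing per RFC 7838 also handles quoting edge cases, additional parameters, and the special "clear" value.

```python
# Hypothetical helper: does an Alt-Svc response header value advertise
# an HTTP/3 endpoint? Entries are comma-separated; parameters within an
# entry (ma=..., persist=...) are semicolon-separated.
def advertises_h3(alt_svc: str) -> bool:
    for entry in alt_svc.split(","):
        protocol = entry.split("=", 1)[0].strip().strip('"')
        if protocol == "h3" or protocol.startswith("h3-"):  # h3-29 etc.
            return True
    return False

# A header value in the style served by CDN edges:
print(advertises_h3('h3=":443"; ma=86400, h3-29=":443"; ma=86400'))  # True
```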
Caveat: Some corporate firewalls and network appliances block or rate-limit UDP traffic, preventing HTTP/3. Browsers automatically fall back to HTTP/2 over TCP when QUIC fails, so enabling HTTP/3 is risk-free — worst case, nothing changes.
Resources:
- What is HTTP/3? — Cloudflare
- HTTP/3 vs HTTP/2 Performance — DebugBear
- HTTP/3 in the Wild — The New Stack / Catchpoint
- QUIC and HTTP/3: The Next Step — IJS Blog
- Examining HTTP/3 Usage One Year On — Cloudflare Radar
TLS 1.3: Faster, Simpler, Mandatory
TLS 1.3 (RFC 8446, finalized 2018) is the encryption layer for both HTTPS over TCP and QUIC/HTTP/3. It’s simpler, faster, and more secure than TLS 1.2:
1-RTT handshake (vs 2-RTT for TLS 1.2): The handshake completes in a single round trip, saving 50–150ms on typical connections.
0-RTT resumption: Returning visitors can send application data in the first flight, achieving zero-round-trip connection establishment (with important replay-attack caveats: 0-RTT data should be idempotent).
Removed insecure ciphers: TLS 1.3 eliminates RSA key exchange, CBC-mode ciphers, RC4, SHA-1, and other legacy cryptographic algorithms that were sources of vulnerabilities. Only AEAD ciphers (AES-GCM, ChaCha20-Poly1305) are permitted.
Encrypted handshake: Unlike TLS 1.2, where the certificate is sent in plaintext, TLS 1.3 encrypts most of the handshake, improving privacy.
TLS 1.3 is supported by all modern browsers and should be your minimum TLS version. Disable TLS 1.0 and 1.1 (deprecated by RFC 8996), and consider disabling TLS 1.2 for new deployments. Most CDNs handle TLS configuration automatically with best-practice defaults.
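For application code that terminates TLS itself, enforcing this floor is a one-liner with Python's standard ssl module. A minimal client-side sketch; server contexts are configured the same way.

```python
import ssl

# Client context that refuses anything below TLS 1.3; a server would
# apply the same setting to ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER).
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # drops TLS 1.0/1.1/1.2

# With TLS 1.3 as the floor, only AEAD suites (AES-GCM,
# ChaCha20-Poly1305) can be negotiated: TLS 1.3 defines no CBC or RC4 suites.
```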
Encrypted Client Hello (ECH), currently rolling out (Cloudflare is a major proponent), encrypts the SNI field in the TLS handshake — the last piece of plaintext metadata that reveals which website a user is visiting. ECH is a significant privacy improvement, preventing network observers from identifying visited sites based on the SNI.
Compression: Brotli, Zstandard, and the End of gzip-Only
HTTP text compression — applied to HTML, CSS, JavaScript, JSON, SVG, and other text-based formats — is one of the simplest and most effective performance optimizations. The landscape in 2026 has three tiers:
Brotli (Google, 2015) remains the gold standard for maximum compression ratio. It achieves 15–25% better compression than gzip thanks to a built-in static dictionary of common web patterns. Brotli has 96% browser support and, based on HTTP Archive data from 2024, is already used more than gzip for JavaScript and CSS delivery. The trade-off: Brotli’s highest compression levels (10–11) are significantly slower to compress than gzip, making them impractical for dynamic content in real-time. The practical approach: pre-compress static assets at Brotli level 11 at build time (one-time cost), and use level 4–6 for dynamic content or let your CDN handle it.
Zstandard (zstd) (Facebook/Meta, 2016) is the newcomer for web delivery. Cloudflare’s testing shows zstd compresses 42% faster than Brotli while achieving nearly equivalent compression ratios, and produces files 11.3% smaller than gzip at comparable speeds. Chrome added zstd support in version 123 (March 2024) and Firefox supports it, but Safari does not yet; the browser automatically falls back to Brotli or gzip when zstd isn’t supported. Meta serves its homepage with zstd when the browser supports it.
Zstd’s sweet spot is dynamic content (HTML, API responses) where compression speed directly impacts TTFB. For static assets where you can pre-compress at build time, Brotli at high levels still wins on ratio. The ideal strategy uses both: zstd for dynamic responses (fast compression, good ratio), Brotli for static assets (best ratio, pre-compressed).
gzip (1992) is the universal fallback. It’s supported everywhere and requires zero configuration on most servers. 11% of websites still serve content without any compression — if that’s you, enabling gzip is a quick win. But for any site already using gzip, switching to Brotli for static assets and zstd for dynamic content yields meaningful byte savings.
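The size of that quick win is easy to see with the standard library's gzip module. This uses a toy repetitive payload; real ratios depend heavily on the content.

```python
import gzip

# A repetitive JSON-ish payload stands in for typical text content.
payload = b'{"product": "widget", "price": 9.99, "in_stock": true}\n' * 200

compressed = gzip.compress(payload, compresslevel=6)  # common server default
ratio = len(compressed) / len(payload)
print(f"{len(payload)} -> {len(compressed)} bytes ({ratio:.1%} of original)")
```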
Shared Dictionary Compression is the newest frontier (March 2025+). An IETF standard (Compression Dictionary Transport) allows clients and servers to agree on custom compression dictionaries: the client sends an Available-Dictionary header with a hash of its best dictionary, and the server uses it for dictionary-aware Brotli (dcb) or Zstandard (dcz) compression. This is extraordinarily effective for incremental updates: if a JavaScript bundle changes by 1%, dictionary compression sends only the diff rather than the full re-compressed file. Chrome has experimental support, and the potential savings for repeat visitors are dramatic.
What NOT to compress: Images (AVIF, WebP, JPEG, PNG already contain compression), WOFF2 fonts (internally Brotli-compressed), video, and other binary formats. Applying HTTP compression to already-compressed formats wastes CPU with no size benefit.
A robust multi-algorithm configuration (NGINX example):
# gzip (universal fallback)
gzip on;
gzip_min_length 256;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml text/javascript
application/json application/javascript application/xml
image/svg+xml;
# Brotli (via ngx_brotli module)
brotli on;
brotli_comp_level 4; # 4-6 for dynamic, pre-compress static at 11
brotli_static on; # serve pre-built .br files when present
brotli_types text/plain text/css text/xml text/javascript
application/json application/javascript application/xml
image/svg+xml;
# Zstandard (via nginx-module-zstd)
zstd on;
zstd_comp_level 3; # 1-5 for dynamic content
zstd_types text/plain text/css text/xml text/javascript
application/json application/javascript application/xml
image/svg+xml;
The browser’s Accept-Encoding header negotiates automatically: accept-encoding: gzip, deflate, br, zstd. The server picks the best supported algorithm — ideally zstd for dynamic, Brotli for static, gzip as fallback.
Resources:
- Choosing Between gzip, Brotli and Zstandard — Paul Calvano
- Brotli vs gzip: HTTP Compression — DebugBear
- ZSTD vs Brotli vs gzip Comparison — SpeedVitals
- New Standards for a Faster Internet (Zstandard) — Cloudflare
- Dictionary Compression is Finally Here — HTTP Toolkit
- NGINX Zstd Compression Guide — GetPageSpeed
TCP Optimization: BBR and Connection Tuning
Even with HTTP/3 growing, the majority of web traffic still runs over TCP. Two server-side TCP optimizations provide measurable improvements:
BBR (Bottleneck Bandwidth and Round-trip propagation time) is Google’s congestion control algorithm, designed to replace the decades-old CUBIC algorithm used by most Linux servers. BBR models the network path to find the optimal sending rate, rather than relying on packet loss as a congestion signal. On networks with even moderate packet loss or bufferbloat, BBR achieves significantly higher throughput and lower latency than CUBIC. Google reported 4% improvement in global search latency and 14% improvement in YouTube throughput when deploying BBR. It’s available in Linux kernel 4.9+ and can be enabled with:
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr
# persist across reboots by adding both lines to /etc/sysctl.conf or /etc/sysctl.d/
TCP Fast Open (TFO) allows data to be sent in the SYN packet of a TCP handshake, saving one round trip on repeat connections. It’s supported in Linux, macOS, and iOS, but adoption has been limited due to middlebox interference. For most sites, the move to HTTP/3 (which has 0-RTT built into QUIC) makes TFO less critical.
Initial congestion window (initcwnd): The default Linux initial congestion window is 10 segments (~14KB) — this is the maximum data the server can send in the first round trip. This is why the 14KB critical CSS/HTML budget matters (see Section 13). Increasing initcwnd to 16 or 20 can help on high-bandwidth connections but risks congestion on slower ones; the default of 10 is generally appropriate.
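The interaction between initcwnd and payload size can be estimated with idealized slow-start arithmetic, ignoring packet loss, delayed ACKs, and pacing.

```python
import math

MSS = 1460  # typical TCP payload bytes per segment on Ethernet

def round_trips(payload_bytes: int, initcwnd: int = 10) -> int:
    """Round trips to deliver a payload under idealized slow start:
    the congestion window doubles every RTT until the data is sent."""
    segments = math.ceil(payload_bytes / MSS)
    sent, cwnd, rtts = 0, initcwnd, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

print(round_trips(14 * 1024))   # 1: fits the ~14KB first-flight budget
print(round_trips(100 * 1024))  # larger payloads need extra round trips
```

This is why trimming the critical HTML/CSS under roughly 14KB lets the first response arrive in a single round trip with the default initcwnd of 10.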