Chunked transfer encoding
Chunked transfer encoding is a streaming data transfer mechanism available in Hypertext Transfer Protocol (HTTP) version 1.1, defined in RFC 9112 §7.1. In chunked transfer encoding, the data stream is divided into a series of non-overlapping "chunks". The chunks are sent out and received independently of one another. No knowledge of the data stream outside the currently-being-processed chunk is necessary for either the sender or the receiver at any given time.
Each chunk is preceded by its size in bytes. The transmission ends when a zero-length chunk is received. The chunked keyword in the Transfer-Encoding header is used to indicate chunked transfer.
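As a sketch, the framing just described — a hexadecimal size line, the chunk data, a CRLF, and a final zero-length chunk — can be produced in a few lines of Python (the function name is illustrative):

```python
def encode_chunked(chunks):
    """Encode an iterable of byte strings as an HTTP/1.1 chunked body."""
    out = bytearray()
    for data in chunks:
        if not data:
            continue  # zero-length chunks are reserved for the terminator
        out += b"%X\r\n" % len(data)   # chunk size in hexadecimal ASCII
        out += data + b"\r\n"          # chunk data, CRLF-terminated
    out += b"0\r\n\r\n"                # zero-length chunk ends the body
    return bytes(out)

body = encode_chunked([b"Wiki", b"pedia i", b"n \r\nchunks."])
```

With the three pieces shown, `body` reproduces the wire format used in the example later in this article.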
Chunked transfer encoding is not supported in HTTP/2, which provides its own mechanisms for data streaming.[1]
Rationale
The introduction of chunked encoding provided various benefits:
- Chunked transfer encoding allows a server to maintain an HTTP persistent connection for dynamically generated content. In this case, the HTTP Content-Length header cannot be used to delimit the content and the next HTTP request/response, as the content size is not yet known. Chunked encoding has the benefit that it is not necessary to generate the full content before writing the header, as it allows streaming of content as chunks and explicitly signaling the end of the content, making the connection available for the next HTTP request/response.
- Chunked encoding allows the sender to send additional header fields after the message body. This is important in cases where values of a field cannot be known until the content has been produced, such as when the content of the message must be digitally signed. Without chunked encoding, the sender would have to buffer the content until it was complete in order to calculate a field value and send it before the content.
Applicability
For version 1.1 of the HTTP protocol, the chunked transfer mechanism is always considered acceptable, even if not listed in the TE (transfer encoding) request header field; when used with other transfer mechanisms, it should always be applied last to the transferred data and never more than once. This transfer coding method also allows additional entity header fields to be sent after the last chunk if the client specified the "trailers" parameter as an argument of the TE field. The origin server of the response can also decide to send additional entity trailers even if the client did not specify the "trailers" option in the TE request field, but only if the metadata is optional (i.e. the client can use the received entity without them). Whenever trailers are used, the server should list their names in the Trailer header field; three header field types are specifically prohibited from appearing as a trailer field: Transfer-Encoding, Content-Length and Trailer.
Format
If a Transfer-Encoding field with a value of "chunked" is specified in an HTTP message (either a request sent by a client or the response from the server), the body of the message consists of one or more chunks and one terminating chunk with an optional trailer before the final ␍␊ sequence (i.e. carriage return followed by line feed).
Each chunk starts with the number of octets of the data it embeds expressed as a hexadecimal number in ASCII followed by optional parameters (chunk extension) and a terminating ␍␊ sequence, followed by the chunk data. The chunk is terminated by ␍␊.
If chunk extensions are provided, the chunk size is terminated by a semicolon and followed by the parameters, each also delimited by semicolons. Each parameter is encoded as an extension name followed by an optional equal sign and value. These parameters could be used for a running message digest or digital signature, or to indicate an estimated transfer progress, for instance.
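A short helper sketching how a recipient might split the size line from its extensions; the extension names in the usage example are invented for illustration:

```python
def parse_chunk_size_line(line):
    """Split a chunk-size line (text before the CRLF, e.g. "1A;progress=50")
    into (size, extensions). Extension values are optional per the grammar."""
    size_part, _, ext_part = line.partition(";")
    size = int(size_part.strip(), 16)          # size is hexadecimal
    extensions = {}
    for ext in filter(None, ext_part.split(";")):
        name, _, value = ext.partition("=")
        extensions[name.strip()] = value.strip() or None
    return size, extensions
```

For instance, `parse_chunk_size_line("1A;progress=50;signed")` yields size 26 with a valued parameter and a value-less one.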
The terminating chunk is a special chunk of zero length. It may contain a trailer, which consists of a (possibly empty) sequence of entity header fields. Normally, such header fields would be sent in the message's header; however, it may be more efficient to determine them after processing the entire message entity. In that case, it is useful to send those headers in the trailer.
Header fields that regulate the use of trailers are TE (used in requests) and Trailer (used in responses).
Use with compression
HTTP servers often use compression to optimize transmission, for example with Content-Encoding: gzip or Content-Encoding: deflate. If both compression and chunked encoding are enabled, then the content stream is first compressed, then chunked; so the chunk encoding itself is not compressed, and the data in each chunk is compressed holistically (i.e. based on the whole content). The remote endpoint then decodes the stream by concatenating the chunks and uncompressing the result.
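The ordering described here — compress the whole representation first, then chunk the compressed stream — can be demonstrated with Python's standard gzip module; the 16-byte chunk payload size is arbitrary:

```python
import gzip

content = b"<p>Hello, chunked world!</p>" * 8

# Sender: compress the whole representation first (Content-Encoding: gzip)...
compressed = gzip.compress(content)
# ...then split the *compressed* stream into chunk payloads
# (Transfer-Encoding: chunked). The chunk framing is never compressed.
payloads = [compressed[i:i + 16] for i in range(0, len(compressed), 16)]
wire = b"".join(b"%X\r\n%s\r\n" % (len(p), p) for p in payloads) + b"0\r\n\r\n"

# Receiver: concatenate the chunk payloads first, then decompress once.
assert gzip.decompress(b"".join(payloads)) == content
```

Decompressing an individual chunk payload would fail: only the concatenation of all payloads is a valid gzip stream.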
Example
Encoded data
The following example contains three chunks of size 4, 7, and 11 (hexadecimal "B") octets of data.
4␍␊Wiki␍␊7␍␊pedia i␍␊B␍␊n ␍␊chunks.␍␊0␍␊␍␊
Below is an annotated version of the encoded data.
4␍␊ (chunk size is four octets)
Wiki (four octets of data)
␍␊ (end of chunk)
7␍␊ (chunk size is seven octets)
pedia i (seven octets of data)
␍␊ (end of chunk)
B␍␊ (chunk size is eleven octets)
n ␍␊chunks. (eleven octets of data)
␍␊ (end of chunk)
0␍␊ (chunk size is zero octets, no more chunks)
␍␊ (end of final chunk with zero data octets)
Note: Each chunk's size excludes the two ␍␊ bytes that terminate the data of each chunk.
Decoded data
Decoding the above example produces the following octets:
Wikipedia in ␍␊chunks.
The bytes above are typically displayed as
Wikipedia in chunks.
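A minimal decoder, applied to the encoded example above, recovers exactly these octets (a sketch that ignores chunk extensions and trailers):

```python
def decode_chunked(stream):
    """Decode a chunked body given as bytes; returns the payload bytes."""
    body = bytearray()
    while True:
        size_line, stream = stream.split(b"\r\n", 1)
        size = int(size_line.split(b";")[0], 16)  # ignore any extensions
        if size == 0:
            break  # zero-length chunk terminates the body
        body += stream[:size]
        stream = stream[size + 2:]  # step over the data and its CRLF
    return bytes(body)

encoded = b"4\r\nWiki\r\n7\r\npedia i\r\nB\r\nn \r\nchunks.\r\n0\r\n\r\n"
decoded = decode_chunked(encoded)  # b"Wikipedia in \r\nchunks."
```

Note that the embedded ␍␊ inside the third chunk survives decoding: chunk sizes, not delimiters, determine where data ends.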
References
- "Chunked Transfer Coding". HTTP/1.1. June 2022. sec. 7.1. doi:10.17487/RFC9112. RFC 9112.
Chunked transfer encoding
Chunked transfer encoding is signaled by the Transfer-Encoding: chunked header in the HTTP response, allowing servers to stream content incrementally while keeping the connection persistent for subsequent requests.[2] The encoding concludes with a zero-sized chunk (formatted as 0\r\n\r\n), optionally followed by a trailer section containing additional header fields, ensuring the recipient can fully reconstruct the message.[1]
Introduced as part of the HTTP/1.1 specification to address limitations of earlier versions that relied on fixed-length bodies via the Content-Length header, chunked transfer encoding supports efficient handling of dynamically generated or streaming content, such as live video feeds or server-side rendered pages where the final size cannot be predetermined.[2] It promotes connection reuse by avoiding the need to close the TCP connection after each response, reducing overhead in scenarios like web proxies or long-polling applications.[3] The mechanism is mandatory for HTTP/1.1 implementations when the message length is unknown, and recipients are required to support decoding it to maintain protocol compliance.[1]
In practice, a typical chunked response might begin with a header like HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n, followed by chunks such as 5\r\nHello\r\n0\r\n\r\n, which assembles into "Hello" on the client side.[4] Extensions via chunk parameters (e.g., for compression or caching directives) can be included after the size field, though no standard parameters are predefined in the core specification.[5] While primarily associated with HTTP/1.1, equivalent streaming capabilities exist in HTTP/2 and HTTP/3 through frame-based protocols, but chunked encoding remains a foundational feature for backward compatibility.
Background and Overview
Definition and Purpose
Chunked transfer encoding is a transfer-coding mechanism defined in HTTP/1.1 that allows a message body to be transmitted as a series of chunks, each preceded by a hexadecimal size indicator, enabling the delivery of content whose total length is unknown at the start of the transmission.[1] This approach, first specified in RFC 2068 and refined in subsequent revisions including RFC 2616, RFC 7230, and the current RFC 9112, wraps the payload in discrete, length-delimited segments to facilitate progressive data transfer over persistent connections.[6][7][8][1] The primary purpose of chunked transfer encoding is to enable servers to send dynamic or incrementally generated content without requiring the entire response to be buffered beforehand, such as in scenarios involving long-running computations or real-time data generation.[1] It is activated by including the "Transfer-Encoding: chunked" header field in an HTTP response, which signals to the recipient that the message body follows this encoding scheme rather than relying on a Content-Length header.[1] This mechanism ensures that the connection remains open until a zero-length chunk indicates the end of the body, allowing for efficient handling of indefinite-length streams.[1] Key benefits include reduced user-perceived latency by permitting immediate rendering of partial content, support for streaming applications like media delivery or server-sent events, and prevention of connection timeouts that might occur when waiting for an unknown total length.[1] By avoiding the need to precompute or estimate response sizes, it enhances efficiency in dynamic web environments where content is produced on-the-fly.[1]
Historical Development
Chunked transfer encoding was introduced as a key feature of the HTTP/1.1 protocol in RFC 2068, published in January 1997, to enable the efficient transmission of dynamically generated content without requiring the sender to determine the total message length in advance. This addressed significant limitations in HTTP/1.0, where responses relied on either a predefined Content-Length header or connection closure to signal completion, both of which were inefficient for streaming data or content produced on-the-fly, such as outputs from CGI scripts. By allowing the message body to be sent as a series of chunks each preceded by a hexadecimal size indicator, followed by an optional trailer, the mechanism supported persistent connections and reduced latency for emerging web applications requiring real-time data transfer.[6] The specification was refined in RFC 2616 in June 1999, which obsoleted RFC 2068 and provided clearer definitions for transfer codings, including chunked encoding, to improve interoperability among HTTP/1.1 implementations. Widespread adoption occurred alongside the rollout of HTTP/1.1-compliant servers and proxies in the late 1990s, with major web servers like Apache HTTP Server version 1.3 (released in 1998) fully supporting the protocol's features by 1999-2000, enabling broader use in production environments for dynamic web content. 
Subsequent clarifications came in RFC 7230, published in June 2014, which further obsoleted earlier HTTP/1.1 documents and refined the semantics of chunked encoding, particularly regarding trailer headers and edge cases such as decoding processes and forbidden fields in trailers.[9][8] The specification was further consolidated and updated in RFC 9112 (June 2022), which obsoletes RFC 7230 and incorporates prior errata and clarifications on chunked encoding semantics.[10] However, with the publication of RFC 7540 in May 2015 defining HTTP/2, chunked transfer encoding cannot be used, as the protocol employs a binary framing mechanism with DATA frames for message payloads. This shift marked the evolution toward more efficient multiplexing, though HTTP/1.1 and its chunked encoding remain in use for backward compatibility in many legacy systems.[11]
Core Mechanism
Rationale for Use
Chunked transfer encoding addresses key challenges in transmitting dynamic content over HTTP/1.1 by enabling servers to stream data incrementally without requiring prior knowledge of the total response size. This mechanism is particularly valuable for scenarios where content is generated in real time, such as processing logs or delivering API responses, as it permits immediate transmission of available portions while the rest is being prepared.[10] Unlike the Content-Length header, which demands that the server buffer the entire response to determine and declare its length upfront, potentially introducing delays for computationally intensive or variable-sized outputs, chunked encoding eliminates this bottleneck by delimiting data with per-chunk sizes.[10] In comparison to signaling message completion via Connection: close, which terminates the TCP connection and precludes reuse, chunked encoding preserves persistent connections by concluding with a zero-length chunk, thereby supporting efficient multiplexing of multiple requests over a single link.[10] These features yield notable performance gains, including reduced round-trip times through compatibility with HTTP pipelining and keep-alive mechanisms, which minimize connection establishment overhead in high-throughput environments.[10] Practical applications encompass streaming video, where content is segmented for quicker initial playback, progressive HTML rendering to enhance user-perceived page load speeds, and server-push updates in protocols like Server-Sent Events, offering real-time data delivery without the overhead of WebSocket connections.[12]
Applicability in HTTP
Chunked transfer encoding is applicable specifically within the HTTP/1.1 protocol as a mechanism for transmitting message bodies of indeterminate length while maintaining persistent connections. In HTTP/1.1, servers are required to employ chunked encoding as the final transfer coding when sending a response body without a Content-Length header field and intending to keep the connection open for subsequent messages, ensuring compliance with the protocol's framing rules.[1] This usage is mandatory for protocol-compliant HTTP/1.1 servers in such scenarios to avoid closing the connection prematurely.[1] For client requests, chunked encoding remains optional and is infrequently utilized, as request payload lengths are typically known in advance.[2] Receivers in HTTP/1.1 environments, including both clients and servers, must fully support parsing and decoding of chunked transfer encoding to ensure interoperability.[2] This requirement applies universally to all HTTP/1.1 implementations, which are obligated to handle the "chunked" coding regardless of other transfer codings present.[2] Proxies and intermediaries may strip or rewrite Transfer-Encoding headers during forwarding, particularly when downgrading to HTTP/1.0 endpoints that lack support, but they must preserve the chunked encoding when relaying to other HTTP/1.1 recipients to maintain message integrity.[5] Several constraints govern the use of chunked encoding in HTTP/1.1. 
It cannot be combined with a Content-Length header field in the same message, as the presence of Transfer-Encoding takes precedence and renders any Content-Length invalid; senders must omit Content-Length entirely in such cases.[2] Furthermore, chunked encoding is invalid in HTTP/1.0, where the Transfer-Encoding header is unrecognized, necessitating de-chunking by intermediaries for compatibility.[13] Trailers, which provide optional additional header fields after the final chunk, are permitted but not mandated in all implementations; their inclusion requires prior advertisement via the TE header in requests.[14] Error handling for chunked messages emphasizes robustness in HTTP/1.1. Receivers must buffer incoming data until encountering the zero-length chunk that signals completion; failure to receive this terminating chunk renders the message incomplete.[1] Premature closure of the connection during transmission is treated as an incomplete response, prompting receivers to discard the partial body and potentially retry idempotent requests on a new connection.[1]
Encoding Format
Chunk Structure
In chunked transfer encoding, each chunk consists of a size indicator, optional extensions, the data payload, and delimiters to separate components. The size is specified as an unsigned integer in base-16 (hexadecimal) notation, followed by a carriage return and line feed (CRLF, represented as \r\n), the exact number of data octets indicated by the size, and another CRLF.[15] For example, a chunk of 26 octets begins with 1A\r\n, followed by 26 bytes of data, and ends with \r\n.[15] No maximum chunk size is defined in the specification.[15]
Optional chunk extensions may follow the size field, introduced by a semicolon (;) and consisting of name-value pairs that provide per-chunk metadata, such as compression indicators or other future-defined parameters.[16] These extensions are formatted as zero or more instances of BWS ";" BWS chunk-ext-name [ BWS "=" BWS chunk-ext-val ], where BWS denotes bad whitespace (optional spaces or tabs), and they enable extensibility without altering the core structure.[16] For instance, an extension might appear as 1A; ext1=value1\r\n, preserving compatibility with basic parsers that ignore unknown extensions.[16]
The stream of chunks continues until a zero-size chunk signals the end of the body. This terminating chunk is formatted as 0[; extensions]\r\n with no following data, immediately followed by an optional trailer section containing key-value header fields (similar to standard HTTP headers) and a final CRLF to close the message body.[15] The full chunked body syntax is thus *chunk last-chunk trailer-section CRLF, ensuring deterministic parsing even for streams of indeterminate total length.[15]
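Putting the grammar together, a chunked body with one extension-bearing chunk can be assembled literally; the extension name ext1 is a placeholder, as in the text above:

```python
# Assemble a chunked body by hand, following the grammar
#   chunked-body = *chunk last-chunk trailer-section CRLF
data = b"abcdefghijklmnopqrstuvwxyz"             # 26 octets -> hex size 1A
chunk = b"1A; ext1=value1\r\n" + data + b"\r\n"  # chunk with one extension
last_chunk = b"0\r\n"                            # zero-size terminating chunk
trailer_section = b""                            # no trailer fields here
body = chunk + last_chunk + trailer_section + b"\r\n"
```

A parser that ignores unknown extensions treats this body identically to one framed as plain `1A\r\n…`.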
Trailer Headers
Trailer headers, also known as trailer fields, are optional HTTP header fields transmitted at the end of a message body encoded with chunked transfer coding, providing additional metadata that could not be determined at the start of the transmission, such as message integrity checks or signatures generated during body processing.[17] These headers follow the zero-length chunk that signals the end of the body and serve to append information like checksums to the initial header section without requiring the sender to buffer the entire response.[17] The format of trailer headers mirrors that of standard HTTP request and response headers, consisting of one or more name-value pairs in the form field-name: field-value, each followed by a carriage return and line feed (CRLF), and the entire trailer section terminated by an empty line (CRLF CRLF).[18] Prior to sending the body, the sender includes a Trailer header field in the initial headers to declare the names of any trailer fields that will appear, such as Trailer: ETag, Content-MD5.[17] For trailers to be used, the recipient must indicate support via the TE header field with the trailers keyword (e.g., TE: trailers), signaling willingness to accept and process them; without this, the sender should avoid generating trailers.[19]
According to RFC 9112, trailer fields are intended for non-essential metadata and a sender MUST NOT include fields containing information necessary for proper routing, message framing, or payload processing (e.g., Transfer-Encoding, Content-Length, Host, Content-Type, Content-Encoding), as these are required to be present in the initial headers.[17] Recipients MAY retain trailer fields separately or merge them into the message's header section only if the field definition permits, and SHOULD ignore unknown or non-mergeable fields.[18] Common applications include including an ETag for caching validation or a Content-MD5 for integrity verification in streaming scenarios where the full body is unavailable upfront.[17]
Limitations of trailer headers include their optional nature, where recipients not expecting them (e.g., those without TE: trailers) may safely discard the trailer section without affecting message processing.[17] Additionally, not all intermediaries, such as proxies, reliably forward trailers, as they may de-chunk the message and omit the end-of-stream metadata unless explicitly configured to preserve the TE header with trailers.[17] In practice, trailer usage has been limited due to these forwarding inconsistencies and the preference for including such metadata in initial headers when possible.[17]
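A sketch of the running-digest use case: stream chunks while hashing, then emit the digest as a trailer field. The field name X-Content-Digest is illustrative, not a standard field, and a real sender would also declare it via a Trailer header and honor the client's TE: trailers support:

```python
import hashlib

def chunked_with_digest(pieces):
    """Frame pieces as chunks while computing a SHA-256 digest of the body,
    then append the digest as a trailer field after the zero-length chunk."""
    digest = hashlib.sha256()
    out = bytearray()
    for piece in pieces:
        digest.update(piece)
        out += b"%X\r\n%s\r\n" % (len(piece), piece)
    out += b"0\r\n"  # last-chunk
    out += b"X-Content-Digest: " + digest.hexdigest().encode() + b"\r\n"
    out += b"\r\n"   # empty line closes the trailer section
    return bytes(out)
```

The digest over the complete body only becomes known after the last chunk, which is exactly the case trailers exist for.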
Interactions and Extensions
Compatibility with Compression
Chunked transfer encoding integrates seamlessly with content compression in HTTP/1.1 by applying compression to the payload before chunking the resulting data stream. Servers indicate this combination using the headers Content-Encoding: gzip to denote that the resource representation has been compressed with gzip, and Transfer-Encoding: chunked to specify that the compressed body is transmitted in delimited chunks rather than a single block with a known length. This layering ensures that the compression optimizes the data for the end-to-end representation while chunking handles the hop-by-hop transmission framing for efficiency.[2][20]
In terms of processing, the server first compresses the full body if its length is known in advance, then divides the compressed output into chunks for transmission; alternatively, for dynamic or streaming content, the server applies streaming compression (such as gzip's deflate algorithm) to generate compressed data incrementally, which is immediately chunked and sent without requiring complete buffering. On the receiving end, the client or intermediary first decodes the chunked transfer encoding by reassembling the chunks into a complete compressed body, then applies decompression to recover the original uncompressed representation. This order—transfer decoding followed by content decoding—preserves the integrity of both mechanisms and is mandatory for HTTP/1.1 compliance.[1][2]
Challenges arise primarily with dynamic content where the total body length is unknown, necessitating streaming compression to enable progressive chunked delivery without delaying the response; non-streaming implementations might otherwise buffer excessively, though gzip's underlying DEFLATE algorithm supports streaming operation to mitigate this. Additionally, while the main body is compressed, trailers—optional metadata headers sent after the final zero-length chunk—remain uncompressed, allowing them to carry plain-text information such as caching directives or authentication tokens without interference from the body encoding. Per-chunk compression is generally avoided, as it would fragment the compression context and reduce efficiency; instead, the entire compressed stream is chunked holistically.[14]
RFC 9112 explicitly permits this layering of transfer encodings over content encodings in Section 6.1, stating that transfer codings like chunked are applied to the message body after content codings like gzip modify the representation, thereby supporting flexible combinations for optimized transfers. This approach is widely adopted in content delivery networks (CDNs), where it facilitates efficient streaming of compressed dynamic content, such as live-generated web pages or API responses, by reducing bandwidth while enabling low-latency delivery without full pre-computation.[2]
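The streaming path for dynamic content can be sketched with Python's standard zlib module, which can emit gzip-container output incrementally as each piece of the body is produced; the generator name and piece boundaries are illustrative:

```python
import zlib

def stream_compressed_chunks(pieces):
    """Yield chunk-framed, gzip-compressed data as each piece is produced,
    without buffering the full body (streaming compression sketch)."""
    comp = zlib.compressobj(wbits=31)  # wbits=31 selects the gzip container
    for piece in pieces:
        data = comp.compress(piece)
        if data:  # the compressor may buffer small inputs internally
            yield b"%X\r\n%s\r\n" % (len(data), data)
    data = comp.flush()  # emit whatever the compressor still holds
    if data:
        yield b"%X\r\n%s\r\n" % (len(data), data)
    yield b"0\r\n\r\n"   # terminating chunk

wire = b"".join(stream_compressed_chunks([b"first part, ", b"second part"]))
```

Each yielded chunk can be written to the socket immediately, so the client starts receiving compressed data before the body is complete.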
Relation to Modern Protocols
Chunked transfer encoding, a feature of HTTP/1.1, undergoes significant changes in its relation to subsequent protocol versions, where it is largely supplanted by more efficient framing mechanisms. In HTTP/2, as defined in RFC 9113, the Transfer-Encoding header is not used for chunked transfer coding, and the "chunked" transfer encoding must not be employed when sending responses.[21] Instead, HTTP/2 achieves streaming through DATA frames, which carry payloads in a binary-framed format and include an END_STREAM flag to indicate completion, enabling progressive delivery without the need for explicit chunking.[22] This shift eliminates the overhead of HTTP/1.1-style chunk headers while supporting multiplexed streams for concurrent resource delivery.[23] HTTP/3, built over QUIC and specified in RFC 9114, further abstracts these concepts by prohibiting the Transfer-Encoding header entirely, rendering chunked encoding unsupported.[24] QUIC streams provide the underlying structure for progressive data delivery, with flow control mechanisms like WINDOW_UPDATE frames managing transmission rates across multiple independent streams.[25] This design allows for low-latency, ordered delivery of response bodies without relying on HTTP/1.1 codings, as intermediaries must decode any chunked content from prior versions before forwarding to HTTP/3 endpoints.[26] The result is enhanced efficiency in mixed-protocol environments, where QUIC's multiplexing mitigates head-of-line blocking issues inherent in earlier TCP-based protocols. Despite these advancements, chunked transfer encoding retains relevance as a fallback mechanism in heterogeneous networks involving non-HTTP/2 proxies or clients. 
In such setups, HTTP/1.1 is negotiated for compatibility, allowing chunked encoding to stream content where modern framing is unavailable.[23] For instance, during h2c (HTTP/2 cleartext) upgrades from an initial HTTP/1.1 connection, the preliminary exchange may utilize chunked encoding before transitioning to HTTP/2 streams, ensuring seamless negotiation in legacy-supporting tools.[23] As of 2025, chunked encoding persists in HTTP/1.1-dominant scenarios, such as certain proxy configurations or older infrastructure, but it is discouraged in new protocol designs that prioritize the multiplexed streams of HTTP/2 and HTTP/3 for superior performance and reliability. Ongoing adoption of QUIC-based transports continues to diminish its role, favoring built-in streaming abstractions that reduce protocol complexity and improve resource utilization.[21]
Examples and Implementation
Sample Encoded Transmission
A dynamic web server may generate a simple HTML response, such as a basic page displaying a greeting, and transmit it using chunked transfer encoding to enable progressive rendering without prior knowledge of the exact body length.[1] The encoding is activated by including the Transfer-Encoding: chunked header in the HTTP/1.1 response, alongside standard headers like Date and Server for identification and timing. This approach is particularly useful for streaming content from server-side scripts or APIs.
The following illustrates a complete chunked-encoded response for such a scenario, divided into three chunks totaling 68 bytes of body data (excluding CRLF delimiters within chunks). The body assembles to <html><body><h1>Chunked Example</h1><p>Hello World</p></body></html>.
HTTP/1.1 200 OK\r\n
Date: Sun, 09 Nov 2025 12:00:00 GMT\r\n
Server: ExampleServer/1.0\r\n
Content-Type: text/html\r\n
Transfer-Encoding: chunked\r\n
\r\n
24\r\n
<html><body><h1>Chunked Example</h1>\r\n
12\r\n
<p>Hello World</p>\r\n
E\r\n
</body></html>\r\n
0\r\n
\r\n
Decoding Process
The decoding process for chunked transfer encoding involves a recipient, such as a client or intermediary proxy, systematically reading and reassembling the message body from the stream of chunks received over a TCP connection. This ensures the body is reconstructed accurately without prior knowledge of its total length. The process is mandatory for HTTP/1.1 recipients, as specified in the protocol standards.[28] The algorithm follows a loop that parses each chunk until the end marker is encountered:
length := 0
read chunk-size [chunk-ext] CRLF
while (chunk-size > 0) {
    read chunk-data and CRLF
    append chunk-data to decoded-body
    length := length + chunk-size
    read chunk-size [chunk-ext] CRLF
}
read trailer-section CRLF
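The pseudocode above maps directly onto a small Python decoder over a binary file-like object; trailer handling here simply collects raw lines (the X-Note trailer in the usage example is invented for illustration):

```python
import io

def decode_chunked_stream(fp):
    """Decode a chunked body from a binary file-like object, mirroring the
    pseudocode: read size, read data, repeat, then read the trailer section."""
    body = bytearray()
    while True:
        size_line = fp.readline()               # chunk-size [chunk-ext] CRLF
        size = int(size_line.split(b";")[0], 16)
        if size == 0:
            break                               # last-chunk reached
        body += fp.read(size)                   # chunk-data
        fp.readline()                           # CRLF after the data
    trailers = []
    while (line := fp.readline()) not in (b"\r\n", b""):
        trailers.append(line.rstrip(b"\r\n"))   # trailer-section lines
    return bytes(body), trailers

fp = io.BytesIO(b"5\r\nHello\r\n0\r\nX-Note: done\r\n\r\n")
body, trailers = decode_chunked_stream(fp)  # body == b"Hello"
```

A production decoder would additionally bound line and body lengths and treat a missing terminating chunk as an incomplete message.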
Limitations and Security
Known Vulnerabilities
Chunked transfer encoding introduces several security vulnerabilities, primarily due to ambiguities in HTTP/1.1 parsing rules that allow attackers to exploit inconsistencies between servers and intermediaries. One prominent issue is HTTP request smuggling, where attackers send a request containing both a Transfer-Encoding: chunked header and a Content-Length header, causing front-end proxies and back-end servers to interpret the message boundaries differently. This discrepancy enables the smuggling of malicious requests, potentially leading to cache poisoning, bypass of access controls, or cross-site scripting attacks. The vulnerability was first detailed in 2005 by researchers who demonstrated how such inconsistencies in proxy chains could poison web caches by associating malicious content with legitimate URLs. A specific instance affected the Apache HTTP Server versions prior to 1.3.34 and 2.0.55 when acting as a proxy, allowing remote attackers to poison caches via smuggled requests (CVE-2005-2088).[32][33]
As of 2025, new variants of HTTP request smuggling continue to emerge in HTTP/1.1 implementations involving chunked encoding. For example, CVE-2025-4366 affected Cloudflare's Pingora proxy framework, enabling cache poisoning through request desynchronization in chunked transfers. Similarly, CVE-2025-54142 involved smuggling via OPTIONS requests with bodies, highlighting persistent parsing inconsistencies. These recent cases underscore that while mitigations have reduced prevalence, risks remain in unpatched or misconfigured systems.[34][35]
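The underlying ambiguity can be illustrated by framing the same bytes both ways. This is a parsing demonstration only: conformant HTTP/1.1 parsers treat a message carrying both headers as an error rather than picking one interpretation:

```python
raw = (b"POST / HTTP/1.1\r\n"
       b"Host: example.com\r\n"
       b"Content-Length: 11\r\n"
       b"Transfer-Encoding: chunked\r\n"
       b"\r\n"
       b"0\r\n\r\n"
       b"GET /admin HTTP/1.1\r\n\r\n")

head, _, payload = raw.partition(b"\r\n\r\n")

# Interpretation A: frame by Content-Length (11 octets of body).
body_a, rest_a = payload[:11], payload[11:]

# Interpretation B: frame by chunked encoding (zero-length chunk ends body).
end = payload.index(b"0\r\n\r\n") + 5
body_b, rest_b = payload[:end], payload[end:]

# The two parsers disagree on where the *next* request begins,
# which is the desynchronization that smuggling attacks exploit.
assert rest_a != rest_b
```

Under interpretation B the leftover bytes form a complete `GET /admin` request that a front-end honoring interpretation A never saw.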
Trailer misuse represents another risk, as the optional trailer section in chunked messages can contain arbitrary header fields appended after the body. If recipients fail to validate or ignore these trailers properly, attackers can inject unauthorized headers, such as those altering cache directives or authentication tokens, leading to cache poisoning or security filter evasion. For example, unvalidated trailers might include fields like Set-Cookie or Location, tricking caches into storing poisoned responses. RFC 7230 explicitly restricts trailers to exclude fields critical for message framing, routing, or processing (e.g., Content-Length, Host), and mandates that senders avoid them unless the recipient's TE header explicitly allows "trailers". Violations of these rules have been exploited in various implementations, enabling attacks like response manipulation in proxy environments.[36][37]
A related concern arises in compressed chunked responses, where the BREACH attack (a 2013 variant of the CRIME exploit) can leak sensitive data like CSRF tokens through compression side-channel oracles. When servers apply gzip or deflate compression to chunked-encoded responses over HTTPS, attackers can craft reflected inputs to induce detectable length variations in the compressed output, inferring secrets byte-by-byte. This affects scenarios where dynamic content is chunked and compressed without length randomization, as chunking alone does not prevent the oracle if padding is absent. The attack targets HTTP-level compression mechanisms and was demonstrated at Black Hat 2013, highlighting risks in web applications using both techniques.[38]
Mitigations have evolved through updated standards and implementation hardening. RFC 7230 (2014), which obsoletes RFC 2616, introduces stricter parsing rules: messages with both Transfer-Encoding and Content-Length must be treated as errors, and trailers are disabled by default unless the TE header specifies support, reducing smuggling and injection risks. Browsers like Google Chrome implemented stricter HTTP/1.1 conformance in the 2010s, including normalization of ambiguous headers and rejection of non-compliant chunked messages, to prevent client-side exploitation of these issues. These changes, combined with proxy-level validation, have significantly curtailed the prevalence of such vulnerabilities in modern deployments.[39][40]
