from Wikipedia

HTTP
International standard
  • RFC 1945 HTTP/1.0
  • RFC 9110 HTTP Semantics
  • RFC 9111 HTTP Caching
  • RFC 9112 HTTP/1.1
  • RFC 9113 HTTP/2
  • RFC 7541 HTTP/2: HPACK Header Compression
  • RFC 8164 HTTP/2: Opportunistic Security for HTTP/2
  • RFC 8336 HTTP/2: The ORIGIN HTTP/2 Frame
  • RFC 8441 HTTP/2: Bootstrapping WebSockets with HTTP/2
  • RFC 9114 HTTP/3
  • RFC 9204 HTTP/3: QPACK: Field Compression
Developed by: Initially CERN; IETF, W3C
Introduced: 1991
Website: httpwg.org/specs/

HTTP (Hypertext Transfer Protocol) is an application layer protocol in the Internet protocol suite model for distributed, collaborative, hypermedia information systems.[1] HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser.

Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989 and summarized in a simple document describing the behavior of a client and a server using the first HTTP version, named 0.9.[2] That version was subsequently developed, eventually becoming the public 1.0.[3]

Development of early HTTP Requests for Comments (RFCs) started a few years later in a coordinated effort by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), with work later moving to the IETF.

HTTP/1 was finalized and fully documented (as version 1.0) in 1996.[4] It evolved (as version 1.1) in 1997[5] and then its specifications were updated in 1999,[6] 2014,[7] and 2022,[1] when it was promoted to Internet Standard 97. Its secure variant named HTTPS is used by more than 85% of websites.[8]

HTTP/2, published in 2015, provides a more efficient expression of HTTP's semantics "on the wire". As of August 2024, it is supported by 66.2% of websites[9][10] (35.3% HTTP/2 + 30.9% HTTP/3 with backwards compatibility) and supported by almost all web browsers (over 98% of users).[11] It is also supported by major web servers over Transport Layer Security (TLS) using an Application-Layer Protocol Negotiation (ALPN) extension[12] where TLS 1.2 or newer is required.[13]

HTTP/3, the successor to HTTP/2, was published in 2022.[14] As of February 2024, it is used on 30.9% of websites[15] and is supported by most web browsers, i.e. (at least partially) by 97% of users.[16] HTTP/3 uses QUIC instead of TCP for the underlying transport protocol. Like HTTP/2, it does not obsolete previous major versions of the protocol. In 2019, support for HTTP/3 was first added to Cloudflare and Google Chrome,[17][18] and it was also enabled in Firefox.[19] HTTP/3 has lower latency for real-world web pages, if enabled on the server, and loads faster than with HTTP/2, in some cases over three times faster than HTTP/1.1 (which on some servers is still the only protocol enabled).[20]

Technical overview


HTTP functions as a request–response protocol in the client–server model. A web browser, for example, may be the client whereas a process, named web server, running on a computer hosting one or more websites may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content or performs other functions on behalf of the client, returns a response message to the client. The response contains completion status information about the request and may also contain requested content in its message body.

A web browser is an example of a user agent (UA). Other types of user agent include the indexing software used by search providers (web crawlers), voice browsers, mobile apps, and other software that accesses, consumes, or displays web content.

HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time. Web browsers cache previously accessed web resources and reuse them, whenever possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers.

To allow intermediate HTTP nodes (proxy servers, web caches, etc.) to accomplish their functions, some of the HTTP headers (found in HTTP requests/responses) are managed hop-by-hop whereas other HTTP headers are managed end-to-end (managed only by the source client and by the target web server).
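
For illustration, the hypothetical exchange below (hostnames are placeholders) shows a request crossing a forward proxy: end-to-end headers such as Host and Accept are forwarded unchanged to the origin server, while the hop-by-hop Connection header applies only to each individual link and is re-issued by the proxy for its own hop; the proxy also records itself in a Via header.

Client to proxy:

GET http://www.example.com/ HTTP/1.1
Host: www.example.com
Accept: text/html
Connection: keep-alive

Proxy to origin server:

GET / HTTP/1.1
Host: www.example.com
Accept: text/html
Via: 1.1 proxy.example.net
Connection: keep-alive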

HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying and reliable transport layer protocol.[1]: §3.3 

HTTP resources are identified and located on the network by Uniform Resource Locators (URLs), using the Uniform Resource Identifiers (URIs) schemes http and https. URIs are encoded as hyperlinks in HTML documents, so as to form interlinked hypertext documents.[21]

In HTTP/1.0 a separate TCP connection to the same server is made for every resource request.[4]: §1.3 

In HTTP/1.1, by contrast, a TCP connection can be reused to make multiple resource requests (i.e. of HTML pages, frames, images, scripts, stylesheets, etc.).[22]: §9.1,9.3 

HTTP/1.1 communications therefore experience less latency as the establishment of TCP connections presents considerable overhead, especially under high traffic conditions.[23]

HTTP/2 is a revision of HTTP/1.1 that maintains the same client–server model and the same protocol methods but introduces the following differences:

  • to use a compressed binary representation of metadata (HTTP headers) instead of a textual one, so that headers require much less space;
  • to use a single TCP/IP (usually encrypted) connection per accessed server domain instead of 2 to 8 TCP/IP connections;
  • to use one or more bidirectional streams per TCP/IP connection, in which HTTP requests and responses are broken down and transmitted in small packets, largely mitigating the problem of head-of-line blocking (HOL blocking);[note 1]
  • to add a push capability that allows a server application to send data to clients whenever new data is available (without forcing clients to periodically poll the server for new data).[13]: §2 

HTTP/2 communications therefore experience much less latency and, in most cases, even higher speeds than HTTP/1.1 communications.

HTTP/3 is a revision of HTTP/2 that uses QUIC + UDP as the transport protocol instead of TCP. Before that version, TCP/IP connections were used; now only the IP layer is used directly (on which UDP, like TCP, builds). This slightly improves the average speed of communications and avoids the occasional (very rare) problem of TCP connection congestion that can temporarily block or slow down the data flow of all its streams (another form of head-of-line blocking).

History


The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a client user interface called web browser. Berners-Lee designed HTTP in order to help with the adoption of his other idea: the "WorldWideWeb" project, which was first proposed in 1989, now known as the World Wide Web.

The first web server went live in 1990.[24][25] The protocol used had only one method, namely GET, which would request a page from a server.[26] The response from the server was always an HTML page.[2]

Summary of HTTP milestone versions

Version   | Year introduced | Current status | Usage in August 2024 | Support in August 2024
HTTP/0.9  | 1991            | Obsolete       | 0                    | 100%
HTTP/1.0  | 1996            | Obsolete       | 0                    | 100%
HTTP/1.1  | 1997            | Standard       | 33.8%                | 100%
HTTP/2    | 2015            | Standard       | 35.3%                | 66.2%
HTTP/3    | 2022            | Standard       | 30.9%                | 30.9%

HTTP/0.9


In 1991, the first documented official version of HTTP was written as a plain document of fewer than 700 words and was named HTTP/0.9. It supported only the GET method, allowing clients to retrieve HTML documents from the server, but did not support any other file formats or information upload.[2]
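
A complete HTTP/0.9 exchange therefore consisted of a single request line, followed by the raw HTML document as the response, after which the server closed the connection; a minimal hypothetical example (path and content are illustrative):

GET /index.html

<html>
A very simple HTML page
</html>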

HTTP/1.0-draft


Starting in 1992, a new document was written to specify the evolution of the basic protocol towards its next full version. It supported both the simple request method of the 0.9 version and the full GET request that included the client's HTTP version. This was the first of the many unofficial HTTP/1.0 drafts that preceded the final work on HTTP/1.0.[3]

W3C HTTP Working Group


After it was decided that new features of the HTTP protocol were required and had to be fully documented as official RFCs, the HTTP Working Group (HTTP WG, led by Dave Raggett) was constituted in early 1995 with the aim of standardizing and expanding the protocol with extended operations, extended negotiation, richer meta-information, and a tie-in with a security protocol, made more efficient by adding further methods and header fields.[27][28]

The HTTP WG planned to revise and publish new versions of the protocol as HTTP/1.0 and HTTP/1.1 within 1995, but, because of the many revisions, that work took much longer than a year.[29]

The HTTP WG also planned to specify a far-future version of HTTP called HTTP-NG (HTTP Next Generation) that would solve all remaining problems of the previous versions related to performance and low-latency responses, but this work started only a few years later and was never completed.

HTTP/1.0


In May 1996, RFC 1945[4] was published as the final HTTP/1.0 revision of what had been used during the previous four years as a pre-standard HTTP/1.0 draft, already adopted by many web browsers and web servers.

In early 1996, developers even started to include unofficial extensions of the HTTP/1.0 protocol (i.e. keep-alive connections, etc.) in their products, using drafts of the upcoming HTTP/1.1 specifications.[30]

HTTP/1.1


Beginning in early 1996, major web browser and web server developers also started to implement new features specified by pre-standard HTTP/1.1 draft specifications. End-user adoption of the new versions of browsers and servers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet used the new HTTP/1.1 header "Host" to enable virtual hosting, and that by June 1996, 65% of all browsers accessing their servers were pre-standard HTTP/1.1 compliant.[31]

In January 1997, RFC 2068[5] was officially released as the HTTP/1.1 specification.

In June 1999, RFC 2616[6] was released, incorporating all improvements and updates based on the previous (now obsolete) HTTP/1.1 specification.

W3C HTTP-NG Working Group


Resuming the old 1995 plan of the previous HTTP Working Group, in 1997 an HTTP-NG Working Group was formed to develop a new HTTP protocol named HTTP-NG (HTTP New Generation). A few proposals and drafts were produced for the new protocol to use multiplexing of HTTP transactions inside a single TCP/IP connection, but in 1999 the group stopped its activity, passing the technical problems to the IETF.[32]

IETF HTTP Working Group restarted


In 2007, the IETF HTTP Working Group (HTTP WG bis or HTTPbis) was restarted, firstly to revise and clarify the previous HTTP/1.1 specifications and secondly to write and refine the future HTTP/2 specifications (named httpbis).[33][34]

SPDY: an unofficial HTTP protocol developed by Google


In 2009, Google announced that it had developed and tested a new HTTP binary protocol named SPDY. The implicit aim was to greatly speed up web traffic (especially between future web browsers and Google's servers).

SPDY was indeed much faster than HTTP/1.1 in many tests and so it was quickly adopted by Chromium and then by other major web browsers.[35]

Some of the ideas about multiplexing HTTP streams over a single TCP/IP connection were taken from various sources, including the work of W3C HTTP-NG Working Group.

HTTP/2


Between January and March 2012, the HTTP Working Group (HTTPbis) announced the need to start focusing on a new HTTP/2 protocol (while finishing the revision of the HTTP/1.1 specifications), possibly taking into consideration ideas and work done for SPDY.[36][37]

After a few months of discussion about how to develop a new version of HTTP, it was decided to derive it from SPDY.[38]

In May 2015, HTTP/2 was published as RFC 7540[39] and quickly adopted by all web browsers already supporting SPDY and more slowly by web servers.

2014 updates to HTTP/1.1


In June 2014, the HTTP Working Group released an updated six-part HTTP/1.1 specification obsoleting RFC 2616[6]:

  • RFC 7230 – "Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing,"[7] Obsolete.
  • RFC 7231 – "Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content,"[40] Obsolete.
  • RFC 7232 – "Hypertext Transfer Protocol (HTTP/1.1): Conditional Requests,"[41] Obsolete.
  • RFC 7233 – "Hypertext Transfer Protocol (HTTP/1.1): Range Requests,"[42] Obsolete.
  • RFC 7234 – "Hypertext Transfer Protocol (HTTP/1.1): Caching,"[43] Obsolete.
  • RFC 7235 – "Hypertext Transfer Protocol (HTTP/1.1): Authentication,"[44] Obsolete.

HTTP/0.9 Deprecation


In 2014, HTTP/0.9 was deprecated for servers supporting version HTTP/1.1 (and higher):[7]: §Appendix A 

Since HTTP/0.9 did not support header fields in a request, there is no mechanism for it to support name-based virtual hosts (selection of resource by inspection of the Host header field). Any server that implements name-based virtual hosts ought to disable support for HTTP/0.9. Most requests that appear to be HTTP/0.9 are, in fact, badly constructed HTTP/1.x requests caused by a client failing to properly encode the request-target.

Since 2016, many product managers and developers of user agents (browsers, etc.) and web servers have begun planning to gradually deprecate and drop support for the HTTP/0.9 protocol, mainly for the following reasons:[45]

  • it is so simple that an RFC document was never written (there is only the original document);[2]
  • it has no HTTP headers and lacks many other features that nowadays are required for minimal security reasons;
  • it has not been in widespread use since 1999–2000 (because of HTTP/1.0 and HTTP/1.1) and is commonly used only by some very old network hardware, i.e. routers, etc.

[note 2]

HTTP/3


In 2020, the first drafts of HTTP/3 were published, and major web browsers and web servers started to adopt it.

On 6 June 2022, the IETF standardized HTTP/3 as RFC 9114.[14]

Updates and refactoring in 2022


In June 2022, a batch of RFCs was published, deprecating many of the previous documents and introducing a few minor changes and a refactoring of HTTP semantics description into a separate document.

  • RFC 9110 – "HTTP Semantics,"[1] Internet Standard 97.
  • RFC 9111 – "HTTP Caching,"[46] Internet Standard 98.
  • RFC 9112 – "HTTP/1.1,"[22] Internet Standard 99.
  • RFC 9113 – "HTTP/2,"[13] Proposed Standard.
  • RFC 9114 – "HTTP/3,"[14] Proposed Standard. (See also the section above.)
  • RFC 9204 – "QPACK: Field Compression for HTTP/3,"[47] Proposed Standard.
  • RFC 9218 – "Extensible Prioritization Scheme for HTTP,"[48] Proposed Standard.

HTTP data exchange


HTTP is a stateless application-level protocol and requires a reliable network transport connection to exchange data between client and server.[49] In HTTP implementations, TCP/IP connections are established using well-known ports (typically port 80 if the connection is unencrypted or port 443 if it is encrypted; see also List of TCP and UDP port numbers).[1]: §4.2.1,4.2.2  In HTTP/2, a TCP/IP connection plus multiple protocol channels are used. In HTTP/3, the application transport protocol QUIC over UDP is used.

Request and response messages through connections


Data is exchanged through a sequence of request–response messages exchanged over a session-layer transport connection.[49] An HTTP client initially tries to connect to a server, establishing a connection (real or virtual). An HTTP(S) server listening on that port accepts the connection and then waits for the client's request message. The client sends its HTTP request message. Upon receiving the request, the server sends back an HTTP response message, which includes header(s) plus a body if one is required. The body of this response message is typically the requested resource, although an error message or other information may also be returned. At any time (for many reasons) the client or the server can close the connection. Closing a connection is usually advertised in advance by using one or more HTTP headers in the last request/response message sent to the server or client.[22]: §9.1 

Persistent connections


In HTTP/0.9, the TCP/IP connection is always closed after server response has been sent, so it is never persistent.

In HTTP/1.0, the TCP/IP connection should always be closed by the server after a response has been sent.[4][note 3]

In HTTP/1.1, a keep-alive mechanism was officially introduced so that a connection could be reused for more than one request/response. Such persistent connections reduce request latency perceptibly because the client does not need to renegotiate the TCP three-way handshake after the first request has been sent. Another positive side effect is that, in general, the connection becomes faster with time due to TCP's slow-start mechanism.

HTTP/1.1 also added HTTP pipelining in order to further reduce lag time when using persistent connections, by allowing clients to send multiple requests before waiting for each response. This optimization was never considered really safe because a few web servers and many proxy servers, especially transparent proxy servers placed on the Internet or in intranets between clients and servers, did not handle pipelined requests properly (they served only the first request and discarded the others, closed the connection because they saw more data after the first request, or even returned responses out of order, etc.). Because of this, only HEAD and some GET requests (i.e. limited to real file requests, with URLs without a query string used as a command, etc.) could be pipelined in a safe and idempotent mode. After many years of struggling with the problems introduced by enabling pipelining, this feature was first disabled and then removed from most browsers, also because of the announced adoption of HTTP/2.
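
As a sketch of pipelining (hostname and paths are illustrative), a client could write two GET requests back to back on one persistent connection without waiting for the first response; the server would then have to return the responses in the same order:

GET /styles.css HTTP/1.1
Host: www.example.com

GET /script.js HTTP/1.1
Host: www.example.com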

HTTP/2 extended the usage of persistent connections by multiplexing many concurrent requests/responses through a single TCP/IP connection.

HTTP/3 does not use TCP/IP connections but QUIC + UDP (see also: technical overview).

Content retrieval optimizations

HTTP/0.9
A requested resource was always sent in its entirety.
HTTP/1.0
HTTP/1.0 added headers to manage resources cached by the client in order to allow conditional GET requests; in practice a server has to return the entire content of the requested resource only if its last-modified time is not known by the client or has changed since the last full response to a GET request. One of these headers, "Content-Encoding", was added to specify whether the returned content of a resource was compressed.
If the total length of the content of a resource was not known in advance (i.e. because it was dynamically generated, etc.), then the header "Content-Length: number" was not present in the HTTP headers, and the client assumed that when the server closed the connection, the content had been sent in its entirety. This mechanism could not distinguish between a successfully completed resource transfer and an interrupted one (because of a server or network error, or something else).
HTTP/1.1
HTTP/1.1 introduced:
  • new headers to better manage the conditional retrieval of cached resources.
  • chunked transfer encoding to allow content to be streamed in chunks in order to reliably send it even when the server does not know its length in advance (i.e. because it is dynamically generated, etc.).
  • byte range serving, where a client can request only one or more portions (ranges of bytes) of a resource (i.e. the first part, a part in the middle or at the end of the entire content, etc.) and the server usually sends only the requested part(s). This is useful to resume an interrupted download (when a file is very large), or when only a part of the content has to be shown or dynamically added to the already visible part by a browser (i.e. only the first or the following n comments of a web page) in order to save time, bandwidth and system resources, etc. (see the example after this list).
HTTP/2, HTTP/3
Both HTTP/2 and HTTP/3 have kept the above mentioned features of HTTP/1.1.
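
As an example of the byte range serving mentioned in the list above (file name and sizes are hypothetical), a client can request only the first 1024 bytes of a large file, and the server replies with 206 Partial Content and a Content-Range header describing which part of the full resource is being returned:

GET /large-video.mp4 HTTP/1.1
Host: www.example.com
Range: bytes=0-1023

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-1023/2097152
Content-Length: 1024
Content-Type: video/mp4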

HTTP authentication


HTTP provides multiple authentication schemes such as basic access authentication and digest access authentication which operate via a challenge–response mechanism whereby the server identifies and issues a challenge before serving the requested content.

HTTP provides a general framework for access control and authentication, via an extensible set of challenge–response authentication schemes, which can be used by a server to challenge a client request and by a client to provide authentication information.[1]

The authentication mechanisms described above belong to the HTTP protocol and are managed by client and server HTTP software (if configured to require authentication before allowing client access to one or more web resources), and not by the web applications using a web application session.

Authentication realms


The HTTP Authentication specification also provides an arbitrary, implementation-specific construct for further dividing resources common to a given root URI. The realm value string, if present, is combined with the canonical root URI to form the protection space component of the challenge. This in effect allows the server to define separate authentication scopes under one root URI.[1]
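
A minimal sketch of the challenge–response flow using the Basic scheme (path, realm name and credentials are hypothetical; the Authorization value is the Base64 encoding of "user:password"):

GET /protected/report HTTP/1.1
Host: www.example.com

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Staff area"

GET /protected/report HTTP/1.1
Host: www.example.com
Authorization: Basic dXNlcjpwYXNzd29yZA==

HTTP/1.1 200 OK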

HTTP application session


HTTP is a stateless protocol. A stateless protocol does not require the web server to retain information or status about each user for the duration of multiple requests.

Some web applications need to manage user sessions, so they implement states, or server side sessions, using for instance HTTP cookies[50] or hidden variables within web forms.

To start an application user session, an interactive authentication via web application login must be performed. To stop a user session, a logout operation must be requested by the user. These kinds of operations do not use HTTP authentication but a custom, application-managed authentication.
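
A simplified, hypothetical cookie-based session illustrates this: the web application sets a session identifier in the login response, and the browser returns it on subsequent requests (all names and values are illustrative):

POST /login HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 30

username=alice&password=secret

HTTP/1.1 200 OK
Set-Cookie: sessionid=38afes7a8; Path=/; HttpOnly

GET /account HTTP/1.1
Host: www.example.com
Cookie: sessionid=38afes7a8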

HTTP/1.1 request messages


Request messages are sent by a client to a target server.[note 4]

Request syntax


A client sends request messages to the server, which consist of:[51]

  • a request line, consisting of the case-sensitive request method, a space, the requested URI, another space, the protocol version, a carriage return, and a line feed, e.g.:
GET /images/logo.png HTTP/1.1
  • zero or more request header fields (at least one, the Host header, in the case of HTTP/1.1), each consisting of the case-insensitive field name, a colon, optional leading whitespace, the field value, optional trailing whitespace and ending with a carriage return and a line feed, e.g.:
Host: www.example.com
Accept-Language: en
  • an empty line, consisting of a carriage return and a line feed;
  • an optional message body.

In the HTTP/1.1 protocol, all header fields except Host: hostname are optional.

A request line containing only the path name is accepted by servers to maintain compatibility with HTTP clients before the HTTP/1.0 specification in RFC 1945.[52]

Request methods

[Figure: An HTTP/1.1 request made using telnet, showing the request message, response header section, and response body.]

HTTP defines methods (sometimes referred to as verbs, but nowhere in the specification does it mention verb) to indicate the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or the output of an executable residing on the server. The HTTP/1.0 specification[4]: §8  defined the GET, HEAD, and POST methods as well as listing the PUT, DELETE, LINK and UNLINK methods under additional methods. However, the HTTP/1.1 specification[6]: §9  formally defined and added five new methods: PUT, DELETE, CONNECT, OPTIONS, and TRACE. Any client can use any method and the server can be configured to support any combination of methods. If a method is unknown to an intermediate, it will be treated as an unsafe and non-idempotent method. There is no limit to the number of methods that can be defined, which allows for future methods to be specified without breaking existing infrastructure. For example, WebDAV defined seven new methods and RFC 5789 specified the PATCH method.

Method names are case sensitive.[22]: §3 [1]: §9.1  This is in contrast to HTTP header field names which are case-insensitive.[1]: §6.3 

GET
The GET method requests that the target resource transfer a representation of its state. GET requests should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.)[1] For retrieving resources without making changes, GET is preferred over POST, as they can be addressed through a URL. This enables bookmarking and sharing and makes GET responses eligible for caching, which can save bandwidth. The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations."[53] See safe methods below.

HEAD
The HEAD method requests that the target resource transfer a representation of its state, as for a GET request, but without the representation data enclosed in the response body. This is useful for retrieving the representation metadata in the response header, without having to transfer the entire representation. Uses include checking whether a page is available through the status code and quickly finding the size of a file (Content-Length).

POST
The POST method requests that the target resource process the representation enclosed in the request according to the semantics of the target resource. For example, it is used for posting a message to an Internet forum, subscribing to a mailing list, or completing an online shopping transaction.[1]: §9.3.3 

PUT
The PUT method requests that the target resource create or update its state with the state defined by the representation enclosed in the request. A distinction from POST is that the client specifies the target location on the server.[1]: §9.3.4 

DELETE
The DELETE method requests that the target resource delete its state.

CONNECT
The CONNECT method requests that the intermediary establish a TCP/IP tunnel to the origin server identified by the request target. It is often used to secure connections through one or more HTTP proxies with TLS.[1]: §9.3.6 [54] See HTTP CONNECT method.

OPTIONS
The OPTIONS method requests that the target resource transfer the HTTP methods that it supports. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.

TRACE
The TRACE method requests that the target resource transfer the received request in the response body. That way a client can see what (if any) changes or additions have been made by intermediaries.

PATCH
The PATCH method requests that the target resource modify its state according to the partial update defined in the representation enclosed in the request. This can save bandwidth by updating a part of a file or document without having to transfer it entirely.[55]

All general-purpose web servers are required to implement at least the GET and HEAD methods, and all other methods are considered optional by the specification.[1]: §9.1 
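
For example, a server can advertise the methods it supports for a resource in the Allow header of a response to an OPTIONS request (a hypothetical exchange; the exact status code and method list depend on the server):

OPTIONS /index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 204 No Content
Allow: OPTIONS, GET, HEAD, POST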

Properties of request methods
Request method | RFC      | Request has payload body | Response has payload body | Safe | Idempotent | Cacheable
GET            | RFC 9110 | Optional                 | Yes                       | Yes  | Yes        | Yes
HEAD           | RFC 9110 | Optional                 | No                        | Yes  | Yes        | Yes
POST           | RFC 9110 | Yes                      | Yes                       | No   | No         | Yes
PUT            | RFC 9110 | Yes                      | Yes                       | No   | Yes        | No
DELETE         | RFC 9110 | Optional                 | Yes                       | No   | Yes        | No
CONNECT        | RFC 9110 | Optional                 | Yes                       | No   | No         | No
OPTIONS        | RFC 9110 | Optional                 | Yes                       | Yes  | Yes        | No
TRACE          | RFC 9110 | No                       | Yes                       | Yes  | Yes        | No
PATCH          | RFC 5789 | Yes                      | Yes                       | No   | No         | No

Safe methods


A request method is safe if a request with that method has no intended effect on the server. The methods GET, HEAD, OPTIONS, and TRACE are defined as safe. In other words, safe methods are intended to be read-only. Safe methods can still have side effects not seen by the client, such as appending request information to a log file or charging an advertising account.

In contrast, the methods POST, PUT, DELETE, CONNECT, and PATCH are not safe. They may modify the state of the server or have other effects such as sending an email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences.

Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Careless or deliberately irregular programming can allow GET requests to cause non-trivial changes on the server. This is discouraged because of the problems which can occur when web caching, search engines, and other automated agents make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as https://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article.[56] A properly coded website would require a DELETE or POST method for this action, which non-malicious bots would not make.

One example of this occurring in practice was during the short-lived Google Web Accelerator beta, which prefetched arbitrary URLs on the page a user was viewing, causing records to be automatically altered or deleted en masse. The beta was suspended only weeks after its first release, following widespread criticism.[57][56]

Idempotent methods


A request method is idempotent if multiple identical requests with that method have the same effect as a single such request. The methods PUT and DELETE, and safe methods are defined as idempotent. Safe methods are trivially idempotent, since they are intended to have no effect on the server whatsoever; the PUT and DELETE methods, meanwhile, are idempotent since successive identical requests will be ignored. A website might, for instance, set up a PUT endpoint to modify a user's recorded email address. If this endpoint is configured correctly, any requests which ask to change a user's email address to the same email address which is already recorded—e.g. duplicate requests following a successful request—will have no effect. Similarly, a request to DELETE a certain user will have no effect if that user has already been deleted.
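
A hypothetical sketch of such an idempotent update (URI and payload are illustrative): sending this PUT once or several times leaves the stored email address in the same final state.

PUT /users/42/email HTTP/1.1
Host: api.example.com
Content-Type: text/plain
Content-Length: 17

alice@example.org

HTTP/1.1 200 OK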

In contrast, the methods POST, CONNECT, and PATCH are not necessarily idempotent, and therefore sending an identical POST request multiple times may further modify the state of the server or have further effects, such as sending multiple emails. In some cases this is the desired effect, but in other cases it may occur accidentally. A user might, for example, inadvertently send multiple POST requests by clicking a button again if they were not given clear feedback that the first click was being processed. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may re-submit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once.

Note that whether or not a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. To do so against recommendations, however, may result in undesirable consequences, if a user agent assumes that repeating the same request is safe when it is not.

Cacheable methods


A request method is cacheable if responses to requests with that method may be stored for future reuse. The methods GET, HEAD, and POST are defined as cacheable.

In contrast, the methods PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH are not cacheable.

Request header fields


Request header fields allow the client to pass additional information beyond the request line, acting as request modifiers (similarly to the parameters of a procedure). They give information about the client, about the target resource, or about the expected handling of the request.

HTTP/1.1 response messages


A response message is sent by a server to a client as a reply to its former request message.[note 4]

Response syntax


A server sends response messages to the client, which consist of:[22]: §2.1 

  • a status line, consisting of the protocol version, a space, the response status code, another space, a possibly empty reason phrase, a carriage return and a line feed, e.g.:
    HTTP/1.1 200 OK
    
  • zero or more response header fields, each consisting of the case-insensitive field name, a colon, optional leading whitespace, the field value, an optional trailing whitespace and ending with a carriage return and a line feed, e.g.:
    Content-Type: text/html
    
  • an empty line, consisting of a carriage return and a line feed;
  • an optional message body.

Response status codes


In HTTP/1.0 and since, the first line of the HTTP response is called the status line and includes a numeric status code (such as "404") and a textual reason phrase (such as "Not Found"). The response status code is a three-digit integer code representing the result of the server's attempt to understand and satisfy the client's corresponding request. The way the client handles the response depends primarily on the status code, and secondarily on the other response header fields. Clients may not understand all registered status codes but they must understand their class (given by the first digit of the status code) and treat an unrecognized status code as being equivalent to the x00 status code of that class.

The standard reason phrases are only recommendations, and can be replaced with "local equivalents" at the web developer's discretion. If the status code indicated a problem, the user agent might display the reason phrase to the user to provide further information about the nature of the problem. The standard also allows the user agent to attempt to interpret the reason phrase, though this might be unwise since the standard explicitly specifies that status codes are machine-readable and reason phrases are human-readable.

The first digit of the status code defines its class:

1XX (informational)
The request was received, continuing process.
2XX (successful)
The request was successfully received, understood, and accepted.
3XX (redirection)
Further action needs to be taken in order to complete the request.
4XX (client error)
The request contains bad syntax or cannot be fulfilled.
5XX (server error)
The server failed to fulfill an apparently valid request.
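
For instance, a request for a non-existent resource produces a 4xx-class status line followed by ordinary response headers and an optional explanatory body (a hypothetical example):

GET /no-such-page HTTP/1.1
Host: www.example.com

HTTP/1.1 404 Not Found
Content-Type: text/html
Content-Length: 52

<html><body><p>Resource not found.</p></body></html>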

Response header fields


The response header fields allow the server to pass additional information beyond the status line, acting as response modifiers. They give information about the server or about further access to the target resource or related resources.

Each response header field has a defined meaning which can be further refined by the semantics of the request method or response status code.

HTTP/1.1 example of request / response transaction


Below is a sample HTTP transaction between an HTTP/1.1 client and an HTTP/1.1 server running on www.example.com, port 80.[note 5][note 6]

Client request

GET / HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive

A client request (consisting in this case of the request line and a few headers that can be reduced to only the "Host: hostname" header) is followed by a blank line, so that the request ends with a double end of line, each in the form of a carriage return followed by a line feed. The "Host: hostname" header value distinguishes between various DNS names sharing a single IP address, allowing name-based virtual hosting. While optional in HTTP/1.0, it is mandatory in HTTP/1.1. (A "/" (slash) will usually fetch a /index.html file if there is one.)

Server response

HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 155
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
ETag: "3f80f-1b6-3e1cb03b"
Accept-Ranges: bytes
Connection: close

<html>
  <head>
    <title>An Example Page</title>
  </head>
  <body>
    <p>Hello World, this is a very simple HTML document.</p>
  </body>
</html>

The ETag (entity tag) header field is used to determine if a cached version of the requested resource is identical to the current version of the resource on the server. "Content-Type" specifies the Internet media type of the data conveyed by the HTTP message, while "Content-Length" indicates its length in bytes. The HTTP/1.1 webserver publishes its ability to respond to requests for certain byte ranges of the document by setting the field "Accept-Ranges: bytes". This is useful, if the client needs to have only certain portions[58] of a resource sent by the server, which is called byte serving. When "Connection: close" is sent, it means that the web server will close the TCP connection immediately after the end of the transfer of this response.[22]: §9.1 
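
Building on the response above, a client that has cached this page can later revalidate it by echoing the entity tag in an If-None-Match header; if the resource is unchanged, the server answers 304 Not Modified with no body (a sketch of the conditional request):

GET / HTTP/1.1
Host: www.example.com
If-None-Match: "3f80f-1b6-3e1cb03b"

HTTP/1.1 304 Not Modified
ETag: "3f80f-1b6-3e1cb03b"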

Most of the header lines are optional but some are mandatory. When header "Content-Length: number" is missing in a response with an entity body then this should be considered an error in HTTP/1.0 but it may not be an error in HTTP/1.1 if header "Transfer-Encoding: chunked" is present. Chunked transfer encoding uses a chunk size of 0 to mark the end of the content. Some old implementations of HTTP/1.0 omitted the header "Content-Length" when the length of the body entity was not known at the beginning of the response and so the transfer of data to client continued until server closed the socket.

A "Content-Encoding: gzip" can be used to inform the client that the body entity part of the transmitted data is compressed by gzip algorithm.

Encrypted connections


The most popular way of establishing an encrypted HTTP connection is HTTPS.[59] Two other methods for establishing an encrypted HTTP connection also exist: Secure Hypertext Transfer Protocol, and using the HTTP/1.1 Upgrade header to specify an upgrade to TLS. Browser support for these two is, however, nearly non-existent.[60][61][62]

Similar protocols

  • The Gopher protocol is a content delivery protocol that was displaced by HTTP in the early 1990s.
  • The SPDY protocol is an alternative to HTTP developed at Google, superseded by HTTP/2.
  • The Gemini protocol is a Gopher-inspired protocol which mandates privacy-related features.

from Grokipedia
The Hypertext Transfer Protocol (HTTP) is a stateless application-level protocol for distributed, collaborative, hypertext information systems. It serves as the foundational protocol for data communication on the World Wide Web, enabling the transfer of hypermedia documents between clients and servers. Developed by Tim Berners-Lee while working at CERN in 1989, HTTP was initially proposed as part of the WorldWideWeb project to facilitate information sharing among researchers.

HTTP operates on a request-response model, where clients (such as web browsers) send requests to servers using methods like GET (to retrieve resources) or POST (to submit data), and servers respond with status codes (e.g., 200 OK for success or 404 Not Found for missing resources) along with the requested content, headers, and metadata. This structure supports the stateless nature of HTTP, meaning each request is independent and does not retain information from previous interactions unless explicitly managed through mechanisms like cookies or sessions. Key features include support for various content types (e.g., HTML, images), caching directives to improve efficiency, and security enhancements in later versions, such as HTTPS (HTTP over TLS) for encrypted communication.

The protocol has evolved through several versions to address performance, security, and scalability needs. HTTP/0.9, the unversioned initial release in 1991, was a simple text-based protocol limited to retrieving HTML documents via GET requests without headers or status codes. HTTP/1.0, standardized in RFC 1945 in 1996, introduced headers, status codes, and support for multiple content types but remained connection-oriented and prone to latency issues. HTTP/1.1, defined in RFC 2068 (1997) and refined in RFC 2616 (1999) and later RFC 9110 (2022), added persistent connections, pipelining, chunked transfer encoding, and better caching, making it the dominant version for over two decades. Subsequent updates focused on multiplexing and reduced latency: HTTP/2, standardized in RFC 7540 in 2015, introduced binary framing, header compression (HPACK), server push, and stream multiplexing over a single TCP connection to overcome HTTP/1.1 limitations. HTTP/3, published as RFC 9114 in 2022, shifts to the QUIC transport protocol (over UDP) for faster connection establishment, built-in encryption, and migration-resistant streams, further enhancing performance in modern networks. Today, HTTP underpins nearly all web traffic, with ongoing work by the IETF ensuring its adaptability to emerging technologies like web APIs and real-time applications.

Overview

Technical overview

HTTP (Hypertext Transfer Protocol) is a stateless application-level protocol for distributed, collaborative, hypertext systems, serving as the foundational mechanism for data communication on the World Wide Web by enabling the transfer of hypertext and other resources between clients and servers. It operates on a request-response model, in which clients—such as web browsers—initiate requests to servers, which process these requests and return corresponding responses containing the requested resources or status information. This model facilitates a uniform interface for accessing and manipulating resources identified by uniform resource identifiers (URIs), ensuring interoperability across diverse systems.

A defining characteristic of HTTP is its statelessness, meaning that each request from a client to a server must contain all the information necessary for the server to understand and respond, with no requirement for the server to retain state or context from previous requests. While this design promotes scalability by allowing servers to handle requests independently, it can be extended through mechanisms like cookies or sessions to simulate statefulness when needed for applications requiring continuity, such as user authentication.

The core components of HTTP include URIs, which uniquely identify resources; messages structured as requests and responses; methods that specify the intended action (e.g., retrieval or modification); status codes that indicate the outcome of the server's processing; and headers that carry metadata about the message, such as content type or caching directives. These elements collectively define the semantics shared across HTTP versions. In its basic flow, a client establishes a connection—typically over TCP/IP—and sends an HTTP request message to the server, which parses the request, performs the necessary operations on the identified resource, and transmits a response message back to the client. This exchange occurs at the application layer, abstracting the underlying transport details while relying on reliable protocols like TCP for delivery. HTTP's extensibility is inherent in its design, allowing new methods and headers to be introduced without disrupting existing implementations, which has enabled ongoing enhancements for performance and functionality in later versions such as HTTP/2 and HTTP/3.

Role in the web

HTTP serves as the foundational protocol for client-server interactions in the web, facilitating the retrieval and exchange of resources such as web pages, images, and data between browsers or applications and servers. It underpins web browsing by enabling users to access hypertext documents and multimedia content through a standardized request-response mechanism, while also powering application programming interfaces (APIs), particularly RESTful services that allow disparate systems to communicate seamlessly. In the Internet of Things (IoT), HTTP supports lightweight data exchange between devices and cloud services, enabling real-time monitoring and control in applications like smart homes and industrial sensors.

As of November 2025, HTTP and its secure variant dominate web traffic, with over 95% of websites utilizing HTTPS, reflecting its near-universal adoption for secure data transmission. HTTP/3, the latest version, has achieved approximately 35.9% usage among websites globally, driven by its performance improvements and support in major browsers and servers. This widespread adoption underscores HTTP's evolution from serving static pages in the early web to handling dynamic, data-intensive interactions in modern applications.

HTTP integrates closely with other core technologies to form the backbone of web architecture: the Domain Name System (DNS) resolves human-readable domain names to IP addresses for requests, while transport protocols like TCP (for HTTP/1.x and HTTP/2) or QUIC (for HTTP/3) ensure reliable data delivery over networks. Security is layered on via Transport Layer Security (TLS), which encrypts HTTP traffic to protect against eavesdropping and tampering, a standard practice since the protocol's maturation. This interoperability extends to HTTP's influence on web standards, where it drives the loading of HTML documents, execution of JavaScript for dynamic content via asynchronous requests (AJAX/Fetch API), and optimization through content delivery networks (CDNs) that cache and distribute resources closer to users for reduced latency.

In contemporary computing paradigms, HTTP remains essential for microservice architectures, where services communicate via HTTP-based APIs to enable scalable, loosely coupled systems. It supports serverless computing by allowing event-driven functions to invoke and respond over HTTP endpoints, abstracting infrastructure management for developers. Similarly, in edge computing, HTTP facilitates low-latency interactions between edge nodes and central services, processing data closer to the source in distributed environments like 5G networks and IoT deployments.

History

Origins and early development (1989–1996)

In March 1989, Tim Berners-Lee, a British physicist working at CERN, proposed an information management system to facilitate the sharing of scientific documents among researchers across the organization. The proposal outlined a hypertext-based system using a distributed network of documents linked via hyperlinks, addressing the challenges of fragmented information silos in high-energy physics collaborations. This initiative aimed to create a universal, platform-independent method for accessing and linking data over existing networks like TCP/IP, without requiring centralized databases.

By 1991, Berners-Lee had implemented the first version of the Hypertext Transfer Protocol (HTTP), designated as HTTP/0.9, as part of the WorldWideWeb project at CERN. HTTP/0.9 was a minimalist, request-response protocol limited to the GET method for retrieving simple hypertext documents, typically in a plain-text format resembling early HTML, transmitted over TCP connections. It prioritized simplicity and speed to enable quick document retrieval in a distributed environment, serving as the foundational transport mechanism for the web's initial public demonstration in August 1991.

In response to growing adoption and the need for more robust features, early drafts of HTTP/1.0 emerged between 1992 and 1993, introducing elements like request headers, response status codes, and additional methods beyond GET, supporting richer interactions and error handling. These developments were influenced by contemporary protocols such as WAIS for search and retrieval and Gopher for menu-driven navigation, which highlighted the demand for extensible, hypermedia-oriented information access over the Internet. The formation of the IETF HTTP Working Group in 1995 formalized these efforts, coordinating input from the broader community via mailing lists like www-talk to refine the protocol for wider interoperability.

The culmination of this period came with the release of RFC 1945 in May 1996, which documented HTTP/1.0 as an informational specification reflecting common implementations and enabling broader adoption across diverse hosts. This version addressed key challenges in early web deployment, such as efficient data transfer over unreliable networks and extensibility for future enhancements, solidifying HTTP as a simple yet scalable protocol layered atop TCP/IP.

HTTP/1.1 standardization and dominance (1997–2009)

The Hypertext Transfer Protocol version 1.1 (HTTP/1.1) was initially standardized as a provisional specification in RFC 2068, published by the Internet Engineering Task Force (IETF) in January 1997. This document formalized HTTP/1.1 as an update to HTTP/1.0, addressing ambiguities in the earlier version and introducing enhancements for better performance and reliability. In June 1999, RFC 2616 superseded RFC 2068, providing clarifications on critical areas such as caching directives and persistent connections to resolve implementation inconsistencies observed in early deployments.

Key features of HTTP/1.1 included support for persistent connections—also known as "keep-alive"—which allowed multiple requests and responses over a single TCP connection, reducing overhead compared to the per-request connections in HTTP/1.0. Pipelining enabled clients to send multiple requests without waiting for responses, further optimizing bandwidth usage, while chunked transfer encoding permitted servers to stream content in variable-sized blocks without specifying the total length upfront. These innovations, along with improved cache control and the mandatory Host header for virtual hosting, made HTTP/1.1 more efficient for growing web applications.

During the late 1990s and early 2000s, HTTP/1.1 rapidly gained dominance, powering the explosive growth of the World Wide Web amid the dot-com boom. Major browsers released in 1996 and 1997 implemented HTTP/1.1, enabling seamless integration with emerging web technologies and contributing to the surge in online commerce and content delivery. By the mid-2000s, HTTP/1.1 had become the dominant web protocol, handling the vast majority of web traffic as internet usage expanded globally.

The IETF provided ongoing maintenance for HTTP/1.1 through errata reports and clarifications, ensuring compatibility as implementations proliferated. In 2007, the HTTP Bis working group was chartered to revise and clarify the protocol specifications, culminating in a series of updates by 2009 that addressed lingering ambiguities without introducing major changes. However, as web pages grew more resource-intensive with embedded objects like images and scripts, limitations of HTTP/1.1 began to emerge, particularly head-of-line blocking in pipelined requests and increased latency over high-latency networks, which hindered performance in mobile and broadband environments.

HTTP/2 introduction (2010–2015)

In November 2009, Google introduced SPDY, an experimental application-layer protocol designed to accelerate web page loading by addressing key limitations of HTTP/1.1, such as the inefficiency of establishing multiple TCP connections per page and head-of-line blocking that delayed resource delivery. SPDY enabled multiplexing of multiple requests and responses over a single TCP connection, compressed HTTP headers to reduce overhead, and prioritized traffic to minimize latency, achieving up to a 64% reduction in page load times in controlled tests. These improvements were particularly motivated by the growing demands of the mobile web, where slow network conditions and high latency amplified HTTP/1.1's bottlenecks, leading to poorer user experiences on bandwidth-constrained devices.

The success of SPDY prompted the revival of the IETF's HTTP Working Group (httpbis) in 2011, tasked with clarifying and updating HTTP specifications while exploring performance enhancements. Influenced by SPDY's concepts, the group chartered the development of a new HTTP version that preserved HTTP/1.1's application semantics but introduced a more efficient wire protocol, with initial drafts directly based on SPDY's multiplexing and compression mechanisms. Over the following years, collaborative efforts refined these ideas through multiple iterations, incorporating feedback from implementers to ensure broad compatibility and security, culminating in a consensus-driven specification.

HTTP/2 was officially standardized as RFC 7540 in May 2015, defining a binary protocol that layered HTTP messaging over SPDY-inspired framing. Key innovations included binary framing for all messages, which replaced text-based messages with compact, structured frames to lower processing overhead; HPACK header compression to eliminate redundant data across requests; server push, allowing proactive resource delivery without client requests; and stream multiplexing, enabling concurrent, interleaved transmission of multiple request-response pairs on one connection without blocking. These features collectively aimed to reduce latency and bandwidth usage while maintaining backward compatibility with HTTP/1.1 semantics.

The protocol's development was driven by the explosive growth of mobile usage and the need for sub-second page loads to retain users, with early implementations demonstrating significant speed gains on resource-heavy sites. Google integrated HTTP/2 support into Chrome starting with version 40 in early 2015, enabling it over TLS connections to leverage multiplexing and compression for faster browsing. By 2016, major browsers including Firefox (from version 36), Safari (from version 9), and Microsoft Edge had enabled HTTP/2 by default, accelerating its adoption among web servers and content providers.

HTTP/3 adoption and protocol refactoring (2016–present)

In 2016, the Internet Engineering Task Force (IETF) initiated the development of HTTP/3 through its QUIC working group, adapting Google's QUIC transport protocol to address limitations in prior HTTP versions by enabling reduced latency through integrated encryption and 0-RTT connection establishment, as well as seamless connection migration across network changes. HTTP/3 was formally standardized as RFC 9114 in June 2022, defining a mapping of HTTP semantics onto the QUIC protocol, which operates over UDP to support multiplexing, flow control, and error recovery at the transport layer without relying on TCP. This approach preserves HTTP's request-response model while leveraging QUIC's built-in congestion control and reliability features to improve performance in lossy networks.

Accompanying the HTTP/3 standard, the IETF undertook a comprehensive refactoring of HTTP specifications in 2022, consolidating and updating core documents to enhance clarity and consistency across versions: RFC 9110 for HTTP semantics, RFC 9111 for caching, RFC 9112 for HTTP/1.1, and RFC 9113 for HTTP/2, while deprecating earlier specifications such as RFC 2616. These updates separated transport-independent elements from version-specific details, facilitating future extensions without fragmenting the protocol ecosystem.

Adoption of HTTP/3 accelerated following browser enablement, with Chrome enabling it by default in April 2020 and Firefox following in May 2021, though experimental support appeared in nightly builds as early as 2019. By late 2024, major content delivery networks, including Cloudflare and Akamai, had implemented widespread support, driving support to approximately 34% of the top 10 million websites and usage to 20.5% of global web requests. As of November 2025, HTTP/3 was in use by 36.2% of all monitored websites, reflecting steady growth from around 15-18% in 2022, though actual request usage remains lower at around 20% based on 2024 data.

Ongoing IETF efforts focus on extensions enhancing privacy and functionality, such as Oblivious HTTP (RFC 9458, published in 2023), which enables anonymous request forwarding to prevent request linkability between clients and servers. No major new HTTP version beyond HTTP/3 has been announced as of 2025. Deployment challenges persist, particularly with firewalls blocking UDP traffic, necessitating automatic fallback to HTTP/2 over TCP, which can introduce minor latency penalties during connection races.

Message Exchange

Connections and persistent sessions

HTTP connections are typically established over the underlying transport protocol using a client-server model. For HTTP/1.x, this involves a TCP three-way handshake to create a reliable, ordered byte stream between the client and server, as defined in the TCP specification. The default ports are 80 for unencrypted HTTP and 443 for HTTPS, which uses TLS over TCP to secure the connection. In contrast, HTTP/3 employs QUIC as its transport, a UDP-based protocol that integrates TLS 1.3 encryption and enables faster connection setup by combining the transport handshake and encryption negotiation.

Persistent connections, introduced as the Connection: keep-alive extension to mitigate the overhead of repeatedly establishing new transport connections, became the default behavior in HTTP/1.1. Unlike HTTP/1.0, where connections closed after each response, this reuse allows multiple request-response exchanges over a single connection, reducing latency and resource consumption associated with TCP handshakes and TLS negotiations. However, HTTP/1.1 pipelining, in which sequential requests are sent without waiting for prior responses, can introduce head-of-line (HOL) blocking, where a delayed response stalls subsequent ones on the same connection.

HTTP is inherently stateless, meaning each request is independent and servers do not retain information about prior interactions unless state is explicitly managed. Session management extends this statelessness using mechanisms like cookies, where servers set state via the Set-Cookie response header and clients return it in subsequent Cookie request headers. Alternative tokens, such as session IDs, can also maintain state across requests.

HTTP/2 advances connection efficiency by multiplexing multiple independent streams over a single TCP connection, eliminating HOL blocking at the application layer while preserving persistence. HTTP/3 builds on this with QUIC's stream-based multiplexing, adding features like 0-RTT resumption, which allows clients to send data on the first packet of a resumed connection using cached session state, further accelerating reconnections. Optimizations in HTTP/2 and later versions include connection coalescing, where clients reuse an existing connection for requests to multiple virtual hosts (authorities) sharing the same IP address and port, based on matching the :authority pseudo-header. This reduces the number of concurrent connections and associated overhead.
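As a minimal sketch of connection reuse, the following example uses Python's standard http.client module to send several requests over one persistent HTTP/1.1 connection; the host name and paths are purely illustrative.

import http.client

# Open one TCP connection and reuse it for several request-response exchanges.
# HTTP/1.1 keeps the connection open by default unless "Connection: close" is sent.
conn = http.client.HTTPConnection("example.com", 80)

for path in ("/", "/about", "/contact"):
    conn.request("GET", path)
    response = conn.getresponse()
    body = response.read()  # the body must be drained before reusing the connection
    print(path, response.status, len(body), "bytes")

conn.close()

Each iteration reuses the same underlying TCP connection, avoiding the handshake cost that a new connection per request would incur.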

Request-response cycle

The HTTP request-response cycle constitutes the core mechanism of message exchange in the protocol, operating in a stateless fashion where each request is independent of any prior interactions unless explicitly indicated by headers or other mechanisms. The process begins when a client, such as a web browser, parses a Uniform Resource Identifier (URI) to identify the target resource, resolve the host, and determine the connection endpoint, which may involve an origin server or an intermediary. The client then constructs and sends an HTTP request message comprising a start line (method, request-target, and version), header fields, and an optional message body. Upon receipt, the server routes the request to an appropriate handler based on the target URI, processes it according to the method's semantics (e.g., retrieving or modifying the resource), and generates a response message with a status code, headers, and optional body, which is transmitted back along the same path. The client processes the response, such as rendering the content for display or executing further logic, completing the cycle.

Intermediaries play a pivotal role in the cycle by intercepting and altering message flow between client and origin server. Proxies forward requests and responses, potentially injecting headers like Via to trace the transmission path, while also enforcing security policies or load balancing. Gateways function as protocol converters, treating inbound requests as if received directly and outbound responses as originating from themselves. Caches store prior responses and may satisfy subsequent requests from storage if the cached representation remains valid, thereby reducing origin server load and improving efficiency without altering the logical cycle structure.

Error handling ensures robustness in the cycle through timeout detection and retry logic, particularly emphasizing idempotency to avoid unintended side effects. Clients implement timeouts to abandon unresponsive connections, after which they may retry the request if the method is idempotent (such as GET, HEAD, PUT, DELETE, OPTIONS, or TRACE), since repeating these yields the same result as a single invocation. Servers signal errors via status codes in responses, including 4xx codes for client issues (e.g., 400 Bad Request for malformed syntax) and 5xx codes for server failures (e.g., 503 Service Unavailable prompting potential retries), enabling clients to diagnose and respond without assuming connection persistence.

Content negotiation refines the cycle by allowing clients to influence response delivery through request headers, ensuring the server provides an appropriate representation. The Accept header specifies preferred media types (e.g., text/html;q=1.0, application/json;q=0.9), while Accept-Language, Accept-Charset, and Accept-Encoding indicate language, character set, and compression preferences, respectively; the server evaluates these to select and deliver the best-matching body format, or returns 406 Not Acceptable if none suffice. This proactive mechanism occurs prior to response generation, tailoring the output to client capabilities without requiring multiple cycles.

In practical applications, the cycle iterates to fetch composite resources; for instance, a successful GET response delivering an HTML document prompts the client to parse it and initiate subsequent cycles for linked assets like images or stylesheets, each treated as an independent, stateless transaction. This relies on the client's interpretation of response content but adheres to the protocol's core exchange model.
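To illustrate a single cycle with proactive content negotiation, the sketch below (Python standard library; the host and the preference values are assumptions for the example) sends one GET request carrying Accept, Accept-Language, and Accept-Encoding preferences and inspects the outcome.

import http.client

conn = http.client.HTTPSConnection("example.com", 443)
conn.request("GET", "/", headers={
    "Accept": "text/html;q=1.0, application/json;q=0.9",  # preferred media types
    "Accept-Language": "en;q=1.0, fr;q=0.8",              # language preference
    "Accept-Encoding": "identity",                         # keep the body uncompressed
})
resp = conn.getresponse()

if resp.status == 200:
    print("Selected representation:", resp.getheader("Content-Type"))
elif resp.status == 406:
    print("Server could not satisfy the stated preferences")
body = resp.read()
conn.close()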

Requests

Syntax and structure

In HTTP/1.1, a request message consists of three main parts: the request line, the header section, and an optional message body. The request line begins the request and follows the generic start-line format, but is specifically structured as method SP request-target SP HTTP-version, terminated by a carriage return line feed (CRLF). Here, method identifies the request method (e.g., "GET"); request-target specifies the target resource in one of four forms: origin-form (e.g., "/index.html"), absolute-form (a full URI), authority-form (host and port, used in CONNECT), or asterisk-form ("*" for OPTIONS); and HTTP-version denotes the protocol version (e.g., "HTTP/1.1"). The Host header is mandatory in HTTP/1.1 to support virtual hosting. The header section immediately follows the request line and comprises zero or more header fields, each on its own line in the form field-name ":" OWS field-value OWS, where OWS denotes optional whitespace. This section is client-generated, conveying metadata such as the target host, accepted content types, or authentication credentials. The header section ends with an empty line (CRLF CRLF), which delimits it from the body if present. The message body, if included, follows the header section and contains the request data (e.g., form data in POST requests), represented as a sequence of octets (*OCTET). Its presence and length are determined by headers like Content-Length or Transfer-Encoding, or by connection closure in the absence of such indicators; the body is optional for requests that do not require payload transmission, such as GET. For example, a complete HTTP/1.1 request might appear as:

GET /index.html HTTP/1.1
Host: www.example.com
Accept: text/html

In HTTP/2 and later versions, the textual format of HTTP/1.1 is replaced by a binary framing layer, where requests are expressed through frames such as HEADERS (carrying pseudo-headers like :method, :path, :scheme, and :authority, along with other fields) and DATA (for the body), but the underlying semantics of the request line, headers, and body remain identical to HTTP/1.1. The method appears as the :method pseudo-header field within the HEADERS frame.
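To make the HTTP/1.1 wire format concrete, the following sketch (Python standard library; www.example.com is an illustrative host) opens a TCP socket and transmits the request shown above verbatim, with CRLF line endings and an empty line terminating the header section.

import socket

request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Accept: text/html\r\n"
    "Connection: close\r\n"   # ask the server to close after this exchange
    "\r\n"                    # empty line ends the header section; no body follows
)

with socket.create_connection(("www.example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.decode("iso-8859-1")[:500])  # status line, headers, start of body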

Methods

HTTP request methods specify the intended action to be performed by a client on a given resource at a server, forming a core part of the protocol's semantics across all versions. These methods are registered with the Internet Assigned Numbers Authority (IANA) and can be extended through standardized processes, ensuring interoperability. Each method carries specific semantics regarding its effect on the resource, and they are classified by key properties: safety (no state changes on the server), idempotency (repeated identical requests produce the same result and side effects), and cacheability (responses can be stored and reused). All standard methods defined in HTTP/1.1 are preserved with identical semantics in HTTP/2 and HTTP/3, though the underlying framing and transport differ.

Safe methods are defined as read-only operations that do not request any state changes on the server, allowing user agents to prefetch or cache responses without risk. The primary safe methods include GET, which retrieves a representation of the target resource; HEAD, which behaves like GET but omits the message body in the response to fetch only metadata; OPTIONS, which describes the communication options available for the target resource or server; and TRACE, which performs a loop-back test to echo the received request for debugging purposes. Extensions like PROPFIND from WebDAV retrieve properties (metadata) of a resource, supporting scoped queries via a Depth header and returning results in a multistatus XML format. All safe methods are also idempotent by definition.

Idempotent methods ensure that retrying a failed request does not result in unintended side effects, making them suitable for unreliable networks. Beyond the safe methods, PUT replaces the target resource's state with the request content, creating the resource if it does not exist, while DELETE removes all current representations of the target resource from the server. POST can be treated as idempotent in specific variants using conditional requests (e.g., with If-Match headers based on entity tags) to avoid duplicate effects, though POST is not inherently idempotent. Non-idempotent methods like standard POST process the request content according to resource-specific semantics, such as submitting data that may create new resources or trigger side effects, and thus repeated invocations can lead to different outcomes.

Regarding cacheability, responses to GET and HEAD are inherently cacheable, enabling intermediaries to store and reuse representations for efficiency. POST responses are cacheable only under strict conditions: explicit freshness information (e.g., via Expires or Cache-Control headers) must be present, and the Content-Location header must match the effective request URI. Other methods like PUT, DELETE, OPTIONS, and TRACE have no defined caching semantics unless specified otherwise, though safe method responses can be cached if freshness is indicated. For extensions, PATCH applies partial modifications to a resource using a patch document and is not cacheable by default, though conditional variants can enhance idempotency. PROPFIND responses are generally not cacheable due to the dynamic nature of properties. New HTTP methods can be defined and registered via the IETF process in the IANA HTTP Method Registry, promoting extensibility while maintaining protocol stability; for instance, PATCH was standardized in RFC 5789 for partial updates, distinguishing it from PUT's full replacement semantics. The table below summarizes these properties.
Method | Semantics | Safe | Idempotent | Cacheable
GET | Retrieve resource representation | Yes | Yes | Yes
HEAD | Retrieve metadata only | Yes | Yes | Yes
OPTIONS | Query communication options | Yes | Yes | If freshness indicated
TRACE | Diagnostic loop-back | Yes | Yes | No
PROPFIND | Retrieve resource properties (WebDAV) | Yes | Yes | No
POST | Process payload, e.g., create a resource | No | No (conditional variants yes) | If conditions met
PUT | Replace or create resource state | No | Yes | No
DELETE | Remove resource | No | Yes | No
PATCH | Apply partial modifications | No | No (conditional variants yes) | If conditions met
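As a sketch of how a client might exploit these properties, the helper below (plain Python; the retry policy and host are assumptions, not mandated by the protocol) retries transient failures only for idempotent methods, so a replayed request cannot duplicate the effects of a plain POST.

import http.client

IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def request_with_retry(host, method, path, body=None, attempts=3):
    """Send a request, retrying on connection errors only when the method is idempotent."""
    for attempt in range(attempts):
        conn = http.client.HTTPSConnection(host, timeout=5)
        try:
            conn.request(method, path, body=body)
            return conn.getresponse().status
        except (ConnectionError, TimeoutError, OSError):
            conn.close()
            if method not in IDEMPOTENT_METHODS or attempt == attempts - 1:
                raise  # do not replay POST/PATCH; their effects may not be repeatable
    return None

# Safe to retry: repeating a GET yields the same result as a single invocation.
print(request_with_retry("example.com", "GET", "/"))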

Header fields in requests

HTTP request header fields provide metadata from the client to the server, specifying directives such as the target resource, client capabilities, and conditional requirements for processing the request. These fields are case-insensitive and extensible, allowing clients to convey preferences for content negotiation, caching behavior, and security credentials.

Among the general request headers, the Host field is mandatory in HTTP/1.1 and later versions to support virtual hosting, indicating the Internet host and port number of the resource being requested, formatted as "hostname[:port]". For example, Host: example.com specifies the target domain. The User-Agent field identifies the requesting user agent, typically including software name, version, and operating system details, such as User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36. This aids servers in optimizing responses or logging. The Accept field enables content negotiation by listing media types the client can handle, using quality values (q-values) from 0 to 1, like Accept: text/html;q=1.0, application/json;q=0.9.

Request modifier headers allow clients to impose conditions or request specific handling. The Authorization field carries credentials to authenticate the client, formatted according to schemes like Basic or Bearer, such as Authorization: Basic dXNlcjpwYXNz. (Details on authentication schemes are covered separately.) The If-Match header makes a request conditional on the current entity tag (ETag) of the resource, preventing overwrites if the resource has changed; for instance, If-Match: "686897696a7c876b7e" requires the ETag to match exactly. The Range header requests partial content, typically byte ranges, as in Range: bytes=0-999 to retrieve the first 1000 bytes. Other common examples include the Cache-Control header, whose directives instruct caches on handling, such as Cache-Control: no-cache to bypass caches and fetch fresh content, and the Expect header, which signals server expectations before sending a large body, notably Expect: 100-continue to receive a 100 Continue status if the request is acceptable.

All HTTP header fields, including those for requests, are managed by the Internet Assigned Numbers Authority (IANA) through a registry distinguishing permanent and provisional entries. Permanent fields require a published specification and expert review, while provisional ones undergo lighter scrutiny for experimental use; new registrations or changes are submitted via the IANA interface or mailing list. In HTTP/2, these fields (along with pseudo-headers like :method) are compressed using HPACK to reduce overhead on repeated transmissions.
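The sketch below (Python standard library; the URL, ETag value, and user-agent string are illustrative) issues a ranged, conditional GET carrying several of the request headers described above and checks whether the server honoured the byte range (206) or reported no change (304).

import http.client

conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/large-file.bin", headers={
    "User-Agent": "example-client/1.0",
    "Range": "bytes=0-999",                   # ask only for the first 1000 bytes
    "If-None-Match": '"686897696a7c876b7e"',  # hypothetical ETag from a cached copy
    "Cache-Control": "no-cache",              # force revalidation by intermediaries
})
resp = conn.getresponse()

if resp.status == 206:
    print("Partial content:", resp.getheader("Content-Range"))
elif resp.status == 304:
    print("Cached copy is still valid")
else:
    print("Full or other response:", resp.status)
conn.close()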

Responses

Syntax and structure

In HTTP/1.1, a response message consists of three main parts: the status line, the header section, and an optional message body. The status line begins the response and follows the generic start-line format, but is specifically structured as HTTP-version SP status-code SP [reason-phrase], terminated by a carriage return line feed (CRLF). Here, HTTP-version identifies the protocol version (e.g., "HTTP/1.1"), status-code is a three-digit integer indicating the response outcome (e.g., 200), and reason-phrase is an optional, human-readable description that carries no semantic binding but aids readability and debugging (e.g., "OK" for success or "Not Found" for 404). The header section immediately follows the status line and comprises zero or more header fields, each on its own line in the form field-name ":" OWS field-value OWS, where OWS denotes optional whitespace. This section mirrors the structure used in requests but is server-generated, conveying metadata such as content type or caching directives. The header section ends with an empty line (CRLF CRLF), which delimits it from the body if present. The message body, if included, follows the header section and contains the payload data, such as HTML content or binary files, represented as a sequence of octets (*OCTET). Its presence and length are determined by headers like Content-Length or Transfer-Encoding, or by connection closure in the absence of such indicators; the body is optional for responses that do not require payload transmission. For example, a complete HTTP/1.1 response might appear as:

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 31

<html><body>Hello</body></html>

In HTTP/2 and later versions, the textual format of HTTP/1.1 is replaced by a binary framing layer, where responses are expressed through frames such as HEADERS (carrying the :status pseudo-header and other fields) and DATA (for the body), but the underlying semantics of the status line, headers, and body remain identical to HTTP/1.1. The status code appears as the :status pseudo-header field within the HEADERS frame.

Status codes

HTTP status codes are three-digit integer values returned by a server in the response message to indicate the result of a client's request, providing a standardized way to communicate outcomes across HTTP implementations. These codes are part of the response status line and are grouped into five classes based on their first digit, with the semantics of each class defined by the initial digit regardless of the specific code. The classes ensure consistent interpretation, though individual codes within a class may convey nuanced meanings.

1xx Informational

Informational status codes (1xx) provide provisional responses, signaling that the request is being processed and the client should continue without further action unless specified. They are typically used in scenarios involving upgrades or continuations.
  • 100 Continue: Sent by the server to indicate that the client should proceed with sending the request body after submitting headers, confirming that the initial headers are acceptable.
  • 101 Switching Protocols: Indicates that the server is switching to the protocol requested by the client via the Upgrade header, such as from HTTP/1.1 to HTTP/2.

2xx Success

Success status codes (2xx) indicate that the request was received, understood, and successfully processed by the server. The exact meaning depends on the HTTP method used, but these codes generally confirm the intended outcome.
  • 200 OK: The request succeeded, and the response may include a representation of the requested resource.
  • 201 Created: The request resulted in the creation of a new resource, often accompanied by a Location header pointing to the new resource.
  • 204 No Content: The server successfully processed the request but returns no content in the response body, suitable for operations like deletions or updates without representation.

3xx Redirection

Redirection status codes (3xx) suggest that the client needs to take additional action to complete the request, often by redirecting to another URL. These codes facilitate resource relocation without requiring client modifications in all cases.
  • 301 Moved Permanently: The requested resource has been permanently moved to a new URL, and future requests should use the new location.
  • 302 Found: The requested resource resides temporarily under a different URL, and clients should continue to use the original request method for the redirect.
  • 304 Not Modified: Used in conditional requests (e.g., with If-Modified-Since), indicating that the resource has not changed since the specified version, allowing the client to use its cached copy.

4xx Client Error

Client error status codes (4xx) indicate that the server understood the request but cannot fulfill it due to an error on the client's side, such as malformed syntax or invalid parameters. These codes prompt the client to re-examine its request.
  • 400 Bad Request: The server cannot process the request due to malformed syntax or unsupported features in the request message.
  • 401 Unauthorized: The client must authenticate itself to access the resource, typically requiring credentials in subsequent requests.
  • 404 Not Found: The server cannot find the requested resource, often due to an incorrect URL.
  • 429 Too Many Requests: The client has sent too many requests in a given time frame, enforcing rate limiting to prevent overload.

5xx Server Error

Server error status codes (5xx) signal that the server failed to fulfill a valid request, pointing to issues on the server side such as internal failures or upstream problems. Clients may retry after a delay, depending on the code.
  • 500 Internal Server Error: A generic error indicating that the server encountered an unexpected condition preventing request fulfillment.
  • 502 Bad Gateway: The server, acting as a gateway or proxy, received an invalid response from an upstream server.
  • 503 Service Unavailable: The server is temporarily unable to handle the request due to maintenance or overload, suggesting a retry after a specified delay.
HTTP status codes are extensible, allowing new codes to be defined and registered in the IANA HTTP Status Code Registry without altering the fixed class structure, ensuring backward compatibility. For example, extensions like WebDAV introduce codes such as 207 Multi-Status, which aggregates multiple independent responses into a single message for compound operations. Unregistered codes may be used experimentally but should avoid conflicts with standard assignments.
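A minimal sketch of client-side status handling (Python standard library; the branching policy is one reasonable choice, not a protocol requirement, and the Retry-After parsing assumes the delta-seconds form) might dispatch on the class of the code and honour a throttling hint for 429 or 503.

import http.client
import time

def fetch(host, path):
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", path)
    resp = conn.getresponse()

    if 200 <= resp.status < 300:                  # 2xx: success
        return resp.read()
    if 300 <= resp.status < 400:                  # 3xx: follow the Location header
        return "redirected to " + (resp.getheader("Location") or "?")
    if resp.status in (429, 503):                 # throttling or temporary overload
        delay = int(resp.getheader("Retry-After", "1"))  # assumes delta-seconds form
        time.sleep(delay)
        return "retried after %d s" % delay
    if 400 <= resp.status < 500:                  # other 4xx: client must fix the request
        return "client error %d" % resp.status
    return "server error %d" % resp.status        # remaining 5xx

print(fetch("example.com", "/"))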

Header fields in responses

HTTP response header fields provide metadata from the server to the client, conveying information about the response's origin, content, caching instructions, and security requirements. These fields are sent after the status line in the response message and help the client process the representation, manage sessions, and handle redirects or authentication challenges. Unlike request headers, response headers are primarily server-initiated to control client behavior and ensure protocol compliance.

General Headers

General response headers apply broadly to the message and are not tied to the entity body. The Date header indicates the date and time at which the response was originated by the server, using the preferred HTTP-date format (e.g., Date: Wed, 21 Oct 2015 07:28:00 GMT). It serves as timing metadata for caching, conditional requests, and freshness calculations, with servers required to generate it if not provided by proxies. The Server header contains information about the software used by the origin server to handle the request, typically including product tokens and optional comments (e.g., Server: Apache/2.4.7 (Ubuntu)). Its purpose is to aid in diagnosing interoperability issues and identifying server capabilities, though servers may omit sensitive details for security. For redirection or resource creation, the Location header specifies the URI reference for the target resource (e.g., Location: https://example.com/new-page). It is typically included in 201 (Created) and 3xx (Redirection) responses to indicate where the client should proceed next.

Entity Headers

Entity headers describe the representation enclosed in the response body. The Content-Type header defines the media type of the payload, including type, subtype, and parameters like charset (e.g., Content-Type: text/html; charset=UTF-8). It informs the client how to interpret and render the content, with defaults falling back to application/octet-stream if absent. The Content-Length header declares the size of the entity body in decimal octets (e.g., Content-Length: 1234), enabling the client to determine the exact length for message framing and boundary detection. It must be accurate, or the connection risks closure. The ETag header provides an opaque entity tag as a validator for the selected representation (e.g., ETag: "686897696a7c876b7e"), often a hash of the content. It supports efficient caching by allowing clients to check for changes via conditional requests like If-None-Match.

Response Control Headers

Headers for controlling caching and authentication include Vary, which lists request header fields that influenced the response selection (e.g., Vary: Accept-Language, Accept-Encoding). It instructs caches to vary stored responses based on these factors, preventing incorrect content delivery. The WWW-Authenticate header challenges the client to provide credentials for the protected resource, specifying schemes and parameters (e.g., WWW-Authenticate: Basic realm="Secure Area"). It is required in 401 (Unauthorized) responses to initiate authentication flows.

Caching and State Management Examples

The Cache-Control response header directs caching behavior with directives like max-age, which sets the response's freshness lifetime in seconds (e.g., Cache-Control: max-age=3600 for one hour). This allows servers to override default expiration heuristics, balancing performance and data staleness. For maintaining state in stateless HTTP, the Set-Cookie header instructs the client to store a cookie with a name-value pair and attributes like expiration or domain (e.g., Set-Cookie: sessionId=abc123; Max-Age=3600; Secure). It enables session tracking by having the client return the cookie in subsequent requests via the Cookie header. Introduced in 2012, the Strict-Transport-Security (HSTS) header enforces HTTPS-only access for the host, with a required max-age directive specifying the policy duration in seconds (e.g., Strict-Transport-Security: max-age=31536000; includeSubDomains). It mitigates man-in-the-middle attacks by directing browsers to reject insecure connections and upgrade HTTP requests to HTTPS.
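The following sketch uses Python's built-in http.server to show a handler emitting the response headers discussed above; the paths, cookie name, and header values are illustrative choices rather than recommended settings.

from http.server import BaseHTTPRequestHandler, HTTPServer

class ExampleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello</body></html>"
        self.send_response(200)  # also emits Date and Server headers automatically
        self.send_header("Content-Type", "text/html; charset=UTF-8")
        self.send_header("Content-Length", str(len(body)))
        self.send_header("Cache-Control", "max-age=3600")  # fresh for one hour
        self.send_header("Set-Cookie", "sessionId=abc123; Max-Age=3600; Secure; HttpOnly")
        self.send_header("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ExampleHandler).serve_forever()

Note that HSTS only takes effect when the header is received over an HTTPS connection; a plain-HTTP test server like this one would need TLS in front of it for browsers to honour the policy.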

Authentication and Security

Authentication schemes

HTTP employs a challenge-response framework to control access to protected resources, where servers issue challenges via specific response headers and clients respond with credentials in request headers. This framework, defined in RFC 7235, supports multiple extensible authentication schemes, allowing servers to specify the required method and parameters. Challenges are typically issued in response to unauthorized access attempts, prompting clients to authenticate before retrying the request.

The Basic authentication scheme transmits user credentials as a Base64-encoded string of the username and password separated by a colon, placed in the Authorization header. For example, a client might send Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=, where the encoded value represents "username:password". This scheme is simple but inherently insecure over unencrypted connections, as the encoding provides no confidentiality or integrity protection, exposing credentials to interception. A client should not send Basic credentials over unencrypted HTTP; the scheme must be used with TLS to mitigate risks like eavesdropping and credential theft.

In contrast, the Digest authentication scheme enhances security by using a challenge-response mechanism that avoids transmitting passwords. The server issues a challenge including a unique nonce (a server-generated string to prevent replay attacks) and a realm, and the client computes a cryptographic hash, typically MD5 or SHA-256, of the username, password, nonce, HTTP method, and request URI to produce the response value. This hashed digest is sent in the Authorization header, such as Authorization: Digest username="user", realm="example", nonce="7ypf/xlj9XXwfDPEoM4URrv/xwf94BcCAzFZH4GiTo0v", uri="/dir/index.html", response="dc509ac6f0e95b96b3e34b6e7d4197f5". By storing only the hashed form of credentials (e.g., HA1 = hash(username:realm:password)), servers can verify requests without retaining plaintext passwords.

Realms define protection spaces within the server, partitioning resources into logical areas where the same credentials apply, such as "admin" for administrative paths. The realm parameter in challenge headers (e.g., WWW-Authenticate: Basic realm="Access to the staging site") indicates the scope, helping clients and users select appropriate credentials without affecting the scheme itself. Multiple realms can coexist on a single host, enabling fine-grained access control.

Servers signal the need for authentication using 401 Unauthorized or 407 Proxy Authentication Required status codes, accompanied by WWW-Authenticate (for origin servers) or Proxy-Authenticate (for proxies) headers that list supported schemes and parameters. For instance, a 401 response might include WWW-Authenticate: Basic realm="secure area", Digest realm="secure area", nonce="abc123", allowing the client to choose a compatible scheme. These headers must appear in the specified responses to guide the authentication process.

Advanced schemes build on this framework for modern applications. The Bearer scheme, commonly used with OAuth 2.0, conveys access tokens in the Authorization header as Authorization: Bearer mF_9.B5f-4.1JqM, where the token grants scoped access without further proof of possession. Resource servers validate the token against an authorization server, but transmission requires TLS to prevent unauthorized use. Mutual TLS (mTLS) extends authentication by leveraging client certificates during the TLS handshake for OAuth contexts, binding access tokens to the presented certificate's subject (e.g., DN or SAN) to ensure the token cannot be misused by unauthorized parties. In the PKI method, the server verifies the client's certificate chain and matches it to registered metadata like tls_client_auth_subject_dn. Use of Basic authentication over plain HTTP has been discouraged since the mid-2010s due to its vulnerability to man-in-the-middle attacks, with best practices mandating TLS for any credential transmission.
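A small sketch of how a client could construct these Authorization values (Python standard library; the username, password, realm, and nonce are placeholders, and the Digest computation is the simplified MD5 variant without qop handling):

import base64
import hashlib

# Basic: Base64 of "username:password" (confidential only when sent over TLS).
basic_value = base64.b64encode(b"username:password").decode("ascii")
print("Authorization: Basic " + basic_value)

# Digest (simplified MD5 variant; real clients also handle qop, cnonce, and nonce counts).
username, password, realm = "user", "secret", "example"
method, uri = "GET", "/dir/index.html"
nonce = "7ypf/xlj9XXwfDPEoM4URrv/xwf94BcCAzFZH4GiTo0v"  # supplied by the server challenge

ha1 = hashlib.md5(f"{username}:{realm}:{password}".encode()).hexdigest()
ha2 = hashlib.md5(f"{method}:{uri}".encode()).hexdigest()
response = hashlib.md5(f"{ha1}:{nonce}:{ha2}".encode()).hexdigest()
print(f'Authorization: Digest username="{username}", realm="{realm}", '
      f'nonce="{nonce}", uri="{uri}", response="{response}"')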

Encryption and HTTPS integration

HTTPS (Hypertext Transfer Protocol Secure) is the secure variant of HTTP, which runs HTTP over Transport Layer Security (TLS) to provide confidentiality, integrity, and authenticity for data in transit. It typically operates on TCP port 443, distinguishing it from unencrypted HTTP on port 80, and ensures that sensitive information, such as login credentials or financial data, remains protected from unauthorized access during transmission.

Despite widespread adoption of HTTPS, with approximately 96-97% of global web traffic encrypted as of 2025, unencrypted HTTP on port 80 continues to expose web services to significant security risks due to its lack of encryption and the exposure of services to the public internet. Common attacks exploiting these weaknesses include man-in-the-middle (MitM) attacks, enabling interception and potential alteration of plaintext data such as login credentials; eavesdropping and packet sniffing to capture sensitive information in transit; denial-of-service (DoS) attacks such as HTTP floods and Slowloris, which overwhelm servers by consuming resources through partial or slow requests; application-layer exploits like SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF) delivered over HTTP; and exploitation of vulnerabilities in HTTP services or devices, such as command injection in routers or remote code execution (RCE) in web interfaces. Best practices recommend redirecting traffic from port 80 to port 443 or blocking port 80 where feasible to enforce encryption and minimize the attack surface.

The integration of TLS into HTTP begins with the TLS handshake, a process that establishes a secure channel before any HTTP messages are exchanged. During the handshake, the client initiates with a ClientHello message, specifying supported TLS versions, cipher suites, and extensions; the server responds with a ServerHello, selecting parameters, followed by its digital certificate for authentication. Key exchange then occurs, often using ephemeral methods like Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) to generate session keys without reusing long-term secrets, enabling forward secrecy. Certificate validation by the client verifies the server's identity against trusted certificate authorities, preventing impersonation.

HTTP versions interact with TLS in protocol-specific ways to enable HTTPS. For HTTP/1.1, TLS is applied directly atop the TCP connection after the transport handshake, with no additional negotiation required beyond standard TLS setup. HTTP/2 mandates TLS usage in most deployments and relies on Application-Layer Protocol Negotiation (ALPN), a TLS extension where the client advertises "h2" in the ClientHello to signal support, allowing the server to confirm compatibility during the handshake. In HTTP/3, TLS 1.3 is embedded within the QUIC protocol, integrating handshake messages into QUIC's connection establishment over UDP for reduced latency, while still providing the same security guarantees.

To enforce HTTPS usage and mitigate risks from HTTP downgrades, HTTP Strict Transport Security (HSTS) allows servers to send a Strict-Transport-Security header in HTTPS responses, instructing clients to interact only over secure connections for a specified duration. Browsers may also preload HSTS policies for popular sites, automatically redirecting HTTP requests to HTTPS without user intervention.
By November 2025, TLS 1.3 has become the de facto standard for HTTPS, with compliance frameworks like PCI DSS 4.0 requiring at least TLS 1.2 (prohibiting vulnerable versions such as TLS 1.0 and 1.1) and organizational policies increasingly mandating TLS 1.3 where feasible. U.S. federal executive orders further require government agencies to support TLS 1.3 as soon as practicable, with mandatory implementation by 2030. Emerging post-quantum cryptography considerations are influencing TLS implementations, with hybrid key exchanges incorporating NIST-approved algorithms like ML-KEM to prepare for quantum threats, though full adoption remains in early stages. This encryption layer mitigates key attacks including man-in-the-middle interception, where an attacker could otherwise eavesdrop or alter traffic, and replay attacks through TLS's integrity protections and sequence numbering.
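The sketch below (Python ssl module; example.com is illustrative, and the server may legitimately decline "h2") performs the TLS handshake while offering "h2" and "http/1.1" via ALPN, then reports which application protocol and TLS version were negotiated, mirroring how browsers select HTTP/2 over TLS.

import socket
import ssl

context = ssl.create_default_context()           # validates the server certificate chain
context.set_alpn_protocols(["h2", "http/1.1"])   # advertise HTTP/2 and HTTP/1.1

with socket.create_connection(("example.com", 443)) as tcp:
    with context.wrap_socket(tcp, server_hostname="example.com") as tls:
        print("TLS version:", tls.version())                          # e.g. TLSv1.3
        print("Negotiated protocol:", tls.selected_alpn_protocol())   # "h2" if HTTP/2 accepted
        print("Cipher suite:", tls.cipher()[0])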

Optimizations and Extensions

Caching and content negotiation

HTTP employs caching mechanisms to store copies of responses on intermediaries or clients, reducing latency and bandwidth usage by serving fresh or revalidated content without full retrieval from the origin server. Caching operates through directives that specify storage rules and validators that enable efficient freshness checks, allowing caches to determine if a stored response remains valid. These features apply across HTTP versions, with intermediaries like proxies distinguishing between shared caches serving multiple users and private caches dedicated to individuals.

Caching directives control how and for how long responses may be stored and reused. The primary directive header, Cache-Control, appears in both requests and responses to convey instructions such as max-age for specifying freshness lifetime in seconds, no-cache to require revalidation before reuse, and no-store to prohibit storage entirely. The public directive permits storage in shared caches, while private restricts it to private caches only, preventing sensitive data from being shared across users. Legacy mechanisms include Expires, which sets an absolute expiration date for responses in HTTP date format, though it is overridden by Cache-Control: max-age if present. The Pragma header, a holdover from HTTP/1.0, primarily uses no-cache for backward compatibility but lacks the precision of modern Cache-Control.

Validators facilitate conditional requests to check for resource changes without transferring the full body, enabling 304 (Not Modified) responses. The ETag header provides an opaque entity tag, a string identifier for the resource version, while Last-Modified supplies a timestamp of the last update. Clients include these in request headers like If-None-Match (comparing ETags) or If-Modified-Since (comparing timestamps); if the resource matches the validator, the server responds with 304, confirming the cached copy's validity. ETags support weak validation (prefixed with "W/") for semantically equivalent content, whereas timestamps assume monotonic clock behavior but risk precision loss in distributed systems.

Content negotiation allows servers to select the most appropriate representation of a resource based on client preferences, optimizing delivery for device capabilities or user settings. In proactive negotiation, the client sends Accept to list preferred media types (e.g., text/html;q=1.0, application/json;q=0.8), Accept-Language for languages (e.g., en;q=1.0, fr;q=0.9), and Accept-Encoding for content codings (e.g., gzip). The server ranks variants using quality values (q-factors from 0 to 1) and specificity, selecting the best match; if none fits, it may return 406 (Not Acceptable). This process ensures tailored responses, such as language-specific pages or compressed payloads, without client-side parsing of alternatives.

Proxies and other intermediaries enhance caching efficiency but require careful variant handling to avoid serving incorrect content. Shared caches, such as those in content delivery networks (CDNs), store responses for reuse across users to scale distribution, while private caches like browser storage serve only the requesting user for personalized or secure content. The Vary response header lists the request headers (e.g., Vary: Accept-Language) that influenced selection, keying cache entries so that distinct variants are stored separately and preventing cross-user mismatches. Without Vary, caches might erroneously reuse a single stored variant for all clients, leading to incorrect deliveries in negotiated scenarios.
RFC 9111 (2022) consolidated and clarified HTTP caching semantics from prior specifications, introducing precise rules for staleness (when response age exceeds freshness lifetime) and revalidation via conditional requests. It deprecated the Warning header, shifting status details to Age for elapsed time since generation, and added the must-understand directive to enforce comprehension of unknown cache instructions, improving interoperability in diverse deployments. These updates address ambiguities in freshness calculations and intermediary behavior, ensuring robust caching in modern HTTP/2 and HTTP/3 environments.
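As a sketch of the revalidation flow (Python standard library; the URL, cached body, and ETag are placeholders, and real caches also track freshness lifetimes), a client can replay a request with If-None-Match and treat a 304 response as permission to keep using its stored copy.

import http.client

def revalidate(host, path, cached_body, cached_etag):
    """Return a (body, etag) pair, reusing the cache when the server answers 304."""
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", path, headers={"If-None-Match": cached_etag})
    resp = conn.getresponse()

    if resp.status == 304:              # representation unchanged; cached copy still valid
        conn.close()
        return cached_body, cached_etag
    body = resp.read()                  # 200: replace the cached copy and its validator
    etag = resp.getheader("ETag", cached_etag)
    conn.close()
    return body, etag

body, etag = revalidate("example.com", "/", b"<stale copy>", '"686897696a7c876b7e"')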

Compression and multiplexing

HTTP employs compression techniques to reduce the size of transferred data, thereby improving efficiency over networks with limited bandwidth. Content compression is signaled through the Content-Encoding response header, which indicates the encoding applied to the payload body, such as gzip, deflate, or Brotli. Clients specify supported encodings via the Accept-Encoding request header, enabling servers to select an appropriate method during content negotiation; common values include "gzip" for the gzip format defined in RFC 1952, "deflate" for zlib-compressed data per RFC 1950 and RFC 1951, and "br" for Brotli as specified in RFC 7932. These methods apply lossless algorithms to the message body, excluding headers, and require the client to decompress the content to access the original representation.

In HTTP/2, header compression is addressed separately through HPACK, a specialized format designed to eliminate redundancy in HTTP header fields while mitigating compression oracle attacks like CRIME. HPACK maintains a static table of 61 common header entries (e.g., ":method: GET" at index 2) and a dynamic table that stores frequently occurring fields, allowing headers to be represented via compact indices or literals with optional Huffman coding for further size reduction. This approach reduces header overhead significantly compared to uncompressed HTTP/1.1, where repetitive fields like "user-agent" or "accept" contribute substantial bytes per request. HTTP/3 uses QPACK (RFC 9204, 2022) for header compression, adapted to QUIC's stream-based multiplexing. Unlike HPACK, which operates on a single ordered connection and can introduce head-of-line blocking if a stream is blocked, QPACK encodes headers using instructions sent on dedicated encoder and decoder streams. It employs a similar static table and dynamic table for indexing common fields, along with Huffman coding, but constrains dynamic-table updates to prevent dependencies between independent streams, ensuring compression efficiency without cross-stream blocking.

Multiplexing enables multiple concurrent request-response exchanges over a single connection, addressing limitations in earlier HTTP versions. In HTTP/2, this is achieved through independent streams, bidirectional sequences of frames identified by unique 31-bit identifiers, allowing frames from different streams to be interleaved without blocking. Unlike HTTP/1.1 pipelining, which suffered from head-of-line (HOL) blocking where a delayed response stalled subsequent ones due to sequential processing, streams progress independently, eliminating application-level HOL blocking. HTTP/3 extends this capability via QUIC, where each HTTP request-response pair occupies a dedicated QUIC stream, supporting up to 2^62-1 streams and isolating blocking to individual streams without affecting others.

HTTP/2 and later versions introduce server push, allowing servers to proactively send resources anticipated by the client, such as CSS files or images linked in an HTML response. This is initiated via the PUSH_PROMISE frame, which reserves a stream identifier and includes the promised request headers, followed by the pushed response on that stream. For example, upon receiving a GET request for an HTML document, the server may push associated stylesheets, reducing round-trip times. HTTP/3 supports push similarly, delivering pushed responses on unidirectional push streams of type 0x01.

While compression and multiplexing enhance performance, they introduce trade-offs. Compression algorithms like gzip and Brotli impose CPU overhead on both servers (during encoding) and clients (during decoding), potentially increasing latency on resource-constrained devices despite bandwidth savings. In contrast, HTTP/1.1 pipelining's HOL blocking often led to inefficient resource utilization, as a single slow response could delay an entire queue, prompting the shift to multiplexing in later versions.
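The sketch below (Python standard library; the host is illustrative, and a server is free to answer with the identity encoding) requests a gzip-encoded representation and decompresses it client-side, mirroring the Accept-Encoding / Content-Encoding negotiation described above.

import gzip
import http.client

conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/", headers={"Accept-Encoding": "gzip"})  # advertise supported codings
resp = conn.getresponse()
raw = resp.read()

if resp.getheader("Content-Encoding") == "gzip":
    body = gzip.decompress(raw)   # lossless decompression restores the original octets
else:
    body = raw                    # server chose the identity (uncompressed) encoding
print(resp.getheader("Content-Encoding"), len(raw), "->", len(body), "bytes")
conn.close()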

Examples

HTTP/1.1 transaction

A typical HTTP/1.1 transaction involves a client initiating a request to retrieve a resource, such as an HTML page, from a server, followed by the server's response containing the requested content or status information. This exchange uses text-based messages over a TCP connection, with the request specifying the method (e.g., GET), target URI, and version, along with optional headers. Consider a scenario where a client requests the root page from example.com. The client sends the following request message, including the mandatory Host header to identify the target server:

GET / HTTP/1.1
Host: example.com
Accept: text/html

This request line indicates a GET method for the root path ("/"), using HTTP/1.1, followed by headers specifying the host and acceptable content types. The empty line (CRLF) after the headers signals the end of the header section, with no message body for a simple GET. Upon receiving the request, the server processes it and returns a response. If the resource is available, the server might return a 200 OK response with the HTML content, including headers for content type, length, and date:

HTTP/1.1 200 OK
Date: Mon, 08 Nov 2025 12:00:00 GMT
Server: ExampleServer/1.0
Content-Type: text/html
Content-Length: 123

<!DOCTYPE html>
<html>
<head><title>Example</title></head>
<body><h1>Hello, HTTP/1.1!</h1></body>
</html>

The status line confirms success (200 OK), while headers provide metadata about the response, and the body delivers the HTML. The Content-Length header ensures the client knows when the body ends. A common variation is a conditional GET request, where the client includes an If-Modified-Since header to check for updates since a known date, avoiding unnecessary data transfer if the resource is unchanged. The client request would be:

GET / HTTP/1.1
Host: example.com
If-Modified-Since: Mon, 01 Nov 2025 00:00:00 GMT

If the resource has not changed, the server responds with a 304 Not Modified status, omitting the body to save bandwidth:

HTTP/1.1 304 Not Modified
Date: Mon, 08 Nov 2025 12:00:00 GMT

This allows the client to use its cached version. Such transactions can be tested using tools like curl, which sends HTTP/1.1 requests by default over plain HTTP. For the basic GET example, the command is curl -v http://example.com/, where -v enables verbose output to display the raw request and response. For the conditional variation, use curl -v -H "If-Modified-Since: Mon, 01 Nov 2025 00:00:00 GMT" http://example.com/.

HTTP/2 and HTTP/3 differences in practice

In HTTP/2, a typical multiplexed transaction for multiple GET requests uses binary-framed messages over TCP, enabling concurrent streams without application-level blocking. For instance, a client might initiate two parallel requests to fetch resources: the first stream (ID 1) sends a HEADERS frame containing compressed method, path, and authority details via HPACK, followed by DATA frames for any request body; the second stream (ID 3, odd IDs being reserved for client-initiated streams) follows similarly, with frames interleaved across the connection to avoid head-of-line (HOL) blocking at the application layer. This framing allows efficient resource utilization, as seen in simulations using nghttp2, where header compression reduces redundancy; for example, repeated fields like ":method: GET" are indexed and referenced rather than resent.

HTTP/3 builds on this by mapping frames to QUIC packets over UDP, integrating the transport and security layers for lower latency. In a practical example, a resumed connection uses 0-RTT packets to send initial HTTP/3 HEADERS frames (e.g., for a GET /index.html) without a full handshake, carried in QUIC STREAM frames within encrypted packets identified by connection IDs (CIDs) that persist across network changes. Subsequent frames for the response are multiplexed on independent QUIC streams, avoiding TCP's packet-level HOL blocking since lost packets affect only their stream.

Key practical differences arise from their transport layers: HTTP/2 relies on TCP, where a single lost packet triggers retransmission delays across the entire connection, exacerbating HOL blocking in lossy networks; HTTP/3's QUIC transport, however, multiplexes streams natively and encrypts all packets by default with integrated TLS 1.3, enabling faster recovery and seamless connection migration via CIDs, which is ideal for mobile handoffs. Tools like Wireshark facilitate analysis of these exchanges, decoding HTTP/2 frames from TCP payloads or QUIC packets (with decryption keys), while nghttp2 can simulate transactions for testing multiplexing efficiency. In mobile environments, these enhancements yield measurable latency reductions; for example, studies indicate HTTP/3 achieves 12.4% faster page loads on average compared to HTTP/2, with up to 20% shorter page load times under high-latency conditions (e.g., 200 ms RTT), due to QUIC's 0-RTT resumption and loss tolerance. Header compression (HPACK in HTTP/2, QPACK in HTTP/3) provides a shared efficiency, but QUIC's stream isolation amplifies the gains in variable networks.
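In practice, the versions can be compared with command-line clients in the same way as the earlier curl example; note that the --http3 option is only available in curl builds compiled with HTTP/3 support, so its availability is an assumption about the local installation.

# Negotiate HTTP/2 over TLS (ALPN "h2"); -w prints the version actually used.
curl -sv --http2 -o /dev/null -w '%{http_version}\n' https://example.com/

# Attempt HTTP/3 over QUIC; the client falls back if the build or server lacks support.
curl -sv --http3 -o /dev/null -w '%{http_version}\n' https://example.com/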

Comparisons

With similar protocols

HTTP distinguishes itself from other application-layer protocols through its stateless request-response model optimized for distributed hypermedia systems, yet it shares foundational elements like header-based metadata and URI addressing with analogs in file transfer, electronic mail, and real-time communication. These comparisons highlight HTTP's web-centric design versus more specialized or constrained alternatives.

The File Transfer Protocol (FTP), defined in RFC 959, focuses on efficient bulk file transfers between hosts using stateful sessions that maintain connection context across commands. Unlike HTTP's stateless nature, where each request is independent and typically uses a single TCP connection for both control and data, FTP employs separate control (port 21) and data (port 20) channels, supporting transfer modes like binary or ASCII to handle diverse file types. This statefulness enables directory navigation and resumable transfers but requires dedicated clients, making FTP less integrated into web browsers and ecosystems compared to HTTP's seamless embedding in hypertext retrieval.

The Simple Mail Transfer Protocol (SMTP), outlined in RFC 5321, enables email relay through a push-oriented model where servers proactively forward messages between domains, contrasting with HTTP's pull-based client requests. Both protocols draw from the Multipurpose Internet Mail Extensions (MIME) framework in RFC 2045 for structuring payloads, including support for multipart bodies, attachments, and content types, though HTTP modifies MIME entity rules for web efficiency, such as chunked transfers absent in SMTP. SMTP operates on port 25 with simple command-response exchanges similar to HTTP methods, but its focus remains on asynchronous mail delivery rather than on-demand resource access.

WebSocket, specified in RFC 6455, extends HTTP by initiating a protocol upgrade through a client GET request containing Upgrade: websocket and Connection: Upgrade headers, to which the server responds with status code 101 (Switching Protocols), as shown in the example after this section. This handshake transforms the connection into a full-duplex channel for bidirectional, low-latency data flow, diverging from HTTP's unidirectional request-response cycle that necessitates polling for updates. Post-upgrade, WebSocket employs framed messages over the persistent TCP link, ideal for real-time use cases like collaborative editing or live notifications, while reusing HTTP's port 80/443 infrastructure without HTTP's repeated connection overhead.

gRPC utilizes HTTP/2 for transport, multiplexing multiple RPC calls over a single connection, but replaces HTTP's text-oriented payloads, commonly JSON in REST APIs, with compact binary Protocol Buffers messages serialized from service definitions in .proto files. This structured approach enforces typed contracts for requests and responses, enabling efficient unary, server-streaming, client-streaming, and bidirectional operations, unlike HTTP's more flexible, human-readable approach for general web interactions. Developed for high-performance distributed systems, gRPC's binary encoding reduces bandwidth and latency compared to HTTP's verbose formats, particularly in microservice environments.

The Constrained Application Protocol (CoAP), detailed in RFC 7252, acts as a UDP-based lightweight analog to HTTP for Internet of Things (IoT) applications in resource-limited settings, such as low-power sensors with intermittent connectivity. It adopts HTTP-like methods (GET, POST, PUT, DELETE) and a RESTful resource model addressed by URIs, but operates over UDP (default port 5683) to minimize header overhead and support multicast, with optional reliability via confirmable messages rather than TCP's guaranteed delivery. Designed for constrained-node networks such as 6LoWPAN, CoAP includes features for resource discovery and proxying to HTTP, facilitating interoperability between IoT devices and the web while avoiding HTTP's higher resource demands.
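For reference, the WebSocket opening handshake mentioned above looks like an ordinary HTTP/1.1 exchange; the key and accept values here are the illustrative ones used in RFC 6455, with example.com standing in for a real endpoint.

GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

After the 101 response, the same TCP connection carries WebSocket frames in both directions rather than further HTTP messages.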

Version evolution summary

The evolution of HTTP has progressed through several versions, each addressing limitations in performance, efficiency, and reliability while maintaining backward compatibility where possible. HTTP/1.1 established the foundational text-based protocol over TCP, enabling persistent connections but suffering from head-of-line blocking. Subsequent versions introduced binary framing and multiplexing in HTTP/2, still over TCP, and shifted to QUIC over UDP in HTTP/3 for reduced latency and better handling of network changes. The following table summarizes key aspects of these versions.
Version | Transport | Framing | Key Features | Limitations | Adoption (2025)
HTTP/1.1 | TCP | Text | Persistent connections, pipelining | Head-of-line blocking, inefficient for multiple resources | ~9% of traffic (request share, down from higher historical usage due to upgrades)
HTTP/2 | TCP | Binary | Multiplexing of requests over a single connection, HPACK header compression, server push | Inherits TCP head-of-line blocking, dependency on TCP for congestion control | ~60% of traffic, widely supported but declining relatively as HTTP/3 grows
HTTP/3 | QUIC/UDP | Binary | 0-RTT handshakes, connection migration across networks, integrated TLS 1.3 encryption, multiplexed streams without TCP HOL blocking | Potential blocking by firewalls restricting UDP, higher initial implementation complexity | ~30% of traffic (website share); growing rapidly but requires fallbacks to HTTP/2 for compatibility
Performance improvements in later versions include notable latency reductions; for instance, HTTP/3 can achieve up to 12.4% lower page load times compared to HTTP/2 in real-world tests, particularly beneficial in high-latency or lossy networks. Adoption often involves fallback mechanisms, where browsers negotiate the highest supported version via ALPN during TLS handshakes. Looking ahead, the HTTP Working Group focuses on incremental extensions such as improved caching semantics and privacy enhancements rather than a major HTTP/4 release, as HTTP/3's QUIC foundation addresses core transport challenges effectively.
