TCP half-open
The term half-open refers to TCP connections whose state is out of synchronization between the two communicating hosts, possibly due to a crash of one side. A connection that is in the process of being established is also known as an embryonic connection. The lack of synchronization could be due to malicious intent.
RFC 793
According to RFC 793, a TCP connection is referred to as half-open when the host at one end of that TCP connection has crashed, or has otherwise removed the socket without notifying the other end. If the remaining end is idle, the connection may remain in the half-open state for an unbounded period of time.
Stateful Firewall Timeout
Another circumstance that can lead to half-open connections is a stateful firewall timing out a connection that has been idle for too long. In this case the firewall clears its internal state, and if either side of the connection subsequently sends a packet, the firewall will drop it. The two sides of the connection are then left with inconsistent connection states, resulting in a half-open connection.
Embryonic connection
The term half-open connection can also be used to describe an embryonic connection, i.e. a TCP connection that is in the process of being established.
TCP opens a connection with a three-step exchange, the three-way handshake. First, the originating endpoint (A) sends a SYN packet to the destination (B). A is now in an embryonic state (specifically, SYN_SENT) and awaits a response. B then updates its kernel state to record the incoming connection from A, and sends a request to open a channel back (the SYN/ACK packet).
At this point, B is also in an embryonic state (specifically, SYN_RCVD). Note that B was put into this state by another machine, outside of B's control.
Under normal circumstances (see denial-of-service attack for deliberate failure cases), A will receive the SYN/ACK from B, update its tables (which now have enough information for A to both send and receive), and send a final ACK back to B.
Once B receives this final ACK, it also has sufficient information for two-way communication, and the connection is fully open. Both endpoints are now in an established state.
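The handshake described above is performed by the kernel inside ordinary socket calls; the following minimal sketch on the loopback interface (with the port chosen by the OS) shows where each state transition happens:

```python
import socket

# Endpoint B: a listening socket; the kernel answers incoming SYNs with SYN/ACK.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)                # B is now in the LISTEN state
port = server.getsockname()[1]

# Endpoint A: connect() performs all three steps -- it sends the SYN
# (entering SYN_SENT), waits for B's SYN/ACK, then sends the final ACK.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))   # A: SYN_SENT -> ESTABLISHED

conn, _ = server.accept()             # B: SYN_RCVD -> ESTABLISHED
client.sendall(b"hello")
data = conn.recv(5)
print(data)                           # b'hello'

conn.close(); client.close(); server.close()
```

Note that the embryonic states are invisible here: the application only sees the blocking `connect()` and `accept()` calls, while the kernel manages SYN_SENT and SYN_RCVD internally.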
References
[edit]- Twingate. (n.d.). What is a TCP Half Open Scan?. Retrieved May 2, 2025, from [1](https://www.twingate.com/blog/glossary/tcp-half-open-scan)
- Palo Alto Networks. (n.d.). TCP Half Closed and TCP Time Wait Timers. Retrieved May 2, 2025, from [2](https://docs.paloaltonetworks.com/pan-os/10-1/pan-os-networking-admin/session-settings-and-timeouts/tcp/tcp-half-closed-and-tcp-time-wait-timers)
- Sanchit Gurukul. (n.d.). Understanding TCP Half-Open Connections. Retrieved May 2, 2025, from [3](https://sanchitgurukul.com/understanding-tcp-half-open-connections)
TCP half-open
Overview
Definition
A TCP half-open connection refers to a state in which one endpoint maintains an established connection while the peer has closed, aborted, or failed without the first endpoint's knowledge, often due to a crash, reboot, or network partition.[6] This disrupts normal bidirectional communication, as the unaware endpoint may continue sending data, leading to wasted resources or desynchronization.[2] The term "half-open" originates from RFC 793, which describes it in the context of desynchronized established connections requiring reset (RST) segments to abort them and free resources.[6] In contrast to a fully open TCP connection with bidirectional acknowledgment and reliable data flow, a half-open connection lacks synchronization, rendering it inefficient and susceptible to issues such as unnecessary traffic or delayed failure detection.

The term is also used in the context of connection establishment to describe embryonic connections during the three-way handshake, where the process is incomplete: specifically, the SYN-SENT state on the client after sending a SYN but before receiving the SYN-ACK, or SYN-RCVD on the server after sending the SYN-ACK but before receiving the client's ACK.[7] These states reserve resources such as a Transmission Control Block (TCB) but prevent data transfer until sequence numbers synchronize. They are transient, lasting roughly one round-trip time (RTT) or until timeout.

Role in Connection Establishment
The TCP connection establishment process uses a three-way handshake to initialize a reliable, full-duplex connection. The client sends a SYN segment with its initial sequence number (ISN), leading the server to allocate a TCB and enter SYN-RCVD upon receipt. The server responds with a SYN-ACK, acknowledging the SYN and providing its own ISN and window size. The client then sends an ACK, completing synchronization and transitioning to ESTABLISHED.[8]

In this process, the embryonic half-open states (SYN-SENT and SYN-RCVD) ensure that sequence numbers and flow control parameters are synchronized before data exchange, acting as placeholders that admit no application data. This prepares the endpoints for immediate data handling after establishment, minimizing delays.[9] These states persist briefly, from milliseconds on local networks to seconds over wide-area links, before completion or timeout, releasing resources if the handshake fails.[8]

Technical Specification
States in TCP Handshake
In the TCP connection establishment process, the client enters the SYN-SENT state upon issuing an active OPEN, which triggers the transmission of a SYN segment containing the initial sequence number (ISS). In this state, the client awaits a matching SYN-ACK from the server, while a retransmission timer is started to handle potential losses; the initial retransmission timeout (RTO) is set to 1 second, doubling with each subsequent retransmission up to a maximum of at least 60 seconds, with a default of up to 6 retries in Linux implementations before the connection attempt is aborted.[8][10][11]

On the server side, the SYN-RECEIVED state is entered when a SYN is received in the LISTEN state, prompting the server to respond with a SYN-ACK segment and queue the connection request in the SYN backlog to await the client's final ACK. Modern implementations, such as the Linux kernel, manage this backlog via a dedicated SYN queue (controlled by the tcp_max_syn_backlog parameter, defaulting to 256 entries) to handle multiple pending half-open connections without overflow, moving entries to an accept queue once the handshake completes.[8][11]
Key state transitions during the handshake include: the client moving from CLOSED to SYN-SENT upon sending the SYN; the server advancing from LISTEN to SYN-RECEIVED upon receiving the SYN and sending SYN-ACK; both sides reaching ESTABLISHED upon receipt of the final ACK; the client reverting to CLOSED or the server to LISTEN on timeout, receipt of RST, or connection abort.[12]
A textual representation of the relevant finite state machine portion illustrates these dynamics:
Client Side:
CLOSED --(active OPEN, send SYN)--> SYN-SENT --(receive SYN-ACK, send ACK)--> ESTABLISHED
SYN-SENT --(timeout/retransmit SYN)--> SYN-SENT (up to max retries)
SYN-SENT --(receive RST)--> CLOSED

Server Side:
LISTEN --(receive SYN, send SYN-ACK)--> SYN-RECEIVED --(receive ACK)--> ESTABLISHED
SYN-RECEIVED --(receive RST)--> LISTEN
SYN-RECEIVED --(final timeout)--> LISTEN
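The transitions in the diagram above can be modeled as a small lookup table. The following Python sketch is a toy model: the state and event names follow the diagram, not any particular kernel's internals.

```python
# (state, event) -> (action, next_state): a toy model of the handshake FSM.
TRANSITIONS = {
    # client side
    ("CLOSED",       "active OPEN"):   ("send SYN",     "SYN-SENT"),
    ("SYN-SENT",     "recv SYN-ACK"):  ("send ACK",     "ESTABLISHED"),
    ("SYN-SENT",     "timeout"):       ("resend SYN",   "SYN-SENT"),
    ("SYN-SENT",     "recv RST"):      (None,           "CLOSED"),
    # server side
    ("LISTEN",       "recv SYN"):      ("send SYN-ACK", "SYN-RECEIVED"),
    ("SYN-RECEIVED", "recv ACK"):      (None,           "ESTABLISHED"),
    ("SYN-RECEIVED", "recv RST"):      (None,           "LISTEN"),
    ("SYN-RECEIVED", "final timeout"): (None,           "LISTEN"),
}

def step(state, event):
    """Return (action to perform, next state) for a state/event pair."""
    action, nxt = TRANSITIONS[(state, event)]
    return action, nxt

# A successful client-side handshake:
state = "CLOSED"
_, state = step(state, "active OPEN")    # -> SYN-SENT (embryonic half-open)
_, state = step(state, "recv SYN-ACK")   # -> ESTABLISHED
print(state)                             # ESTABLISHED
```

The embryonic half-open window is exactly the span between the first and second `step` calls on each side.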
Description in RFC 793
RFC 793, published in September 1981, provides the foundational specification for the Transmission Control Protocol (TCP), implicitly defining half-open connections through the connection establishment rules detailed in Section 3.4.[2] This section outlines the three-way handshake process, where connections enter temporary pending states during synchronization, consuming resources until completion or timeout.[2] The process initiates with the active opener sending a SYN segment, formatted as <SEQ=ISS><CTL=SYN>, to propose an initial sequence number (ISS) for synchronization.[2] The responder, upon receiving the SYN, replies with a SYN-ACK segment, <SEQ=ISS><ACK=RCV.NXT><CTL=SYN,ACK>, acknowledging the opener's SYN while sending its own SYN to synchronize sequence numbers bidirectionally.[2] The opener then completes the handshake by sending an ACK, transitioning both endpoints to the ESTABLISHED state; until this final ACK, the connection remains pending on either or both sides.[2]
Although RFC 793 does not explicitly use the term "half-open" for the synchronization phase, it describes these pending connections as queued in states such as SYN-SENT (for the opener) or SYN-RECEIVED (for the responder), managed via Transmission Control Blocks (TCBs) to track incomplete handshakes.[2] Incomplete handshakes are handled through timeout mechanisms, including user timeouts that flush queues, signal an error (e.g., "connection aborted due to user timeout"), delete the TCB, and return the client to the CLOSED state or the server to the LISTEN state.[2] The specification includes pseudocode-like rules for state transitions, such as: in the LISTEN state, upon receiving a SYN, send SYN-ACK and enter SYN-RECEIVED; or in SYN-SENT, upon receiving SYN-ACK, send ACK and enter ESTABLISHED.[2] If an unacceptable segment like an invalid ACK arrives, a reset (RST) segment is sent to abort the attempt.[2]
This model from RFC 793 remains the core basis for half-open connections in TCP implementations, with subsequent documents like RFC 1122 offering clarifications on practical aspects such as timeout values without altering the fundamental synchronization rules.[2]
Network Management
Stateful Firewall Handling
Stateful firewalls utilize connection tracking mechanisms to monitor and manage half-open TCP connections, which occur during the initial SYN phase of the three-way handshake. These firewalls maintain a dynamic state table that logs key attributes of each connection attempt, including source and destination IP addresses, ports, protocol, and initial sequence numbers. When a SYN packet arrives, the firewall creates an embryonic entry in the table, typically marked as NEW or SYN_SENT, and permits the corresponding SYN-ACK response only if it aligns with the expected details from the originating SYN, thereby enforcing directional integrity.[14][15][16]

Policy enforcement in stateful firewalls involves selectively dropping unsolicited or malformed SYN packets that violate access rules, while also imposing limits on the number of concurrent half-open connections per source IP to mitigate resource strain. This tracking integrates seamlessly with Network Address Translation (NAT), where the firewall records both original and translated addresses/ports in the state table to validate return traffic accurately, ensuring that half-open states persist correctly across NAT boundaries.[14][15][16]

In Linux's iptables combined with the netfilter conntrack module, rules targeting SYN packets, such as those using -m conntrack --ctstate NEW or -p tcp --syn, automatically generate temporary state entries for half-open connections, allowing subsequent packets to be permitted or dropped based on match criteria. OpenBSD's Packet Filter (PF) operates statefully by default, creating state entries upon the first matching packet in a pass rule (e.g., pass in proto tcp ... keep state), with configurable limits like max-src-states to cap half-open entries per source and options for logging these states via log directives to facilitate anomaly detection.[14][15]
In contrast to stateless firewalls, which evaluate each packet in isolation without context, stateful firewalls support tolerance for asymmetric routing in half-open scenarios by preserving bidirectional state data, although this demands greater memory allocation for expansive connection tables.[16]
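The connection-tracking behavior described above can be illustrated with a toy state table keyed on the flow 4-tuple. This is a deliberate simplification: real trackers such as netfilter's conntrack also follow sequence numbers, timeouts, and NAT mappings.

```python
# Toy stateful filter: admit a SYN-ACK only if it reverses a tracked SYN.
state_table = {}   # (src_ip, src_port, dst_ip, dst_port) -> state

def handle(src, sport, dst, dport, flags):
    """Decide PASS/DROP for one packet and update the state table."""
    fwd = (src, sport, dst, dport)
    rev = (dst, dport, src, sport)
    if flags == "SYN":                            # new embryonic entry
        state_table[fwd] = "SYN_SENT"
        return "PASS"
    if flags == "SYN-ACK":                        # must answer a pending SYN
        if state_table.get(rev) == "SYN_SENT":
            state_table[rev] = "ESTABLISHED"
            return "PASS"
        return "DROP"                             # unsolicited reply: reject
    # Anything else must belong to an established flow, in either direction.
    if "ESTABLISHED" in (state_table.get(fwd), state_table.get(rev)):
        return "PASS"
    return "DROP"

print(handle("10.0.0.1", 12345, "10.0.0.2", 80, "SYN"))       # PASS
print(handle("10.0.0.2", 80, "10.0.0.1", 12345, "SYN-ACK"))   # PASS
print(handle("10.0.0.9", 80, "10.0.0.1", 12345, "SYN-ACK"))   # DROP
```

The third packet is dropped because no matching half-open entry exists for its reversed tuple, which is precisely the directional-integrity check described above.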
Timeout Mechanisms
In TCP connection establishment, the client initiates a half-open connection by sending a SYN segment and starts a retransmission timer based on the initial round-trip time (RTT) estimate. Per RFC 1122 (updated by RFC 6298), the initial retransmission timeout (RTO) for the SYN segment is 1 second when no prior measurements exist, with the timeout doubling exponentially on each retransmission (backoff) to account for potential network delays.[17][18] Implementations must ensure SYN retransmissions persist for at least 3 minutes in total per RFC 1122, though common defaults such as Linux's tcp_syn_retries=6 yield approximately 63 seconds until the last retransmission (1s + 2s + 4s + 8s + 16s + 32s). This balances responsiveness with the requirement to avoid indefinite waits for non-responsive servers.[19][20]

On the server side, upon receiving a SYN and transitioning to the SYN_RCVD state, the server sends a SYN-ACK and starts its own retransmission timer for the unacknowledged SYN-ACK segment. The timer uses an initial RTO of 1 second per RFC 6298 with exponential backoff. The default number of SYN-ACK retransmissions is 5 (e.g., in Linux via tcp_synack_retries), giving a retransmission total of around 31 seconds (1s + 2s + 4s + 8s + 16s) before retries are exhausted. However, many systems configure the overall SYN_RCVD timeout (for discarding half-open entries from the listen queue backlog) to 30-60 seconds to balance responsiveness and resource usage, especially under load where the queue may evict the oldest entries. This queue timeout is distinct from the retransmission timer and aids in DoS mitigation.[21][20]

System administrators can tune these timeouts via operating system parameters to adapt to network conditions or load. For instance, in Linux, the kernel parameter net.ipv4.tcp_synack_retries controls the number of SYN-ACK retransmissions (default 5), while net.ipv4.tcp_syn_retries governs client-side SYN retransmissions (default 6).
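The timing figures above follow directly from exponential backoff; a quick arithmetic check, assuming a 1-second initial RTO and pure doubling (the helper function here is illustrative, not a kernel API):

```python
def backoff_schedule(initial_rto, retries):
    """Retransmission intervals under pure exponential doubling."""
    return [initial_rto * 2 ** i for i in range(retries)]

# Client SYN: Linux default tcp_syn_retries = 6
syn = backoff_schedule(1, 6)
print(syn, sum(syn))        # [1, 2, 4, 8, 16, 32] 63  -> ~63 s to last retry

# Server SYN-ACK: default tcp_synack_retries = 5
synack = backoff_schedule(1, 5)
print(synack, sum(synack))  # [1, 2, 4, 8, 16] 31  -> ~31 s to last retry
```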
Under high load, shorter timeouts or reduced retry counts may be configured to reclaim resources more aggressively, preventing backlog overflow and mitigating denial-of-service risks, though this can raise connection failure rates in congested networks. Upon timeout expiration in either the SYN_SENT or SYN_RCVD state, the TCP implementation aborts the half-open connection by deleting the transmission control block (TCB) and deallocating associated resources, such as memory and socket descriptors, to avoid leaks.[8] If a late or invalid segment arrives after the timeout (e.g., a delayed ACK), the receiving endpoint typically responds with a reset (RST) segment to explicitly terminate the aborted state and notify the sender.[22] This cleanup ensures efficient resource turnover without leaving dangling half-open entries that could accumulate under failure conditions.[19]

Security Implications
SYN Flood Attacks
A SYN flood attack exploits the TCP three-way handshake by overwhelming a target server with a flood of SYN packets containing spoofed source IP addresses. The attacker sends numerous SYN segments to initiate connections, prompting the server to allocate resources for each half-open connection in the SYN_RECV state and respond with SYN-ACK packets. Since the spoofed IPs never reply with the expected ACK, these connections remain incomplete, consuming server memory and processing capacity without advancing to the established state.[4]

This mechanism targets the server's backlog queue, which holds pending connections during the handshake; typical implementations limit this queue to around 1024 entries by default, though values vary by system configuration. As the queue fills with half-open connections, each requiring approximately 256 bytes of memory for the SYN queue entry in modern Linux kernels, the server exhausts available slots and rejects new legitimate SYN requests with RST packets or connection timeouts. The attack can also deplete CPU cycles spent generating SYN-ACKs and retransmissions, leading to broader resource exhaustion and denial of service for valid clients.[4][23][24]

Historically, SYN floods emerged as a prominent vulnerability in the mid-1990s, first documented by Cheswick and Bellovin in 1994 and publicized in 1996 through Phrack Magazine and CERT Advisory CA-96.21, which reported real-world incidents such as ISP mail server outages. Early exploits highlighted the attack's simplicity, using spoofed IPs to evade traceability. Over time, variants evolved into distributed SYN floods (DDoS), leveraging botnets for amplified volume and persistence.[4][25][26]

Detection of SYN floods often relies on network monitoring for anomalies such as a disproportionately high ratio of incoming SYN packets to outgoing SYN-ACK responses, typically exceeding normal traffic patterns by orders of magnitude.
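A crude detector based on the imbalance just described might look like the following sketch. Everything here is illustrative: the function name and thresholds are assumptions, and it compares incoming SYNs against completed handshakes (final ACKs received), which is one practical variant of the ratio check.

```python
def syn_flood_suspected(syns_received, handshakes_completed,
                        ratio_threshold=3.0, min_syns=1000):
    """Illustrative heuristic: flag a traffic window in which incoming SYNs
    far outnumber completed handshakes. Under a flood of spoofed SYNs the
    server's SYN-ACKs are never acknowledged, so completions lag far behind.
    """
    if syns_received < min_syns:
        return False                  # too little traffic to judge
    return syns_received / max(handshakes_completed, 1) > ratio_threshold

# Normal traffic: nearly every SYN completes its handshake.
print(syn_flood_suspected(5000, 4900))    # False
# Flood: tens of thousands of SYNs, almost none acknowledged.
print(syn_flood_suspected(50000, 120))    # True
```

Real systems would compute these counters over sliding time windows and per source prefix rather than globally.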
Server logs may reveal clusters of incomplete handshakes stuck in the SYN_RECV state, alongside spikes in half-open connection counts and retransmission attempts. These indicators, when correlated with sudden performance degradation, confirm the attack's presence without requiring deep packet inspection.[27][23]

Mitigation Techniques
One primary mitigation technique against SYN flood attacks, which exploit half-open TCP connections, is the use of SYN cookies. This method allows a server to respond to incoming SYN packets with a SYN-ACK without allocating memory for a new connection state in the backlog queue. Instead, the server encodes the necessary connection information, such as IP addresses, ports, and a timestamp, into the sequence number of the SYN-ACK using a cryptographic hash. Upon receiving the final ACK from the client, the server reconstructs the state from this encoded information, verifying the connection only if it matches. This approach prevents resource exhaustion from forged SYNs, as no half-open state is created until the handshake completes. SYN cookies were originally proposed by Daniel J. Bernstein and are detailed in RFC 4987, which recommends their use for robust protection without requiring changes to the TCP specification.[4][28]

Rate limiting provides another layer of defense by restricting the number of SYN packets processed from individual sources, thereby throttling potential floods. Servers or intermediate devices can enforce per-IP limits, such as allowing no more than 10 SYN requests per second from a single address, dropping excess packets to preserve resources. This can be combined with proxy mechanisms, where a front-end proxy validates incoming SYNs by completing the handshake itself before forwarding legitimate connections to the backend server, avoiding state allocation on the target host. Such hybrid strategies distribute the load and filter malicious traffic early, as outlined in RFC 4987's discussion of proxy-based filtering.[4][23]

Kernel-level tuning on operating systems such as Linux enhances resilience to half-open connection overloads.
Administrators can increase the SYN backlog size via the tcp_max_syn_backlog parameter, typically setting it to 2048 or higher on high-traffic servers to accommodate more pending connections before overflow. Enabling SYN cookies through net.ipv4.tcp_syncookies=1 via sysctl automatically activates the cookie mechanism when the backlog fills, ensuring continued acceptance of valid connections. Modern network interface cards (NICs) with TCP offload engines (TOE), or kernels using eBPF-based filtering, can further mitigate attacks by handling SYN state tracking outside the normal kernel path, reducing CPU load on the host. These tunings are documented in the Linux kernel networking parameters and have been widely adopted for server hardening.[29]
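The SYN-cookie encoding described above can be sketched as follows. This is a simplification for illustration only: the hash choice, the SECRET value, and the helper names are assumptions, and real implementations (e.g., Linux's) pack a coarse timestamp and an encoded MSS into specific bit fields of the 32-bit sequence number.

```python
import hashlib
import struct

SECRET = b"per-boot-secret"   # hypothetical per-boot server secret

def make_cookie(src_ip, src_port, dst_ip, dst_port, ts, client_isn):
    """Derive a 32-bit ISN that commits to the flow tuple and a timestamp,
    so the server need not store any per-connection state."""
    msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{ts}".encode()
    digest = hashlib.sha256(SECRET + msg).digest()
    h = struct.unpack(">I", digest[:4])[0]
    return (h + client_isn) & 0xFFFFFFFF   # fold in the client's ISN

def check_cookie(cookie, src_ip, src_port, dst_ip, dst_port, ts, client_isn):
    """On the final ACK, recompute and compare: no stored half-open state."""
    return cookie == make_cookie(src_ip, src_port, dst_ip, dst_port,
                                 ts, client_isn)

c = make_cookie("203.0.113.5", 40000, "198.51.100.7", 443,
                ts=17, client_isn=123456)
print(check_cookie(c, "203.0.113.5", 40000, "198.51.100.7", 443, 17, 123456))
# True: the ACK's acknowledged sequence number matches the recomputed cookie
print(check_cookie(c, "203.0.113.5", 40001, "198.51.100.7", 443, 17, 123456))
# False: a different source port yields a different cookie
```

The key property is that the SYN-ACK's sequence number is itself the server's only "memory" of the half-open connection, so forged SYNs that never complete the handshake cost the server nothing beyond the reply packet.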
Advanced mitigation relies on intrusion prevention systems (IPS) and cloud-based DDoS scrubbing services for comprehensive protection. IPS tools, such as those using behavioral analysis, monitor traffic patterns to detect anomalous SYN volumes and dynamically block offending IPs or apply rate limits in real-time. For large-scale attacks, cloud services like AWS Shield or Cloudflare's DDoS protection route traffic through scrubbing centers, where SYN floods are filtered using global intelligence and machine learning before clean traffic reaches the origin server. These solutions scale beyond on-premises capabilities, providing always-on defense against distributed half-open connection threats.