TCP Connection Termination
Last Updated : 10 May, 2025
In the TCP 3-way Handshake Process we studied how connections are established between a client and a server in the Transmission Control Protocol (TCP) using SYN segments. In this article, we will study how TCP closes a connection between client and server. Closing a connection also requires exchanging special segments, this time with the FIN bit set to 1.
Like most connection-oriented transport protocols, TCP supports two types of connection release:
- Graceful connection release - In a graceful connection release, the connection stays open until both parties have closed their side of the connection.
- Abrupt connection release - In an abrupt connection release, either one TCP entity is forced to close the connection or one user closes both directions of data transfer.
Abrupt Connection Release:
An abrupt connection release is carried out by sending an RST segment. An RST segment can be sent for the following reasons:
- When a non-SYN segment is received for a non-existing TCP connection.
- In an open connection, some TCP implementations send an RST segment when a segment with an invalid header is received. This will prevent attacks by closing the corresponding connection.
- When some implementations need to close an existing TCP connection, they send an RST segment. They will close an existing TCP connection for the following reasons:
- Lack of resources to support the connection
- The remote host is now unreachable and has stopped responding.
When a TCP entity sends an RST segment, its sequence number should be 0 if the segment does not belong to an existing connection; otherwise it should carry the current value of the sequence number for that connection, and the acknowledgment number should be set to the next expected in-sequence number on that connection.
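As an illustration, many socket APIs let an application trigger this abrupt, RST-based close by enabling SO_LINGER with a zero timeout before closing the socket. The sketch below is a minimal example of that technique, not part of the original article; the host, the port, and the exact RST behaviour (typical of Linux/BSD-style stacks) are assumptions.

```python
# Minimal sketch: forcing an abrupt (RST) close via SO_LINGER with a zero timeout.
# Host/port are placeholders; the exact behaviour is implementation-dependent.
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 80))          # assumed peer

# l_onoff = 1, l_linger = 0: on close(), unsent data is discarded and an RST
# is sent instead of the normal FIN handshake (Linux/BSD-style behaviour).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))

sock.close()   # the peer typically observes "Connection reset by peer"
```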
Graceful Connection Release:
The common way of terminating a TCP connection is by using the TCP header’s FIN flag. This mechanism allows each host to release its own side of the connection individually.
How the mechanism works in TCP:
- Step 1 (FIN from Client) - Suppose that the client application decides it wants to close the connection. (Note that the server could also choose to close the connection.) This causes the client to send a TCP segment with the FIN bit set to 1 to the server and to enter the FIN_WAIT_1 state. While in the FIN_WAIT_1 state, the client waits for a TCP segment from the server with an acknowledgment (ACK).
- Step 2 (ACK from Server) - When the server receives the FIN segment from the client, it immediately sends an acknowledgment (ACK) segment back to the client.
- Step 3 (Client waiting) - While in the FIN_WAIT_1 state, the client waits for a TCP segment from the server with an acknowledgment. When it receives this segment, the client enters the FIN_WAIT_2 state. While in the FIN_WAIT_2 state, the client waits for another segment from the server with the FIN bit set to 1.
- Step 4 (FIN from Server) - Some time after sending the ACK segment, the server sends its own FIN segment to the client (once it has finished its own closing process).
- Step 5 (ACK from Client) - When the client receives the FIN segment from the server, it acknowledges the server's segment and enters the TIME_WAIT state. The TIME_WAIT state lets the client resend the final acknowledgment in case the ACK is lost. The time spent by clients in the TIME_WAIT state depends on the implementation, but typical values are 30 seconds, 1 minute, and 2 minutes. After the wait, the connection formally closes and all resources on the client side (including port numbers and buffer data) are released. A minimal client-side socket sketch of these steps is shown after this list.
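To make the steps concrete, here is a minimal client-side sketch using the BSD socket API (shown in Python). It is an illustration under assumptions, not the article's own code: the server address, the port, and the echo-style behaviour of the peer are hypothetical, and error handling is omitted.

```python
# Minimal sketch of a graceful, FIN-based close from the client side.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("127.0.0.1", 9000))    # assumed server address and port
sock.sendall(b"last request")

# Step 1: half-close our sending direction. The kernel sends a FIN and the
# connection enters FIN_WAIT_1 (then FIN_WAIT_2 once the server's ACK arrives).
sock.shutdown(socket.SHUT_WR)

# Steps 3-4: keep reading until recv() returns b"", which signals that the
# server's FIN has arrived; our stack acknowledges it (Step 5).
while True:
    data = sock.recv(4096)
    if not data:
        break

# After Step 5 the kernel socket lingers in TIME_WAIT before its resources are
# freed; close() simply releases the file descriptor on our side.
sock.close()
```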
The figures below illustrate the series of states visited by the server side and by the client side, assuming the client begins the connection tear-down. These two state-transition figures show only how a TCP connection is normally established and shut down.
TCP states visited by the client side -

TCP states visited by the server side -
Here we have not described what happens in certain scenarios, such as when both sides of a connection want to initiate or shut it down at the same time. If you are interested in learning more about this and other advanced issues concerning TCP, you are encouraged to see Stevens' comprehensive book.
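For completeness, the server side of the same exchange performs a passive close. The sketch below pairs with the client sketch above; it is again an assumption-laden illustration (loopback address, port 9000, echo behaviour), not code from the article.

```python
# Minimal sketch of the server-side (passive) close that mirrors the client sketch.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # allow quick rebinding
srv.bind(("127.0.0.1", 9000))
srv.listen(1)

conn, addr = srv.accept()
while True:
    data = conn.recv(4096)
    if not data:          # empty read: the client's FIN arrived; we are in CLOSE_WAIT
        break
    conn.sendall(data)    # echo the data back

conn.close()              # sends our FIN (LAST_ACK); the client's final ACK moves us to CLOSED
srv.close()
```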
GATE Question -
Consider a TCP client and a TCP server running on two different machines. After completing the data transfer, the TCP client calls close to terminate the connection and a FIN segment is sent to the TCP server. Server-side TCP responds by sending an ACK which is received by the client-side TCP. As per the TCP connection state diagram(RFC 793), in which state does the client-side TCP connection wait for the FIN from the server-side TCP?
(A) LAST-ACK
(B) TIME-WAIT
(C) FIN-WAIT-1
(D) FIN-WAIT-2
Answer: (D) FIN-WAIT-2. Once the client's FIN has been acknowledged by the server, the client moves from FIN_WAIT_1 to FIN_WAIT_2, where it waits for the server's FIN.
GATE CS 2017 (Set 1), Question 12