What does “listening” mean in terms of networking?
I am trying to build an HTTP server from scratch, which meant I had to understand TCP listeners. Everything I read said that TCP listeners do this or that, and that you bind an IP address and port number to them, but the one thing that kept confusing me was: what does "listening" actually mean?
I mean, how does it listen? How is my Rust program receiving this data? Where is this data coming from? Is there a fixed interval at which we check again whether any request came in?
So after doing some research, I got to know that -
When we say that a TcpListener instance in Rust is "listening" on a specific IP address and port, we're using a networking metaphor to describe the way the operating system monitors network traffic for connection attempts that match certain criteria (in this case, the IP address and port number).
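For concreteness, here is a minimal sketch of the kind of Rust code this whole post is about. The address and port (127.0.0.1:8080) are just arbitrary examples:

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Ask the OS to reserve 127.0.0.1:8080 for this program and to watch
    // for TCP connection attempts aimed at that address and port.
    let listener = TcpListener::bind("127.0.0.1:8080")?;

    // accept() hands us the next connection the OS has matched to our
    // address and port; until one arrives, this call simply waits.
    let (stream, peer_addr) = listener.accept()?;
    println!("got a connection from {peer_addr}: {stream:?}");
    Ok(())
}
```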
The Concept of Binding
Binding to an IP Address and Port
IP Address: Every device connected to a network has an IP address, which is used to identify it on the network. When a TCP listener is bound to an IP address, it tells the operating system that this particular application is interested in network traffic sent to that address.
Port: A port is a numerical identifier in networking used to specify a specific application or service on a device. Since a single device can run multiple networked applications simultaneously, the port number helps differentiate which application should handle incoming data.
What exactly is Binding?
When you bind a server (like a Rust TcpListener) to a specific IP address, you're telling the server to listen for incoming connections on that IP address.
I was confused about what binding means. Does it mean choosing which addresses I'll receive data from? Or something else?
So I tried to understand this with an analogy -
Imagine you have several mailboxes in front of your house, each with a different label or number. When someone sends you a letter, they choose one of those mailboxes based on the label or number you've given them.
Binding is like deciding which mailbox you’re going to check for mail. If you decide to only check the mailbox labeled "Bills," it means you’re telling anyone who wants to send you something, "Hey, if you want me to see it, put it in the 'Bills' mailbox."
Binding does not mean choosing which addresses you'll accept requests from. Instead, it's about choosing which mailbox (among those at your house) you're going to use to receive any mail (or requests) sent to your house.
Now, in this analogy my house has multiple mailboxes in front of it. Does that mean my laptop has multiple IP addresses?
Well, normally the NIC in a laptop has only one IP address configured. But on cloud platforms, where multiple websites are hosted on the same machine, a NIC with multiple IP addresses helps: each IP address on the same machine can receive the requests meant for it without interfering with the others.
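Here is a hedged sketch of what those choices look like in Rust (the ports and the 192.168.1.10 address are made-up examples). The important point from the mailbox analogy: binding only picks which local "mailbox" you receive on; it never restricts which remote machines are allowed to send to it.

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Loopback only: reachable just from this machine itself.
    let _local_only = TcpListener::bind("127.0.0.1:8080")?;

    // A specific address assigned to one of the machine's interfaces.
    // (192.168.1.10 is made up; it must actually be configured on this
    // machine, otherwise bind() fails with "cannot assign requested address".)
    // let _one_interface = TcpListener::bind("192.168.1.10:8080")?;

    // 0.0.0.0 means "every IP address this machine has": one listener,
    // all mailboxes at once.
    let _all_interfaces = TcpListener::bind("0.0.0.0:9090")?;

    // None of these choices restrict WHO may connect; they only choose
    // which local address and port we receive on.
    Ok(())
}
```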
Now back to how listening works theoretically
Operating System's Role: The operating system keeps track of all the IP addresses and ports that applications are listening on. When network packets arrive at the device, the operating system checks the destination IP address and port number against this list.
Again, notice that the operating system checks the packet's destination IP address and port number. So if the machine has multiple IP addresses configured, this is how the OS works out which address the request is meant for, and therefore which listening application should receive it.
Network Traffic Filtering: If the destination IP and port of an incoming packet match an application that's listening (i.e., our TcpListener), the operating system forwards this packet to that application. If not, the packet is ignored or rejected based on the system's network rules.
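Here's a small sketch of that filtering seen from the application's side, assuming two listeners bound to different ports (8080 and 9090 are arbitrary). The OS hands each incoming connection to whichever listener's bound port matches the packet's destination port:

```rust
use std::net::TcpListener;
use std::thread;

fn main() -> std::io::Result<()> {
    // Two independent "mailboxes" on the same machine, one per port.
    let first = TcpListener::bind("127.0.0.1:8080")?;
    let second = TcpListener::bind("127.0.0.1:9090")?;

    // The OS inspects each incoming packet's destination port and delivers
    // the resulting connection to the listener bound to that port.
    let t1 = thread::spawn(move || {
        for conn in first.incoming() {
            if let Ok(stream) = conn {
                println!("port 8080 got a connection from {:?}", stream.peer_addr());
            }
        }
    });
    let t2 = thread::spawn(move || {
        for conn in second.incoming() {
            if let Ok(stream) = conn {
                println!("port 9090 got a connection from {:?}", stream.peer_addr());
            }
        }
    });

    t1.join().unwrap();
    t2.join().unwrap();
    Ok(())
}
```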
How does the OS "listen" to the request?
The process by which an operating system listens for and handles incoming network requests to a bound IP address and port doesn't typically rely on a timer that checks periodically (like every 1 second). Instead, the mechanism is more efficient and event-driven, utilizing the network stack and hardware capabilities to immediately notify the operating system of incoming packets. Here’s a simplified overview of how this works:
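Before the hardware overview, here's what "no fixed interval" means from the Rust program's side, in a hedged sketch (the address is arbitrary). In the default blocking mode, accept() contains no timer at all: the kernel parks the thread and wakes it the moment a matching connection arrives. Switching the listener to non-blocking mode shows what polling would look like if we actually had to do it ourselves:

```rust
use std::io::ErrorKind;
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;

    // Default (blocking) mode: this single call would just park the thread
    // in the kernel until a connection arrives; no polling loop in our code.
    // let (_stream, addr) = listener.accept()?;

    // Non-blocking mode, only to show the contrast: accept() returns
    // immediately with WouldBlock, so WE would have to keep asking.
    listener.set_nonblocking(true)?;
    loop {
        match listener.accept() {
            Ok((_stream, addr)) => {
                println!("connection from {addr}");
                break;
            }
            Err(e) if e.kind() == ErrorKind::WouldBlock => {
                // Nothing yet; busy-wait (inefficient, shown only for contrast).
                std::thread::yield_now();
            }
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```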
Network Interface Controller (NIC)
Each network packet arriving at a device first reaches the Network Interface Controller (NIC), the hardware component that connects a computer to a network. The NIC operates at a low level, handling the physical transmission and reception of data packets over the network medium.
So essentially, it's neither the TCP listener nor the OS that is doing the actual listening, but the NIC.
The initial detection of incoming network packets is indeed a hardware-level function, managed primarily by the NIC. As the first point of contact for all incoming data, it interfaces directly with the physical network medium (Ethernet, Wi-Fi, etc.) and handles the raw transmission and reception of packets.
So what happens when a NIC receives concurrent requests? Can it detect only one at a time, or an unlimited number?
The Network Interface Controller (NIC) plays a crucial role in handling incoming network traffic, including dealing with concurrent requests. While the NIC is extremely fast and can process a vast number of packets per second, it indeed deals with one packet at a time due to the serial nature of network communication.
However, the combination of high-speed operation, buffering, and efficient handling by the operating system and application software makes it capable of managing what appears as concurrent requests. Here's how it works:
Serial Processing: Physically, the NIC receives packets one at a time because a network cable or wireless connection can only carry one packet's worth of data at any instant. However, because packets are small and the NIC operates at a very high speed, it can process many packets in a very short amount of time, giving the impression of parallelism.
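To see how that turns into apparent concurrency at the application level, here's a hedged sketch (the echo handler and port are made up). Connections that arrive while the program is busy aren't lost: the NIC and the kernel buffer them (the listen backlog), and the next accept() call picks them up; spawning a thread per connection then lets the program serve several clients at what looks like the same time.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// A made-up per-connection handler: read some bytes and echo them back.
fn handle(mut stream: TcpStream) -> std::io::Result<()> {
    let mut buf = [0u8; 1024];
    let n = stream.read(&mut buf)?;
    stream.write_all(&buf[..n])?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;

    for conn in listener.incoming() {
        // Connections that arrived while we were busy were queued by the
        // kernel; incoming() just keeps handing them to us one by one.
        let stream = conn?;
        thread::spawn(move || {
            let _ = handle(stream);
        });
    }
    Ok(())
}
```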
Now again the question that bugged me here was -
How can something send a request to another thing without the other thing waiting for it?
Like in our current discussion: the NIC is sending some information to the CPU, but the CPU is working on something else, so how exactly does the CPU receive it?
The process being described here is called asynchronous communication: one component can send a signal or message to another without the recipient actively waiting for it at that moment. Let's break down how this works in the context of the NIC sending an interrupt request (IRQ) to the CPU:
Interrupt Controller: Modern computer systems include an interrupt controller, a hardware component responsible for managing interrupts from various devices, including the NIC. The interrupt controller is constantly monitoring for incoming interrupts from different sources.
The main purpose of an interrupt controller is to arbitrate between multiple devices that may need the CPU's attention simultaneously. It ensures that each device gets a fair chance to interrupt the CPU when necessary.
When a hardware device, such as a network interface card (NIC) or a keyboard, needs to communicate with the CPU, it sends an interrupt signal to the interrupt controller.
Yes, even keystrokes from a keyboard generate interrupts in a computer system. When you press a key, the keyboard sends an electrical signal to the keyboard controller, which in turn generates an interrupt to the CPU via the interrupt controller. This interrupt informs the CPU that there is new input from the keyboard that needs to be processed.
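There's no way to reproduce a hardware interrupt from ordinary Rust code, but as a loose userspace analogy, Unix signals behave a lot like one: the program below is busy doing its own work, and the kernel interrupts it to run a handler the moment a signal arrives. This is only a sketch under assumptions of my own (the libc crate as a dependency, a Unix-like OS, SIGUSR1 and the 200 ms work interval chosen arbitrarily):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;

// Flag set by the handler; the "main work" loop notices it afterwards,
// much like an OS defers heavy work out of the real interrupt handler.
static NOTIFIED: AtomicBool = AtomicBool::new(false);

// Runs asynchronously when SIGUSR1 arrives, preempting whatever main()
// was doing at that moment (loosely analogous to an interrupt handler).
extern "C" fn on_signal(_signum: libc::c_int) {
    NOTIFIED.store(true, Ordering::SeqCst);
}

fn main() {
    // SAFETY: the handler only touches an atomic flag.
    unsafe {
        libc::signal(
            libc::SIGUSR1,
            on_signal as extern "C" fn(libc::c_int) as libc::sighandler_t,
        );
    }

    println!("run `kill -USR1 {}` from another terminal", std::process::id());

    // The "CPU doing other work": this loop never asks the kernel whether a
    // signal has arrived; the kernel interrupts it when one does.
    loop {
        std::thread::sleep(Duration::from_millis(200));
        if NOTIFIED.swap(false, Ordering::SeqCst) {
            println!("handler ran: we were notified asynchronously");
        }
    }
}
```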
So this explains how the data reaches the OS from the NIC (via interrupt handling). Now, how does it reach the TCP listener?
That's a topic for our next discussion 😁. Thanks for reading this far!