TCP is blazingly fast. 🦀
When every millisecond matters - real-time systems, database engines, caches, brokers, mission-critical software - these systems speak raw TCP.
Because when speed and control are the priority, abstraction is the enemy.
HTTP, while reliable, comes at a cost. It sits on top of TCP and adds overhead: headers, status lines, encoding, decoding - latency creeps in.
The Cost of HTTP Overhead
Check out what happens under the hood with an HTTP response:
StatusCode : 200
StatusDescription : OK
Content : {72, 101, 108, 108...}
RawContent : HTTP/1.1 200 OK
Content-Length: 12
Hello world!
Headers : {[Content-Length, 12]}
RawContentLength : 12
HTTP is text-based. It requires parsing, extra processing, and additional steps before data even hits the network.
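For contrast, here's a minimal sketch of serving that same 12-byte payload with Node's built-in http module (the port is arbitrary); the status line and headers you saw above get generated, transmitted, and parsed on every exchange:
// http-server.js - minimal sketch using Node's built-in http module
const http = require("node:http")
http.createServer((req, res) => {
  res.writeHead(200, { "Content-Length": 12 }) // framing the body with HTTP metadata
  res.end("Hello world!")
}).listen(3000)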
Now compare that to raw TCP:
const data = Buffer.from("hello")
tcpserverClientConnection.write(data)
No overhead. No extra encoding or decoding. Protocol-less. Just raw data, from point A to B, as fast as possible.
An even faster transport exists: UDP - used in real-time apps (P2P gaming, live video streaming). But UDP trades reliability for speed.
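For a flavour of that trade-off, here's a minimal sketch with Node's built-in dgram module (the port is just an example): no handshake, no acknowledgement, and no guarantee the datagram ever arrives.
// udp-send.js - fire-and-forget datagram
const dgram = require("node:dgram")
const socket = dgram.createSocket("udp4")
socket.send(Buffer.from("hello"), 41234, "localhost", () => socket.close())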
TCP: Three Layers Below HTTP
The deeper you go, the more control you have:
HTTP
└── Encoding/Decoding
└── Binary/Buffers
└── TCP
If you want to be more than a CRUD engineer, if you want to build networking systems - understanding TCP is non-negotiable.
TCP: A Practical Introduction
One of the most powerful things about Node.js? It is what you make it.
- Need a CRUD API? ✅
- Need a systems engine? ✅
- Need to extend it with C++? ✅
Your perspective on Node defines what you can build.
Let's start simple: working directly with raw memory.
const data = Buffer.alloc(4)
data.writeInt32BE(7) // Store the number 7 as raw binary
What's happening here?
- Allocate 4 bytes (32 bits) of memory
- Write a 32-bit integer (7) as raw data
This is lower-level JavaScript.
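To confirm what landed in memory, read the same bytes back (continuing the snippet above):
console.log(data) // <Buffer 00 00 00 07>
console.log(data.readInt32BE(0)) // 7, decoded from the 4 raw bytes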
Now, let's go deeper and build a raw TCP server.
Writing a TCP Server in Node.js
A bare-bones TCP server:
// server.js
const net = require("node:net")
const server = net.createServer((c) => {
c.on("data", (data) => console.log(data))
c.on("error", (err) => console.log(err))
c.write("hello world") // Implicitly converts string to a buffer
c.end()
})
server.listen(3000)
Now, run the server and hit it with curl:
curl http://localhost:3000
🚨 ERROR:
curl: The server committed a protocol violation.
Why? Because we're speaking raw TCP, and curl expects HTTP (headers, status codes, and structured responses).
Let's create a TCP client instead:
// client.js
const net = require("node:net")
const c = net.createConnection({ port: 3000, host: "localhost" })
c.on("data", (data) => console.log(data))
c.write("hello")
Run the server and the client, and here's the output:
<Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>
That's raw TCP at work. But we can decode it easily, since it's only a string:
c.on("data", (data) => {
console.log(data.toString()) // "hello world"
})
The Power of Raw TCP
So why does this matter? Why should you care?
Because some of the most legendary systems you use daily run on pure TCP:
- MySQL clients
- RabbitMQ
- Redis
- Neo4j
- Email clients (SMTP, IMAP, POP3)
These systems don't need HTTP's baggage - no cookies, no headers, no JSON parsing. Just raw, efficient, custom protocols.
Building a Custom Protocol
A protocol is simply an agreement:
"When you send me data, I expect it in this format, or I reject it."
Here's how HTTP enforces its protocol:
HTTP/1.1 200 OK
Content-Length: 12
Hello world!
Let's build our own simple binary protocol on top of raw TCP.
Defining Our Protocol
We'll structure our buffer like this:
buffer = msg length (4 bytes) | msg (variable length) | metadata (variable length)
First 4 bytes → Message length
Next bytes → Message
Remaining bytes → Metadata
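For example, encoding the 11-byte message "hello world" with a single zero metadata byte produces this layout (shown in hex):
00 00 00 0b | 68 65 6c 6c 6f 20 77 6f 72 6c 64 | 00
length = 11 |          "hello world"           | metadata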
Now let's encode a message server-side:
const server = net.createServer((c) => {
c.on("data", (data) => console.log(data.toString()))
const data = Buffer.from("hello world") // Encode message
const metadata = Buffer.from([0x00]) // No metadata
const len = Buffer.alloc(4) // Allocate 4 bytes for length
len.writeInt32BE(data.length, 0) // Store message length
const combinedBuffer = Buffer.concat([len, data, metadata])
c.write(combinedBuffer) // Send to client
c.end()
})
Now, on the client side, we'll decode this structured message:
const c = net.createConnection({ port: 3000, host: "localhost" })
c.on("data", (data) => {
console.log(data)
const len = data.readInt32BE(0) // Read message length
console.log(len)
const msg = data.subarray(4, 4 + len) // Extract message
console.log(msg.toString())
const metadata = data.subarray(4 + len) // Extract metadata
console.log(metadata)
})
c.write("hello")
🚀 And just like that, we've built a structured TCP messaging system.
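One caveat worth noting: TCP is a stream, so a single data event isn't guaranteed to carry exactly one complete message. A minimal sketch of how the client's data handler could buffer chunks until the length prefix is satisfied (my own rough take, ignoring the metadata tail for brevity - the series covers framing properly):
let pending = Buffer.alloc(0)
c.on("data", (chunk) => {
  pending = Buffer.concat([pending, chunk]) // accumulate the stream
  while (pending.length >= 4) {
    const len = pending.readInt32BE(0) // read the 4-byte length prefix
    if (pending.length < 4 + len) break // wait for the rest of the message
    console.log(pending.subarray(4, 4 + len).toString())
    pending = pending.subarray(4 + len) // keep whatever belongs to the next message
  }
})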
The Bigger Picture
Believe it or not, this is how real-world networking applications work. In C, this buffer-based approach is the standard for high-performance data exchange.
By going beyond HTTP and working with raw TCP, you unlock the ability to build low-level networking systems.
What's Next?
This article is a tiny snippet from a larger series where we build a message broker for distributed systems from scratch.
You'll learn about:
✅ Long-lived TCP connections
✅ Buffer handling
✅ Serialization & deserialization
✅ Connection management
✅ Distributed systems architecture
If you've made it this far, you already think like a systems engineer - and that's rare.
The deeper you go, the more valuable you become.
You can find me on X.
see ya!🫡
Comments
Bonus: you can turn a TCP socket into HTTP, simply by writing an HTTP-formatted response over it.
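A minimal sketch of what that could look like (the exact headers are up to you):
const net = require("node:net")
const server = net.createServer((c) => {
  // hand-rolled HTTP/1.1 response written straight onto the raw socket
  c.write("HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 12\r\n\r\nHello world!")
  c.end()
})
server.listen(3000)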
Now curl and the browser will understand you
Although, do use legal HTTP/1.1 - so include a Host header field, and quite possibly Connection too, unless you really want it to stay open.
There are some slightly odd things here, like suggesting that IMAP is "pure TCP"; IMAP has just as much parsing overhead as HTTP (possibly more, in fact). I've written parsers for both, and with the latest HTTP/1.1 specifications, it's reasonably easy - whereas IMAP has nested structures and literals and other weird things. HTTP/2 is rather better in this regard than either. XMPP, despite appearances, is pretty fast to parse (you can get it to close to a byte-by-byte scan). SMTP is middling; but in general we're more concerned with throughput on SMTP than we are about latency.
Also, TCP is surprisingly not that fast - it's fine once "warmed up", which is one of the reasons why later versions of HTTP (like HTTP/1.1, but also HTTP/2) use long-lived TCP connections. But it takes a while to get going, which is one of the reasons behind QUIC and other things.
Where using "raw TCP" comes into its own is in three areas:
All of these are addressed to some degree in HTTP/2 - and HTTP/3, being on QUIC, rids itself of the TCP legacies as well.
Nobody said IMAP is faster than HTTP. The section is talking about software running custom protocols over raw TCP:
Email clients (SMTP, IMAP, POP3)
These systems don't need HTTP's baggage - no cookies, no headers, no JSON parsing. Just raw, efficient, custom protocols.
Key: custom protocols. The section is a list of example applications running on custom protocols, not HTTP; even the intro says so:
So why does this matter? Why should you care?
...
If you really wanted to make a case for HTTP vs TCP performance, you would have picked MySQL, which, by the way, returns a raw buffer on handshake.
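For instance (a rough sketch, assuming a MySQL server listening on the default localhost:3306):
const net = require("node:net")
const c = net.createConnection({ port: 3306, host: "localhost" })
c.on("data", (handshake) => {
  console.log(handshake) // the server greeting arrives as a raw binary Buffer, not HTTP text
  c.end()
})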
Taking the 5th point from an example list of five applications using raw TCP and a custom protocol (NOT HTTP), while skipping Redis and the databases, is just confusing. It's a list about custom protocols, not performance - nowhere did I mention performance - in an article that's more than that list, where nobody is shooting HTTP down, just highlighting custom protocols.
TCP warms up? A layer warms up? An engine warms up, not a transport layer. TCP has a cold start, and long-lived connections are not about warming up but about reusing a connection, since creating a file descriptor is expensive - that's all.
In languages like C and Go, that doesn't matter much.
And... HTTP/3 is built on UDP, and I clearly said in the article:
An even faster transport exists: UDP - used in real-time apps (P2P gaming, live video streaming). But UDP trades reliability for speed.
And... HTTP/2 is built atop TCP, but it's faster than TCP? So JavaScript in Node.js, built on top of C++, is faster than C++ by that logic?
HTTP/2 is not faster than TCP itself; it just uses TCP more efficiently than HTTP/1.x.
You come across as quite aggressive there, but I'm assuming you're not.
So, IMAP is a "custom protocol", in as much as it's not HTTP. But it's not particularly interesting in terms of speed, which was the headline of the article. One of the reasons IMAP doesn't use HTTP is because it predates it by decades, but it has a syntax substantially more complex than JSON, and it absolutely does have headers - exactly the same format as HTTP, in fact, only worse, because email. But anyway, assuming you were just illustrating that protocols exist at the same layer as HTTP, let's ignore that and move on.
TCP has a "cold start" and therefore "warms up", because it's doing window size discovery during connection startup (and, sometimes, PMTU discovery). It's not because creating a file descriptor is expensive - that's cheap, and UDP uses those too. These are problems that exist in C or Go (or C++) because they're inherent in the network properties of TCP. There's a related issue here with connection setup times, because TCP needs a three-way handshake, and typically you then need a TLS exchange as well, which is why HTTP/1.1 introduced connection reuse, and what you might be thinking of. On top of this, there's a class of problem called "Head of Line blocking", where slow, missing, or simply complicated data can stall the connection - HTTP/1.1 has its own entire class of HoL cases, but TCP has inherent ones too.
UDP is not "faster" - I really have no idea where that one comes from. But a protocol built on UDP directly can do certain things faster because HoL blocking doesn't exist in UDP because there's no line to be the head of. RTP, for example, streams your video calls over UDP because if data is dropped it's handled by the video codec. But you can stream video over TCP too - it's just that if packets get dropped everything will stall much more noticeably.
HTTP/2 runs on top of TCP, but it mitigates both some of the shortcomings of TCP and many of the shortcomings of HTTP/1.1. I didn't say it was faster than TCP - just takes advantage of a lot of the things you'd want to do in a new protocol. HTTP/2 doesn't have inherent HoL issues, though the underlying TCP still does.
HTTP/3 isn't really based on UDP, it's based on QUIC, which itself runs over UDP. UDP is no faster than TCP - they're both interfaces for sending packets over IP - UDP isn't "unreliable", as such, it's more that it "doesn't add reliability", which both TCP and QUIC (and SCTP, another transport layer which runs on top of UDP) do. QUIC does not have HoL issues inherently (though of course you can always add your own!).
Jesus, my fault, you're right! 🫡😭 I really regret putting email clients as a filler 5th item in a list of network systems powered by TCP. I digress.
For anything where you're trying to save even a few milliseconds, choosing Node is like riding a bicycle at a MotoGP race.
A pure JavaScript BSON serializer can outperform one built with C++ N-API. Node.js itself is C++, which is why performance depends on the layer you're using; TCP is just a thin wrapper around native code (tcp_wrap.cc).
There's a reason we have TensorFlow.js with C++ bindings, just as Python has C bindings. It's perspective and how deep you can go, that's all!
Check out some benchmarks with different HTTP servers set up in different languages. Then you'll see that once requests go beyond 40-50K, it gets really slow.
(benchmark screenshots: HTTP vs. TCP)
In conclusion, as I said in my comment and the article: Node.js is not for CRUD (HTTP) only; it's a C++ engine with raw binary capabilities, as TCP has shown.
A BSON serializer and TCP have nothing to do with CRUD and HTTP. The CRUD mindset will be the death of software 🤦🏽♀️
🫳🎤
let's write a raw TCP server ...
const net = require("node:net")
const server = net.createServer((c) => {
🤦