How I Started
A few weeks into my journey with web development, I kept hearing about "requests" and "responses." Wait! What is a request? What is a response? Why does HTTP exist, and why is it called HTTP? I hate pretending to understand something. Therefore, I decided to build my own HTTP server—or at least pretend to :). Eventually, I got to the core idea: HTTP is just the Hypertext Transfer Protocol—basically a set of rules for transferring data. But what really mattered was understanding that HTTP is built on top of TCP (Transmission Control Protocol), which I'll explain below. So, I started building my own http-server-ts.
Wake-up Call from my Friend
A friend of mine had built his own server, and I was genuinely impressed. When I asked him how I could learn all this stuff related to web development, he gave me surprisingly simple advice: "Build your own server, or pretend like you're building one. Eventually, you'll learn." Here's my friend's project that impressed me. I started reading his blog, which turned out to be more beneficial than the two entire web development courses I took in college. People often say "don't reinvent the wheel," but I don't understand why. I think reinventing the wheel is actually the best way to learn, even if I build dumb stuff.
First Challenge: What is TCP?
I know it might sound stupid, but I genuinely didn't know what TCP was, even though I had been creating endpoints and APIs in my web classes, practicing with HTTP libraries and frameworks. So, I decided to search, and thanks to my friend's help, I grasped the main idea. TCP (Transmission Control Protocol) provides reliable communication. HTTP defines the format of messages (requests and responses), while TCP ensures those messages actually arrive correctly.
Basically, it's a 3-way handshake. Imagine a conversation between a client and server:
- Client says: "Yo, I want to connect"
- Server replies: "Yo, I got your message. Thanks for reaching out."
- Client confirms: "Thanks for getting my message. Let's start communicating."
That's it. That's TCP in simple terms. Once this handshake completes, they can exchange data reliably.
Parsing HTTP Requests: Not Rocket Science
I had heard a lot of hype about parsing HTTP requests, making it sound like some complex computer science problem. After actually implementing it, I realized it's not rocket science at all. You're just reading a string and splitting it into pieces.
Here's what an HTTP request actually looks like on the wire:
GET /api/users?id=123 HTTP/1.1\r\n
Host: localhost:8080\r\n
User-Agent: curl/7.64.1\r\n
Accept: */*\r\n
\r\n
That's it. Just text with \r\n (carriage return + line feed) separating lines, and \r\n\r\n marking the end of headers.
My parsing logic:
static parse(buffer: Buffer): HttpRequest {
  const request = new HttpRequest();
  const rawRequest = buffer.toString('utf-8');
  const lines = rawRequest.split('\r\n');

  // Step 1: Parse the first line, e.g. "GET /api/users?id=1 HTTP/1.1"
  const requestLineData = RequestLine.parse(lines[0]);
  request.method = requestLineData.method;
  request.path = requestLineData.path;
  request.version = requestLineData.version;

  // Step 2: Parse headers: every "Name: value" line until the blank line
  let i = 1;
  for (; i < lines.length && lines[i] !== ''; i++) {
    const separator = lines[i].indexOf(':');
    request.headers[lines[i].slice(0, separator).trim()] =
      lines[i].slice(separator + 1).trim();
  }

  // Step 3: Everything after the blank line is the body
  request.body = lines.slice(i + 1).join('\r\n');

  return request; // That's it
}
That's literally it. Split on \r\n, grab the first line for the request method/path, grab everything until the blank line for headers, and everything after is the body.
The "complex" part people made sound difficult was just string splitting and basic parsing. Once I saw the actual format, I realized I had been overthinking it.
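To make that concrete, here is a self-contained sketch of the same idea (separate from the HttpRequest and RequestLine classes in my server, which I've omitted above). Real servers need more care around Content-Length, chunked encoding, and malformed input, but the core really is just string manipulation:

```typescript
interface ParsedRequest {
  method: string;
  path: string;
  version: string;
  headers: Record<string, string>;
  body: string;
}

// Split the raw request text into its parts:
// request line, then headers until the blank line, then the body.
function parseRequest(raw: string): ParsedRequest {
  const headerEnd = raw.indexOf("\r\n\r\n");
  const head = raw.slice(0, headerEnd);
  const body = raw.slice(headerEnd + 4);

  const [requestLine, ...headerLines] = head.split("\r\n");
  const [method, path, version] = requestLine.split(" ");

  const headers: Record<string, string> = {};
  for (const line of headerLines) {
    const sep = line.indexOf(":");
    // Header names are case-insensitive, so normalize to lowercase
    headers[line.slice(0, sep).trim().toLowerCase()] =
      line.slice(sep + 1).trim();
  }

  return { method, path, version, headers, body };
}

const req = parseRequest(
  "GET /api/users?id=123 HTTP/1.1\r\nHost: localhost:8080\r\n\r\n"
);
// req.method === "GET", req.path === "/api/users?id=123",
// req.headers["host"] === "localhost:8080"
```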
Building HTTP Responses: Serializing Data Back
If parsing is just reading strings and splitting them, then building responses is just the reverse: taking your data and formatting it back into the HTTP format.
export class ResponseBuilder {
  private statusLine = new StatusLine();
  private headers = new HeaderBuilder();
  private body = "";

  setStatus(code: HttpStatusCode, message?: string): this {
    this.statusLine.set(code, message);
    return this;
  }

  setHeader(name: string, value: string): this {
    this.headers.set(name, value);
    return this;
  }

  setBody(body: string): this {
    this.body = body;
    return this;
  }

  build(): string {
    if (!this.headers.get('Content-Length') && this.body) {
      this.setHeader('Content-Length',
        Buffer.byteLength(this.body).toString());
    }

    return (
      this.statusLine.toString() + // "HTTP/1.1 200 OK\r\n"
      this.headers.toString() +    // "Content-Type: text/html\r\n..."
      "\r\n" +                     // Blank line
      this.body                    // "<h1>Hello</h1>"
    );
  }
}
The pattern here is called the "Builder Pattern." You chain method calls to configure the response, then call build() to get the final string. It's just a cleaner way to construct something with multiple pieces.
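To show the pattern in isolation, here is a stripped-down, self-contained version with the StatusLine and HeaderBuilder helpers inlined (this is a sketch for illustration, not the actual classes from my server):

```typescript
// Minimal builder: chain configuration calls, then build() the raw response.
class MiniResponseBuilder {
  private status = "HTTP/1.1 200 OK";
  private headers = new Map<string, string>();
  private body = "";

  setStatus(code: number, message: string): this {
    this.status = `HTTP/1.1 ${code} ${message}`;
    return this;
  }

  setHeader(name: string, value: string): this {
    this.headers.set(name, value);
    return this;
  }

  setBody(body: string): this {
    this.body = body;
    return this;
  }

  build(): string {
    // Fill in Content-Length if the caller didn't set it
    if (!this.headers.has("Content-Length") && this.body) {
      this.headers.set("Content-Length",
        Buffer.byteLength(this.body).toString());
    }
    const headerLines = [...this.headers]
      .map(([name, value]) => `${name}: ${value}\r\n`)
      .join("");
    return `${this.status}\r\n${headerLines}\r\n${this.body}`;
  }
}

const res = new MiniResponseBuilder()
  .setStatus(200, "OK")
  .setHeader("Content-Type", "text/plain")
  .setBody("Hello")
  .build();
// res === "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 5\r\n\r\nHello"
```

The chaining works because every setter returns `this`, so each call hands you back the same builder.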
HTTP Keep-Alive: Reusing Connections for Performance
Here's where things got interesting. Initially, my server opened a new TCP connection for every single request:
Request 1: Open connection → Send request → Get response → Close connection
Request 2: Open connection → Send request → Get response → Close connection
Request 3: Open connection → Send request → Get response → Close connection
Each of those "open connection" steps involves the 3-way TCP handshake I mentioned earlier. That adds roughly 1.5 round-trip times (RTT) of latency, which on a typical network might be around 5ms per connection. For 100 requests, that's 500ms of pure overhead just opening and closing connections.
Then I learned about HTTP Keep-Alive (also called persistent connections). The idea is simple: keep the connection open and reuse it for multiple requests.
Request 1: Open connection → Send request → Get response
Request 2: Send request → Get response (same connection!)
Request 3: Send request → Get response (same connection!)
...after 100 requests or timeout...
Close connection
Implementation:
export class KeepAliveManager {
  private requestCount = 0;
  private lastActivityTime: number;
  private readonly config: KeepAliveConfig;

  constructor(config: Partial<KeepAliveConfig> = {}) {
    this.config = {
      enabled: config.enabled ?? true,
      timeoutMs: config.timeoutMs ?? 60000,   // 60 seconds
      maxRequests: config.maxRequests ?? 100  // Max 100 requests per connection
    };
    this.lastActivityTime = Date.now();
  }

  // Called after each request served on this connection
  onRequest(): void {
    this.requestCount++;
    this.lastActivityTime = Date.now();
  }

  // Keep the connection open unless we hit one of the limits
  shouldKeepAlive(): boolean {
    if (!this.config.enabled) return false;
    if (this.requestCount >= this.config.maxRequests) return false;
    return Date.now() - this.lastActivityTime < this.config.timeoutMs;
  }
}
The logic is straightforward. First, track how many requests have been served on this connection. Then, track when the last activity happened. Lastly, keep the connection alive unless we've hit the max requests or the timeout.
I read about this technique in the HTTP/1.1 specification, which helped me understand it properly.
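One detail from the spec worth calling out: whether a connection should stay open also depends on the HTTP version and the Connection header. HTTP/1.1 defaults to persistent connections unless the client sends Connection: close, while HTTP/1.0 only keeps the connection open if the client explicitly sends Connection: keep-alive. A sketch of that rule (the function name here is mine, for illustration):

```typescript
// Decide whether the client wants the connection kept open.
// `connectionHeader` is the value of the request's Connection header,
// or undefined if the client didn't send one.
function wantsKeepAlive(version: string, connectionHeader?: string): boolean {
  const value = connectionHeader?.toLowerCase();
  if (version === "HTTP/1.1") {
    // HTTP/1.1: persistent by default, unless explicitly closed
    return value !== "close";
  }
  // HTTP/1.0 and older: closed by default, unless explicitly requested
  return value === "keep-alive";
}
```

A server would combine this check with its own limits (max requests, idle timeout) before deciding to reuse the connection.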
Unexpected Knowledge: JavaScript Event Loop and Callbacks
I thought I understood JavaScript until I started using callbacks for network programming. I had to stop and really learn how the event loop works.
After watching this video, I finally understood that JavaScript execution is divided into parts:
- Call Stack: Handles function execution in Last-In-First-Out order
- Web APIs: Where asynchronous operations happen
- Task Queue: Where completed async operations wait
- Event Loop: Moves tasks from the queue to the stack
Here's what happens when you register a callback:
conn.onData((data) => {
// This callback doesn't execute immediately
// It waits for data to arrive
});
In the above example, the onData function registers the callback and returns immediately. Then, when data arrives on the socket, Node.js puts the callback in the task queue. The event loop checks: "Is the call stack empty? Yes? Okay, move this callback from the queue to the stack." After that, the callback executes.
The interesting part is that the event loop never ends. By registering callbacks, I'm essentially keeping the process alive: the server keeps running because there are always callbacks waiting for events.
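A tiny experiment that made this click for me (plain Node, no server involved): even with a 0ms delay, a setTimeout callback has to go through the task queue, so it only runs after all the synchronous code on the call stack has finished.

```typescript
const order: string[] = [];

order.push("sync: start");

// Registered now, but queued: it runs only once the call stack is empty,
// even though the delay is 0 ms.
setTimeout(() => {
  order.push("queued: timeout callback");
  console.log(order.join(" -> "));
  // sync: start -> sync: end -> queued: timeout callback
}, 0);

order.push("sync: end");
```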
Language Specific: Buffers vs Strings
The dumb way I started:
// String concatenation (slow as hell)
let requestData = "";
conn.onData((data) => {
  requestData += data.toString(); // Creates new string every time!
  // Memory: Copy, copy, copy...
});
Every time you concatenate strings in JavaScript, it creates a brand new string in memory and copies everything over. For a few bytes, who cares? For a bunch of network chunks, this becomes a performance disaster.
The reasonable way:
// Buffer accumulation (fast)
class RequestBuffer {
  private chunks: Buffer[] = [];
  private totalSize: number = 0;

  append(chunk: Buffer): void {
    this.chunks.push(chunk); // Just store the reference!
    this.totalSize += chunk.length;
  }

  toBuffer(): Buffer {
    return Buffer.concat(this.chunks); // Combine once when needed
  }
}
Instead of copying data around, just store references to the chunks. When you finally need the complete data, combine everything once.
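For example, an HTTP request that arrives split across two TCP chunks can be reassembled with a single copy at the end (a sketch using Node's Buffer API directly):

```typescript
// Two chunks as they might arrive from the socket, splitting
// the request at an arbitrary byte boundary
const chunks: Buffer[] = [
  Buffer.from("GET / HT"),
  Buffer.from("TP/1.1\r\nHost: localhost\r\n\r\n"),
];

// One allocation and copy, done once the full request has arrived
const full = Buffer.concat(chunks);
// full.toString() === "GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"
```

This matters because TCP gives you a byte stream, not messages: a single request can arrive in any number of chunks, split anywhere.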
Conclusion
This journey of building an HTTP server in TypeScript has taught me more than any tutorial ever could. I've learned about TCP connections, the structure of the HTTP protocol, event-driven programming, and the internals of how servers actually work.

Honestly, networking was the topic I used to actively ignore. Whenever it came up in courses or conversations, I'd zone out or find excuses to skip over it. But diving into this project completely changed my perspective. Now I'm doing my best to learn more about networking, protocols, and everything I once avoided.

This project is far from finished; I'll keep improving it as my understanding deepens. If you spot any mistakes, misconceptions, or areas where I'm still confused, please call them out. I'm completely open to criticism because that's how I learn best. Whether it's my understanding of the event loop, my implementation of Keep-Alive, or anything else in this post, I welcome your feedback.