Most developers use frameworks like Express or Flask without understanding what happens underneath. I built an HTTP/HTTPS server from raw TCP sockets in Go to learn how web servers actually work - no frameworks, just socket programming and protocol implementation.
The result? A journey from 250 RPS with buggy connection handling to 4,000 RPS at peak performance. Here's how I did it and what I learned.
Why Build This?
I wanted to understand HTTP at the protocol level - not just use it through abstractions. What does "parsing a request" actually mean? How does keep-alive work? What makes one server faster than another?
Building from TCP sockets up forced me to learn:
- How HTTP requests are structured byte-by-byte
- Connection lifecycle and reuse patterns
- TLS/SSL encryption from first principles
- Why implementation details matter for performance
What I Built
A lightweight HTTP/HTTPS server that:
- Parses HTTP requests directly from TCP socket connections
- Routes requests with custom method and path matching
- Serves static files with proper MIME type detection
- Supports HTTP/1.1 keep-alive for connection reuse
- Handles form data and JSON request bodies
- Includes optional TLS encryption for HTTPS
- Achieves 4,000 requests per second at peak
Tech stack: Pure Go, no web frameworks. Just the `net` package for TCP, `crypto/tls` for HTTPS, and `html/template` for rendering.
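As an aside, the MIME type detection mentioned in the feature list isn't shown in this post, but Go's standard library makes it a one-liner. A minimal sketch (the `contentTypeFor` helper is mine, not from the repo):

```go
package main

import (
	"fmt"
	"mime"
	"path/filepath"
)

// contentTypeFor looks up the file extension in Go's built-in MIME table
// and falls back to a safe default for unknown types.
func contentTypeFor(path string) string {
	if ct := mime.TypeByExtension(filepath.Ext(path)); ct != "" {
		return ct
	}
	return "application/octet-stream" // unknown types are served as raw bytes
}

func main() {
	fmt.Println(contentTypeFor("index.html")) // text/html; charset=utf-8
	fmt.Println(contentTypeFor("noext"))      // application/octet-stream
}
```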
The Performance Journey
Initial Version: 250 RPS (Buggy)
My first implementation had a critical bug in request processing combined with Connection: close on every response. The server worked but was crawling.
Bug Fix: 1,389 RPS
Fixed the request handling logic but still sent Connection: close on every response. This meant every request required a new TCP handshake - expensive.
Keep-Alive Implementation: 1,710 RPS
Added proper HTTP/1.1 keep-alive support. Connections stayed open for multiple requests. Immediate improvement but not optimal yet.
Peak Performance: 4,000 RPS
Discovered that concurrency level matters significantly. At 10 concurrent connections with connection reuse, the server hit 4,000 RPS with 0.252ms response times.
Total improvement: 16x from initial version
How It Actually Works
1. TCP Connection Handling
```go
listener, err := net.Listen("tcp", ":8080")
if err != nil {
	log.Fatal(err)
}

for {
	conn, err := listener.Accept()
	if err != nil {
		continue
	}
	go handleConnection(conn) // One goroutine per connection
}
```
Each connection gets its own goroutine. Go's scheduler handles thousands of these efficiently.
2. HTTP Request Parsing
Reading from the socket gives you raw bytes. You need to parse:
- Request line: `GET /path HTTP/1.1`
- Headers: `Content-Type: application/json`
- Body: form data or JSON payload
```go
func parseRequest(conn net.Conn) (*Request, error) {
	reader := bufio.NewReader(conn)

	// Read request line
	requestLine, err := reader.ReadString('\n')
	if err != nil {
		return nil, err
	}

	// Parse method, path, version
	parts := strings.Split(strings.TrimSpace(requestLine), " ")
	if len(parts) != 3 {
		return nil, errors.New("invalid request line")
	}
	method := parts[0]
	path := parts[1]

	// Read headers until empty line
	headers := make(map[string]string)
	for {
		line, err := reader.ReadString('\n')
		if err != nil || line == "\r\n" {
			break
		}
		// Parse header: "Content-Type: application/json"
		if key, value, ok := strings.Cut(line, ":"); ok {
			headers[key] = strings.TrimSpace(value)
		}
	}

	return &Request{Method: method, Path: path, Headers: headers}, nil
}
```
3. Keep-Alive Implementation
The breakthrough came from proper connection reuse:
```go
func handleConnection(conn net.Conn) {
	defer conn.Close()
	for {
		req, err := parseRequest(conn)
		if err != nil {
			break // Connection closed or invalid request
		}
		response := router.Handle(req)

		// Send response with keep-alive
		conn.Write([]byte("HTTP/1.1 200 OK\r\n"))
		conn.Write([]byte("Connection: keep-alive\r\n"))
		conn.Write([]byte("Content-Length: " + strconv.Itoa(len(response)) + "\r\n"))
		conn.Write([]byte("\r\n"))
		conn.Write([]byte(response))

		// Connection stays open for the next request unless the client opts out
		if req.Headers["Connection"] == "close" {
			break
		}
	}
}
```
This simple change - keeping connections open - improved performance by 23% (1,389 → 1,710 RPS).
4. Routing System
```go
type Handler func(*Request) (string, string)

type Router struct {
	routes map[string]map[string]Handler // method -> path -> handler
}

func (r *Router) Register(method, path string, handler Handler) {
	if r.routes[method] == nil {
		r.routes[method] = make(map[string]Handler)
	}
	r.routes[method][path] = handler
}

func (r *Router) Handle(req *Request) string {
	if handler, exists := r.routes[req.Method][req.Path]; exists {
		statusCode, body := handler(req)
		return createResponse(statusCode, body)
	}
	return createResponse("404", "Not Found")
}
```
5. HTTPS/TLS Support
Adding TLS was surprisingly straightforward with Go's crypto/tls:
```go
cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
if err != nil {
	log.Fatal(err)
}

config := &tls.Config{Certificates: []tls.Certificate{cert}}
listener, err := tls.Listen("tcp", ":8443", config)

// Same connection handling as HTTP
for {
	conn, err := listener.Accept()
	if err != nil {
		continue
	}
	go handleConnection(conn)
}
```
The TLS layer handles encryption/decryption transparently. Your HTTP parsing code stays the same.
Performance Analysis
Testing with different concurrency levels revealed interesting patterns:
| Concurrency | RPS | Response Time | Notes |
|---|---|---|---|
| 10 | 4,000 | 0.252ms | Peak performance |
| 50 | 2,926 | 0.342ms | Excellent |
| 100 | 2,067 | 0.484ms | Very good |
| 500 | 2,286 | 0.437ms | Good |
| 1000 | 1,463 | 0.683ms | Moderate load |
Key insights:
- Sweet spot: 10-200 concurrent connections
- Sub-millisecond response times at low concurrency
- 0% failure rate across all tests
- Connection reuse was the single biggest optimization
Example Usage
Here's how you'd use this server:
```go
router := server.NewRouter()

// Simple API endpoint
router.Register("GET", "/ping", func(req *server.Request) (string, string) {
	return server.CreateResponse("200", "text/plain", "OK", "pong")
})

// Form handling with browser detection
router.Register("POST", "/login", func(req *server.Request) (string, string) {
	username := req.Body["username"]
	browser := req.Browser // Parsed from User-Agent
	if username == "admin" {
		return server.CreateResponse("200", "text/html", "OK",
			"<h1>Welcome "+username+"!</h1><p>Browser: "+browser+"</p>")
	}
	return server.CreateResponse("401", "text/html", "Unauthorized",
		"<h1>Login Failed</h1>")
})
```
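The `req.Body["username"]` lookup above implies the raw urlencoded body gets parsed into a map somewhere. The repo may do this differently, but `net/url` makes a minimal version easy (the `parseFormBody` helper is hypothetical):

```go
package main

import (
	"fmt"
	"net/url"
)

// parseFormBody turns a raw urlencoded body into a flat map, which is one
// plausible shape for the req.Body field used in the handlers above.
func parseFormBody(raw string) (map[string]string, error) {
	values, err := url.ParseQuery(raw) // handles %-escapes and & separators
	if err != nil {
		return nil, err
	}
	form := make(map[string]string, len(values))
	for key := range values {
		form[key] = values.Get(key) // keep only the first value per key
	}
	return form, nil
}

func main() {
	form, _ := parseFormBody("username=admin&password=secret")
	fmt.Println(form["username"]) // admin
}
```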
Quick start:
```bash
git clone https://github.com/codetesla51/raw-http
cd raw-http
go mod tidy
go run main.go

# Test it
curl http://localhost:8080/ping
# Returns: pong

curl -X POST http://localhost:8080/login \
  -d "username=admin&password=secret"
```
What I Learned
1. HTTP Is Just Text Over TCP
Seeing the raw bytes demystified HTTP completely. It's not magic - just structured text following a protocol.
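For example, a complete keep-alive exchange against the `/ping` endpoint looks roughly like this on the wire (every line actually ends in `\r\n`, and real responses carry more headers):

```
GET /ping HTTP/1.1
Host: localhost:8080
Connection: keep-alive

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 4

pong
```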
2. Connection Reuse Matters
The jump from 1,389 to 1,710 RPS came purely from keeping connections open. TCP handshakes are expensive.
3. Concurrency Level Impacts Performance
More concurrent connections doesn't always mean better performance. The server performed best at 10-200 concurrent requests, then degraded at higher levels.
4. Go's Concurrency Model Shines
One goroutine per connection is simple and scales well. No need for complex event loops or worker pools.
5. TLS/SSL Is Less Scary Than It Seems
With proper libraries, adding encryption is straightforward. The protocol complexity is handled for you.
6. Security Requires Constant Attention
Path traversal protection, request size limits, and proper error handling are essential. Easy to miss when building from scratch.
Limitations
This is a learning project, not production software:
- Basic error handling
- Simple routing (no regex or path parameters)
- No middleware system
- Limited HTTP method support
- Self-signed certificates only
But that's the point - understanding fundamentals before adding features.
Try It Yourself
The full source code is on GitHub: codetesla51/raw-http
Routes to try:
- `/` - Home page
- `/ping` - Simple API endpoint
- `/login` - Form handling demo
- `/welcome` - Template rendering
For HTTPS (port 8443), the repo includes self-signed certificates for testing. For production, use Let's Encrypt or another CA.
Conclusion
Building an HTTP server from TCP sockets taught me more about web programming in a week than years of using frameworks. The 16x performance improvement wasn't just about optimization - it was about understanding what actually makes servers fast.
If you're curious about how the tools you use every day actually work, I highly recommend building them yourself. Start small, measure everything, and don't be afraid to make mistakes. My first version was buggy and slow, but that's how you learn.
More projects: devuthman.vercel.app
Source code: github.com/codetesla51/raw-http
Built with Go 1.21+ • Created by Uthman