Phase II: The Backend Architect
Day 3: Network Armor & High-Throughput Streams
19 min read
Series: Logic & Legacy
Day 3 / 40
Level: Network Architecture
⏳ Context: In Day 1, we touched the raw TCP wire. In Day 2, we structured our communication using HTTP Semantics. Today, we confront the harsh realities of the open internet. We must establish borders, forge unbreakable cryptographic armor, and optimize our pipes to handle massive, sustained data streams without collapsing our servers.
1. The Border Guard: CORS & The OPTIONS Preflight
Every web developer has stared at the dreaded red console error: "Blocked by CORS policy." Junior developers blindly Google "how to disable CORS" and paste wildcard workarounds. Architects understand that CORS is not a bug; it is a critical security mechanism.
By default, web browsers enforce the Same-Origin Policy. If a script loaded on https://myfrontend.com tries to make an API call to https://mybackend.com, the browser will block it. Browsers do this to prevent malicious websites from quietly making requests to your banking app in the background.
🌿 Gita Wisdom: Sva-Dharma (The Origin Domain)
In the Gita, Krishna speaks of Sva-dharma: performing one's own duty within one's designated sphere. Operating outside your domain (Para-dharma) is perilous. Similarly, a browser restricts scripts to their Origin. To cross the border, diplomatic protocols (CORS) must be explicitly negotiated before the payload can march.
The Preflight Request (OPTIONS)
When you attempt a complex cross-origin request (like sending a JSON payload via POST), the browser pauses. Before sending your data, it sends an invisible OPTIONS request to the server. This is the Preflight.
The browser asks: "I am myfrontend.com. I want to send a POST request with an Authorization header. Do you allow this?"
The Server must explicitly reply with specific headers to grant passage:
```
Access-Control-Allow-Origin: https://myfrontend.com
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: Authorization, Content-Type
```
If the server replies correctly, the browser opens the gate and sends the actual POST request. Note: CORS protects the browser. Postman and cURL ignore CORS entirely because they are not browsers executing untrusted JavaScript.
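To see the negotiation concretely, here is a minimal, standard-library-only sketch of a server answering a preflight. It is an illustration, not production middleware: the allowed origin `https://myfrontend.com` and the `/api/data` path are just the example values from above.

```python
import http.client
import http.server
import threading

class PreflightHandler(http.server.BaseHTTPRequestHandler):
    # Illustrative allow-list: only our example frontend may cross the border.
    ALLOWED_ORIGIN = "https://myfrontend.com"

    def do_OPTIONS(self):
        origin = self.headers.get("Origin", "")
        self.send_response(204)  # Preflight answers carry headers, no body
        if origin == self.ALLOWED_ORIGIN:
            self.send_header("Access-Control-Allow-Origin", origin)
            self.send_header("Access-Control-Allow-Methods", "POST, GET, OPTIONS")
            self.send_header("Access-Control-Allow-Headers", "Authorization, Content-Type")
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

# Spin up a throwaway local server on a random free port.
server = http.server.HTTPServer(("127.0.0.1", 0), PreflightHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Play the browser's role: send the invisible OPTIONS probe.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("OPTIONS", "/api/data", headers={
    "Origin": "https://myfrontend.com",
    "Access-Control-Request-Method": "POST",
})
resp = conn.getresponse()
print(resp.getheader("Access-Control-Allow-Origin"))  # https://myfrontend.com
server.shutdown()
```

Change the `Origin` header to any other value and the allow headers vanish from the reply, which is exactly when a real browser would raise the red console error.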
2. The Armor: Deep Dive into TLS/SSL
In Day 1, we learned that HTTPS wraps HTTP in TLS (Transport Layer Security). But how do two computers, communicating over a public network monitored by hackers, agree on a secret code without the hackers intercepting the code?
They use the greatest mathematical trick in computer science: The TLS Handshake.
The Two-Phase Encryption Engine
- Phase 1: Asymmetric Encryption (The Handshake): The Server holds a Public Key (which everyone can see via the SSL Certificate) and a Private Key (kept secret). The Client uses the Server's Public Key to encrypt a random "Pre-Master Secret". Crucial math property: data encrypted with a Public Key can ONLY be decrypted by the corresponding Private Key. The Server receives it, decrypts it, and now both machines possess the same secret. (This describes the classic RSA key exchange; TLS 1.3 establishes the secret with an ephemeral Diffie-Hellman exchange instead, but the two-phase principle is the same.)
- Phase 2: Symmetric Encryption (The Payload): Asymmetric math is incredibly slow. Therefore, once the secret is shared, both machines use it to generate a fast Symmetric Key (like AES-256). From this microsecond onward, all HTTP data is encrypted symmetrically.
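The core trick of key agreement can be sketched end to end with a toy Diffie-Hellman exchange, the mechanism modern TLS uses to establish the shared secret. The numbers below are deliberately tiny and insecure; they exist only to show that both sides arrive at the same secret without ever transmitting it:

```python
import hashlib
import secrets

# Public parameters, visible to any eavesdropper on the wire.
# Toy-sized for readability; real DH uses 2048-bit+ primes or elliptic curves.
p = 4294967291   # a prime (2**32 - 5)
g = 5            # public generator

# Each side picks a private exponent and publishes ONLY g^x mod p.
client_private = secrets.randbelow(p - 2) + 2
server_private = secrets.randbelow(p - 2) + 2
client_public = pow(g, client_private, p)
server_public = pow(g, server_private, p)

# Both sides compute the SAME value; the eavesdropper, who saw only
# the public halves, cannot feasibly reconstruct it.
client_shared = pow(server_public, client_private, p)
server_shared = pow(client_public, server_private, p)
assert client_shared == server_shared

# Phase 2 begins: hash the shared secret into a fast symmetric session key.
session_key = hashlib.sha256(client_shared.to_bytes(8, "big")).digest()
print(len(session_key))  # 32 bytes -> sized for an AES-256 key
```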
3. Persistent Connections & The Speed/RAM Tradeoff
Every time you make an HTTP request, TCP requires a 3-way handshake (SYN, SYN-ACK, ACK), followed by the multi-round-trip TLS handshake. Depending on network latency, this can cost hundreds of milliseconds before a single byte of JSON is even transmitted.
If a client makes 50 sequential requests to the same server, performing 50 handshakes will destroy your performance. The solution is Persistent Connections (Keep-Alive).
By keeping the TCP socket open after the first request, subsequent requests bypass the handshake and achieve near-zero latency. But this introduces the ultimate Backend tradeoff: Speed vs. RAM.
The Socket Constraint
Every open TCP connection consumes a File Descriptor and a block of RAM on your server. If you leave connections open (Keep-Alive) for 10,000 idle mobile clients, your server will exhaust its RAM and crash, even if CPU usage is 0%. You must tune the connection pool.
Tuning the aiohttp Connection Pool
```python
import asyncio
import aiohttp

async def fetch_high_volume_data():
    # The TCPConnector manages the persistent connection pool.
    # limit=100: max 100 concurrent open sockets to prevent RAM exhaustion.
    # keepalive_timeout=30: close idle sockets after 30s to free memory.
    connector = aiohttp.TCPConnector(limit=100, keepalive_timeout=30)

    # Use ONE session for all requests so the open sockets are reused!
    async with aiohttp.ClientSession(connector=connector) as session:
        for i in range(50):
            # Requests 2 through 50 will be lightning fast (no handshakes)
            async with session.get('https://api.example.com/data') as response:
                data = await response.json()
                print(f"Fetched item {i}")

if __name__ == "__main__":
    asyncio.run(fetch_high_volume_data())
```
4. The River: Streaming Data
Imagine a user requests a 5GB video file from your server. If you read that file into Python memory and return it as a standard HTTP response, your RAM immediately spikes by 5GB. Three concurrent users will crash the server.
To survive, we use HTTP Streaming (Transfer-Encoding: chunked). The server reads 1 Megabyte from the disk, flushes it down the TCP socket, and discards it from RAM. Memory usage remains a flat, predictable 1MB regardless of file size.
FastAPI Chunked Streaming
```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
import time

app = FastAPI()

# A generator that yields data lazily (see Phase I: Lazy Evaluation)
def fake_video_streamer():
    for i in range(10):
        time.sleep(0.5)  # Simulate reading from disk
        yield b"Here is a 1MB chunk of video binary data...\n"

@app.get("/video")
async def stream_video():
    # FastAPI keeps the TCP socket open and streams chunks as they are yielded;
    # server RAM usage stays flat regardless of total payload size.
    return StreamingResponse(fake_video_streamer(), media_type="video/mp4")
```
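The fake streamer above can be swapped for a real file using the same lazy pattern. A minimal sketch follows; the 5 MB temp file stands in for the 5 GB video, and in the FastAPI endpoint you would simply pass `iter_file(path)` to `StreamingResponse` instead of `fake_video_streamer()`:

```python
import os
import tempfile

CHUNK_SIZE = 1024 * 1024  # 1 MB: the only amount ever held in RAM at once

def iter_file(path, chunk_size=CHUNK_SIZE):
    """Lazily yield a file in fixed-size chunks; memory usage stays flat."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

# Demo: a 5 MB temp file standing in for the 5 GB video.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * (5 * 1024 * 1024))

chunks = list(iter_file(tmp.name))
print(len(chunks))  # 5 chunks of 1 MB each
os.remove(tmp.name)
```

Note that the demo calls `list()` only to count the chunks; a real response would iterate the generator one chunk at a time, never materializing the whole file.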
🛠️ Day 3 Project: The Speed Test
Prove the theory of Persistent Connections to yourself.
- Write a script using the standard `requests` library that makes 20 individual `requests.get()` calls to a public API. Wrap it in `time.perf_counter()` and record the total time.
- Now wrap those 20 requests inside a `requests.Session()` context manager (which implements connection pooling natively).
- Run the benchmark. You will see first-hand the latency reduction from eliminating the repetitive TCP/TLS handshakes.
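For a dependency-free preview of the result, the same experiment can be run with only the standard library against a throwaway local server. This is a sketch of the measurement harness, not the project itself; over the real internet, where every fresh connection pays full round trips plus the TLS handshake, the gap is far more dramatic than on localhost:

```python
import http.client
import http.server
import threading
import time

class JSONHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 is required for Keep-Alive

    def do_GET(self):
        body = b'{"ok": true}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep benchmark output clean
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), JSONHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_port
ok = 0

# Variant A: a fresh TCP connection (and handshake) for every request.
start = time.perf_counter()
for _ in range(20):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/data")
    resp = conn.getresponse()
    ok += resp.status == 200
    resp.read()
    conn.close()
cold = time.perf_counter() - start

# Variant B: ONE persistent connection reused for all 20 requests.
start = time.perf_counter()
conn = http.client.HTTPConnection("127.0.0.1", port)
for _ in range(20):
    conn.request("GET", "/data")
    resp = conn.getresponse()
    ok += resp.status == 200
    resp.read()  # must drain the body before reusing the socket
conn.close()
warm = time.perf_counter() - start
server.shutdown()

print(f"fresh connections: {cold:.4f}s  persistent: {warm:.4f}s")
```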
🔥 HTTP Part 4 Teaser
We have mastered the physical wire, the semantics, and the performance pipelines. Next, we secure the gates. Day 4 explores Authentication: JWTs, OAuth 2.0, and Stateless Security.
Architectural Consulting
If you are building a data-intensive AI application and require a Senior Engineer to architect your secure, high-concurrency backend, I am available for direct contracting.
Explore Enterprise Engagements →
[← Previous
Day 2: Verbs & Semantics](https://logicandlegacy.blogspot.com/2026/04/the-backend-architect-day-2-http.html)
[Next →
Day 4: JWT & Identity Auth](#)
Originally published at https://logicandlegacy.blogspot.com