The Silent Killer: OutOfMemory (OOM) Errors
The most common mistake beginners make is reading an entire file into memory before saving it. If your server has 2GB of RAM and two users simultaneously upload 1.5GB videos, your process will try to hold roughly 3GB of file data at once and crash.
Why Buffering Fails
In a traditional "Buffered" approach, the server waits for the entire file to arrive, stores it in a variable (RAM), and then writes it to the disk or S3. This is fine for a 10KB profile picture, but it is a disaster for high-resolution video or large logs.
The Solution: The "Pipe" and "Stream" Model
Instead of treating a file like a bucket that must be filled, treat it like a garden hose (a Stream).
Implementation Strategy:
- Readable Streams: The incoming HTTP request is a readable stream.
- Writable Streams: Your destination (AWS S3 or Local Disk) is a writable stream.
- Piping: You "pipe" the data directly from the request to the destination.
Benefits:
- Constant Memory Usage: Your RAM usage stays at ~20-50MB regardless of whether the file is 1MB or 100GB.
- Backpressure Management: Streams automatically slow down the data flow if the destination (e.g., a slow database) can't keep up.
Pro-Tip: In Node.js, libraries like busboy or multer (with storage engines) handle this under the hood, but understanding the underlying stream logic is what separates a junior from a senior dev.