Every web framework tutorial shows you how to accept a file upload.
Almost none show you what to do next.
You validate the Content-Type header. You check the extension. You think you're done.
You're not.
The default file upload stack leaves you exposed on four fronts: parsing security, file type spoofing, size abuse, and malware. These seven tools close those gaps without requiring a dedicated security team.
TL;DR: File upload security requires a stack, not a single library. These 7 tools cover parsing, validation, and scanning end-to-end.
Table of Contents
- pompelmi — antivirus scanning before the file touches permanent storage
- multer — secure multipart parsing with built-in size and field limits
- busboy — low-level streaming parser for fine-grained upload control
- file-type — detect the real file type from magic bytes, not the filename
- mime-types — map MIME types reliably without trusting user input
- sharp — re-encode images to eliminate embedded payloads
- archiver — control archive creation to prevent zip bomb and path traversal risks
1) pompelmi — scan files before they land
What it is: A minimal Node.js wrapper around ClamAV that scans any uploaded file and returns a typed verdict — Clean, Malicious, or ScanError. Zero runtime dependencies, no cloud, no daemon required.
Why it matters: The file upload attack surface doesn't start with storage — it starts with what you accept. A PDF that passes as application/pdf can still carry a macro payload. An image can embed executable content. pompelmi adds a scanning layer before any file touches your database or object storage, running ClamAV locally so no user data ever leaves your server.
Best for: Express, Fastify, NestJS, Next.js, SvelteKit, any Node.js app that accepts user file uploads
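To make the verdict actionable in an upload flow, here is a minimal sketch. It assumes your multipart parser has already written the file to a temp path (e.g. multer's disk storage); the `quarantineOrPromote` helper and both paths are hypothetical — only `scan` and `Verdict` are pompelmi's documented API.

```javascript
// Hedged sketch: scan before promoting a file to permanent storage.
// scan() and Verdict come from pompelmi; everything else is illustrative.
const fs = require('fs/promises');
const { scan, Verdict } = require('pompelmi');

async function quarantineOrPromote(tempPath, finalPath) {
  const verdict = await scan(tempPath);
  if (verdict !== Verdict.Clean) {
    await fs.unlink(tempPath);            // never keep a flagged file around
    throw new Error(`Upload rejected (verdict: ${String(verdict)})`);
  }
  await fs.rename(tempPath, finalPath);   // only clean files are promoted
}
```

The point of the temp-path step is ordering: the file must not be reachable at its final URL, or referenced in your database, until the verdict comes back clean.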
pompelmi / pompelmi — ClamAV for humans

A minimal Node.js wrapper around ClamAV that scans any file and returns a typed Verdict: Verdict.Clean, Verdict.Malicious, or Verdict.ScanError. No daemons. No cloud. No native bindings. Zero runtime dependencies. Handles ClamAV installation and database updates automatically.
Quickstart

```shell
npm install pompelmi
```

```javascript
const { scan, Verdict } = require('pompelmi');

const result = await scan('/path/to/file.zip');

if (result === Verdict.Malicious) {
  throw new Error('File rejected: malware detected');
}
```
How it works

- Validate — pompelmi checks that the argument is a string and that the file exists before spawning anything.
- Scan — pompelmi spawns `clamscan --no-summary <filePath>` as a child process and reads the exit code.
- Map — the exit…
2) multer — multipart parsing with limits built in
What it is: Express middleware for handling multipart/form-data with configurable file size limits, field count caps, and pluggable storage engines.
Why it matters: Unbounded multipart parsing is a DoS vector. A malicious client can send a multi-gigabyte upload or thousands of fields and exhaust server memory before any route handler runs. Multer's limits configuration rejects requests that exceed your thresholds before the full payload is consumed.
Best for: Express apps, REST APIs with file upload endpoints, any multipart form processing
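A hedged sketch of what that limits configuration looks like in practice — the threshold values here are illustrative, not recommendations:

```javascript
// Requests exceeding any threshold are rejected with a MulterError
// before your route handler runs. Values below are example choices.
const multer = require('multer');

const upload = multer({
  dest: 'uploads/',              // temp directory, not final storage
  limits: {
    fileSize: 5 * 1024 * 1024,   // 5 MB per file
    files: 1,                    // one file per request
    fields: 20,                  // cap the number of non-file fields
  },
});

// app.post('/avatar', upload.single('avatar'), handler);
// An oversized upload surfaces in your Express error middleware
// with err.code === 'LIMIT_FILE_SIZE'.
```

Setting `dest` to a temp directory rather than final storage also pairs naturally with a scan-then-promote step.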
Multer is a node.js middleware for handling multipart/form-data, which is primarily used for uploading files. It is written
on top of busboy for maximum efficiency.
NOTE: Multer will not process any form which is not multipart (multipart/form-data).
Installation

```shell
npm install multer
```
Usage
Multer adds a body object and a file or files object to the request object. The body object contains the values of the text fields of the form, the file or files object contains the files uploaded via the form.
Basic usage example:
Don't forget the enctype="multipart/form-data" in your form.
```html
<form action="/profile" method="post" enctype="multipart/form-data">
  <input type="…
```

3) busboy — streaming multipart parsing for full control
What it is: A low-level streaming multipart parser for Node.js that processes uploads without buffering the entire payload in memory.
Why it matters: Multer is the right default. Busboy is the right tool when you need to act on files before they're fully received — streaming to S3, scanning chunks in flight, or enforcing byte-level limits mid-stream. It trades abstraction for control, which is what high-volume or security-critical pipelines need.
Best for: High-volume uploads, streaming directly to cloud storage, custom upload pipelines, apps where memory pressure matters
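Enforcing a byte limit mid-stream looks roughly like this — a sketch assuming busboy v1, where hitting `limits.fileSize` truncates the file stream and emits a `'limit'` event:

```javascript
// Hedged sketch: abort an upload the moment it exceeds the byte limit,
// without ever buffering the full payload in memory.
const http = require('http');
const busboy = require('busboy');

http.createServer((req, res) => {
  if (req.method !== 'POST') { res.writeHead(404).end(); return; }

  const bb = busboy({
    headers: req.headers,
    limits: { fileSize: 5 * 1024 * 1024, files: 1 },
  });

  bb.on('file', (name, file, info) => {
    file.on('limit', () => {
      // Limit hit mid-stream: respond and stop consuming the payload.
      res.writeHead(413, { Connection: 'close' }).end('File too large');
      req.unpipe(bb);
    });
    file.resume(); // or pipe the chunks straight to S3 / a scanner
  });

  bb.on('close', () => {
    if (!res.headersSent) res.writeHead(201).end('OK');
  });

  req.pipe(bb);
}).listen(3000);
```

The `file.resume()` call matters: busboy will not finish parsing while a file stream sits unconsumed.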
Description
A node.js module for parsing incoming HTML form data.
Changes (breaking or otherwise) in v1.0.0 can be found here.
Note: If you are using node v18.0.0 or newer, please be aware of the node.js
HTTP(S) server's requestTimeout
configuration setting that is now enabled by default, which could cause upload
interruptions if the upload takes too long.
Requirements
- node.js -- v10.16.0 or newer
Install

```shell
npm install busboy
```
Examples
- Parsing (multipart) with default options:
```javascript
const http = require('http');
const busboy = require('busboy');

http.createServer((req, res) => {
  if (req.method === 'POST') {
    console.log('POST request');
    const bb = busboy({ headers: req.headers });
    bb.on('file', (name, file, info) => {
      const { filename…
```

4) file-type — detect real file types from magic bytes
What it is: A library that reads the first bytes of a file (magic bytes) to identify its true MIME type — regardless of the filename or Content-Type header.
Why it matters: File extension and Content-Type are user-controlled. An attacker renames a .exe to .jpg and your extension check passes. Magic byte detection reads the actual file signature — the bytes operating systems and compilers use to identify formats. Combined with an allowlist, it's the only reliable layer for type verification.
Best for: Image upload validation, document processing, any upload workflow where file type matters for security
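The underlying technique is easy to reason about. The real library covers hundreds of formats; here is a deliberately simplified illustration of magic-byte sniffing for just PNG and JPEG, with hypothetical helper names:

```javascript
// Simplified sketch of magic-byte detection (the technique file-type
// implements at scale). Signature table and helper names are illustrative.
const SIGNATURES = [
  // PNG files always begin with these eight bytes.
  { mime: 'image/png', bytes: [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a] },
  // JPEG files begin with FF D8 FF.
  { mime: 'image/jpeg', bytes: [0xff, 0xd8, 0xff] },
];

function sniffMime(buffer) {
  for (const { mime, bytes } of SIGNATURES) {
    if (buffer.length >= bytes.length &&
        bytes.every((b, i) => buffer[i] === b)) {
      return mime;
    }
  }
  return null; // unknown signature: reject, don't guess
}

// A file named "photo.jpg" whose first bytes say PNG is still a PNG.
console.log(sniffMime(Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a])));
// → 'image/png'
```

In production use the package itself (its ESM API exposes functions such as `fileTypeFromBuffer`) rather than maintaining your own signature table.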
sindresorhus / file-type — Detect the file type of a file, stream, or data
The file type is detected by checking the magic number of the buffer.
This package is for detecting binary-based file formats, not text-based formats like .txt, .csv, .svg, etc.
We accept contributions for commonly used modern file formats, not historical or obscure ones. Open an issue first for discussion.
Install

```shell
npm install file-type
```
This package is an ESM package. Your project needs to be ESM too. Read more. For TypeScript + CommonJS, see load-esm. If you use it with Webpack, you need the latest Webpack version and ensure you configure it correctly for ESM.
Important
File type detection is based on binary signatures (magic numbers) and is a best-effort hint. It does not guarantee the file is actually of that type or that the file is valid/not malformed.
Robustness against malformed input is best-effort. When…
5) mime-types — reliable MIME type mapping
What it is: A comprehensive MIME type lookup library mapping file extensions to MIME types and vice versa, maintained against the IANA database.
Why it matters: Writing your own MIME allowlist is a maintenance burden with consistent gaps. mime-types provides the full IANA database in a maintained package. Use it with file-type for a two-layer check: magic bytes confirm the real format, MIME mapping drives your content handling logic.
Best for: Upload type allowlisting, Content-Type header generation, file serving middleware
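The two-layer check can be expressed as a small decision function. In a real pipeline `detectedMime` comes from file-type and the extension mapping comes from `mime.lookup()` in mime-types; here a tiny inline map stands in for illustration:

```javascript
// Two-layer type check: the MIME detected from magic bytes must be on the
// allowlist, AND the claimed filename extension must map to that same MIME.
// The inline map is a stand-in for mime-types' lookup().
const ALLOWED_MIMES = new Set(['image/png', 'image/jpeg']);
const extensionToMime = { png: 'image/png', jpg: 'image/jpeg', jpeg: 'image/jpeg' };

function acceptUpload(filename, detectedMime) {
  if (!ALLOWED_MIMES.has(detectedMime)) return false; // real type not allowed
  const ext = filename.split('.').pop().toLowerCase();
  return extensionToMime[ext] === detectedMime;       // name must match reality
}

console.log(acceptUpload('avatar.png', 'image/png'));  // → true
console.log(acceptUpload('avatar.png', 'image/jpeg')); // mismatch → false
```

Rejecting on mismatch (rather than silently renaming) keeps polyglot tricks and spoofed extensions out of your storage entirely.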
jshttp / mime-types — The ultimate javascript content-type utility.
Similar to the mime@1.x module, except:

- No fallbacks. Instead of naively returning the first available type, `mime-types` simply returns `false`, so do `var type = mime.lookup('unrecognized') || 'application/octet-stream'`.
- No `new Mime()` business, so you could do `var lookup = require('mime-types').lookup`.
- No `.define()` functionality.
- Bug fixes for `.lookup(path)`.

Otherwise, the API is compatible with mime 1.x.
Install

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

```shell
npm install mime-types
```
Note on MIME Type Data and Semver

This package considers its programmatic API as the semver compatibility contract. Additionally, the package that provides the MIME data for this package (mime-db) also considers its programmatic API as the semver contract. This means MIME type resolution is not considered in semver bumps.

In the past the version of mime-db…
6) sharp — re-encode images to strip dangerous payloads
What it is: A high-performance Node.js image processing library that converts, resizes, and re-encodes images using libvips.
Why it matters: An image that passes file-type validation can still contain EXIF metadata with XSS payloads, embedded scripts, or polyglot content that triggers vulnerabilities in downstream image parsers. Re-encoding through sharp strips all of this — the output is a clean, verified image. For any app serving user-uploaded images to other users, this step is non-negotiable.
Best for: Profile photos, user-generated image content, any pipeline that stores and serves uploaded images
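A minimal re-encoding step might look like this — a hedged sketch, with illustrative resize and quality values; sharp omits input metadata from the output unless you explicitly call `.withMetadata()`:

```javascript
// Hedged sketch: decode and re-encode an uploaded image. Decoding discards
// anything that isn't pixel data (EXIF/XMP payloads, trailing polyglot
// bytes); forcing JPEG output yields a known-safe format.
const sharp = require('sharp');

async function sanitizeImage(inputBuffer) {
  return sharp(inputBuffer)
    .rotate()                                        // apply EXIF orientation before it's dropped
    .resize({ width: 2048, withoutEnlargement: true })
    .jpeg({ quality: 85 })                           // force a single output format
    .toBuffer();
}
```

Calling `.rotate()` with no arguments first matters: orientation lives in the EXIF you are about to strip, so apply it to the pixels before it disappears.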
lovell / sharp — High performance Node.js image processing, the fastest module to resize JPEG, PNG, WebP, AVIF and TIFF images. Uses the libvips library.
sharp
The typical use case for this high speed Node-API module is to convert large images in common formats to smaller, web-friendly JPEG, PNG, WebP, GIF and AVIF images of varying dimensions.
It can be used with all JavaScript runtimes that provide support for Node-API v9, including Node.js (^18.17.0 or >= 20.3.0), Deno and Bun.
Resizing an image is typically 4x-5x faster than using the quickest ImageMagick and GraphicsMagick settings due to its use of libvips.
Colour spaces, embedded ICC profiles and alpha transparency channels are all handled correctly. Lanczos resampling ensures quality is not sacrificed for speed.
As well as image resizing, operations such as rotation, extraction, compositing and gamma correction are available.
Most modern macOS, Windows and Linux systems do not require any additional install or runtime dependencies.
Documentation
Visit sharp.pixelplumbing.com for complete installation instructions, API documentation, benchmark tests and changelog.
Examples

```shell
npm install…
```

7) archiver — create archives without exposing attack surfaces
What it is: A streaming archive creation library for Node.js supporting zip, tar, and other formats with programmatic entry control.
Why it matters: When you generate archives for users programmatically, archiver gives you explicit control over what's included — preventing path traversal, setting compression ratios, and limiting which files can enter the archive. When you own archive creation, you define the attack surface rather than inheriting it from user input.
Best for: Download bundles, backup generation, export features, any server-side archive creation workflow
archiverjs / node-archiver — a streaming interface for archive generation
Visit the API documentation for a list of all methods available.
Install

```shell
npm install archiver --save
```
Quick Start

```javascript
import fs from "fs";
import { ZipArchive } from "archiver";

// create a file to stream archive data to.
const output = fs.createWriteStream(__dirname + "/example.zip");
const archive = new ZipArchive({
  zlib: { level: 9 }, // Sets the compression level.
});

// listen for all archive data to be written
// 'close' event is fired only when a file descriptor is involved
output.on("close", function () {
  console.log(archive.pointer() + " total bytes");
  console.log(
    "archiver has been finalized and the output file descriptor has closed.",
  );
});

// This…
```

Final thoughts
File upload security is a pipeline, not a single check. Parse with size limits, detect real types from magic bytes, map to allowed MIME types, sanitize image content, scan for malware, and control any archive generation.
Skip any step and you have a gap. Use all seven and you have a defensible upload stack that doesn't trust user input at any layer.
What's your current file upload pipeline missing?