I wanted to build a free image compressor — no uploads to a server, no account required, just drop an image and get a compressed file back instantly. Here's what I learned building compressimg.pro.
Why browser-based?
Most image compression tools send your file to a server, compress it there, and send it back. That means:
- Your image leaves your device (privacy concern)
- You need a backend to maintain
- Latency on every compression
The browser can handle this entirely client-side using the Canvas API and Web Workers. No server, no cost, no privacy issue.
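To make that concrete, here's a minimal sketch of the idea using nothing but the Canvas API (illustrative, not the site's actual code; the library used below wraps the same approach with far more care):

```ts
// Minimal sketch: re-encode a File in the browser with the Canvas API alone.
// The quality value and output type here are illustrative.
async function canvasCompress(file: File, quality = 0.8): Promise<Blob> {
  const bitmap = await createImageBitmap(file)
  const canvas = document.createElement('canvas')
  canvas.width = bitmap.width
  canvas.height = bitmap.height
  canvas.getContext('2d')!.drawImage(bitmap, 0, 0)
  return new Promise((resolve, reject) =>
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error('encoding failed'))),
      'image/jpeg',
      quality // 0-1; lower means a smaller file
    )
  )
}
```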
The stack
- Next.js 14 with output: 'export' (static site, deployed free on Vercel)
- browser-image-compression, which handles the heavy lifting
- TypeScript throughout
The static export is key. Since there's no server-side logic, the entire site builds to static HTML/JS and deploys in seconds.
```js
// next.config.mjs
const nextConfig = {
  output: 'export',
  compress: true,
  trailingSlash: true,
}

export default nextConfig
```
The compression logic
The core is surprisingly simple:
```ts
// lib/compress.ts

// Shapes inferred from how the options and result are used below.
interface CompressOptions {
  quality: number // 1-100
  maxDimensionPx?: number
}

interface CompressResult {
  blob: Blob
  originalSize: number
  compressedSize: number
  previewUrl: string
  format: string
}

export async function compressImage(
  file: File,
  options: CompressOptions
): Promise<CompressResult> {
  // Dynamic import: keeps the initial bundle small
  const { default: imageCompression } = await import('browser-image-compression')

  const compressed = await imageCompression(file, {
    maxWidthOrHeight: options.maxDimensionPx ?? 1920,
    useWebWorker: true, // keeps main thread unblocked
    initialQuality: options.quality / 100,
    fileType: file.type,
    alwaysKeepResolution: true,
  })

  return {
    blob: compressed,
    originalSize: file.size,
    compressedSize: compressed.size,
    previewUrl: URL.createObjectURL(compressed),
    format: file.type.split('/')[1] ?? 'jpeg',
  }
}
```
Two things worth noting:
useWebWorker: true — This is the most important flag. Without it, compression runs on the main thread and freezes the UI for 1–3 seconds on large files. With it, the browser spawns a worker thread and the UI stays responsive throughout.
Dynamic import — browser-image-compression is ~50KB gzipped. Importing it at the top of the file adds it to the initial bundle and hurts LCP. Importing it inside the handler means it only loads when the user actually selects a file.
The upload UX — state machine approach
The upload box has 5 states: idle, dragging, processing, done, error. Defining these upfront made the component much cleaner than the typical boolean-flag soup:
```ts
type UploadState = 'idle' | 'dragging' | 'processing' | 'done' | 'error'
```
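That single value replaces a tangle of isDragging / isProcessing / hasError booleans. As a rough sketch of how it drives the component (the hook shape is assumed, not from the post; compressImage is the function above):

```tsx
import { useState } from 'react'

// Every event handler becomes an explicit state transition.
function useUpload() {
  const [state, setState] = useState<UploadState>('idle')
  const [result, setResult] = useState<CompressResult | null>(null)

  const select = async (file: File) => {
    setState('processing')
    try {
      setResult(await compressImage(file, { quality: 80 }))
      setState('done')
    } catch {
      setState('error')
    }
  }

  return { state, setState, result, select }
}
```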
The component handles three input methods — click, drag & drop, and Ctrl+V paste:
```tsx
const handlePaste = useCallback((e: ClipboardEvent<HTMLDivElement>) => {
  const item = Array.from(e.clipboardData.items)
    .find((i) => i.type.startsWith('image/'))
  if (!item) return
  const file = item.getAsFile()
  if (file) validateAndSelect(file)
}, [validateAndSelect])
```
Paste support is underrated — power users (designers, developers) paste screenshots constantly. Adding it took 10 lines and made the tool noticeably more useful.
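Drag & drop follows the same pattern, with the drag events driving the dragging state. A sketch (the post doesn't show these handlers, so the wiring is assumed):

```tsx
const handleDragOver = (e: DragEvent<HTMLDivElement>) => {
  e.preventDefault() // required, or the browser opens the file itself
  setState('dragging')
}

const handleDrop = (e: DragEvent<HTMLDivElement>) => {
  e.preventDefault()
  setState('idle')
  const file = e.dataTransfer.files[0]
  if (file) validateAndSelect(file)
}
```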
The performance mistake: LCP 7.1s → 2.0s
My first deploy had an LCP of 7.1s on mobile. Lighthouse pointed at the culprits: Google Analytics and AdSense, both loaded with strategy="afterInteractive".
Switching both to strategy="lazyOnload" dropped LCP to 2.0s instantly:
```tsx
// Before: blocks LCP
<Script
  src="https://www.googletagmanager.com/gtag/js"
  strategy="afterInteractive"
/>

// After: deferred until the browser is idle
<Script
  src="https://www.googletagmanager.com/gtag/js"
  strategy="lazyOnload"
/>
```
The lesson: any third-party script with afterInteractive runs as soon as the page is interactive — which is right when the browser is trying to paint your LCP element. lazyOnload waits until the browser is genuinely idle.
Blob URL memory management
One thing that bites people: URL.createObjectURL() creates a reference that persists in memory until you explicitly release it. After the user downloads their file:
```ts
export function triggerDownload(blob: Blob, filename: string) {
  const url = URL.createObjectURL(blob)
  const a = document.createElement('a')
  a.href = url
  a.download = filename
  a.click()
  // Release after 1s to make sure the download has started
  setTimeout(() => URL.revokeObjectURL(url), 1000)
}
```
Without revokeObjectURL, compressing many files in one session leaks memory progressively.
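The previewUrl created in compressImage needs the same treatment. One way to handle it in React (a sketch, not the tool's actual code) is an effect cleanup that revokes the previous preview whenever the result changes:

```tsx
// `result` holds the CompressResult for the current file.
useEffect(() => {
  if (!result) return
  // Runs when a new result replaces this one, or on unmount.
  return () => URL.revokeObjectURL(result.previewUrl)
}, [result])
```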
What I'd do differently
HEIC support is tricky. iOS devices often produce .heic files that browser-image-compression can't handle natively. I ended up adding a separate HEIC-to-JPEG conversion step before compression, which added complexity I didn't expect.
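For illustration, here's roughly what that step can look like with the heic2any library (the post doesn't name the converter it used, so treat this as one option):

```ts
// Sketch: convert HEIC to JPEG before handing the file to the compressor.
async function toCompressible(file: File): Promise<File> {
  const isHeic = file.type === 'image/heic' || /\.heic$/i.test(file.name)
  if (!isHeic) return file
  const { default: heic2any } = await import('heic2any')
  const converted = await heic2any({ blob: file, toType: 'image/jpeg', quality: 0.9 })
  // heic2any returns Blob[] for multi-image HEIC containers
  const blob = Array.isArray(converted) ? converted[0] : converted
  return new File([blob], file.name.replace(/\.heic$/i, '.jpg'), { type: 'image/jpeg' })
}
```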
Target file size is hard. Users often want "compress to under 100KB" rather than "compress at quality 80." Hitting a target size requires binary search over the quality parameter — doable, but not something the library handles out of the box.
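For the curious, here's a sketch of what that search could look like (not the tool's actual code; the iteration count and bounds are arbitrary):

```ts
// Sketch: binary-search initialQuality until the output fits targetBytes.
// Falls back to the original file if even the lowest quality doesn't fit.
async function compressToTargetSize(file: File, targetBytes: number): Promise<Blob> {
  const { default: imageCompression } = await import('browser-image-compression')
  let lo = 0.05
  let hi = 1.0
  let best: Blob = file
  for (let i = 0; i < 7; i++) { // 7 halvings narrow quality to within ~1%
    const mid = (lo + hi) / 2
    const out = await imageCompression(file, {
      initialQuality: mid,
      useWebWorker: true,
    })
    if (out.size <= targetBytes) {
      best = out // fits: try a higher quality next
      lo = mid
    } else {
      hi = mid // too big: drop quality
    }
  }
  return best
}
```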
Try it
The full tool is live at compressimg.pro — free, no account, no upload limits. Supports JPG, PNG, WebP, HEIC, and GIF.
If you're building something similar, the browser-image-compression library does 90% of the hard work. The remaining 10% is UX — handling all the edge cases around file validation, state management, and performance.
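For instance, the validateAndSelect guard referenced in the paste handler boils down to a type allowlist plus a sanity check; a sketch (the accepted formats come from the post, the rest is assumed):

```ts
const ACCEPTED = ['image/jpeg', 'image/png', 'image/webp', 'image/heic', 'image/gif']

function validateAndSelect(file: File) {
  if (!ACCEPTED.includes(file.type) || file.size === 0) {
    setState('error')
    return
  }
  void select(file) // `select` from the state hook sketched earlier
}
```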
Happy to answer questions in the comments.