Every online image compressor I tried had the same problem: they upload your photos to a server.
TinyPNG, iLoveIMG, Compress2Go — they all work the same way. You pick a file, it goes to someone else's computer, gets compressed, comes back. The compression is good. But your photo — with its GPS coordinates, device serial number, and timestamps baked into the EXIF data — ends up sitting on a server you don't control.
I kept thinking: image compression is just math. It's the Canvas API, a quality parameter, and blob manipulation. There's no reason this needs a server.
So I built MiniPx. It compresses, converts, and resizes images entirely in the browser. Nothing gets uploaded. Ever. Here's how it works under the hood.
## The core compression loop
The actual compression happens in about 20 lines. Load the image into a canvas, draw it, export as a blob with a quality parameter:
```javascript
function compressAtQuality(img, w, h, fmt, quality) {
  return new Promise((resolve, reject) => {
    const canvas = document.createElement('canvas');
    canvas.width = w;
    canvas.height = h;
    const ctx = canvas.getContext('2d');

    // White background for JPEG (no transparency support)
    if (fmt === 'image/jpeg') {
      ctx.fillStyle = '#fff';
      ctx.fillRect(0, 0, w, h);
    }

    ctx.drawImage(img, 0, 0, w, h);
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error('No output'))),
      fmt,
      // PNG ignores the quality argument; pass it only for lossy formats
      fmt === 'image/png' ? undefined : quality
    );
  });
}
```
That's it. No sharp, no ImageMagick, no server-side anything. The browser's built-in JPEG/WebP encoder handles the actual compression.
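For context, here's one way the `img` argument gets produced from a user-selected `File` (a sketch; this helper isn't shown in the post, so the name is mine):

```javascript
// Sketch: decode a dropped File into an <img> that compressAtQuality can draw.
function loadImage(file) {
  return new Promise((resolve, reject) => {
    const url = URL.createObjectURL(file);
    const img = new Image();
    img.onload = () => { URL.revokeObjectURL(url); resolve(img); };
    img.onerror = () => { URL.revokeObjectURL(url); reject(new Error('Decode failed')); };
    img.src = url;
  });
}

// Usage: compress at original dimensions, 80% quality JPEG
// const img = await loadImage(file);
// const blob = await compressAtQuality(img, img.naturalWidth, img.naturalHeight, 'image/jpeg', 0.8);
```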
## The problem nobody talks about: when compression makes files bigger
Here's something I didn't expect. If you take a well-optimized JPEG and run it through Canvas at quality 0.65, the output can be larger than the input. The browser re-encodes the entire image from scratch — it doesn't know the original was already compressed.
I hit this constantly during testing. Users would drop a 200KB JPEG and get back a 280KB file. That's embarrassing.
The fix is a fallback chain. If the initial compression produces a bigger file, step down through lower quality levels until you beat the original:
```javascript
let blob = await compressAtQuality(img, w, h, fmt, quality);

if (blob.size >= file.size && fmt !== 'image/png') {
  for (const fallbackQ of [0.6, 0.45, 0.3, 0.2]) {
    if (fallbackQ >= quality) continue;
    const attempt = await compressAtQuality(img, w, h, fmt, fallbackQ);
    if (attempt.size < file.size) {
      blob = attempt;
      break;
    }
    // Still bigger than the original; keep the best attempt so far
    if (attempt.size < blob.size) blob = attempt;
  }

  // Last resort: try a different format entirely
  if (blob.size >= file.size && fmt === 'image/webp') {
    const jpegFallback = await compressAtQuality(
      img, w, h, 'image/jpeg', Math.min(quality, 0.5)
    );
    if (jpegFallback.size < blob.size) blob = jpegFallback;
  }
}
```
Not elegant, but it works. The user almost always gets a smaller file, even if the format or quality level isn't what they originally picked; in the rare case where nothing beats the original, they get the smallest output the encoder could manage.
## PNG is a special headache
PNG compression through Canvas is basically useless. The browser's PNG encoder produces files that are often 1.5-2x larger than the input because it doesn't do the palette quantization and filter optimization that tools like pngquant and oxipng apply.
My workaround: if a PNG output is significantly larger than the input, quietly try WebP and JPEG alternatives and pick the smallest:
```javascript
if (blob.size > file.size * 1.5 && fmt === 'image/png') {
  const webpAlt = await compressAtQuality(img, w, h, 'image/webp', quality);
  const jpegAlt = await compressAtQuality(img, w, h, 'image/jpeg', quality);
  const smallest = [blob, webpAlt, jpegAlt].sort((a, b) => a.size - b.size)[0];
  if (smallest.size < blob.size) blob = smallest;
}
```
This means a user who drops a 4MB PNG screenshot might get back a 400KB WebP instead of the 6MB PNG that Canvas would produce. The file extension changes, which is a tradeoff, but a 93% size reduction beats format purity.
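One consequence: the download filename has to follow the blob's actual MIME type, since the fallback may have changed the format. Something like this sketch (MiniPx's real naming logic isn't shown in the post):

```javascript
// Sketch: derive the extension from what the compressor actually produced,
// not from the name of the uploaded file.
const EXT = { 'image/jpeg': 'jpg', 'image/png': 'png', 'image/webp': 'webp' };

function downloadName(originalName, blob) {
  const base = originalName.replace(/\.[^.]+$/, '');
  return `${base}.${EXT[blob.type] ?? 'img'}`;
}
```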
## HEIC conversion without a server
This was the trickiest part. iPhones save photos as HEIC by default. Most online converters upload them to a server for decoding because browsers don't natively support HEIC — except Safari does.
So MiniPx checks first:
```javascript
const supportsHEICNatively = async () => {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(true);
    img.onerror = () => resolve(false);
    // Tiny base64 payload: just an HEIC 'ftyp' box, enough to probe the decoder
    img.src = 'data:image/heic;base64,AAAAGGZ0eXBoZWlj';
    // Guard against browsers that never fire either event
    setTimeout(() => resolve(false), 500);
  });
};
```
Safari users get zero-dependency HEIC conversion through the same Canvas trick — load the HEIC into an `<img>`, draw to canvas, export as JPEG. No libraries needed.
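In code, the Safari path can reuse the pipeline from earlier (a sketch; the helper name and the 0.92 quality are my assumptions, not MiniPx's source):

```javascript
// Sketch: Safari decodes HEIC like any other image, so the existing
// canvas pipeline handles the conversion with no decoder library.
async function convertHeicNatively(file) {
  const img = await loadImage(file); // helper from the earlier sketch
  return compressAtQuality(
    img, img.naturalWidth, img.naturalHeight, 'image/jpeg', 0.92
  );
}
```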
Chrome and Firefox users get heic2any, which is a WASM-based HEIC decoder. It's about 350KB, which is heavy, so I lazy-load it only when someone actually tries to convert a HEIC file:
```javascript
// Dynamic import: the WASM decoder is only fetched the first time it's needed
const heic2any = (await import('heic2any')).default;
return await heic2any({ blob: file, toType: 'image/jpeg', quality: 0.92 });
```
Safari users never download those 350KB. Chrome users only download them if they actually need HEIC conversion. Everyone else gets the lightweight path.
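Putting the two paths together looks something like this (a sketch; `convertHeicWithWasm` is a hypothetical name for a wrapper around the `heic2any` call above):

```javascript
// Sketch: choose the decode path per file. Safari takes the zero-dependency
// native path; everything else lazy-loads the WASM decoder.
async function heicToJpeg(file) {
  return (await supportsHEICNatively())
    ? convertHeicNatively(file)
    : convertHeicWithWasm(file); // hypothetical wrapper around heic2any
}
```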
## Stripping EXIF data (the privacy part)
This is maybe the most important feature and it's almost invisible. Photos from phones contain EXIF metadata: GPS coordinates, device model, serial numbers, timestamps, sometimes even your name.
When you re-draw an image through Canvas, the EXIF data doesn't come along. Canvas only sees pixels — it has no concept of metadata. So every image that passes through MiniPx comes out clean. No GPS. No device info. No timestamps.
I added a toggle for this ("Strip EXIF data") but it's on by default. The Canvas re-encoding handles it automatically — there's no extra code needed.
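If you want to verify this yourself, one quick check is to scan the output JPEG for an APP1 segment, which is where EXIF (and XMP) metadata lives. A verification sketch for illustration, not part of MiniPx:

```javascript
// Sketch: walk the JPEG segment headers and report whether APP1 metadata exists.
// Canvas-produced JPEGs should never contain one.
async function hasApp1Segment(blob) {
  const b = new Uint8Array(await blob.arrayBuffer());
  let i = 2; // skip the SOI marker (FF D8)
  while (i + 4 <= b.length && b[i] === 0xff) {
    if (b[i + 1] === 0xe1) return true;    // APP1: EXIF or XMP
    if (b[i + 1] === 0xda) break;          // SOS: pixel data begins
    i += 2 + ((b[i + 2] << 8) | b[i + 3]); // 2 marker bytes + segment length
  }
  return false;
}
```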
## The architecture
MiniPx is a Next.js 15 static site. No API routes. No database. No server functions. The entire thing is pre-rendered HTML + JS served from Netlify's CDN.
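The static export is a one-line switch in the Next.js config. A minimal sketch (assumption: MiniPx's real config may set more options than this):

```javascript
// next.config.mjs (sketch): standard Next.js static export setup
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',               // emit plain HTML/JS/CSS, no Node server needed
  images: { unoptimized: true },  // next/image optimization requires a server
};

export default nextConfig;
```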
Stack:
- Next.js 15 (static export)
- 5 client components: ImageTool, PDFTool, HEICTool, TrackedCTA, WebVitals
- Everything else is server-rendered (SEO content, schemas, navigation)
- 8 dependencies total
- Hosted on Netlify (free tier)

The First Load JS for any page is about 103-106KB. That's the entire app — React, the compressor, the UI, everything. For comparison, TinyPNG's homepage loads 2.4MB of JavaScript.

I'm pretty aggressive about keeping things server-rendered. The tool pages have long-form SEO content, FAQ accordions, and JSON-LD schemas, but all of that renders on the server as static HTML. The only client-side JavaScript is the actual image processing tool.

## What I'd do differently

**Batch processing is slow.** Right now, files are processed sequentially. Web Workers would let me compress multiple images in parallel, but the Canvas API doesn't work in Workers. OffscreenCanvas exists but browser support is spotty; I'm keeping an eye on it (see the sketch after this list).

**The PNG problem is unsolved.** Client-side PNG optimization is genuinely hard. There are WASM ports of pngquant and oxipng, but they add 500KB+ to the bundle. For now, the format-switching fallback works, but it's a hack.

**No preview.** You can't see the compressed image before downloading it. Adding a side-by-side preview would be better UX, but it means holding two blob URLs in memory per image, which gets expensive with batch uploads.
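For the batch-processing point above, here's roughly what the Worker path could look like once OffscreenCanvas support is broad enough (a sketch of a possible future direction, not code that ships in MiniPx):

```javascript
// worker.js (sketch only): compress off the main thread with OffscreenCanvas.
self.onmessage = async ({ data: { file, w, h, fmt, quality } }) => {
  const bitmap = await createImageBitmap(file); // decode without an <img> or DOM
  const canvas = new OffscreenCanvas(w, h);
  canvas.getContext('2d').drawImage(bitmap, 0, 0, w, h);
  const blob = await canvas.convertToBlob({ type: fmt, quality });
  self.postMessage(blob);
};
```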
## Try it

*MiniPx is free. No signup, no limits, no ads.*
If you're building something similar, the key insight is: Canvas + toBlob gives you 90% of what server-side image processing does, with zero infrastructure cost. The other 10% (PNG optimization, HEIC on non-Safari, advanced filters) requires WASM libraries, but you can lazy-load those so most users never pay the cost.