When I started building SammaPix, I had a decision to make: process images on a server like everyone else, or try something unconventional—handle everything in the browser.
The server approach seemed logical. Better performance, easier scaling, industry standard. But then I thought: why would I upload someone's family photos, medical documents, or confidential screenshots to my servers when I could process them right there in their browser?
That question led me down a rabbit hole of Canvas APIs, Web Workers, WebAssembly, and hard lessons about browser limitations. Here's what I learned building 20+ image tools entirely client-side.

*The compress tool: drop images, adjust quality, download — all in your browser.*
Why Client-Side Processing Isn't Actually Crazy
The moment I decided to go client-side, everyone asked the same question: "But doesn't the server make it faster?"
Not necessarily.
Here's the math: uploading a 5MB image to a server (200ms), processing it (300ms), downloading the result (200ms) = 700ms. Meanwhile, processing that same image in the browser with modern JavaScript? 150-400ms depending on the operation.
The bandwidth elimination is huge. But the real win isn't speed—it's privacy.
I could promise users their images stay private. Not "we delete them after 24 hours" or "they're encrypted in transit." Actually private. The image never leaves their device. Full stop.
That's not a marketing angle—that's the entire architecture.
The Architecture: What Actually Runs in the Browser
SammaPix has 20 tools. Most of them live entirely in client-side JavaScript:

*All 20 tools, organized by category.*
- Image compression - using the browser-image-compression library
- Format conversion - Canvas API for PNG, JPEG, WebP
- Resize and crop - Canvas transformation matrix
- Rotate, flip, invert - Canvas filters and pixel manipulation
- Blur, sharpen, brightness - Canvas filtering
- Batch processing - Web Workers to avoid blocking the UI
This is 80% of the functionality. Here's a simple example of how compression works:
```javascript
import imageCompression from 'browser-image-compression';

async function compressImage(file) {
  const options = {
    maxSizeMB: 1,
    maxWidthOrHeight: 1920,
    useWebWorker: true
  };
  try {
    return await imageCompression(file, options);
  } catch (error) {
    console.error('Compression failed:', error);
    return file; // fall back to the original instead of returning undefined
  }
}
```
That library handles the heavy lifting: quality reduction, JPEG chroma subsampling, WebP encoding where supported. It's battle-tested and open source.
For basic Canvas operations, the pattern is straightforward:
```javascript
function rotateImage(imageElement, degrees) {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  const rad = (degrees * Math.PI) / 180;
  const w = imageElement.width;
  const h = imageElement.height;
  // Size the canvas to the rotated image's bounding box so arbitrary
  // angles fit (a plain width/height swap only works for 90° and 270°)
  canvas.width = Math.round(Math.abs(w * Math.cos(rad)) + Math.abs(h * Math.sin(rad)));
  canvas.height = Math.round(Math.abs(w * Math.sin(rad)) + Math.abs(h * Math.cos(rad)));
  ctx.translate(canvas.width / 2, canvas.height / 2);
  ctx.rotate(rad);
  ctx.drawImage(imageElement, -w / 2, -h / 2);
  return canvas.toDataURL('image/jpeg', 0.95);
}
```
Canvas gives you a 2D drawing context. You transform it, draw the image, and export as a data URL. It's that simple.
The HEIC Problem (And Why I Added WebAssembly)
Then came iOS users.
When someone uploads a photo taken on an iPhone, it often arrives in HEIC format. Browser support for HEIC is... let's say "limited." Safari handles it. Chrome? Not reliably. Firefox? No.
I had three options:
- Tell iPhone users "sorry, use PNG"
- Upload to server to convert HEIC
- Use WebAssembly to decode HEIC in the browser
I chose option 3.
The heic2any library wraps a WebAssembly build of libheif to decode HEIC without hitting a server:
```javascript
import heic2any from 'heic2any';

async function handleHEICUpload(file) {
  if (file.type === 'image/heic' || file.type === 'image/heif') {
    try {
      const converted = await heic2any({
        blob: file,
        toType: 'image/jpeg'
      });
      // heic2any returns an array for multi-image HEIC containers
      return Array.isArray(converted) ? converted[0] : converted;
    } catch (error) {
      console.error('HEIC conversion failed:', error);
    }
  }
  return file;
}
```
The WebAssembly module (~300KB) gets fetched once and cached. Subsequent HEIC uploads use the cached version. The trade-off: first load includes that overhead, but after that, users own the entire conversion pipeline.
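One wrinkle worth noting: many browsers report an empty `file.type` for HEIC, so a MIME-type check alone can miss real HEIC files. Sniffing the container's `ftyp` box is more robust. A sketch of that check — the brand list here is illustrative, not exhaustive:

```javascript
// HEIC/HEIF files are ISO BMFF containers: bytes 4-7 spell 'ftyp' and
// bytes 8-11 hold the major brand. Checking those bytes is more reliable
// than file.type, which browsers often leave empty for HEIC.
const HEIC_BRANDS = ['heic', 'heix', 'hevc', 'heif', 'mif1']; // common, not exhaustive

function looksLikeHeic(bytes) {
  if (bytes.length < 12) return false;
  const ascii = (start, end) => String.fromCharCode(...bytes.slice(start, end));
  return ascii(4, 8) === 'ftyp' && HEIC_BRANDS.includes(ascii(8, 12));
}
```

In the browser you'd get those first bytes with `file.slice(0, 12).arrayBuffer()` before deciding whether to spin up the WebAssembly decoder.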
Where the Server Still Matters: AI Features
Here's where I'm honest: some features need a backend.
When a user asks SammaPix to automatically rename an image based on its content, or generate alt-text, those features call Google Gemini Flash API. The image gets sent to Google, briefly analyzed, and the text comes back.
I don't lie about this. The app clearly indicates which tools use AI and what that means:
```javascript
// These tools send the image to the Google Gemini API
const aiTools = [
  { id: 'auto-rename', serverRequired: true, privacy: 'analyzed by Google' },
  { id: 'alt-text-generator', serverRequired: true, privacy: 'analyzed by Google' },
];

// Everything else runs client-side
const clientTools = [
  { id: 'compress', serverRequired: false },
  { id: 'convert-format', serverRequired: false },
  { id: 'batch-resize', serverRequired: false },
  // ... 15 more client-side tools
];
```
For AI features, I send only what's necessary (the image), not any metadata, and I don't log or store the results. Google's API handles privacy per their terms. Users can choose to skip those features entirely and stay 100% local.
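That split can be enforced structurally rather than by convention: a guard that throws before any network call unless the tool is explicitly flagged. A hypothetical sketch (the registry mirrors the lists above):

```javascript
// Hypothetical guard: a tool may only reach the network if it is
// explicitly flagged serverRequired in the registry.
const registry = [
  { id: 'auto-rename', serverRequired: true },
  { id: 'alt-text-generator', serverRequired: true },
  { id: 'compress', serverRequired: false },
  { id: 'convert-format', serverRequired: false },
];

function assertMayUpload(toolId) {
  const tool = registry.find((t) => t.id === toolId);
  if (!tool) throw new Error(`Unknown tool: ${toolId}`);
  if (!tool.serverRequired) {
    throw new Error(`${toolId} is a client-only tool; refusing network call`);
  }
}
```

Calling `assertMayUpload('compress')` throws, so a stray `fetch()` in a client-only code path fails loudly instead of silently leaking an image.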
This hybrid approach lets me offer intelligence without violating the privacy promise.
Performance: Processing 500 Images in the Browser
The next challenge was batch processing. If someone uploads 500 images, processing them sequentially locks the UI. Each image takes 200-500ms, so 500 images = 2-4 minutes of frozen interface. Unacceptable.
Web Workers solve this:
```javascript
// main.js
const worker = new Worker('/imageWorker.js');
const imageQueue = [];

function addToBatch(file) {
  imageQueue.push(file);
}

function processBatch() {
  worker.postMessage({
    action: 'compress-batch',
    files: imageQueue,
    options: { maxSizeMB: 1, maxWidthOrHeight: 1920 }
  });
  imageQueue.length = 0; // reset the queue once it's handed off
}

worker.onmessage = (event) => {
  const { progress, result, error } = event.data;
  updateUI(progress);
  if (error) {
    console.error('Worker error:', error);
    return;
  }
  if (result) {
    downloadProcessedImage(result);
  }
};
```
Now processing happens on a separate thread. UI stays responsive. Users see progress in real-time.
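The snippet above is only the main-thread half. The worker side isn't shown; a minimal sketch of its message protocol might look like this, with the actual compression call replaced by a stand-in:

```javascript
// imageWorker.js - hypothetical sketch. A real worker would load a
// compression library (e.g. via importScripts); processOne here is a
// stand-in so the protocol (one progress/result message per file) is the focus.
globalThis.onmessage = (event) => {
  const { action, files, options } = event.data;
  if (action !== 'compress-batch') return;
  files.forEach((file, i) => {
    try {
      const result = processOne(file, options);
      globalThis.postMessage({ progress: (i + 1) / files.length, result });
    } catch (err) {
      globalThis.postMessage({ progress: (i + 1) / files.length, error: String(err) });
    }
  });
};

function processOne(file, options) {
  // Stand-in: real code would run the compression here
  return file;
}
```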
In practice, this handles 500 images without breaking a sweat. I've tested it with 1000+ images and the browser stays responsive.
Memory Management: The Real Gotcha
Here's what nobody tells you about client-side image processing: memory.
When you decompress a JPEG into Canvas, it expands to raw pixels. A 2MB JPEG becomes 8-12MB in memory (because each pixel is 4 bytes: RGBA).
Process 50 high-res images in parallel and you're looking at 400-600MB in memory. Mobile browsers start garbage-collecting aggressively. Desktop browsers slow down.
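The arithmetic is easy to verify: decoded RGBA is width × height × 4 bytes, regardless of how small the compressed file was.

```javascript
// Raw RGBA footprint of a decoded image: 4 bytes per pixel,
// independent of the compressed JPEG's size on disk.
function decodedSizeMB(width, height) {
  return (width * height * 4) / (1024 * 1024);
}

console.log(decodedSizeMB(1920, 1200).toFixed(1)); // "8.8" - a typical 2MB JPEG
console.log(decodedSizeMB(4000, 3000).toFixed(1)); // "45.8" - a 12MP phone photo
```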
The solution: process sequentially with cleanup between each file, and set explicit canvas memory limits:
```javascript
const MAX_CANVAS_SIZE = 4096; // Prevent 8K images from exploding memory

function constrainImageDimensions(width, height, maxSize = MAX_CANVAS_SIZE) {
  if (width > maxSize || height > maxSize) {
    const ratio = Math.min(maxSize / width, maxSize / height);
    return {
      width: Math.floor(width * ratio),
      height: Math.floor(height * ratio)
    };
  }
  return { width, height };
}
```
This ensures the browser doesn't grind to a halt on 8K images or extreme batch sizes.
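The sequential half of the fix is a plain loop with an explicit cleanup hook between files. A sketch — `processOne` and `cleanup` are hypothetical placeholders for the per-tool work and for releasing decoded buffers (revoking object URLs, dropping canvas references):

```javascript
// Hypothetical sequential runner: one image at a time, with a cleanup
// hook between files so decoded pixel buffers can be released before
// the next decode starts.
async function processSequentially(files, processOne, cleanup) {
  const results = [];
  for (const file of files) {
    results.push(await processOne(file));
    if (cleanup) await cleanup();
  }
  return results;
}
```

Compared to `Promise.all(files.map(processOne))`, this caps peak memory at one decoded image instead of all of them at once.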
The Numbers
After several months of SammaPix in the wild:
- 1M+ images compressed via browser
- Zero server costs for image processing (only API calls for AI features)
- 99.2% success rate (failures mostly user-environment issues: out of memory, browser crashes)
- Average processing time: 250ms per image compression
- Mobile support: Works on iOS Safari, Chrome, Firefox
Why This Approach Matters
Client-side processing isn't inherently better. It's a trade-off.
Pros:
- Genuine privacy (images never leave device)
- No server infrastructure costs
- Faster for simple operations
- Offline capable
Cons:
- Limited by device hardware
- Harder to implement complex features
- Browser inconsistencies (especially mobile)
- Users bear the computational cost
But if privacy is important to your users, if you want to eliminate data storage liability, or if you want to build a tool that works offline—client-side is worth the complexity.
What I'd Do Differently
If I built this again:
- Use a service worker earlier (not just for offline, but for caching and resource management)
- Implement SharedArrayBuffer for true multi-threaded processing (when browsers support it widely)
- Profile memory usage from day one (memory leaks hide in browser tools until you hit them at scale)
- Be transparent about AI features from the start (I added this later; it should be baked into the UX)
- Test on real devices not just emulators (mobile behavior is different)
The Privacy Argument
Here's the thing: most image tools send your images to a server. They'll tell you it's encrypted, deleted after 24 hours, GDPR compliant.
And maybe that's true. But wouldn't you rather not have to trust that?
SammaPix doesn't need your trust. The image never leaves. You can disable your network, run the app offline, and it still works. No backdoors. No data collection. No ToS change surprises.
That's not just a feature—it's a different category of application.
Takeaway
Building tools entirely in the browser forces you to learn the platform deeply. Canvas APIs, Web Workers, WebAssembly, service workers—you touch all of them.
Is it harder than a server-side equivalent? Sometimes. Is it worth it? Ask the developers who've built client-side tools while keeping infrastructure costs near zero.
The browser is powerful. We forget that sometimes.
If you want to see all this in action, check out SammaPix — it's free and open to try.
Built with Next.js 14, Canvas API, and way too much time in the browser DevTools.