Most online file converters have a dirty secret: they upload your files to their servers, process them, and send them back. Even the ones that promise "files are deleted after 1 hour" — you're still trusting a third party with your passport photos, medical records, and personal documents.
I wanted to fix this. So I built OneWeeb — a file converter that runs 100% in your browser. Your files never leave your device. Ever.
Here's how it works under the hood.
The Problem With Server-Side Converters
When you use a typical online converter:
Your Device → Upload → Their Server → Process → Download → Your Device
Your file travels across the internet twice. It sits on someone else's server. You have no control over what happens to it.
With client-side processing:
Your Device → Process → Your Device
That's it. No network requests. No server. No trust required.
The Tech Stack
Everything runs on vanilla JavaScript and browser APIs. No frameworks, no build tools. Here's what powers each conversion type:
Image Conversion (JPG ↔ PNG ↔ WebP)
The Canvas API is the backbone of all image conversions:
```javascript
function convertImage(file, outputFormat, quality = 0.92) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    const url = URL.createObjectURL(file);
    img.onload = () => {
      const canvas = document.createElement('canvas');
      canvas.width = img.width;
      canvas.height = img.height;
      const ctx = canvas.getContext('2d');
      ctx.drawImage(img, 0, 0);
      URL.revokeObjectURL(url); // free the object URL once the image is drawn
      canvas.toBlob(
        (blob) => resolve(blob),
        `image/${outputFormat}`,
        quality
      );
    };
    img.onerror = reject;
    img.src = url;
  });
}
```
The magic is canvas.toBlob(). It takes whatever is drawn on the canvas and exports it in any format the browser supports: JPEG, PNG, WebP, even AVIF in newer browsers.
Key insight: The browser already has built-in encoders/decoders for all major image formats. We're just leveraging what's already there.
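Not every browser has every encoder, though — WebP encoding arrived late in Safari, and AVIF output is still patchy. A small feature check (a sketch of the idea, not OneWeeb's actual code) can decide which output options to offer:

```javascript
// Returns true if the browser's canvas encoder can produce the given MIME type.
// toDataURL() silently falls back to image/png for unsupported types,
// so we check whether the requested type actually appears in the result.
function supportsImageFormat(mimeType) {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = 1;
  return canvas.toDataURL(mimeType).startsWith(`data:${mimeType}`);
}

// Example: hide the WebP option when encoding isn't available
// if (!supportsImageFormat('image/webp')) { /* grey out the button */ }
```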
Image Compression
Compression is just conversion with a lower quality parameter:
```javascript
async function compressToTargetSize(file, targetKB) {
  // Binary search for the highest quality that stays under the target size
  let min = 0.1, max = 0.95;
  let best = null;
  while (max - min > 0.02) {
    const quality = (min + max) / 2;
    const blob = await convertImage(file, 'jpeg', quality);
    if (blob.size / 1024 > targetKB) {
      max = quality; // too big: lower the quality ceiling
    } else {
      min = quality; // fits: remember it and try a higher quality
      best = blob;
    }
  }
  // Fall back to minimum quality if even that overshoots the target
  return best || convertImage(file, 'jpeg', min);
}
```
This is how the "Compress to 50KB" and "Compress to 100KB" tools work: binary search homes in on the highest quality level that still fits under the target file size. With the 0.1–0.95 range and 0.02 tolerance above, it converges in about six iterations.
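That iteration count falls straight out of the search parameters — each pass halves the quality interval, so the loop runs until the 0.85-wide range shrinks below the 0.02 tolerance:

```javascript
// Each binary-search pass halves the interval, so the number of
// passes is ceil(log2(range / tolerance)).
const range = 0.95 - 0.1; // initial quality interval
const tolerance = 0.02;   // stop when the interval is this narrow
const iterations = Math.ceil(Math.log2(range / tolerance));
console.log(iterations); // 6 re-encodes per compression
```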
JPG to PDF
For PDF generation, I use jsPDF loaded from CDN:
```javascript
async function imagesToPDF(files, pageSize = 'a4') {
  const { jsPDF } = window.jspdf;
  const pdf = new jsPDF({ orientation: 'p', unit: 'mm', format: pageSize });
  for (let i = 0; i < files.length; i++) {
    if (i > 0) pdf.addPage();
    const imgData = await readFileAsDataURL(files[i]);
    const dims = await getImageDimensions(imgData);
    const pageWidth = pdf.internal.pageSize.getWidth();
    const pageHeight = pdf.internal.pageSize.getHeight();
    const margin = 10; // mm
    // Scale image to fit page while maintaining aspect ratio
    const { width, height } = fitToPage(
      dims.width, dims.height,
      pageWidth - margin * 2, pageHeight - margin * 2
    );
    // Center on page
    const x = margin + (pageWidth - margin * 2 - width) / 2;
    const y = margin + (pageHeight - margin * 2 - height) / 2;
    pdf.addImage(imgData, 'JPEG', x, y, width, height);
  }
  return pdf.output('blob');
}
```
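The snippet above leans on three small helpers that aren't shown. OneWeeb's actual implementations may differ; a minimal sketch:

```javascript
// Scale (w, h) to fit inside (maxW, maxH), preserving aspect ratio;
// never upscale beyond the original size
function fitToPage(w, h, maxW, maxH) {
  const scale = Math.min(maxW / w, maxH / h, 1);
  return { width: w * scale, height: h * scale };
}

// Read a File as a data URL via FileReader
function readFileAsDataURL(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result);
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
}

// Load a data URL into an Image to read its natural dimensions
function getImageDimensions(dataURL) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve({ width: img.width, height: img.height });
    img.onerror = reject;
    img.src = dataURL;
  });
}
```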
The entire PDF is generated in memory and offered as a download via URL.createObjectURL(). No server involved.
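The download step itself looks roughly like this — a sketch, with a function name of my choosing rather than anything from the OneWeeb source:

```javascript
// Trigger a client-side "download" of an in-memory Blob
function downloadBlob(blob, filename) {
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url); // release the blob reference afterwards
}

// Usage: downloadBlob(await imagesToPDF(files), 'converted.pdf');
```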
PDF to JPG
This one uses PDF.js (Mozilla's PDF rendering library):
```javascript
async function pdfToImages(file, scale = 2, quality = 0.9) {
  const arrayBuffer = await file.arrayBuffer();
  const pdf = await pdfjsLib.getDocument({ data: arrayBuffer }).promise;
  const images = [];
  for (let i = 1; i <= pdf.numPages; i++) {
    const page = await pdf.getPage(i);
    const viewport = page.getViewport({ scale });
    const canvas = document.createElement('canvas');
    canvas.width = viewport.width;
    canvas.height = viewport.height;
    const ctx = canvas.getContext('2d');
    ctx.fillStyle = '#FFFFFF';
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    await page.render({ canvasContext: ctx, viewport }).promise;
    const blob = await new Promise(
      resolve => canvas.toBlob(resolve, 'image/jpeg', quality)
    );
    images.push(blob);
  }
  return images;
}
```
PDF.js renders each page onto a canvas, then we export the canvas as JPEG. The scale parameter controls resolution — 2x is great for most uses, 3x for print quality.
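One setup detail worth knowing when loading PDF.js from a CDN: getDocument() expects a worker script to be configured first. The exact URL depends on the version you pin — the one below is an example, not necessarily what OneWeeb ships:

```javascript
// PDF.js offloads parsing to a web worker; point it at the pdf.worker
// build matching the pdf.js version the page loads from the CDN.
// (Guarded so the snippet is a no-op where pdfjsLib isn't loaded.)
const PDFJS_WORKER_SRC =
  'https://cdnjs.cloudflare.com/ajax/libs/pdf.js/3.11.174/pdf.worker.min.js';
if (typeof pdfjsLib !== 'undefined') {
  pdfjsLib.GlobalWorkerOptions.workerSrc = PDFJS_WORKER_SRC;
}
```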
💡 Deep dive: I wrote a full step-by-step tutorial on this specifically — How to Build a Client-Side PDF to JPG Converter (No Server Required) — covers canvas size limits, white background fix, quality vs scale tradeoffs, and password-protected PDFs.
Audio Extraction (MP4 → WAV/MP3)
This uses the Web Audio API:
```javascript
async function extractAudio(videoFile) {
  const audioContext = new AudioContext();
  const arrayBuffer = await videoFile.arrayBuffer();
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
  await audioContext.close();
  // Convert to WAV
  return audioBufferToWav(audioBuffer);
}

// Write an ASCII string into the DataView byte by byte
function writeString(view, offset, str) {
  for (let i = 0; i < str.length; i++) {
    view.setUint8(offset + i, str.charCodeAt(i));
  }
}

function audioBufferToWav(buffer) {
  const numChannels = buffer.numberOfChannels;
  const sampleRate = buffer.sampleRate;
  const format = 1; // PCM
  const bitDepth = 16;
  // Interleaved sample data size in bytes
  const length = buffer.length * numChannels * (bitDepth / 8);
  const wavBuffer = new ArrayBuffer(44 + length);
  const view = new DataView(wavBuffer);
  // Write the 44-byte WAV header
  writeString(view, 0, 'RIFF');
  view.setUint32(4, 36 + length, true);
  writeString(view, 8, 'WAVE');
  writeString(view, 12, 'fmt ');
  view.setUint32(16, 16, true); // fmt chunk size
  view.setUint16(20, format, true);
  view.setUint16(22, numChannels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * numChannels * (bitDepth / 8), true); // byte rate
  view.setUint16(32, numChannels * (bitDepth / 8), true); // block align
  view.setUint16(34, bitDepth, true);
  writeString(view, 36, 'data');
  view.setUint32(40, length, true);
  // Interleave channels and write 16-bit PCM samples
  let offset = 44;
  for (let i = 0; i < buffer.length; i++) {
    for (let ch = 0; ch < numChannels; ch++) {
      const sample = Math.max(-1, Math.min(1, buffer.getChannelData(ch)[i]));
      view.setInt16(offset, sample < 0 ? sample * 0x8000 : sample * 0x7FFF, true);
      offset += 2;
    }
  }
  return new Blob([wavBuffer], { type: 'audio/wav' });
}
```
The browser decodes the video's audio track, and we manually construct a WAV file by writing the binary header + PCM data. Pure JavaScript, no FFmpeg, no server.
The "Offline Test" — Proving It's Really Client-Side
Here's my favorite party trick: disconnect from the internet after the page loads, and everything still works.
Try it yourself:
- Go to oneweeb.com
- Wait for the page to fully load
- Turn off WiFi / enable airplane mode
- Convert a file
It works. Because there's nothing to "phone home" to.
You can also verify this by opening DevTools → Network tab and watching during conversion. Zero network requests.
Performance Considerations
Client-side processing has trade-offs:
Advantages:
- Instant conversion (no upload/download wait)
- Works offline
- No server costs
- Unlimited conversions
- True privacy
Limitations:
- Large files consume browser memory
- No access to server-side libraries (FFmpeg, ImageMagick)
- Processing speed depends on user's device
- Some conversions aren't possible client-side (e.g., DOC to PDF)
For the conversions that ARE possible client-side (images, basic PDF, audio extraction), the user experience is dramatically better. No progress bars waiting for uploads. No "your file will be ready in 30 seconds." Just instant results.
What I Learned
Browsers are incredibly powerful. Canvas API, Web Audio API, FileReader, Blob, ArrayBuffer — these APIs can handle most common file conversions without any server.
CDN libraries fill the gaps. jsPDF for PDF creation, PDF.js for PDF reading, JSZip for ZIP files. All run client-side.
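To give a flavor of how little glue those libraries need — batching several converted files into one archive with JSZip takes a handful of lines. This is a sketch assuming JSZip is loaded globally from a CDN, not OneWeeb's actual batching code:

```javascript
// Zip several converted files into one downloadable Blob,
// entirely client-side (assumes a global JSZip from a CDN).
async function zipBlobs(entries) {
  const zip = new JSZip();
  for (const { name, blob } of entries) {
    zip.file(name, blob);
  }
  return zip.generateAsync({ type: 'blob' });
}

// Usage: const archive = await zipBlobs([{ name: 'page1.jpg', blob: jpg1 }]);
```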
Privacy is a feature, not a constraint. When I tell people "your files never leave your device," their reaction is always: "Wait, that's possible?" It's a genuine differentiator.
SEO matters more than I expected. Building the tool is 30% of the work. Getting people to find it is the other 70%.
Try It Out
🔗 OneWeeb.com — Free, private, browser-based file converter
Tools include:
- Image conversion (JPG, PNG, WebP, SVG, GIF)
- Image compression (to specific KB targets)
- JPG to PDF / PDF to JPG
- Audio extraction from video
- Temporary email service
Everything is free, no signup required, no daily limits.
If you have questions about the implementation or want to discuss client-side file processing, drop a comment below!
Have you built anything using browser APIs that surprised you with what's possible? I'd love to hear about it.
