
Roman Popovych
How I Built a Privacy-First Browser Tool Suite to Fix My Own Freelance Workflow (No Backend, No Uploads, No Nonsense)

Ever had one of those freelance days where you spend more time switching between tabs than actually building something?

Client sends over 10 raw photos, 3MB each. No logo. No favicon. SEO meta tags that need to be written from scratch. You open one site to convert images, another to compress them, a third to generate favicons. One of them has a casino ad that takes up half the screen. Another one slaps a watermark on your file without warning. A third one probably uploads your images somewhere and you have no idea where they go.

I counted once. Six different sites for one client project. Before writing a single line of code.

At some point I just got tired of it and decided to build my own thing. Not because I thought it would be easy, but because I had 1-2 hours after work and nothing better to do with them.

That project is devtools.abect.com — a set of free browser-based tools for developers. Image converters, compressors, favicon generator, SEO meta tag generator. No backend, no account, no watermarks, no ads.

This is the story of how it works under the hood.


The Problem with "Free" Online Tools

Most online converters work the same way: you pick a file, it gets uploaded to their server, processed somewhere you can't see, and sent back to you. You just hand over your client's assets to a random server and hope for the best.

For personal photos, maybe fine. For client work — internal screenshots, unreleased brand assets, documents — that's a different story.

Beyond privacy, there's the UX problem. File size limits. Daily caps on the free tier. Watermarks you don't notice until you've already sent the file to the client. Ads that make the page unusable on mobile.

I wanted something that just works. No friction, no accounts, no surprises.


The Architecture: Why Zero Backend?

The core idea was simple: run everything in the browser.

No server means no uploads. No uploads means no privacy risk. No server also means no hosting costs, no infrastructure to maintain, and no rate limits to worry about.

Modern browsers are surprisingly capable. Between the Canvas API, File API, Blob URLs, and Web Crypto, you can do serious image processing entirely client-side. The question wasn't "can this be done?" but "how far can this actually go?"

Turns out — pretty far.

Here is the full stack of browser APIs the project uses, and what each one actually does:

Canvas API — the core of every image operation. Draws the image onto an off-screen canvas, applies transformations, and exports to the target format via toBlob(). Every converter, compressor, and the favicon renderer goes through this pipeline.

File API — reads files dropped or selected by the user directly in the browser. Files never touch a network request. They go straight into the Canvas pipeline.

Blob URL API — creates in-memory object URLs for previews and downloads. URL.createObjectURL() gives you an instant download link from raw binary data, no server round-trip required.

Web Crypto API — generates unique IDs for each file in a batch queue using crypto.randomUUID(). Stateless, no tracking, collision-proof.

JSZip — assembles ZIP archives from Blob objects inside the browser tab. Batch download of 20 converted images: one ZIP, zero server calls.

TypedArrays + DataView — used specifically for .ico file generation. More on this in a second.

Everything verifiable. Open DevTools, go to the Network tab, drop a file in. You will see zero outgoing file transfers.


How the Image Pipeline Actually Works

The Canvas-based image conversion pipeline is straightforward once you understand it, but there are a few non-obvious details.

async function convertImage(file, targetFormat, quality = 0.85) {
  // Step 1: decode the file into an ImageBitmap
  const bitmap = await createImageBitmap(file);

  // Step 2: draw onto an off-screen canvas
  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
  const ctx = canvas.getContext('2d');
  ctx.drawImage(bitmap, 0, 0);
  bitmap.close(); // free the decoded bitmap's memory early

  // Step 3: export to the target format
  const blob = await canvas.convertToBlob({
    type: `image/${targetFormat}`,
    quality, // ignored for lossless formats like PNG
  });

  return blob;
}

OffscreenCanvas is the key detail here. Unlike a regular canvas, it is not tied to the DOM and can be transferred to a Web Worker, which keeps heavy batch processing off the main thread. Combined with the async convertToBlob(), the UI stays responsive while a batch of 20 images is churning. No freezing, no spinner blocking the page.

For compression, the same pipeline applies — same format in and out, just a lower quality value. The live preview before download is just a second canvas render at the target quality, displayed immediately.
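The "same pipeline, lower quality" idea can be sketched as a small helper. This is my own hypothetical code, not the site's implementation: it walks down a quality ladder until the output fits under a byte budget, given any async encoder such as `(q) => convertImage(file, 'jpeg', q)` from above.

```javascript
// Try progressively lower quality settings until the encoded blob
// fits under maxBytes. `encode` is any async (quality) => blob function.
async function fitUnderBudget(encode, maxBytes, steps = [0.9, 0.8, 0.7, 0.6, 0.5]) {
  let last = null;
  for (const quality of steps) {
    const blob = await encode(quality);
    last = { blob, quality };
    if (blob.size <= maxBytes) return last; // first quality that fits wins
  }
  return last; // nothing fit: return the smallest attempt
}
```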

One edge case that took some debugging: AVIF export via convertToBlob has inconsistent browser support. Chrome supports it, Firefox does not. The solution was a format support detection step at initialization:

function detectSupportedFormats() {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = 1;

  const formats = ['webp', 'avif', 'jpeg'];
  const supported = {};

  for (const fmt of formats) {
    // toDataURL silently falls back to image/png when the requested
    // format is unsupported, so checking the prefix is enough
    const dataURL = canvas.toDataURL(`image/${fmt}`);
    supported[fmt] = dataURL.startsWith(`data:image/${fmt}`);
  }

  return supported;
}

If AVIF is not supported, the option is hidden from the UI rather than shown as a broken feature.
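The UI gating itself is a one-liner. A hypothetical helper (mine, not the project's code) that takes the detection result and decides which output formats the dropdown should offer, with PNG always allowed since every browser can encode it:

```javascript
// Filter the format dropdown against the detection result.
// PNG is unconditionally safe; everything else must be confirmed.
function availableOutputFormats(supported, all = ['png', 'jpeg', 'webp', 'avif']) {
  return all.filter((fmt) => fmt === 'png' || supported[fmt] === true);
}
```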


The Interesting Part: Building .ico Files from Scratch

This was the part I did not expect to spend time on.

Most favicon generators either use a library or just output a PNG and rename it .ico. Neither approach is quite right. A proper .ico file is a binary container format that can hold multiple image sizes in a single file. Browsers pick the right one depending on context.

The Canvas API cannot produce .ico files natively. toBlob() supports image/png, image/jpeg, image/webp — that's it. So building a multi-size .ico had to be done manually with TypedArrays and DataView.

Here is the structure of an ICO file:

[ICONDIR header]       — 6 bytes
[ICONDIRENTRY × N]     — 16 bytes per image
[Image data × N]       — PNG or BMP blobs

In code:

async function buildIcoFile(pngImages) {
  // pngImages: array of { blob, width, height }
  const count = pngImages.length;
  const headerSize = 6 + count * 16;

  // total size = directory + all PNG payloads
  const totalSize = headerSize + pngImages.reduce((acc, { blob }) => acc + blob.size, 0);
  const buffer = new ArrayBuffer(totalSize);
  const view = new DataView(buffer);
  const bytes = new Uint8Array(buffer);

  // ICONDIR header
  view.setUint16(0, 0, true);      // reserved, must be 0
  view.setUint16(2, 1, true);      // type: 1 = ICO
  view.setUint16(4, count, true);  // number of images

  let dataOffset = headerSize;

  for (let i = 0; i < count; i++) {
    const { blob, width, height } = pngImages[i];
    const entryOffset = 6 + i * 16;

    view.setUint8(entryOffset,     width  < 256 ? width  : 0); // 0 means 256
    view.setUint8(entryOffset + 1, height < 256 ? height : 0);
    view.setUint8(entryOffset + 2, 0);   // color palette count (0 = no palette)
    view.setUint8(entryOffset + 3, 0);   // reserved
    view.setUint16(entryOffset + 4, 1, true);  // color planes
    view.setUint16(entryOffset + 6, 32, true); // bits per pixel
    view.setUint32(entryOffset + 8, blob.size, true);   // image data size
    view.setUint32(entryOffset + 12, dataOffset, true); // offset from file start

    // write the PNG bytes at the offset the directory entry points to
    bytes.set(new Uint8Array(await blob.arrayBuffer()), dataOffset);
    dataOffset += blob.size;
  }

  return new Blob([buffer], { type: 'image/x-icon' });
}

Each PNG is rendered at 16x16, 32x32, 48x48, and 64x64 via Canvas, then packed into a single ICO container. The result downloads as a proper multi-resolution favicon — same as what a desktop application would produce.
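The offset arithmetic is where most of the debugging time went, so here is the layout math pulled out into a standalone helper. This is my own illustration, not code from the project: given the PNG byte sizes, it computes where each image's data starts in the final .ico file.

```javascript
// Each image's data begins right after the 6-byte ICONDIR header
// plus one 16-byte ICONDIRENTRY per image, then the preceding payloads.
function icoDataOffsets(blobSizes) {
  const headerSize = 6 + blobSizes.length * 16;
  const offsets = [];
  let offset = headerSize;
  for (const size of blobSizes) {
    offsets.push(offset);
    offset += size;
  }
  return offsets;
}
```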

This was the part that required actually reading the ICO spec and working through the binary layout by hand. No library. Just DataView, a lot of offset arithmetic, and eventually a working .ico file.


What I Learned About AI-Assisted Development

The entire project was built with Claude Code. I set a rule at the start: no hand-written code. I wanted to see how far that could go.

It went pretty far — but not in the way I expected.

The AI handles boilerplate well. Repetitive logic, utility functions, component scaffolding — fast and clean. Where it struggles is multi-step browser API chains with edge cases. The Canvas pipeline for batch processing with OffscreenCanvas required several correction rounds. The ICO binary layout had to be verified offset by offset.

The role I ended up playing was less "developer" and more "spec reader + QA." I read the ICO binary format documentation, checked the Canvas API behavior across browsers, and caught the places where the output was technically valid but practically wrong. The AI wrote the code; I told it why it was wrong and what to fix.

That's a different workflow, not a better or worse one. Just different.


SEO Structure: One Tool, One URL

One decision that shaped the whole project was URL structure.

The obvious approach is a single converter page: /convert?from=png&to=jpg. One page, one component, all formats handled by query params.

I went the other direction: /png-to-jpg, /jpg-to-webp, /webp-to-avif — each format pair gets its own page with its own content, meta tags, and JSON-LD schema.

The reasoning: someone searching "png to jpg online" is not the same person as someone searching "compress webp file." Different intent, different content, different page. A generic converter page captures neither well.

Each tool page has:

  • A HowTo JSON-LD schema with numbered steps
  • A FAQPage schema with 8-9 questions (same content renders as the visible FAQ, no duplication)
  • A format comparison table (good for featured snippet capture)
  • Implementation guides with code examples for React, Next.js, Vue, and WordPress
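For a sense of what those schemas look like, here is a hypothetical, simplified shape of the HowTo JSON-LD a tool page might emit (the field values are my illustration, not the site's actual markup):

```javascript
// Plain object serialized with JSON.stringify into a
// <script type="application/ld+json"> tag on the tool page.
const howToSchema = {
  '@context': 'https://schema.org',
  '@type': 'HowTo',
  name: 'Convert PNG to JPG in the browser',
  step: [
    { '@type': 'HowToStep', position: 1, name: 'Drop or select your PNG file' },
    { '@type': 'HowToStep', position: 2, name: 'Adjust the output quality' },
    { '@type': 'HowToStep', position: 3, name: 'Download the converted JPG' },
  ],
};
```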

Whether this actually moves rankings is an open question. I started tracking in Google Search Console after the SEO overhaul in late April. Will share results in a follow-up post once there is enough data to say anything useful.


What's Next

Right now the project has 26 tools. A few things I'm weighing for what comes next:

  • JSON-LD Generator — fill in a form, get valid structured data for any schema type. Useful for the same workflow that led to building the meta tag generator.
  • SVG Optimizer — strip unnecessary metadata and reduce file size client-side. SVGO in the browser.
  • CSS Gradient Generator — simple, high search volume, fits the browser-based model well.

The constraint is time, not ideas. One new tool per week is realistic while keeping the existing ones properly maintained.


Try It

devtools.abect.com — free, no account, no watermarks, no ads. Works offline after the first load.

If you find a bug or something behaves wrong in your browser, I want to know. And if you have a tool you keep opening a separate site for — tell me in the comments. That's basically how this whole project started.

Top comments (4)

Yura Fedoryszyn

Yeah great idea, i also thought of it!

Roman Popovych

thx for support

Dana Melay

Great post — really impressive work! Keep moving in the same direction, you’re growing and improving with every step 🚀

Roman Popovych

sounds like an AI bot
but also thx