DEV Community

TateLyman

I Made a Free Photo Editor, Meme Generator, and Background Remover — All Client-Side

I've been on a kick lately building browser-based tools that don't upload your stuff anywhere. After doing PDF tools and converters, I figured image editing was the obvious next target. So I built three things: a photo editor, a meme generator, and a background remover. All running on Canvas API, all client-side, all free.

Here's what I learned about pushing Canvas to its limits.

The Photo Editor: CSS Filters Meet Canvas

Browsers have CSS filters — brightness(), contrast(), saturate(), blur(), etc. They're great for previewing edits, but if you want to actually export an edited image, you need to apply those filters at the pixel level using Canvas.

The nice thing is that the Canvas 2D context supports the same filter property as CSS (though browser support lagged for a while, notably in Safari, so it's worth feature-detecting before relying on it):

function applyFilters(sourceCanvas, filters) {
  const output = document.createElement('canvas');
  output.width = sourceCanvas.width;
  output.height = sourceCanvas.height;
  const ctx = output.getContext('2d');

  // Build the CSS filter string
  const filterStr = [
    `brightness(${filters.brightness}%)`,
    `contrast(${filters.contrast}%)`,
    `saturate(${filters.saturation}%)`,
    `hue-rotate(${filters.hueRotate}deg)`,
    `blur(${filters.blur}px)`,
    `sepia(${filters.sepia}%)`,
    `grayscale(${filters.grayscale}%)`
  ].join(' ');

  ctx.filter = filterStr;
  ctx.drawImage(sourceCanvas, 0, 0);

  return output;
}

This works great for standard adjustments. But for more advanced stuff — like selective color adjustments or vignettes — you need to go deeper with getImageData and manipulate pixels directly.

Here's a vignette effect that darkens the edges:

function applyVignette(canvas, intensity = 0.5) {
  const ctx = canvas.getContext('2d');
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const data = imageData.data;

  const cx = canvas.width / 2;
  const cy = canvas.height / 2;
  const maxDist = Math.sqrt(cx * cx + cy * cy);

  for (let y = 0; y < canvas.height; y++) {
    for (let x = 0; x < canvas.width; x++) {
      const i = (y * canvas.width + x) * 4;
      const dx = x - cx;
      const dy = y - cy;
      const dist = Math.sqrt(dx * dx + dy * dy) / maxDist;

      const darken = 1 - (dist * dist * intensity);
      data[i] *= darken;     // R
      data[i + 1] *= darken; // G
      data[i + 2] *= darken; // B
    }
  }

  ctx.putImageData(imageData, 0, 0);
}

The key insight: dist * dist gives you a smooth falloff from center to edge. Linear distance would look harsh. Squaring it makes the darkening ramp up gradually, which looks way more natural.
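Selective color adjustments follow the same getImageData pattern. Here's a sketch of a "color pop" effect that keeps reds and desaturates everything else; the redness test and luma weights are illustrative choices on my part, not the editor's actual code:

```javascript
// Desaturate every pixel that isn't "red enough". Works on an
// ImageData-like object ({ data: Uint8ClampedArray }).
function colorPop(imageData, strength = 1) {
  const data = imageData.data;
  for (let i = 0; i < data.length; i += 4) {
    const r = data[i], g = data[i + 1], b = data[i + 2];
    // Crude redness test: red clearly dominates both other channels.
    const isRed = r > g * 1.4 && r > b * 1.4;
    if (!isRed) {
      // Standard Rec. 601 luma weights for the grayscale value.
      const gray = 0.299 * r + 0.587 * g + 0.114 * b;
      data[i]     = r + (gray - r) * strength;
      data[i + 1] = g + (gray - g) * strength;
      data[i + 2] = b + (gray - b) * strength;
    }
  }
}
```

With strength = 1 non-red pixels go fully grayscale; lower values give a partial fade.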

The Meme Generator: Text Rendering on Canvas

Meme generation sounds trivial — slap some text on an image, right? But getting the text to look like an actual meme took more work than I expected.

The classic meme font is Impact, white with black outline. The outline is the tricky part. Canvas has strokeText and fillText, and you need both:

function drawMemeText(ctx, text, x, y, maxWidth, fontSize) {
  ctx.font = `bold ${fontSize}px Impact, Arial Black, sans-serif`;
  ctx.textAlign = 'center';
  ctx.textBaseline = 'top';

  // Word wrap
  const lines = wrapText(ctx, text, maxWidth);
  const lineHeight = fontSize * 1.1;

  lines.forEach((line, i) => {
    const ly = y + i * lineHeight;

    // Black outline — draw it thick
    ctx.lineWidth = fontSize / 12;
    ctx.strokeStyle = '#000000';
    ctx.lineJoin = 'round';
    ctx.miterLimit = 2;
    ctx.strokeText(line, x, ly);

    // White fill on top
    ctx.fillStyle = '#FFFFFF';
    ctx.fillText(line, x, ly);
  });
}

function wrapText(ctx, text, maxWidth) {
  const words = text.split(' ');
  const lines = [];
  let current = '';

  words.forEach(word => {
    const test = current ? current + ' ' + word : word;
    if (ctx.measureText(test).width > maxWidth && current) {
      lines.push(current);
      current = word;
    } else {
      current = test;
    }
  });
  if (current) lines.push(current);
  return lines;
}

Two things that make this look right:

  1. lineJoin = 'round' — Without this, the stroke corners on letters like W and M look spiky and weird.
  2. Stroke before fill — If you fill first and stroke second, the outline covers the white text. Stroke first, then fill on top so the white sits cleanly inside the outline.

I also added auto-sizing. If the text is too long, the font size shrinks until it fits. Nobody wants to manually pick font sizes on a meme generator.

function autoSizeFont(ctx, text, maxWidth, startSize = 72, minSize = 24) {
  for (let size = startSize; size >= minSize; size -= 2) {
    ctx.font = `bold ${size}px Impact, Arial Black, sans-serif`;
    const lines = wrapText(ctx, text, maxWidth);
    if (lines.length <= 3) return size;
  }
  return minSize;
}

The Background Remover: Flood Fill + Edge Detection

This one was the most fun to build. Full-blown background removal like Photoshop's relies on ML models (and honestly, the best ones run server-side). But you can get surprisingly decent results with a tolerance-based flood fill.

The idea: user clicks the background color, and the algorithm floods outward from that point, removing any pixel within a color tolerance:

function floodFillRemove(imageData, startX, startY, tolerance = 30) {
  const { data, width, height } = imageData;
  const visited = new Uint8Array(width * height);
  const stack = [[startX, startY]];

  const idx = (startY * width + startX) * 4;
  const targetR = data[idx];
  const targetG = data[idx + 1];
  const targetB = data[idx + 2];

  while (stack.length > 0) {
    const [x, y] = stack.pop();
    if (x < 0 || x >= width || y < 0 || y >= height) continue;

    const pos = y * width + x;
    if (visited[pos]) continue;
    visited[pos] = 1;

    const i = pos * 4;
    const dr = data[i] - targetR;
    const dg = data[i + 1] - targetG;
    const db = data[i + 2] - targetB;
    const diff = Math.sqrt(dr * dr + dg * dg + db * db);

    if (diff <= tolerance) {
      data[i + 3] = 0; // Set alpha to 0 (transparent)
      stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
    }
  }
}

This works well for solid-color backgrounds. A white wall, a green screen, a solid blue sky — the flood fill eats through those cleanly.

For gradients and complex backgrounds, I added an edge-aware mode that combines the flood fill with a Sobel edge detector. The edge detection finds boundaries in the image, and the flood fill stops at those boundaries:

function sobelEdges(imageData) {
  const { data, width, height } = imageData;
  const edges = new Float32Array(width * height);

  const gx = [-1, 0, 1, -2, 0, 2, -1, 0, 1];
  const gy = [-1, -2, -1, 0, 0, 0, 1, 2, 1];

  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      let sumX = 0, sumY = 0;
      let k = 0;

      for (let ky = -1; ky <= 1; ky++) {
        for (let kx = -1; kx <= 1; kx++) {
          const i = ((y + ky) * width + (x + kx)) * 4;
          const gray = (data[i] + data[i+1] + data[i+2]) / 3;
          sumX += gray * gx[k];
          sumY += gray * gy[k];
          k++;
        }
      }

      edges[y * width + x] = Math.sqrt(sumX * sumX + sumY * sumY);
    }
  }

  return edges;
}

The Sobel operator runs two 3x3 kernels across the image: one responding to horizontal intensity changes (vertical edges), one to vertical changes (horizontal edges). The magnitude at each pixel tells you how "edgy" that spot is. I use a threshold (usually around 40-60) to decide where the flood fill should stop.
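Wiring the two together looks roughly like this. It's a sketch of the approach rather than the tool's exact code; the function name and the edgeThreshold default are mine:

```javascript
// Flood fill that treats strong Sobel responses as walls. `edges` is the
// Float32Array of gradient magnitudes returned by sobelEdges().
function edgeAwareFloodFill(imageData, edges, startX, startY,
                            tolerance = 30, edgeThreshold = 50) {
  const { data, width, height } = imageData;
  const visited = new Uint8Array(width * height);
  const stack = [[startX, startY]];

  const idx = (startY * width + startX) * 4;
  const targetR = data[idx], targetG = data[idx + 1], targetB = data[idx + 2];

  while (stack.length > 0) {
    const [x, y] = stack.pop();
    if (x < 0 || x >= width || y < 0 || y >= height) continue;

    const pos = y * width + x;
    if (visited[pos]) continue;
    visited[pos] = 1;

    // Stop at detected edges: a strong gradient marks the subject's outline,
    // so the fill refuses to cross it even if the colors still match.
    if (edges[pos] > edgeThreshold) continue;

    const i = pos * 4;
    const dr = data[i] - targetR;
    const dg = data[i + 1] - targetG;
    const db = data[i + 2] - targetB;

    if (Math.sqrt(dr * dr + dg * dg + db * db) <= tolerance) {
      data[i + 3] = 0; // transparent
      stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
    }
  }
}
```

Same structure as the plain flood fill above, plus the one extra check against the edge map.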

It's not going to match Remove.bg on a photo of someone standing in a forest. But for product photos, logos, headshots with clean backgrounds — it works remarkably well for zero server cost.

Composite Operations: The Secret Weapon

Canvas has a feature called globalCompositeOperation that controls how new drawing operations blend with existing content. Most people only use the default (source-over), but the spec defines 26 values, a mix of Porter-Duff compositing operators and blend modes.

I use these heavily in the photo editor for layer effects:

// Color overlay
ctx.globalCompositeOperation = 'multiply';
ctx.fillStyle = '#ff6600';
ctx.fillRect(0, 0, canvas.width, canvas.height);

// Screen blend (lightens)
ctx.globalCompositeOperation = 'screen';
ctx.drawImage(overlayCanvas, 0, 0);

// Draw only where existing content is
ctx.globalCompositeOperation = 'source-atop';
ctx.drawImage(textureCanvas, 0, 0);

source-atop is particularly useful — it draws new content only where there are already opaque pixels. Great for applying textures or patterns to cut-out shapes without bleeding outside the edges.
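If you want intuition for what source-atop does at the pixel level, here's a deliberately simplified sketch that ignores partial transparency (the real operator blends fractional alpha per the Porter-Duff equations):

```javascript
// Simplified source-atop over two raw RGBA buffers of equal size:
// take the source's color wherever the destination is already opaque,
// and always keep the destination's alpha, so nothing bleeds outside
// the existing shape. Real compositing also handles fractional alpha.
function sourceAtopSimplified(destData, srcData) {
  for (let i = 0; i < destData.length; i += 4) {
    if (destData[i + 3] > 0 && srcData[i + 3] > 0) {
      destData[i]     = srcData[i];
      destData[i + 1] = srcData[i + 1];
      destData[i + 2] = srcData[i + 2];
      // destData[i + 3] is untouched: the destination's coverage wins.
    }
  }
}
```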

Performance Notes

Pixel manipulation with getImageData is slow on large images. A 4000x3000 photo has 12 million pixels, and iterating through all of them in JavaScript takes real time.

Two things that helped:

  1. Downsample for preview, full-res for export. While the user is adjusting sliders, I apply filters to an 800px-wide preview canvas. Only when they hit "Download" do I process the full resolution.

  2. Typed arrays over regular arrays. ImageData.data is already a Uint8ClampedArray, so operations on it are fast. But if you're doing intermediate calculations, use Float32Array instead of regular arrays — the typed array operations get JIT-optimized way better.
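The preview/export split from point 1 can be sketched like this. previewSize() is a pure helper for the aspect-ratio math; renderPreview() shows how it would feed a canvas. The names and the 800px constant mirror the description above but aren't the tool's actual API:

```javascript
const PREVIEW_WIDTH = 800;

// Scale dimensions down to maxWidth, preserving aspect ratio.
// Images already narrower than maxWidth are left alone.
function previewSize(width, height, maxWidth = PREVIEW_WIDTH) {
  if (width <= maxWidth) return { width, height };
  const scale = maxWidth / width;
  return { width: maxWidth, height: Math.round(height * scale) };
}

// Build the small canvas the sliders operate on.
function renderPreview(sourceCanvas) {
  const { width, height } = previewSize(sourceCanvas.width, sourceCanvas.height);
  const preview = document.createElement('canvas');
  preview.width = width;
  preview.height = height;
  // drawImage with a destination size downsamples in a single call.
  preview.getContext('2d').drawImage(sourceCanvas, 0, 0, width, height);
  return preview;
}
```

On "Download", the same filter pipeline runs once against the original full-resolution canvas instead.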

Try Them

All three tools are live and free:

No uploads, no accounts, no watermarks. Your images stay in your browser the entire time. If you're curious how any specific part works, the JavaScript is all right there in the page source.
