Genglin Zheng
My side project just had its best month ever. I have no idea why. Here's what happened.

Okay, so this is a little embarrassing to admit:

I launched bulkpictools.com three months ago and genuinely forgot to check
the analytics for like two weeks straight.

When I finally opened Google Search Console, I had to look twice.

Month 3 traffic was more than Month 1 and Month 2 put together.

I'm not going to pretend I have a clean explanation for this. I did some SEO
work around the end of Month 2 — rewrote a bunch of meta descriptions, cleaned
up the page content so it actually matched what people were searching for,
stopped being lazy about alt text. Basic stuff. The kind of stuff you tell
yourself you'll do later and then actually do later.

Did that cause the jump? Maybe. Probably? Google took its time, which, honestly,
very on-brand for Google.


But while I was waiting for the SEO stuff to kick in (or not), I shipped
something I'd been wanting to build for a while.

A background remover. Except the whole model runs in the browser.

No server. Zero backend calls. The image never goes anywhere.

I know "client-side AI" sounds like it should be complicated, but the actual
core of it is pretty straightforward once you get ONNX Runtime Web set up:

```javascript
// Load the model once and reuse the session across images.
const session = await ort.InferenceSession.create('./model.onnx');
const tensor = preprocessImage(imageData); // image -> normalized input tensor
const { output } = await session.run({ input: tensor }); // keyed by the model's output name
applyAlphaMask(canvas, originalImage, output); // write the mask into the alpha channel
```
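(`preprocessImage` in that snippet is my own helper, not part of ONNX Runtime. A minimal sketch of what it does, assuming the model wants a normalized NCHW float32 input, looks something like this; the real version wraps the result in `new ort.Tensor('float32', data, [1, 3, height, width])`.)

```javascript
// Hypothetical helper: turns RGBA pixel data (as in canvas ImageData.data)
// into a channel-planar Float32Array (NCHW layout, values scaled to 0-1),
// which is what most segmentation models exported to ONNX expect.
function imageDataToNCHW(rgba, width, height) {
  const plane = width * height;
  const out = new Float32Array(3 * plane);
  for (let i = 0; i < plane; i++) {
    out[i] = rgba[i * 4] / 255;                 // R plane
    out[plane + i] = rgba[i * 4 + 1] / 255;     // G plane
    out[2 * plane + i] = rgba[i * 4 + 2] / 255; // B plane (alpha byte is skipped)
  }
  return out;
}
```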

(Screenshots: selecting several local images, then the backgrounds being removed automatically.)
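(`applyAlphaMask` is also hand-rolled. The core of it is just copying the model's foreground mask into the image's alpha channel; a sketch, assuming the mask has already been resized to the image and scaled to 0-1. The real version reads the mask out of the output tensor and `putImageData()`s the result back onto the canvas.)

```javascript
// Hypothetical helper: writes a 0-1 foreground mask into the alpha channel
// of RGBA pixel data, in place. Pixels where the mask is 0 become fully
// transparent -- that is the "background removed" effect.
function applyAlphaToPixels(rgba, mask) {
  for (let i = 0; i < mask.length; i++) {
    rgba[i * 4 + 3] = Math.round(mask[i] * 255); // alpha byte of pixel i
  }
  return rgba;
}
```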
The annoying part wasn't the model. It was memory. When someone tries to
process 10 high-res photos back to back, things get ugly fast. Still working
through that. Shipping > perfect, etc.
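One mitigation that has helped so far: forcing uploads through a strictly sequential queue, so at most one decoded full-resolution image (and its tensors) is alive at a time instead of ten. A sketch (the names here are mine, not a library API; `processOne` would decode the file, run the model, and draw the result before returning):

```javascript
// Minimal sequential queue: each job starts only after the previous one
// settles, so memory use stays bounded by a single in-flight image.
function createSequentialQueue(processOne) {
  let tail = Promise.resolve();
  return function enqueue(file) {
    const result = tail.then(() => processOne(file));
    tail = result.catch(() => {}); // keep the chain alive if one job fails
    return result;
  };
}
```

Callers can still fire off all ten files at once; they just resolve one by one instead of blowing up the heap together.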


The thing nobody mentions about client-side inference is how weird it feels
the first time it actually works. You drag in a photo, the model loads,
and suddenly the background is just... gone. Locally. No spinner waiting
on a server response. It's fast in a way that feels slightly wrong.

Privacy-wise it's also just cleaner. I don't have to store anything,
process anything server-side, or explain to users where their photos go.
They go nowhere. That's the whole point.


Quick note on how this actually gets built:

I have a full-time job. This entire project gets worked on during my commute —
phone in one hand, trying not to miss my stop with the other. Most of the
testing happens on my phone screen, which is a chaotic way to do QA but
also weirdly effective because if it works on a moving subway, it works.

I'm not romanticizing the grind or whatever. It's just the actual situation.
You ship with the time you have.


Anyway. Month 4 starts now. Continuing to validate the AI tools,
fix what's broken, and figure out which features people actually use
vs which ones I thought they'd use (always humbling).

If you've built anything client-side AI recently — especially anything
dealing with image processing in the browser — drop a comment.
I'm curious what models you're running and what memory headaches
you've run into.

bulkpictools.com — go break something and tell me what breaks.
