Ranjith Kumar

Originally published at Medium

The Developer Owns the UX. The AI Owns the Code.

My mom does bead art. The kind where you sit with a tray of tiny plastic beads and, over hours — sometimes days — assemble them into an intricate portrait or devotional motif. It's meditative, precise, and deeply personal.

The bottleneck has always been the pattern. You can't look at a photograph and start placing beads. You need to know exactly which bead goes where, in what color, on a grid that maps to the physical constraints of the project: how wide it is, how many colors of beads you've bought, how coarse or fine the detail needs to be.

She was doing this by eye, or with rough printouts. I kept thinking: there has to be a better way. So I opened Gemini and started a conversation.

What came out of that is BeadGen — a fully local, zero-dependency browser tool that converts any photo into a ready-to-stitch bead pattern. No backend. No npm. No install. You open an HTML file and use it.

But this post isn't really about the tool. It's about something I learned while building it: when you let AI write the code, the developer's most important job becomes the experience.


What It Actually Does

Before the technical deep-dive, here's the simplest way to show it.

Take the Golden Gate Bridge at sunset — rich gradients, a complex rust-red structure, water, sky, fog, warm light hitting the cables at an angle. Thousands of colors.

Run it through BeadGen at 150 beads wide, No Gradient Mode on, full color palette.

Every circle is a bead. Every color in that output is a real, distinct bead color a crafter would need to source and place by hand. The structure of the bridge is preserved. The mood of the sunset is preserved. The complexity is managed — reduced to something a human can actually execute, one bead at a time.

That transformation — from photograph to stitchable grid — is the whole product.


What "AI Owns the Code" Actually Means

I want to be precise here, because "AI-assisted development" has become a meaningless phrase. Everyone says it now. Here's what it meant for me on this project:

I wrote very little code from scratch. I used Gemini as the primary implementer — describing what I needed, reviewing what came back, asking it to revise or explain. The logic for color quantization, the Canvas API rendering pipeline, the pixel buffer manipulation — most of that was AI-native code that I read, understood, and occasionally redirected, but didn't author line by line.

What I didn't delegate: every decision about what the tool should feel like.

How many controls is too many? Where does the slider sit? What does "No Gradient Mode" actually mean to someone who isn't a developer? What should the output look like when it downloads? Should error states be loud or quiet?

None of that came from Gemini. All of it required me to stay in the room, stay opinionated, and push back when the generated UI drifted toward "technically correct but confusing to use."

That division of labor — AI on implementation, human on experience — turned out to be the whole game.


The Technical Problem (Because It's Interesting)

BeadGen solves three sub-problems in sequence:

1. Resolution mapping. A bead project has a fixed width in bead count — say, 150 beads wide. The photo needs to be downsampled to that exact grid resolution, with aspect ratio preserved (a minimal sketch follows this list).

2. Color quantization. The Golden Gate photo has thousands of colors. My mom's bead collection has maybe 10. The image's palette has to collapse to that count without destroying the image.

3. Rendering. Each cell in the grid gets drawn as a filled circle (the bead shape), not a square pixel, on a canvas the user can download.
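
Step 1 is mostly the browser's job. Here's a minimal sketch of what it looks like; the names are illustrative, not BeadGen's actual functions:

// Step 1 sketch: resample the photo down to one pixel per bead (illustrative names).
function toBeadGrid(img, beadsWide) {
  // Keep the aspect ratio by deriving the height from the bead width.
  const beadsHigh = Math.round(beadsWide * img.naturalHeight / img.naturalWidth);
  const canvas = document.createElement('canvas');
  canvas.width = beadsWide;
  canvas.height = beadsHigh;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0, beadsWide, beadsHigh);      // the browser does the downsampling
  return ctx.getImageData(0, 0, beadsWide, beadsHigh); // raw RGBA buffer, one pixel per bead
}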

The interesting one is color quantization. The naive approach — round every color to the nearest bucket — looks terrible. You lose the soul of the image because you're treating all of color space uniformly, when an image's color distribution is wildly uneven.

The right approach is the Median Cut algorithm:

  1. Put all pixels in one bucket.
  2. Find which color channel (R, G, or B) has the widest range across all pixels in the bucket.
  3. Sort by that channel and split at the median.
  4. Recurse until you have N buckets.
  5. Each bucket's representative color is the average of its pixels.

The result is that color splits happen where the actual data is most varied — not uniformly across abstract color space. The Golden Gate image has a massive warm cluster (the bridge, sunset light) and a separate cool cluster (the bay, the fog, the sky). Median Cut finds that divide naturally and allocates palette slots accordingly. The rust-red cables stay rust-red. The blue-gray water stays blue-gray.
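
Here's a compact sketch of that recursion. This is illustrative JavaScript written to match the steps above, not the exact code Gemini generated; the helper names are mine, and this version only produces power-of-two palette sizes:

// Median Cut sketch (illustrative; not the tool's exact implementation).
// pixels: array of {r, g, b}; depth: levels of splitting, so 2^depth palette colors.
function medianCut(pixels, depth) {
  if (pixels.length === 0) return [];
  if (depth === 0) return [averageColor(pixels)];
  // Find the channel with the widest range, sort by it, split at the median, recurse.
  const channel = widestChannel(pixels);
  const sorted = [...pixels].sort((p, q) => p[channel] - q[channel]);
  const mid = Math.floor(sorted.length / 2);
  return [
    ...medianCut(sorted.slice(0, mid), depth - 1),
    ...medianCut(sorted.slice(mid), depth - 1),
  ];
}

function widestChannel(pixels) {
  let best = 'r';
  let bestRange = -1;
  for (const c of ['r', 'g', 'b']) {
    let lo = 255, hi = 0;
    for (const p of pixels) {
      if (p[c] < lo) lo = p[c];
      if (p[c] > hi) hi = p[c];
    }
    if (hi - lo > bestRange) { bestRange = hi - lo; best = c; }
  }
  return best;
}

function averageColor(pixels) {
  let r = 0, g = 0, b = 0;
  for (const p of pixels) { r += p.r; g += p.g; b += p.b; }
  const n = pixels.length;
  return { r: Math.round(r / n), g: Math.round(g / n), b: Math.round(b / n) };
}

Once the palette exists, every pixel in the grid gets snapped to its nearest entry: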

// Map one pixel to the closest color in the quantized palette.
function nearestColor(r, g, b, palette) {
  let minDist = Infinity;
  let closest = palette[0];
  for (const color of palette) {
    // Squared Euclidean distance in RGB space; no sqrt needed just to compare.
    const dist =
      (r - color.r) ** 2 +
      (g - color.g) ** 2 +
      (b - color.b) ** 2;
    if (dist < minDist) {
      minDist = dist;
      closest = color;
    }
  }
  return closest;
}

Euclidean distance in RGB space isn't perceptually perfect — LAB color space would be more accurate — but it's fast, dependency-free, and more than sufficient for bead-level fidelity. I knew about the tradeoff. I chose simplicity deliberately. That was a developer decision, not an AI one.


The Feature My Mom Asked For

There's one feature in BeadGen I'm particularly glad I added: No Gradient Mode.

Photos have gradients everywhere — smooth transitions between light and shadow, color bleeding, soft backgrounds. The Golden Gate sunset is practically nothing but gradient. In a photograph, that's beautiful. In a bead pattern, it's a nightmare. You'd need 80 colors to represent that sky faithfully, and no one has 80 colors of beads.

No Gradient Mode posterizes the output. After quantization, it snaps colors more aggressively to the palette and flattens subtle transitions into solid bands. The pattern looks more graphic, more like flat illustration — and is actually stitchable by a human working from a printed sheet.
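
BeadGen's exact heuristic isn't the interesting part, but the effect is easy to approximate. One way to do it, as a sketch: after quantization, merge palette entries that sit close together, so neighboring shades collapse into a single flat band when pixels are remapped. The threshold value here is an assumption, not a number from the tool.

// One way to flatten gradients (a sketch, not BeadGen's exact heuristic):
// drop palette entries that are nearly identical to one already kept, so soft
// transitions snap to fewer, flatter bands when pixels are remapped.
function flattenPalette(palette, threshold = 40) {
  const kept = [];
  for (const color of palette) {
    const tooClose = kept.some(k =>
      (color.r - k.r) ** 2 + (color.g - k.g) ** 2 + (color.b - k.b) ** 2 < threshold ** 2
    );
    if (!tooClose) kept.push(color);
  }
  return kept; // remap pixels against this smaller palette with nearestColor()
}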

The AI didn't suggest this. My mom did. She looked at her first output and said the gradients were too much.

That's the feature you add when your user is in the room. And that's exactly the kind of thing that wouldn't exist if I'd treated the AI as the product owner instead of the implementer.


The Stack: Deliberately, Aggressively Simple

BeadGen is:

  • Vanilla JavaScript
  • HTML5 Canvas API
  • Zero dependencies
  • Zero build step
  • index.html + script.js + style.css, runs off your local file system — literally file:///your-path/index.html

This was a firm decision. My mom needed to use this tool, not install it. That means no localhost server, no terminal, no Python environment. She opens a file in Chrome. Done.

The Canvas API handles everything (a rough sketch of the draw-and-export step follows the list):

  • drawImage() to resample the photo to bead-grid resolution
  • getImageData() to read pixel values for quantization
  • arc() to draw each bead as a filled circle
  • toDataURL() to export the result as a downloadable PNG
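
Stitched together, the draw-and-export step is only a few of those calls. A rough sketch, with illustrative names rather than the tool's own:

// Draw each grid cell as a filled circle and export the result (illustrative sketch).
// grid: flat array of palette colors, one per bead.
function renderBeads(grid, beadsWide, beadsHigh, beadSize = 20) {
  const canvas = document.createElement('canvas');
  canvas.width = beadsWide * beadSize;
  canvas.height = beadsHigh * beadSize;
  const ctx = canvas.getContext('2d');
  for (let y = 0; y < beadsHigh; y++) {
    for (let x = 0; x < beadsWide; x++) {
      const { r, g, b } = grid[y * beadsWide + x];
      ctx.fillStyle = `rgb(${r}, ${g}, ${b})`;
      ctx.beginPath();
      // One bead per cell, centered, slightly inset so circles don't touch.
      ctx.arc(x * beadSize + beadSize / 2, y * beadSize + beadSize / 2,
              beadSize / 2 - 1, 0, Math.PI * 2);
      ctx.fill();
    }
  }
  return canvas.toDataURL('image/png'); // downloadable PNG
}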

Working with raw pixel buffers is verbose and unforgiving, like writing assembly. But it keeps the whole thing self-contained, fast, and completely offline. Those are UX decisions first, technical decisions second.


Where I Had to Stay Opinionated

Here's where I want to be honest about the limits of delegating to AI.

Gemini was excellent at implementing what I described. It was not good at knowing what to describe. When left to generate UI scaffolding on its own, it produced things that were functional but unintuitive — too many options exposed at once, labels that made sense to a developer but not to someone who just wants to make a bead pattern, layouts that were complete but crowded.

Every time I let a generated UI suggestion through without interrogating it, my mom got confused. Every time I pushed back — "this should be one toggle, not three settings," "this label needs to say what it does, not what it is" — the tool got better.

The AI wrote the code correctly. I had to tell it what "correct" meant for this user.

That gap — between code that works and an experience that works — is where the developer still has to show up. And I don't think that gap is going away anytime soon.


What's Next

A few things on the roadmap:

  • Perceptual color distance — Switching from Euclidean RGB to CIEDE2000 in LAB color space for more accurate palette matching, especially for skin tones and subtle warm-to-cool transitions like that bridge sunset.
  • Bead inventory input — Instead of "give me N colors," let the user input their actual bead colors and map to those exactly.
  • Print layout export — A PDF with row-by-row bead counts and a color legend, formatted for A4/Letter printing.
  • Row-by-row guided mode — Step through one row at a time with bead counts, similar to knitting pattern notation.

Try It

The source is on GitHub: ranji2612/beads_design

Clone it, open index.html in a browser, upload a photo. No setup. Runs entirely offline.

If you're a crafter and you make something with it, I'd genuinely love to see it. If you're a developer and you want to tackle LAB-space color distance or a print export feature, PRs are open.


AI wrote most of the code. I designed the experience. My mom makes the patterns. That's a pretty good division of labor.
