
nero bowman


Showcase Tuning: A Visual Debugging Workflow for AI-Assisted Rendering Code

Rendering code has a testing problem that most developers quietly accept:
you can write all the unit tests you want, but none of them tell you whether
the output actually looks right.

Unit tests verify logic. They can't catch inverted normals, clipped sprites,
washed-out colors, or a balloon shape that looks like a UFO.

So I built a workflow called Showcase Tuning to solve this - and packaged
it as a Claude Code skill so AI can run the entire loop autonomously.


The Core Idea

The workflow is a tight loop:

Write a harness → Run it → Look at what came out → Fix the renderer → Repeat

The harness is a small, standalone program that calls your actual rendering
code with deterministic inputs and saves the output as a PNG. It's not a mock
or reimplementation - it's a camera pointed at your real code.

A few rules keep the loop honest:

  • Deterministic inputs - fixed seeds and hardcoded data so every run is comparable
  • Fix the renderer, not the harness - the harness is just a capture mechanism; defects live in the rendering code
  • Generate before reviewing - never guess what the output looks like; always produce the image
  • One component at a time - isolation keeps feedback tight and results unambiguous
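The rules above can be sketched as a minimal harness. Everything here is illustrative: `render_scene` is a hypothetical stand-in for your real rendering code, and the tiny stdlib PNG writer is included only so the sketch runs without dependencies (in a real project you'd use your platform's image APIs).

```python
import random
import struct
import zlib
from pathlib import Path

def write_png(path, pixels, width, height):
    # Minimal stdlib PNG writer (8-bit RGB) -- just enough to produce
    # a reviewable artifact without third-party dependencies.
    raw = b"".join(b"\x00" + bytes(v for px in row for v in px) for row in pixels)
    def chunk(tag, data):
        body = tag + data
        return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))
    header = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)
    png = (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", header)
           + chunk(b"IDAT", zlib.compress(raw)) + chunk(b"IEND", b""))
    Path(path).write_bytes(png)

def render_scene(width, height, rng):
    # Hypothetical renderer stand-in: in a real harness this call goes
    # straight into your production rendering code -- never a mock.
    return [[(rng.randrange(256), rng.randrange(256), rng.randrange(256))
             for _ in range(width)] for _ in range(height)]

def main():
    rng = random.Random(42)            # fixed seed: every run is comparable
    pixels = render_scene(64, 64, rng)
    Path("Showcase").mkdir(exist_ok=True)
    # The PNG is the artifact a human (or AI) visually reviews.
    write_png("Showcase/harness.png", pixels, 64, 64)

if __name__ == "__main__":
    main()
```

The harness stays deliberately dumb: no assertions, no pass/fail logic. Its only job is to capture what the renderer actually produces.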

A Real Example

Here's what a session looks like. I used it to fix a hot air balloon renderer
in an Android/Kotlin project.

Step 1 - Initial inspection

Claude runs the harness for the first time. The balloon is a plain oval with
only 2 of 5 color palettes rendering. Claude identifies 6 distinct issues:
wrong shape, missing palettes, short ropes, no gore lines, plain basket,
no skirt.


Step 2 - First round of fixes

All 5 palettes now render. Basket, gore lines, and ropes are present. But
the envelope looks like a diamond - a symmetric sine profile is the culprit.


Step 3–4 - Diagnosing and redesigning the shape

Claude traces the problem to buildEnvelopePath and rewrites the profile curve.
The shape improves but is still too squat.


Step 5–6 - Final refinements

Height ratio adjusted. Basket positioning fixed - it was floating too far
below the envelope. The skirt now flows into short ropes into the basket.
All 5 palettes render correctly. Session complete.


The whole thing - from a broken oval to a finished balloon - took a few
minutes of iterative AI-driven debugging instead of hours of manual inspection.


The Claude Code Skill

The repo includes a SKILL.md file for Claude Code
that implements the full workflow as an agentic loop. You just describe what
to focus on:

showcase tune the particle system
showcase the tile map renderer at night
showcase the character sprite with all animation frames

Claude reads the rendering code, writes the harness, runs it, inspects the
images, traces defects to their source, fixes them, and re-runs until the
output passes visual review.

Platform support is built in - Android/JVM, Web/TypeScript, Python,
Rust/C++.

Output goes to a Showcase/ directory (auto-added to .gitignore).
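As a sketch of that housekeeping step (assuming the skill simply appends an entry when it's missing; the real behavior is defined in SKILL.md and may differ):

```python
from pathlib import Path

def ensure_ignored(entry="Showcase/", gitignore=Path(".gitignore")):
    # Append the output directory to .gitignore only if it isn't
    # already listed, preserving whatever is there.
    text = gitignore.read_text() if gitignore.exists() else ""
    if entry in text.splitlines():
        return
    prefix = "" if (not text or text.endswith("\n")) else "\n"
    gitignore.write_text(text + prefix + entry + "\n")
```

Keeping generated images out of version control matters here, since the loop can produce dozens of throwaway PNGs per session.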


Why This Fills a Real Gap

I checked existing Claude Code skills and tooling before building this -
there's nothing that specifically targets the visual rendering debugging loop.
Most AI coding workflows assume textual output: test passes/failures, console
logs, type errors. Showcase tuning is built around the case where the artifact
is an image and "correct" means "looks right."


Get It

👉 github.com/nerobowman/Showcase-tuning

The skill behavior lives entirely in SKILL.md - easy to read, extend,
and adapt to your stack. Contributions welcome.


Have you hit this problem with rendering code before? Curious how others
have approached visual debugging with or without AI.
