Umar Pathan
How I turned a visual idea into a reusable OpenClaw skill

OpenClaw Challenge Submission 🦞

This is a submission for the OpenClaw Challenge

What I Built

A Designer Agent for OpenClaw that turns any photo into a non-photorealistic render. Two effects ship with it right now:

  1. ASCII Pixel: your subject gets rebuilt out of colored ASCII glyphs, sitting on a blurred and pixelated version of the original background.
  2. Dot Shape: a flat blue canvas with white circles and squares sized by luminance, with faint ASCII text running behind everything.

You hand the agent an image, you say "apply ASCII pixel" or "apply dot shape", and it writes its own Python, runs it, and sends back a PNG. No web UI, no boilerplate. The whole thing lives in a single SKILL.md file you drop into OpenClaw.

The point was to make a skill that any OpenClaw user can grab and use the same day, without me being in the loop.
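At its core, the ASCII effect is a per-cell luminance-to-glyph lookup. Here is a minimal sketch of that mapping, using the character ramp the skill pins down; the Rec. 601 luminance weights and the function names are my assumptions, not the shipped script:

```python
# Character ramp from dense to sparse; the skill fixes this string exactly.
RAMP = "@#S08Xox+=;:-,."

def luminance(r, g, b):
    """Rec. 601 perceived brightness on a 0-255 scale."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def glyph_for(r, g, b):
    """Map a cell's average color to a ramp character.
    Bright cells land at the dense end of the ramp, dark cells at the sparse end."""
    lum = luminance(r, g, b)
    idx = int((1 - lum / 255) * (len(RAMP) - 1))
    return RAMP[idx]
```

Run over an 11 x 14 px cell grid, this produces one glyph per cell; the subject/background split comes from the rembg mask, not from brightness.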

How I Used OpenClaw

Honestly the origin was lazier than this sounds. I was scrolling X one night and saw someone post a portrait done in ASCII, with this beautiful blurred backdrop bleeding through the characters. I screenshotted it, dragged it into my OpenClaw chat, and said "make me one of these for the photo I'm about to send."

The first attempt was rough. The agent did something ASCII-shaped, but the background was flat black, the subject color was washed out, and bright spots in the original (window glare, lamp glow) were getting rendered as dense glyphs instead of leaving the dotted background alone. It looked busy in the wrong places.

So I started arguing with it. Each round I would point at a specific failure and we would rewrite the rule that caused it:

  • The bright background problem turned out to be a "luminance supplement" the agent had bolted onto the rembg mask. It thought bright pixels were probably foreground. Killing the supplement and trusting only the rembg mask fixed it.
  • Color looked muddy because raw RGB averages from a dim photo stay dim. I had it normalize per cell so the brightest channel always hits 255. Hue stays put, saturation pops.
  • The grid overlay was hitting background cells too, which made the dotted background feel cluttered. Restricting the 5 percent white grid to subject cells only cleaned it up immediately.
  • Cell sizing went through three rounds before 11 by 14 px felt right at 900 px wide. Smaller and the glyphs got unreadable. Larger and the subject lost its silhouette.
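The per-cell color normalization from the second bullet can be sketched like this; it is my reconstruction of the idea, not the agent's exact code:

```python
def normalize_color(r, g, b):
    """Scale a cell's average RGB so its brightest channel hits 255.
    All channels scale by the same factor, so hue stays put while
    saturation pops, even on a dim source photo."""
    peak = max(r, g, b)
    if peak == 0:
        return (0, 0, 0)  # pure black stays black
    scale = 255 / peak
    return (round(r * scale), round(g * scale), round(b * scale))
```

For example, a muddy orange like (60, 30, 15) becomes a vivid (255, 128, 64) while keeping the same channel ratios.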

Once the ASCII version was locked, I wanted a sibling effect that felt like a poster instead of a render. Same pipeline, different paint: blue background, white shapes sized by inverted luminance, ASCII running behind in a slightly lighter blue so it reads as texture not noise. I tried 1x, 1.5x, and 2x density. 2x looked like polka dots. 1x lost the subject. 1.5x is what shipped.
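One plausible reading of "shapes sized by inverted luminance" plus a density knob looks like this; the constants and function names are mine, reconstructed from the numbers in the post:

```python
import random

CELL = 11              # cell width in px, matching the ASCII grid
MAX_RADIUS = CELL / 2  # a shape never grows past its cell

def shape_radius(lum, density=1.5):
    """Size a white shape by inverted luminance: dark cells get the
    largest shapes, bright cells shrink toward nothing. density is the
    overall scale factor (1.5x is what shipped)."""
    base = (1 - lum / 255) * MAX_RADIUS
    return min(base * density, MAX_RADIUS)

def shape_kind(rng=random):
    """Pick circle or square with equal probability, per the 50/50 mix."""
    return rng.choice(["circle", "square"])
```

With this framing, 2x clips nearly every dark cell to a full-size shape (the polka-dot look), while 1x leaves midtones too small to carry the subject.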

Every parameter I locked in went straight into the SKILL.md as a fixed value with a short reason. The skill file reads like a contract. The agent is told, in plain language, "do not upgrade these numbers." That single line saved me from regression every time I asked it to add a new feature.
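For flavor, a hard-rules section in that contract style might read like this. The wording is illustrative, not the actual shipped file, but every value comes from the tuning rounds above:

```markdown
## Hard rules (do not upgrade these numbers)

- Cells are 11 x 14 px at 900 px output width.
- The glyph ramp is @#S08Xox+=;:-,. and must be preserved exactly.
- Trust the rembg mask alone. Do not add a luminance supplement.
- The 5 percent white grid overlay applies to subject cells only.
- Dot Shape density is 1.5x. Do not tune it.
```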

That is the real workflow OpenClaw enabled. I was not writing the script. I was negotiating the spec, and the agent was rewriting its own implementation each time the spec changed.

Demo

Here is what the two effects look like on different inputs.

Dot Shape on a horse photo. White shapes mixed 50 / 50 between circles and squares, ASCII flicker behind in lighter blue.

Dot Shape effect on a running horse

ASCII Pixel on the same horse, different photo. Notice how the bright sky and ground stay as quiet background dots while the horse itself becomes the only thing made of glyphs.

ASCII Pixel rendering of a rearing horse at dusk

Portrait test. The normalize_color step is doing the heavy lifting here; the orange and blue tones come straight from the source photo.

ASCII Pixel portrait

Wide aspect ratio still works because the cell grid is built off pixel size, not image proportions.

ASCII Pixel of a shark underwater
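That aspect-ratio robustness falls out of deriving cell counts from fixed pixel sizes instead of fixing the grid itself. A sketch of the math, with constants from the post and a function name of my own:

```python
CELL_W, CELL_H = 11, 14  # locked cell size in px
TARGET_W = 900           # locked output width in px

def grid_dims(src_w, src_h):
    """Scale the source to the fixed output width, then divide into
    cells. Rows follow the source aspect ratio, so wide or tall images
    simply produce more columns or rows rather than distorted cells."""
    out_h = round(TARGET_W * src_h / src_w)
    return TARGET_W // CELL_W, out_h // CELL_H  # (cols, rows)
```

A square photo yields an 81 x 64 cell grid; a 2:1 panorama keeps the same 81 columns and just halves the rows.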

And one with strong color contrast between subject and background. The blurred backdrop holds the mood, the figure carries the detail.

ASCII Pixel of a person against red sky

The SKILL.md is included with this submission. Drop it into your OpenClaw skills folder, point your agent at any image, say which effect you want, and you get a PNG back.

What I Learned

A few things I did not expect:

The biggest improvements came from removing logic, not adding it. The luminance supplement was a clever feature that was actively making the output worse. Trusting one source of truth (rembg) and letting the rest of the pipeline be dumb gave a cleaner result than any hybrid approach.

Also, writing the skill file in second person, like I was handing instructions to a junior designer, worked dramatically better than writing it as documentation. Phrases like "do not add a luminance supplement" and "preserve the ramp exactly" stopped the agent from getting creative in the wrong places. When I wrote the same constraint as a description ("the ramp is @#S08Xox+=;:-,.") the agent treated it as a suggestion and would occasionally swap characters around.

The other surprise was how much of the work was visual taste, not code. Picking 1.5x density over 1x or 2x took ten minutes of staring at outputs side by side. No amount of math was going to tell me which one looked right. Having an agent that could regenerate all three variants in one prompt let me make that call quickly instead of writing a render loop myself.

If you are building skills for OpenClaw, my one piece of advice is keep the skill file boring. Lock the parameters, list the steps in order, write a hard rules section, and let the agent figure out the implementation each time. The less room you leave for interpretation on the values, the more freedom you can give it on the code.

ClawCon Michigan

I did not attend in person this year. Hoping to make the next one.

Top comments (7)

Sofia Martinez

love seeing a creative idea come to life with OpenClaw

Umar Pathan

Thank you, Sofia! Really glad you enjoyed it 😊

Lucas Oliveira

Cool project, I like how you turned the idea into a working OpenClaw skill.

Umar Pathan

Thanks a lot, Lucas!

Aarav Sharma

This shows how creative and hands-on you can get with OpenClaw.

Umar Pathan

Thanks, Aarav! That's exactly what I was going for. Really appreciate it 🙌

Jack Thompson

This kind of hands-on experiment really highlights the flexibility of OpenClaw skills.