<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Umar Pathan</title>
    <description>The latest articles on DEV Community by Umar Pathan (@umarpathan).</description>
    <link>https://dev.to/umarpathan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3502042%2F56e46323-c5aa-46f6-a0ea-a882885a06af.png</url>
      <title>DEV Community: Umar Pathan</title>
      <link>https://dev.to/umarpathan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/umarpathan"/>
    <language>en</language>
    <item>
      <title>How I turned a visual idea into a reusable OpenClaw skill</title>
      <dc:creator>Umar Pathan</dc:creator>
      <pubDate>Mon, 27 Apr 2026 15:39:57 +0000</pubDate>
      <link>https://dev.to/umarpathan/openclaw-challenge-222g</link>
      <guid>https://dev.to/umarpathan/openclaw-challenge-222g</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/openclaw-2026-04-16"&gt;OpenClaw Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;What I Built&lt;/h2&gt;

&lt;p&gt;A Designer Agent for OpenClaw that turns any photo into a non-photorealistic render. Two effects ship with it right now:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;ASCII Pixel&lt;/strong&gt; — your subject gets rebuilt out of colored ASCII glyphs, sitting on a blurred and pixelated version of the original background.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dot Shape&lt;/strong&gt; — a flat blue canvas with white circles and squares sized by luminance, with faint ASCII text running behind everything.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You hand the agent an image, you say "apply ASCII pixel" or "apply dot shape", and it writes its own Python, runs it, and sends back a PNG. No web UI, no boilerplate. The whole thing lives in a single &lt;a href="https://pub-43a0e06d9551420887525349d4f2aa27.r2.dev/SKILL.md" rel="noopener noreferrer"&gt;SKILL.md&lt;/a&gt; file you drop into OpenClaw.&lt;/p&gt;

&lt;p&gt;The point was to make a skill that any OpenClaw user can grab and use the same day, without me being in the loop.&lt;/p&gt;

&lt;h2&gt;How I Used OpenClaw&lt;/h2&gt;

&lt;p&gt;Honestly, the origin was lazier than it sounds. I was scrolling X one night and saw someone post a portrait done in ASCII, with this beautiful blurred backdrop bleeding through the characters. I screenshotted it, dragged it into my OpenClaw chat, and said "make me one of these for the photo I'm about to send."&lt;/p&gt;

&lt;p&gt;The first attempt was rough. The agent did &lt;em&gt;something&lt;/em&gt; ASCII-shaped, but the background was flat black, the subject color was washed out, and bright spots in the original (window glare, lamp glow) were getting rendered as dense glyphs instead of leaving the dotted background alone. It looked busy in the wrong places.&lt;/p&gt;

&lt;p&gt;So I started arguing with it. Each round I would point at a specific failure and we would rewrite the rule that caused it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The bright background problem turned out to be a "luminance supplement" the agent had bolted onto the rembg mask. It thought bright pixels were probably foreground. Killing the supplement and trusting only the rembg mask fixed it.&lt;/li&gt;
&lt;li&gt;Color looked muddy because raw RGB averages from a dim photo stay dim. I had it normalize per cell so the brightest channel always hits 255. Hue stays put, saturation pops.&lt;/li&gt;
&lt;li&gt;The grid overlay was hitting background cells too, which made the dotted background feel cluttered. Restricting the 5 percent white grid to subject cells only cleaned it up immediately.&lt;/li&gt;
&lt;li&gt;Cell sizing went through three rounds before 11 by 14 px felt right at 900 px wide. Smaller and the glyphs got unreadable. Larger and the subject lost its silhouette.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the ASCII version was locked, I wanted a sibling effect that felt like a poster instead of a render. Same pipeline, different paint: blue background, white shapes sized by inverted luminance, ASCII running behind in a slightly lighter blue so it reads as texture, not noise. I tried 1x, 1.5x, and 2x density. 2x looked like polka dots. 1x lost the subject. 1.5x is what shipped.&lt;/p&gt;
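
&lt;p&gt;The sizing rule for Dot Shape can be approximated like this. The constant and the checkerboard shape alternation are assumptions for illustration; the shipped SKILL.md fixes the real values.&lt;/p&gt;

```python
MAX_SIZE = 6   # assumed largest mark that fits a cell at 1.5x density

def mark_for_cell(row, col, luminance):
    """Pick (shape, size) for one grid cell.

    Size follows inverted luminance, so dark subject areas get big
    white marks on the flat blue canvas. Alternating by grid parity
    gives the rough 50/50 circle and square mix.
    """
    inverted = 255 - luminance
    size = round(MAX_SIZE * inverted / 255)
    shape = "circle" if (row + col) % 2 == 0 else "square"
    return shape, size
```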

&lt;p&gt;Every parameter I locked in went straight into the &lt;a href="https://pub-43a0e06d9551420887525349d4f2aa27.r2.dev/SKILL.md" rel="noopener noreferrer"&gt;SKILL.md&lt;/a&gt; as a fixed value with a short reason. The skill file reads like a contract. The agent is told, in plain language, "do not upgrade these numbers." That single line saved me from regression every time I asked it to add a new feature.&lt;/p&gt;

&lt;p&gt;That is the real workflow OpenClaw enabled. I was not writing the script. I was negotiating the spec, and the agent was rewriting its own implementation each time the spec changed.&lt;/p&gt;

&lt;h2&gt;Demo&lt;/h2&gt;

&lt;p&gt;Here is what the two effects look like on different inputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dot Shape&lt;/strong&gt; on a horse photo. White shapes mixed 50/50 between circles and squares, ASCII flicker behind in lighter blue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favwszbwf5ei5jf51t0uv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favwszbwf5ei5jf51t0uv.jpg" alt="Dot Shape effect on a running horse" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ASCII Pixel&lt;/strong&gt; on the same horse, different photo. Notice how the bright sky and ground stay as quiet background dots while the horse itself becomes the only thing made of glyphs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm6xeopfnufn1v2nxgpo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm6xeopfnufn1v2nxgpo.jpg" alt="ASCII Pixel rendering of a rearing horse at dusk" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Portrait test. The &lt;code&gt;normalize_color&lt;/code&gt; step is doing the heavy lifting here; the orange and blue tones come straight from the source photo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhz4vtecvs8or67v0s8h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhz4vtecvs8or67v0s8h.jpg" alt="ASCII Pixel portrait" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wide aspect ratio still works because the cell grid is built off pixel size, not image proportions.&lt;/p&gt;
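
&lt;p&gt;That claim is easy to see in a sketch: with the cell dimensions fixed in pixels (11 by 14 at 900 px wide, per the tuning above), a wider or taller image just yields more columns or rows. Names here are illustrative, not the pipeline's own.&lt;/p&gt;

```python
CELL_W, CELL_H = 11, 14   # locked pixel cell size from the tuning rounds

def grid_shape(width_px, height_px):
    """Columns and rows for an image of the given pixel dimensions."""
    return width_px // CELL_W, height_px // CELL_H
```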

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9srwvkhjjusl3vt6ml41.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9srwvkhjjusl3vt6ml41.jpg" alt="ASCII Pixel of a shark underwater" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And one with strong color contrast between subject and background. The blurred backdrop holds the mood, the figure carries the detail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77upl1ep5w6c8lw7te1r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77upl1ep5w6c8lw7te1r.jpg" alt="ASCII Pixel of a person against red sky" width="800" height="1000"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://pub-43a0e06d9551420887525349d4f2aa27.r2.dev/SKILL.md" rel="noopener noreferrer"&gt;SKILL.md&lt;/a&gt; is included with this submission. Drop it into your OpenClaw skills folder, point your agent at any image, say which effect you want, and you get a PNG back.&lt;/p&gt;

&lt;h2&gt;What I Learned&lt;/h2&gt;

&lt;p&gt;A few things I did not expect:&lt;/p&gt;

&lt;p&gt;The biggest improvements came from &lt;em&gt;removing&lt;/em&gt; logic, not adding it. The luminance supplement was a clever feature that was actively making the output worse. Trusting one source of truth (rembg) and letting the rest of the pipeline be dumb gave a cleaner result than any hybrid approach.&lt;/p&gt;
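
&lt;p&gt;In code terms, the simplification amounts to something like this, where the alpha threshold is an assumption and &lt;code&gt;is_subject&lt;/code&gt; is a hypothetical name:&lt;/p&gt;

```python
def is_subject(mask_alpha, luminance=None):
    """A cell belongs to the foreground iff the rembg mask says so.

    The luminance argument is deliberately ignored: the removed
    "luminance supplement" used to promote bright background cells
    to foreground, which caused the window-glare artifacts.
    """
    return mask_alpha >= 128
```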

&lt;p&gt;Also, writing the skill file in second person, like I was handing instructions to a junior designer, worked dramatically better than writing it as documentation. Phrases like "do not add a luminance supplement" and "preserve the ramp exactly" stopped the agent from getting creative in the wrong places. When I wrote the same constraint as a description ("the ramp is &lt;code&gt;@#S08Xox+=;:-,.&lt;/code&gt;") the agent treated it as a suggestion and would occasionally swap characters around.&lt;/p&gt;

&lt;p&gt;The other surprise was how much of the work was visual taste, not code. Picking 1.5x density over 1x or 2x took ten minutes of staring at outputs side by side. No amount of math was going to tell me which one looked right. Having an agent that could regenerate all three variants in one prompt let me make that call quickly instead of writing a render loop myself.&lt;/p&gt;

&lt;p&gt;If you are building skills for OpenClaw, my one piece of advice is keep the skill file boring. Lock the parameters, list the steps in order, write a hard rules section, and let the agent figure out the implementation each time. The less room you leave for interpretation on the values, the more freedom you can give it on the code.&lt;/p&gt;

&lt;h2&gt;ClawCon Michigan&lt;/h2&gt;

&lt;p&gt;I did not attend in person this year. Hoping to make the next one.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>openclawchallenge</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
