<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Eugeniy Smirnov</title>
    <description>The latest articles on DEV Community by Eugeniy Smirnov (@jenissimo).</description>
    <link>https://dev.to/jenissimo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3387272%2Fcd156a4b-fb0a-451f-a88a-15980662f1fb.jpeg</url>
      <title>DEV Community: Eugeniy Smirnov</title>
      <link>https://dev.to/jenissimo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jenissimo"/>
    <language>en</language>
    <item>
      <title>How to Tame Your AI Pixel Art</title>
      <dc:creator>Eugeniy Smirnov</dc:creator>
      <pubDate>Wed, 30 Jul 2025 09:01:53 +0000</pubDate>
      <link>https://dev.to/jenissimo/how-to-tame-your-ai-pixel-art-3pk5</link>
      <guid>https://dev.to/jenissimo/how-to-tame-your-ai-pixel-art-3pk5</guid>
      <description>&lt;p&gt;Over the past couple of years, generative AI has become a kind of magic brush for everything — concept art, icons, illustrations, covers, avatars, even sprites. Especially pixel art.&lt;br&gt;
You just type something like &lt;em&gt;"Pixel art goose with goggles in the style of SNES"&lt;/em&gt; into Midjourney, Stable Diffusion, DALL·E, or Image-1 — and bam, you've got yourself a glorious pixelated goose in under 10 seconds.&lt;/p&gt;

&lt;p&gt;But if you’ve ever tried putting that goose into an actual game —&lt;br&gt;
you already know the pain.&lt;/p&gt;

&lt;p&gt;So I decided to dig deeper and build an open-source tool that &lt;strong&gt;fixes AI-generated pixel art and turns it into clean, game-ready, pixel-perfect sprites&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Before we dive in, there’s a link to the tool and source code waiting for you at the end of the article — but first, let’s talk theory. Just a bit. I promise it’s useful.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction: Why AI Still Doesn’t &lt;em&gt;Get&lt;/em&gt; Pixel Art
&lt;/h3&gt;

&lt;p&gt;Let’s get one thing straight — AI has no idea what a pixel is.&lt;br&gt;
Modern models like Stable Diffusion don’t work with a pixel grid the way we do. They operate in a weird fuzzy space called &lt;strong&gt;latent space&lt;/strong&gt;, where images start as pure noise — literally a cloud of randomized fog — and gradually get shaped into something that looks like an image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dzdata.medium.com/intro-to-diffusion-model-part-1-29fe7724c043" rel="noopener noreferrer"&gt;It’s a beautiful process&lt;/a&gt;…&lt;br&gt;
But it’s not &lt;strong&gt;discrete&lt;/strong&gt;. There’s no concept of hard edges, clean pixels, or fixed palettes. Everything is smooth, continuous, and a little dreamy.&lt;/p&gt;

&lt;p&gt;Let’s take a look at what that means for a typical piece of AI-generated “pixel art”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mai8u8db85ysveed4ld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mai8u8db85ysveed4ld.png" alt="Looks great at first glance — but those pixels are fake." width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First off, the pixel grid is all over the place — nothing aligns the way it should.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmf9bm04liz3mbqyhu2pz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmf9bm04liz3mbqyhu2pz.png" alt="Pixel grid is all over the place" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Second, even with a fixed grid, most “pixels” don’t land perfectly inside it — they bleed over, blur at the edges, or just float slightly off.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud50pv0yoeeuzk8o790k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud50pv0yoeeuzk8o790k.png" alt="Pixels don’t land perfectly to a grid" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Third, if we extract the actual color palette from the image, it looks something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpok08nss7515uxpettp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpok08nss7515uxpettp.png" alt="Large color palette" width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So if we perform a naive downscale — say, using nearest neighbor — we end up with something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b4ou4yj42iorpvn6za9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b4ou4yj42iorpvn6za9.png" alt="Naive downscale is a mess" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;
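For contrast, naive nearest neighbor really is this tiny (a sketch over a flat grayscale array; the function name is mine, not the tool's):

```javascript
// Naive nearest-neighbor downscale: pick a single source sample per
// output pixel, one per scale×scale block.
function nearestNeighborDownscale(gray, width, height, scale) {
  const ow = Math.floor(width / scale);
  const oh = Math.floor(height / scale);
  const out = new Array(ow * oh);
  for (let y = 0; y < oh; y++) {
    for (let x = 0; x < ow; x++) {
      // Always samples the top-left corner of each block
      out[y * ow + x] = gray[(y * scale) * width + (x * scale)];
    }
  }
  return out;
}
```

Each output pixel comes from a single source sample, so if the grid is even slightly misaligned, that sample lands on a blurred border between fake pixels, which is exactly what produces the mess above.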

&lt;h3&gt;
  
  
  So how do we fix this?
&lt;/h3&gt;

&lt;p&gt;To turn AI-generated art into true, pixel-perfect assets, we need a smarter pipeline. One that can automatically:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Detect the pixel size&lt;/li&gt;
&lt;li&gt;Find the right grid alignment&lt;/li&gt;
&lt;li&gt;Extract a clean, limited palette&lt;/li&gt;
&lt;li&gt;Downscale without losing important details&lt;/li&gt;
&lt;li&gt;Clean up noise and artifacts in the final image&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 1: Detecting Pseudo-Pixel Scale
&lt;/h3&gt;

&lt;p&gt;To start, we need to figure out the size of the “fake pixels” in the AI-generated image.&lt;br&gt;
I use an &lt;strong&gt;edge-aware detection&lt;/strong&gt; method based on &lt;a href="https://en.wikipedia.org/wiki/Sobel_operator" rel="noopener noreferrer"&gt;&lt;strong&gt;Sobel gradients&lt;/strong&gt;&lt;/a&gt; + &lt;strong&gt;tile-based voting&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1.1: Selecting informative tiles
&lt;/h4&gt;

&lt;p&gt;The image is split into a 3×3 grid, and I pick the tiles with the &lt;strong&gt;highest variance&lt;/strong&gt; — these are the ones most likely to contain meaningful edges, not flat backgrounds or empty space.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn2c9lkgjorubdcvt05i2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn2c9lkgjorubdcvt05i2.png" alt="Example: a 150×150 tile selected for its high level of detail" width="150" height="150"&gt;&lt;/a&gt;&lt;/p&gt;
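In sketch form, the variance ranking looks like this (a plain flat array stands in for the image; `tileVariances` and the exact tile bookkeeping are illustrative, not the tool's code):

```javascript
// Split a grayscale image (flat array) into gridSize×gridSize tiles and
// rank them by variance, so the most detailed tiles come first.
function tileVariances(gray, width, height, gridSize = 3) {
  const tw = Math.floor(width / gridSize);
  const th = Math.floor(height / gridSize);
  const tiles = [];
  for (let ty = 0; ty < gridSize; ty++) {
    for (let tx = 0; tx < gridSize; tx++) {
      let sum = 0, sumSq = 0;
      const n = tw * th;
      for (let y = ty * th; y < (ty + 1) * th; y++) {
        for (let x = tx * tw; x < (tx + 1) * tw; x++) {
          const v = gray[y * width + x];
          sum += v;
          sumSq += v * v;
        }
      }
      const mean = sum / n;
      // Var(X) = E[X²] − E[X]²
      tiles.push({ tx, ty, variance: sumSq / n - mean * mean });
    }
  }
  // Highest variance first — flat backgrounds sink to the bottom
  return tiles.sort((a, b) => b.variance - a.variance);
}
```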

&lt;h4&gt;
  
  
  Step 1.2: Finding edges using the Sobel filter
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;Sobel filter&lt;/strong&gt; helps us detect areas with sharp color transitions — in other words, the &lt;em&gt;borders&lt;/em&gt; between fake pixels.&lt;/p&gt;

&lt;p&gt;We apply it in two directions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sobel X&lt;/strong&gt; for vertical edges&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sobel Y&lt;/strong&gt; for horizontal edges&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hx7i2wokzekile34koh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hx7i2wokzekile34koh.png" alt="Sobel X and Sobel Y" width="300" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This gives us two 2D maps of “sharpness” — bright lines show where the image &lt;em&gt;pretends&lt;/em&gt; to be pixelated.&lt;/p&gt;
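Without OpenCV, the same two passes can be sketched by applying the 3×3 kernels directly (border pixels are skipped for brevity; this is an illustration, not the tool's implementation, which uses `cv.Sobel`):

```javascript
// Apply the Sobel X and Y kernels to a grayscale image stored as a flat
// array. Returns absolute-gradient maps: bright values in gx mark vertical
// edges, bright values in gy mark horizontal ones.
function sobel(gray, width, height) {
  const gx = new Float32Array(width * height);
  const gy = new Float32Array(width * height);
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      const p = (dy, dx) => gray[(y + dy) * width + (x + dx)];
      // Sobel X kernel: [-1 0 1; -2 0 2; -1 0 1]
      const sx = -p(-1, -1) + p(-1, 1)
                 - 2 * p(0, -1) + 2 * p(0, 1)
                 - p(1, -1) + p(1, 1);
      // Sobel Y kernel: [-1 -2 -1; 0 0 0; 1 2 1]
      const sy = -p(-1, -1) - 2 * p(-1, 0) - p(-1, 1)
                 + p(1, -1) + 2 * p(1, 0) + p(1, 1);
      gx[y * width + x] = Math.abs(sx);
      gy[y * width + x] = Math.abs(sy);
    }
  }
  return { gx, gy };
}
```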

&lt;h4&gt;
  
  
  Step 1.3: Generating the edge profile
&lt;/h4&gt;

&lt;p&gt;Next, we convert the 2D edge maps into simple &lt;strong&gt;1D profiles&lt;/strong&gt;.&lt;br&gt;
For vertical edges (Sobel X), we sum up brightness &lt;strong&gt;column by column&lt;/strong&gt;.&lt;br&gt;
For horizontal edges (Sobel Y), we sum &lt;strong&gt;row by row&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The result is a pair of clean graphs — where &lt;strong&gt;peaks&lt;/strong&gt; in the curve hint at potential pixel boundaries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2u91016a35houbkby61w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2u91016a35houbkby61w.png" alt="Edge profile: peaks reveal where the fake pixel boundaries are hiding" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;
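A minimal peak finder over such a profile might look like this (the mean-based threshold is my own illustrative heuristic, not necessarily what the tool uses):

```javascript
// Find local maxima in a 1D edge profile: a point counts as a peak when it
// rises above both neighbors and clears a threshold relative to the mean.
function findPeaks(profile, thresholdFactor = 1.5) {
  const mean = profile.reduce((a, b) => a + b, 0) / profile.length;
  const threshold = mean * thresholdFactor;
  const peaks = [];
  for (let i = 1; i < profile.length - 1; i++) {
    if (profile[i] > threshold &&
        profile[i] >= profile[i - 1] &&
        profile[i] > profile[i + 1]) {
      peaks.push(i);
    }
  }
  return peaks;
}
```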

&lt;h4&gt;
  
  
  Step 1.4: Voting for the pixel scale
&lt;/h4&gt;

&lt;p&gt;Now that we have the peaks, we look at the &lt;strong&gt;distances between them&lt;/strong&gt;.&lt;br&gt;
The idea is simple: if a fake pixel grid exists, it’ll leave a &lt;strong&gt;rhythmic pattern&lt;/strong&gt; — repeating every N pixels.&lt;/p&gt;

&lt;p&gt;So we collect all the distances between neighboring peaks and build a histogram of those distances.&lt;/p&gt;

&lt;p&gt;In most test images, common step sizes like &lt;strong&gt;8&lt;/strong&gt;, &lt;strong&gt;12&lt;/strong&gt;, &lt;strong&gt;16&lt;/strong&gt;, or &lt;strong&gt;24 pixels&lt;/strong&gt; show up clearly.&lt;br&gt;
(&lt;strong&gt;43px&lt;/strong&gt; just happens to make a nice demo case 🙂)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2cwmzcaozc7cc57jqeb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2cwmzcaozc7cc57jqeb.png" alt="Histogram of distances between edge peaks — the most common value is likely the pixel scale" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;
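The voting step itself is essentially a mode-of-gaps, and can be sketched in a few lines (function name mine):

```javascript
// Vote on the pseudo-pixel scale: histogram the gaps between neighboring
// peaks and return the most frequent gap.
function detectScale(peaks) {
  const votes = new Map();
  for (let i = 1; i < peaks.length; i++) {
    const gap = peaks[i] - peaks[i - 1];
    votes.set(gap, (votes.get(gap) || 0) + 1);
  }
  let best = 0, bestCount = 0;
  for (const [gap, count] of votes) {
    if (count > bestCount) { bestCount = count; best = gap; }
  }
  return best;
}
```

In practice you would also want to merge near-equal gaps (42 vs 43) into one bin before voting, since the fake grid is rarely exact.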

&lt;p&gt;To make things more visual, we highlight the &lt;strong&gt;strongest color transitions&lt;/strong&gt; across the entire image — in other words, the &lt;strong&gt;borders between pseudo-pixels&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This heatmap helps confirm that the structure is &lt;strong&gt;regular&lt;/strong&gt;, and that the detected scale actually matches the “grid” the AI was trying to fake.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Warm zones = strong edges&lt;/strong&gt; where the neural net likely “drew” pixel boundaries.&lt;br&gt;
The more regular these zones look, the more confident we can be in our scale detection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznv29vwnb1l690z955kf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznv29vwnb1l690z955kf.png" alt="Edge heatmap — warm lines show where fake pixel borders are most prominent." width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Grid Alignment &amp;amp; Smart Cropping
&lt;/h3&gt;

&lt;p&gt;So, we’ve figured out the &lt;strong&gt;pixel scale&lt;/strong&gt; — in our example, that’s 43×43 pixels.&lt;br&gt;
But that’s only half the job.&lt;/p&gt;

&lt;p&gt;Knowing the size of the grid isn’t enough — we also need to figure out &lt;strong&gt;where&lt;/strong&gt; the grid starts.&lt;br&gt;
In other words: how should we align it so that the grid lines actually match the structure of the image?&lt;/p&gt;

&lt;p&gt;Here’s how we do it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Convert the image to &lt;strong&gt;grayscale&lt;/strong&gt; — we only care about structure, not color.&lt;/li&gt;
&lt;li&gt;Use the &lt;strong&gt;Sobel filter&lt;/strong&gt; again to detect sharp edges — these are likely to be fake pixel borders.&lt;/li&gt;
&lt;li&gt;Build &lt;strong&gt;1D edge profiles&lt;/strong&gt; — summing edge intensity row-by-row and column-by-column.&lt;/li&gt;
&lt;li&gt;Loop through &lt;strong&gt;all possible offsets&lt;/strong&gt; from 0 to &lt;code&gt;scale - 1&lt;/code&gt;, and for each one, check how well the grid lines up with the edge peaks.&lt;/li&gt;
&lt;li&gt;Pick the offset &lt;code&gt;(x, y)&lt;/code&gt; where the grid &lt;strong&gt;best matches&lt;/strong&gt; the image’s internal structure.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;findOptimalCrop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;grayMat&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sobelX&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Mat&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sobelY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Mat&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Sobel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;grayMat&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sobelX&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CV_32F&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Sobel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;grayMat&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sobelY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CV_32F&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;profileX&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Float32Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;grayMat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cols&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;fill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;profileY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Float32Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;grayMat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;fill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dataX&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;sobelX&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data32F&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dataY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;sobelY&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data32F&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;y&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;grayMat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;y&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;grayMat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cols&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;idx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;y&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;grayMat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cols&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nx"&gt;profileX&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;abs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dataX&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
        &lt;span class="nx"&gt;profileY&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;abs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dataY&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;findBestOffset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;bestOffset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;maxScore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;offset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;offset&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;offset&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;currentScore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;currentScore&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;currentScore&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;maxScore&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;maxScore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;currentScore&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
          &lt;span class="nx"&gt;bestOffset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;bestOffset&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bestDx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;findBestOffset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;profileX&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bestDy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;findBestOffset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;profileY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bestDx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;y&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bestDy&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;sobelX&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nx"&gt;sobelY&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a result, we now know &lt;strong&gt;both&lt;/strong&gt; the size &lt;strong&gt;and&lt;/strong&gt; the position of each pseudo-pixel.&lt;br&gt;
Which means we can confidently draw a grid that &lt;strong&gt;matches the visual structure&lt;/strong&gt; of the image.&lt;/p&gt;

&lt;p&gt;At this point, we crop the image so that its dimensions are divisible by the detected scale (43 in our case) —&lt;br&gt;
ensuring that every grid block becomes a clean, well-defined &lt;strong&gt;cell&lt;/strong&gt; we can safely analyze, downscale, and process without distortion.&lt;/p&gt;
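As a sketch, the crop computation is just rounding each dimension down to the nearest multiple of the scale, starting from the best offset (names are illustrative):

```javascript
// Given the detected scale and the best grid offset, compute a crop
// rectangle whose width and height are exact multiples of the scale.
function cropRect(width, height, scale, offsetX, offsetY) {
  const w = Math.floor((width - offsetX) / scale) * scale;
  const h = Math.floor((height - offsetY) / scale) * scale;
  return { x: offsetX, y: offsetY, width: w, height: h };
}
```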

&lt;h3&gt;
  
  
  Step 3: Building the Color Palette
&lt;/h3&gt;

&lt;p&gt;Great pixel art isn’t just about the grid — it’s also about the &lt;strong&gt;palette&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Take retro consoles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;NES&lt;/strong&gt; had 54 visible colors (out of 64 total), and only &lt;strong&gt;4 colors per tile&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;SNES&lt;/strong&gt; allowed more, but still operated with strict palette limits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, classic pixel art was always as much about &lt;strong&gt;restraint&lt;/strong&gt; as it was about resolution.&lt;/p&gt;

&lt;p&gt;For us, palette limitation solves multiple problems at once:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduces noise&lt;/strong&gt; — eliminates soft gradients, anti-aliasing, and random color speckles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Brings that retro feel&lt;/strong&gt; — the image looks clean and deliberate, as if it were actually drawn for the GBA or Mega Drive&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Makes it easier to match your sprite with the rest of your game assets&lt;/strong&gt;, especially if you're using a shared palette (like one from &lt;a href="https://lospec.com" rel="noopener noreferrer"&gt;lospec.com&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To build the palette, I use &lt;strong&gt;Wu quantization&lt;/strong&gt;, via the excellent &lt;a href="https://github.com/ibezkrovnyi/image-q" rel="noopener noreferrer"&gt;&lt;code&gt;image-q&lt;/code&gt;&lt;/a&gt; library.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Quantization&lt;/strong&gt; is the process of collapsing similar colors into a smaller, fixed set.&lt;/p&gt;

&lt;p&gt;For example, if an AI-generated image contains 1500 shades of green, quantization will reduce that to maybe 4 or 8 — and remap every pixel to the closest match.&lt;/p&gt;
&lt;/blockquote&gt;
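Wu quantization itself is fairly involved, but the second half of the job — remapping every pixel to its nearest palette entry — fits in a few lines (plain squared RGB distance here; libraries like `image-q` offer smarter, more perceptual color-distance formulas):

```javascript
// Remap each [r, g, b] pixel to the nearest color in a fixed palette,
// using squared Euclidean distance in RGB space.
function remapToPalette(pixels, palette) {
  return pixels.map(([r, g, b]) => {
    let best = palette[0], bestDist = Infinity;
    for (const [pr, pg, pb] of palette) {
      const d = (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2;
      if (d < bestDist) { bestDist = d; best = [pr, pg, pb]; }
    }
    return best;
  });
}
```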

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ji3wpbm0mxdrof1ira1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ji3wpbm0mxdrof1ira1.png" alt="Quantized palette" width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Downscaling with Dominant Color Voting
&lt;/h3&gt;

&lt;p&gt;A simple nearest-neighbor downscale sometimes falls apart on AI-generated pixel art — edges get muddy, colors bleed, and the result just doesn’t feel right.&lt;/p&gt;

&lt;p&gt;So I implemented a smarter method: &lt;strong&gt;block-wise color voting&lt;/strong&gt; based on dominant color.&lt;/p&gt;

&lt;p&gt;Here’s how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We take each block (say, 43×43 pixels — one “pseudo-pixel”)&lt;/li&gt;
&lt;li&gt;Count how many times each color appears inside the block&lt;/li&gt;
&lt;li&gt;If a single color clearly dominates (leading the runner-up by more than &lt;strong&gt;5%&lt;/strong&gt; of the block), we assign it as the block’s final color&lt;/li&gt;
&lt;li&gt;If there’s no clear winner — we fall back to a &lt;strong&gt;mean RGB blend&lt;/strong&gt; to avoid visual gaps&lt;/li&gt;
&lt;/ol&gt;
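The four steps above can be sketched directly, one block at a time (the 5% margin matches step 3; names are mine):

```javascript
// Pick the output color for one block of quantized [r, g, b] pixels:
// the dominant color if it beats the runner-up by more than 5% of the
// block's pixels, otherwise a mean RGB blend.
function voteBlockColor(blockPixels) {
  const counts = new Map();
  for (const p of blockPixels) {
    const key = p.join(",");
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  const ranked = [...counts.entries()].sort((a, b) => b[1] - a[1]);
  const total = blockPixels.length;
  const margin = ranked.length > 1 ? (ranked[0][1] - ranked[1][1]) / total : 1;
  if (margin > 0.05) {
    return ranked[0][0].split(",").map(Number); // clear winner
  }
  // No clear winner: fall back to the mean RGB of the block
  const mean = [0, 0, 0];
  for (const [r, g, b] of blockPixels) {
    mean[0] += r; mean[1] += g; mean[2] += b;
  }
  return mean.map(c => Math.round(c / total));
}
```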

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxufl83x7zclu7wi7peta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxufl83x7zclu7wi7peta.png" alt="Demo block" width="256" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ch35bg42t3qt2cg81wa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ch35bg42t3qt2cg81wa.png" alt="Colors distribution" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Result
&lt;/h3&gt;

&lt;p&gt;At the end of the pipeline, we get a clean, &lt;strong&gt;game-ready pixel art image&lt;/strong&gt; — properly aligned, color-limited, and downscaled with intention.&lt;/p&gt;

&lt;p&gt;There are also a couple of optional cleanup steps that help polish the result even further:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Morphological cleanup&lt;/strong&gt; before downscaling — useful for removing stray pixels or minor noise left by the AI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post-downscale quantization&lt;/strong&gt; — sometimes new colors appear after blending or averaging, so we run one more palette cleanup pass to ensure the final image stays crisp and consistent&lt;/li&gt;
&lt;/ul&gt;
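
&lt;p&gt;The post-downscale quantization pass can be as simple as snapping every pixel back to its nearest palette entry. A minimal sketch — the name &lt;code&gt;snapToPalette&lt;/code&gt; and the plain squared-RGB distance are assumptions here; a real pass might use a perceptual color metric instead:&lt;/p&gt;

```javascript
// Snap every pixel to the nearest palette color by squared RGB distance.
// Illustrative sketch; the actual unfake.js pass may differ.
function snapToPalette(pixels, palette) {
  return pixels.map(([r, g, b]) => {
    let best = palette[0];
    let bestDist = Infinity;
    for (const [pr, pg, pb] of palette) {
      const d = (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2;
      if (d >= bestDist) continue; // keep only strictly closer colors
      bestDist = d;
      best = [pr, pg, pb];
    }
    return best;
  });
}
```

&lt;p&gt;Any off-palette colors introduced by the mean-RGB fallback get pulled back onto the palette, so the final sprite stays within its intended color count.&lt;/p&gt;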

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof1t9ebu8o96l4qwrv2m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof1t9ebu8o96l4qwrv2m.png" alt="Final result" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ Of course, not every image turns out perfect on the first try.&lt;/p&gt;

&lt;p&gt;Some AI outputs are just too messy — or too painterly — for a fully automated cleanup. In those cases, you might need to tweak a few settings (like the downscale method or target palette size), or give the final image a little &lt;strong&gt;pixel-editor love&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;(&lt;em&gt;This tool won’t replace pixel artists — but it can definitely save them hours.&lt;/em&gt;)&lt;/p&gt;

&lt;h3&gt;
  
  
  More Examples
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1catzj9g5775wvwe5ik5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1catzj9g5775wvwe5ik5.png" alt="Corgi" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpgoxiiyvq23wesq6k6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpgoxiiyvq23wesq6k6d.png" alt="Dude sprite" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle8u6nc39wi7iru3nngx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle8u6nc39wi7iru3nngx.png" alt="Another dude sprite" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo8ze5nkjupo623iwl3lm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo8ze5nkjupo623iwl3lm.png" alt="Another goose sprite" width="800" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s next?
&lt;/h3&gt;

&lt;p&gt;There’s still room to push this further.&lt;/p&gt;

&lt;p&gt;One possible direction is integrating ideas from &lt;a href="https://johanneskopf.de/publications/pixelart/" rel="noopener noreferrer"&gt;&lt;em&gt;Depixelizing Pixel Art&lt;/em&gt; by Kopf &amp;amp; Lischinski (2011)&lt;/a&gt;:&lt;br&gt;
Convert the sprite into vector curves, clean them up, then re-rasterize with full control.&lt;/p&gt;

&lt;p&gt;Other future improvements could include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More advanced &lt;strong&gt;grid alignment algorithms&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Alternative &lt;strong&gt;downscaling strategies&lt;/strong&gt; with better structure preservation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’ve got ideas — or want to build something on top — the project is open-source and waiting for your forks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Try it now
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Interactive tool on Itch:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://jenissimo.itch.io/unfaker" rel="noopener noreferrer"&gt;https://jenissimo.itch.io/unfaker&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source code on GitHub:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/jenissimo/unfake.js" rel="noopener noreferrer"&gt;https://github.com/jenissimo/unfake.js&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Pull requests are welcome! If you spot a bug, edge case, or just have an idea — feel free to open an issue or contribute.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you enjoyed this article, feel free to follow me on &lt;strong&gt;&lt;a href="http://x.com/jenissimo" rel="noopener noreferrer"&gt;X&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>computervision</category>
      <category>tooling</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
