<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: taesoon jang</title>
    <description>The latest articles on DEV Community by taesoon jang (@taesoon_jang_6eb84d38b8f5).</description>
    <link>https://dev.to/taesoon_jang_6eb84d38b8f5</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3886175%2F127f4d93-c94a-404f-aed6-634098c25d2f.png</url>
      <title>DEV Community: taesoon jang</title>
      <link>https://dev.to/taesoon_jang_6eb84d38b8f5</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/taesoon_jang_6eb84d38b8f5"/>
    <language>en</language>
    <item>
      <title>I built a pixelation method optimized for human perception</title>
      <dc:creator>taesoon jang</dc:creator>
      <pubDate>Sat, 18 Apr 2026 16:22:04 +0000</pubDate>
      <link>https://dev.to/taesoon_jang_6eb84d38b8f5/i-built-a-pixelation-method-optimized-for-human-perception-473b</link>
      <guid>https://dev.to/taesoon_jang_6eb84d38b8f5/i-built-a-pixelation-method-optimized-for-human-perception-473b</guid>
      <description>&lt;h2&gt;
  
  
  0. What is pixelation?
&lt;/h2&gt;

&lt;p&gt;Pixelation is the process of converting an image into a grid of large pixels.&lt;/p&gt;

&lt;p&gt;It’s commonly used in pixel art, games, or image compression.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hj1otykaql0j90j553q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hj1otykaql0j90j553q.jpg" alt=" " width="800" height="534"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Original&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw06qv79j0u44x6eqfuy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw06qv79j0u44x6eqfuy.png" alt=" " width="800" height="525"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Pixelated&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  1. Problem: Pixelation looks worse than it should
&lt;/h2&gt;

&lt;p&gt;Most pixelation algorithms try to preserve as much information as possible.&lt;/p&gt;

&lt;p&gt;But at very low resolutions, like 10×10, 15×15, or 20×20, this approach breaks down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9q9ukjj9qr67mlef0m1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9q9ukjj9qr67mlef0m1.png" alt=" " width="256" height="256"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Original&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvf76nwaiywp1ogi9zf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvf76nwaiywp1ogi9zf5.png" alt=" " width="255" height="255"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;15×15 pixelated&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At small sizes, these images stop looking like objects and start looking like noise.&lt;/p&gt;

&lt;p&gt;At this scale, preserving information actually hurts readability.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Context: This started from a nonogram problem
&lt;/h2&gt;

&lt;p&gt;I ran into this while building a color nonogram game.&lt;br&gt;
I wanted to automatically generate nonogram puzzles from AI-generated images.&lt;/p&gt;

&lt;p&gt;Nonograms have unusually strict constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;resolution: 10×10 to 15×15&lt;/li&gt;
&lt;li&gt;colors: 2–3 max&lt;/li&gt;
&lt;li&gt;must be immediately recognizable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnlhvrqb7t0ato2062jw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnlhvrqb7t0ato2062jw.png" alt=" " width="800" height="794"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I tried a few standard approaches, but they all produced similarly noisy results.&lt;/p&gt;
&lt;h2&gt;
  
  
  3. Key idea: Optimize for perception, not fidelity
&lt;/h2&gt;

&lt;p&gt;So I stopped trying to preserve the original image.&lt;/p&gt;

&lt;p&gt;Instead, I focused on one question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“What makes a tiny image recognizable?”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  4. Color Quantization (Perceptual-first)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Lab color space&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of RGB, I used Lab color space for distance calculation.&lt;/p&gt;

&lt;p&gt;RGB distance doesn’t align well with human perception.&lt;br&gt;
Two colors that look similar to us can be far apart in RGB space.&lt;/p&gt;

&lt;p&gt;Lab space fixes this: Euclidean distance in Lab roughly tracks perceived color difference.&lt;/p&gt;
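
&lt;p&gt;As a rough, stdlib-only sketch of what &amp;quot;distance in Lab&amp;quot; means (this approximates sRGB decoding with a plain 2.2 gamma and skips the linear segment of the Lab response near black, so very dark values drift from the true conversion):&lt;/p&gt;

```python
def srgb_to_lab(rgb):
    # Approximate sRGB decoding with a pure 2.2 gamma; the official
    # piecewise curve differs slightly near black.
    r, g, b = ((c / 255.0) ** 2.2 for c in rgb)
    # Linear RGB to XYZ (D65 reference white).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # Normalize by the D65 white point, then apply the cube-root
    # response; the linear segment near black is omitted for brevity.
    fx, fy, fz = ((v / n) ** (1.0 / 3.0)
                  for v, n in ((x, 0.95047), (y, 1.0), (z, 1.08883)))
    return (116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz))

def lab_distance(c1, c2):
    # Euclidean distance in Lab, which roughly tracks perceived difference.
    diffs = zip(srgb_to_lab(c1), srgb_to_lab(c2))
    return sum((a - b) ** 2 for a, b in diffs) ** 0.5
```

&lt;p&gt;Libraries such as scikit-image provide an exact conversion; the point here is only that clustering and palette distances happen in Lab, not RGB.&lt;/p&gt;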

&lt;p&gt;&lt;strong&gt;Over-segmentation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If I want k colors, I don’t cluster into k.&lt;/p&gt;

&lt;p&gt;Instead:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;cluster into k × 5 colors&lt;/li&gt;
&lt;li&gt;pick the most frequent → main color&lt;/li&gt;
&lt;li&gt;choose k colors that are far from the main color and from each other&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This creates a palette with strong contrast instead of subtle variation.&lt;/p&gt;
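
&lt;p&gt;The three steps above can be sketched as a greedy farthest-point selection. This is a simplified version, not the exact project code: it assumes the k × 5 clustering has already produced at least &lt;code&gt;k&lt;/code&gt; candidate colors with per-cluster pixel counts.&lt;/p&gt;

```python
def pick_palette(cluster_colors, counts, k, distance):
    # cluster_colors: representative colors from the k * 5 over-segmented
    # clustering step; counts: pixel count per cluster; distance: a
    # perceptual metric such as Lab distance.
    pairs = sorted(zip(cluster_colors, counts), key=lambda p: p[1], reverse=True)
    main = pairs[0][0]  # most frequent cluster becomes the main color
    palette = [main]
    remaining = [color for color, _ in pairs[1:]]
    for _ in range(k - 1):
        # greedily take the candidate farthest from everything chosen so far
        best = max(remaining, key=lambda c: min(distance(c, p) for p in palette))
        palette.append(best)
        remaining.remove(best)
    return palette
```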

&lt;p&gt;&lt;strong&gt;Representative color selection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most approaches use cluster centroids (average color).&lt;/p&gt;

&lt;p&gt;But averages often produce “muddy” colors.&lt;/p&gt;

&lt;p&gt;Instead, I select the most frequent color inside each cluster.&lt;/p&gt;

&lt;p&gt;This keeps colors sharp and recognizable.&lt;/p&gt;
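
&lt;p&gt;Picking the mode rather than the mean of a cluster takes only a couple of lines; in this sketch, pixels are hashable RGB tuples and the clustering step itself is omitted:&lt;/p&gt;

```python
from collections import Counter

def representative_color(cluster_pixels):
    # Mode, not mean: averaging a cluster mixes its members into a muddy
    # in-between color, while the most frequent actual color stays vivid.
    return Counter(cluster_pixels).most_common(1)[0][0]
```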

&lt;p&gt;&lt;strong&gt;Contrast pruning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After selecting k colors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;remove colors that are too similar&lt;/li&gt;
&lt;li&gt;allow palette size to shrink&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This further improves readability.&lt;/p&gt;
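
&lt;p&gt;A minimal sketch of the pruning step, with a hypothetical &lt;code&gt;min_separation&lt;/code&gt; threshold (the actual cutoff the project uses is not shown here):&lt;/p&gt;

```python
from operator import ge  # ge(a, b): a greater than or equal to b

def prune_palette(palette, distance, min_separation):
    # Drop colors that sit too close to an already-kept color.
    # The palette is allowed to shrink: fewer, well-separated colors
    # read better at tiny sizes than the full k.
    kept = []
    for color in palette:
        if all(ge(distance(color, c), min_separation) for c in kept):
            kept.append(color)
    return kept
```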
&lt;h2&gt;
  
  
  5. Downscaling (Winner-takes-all)
&lt;/h2&gt;

&lt;p&gt;Traditional resizing blends colors.&lt;/p&gt;

&lt;p&gt;Example: 51% red + 49% blue → purple&lt;/p&gt;

&lt;p&gt;This makes the result harder to recognize.&lt;/p&gt;
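
&lt;p&gt;For comparison, conventional area-average downscaling looks roughly like this (one output pixel computed from a flat list of its source block’s RGB tuples):&lt;/p&gt;

```python
def downscale_average(block_colors):
    # Area-average resizing: the output pixel is the mean of its source
    # block, so a 51% red / 49% blue block lands near purple.
    n = len(block_colors)
    return tuple(sum(color[i] for color in block_colors) / n for i in range(3))
```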

&lt;p&gt;&lt;strong&gt;Block majority voting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead, for each output pixel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;take the block of source pixels it covers&lt;/li&gt;
&lt;li&gt;select the most frequent color label in that block&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example: 51% red + 49% blue → red&lt;/p&gt;

&lt;p&gt;This preserves structure instead of mixing it.&lt;/p&gt;
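
&lt;p&gt;Majority voting can be sketched as follows, assuming (as a simplification) that the image has already been mapped to a 2-D grid of palette indices and that its dimensions divide evenly by the block size:&lt;/p&gt;

```python
from collections import Counter

def downscale_majority(labels, block):
    # labels: 2-D grid of palette indices; block: size of each square cell.
    h = len(labels) // block
    w = len(labels[0]) // block
    out = []
    for by in range(h):
        row = []
        for bx in range(w):
            cell = [labels[by * block + dy][bx * block + dx]
                    for dy in range(block) for dx in range(block)]
            # winner-takes-all: a 51% red / 49% blue cell becomes red,
            # not purple
            row.append(Counter(cell).most_common(1)[0][0])
        out.append(row)
    return out
```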

&lt;p&gt;&lt;strong&gt;Accent color boosting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Small but important features often get lost.&lt;/p&gt;

&lt;p&gt;To fix this, votes for non-main (accent) colors get extra weight.&lt;/p&gt;

&lt;p&gt;This helps those details survive the vote.&lt;/p&gt;
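
&lt;p&gt;A sketch of the weighted vote; the boost factor of 3 is an illustrative value, not necessarily the one the project uses:&lt;/p&gt;

```python
from collections import Counter

def weighted_majority(cell_labels, main_label, boost=3.0):
    # Votes for accent (non-main) colors are multiplied by boost, so a
    # small but important detail can outvote the background color.
    votes = Counter()
    for label in cell_labels:
        votes[label] += 1.0 if label == main_label else boost
    return max(votes.items(), key=lambda kv: kv[1])[0]
```

&lt;p&gt;With &lt;code&gt;boost=1.0&lt;/code&gt; this degrades to plain majority voting, which is a convenient way to A/B the two behaviors.&lt;/p&gt;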
&lt;h2&gt;
  
  
  6. Result
&lt;/h2&gt;

&lt;p&gt;With this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;fewer colors&lt;/li&gt;
&lt;li&gt;sharper edges&lt;/li&gt;
&lt;li&gt;clearer shapes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s look at some results.&lt;br&gt;
&lt;em&gt;Original / traditional pixelation / limited palette / perception-optimized&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k1vdprn6as7x641cb7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k1vdprn6as7x641cb7q.png" alt=" " width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zebfjeh9gn5jp9togv5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zebfjeh9gn5jp9togv5.png" alt=" " width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjgxacfu2a9hnwug7hp9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjgxacfu2a9hnwug7hp9.png" alt=" " width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp12z0ho6721y3r3cowo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp12z0ho6721y3r3cowo.png" alt=" " width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using this approach, I generated over 2,000 nonogram puzzles from AI-generated images.&lt;br&gt;
Most of them were immediately recognizable without manual adjustment.&lt;/p&gt;

&lt;p&gt;I used this in my nonogram project (playable demo, web version):&lt;br&gt;
&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://nonokingdom.taemon.us/game/" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;nonokingdom.taemon.us&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  7. Possible applications
&lt;/h2&gt;

&lt;p&gt;This approach could be useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ultra-low resolution icons&lt;/li&gt;
&lt;li&gt;fast-loading web images&lt;/li&gt;
&lt;li&gt;AI-generated asset cleanup&lt;/li&gt;
&lt;li&gt;puzzle generation pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  8. Closing
&lt;/h2&gt;

&lt;p&gt;This came out of trying to make small images more readable.&lt;br&gt;
It turns out that preserving less can sometimes work better.&lt;/p&gt;

</description>
      <category>python</category>
      <category>imageprocessing</category>
      <category>programming</category>
      <category>gamedev</category>
    </item>
  </channel>
</rss>
