DEV Community

Alan West

How to Convert Images to 1-Bit Pixel Art Without Losing All the Detail

Have you ever tried converting a photograph or illustration to pure black and white — not grayscale, but literally just black or white pixels? If you have, you know the result is usually garbage.

I hit this exact problem a few weeks ago. I was building a feature for a print-preview tool that needed to render images on a monochrome thermal printer. My first attempt — a simple threshold — turned every image into an unrecognizable blob. Faces disappeared. Landscapes became abstract nightmares. The detail was just gone.

Turns out, rendering images in 1-bit color is a solved problem. It's just not an obvious one.

Why Simple Thresholding Fails

The naive approach is straightforward: loop through every pixel, and if its brightness is above 128 (on a 0-255 scale), make it white. Otherwise, make it black.

from PIL import Image

def naive_threshold(image_path, output_path):
    img = Image.open(image_path).convert("L")  # Convert to grayscale first
    pixels = img.load()
    width, height = img.size

    for y in range(height):
        for x in range(width):
            pixels[x, y] = 255 if pixels[x, y] > 128 else 0

    img.save(output_path)

This works in the most literal sense. But the output looks terrible for anything with gradients, subtle shading, or fine detail. A smooth sky becomes a hard edge between black and white. Textures vanish completely.

The root cause is quantization error. When you snap a pixel that's, say, 130 (barely above the threshold) to 255, you've introduced an error of -125 (original minus new value): the output is 125 brightness levels lighter than it should be. That error just disappears — it's thrown away. Multiply that across thousands of pixels and you've lost a massive amount of visual information.

The Fix: Error Diffusion Dithering

The key insight behind dithering is simple: don't throw away the error. Instead, spread it to neighboring pixels that haven't been processed yet. This way, the average brightness of a region stays close to the original, even though each individual pixel is either black or white.

The most well-known algorithm for this is Floyd-Steinberg dithering, published in 1976. Here's how it works:

  1. Process pixels left-to-right, top-to-bottom
  2. For each pixel, snap it to black or white
  3. Calculate the error (original value minus the new value)
  4. Distribute that error to four neighboring pixels using a specific weighting

The distribution pattern looks like this:

         [current]   7/16
  3/16     5/16      1/16

The pixel to the right gets 7/16 of the error. The pixel below-left gets 3/16. Below gets 5/16. Below-right gets 1/16. These fractions add up to 1, so no error is lost.

Implementing Floyd-Steinberg in Python

Here's a working implementation:

import numpy as np
from PIL import Image

def floyd_steinberg_dither(image_path, output_path):
    img = Image.open(image_path).convert("L")
    # Use float array so error accumulation doesn't clip
    pixels = np.array(img, dtype=np.float64)
    height, width = pixels.shape

    for y in range(height):
        for x in range(width):
            old_val = pixels[y, x]
            new_val = 255.0 if old_val > 128 else 0.0
            error = old_val - new_val
            pixels[y, x] = new_val

            # Spread the quantization error to neighbors
            if x + 1 < width:
                pixels[y, x + 1] += error * 7 / 16
            if y + 1 < height:
                if x - 1 >= 0:
                    pixels[y + 1, x - 1] += error * 3 / 16
                pixels[y + 1, x] += error * 5 / 16
                if x + 1 < width:
                    pixels[y + 1, x + 1] += error * 1 / 16

    result = Image.fromarray(np.clip(pixels, 0, 255).astype(np.uint8))
    result.save(output_path)

The difference is dramatic. Where the naive threshold gives you blobs, Floyd-Steinberg produces something that actually looks like the original image from a normal viewing distance. Your brain fills in the gradients from the dithering pattern.

A Couple Gotchas I Ran Into

The dtype trap

Notice I'm using float64 for the pixel array. If you keep it as uint8 (the default when loading an image), the error values will clip. A negative error wraps around to 255 instead of staying negative, and your output turns into psychedelic noise. I spent about 20 minutes staring at corrupted output before I caught this.

Serpentine scanning

Basic Floyd-Steinberg processes every row left-to-right. This can create visible directional artifacts — a kind of diagonal streaking. The fix is serpentine scanning: alternate direction each row. Process row 0 left-to-right, row 1 right-to-left, row 2 left-to-right, and so on. You need to mirror the error distribution kernel when going right-to-left.
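Here's a minimal sketch of what that looks like, adapting the earlier implementation. The function name is mine; the trick is that "right" now means "the direction of travel," so the 7/16 weight always lands on the pixel we'll visit next:

```python
import numpy as np

def serpentine_dither(pixels):
    """Floyd-Steinberg with serpentine scanning (sketch).

    pixels: 2-D grayscale array of values in [0, 255].
    Returns a binarized uint8 copy.
    """
    px = np.asarray(pixels, dtype=np.float64).copy()
    height, width = px.shape

    for y in range(height):
        # Even rows scan left-to-right, odd rows right-to-left
        forward = (y % 2 == 0)
        xs = range(width) if forward else range(width - 1, -1, -1)
        step = 1 if forward else -1  # direction of travel for this row

        for x in xs:
            old_val = px[y, x]
            new_val = 255.0 if old_val > 128 else 0.0
            error = old_val - new_val
            px[y, x] = new_val

            # Mirrored kernel: offsets are relative to travel direction
            if 0 <= x + step < width:
                px[y, x + step] += error * 7 / 16
            if y + 1 < height:
                if 0 <= x - step < width:
                    px[y + 1, x - step] += error * 3 / 16
                px[y + 1, x] += error * 5 / 16
                if 0 <= x + step < width:
                    px[y + 1, x + step] += error * 1 / 16

    return np.clip(px, 0, 255).astype(np.uint8)
```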

Threshold tuning

The hardcoded 128 threshold isn't always ideal. For images that are predominantly dark or light, you might want to adjust it. Some implementations use the mean brightness of the image as the threshold. Others use adaptive thresholds calculated per-region. Experiment with your specific use case.
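Both variants are easy to sketch. These helpers (names are mine) compute thresholds you'd substitute for the hardcoded 128 in the dither loop — the global version returns one number, the per-region version returns an array so the comparison becomes `old_val > thresholds[y, x]`:

```python
import numpy as np

def mean_threshold(pixels):
    """Global threshold: the image's own mean brightness."""
    return float(np.asarray(pixels, dtype=np.float64).mean())

def regional_thresholds(pixels, block=32):
    """Per-region thresholds: the mean brightness of each
    block x block tile, expanded back to full image shape."""
    px = np.asarray(pixels, dtype=np.float64)
    h, w = px.shape
    out = np.empty_like(px)
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            tile = px[y0:y0 + block, x0:x0 + block]
            out[y0:y0 + block, x0:x0 + block] = tile.mean()
    return out
```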

Beyond Floyd-Steinberg

Floyd-Steinberg isn't the only game in town. Other error diffusion kernels exist with different tradeoffs:

  • Jarvis-Judice-Ninke uses a larger 3-row kernel. Smoother results but slower, and it can look a bit soft.
  • Stucki is another larger kernel that's a good middle ground.
  • Atkinson dithering (used in the original Macintosh) only distributes 6/8 of the error, intentionally losing some. This creates higher contrast output with more white space — it has a distinctive retro aesthetic.
  • Ordered dithering (Bayer matrix) isn't error diffusion at all — it uses a repeating threshold pattern. Because each pixel is decided independently, it's fast and trivially parallelizable (unlike error diffusion, where each pixel depends on its predecessors), which makes it a good fit for real-time rendering.
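Atkinson is a small change from the Floyd-Steinberg code above. Here's a sketch (not the exact implementation from my project): each of six neighbors receives 1/8 of the error, and the remaining 2/8 is deliberately dropped:

```python
import numpy as np

def atkinson_dither(pixels):
    """Atkinson dithering sketch: diffuse only 6/8 of the error.

    pixels: 2-D grayscale array of values in [0, 255].
    Returns a binarized uint8 copy.
    """
    px = np.asarray(pixels, dtype=np.float64).copy()
    height, width = px.shape
    # (row offset, col offset) of the six neighbors, each weighted 1/8:
    #        *   1/8  1/8
    #   1/8  1/8 1/8
    #        1/8
    neighbors = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]

    for y in range(height):
        for x in range(width):
            old_val = px[y, x]
            new_val = 255.0 if old_val > 128 else 0.0
            share = (old_val - new_val) / 8  # 1/8 per neighbor
            px[y, x] = new_val
            for dy, dx in neighbors:
                ny, nx = y + dy, x + dx
                if 0 <= ny < height and 0 <= nx < width:
                    px[ny, nx] += share

    return np.clip(px, 0, 255).astype(np.uint8)
```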

For my thermal printer project, I ended up going with Atkinson. The higher contrast worked better on the low-resolution print head, and the slightly stylized look was actually a feature, not a bug.

The Pillow Shortcut

If you don't need custom control, Pillow has this built in:

from PIL import Image

img = Image.open("input.png").convert("L")
# mode "1" is 1-bit, dither=FLOYDSTEINBERG is the default
dithered = img.convert("1", dither=Image.Dither.FLOYDSTEINBERG)
dithered.save("output.png")

Three lines. Done. The built-in implementation is also significantly faster than a pure Python loop since it runs in C under the hood.

But writing your own implementation is worth doing at least once. Understanding the error diffusion concept unlocks a whole category of problems — color quantization, palette mapping, audio dithering — where the same principle applies. You're trading spatial precision for perceptual accuracy, and that tradeoff shows up everywhere in computing.

Prevention Tips for Future Image Processing Work

  • Always work in float space when doing any pixel math that involves accumulation or subtraction. Convert to your output format at the very end.
  • Profile before optimizing. The naive Python loop above is slow for large images. But reach for NumPy vectorization or C extensions only after you've confirmed the algorithm is correct.
  • Test with diverse images. An algorithm that looks great on photographs might fall apart on line art or text. I test with at least one photo, one illustration, and one high-contrast graphic.
  • Consider your output medium. Screen rendering, thermal printing, laser engraving, and e-ink displays all have different characteristics. The "best" dithering algorithm depends on where the result ends up.

The 1-bit constraint sounds impossibly limiting until you see what error diffusion can do with it. Sometimes the tightest constraints produce the most elegant solutions.
