DEV Community

nazunya
colorlip: A JavaScript library for extracting perceptually representative colors from illustrations and photos

Hey everyone, I'm nazunya!

A while ago, I was building a local service for collecting illustrations and reference images. At that time, I created a color extraction feature to obtain colors that represent those illustrations and photos. It turned out surprisingly well, so I decided to extract it into a library.

I have released v1.0.0 as an npm package called colorlip, and I'd like to share it with you!

Quick Overview!

  • A lightweight library available for browsers and Node.js that extracts representative colors and palettes based on human perception.

  • Tuned for illustrations and product photos, making it ideal for creator-focused services, social media, and e-commerce sites.

  • In addition to Hex and RGB, it outputs HSL, Lab, OKLCH, and more, which is convenient for post-processing and browser-based usage.

Try it out right now:
https://na-zu-nya.github.io/colorlip/

GitHub:
https://github.com/na-zu-nya/colorlip


Motivation

While building a service to organize illustrations and other media, I was developing a mechanism to extract representative colors so that users could collect images with similar color schemes.

I tried using the average color and existing well-known libraries like Color Thief and node-vibrant, but I ran into the following issues:

  • The background color had too much influence.
  • It would pick up unexpected colors.
  • The results did not match the perceived atmosphere of the colors.

Since existing solutions are designed to be general-purpose, they felt a bit lacking when applied to illustrations and artwork.

I decided to create a library that is tailored to anime and manga-style illustrations, one that can accurately output the colors that perceptually define an illustration, while also being reasonably fast.

How color extraction works

If you're curious about the nitty-gritty, here’s a breakdown!

In short, it works by downsampling the image to get a feel for its features first. Then, it quantizes the pixels into RGB and groups them in Lab color space. After that, it scores the colors based on things like whether they’re toward the center, the edges, or spread out, to build a palette. Finally, it picks the representative and accent colors from that palette.

Step by step:
  • Resize the image to 150x150.
    • When using the core entry point, raw pixel data can be passed in directly.
  • Perform stride sampling to estimate the median and spread of overall saturation, and whether edges are concentrated in the center or distributed throughout the image.
    • We infer that images with distributed edges are likely photographs or motifs extending to the periphery, while those with edges concentrated in the center likely have the subject in the middle.
    • This information is used in subsequent steps to adjust the weighting of central versus peripheral colors.
  • Filter out pixels with alpha below 0.5, and apply a lower bound on saturation and an allowed range of lightness.
    • This removes noise and white background colors.
  • Quantize pixels and apply weights that incorporate centrality, edge intensity, saturation, and alpha.
  • Merge similar colors in Lab space to create clusters.
    • We use Lab space, which is closer to human perception, because grouping in RGB space tends to merge colors that are perceptually distant.
    • Also, since converting all pixels to Lab space is computationally expensive, we perform quantization in RGB first.
  • Score these clusters based on the weights.
    • Using only occurrence frequency tends to favor colors spread across the entire image, so we utilize the weights calculated in the previous steps.
    • We determine the score by building on those previous weights and applying factors for centrality ratio, peripheral ratio, center of gravity, variance, and accent suitability.
  • Merge similar colors by checking color differences.
    • This prevents the palette from consisting only of similar shades of a specific color.
  • Finally, pick dominant and accent colors to create a palette.
    • For dominant colors, we re-evaluate the top-scoring candidates based on their scores, weights, and whether they stand out in the OKLCH color space.
    • For accent colors, we select candidates that are sufficiently distant from the dominant colors and likely to be visually appealing.
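The filtering and weighting steps described above can be sketched roughly as follows. This is a simplified illustration, not colorlip's actual code: the thresholds (0.5 alpha, the saturation and lightness bounds) and the weighting curve are assumptions for demonstration only.

```typescript
// One RGBA pixel with its (x, y) position in a downsampled W×H image.
interface Pixel { r: number; g: number; b: number; a: number; x: number; y: number; }

// HSL saturation and lightness from 0-255 RGB, both in [0, 1].
function satLight(r: number, g: number, b: number): [number, number] {
  const max = Math.max(r, g, b) / 255, min = Math.min(r, g, b) / 255;
  const l = (max + min) / 2;
  const s = max === min ? 0 : (max - min) / (1 - Math.abs(2 * l - 1));
  return [s, l];
}

// Drop near-transparent, near-gray, and near-white/near-black pixels,
// then weight the survivors by how close they sit to the image center.
function filterAndWeight(pixels: Pixel[], w: number, h: number) {
  const cx = w / 2, cy = h / 2;
  const maxDist = Math.hypot(cx, cy);
  return pixels
    .filter(p => {
      if (p.a < 0.5) return false;                 // truncate low alpha
      const [s, l] = satLight(p.r, p.g, p.b);
      return s >= 0.08 && l >= 0.05 && l <= 0.95;  // saturation / lightness bounds
    })
    .map(p => {
      const centrality = 1 - Math.hypot(p.x - cx, p.y - cy) / maxDist;
      return { ...p, weight: 0.5 + 0.5 * centrality }; // center pixels count more
    });
}
```

In the real pipeline the center-versus-periphery weighting would also be modulated by the edge-distribution estimate from the stride-sampling step, so that photos with subjects spread across the frame are not over-penalized at the edges.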
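Here is a minimal sketch of the Lab-space merging idea: quantized RGB colors are converted to CIELAB (D65 white point) and greedily deduplicated by perceptual distance. The CIE76 distance metric and the threshold of 12 are illustrative choices for this sketch, not colorlip's actual tuning.

```typescript
type RGB = [number, number, number];
type Lab = [number, number, number];

// Convert one sRGB channel (0-255) to linear light.
function linearize(c: number): number {
  const v = c / 255;
  return v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
}

// sRGB (0-255 per channel) -> CIELAB under the D65 white point.
function srgbToLab(r: number, g: number, b: number): Lab {
  const [lr, lg, lb] = [linearize(r), linearize(g), linearize(b)];
  // Linear RGB -> XYZ (standard sRGB matrix, D65).
  const x = 0.4124564 * lr + 0.3575761 * lg + 0.1804375 * lb;
  const y = 0.2126729 * lr + 0.7151522 * lg + 0.0721750 * lb;
  const z = 0.0193339 * lr + 0.1191920 * lg + 0.9503041 * lb;
  const f = (t: number) => (t > 0.008856 ? Math.cbrt(t) : (903.3 * t + 16) / 116);
  const [fx, fy, fz] = [f(x / 0.95047), f(y / 1.0), f(z / 1.08883)];
  return [116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)];
}

// Squared CIE76 color difference in Lab space.
function deltaE2(a: Lab, b: Lab): number {
  return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2;
}

// Greedily drop colors whose Lab distance to an already-kept color
// is below the threshold, so perceptual near-duplicates collapse.
function mergeInLab(colors: RGB[], threshold = 12): RGB[] {
  const kept: { rgb: RGB; lab: Lab }[] = [];
  for (const rgb of colors) {
    const lab = srgbToLab(rgb[0], rgb[1], rgb[2]);
    if (!kept.some(k => deltaE2(k.lab, lab) < threshold * threshold)) {
      kept.push({ rgb, lab });
    }
  }
  return kept.map(k => k.rgb);
}
```

Doing the cheap quantization in RGB first and only converting the surviving bins to Lab, as the steps above describe, keeps the expensive conversion off the per-pixel hot path.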

How’s this different from other libraries?

Most existing libraries rely on standard quantization algorithms to pick representative colors based on their frequency. A lot of them use MMCQ, which reduces color counts to find a palette. At its core, MMCQ is a straightforward compression and summarization tool: it gathers colors, chunks them into boxes in RGB space, and picks the winners.
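For reference, the median-cut core of that approach can be sketched in a few lines. This is a deliberately simplified version for illustration; production MMCQ implementations work on histograms and use volume- and population-weighted box selection.

```typescript
type RGB = [number, number, number];

// Minimal median cut: repeatedly split the box with the widest
// single-channel range at its median, then average each final box.
function medianCut(pixels: RGB[], paletteSize: number): RGB[] {
  const boxes: RGB[][] = [pixels.slice()];
  while (boxes.length < paletteSize) {
    // Find the box and channel with the widest value range.
    let bestBox = 0, bestChan = 0, bestRange = -1;
    boxes.forEach((box, i) => {
      for (let c = 0; c < 3; c++) {
        const vals = box.map(p => p[c]);
        const range = Math.max(...vals) - Math.min(...vals);
        if (range > bestRange) { bestRange = range; bestBox = i; bestChan = c; }
      }
    });
    if (bestRange <= 0) break; // nothing left to split
    const box = boxes[bestBox];
    box.sort((a, b) => a[bestChan] - b[bestChan]);
    const mid = Math.floor(box.length / 2);
    boxes.splice(bestBox, 1, box.slice(0, mid), box.slice(mid));
  }
  // Average each box to get its representative color.
  return boxes.map(box => {
    const n = box.length;
    const sum = box.reduce((s, p) => [s[0] + p[0], s[1] + p[1], s[2] + p[2]], [0, 0, 0]);
    return [Math.round(sum[0] / n), Math.round(sum[1] / n), Math.round(sum[2] / n)] as RGB;
  });
}
```

Note what is missing here: no notion of where a pixel sits in the image, no noise filtering, no perceptual distance. Every pixel counts the same, which is exactly why a large flat background can dominate the palette.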

colorlip takes a more practical, hands-on approach:

  • It analyzes the image's composition and edge distribution.
  • It figures out which parts of the image are focal points versus the background.
  • It clears out colors that look like noise.
  • It groups together colors that actually look similar to the human eye.
  • It rates colors based on which ones dominate the scene and which ones act as accents.

Instead of just summarizing data, it’s more like it’s "curating" the colors. It’s a bit more heuristic and goes a step beyond typical image processing. While that makes it a little less "one-size-fits-all" than generic libraries, I think it’s reached a point where it’s genuinely useful for real-world tasks.

It should work great for things like:

  • Character art and illustrations
  • Product thumbnails
  • Extracting UI themes
  • Images where the subject is clearly distinct from the background

If you're working with these kinds of images, colorlip should be a solid choice.

Comparison

I've compared this library's speed and color extraction results against other well-known ones.
Please note that each library has different uses and purposes, so take this as a rough comparison rather than a definitive benchmark.
I hope this will serve as a reference for your library selection.

First, here are the comparison results for photos.

Comparison of landscape photos

Comparison of snapshots

While there isn't a massive difference, I believe that by picking up both dominant and accent colors, colorlip captures the overall impression quite well.

Next are the illustrations. I'm using my own artwork here since I couldn't find freely available sample illustrations. I have tested it with various illustrations for verification, and the results have been generally positive.

Comparison of illustrations

There are also speed benefits: when semantic colors are not required, colorlip's execution time is nearly indistinguishable from other lightweight libraries, and I believe it produces better results.
It works for server-side batch processing as well as for quickly picking colors in the browser, which makes for smoother UX.

Summary

Please give it a try! I hope you like the library. 🌷

I'm also working on a web app called paletty.cc, a palette manager built on colorlip that is currently in beta.

https://paletty.cc

Thanks for reading!
