Mix blue and yellow paint. You get a dark, murky olive green. Now open any color picker and blend blue and yellow in RGB. Bright green. Cheerful, saturated, completely wrong.
This isn't a minor rendering quirk. RGB color mixing is solving a fundamentally different equation than what happens when pigments meet on a surface. After scraping 3,065 reviews across 10 color apps, I found 15 one-star reviews from people explicitly frustrated by fake RGB mixing sold as "paint simulation." One Paleto user put it bluntly: "Another application that is supposed to let you mix paints but does NOT use real world color mixing." I spent a year building an app that does it right, and the rabbit hole went deeper than I expected.
What RGB Gets Wrong
RGB treats color as three numbers. Convenient for screens, useless for physical reality.
A real pigment isn't a point in 3D space. It's a spectral reflectance curve across 380–730nm. Ultramarine blue reflects strongly around 450nm and absorbs nearly everything else. Cadmium yellow reflects from about 530nm upward. When you physically mix them, each pigment keeps absorbing its respective wavelengths. What survives is a narrow band around 500–530nm, plus a lot of overall absorption. Dark, muted green. Not remotely the vivid lime that RGB interpolation predicts.
RGB interpolation averages perceptual encodings. It slides linearly between two encoded values in a space designed for screens, not for modeling how light interacts with matter. For pastel tints the error is tolerable. For saturated pigments it fails badly. And it fails in exactly the ways that make color mixing unintuitive to learn, because the tool you're using to explore is lying about the physics.
Kubelka-Munk in Plain Language
The standard model for opaque pigment mixing is Kubelka-Munk, a two-flux approximation from 1931. "Two-flux" means it tracks light going in (toward the substrate) and light coming back out (toward your eye). At each wavelength, two coefficients describe what happens: K for absorption and S for scattering.
The mixing rule is simple in principle. To combine two pigments, you add their K/S ratios at every wavelength, weighted by concentration. Then you convert the combined K/S curve back to a reflectance curve, and from there to a color you can display on screen. The math isn't hard. The data is the hard part.
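That mixing rule fits in a few lines. Here's a minimal sketch of single-constant K-M mixing; the three-band toy curves and helper names are illustrative, not the app's data:

```typescript
// Single-constant Kubelka-Munk mixing over sampled reflectance curves.
// Toy three-band curves below are illustrative, not measured pigment data.

/** Reflectance (0..1) at one wavelength -> Kubelka-Munk K/S ratio. */
function toKS(r: number): number {
  const R = Math.min(Math.max(r, 1e-6), 1 - 1e-6); // clamp away from 0 and 1
  return ((1 - R) ** 2) / (2 * R);
}

/** K/S ratio -> reflectance: R = 1 + q - sqrt(q^2 + 2q). */
function toReflectance(q: number): number {
  return 1 + q - Math.sqrt(q * q + 2 * q);
}

/** Mix pigments by concentration: weighted sum of K/S at each wavelength. */
function kmMix(curves: number[][], weights: number[]): number[] {
  const total = weights.reduce((a, b) => a + b, 0);
  return curves[0].map((_, i) => {
    const ks = curves.reduce((s, c, p) => s + (weights[p] / total) * toKS(c[i]), 0);
    return toReflectance(ks);
  });
}

// Toy curves sampled at short / medium / long wavelengths:
const ultramarineish = [0.50, 0.12, 0.04]; // reflects short, absorbs the rest
const cadYellowish   = [0.05, 0.55, 0.80]; // absorbs short, reflects medium + long
const mixed = kmMix([ultramarineish, cadYellowish], [1, 1]);
// The medium (green) band survives best, but weakly: a dark, muted green.
```

Note that `toKS` and `toReflectance` are exact inverses on (0, 1), so an unmixed pigment round-trips through the pipeline unchanged.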
This also explains why additive and subtractive mixing give opposite results from the same starting colors. Blue and yellow light aimed at the same spot? You're adding energy across the spectrum. Short wavelengths from the blue source, long wavelengths from the yellow source, medium wavelengths from both. Your eye gets nearly the full visible spectrum at once. Close to white. Blue and yellow paint on the same surface? Each pigment removes energy. The blue absorbs everything above ~500nm. The yellow absorbs everything below ~530nm. The only wavelengths that survive both filters are that narrow green band. You're left with less total light and a dark green.
Same input colors, opposite physical operations.
Building It in TypeScript
I built this as an iOS app using Vue 3, TypeScript, and Capacitor (no native dependencies beyond the shell). The color science runs entirely in the browser's JS engine, which created some interesting constraints.
For the Kubelka-Munk core, I'm using spectral.js and its LHTSS method, which synthesizes reflectance curves from sRGB values using 7 basis spectra. This lets you start from any screen color and get a plausible spectral curve to feed into the K-M pipeline, rather than requiring measured spectral data for every possible input.
But for the pigment matching engine, I wanted real data. I integrated measured spectral reflectance curves for 21 pigments from Golden Heavy Body Acrylics, measured with an X-Rite MS7000 spectrophotometer and published by Eric Haines. Each pigment has 31 reflectance samples (400–700nm at 10nm steps) plus precomputed K/S ratios. The paint solver runs a three-stage pipeline: prefilter the top candidates by CIEDE2000 distance, grid-search combinations of up to 5 pigments across 10 weight distributions, then apply white/black correction if the best result still exceeds a ΔE of 5.
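A stripped-down version of the grid-search stage looks like this. It's a two-pigment sketch with RMS curve error standing in for CIEDE2000; the data and function names are mine, not the app's:

```typescript
// Toy version of the solver's grid-search stage: find the two-pigment weight
// split whose Kubelka-Munk mix best matches a target reflectance curve.
const toKS = (R: number) => ((1 - R) ** 2) / (2 * R);
const toRefl = (q: number) => 1 + q - Math.sqrt(q * q + 2 * q);

/** K-M mix of two reflectance curves with weight w on the first pigment. */
function mixPair(p1: number[], p2: number[], w: number): number[] {
  return p1.map((r, i) => toRefl(w * toKS(r) + (1 - w) * toKS(p2[i])));
}

/** RMS distance between two sampled curves (CIEDE2000 stand-in). */
function rmsError(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0) / a.length);
}

/** Grid-search 11 weight steps (0.0 .. 1.0) for the best match to target. */
function bestWeight(p1: number[], p2: number[], target: number[]): number {
  let best = 0, bestErr = Infinity;
  for (let step = 0; step <= 10; step++) {
    const w = step / 10;
    const err = rmsError(mixPair(p1, p2, w), target);
    if (err < bestErr) { bestErr = err; best = w; }
  }
  return best;
}

// Sanity check: a target generated at w = 0.3 should be recovered exactly.
const blue = [0.50, 0.12, 0.04], yellow = [0.05, 0.55, 0.80];
const target = mixPair(blue, yellow, 0.3);
// bestWeight(blue, yellow, target) → 0.3
```

The real search is combinatorial over up to five pigments, which is why the CIEDE2000 prefilter matters: it keeps the candidate set small enough to brute-force.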
Perceptual color difference uses a full CIEDE2000 implementation validated against all 34 Sharma, Wu & Dalal test pairs to ±0.0001 accuracy. Getting this right matters because the entire pigment matching pipeline depends on reliably answering "how different do these two colors look?" CIEDE2000 is the current standard for that question, and the edge cases around near-zero chroma and achromatic colors will bite you if your implementation is sloppy.
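For reference, a direct transcription of the Sharma, Wu & Dalal formulation (not the app's code) fits in one function, hue angles in degrees as in the paper. The first published test pair checks it:

```typescript
// CIEDE2000 per Sharma, Wu & Dalal; Lab inputs, hue math in degrees.
function deltaE2000([L1, a1, b1]: number[], [L2, a2, b2]: number[]): number {
  const rad = Math.PI / 180, deg = 180 / Math.PI;
  const C1 = Math.hypot(a1, b1), C2 = Math.hypot(a2, b2);
  const Cbar = (C1 + C2) / 2;
  const G = 0.5 * (1 - Math.sqrt(Cbar ** 7 / (Cbar ** 7 + 25 ** 7)));
  const a1p = (1 + G) * a1, a2p = (1 + G) * a2;
  const C1p = Math.hypot(a1p, b1), C2p = Math.hypot(a2p, b2);
  const h1p = C1p === 0 ? 0 : (Math.atan2(b1, a1p) * deg + 360) % 360;
  const h2p = C2p === 0 ? 0 : (Math.atan2(b2, a2p) * deg + 360) % 360;

  const dLp = L2 - L1, dCp = C2p - C1p;
  let dhp = 0;
  if (C1p * C2p !== 0) {
    dhp = h2p - h1p;
    if (dhp > 180) dhp -= 360; else if (dhp < -180) dhp += 360;
  }
  const dHp = 2 * Math.sqrt(C1p * C2p) * Math.sin((dhp / 2) * rad);

  const Lbp = (L1 + L2) / 2, Cbp = (C1p + C2p) / 2;
  // Mean hue: the achromatic edge case (either chroma zero) uses the sum.
  let hbp = h1p + h2p;
  if (C1p * C2p !== 0) {
    if (Math.abs(h1p - h2p) <= 180) hbp = (h1p + h2p) / 2;
    else if (h1p + h2p < 360) hbp = (h1p + h2p + 360) / 2;
    else hbp = (h1p + h2p - 360) / 2;
  }
  const T = 1 - 0.17 * Math.cos((hbp - 30) * rad) + 0.24 * Math.cos(2 * hbp * rad)
          + 0.32 * Math.cos((3 * hbp + 6) * rad) - 0.20 * Math.cos((4 * hbp - 63) * rad);
  const dTheta = 30 * Math.exp(-(((hbp - 275) / 25) ** 2));
  const RC = 2 * Math.sqrt(Cbp ** 7 / (Cbp ** 7 + 25 ** 7));
  const SL = 1 + (0.015 * (Lbp - 50) ** 2) / Math.sqrt(20 + (Lbp - 50) ** 2);
  const SC = 1 + 0.045 * Cbp;
  const SH = 1 + 0.015 * Cbp * T;
  const RT = -Math.sin(2 * dTheta * rad) * RC;

  return Math.sqrt((dLp / SL) ** 2 + (dCp / SC) ** 2 + (dHp / SH) ** 2
    + RT * (dCp / SC) * (dHp / SH));
}

// Sharma test pair 1: two near-achromatic blues, expected ΔE00 = 2.0425.
deltaE2000([50, 2.6772, -79.7751], [50, 0, -82.7485]);
```

The achromatic branch in the mean-hue calculation is exactly the edge case that sloppy implementations get wrong.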
For Munsell notation (the system artists and material scientists actually use to communicate color), I embedded the 2,734-entry RIT Munsell Renotation dataset, converted from Illuminant C to D65 via Bradford chromatic adaptation. Lookup performance on older iPhones mattered, so the LUT is pre-sorted by L* with binary search to narrow candidates to a ±10 lightness band before nearest-neighbor matching. Under 0.5ms on an iPhone SE.
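The lookup itself is nothing exotic. A sketch under assumed names (the entries here are toy values, not the renotation data):

```typescript
// Binary-search a LUT pre-sorted by L* to isolate the ±10 lightness band,
// then nearest-neighbor only within that band.
interface MunsellEntry { L: number; a: number; b: number; notation: string }

/** Index of the first entry with L >= target (standard lower bound). */
function lowerBound(lut: MunsellEntry[], L: number): number {
  let lo = 0, hi = lut.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (lut[mid].L < L) lo = mid + 1; else hi = mid;
  }
  return lo;
}

function nearestMunsell(lut: MunsellEntry[], L: number, a: number, b: number): MunsellEntry {
  const start = lowerBound(lut, L - 10);
  const end = lowerBound(lut, L + 10);
  // Falls back to the first entry if the band is empty; widen in practice.
  let best = lut[0], bestD = Infinity;
  for (let i = start; i < end; i++) {
    const e = lut[i];
    // Squared Lab distance as a cheap stand-in for a perceptual metric.
    const d = (e.L - L) ** 2 + (e.a - a) ** 2 + (e.b - b) ** 2;
    if (d < bestD) { bestD = d; best = e; }
  }
  return best;
}

// Toy table, sorted by L*:
const lut: MunsellEntry[] = [
  { L: 20, a: 30, b: 15, notation: "5R 2/4" },
  { L: 50, a: -30, b: 20, notation: "5G 5/6" },
  { L: 52, a: -28, b: 18, notation: "5G 5/8" },
  { L: 80, a: 5, b: 70, notation: "5Y 8/12" },
];
nearestMunsell(lut, 50.5, -30, 20); // → the "5G 5/6" entry
```

With ~2,700 entries the band rarely holds more than a few hundred candidates, which is how the lookup stays under half a millisecond on old hardware.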
All rendering uses Canvas 2D. No WebGL, no Three.js. I built 56 generative composition styles (Albers squares, Truchet tiles, flow fields, reaction-diffusion, Chladni patterns, domain warping, and dozens more) that take a palette and render it as finished artwork. Pure 2D context, and it performs fine for the kind of pixel work these compositions need.
The app ships three separate mixing engines. Paint mode uses full K-M subtractive mixing through spectral.js. Light mode does proper additive mixing in linear RGB (not gamma-encoded, which is another common mistake). And an industrial colorant mode draws from a database of 38 historical and modern colorants with metadata on lightfastness, transparency, toxicity, and era of first use, because whether a pigment will fade in sunlight is just as important as what color it makes.
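The gamma mistake is worth spelling out. Averaging sRGB values directly darkens the result because sRGB is a nonlinear encoding; light adds in linear space. A sketch (helper names are mine):

```typescript
// Additive light mixing done correctly: decode sRGB, average, re-encode.
function srgbToLinear(c: number): number {
  return c <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
}
function linearToSrgb(c: number): number {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * c ** (1 / 2.4) - 0.055;
}

/** Average two sRGB colors (components in 0..1) as light, i.e. in linear RGB. */
function mixLight(a: number[], b: number[]): number[] {
  return a.map((ca, i) =>
    linearToSrgb((srgbToLinear(ca) + srgbToLinear(b[i])) / 2)
  );
}

// Pure red + pure green as light: each lit channel becomes linear 0.5,
// which encodes to ~0.735 in sRGB — visibly brighter than the naive
// gamma-space average of 0.5.
const mixedLight = mixLight([1, 0, 0], [0, 1, 0]);
```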
What Surprised Me
I expected the color science to be the interesting part. It was, technically. But what actually hooked people were the 23 perception challenges.
These are interactive exercises that test specific aspects of how you see color. Hue discrimination, value step matching, simultaneous contrast, chromatic adaptation, Bezold effect. Players kept discovering asymmetries in their own perception. Someone with razor-sharp hue discrimination who couldn't distinguish value steps to save their life. Someone else who aced the Munsell value scale but fell apart on saturation judgments. Color perception is personal in ways people don't realize until you measure it. Across competitor reviews, I found 29 mentions of people wanting games, challenges, or quizzes in their color apps. None of those apps have them.
The composition exports surprised me too. I built them as a way to visualize palettes, but people started using them as phone wallpapers and print art. A generative Truchet tile pattern in your carefully curated palette turns out to be something people actually want to look at.
Reddit users in r/colorscience and r/painting gave me feature ideas that shipped. Interactive reflectance chart overlays came from a commenter who wanted to see why two visually similar colors mixed differently. The CVD (color vision deficiency) simulation mode connected with colorblind users in ways I hadn't anticipated. Being able to show a friend "this is literally what I see" turned out to be powerful. I found 24 reviews across competing apps from users asking for exactly this feature.
And the business model. One-time $6.99 lifetime purchase, no subscription, no ads, no account required. I scraped 3,065 reviews across the top 10 color apps and found 177 negative reviews about subscription pricing. Coolors switched from one-time to subscription and got hammered (34 complaints about that change alone). Pantone Connect sits at a 1.72 average rating, largely from pricing rage over $60/year for color names. "Pay once, own it" shouldn't be a differentiator in 2026. But it is, and people mention it in reviews unprompted.
Try It
Chrooma Colors on the App Store. Free on iOS. $6.99 lifetime Pro for all 23 challenges and advanced mixing tools. No subscription, no ads, no account.
If you're into spectral color math, I'd genuinely love to talk about it. K-M breaks down for transparent glazes where the two-flux assumption falls apart (four-flux models exist but the data requirements are brutal). The LHTSS basis spectra approach is a clever hack but it's still an approximation. Metamerism means two colors that match under D65 can diverge wildly under tungsten, and I'm still figuring out the best way to surface that in the UI. Drop a comment if any of this is your thing.