kavela

Posted on • Originally published at kavela.pro

Matching camera pixels to RAL codes on Android: building a color matcher with CIE Lab and DeltaE

We build small, single-purpose tools at KAVELA LTD. One of them is RAL Picker — point your phone camera at a furniture surface, a wall, a sample fan-deck, and the closest RAL Classic code shows up in a stable readout. It sits at the intersection of two unforgiving problems: cameras lie about color, and "closest color" only means something if you pick the right color space.

This post walks through the parts of the build that took the most iteration: why we throw away RGB before matching, why we don't trust a single pixel, and how a 213-color lookup ended up cheaper than the camera preview itself.

Why not just match in RGB

The naive version is two lines: read the pixel under the crosshair, run a Euclidean distance over the RAL table, return the smallest. We shipped that first. It was wrong in interesting ways.

RGB is a display format, not a perceptual one. Two colors that look obviously different to a human can be RGB-near; two colors that look like the same beige can be RGB-far. The classic example is dark olive vs. dark teal — they're miles apart visually, surprisingly close in 8-bit RGB.

For anything that has to agree with human judgment — paint matching, fabric, swatches — you want a perceptually uniform color space. CIE Lab is the standard one. Equal numerical distance ≈ equal perceived difference. The metric you compute over Lab is called ΔE (delta-E).

So the matching pipeline becomes:

  1. Camera frame → sRGB pixel
  2. sRGB → linear RGB (undo the gamma curve)
  3. Linear RGB → CIE XYZ (a 3×3 matrix, D65 illuminant)
  4. XYZ → Lab (a non-linear function with a cube-root term)
  5. Lab distance against every RAL color in the table → smallest wins

Steps 1–4 happen once per measurement. Step 5 runs against 213 RAL Classic entries — but only after we've pre-computed their Lab values once at app start. Matching at runtime is a 213-iteration loop over three floats. The camera preview costs more than the match.

fun rgbToLab(r: Int, g: Int, b: Int): FloatArray {
    val rl = srgbToLinear(r / 255f)
    val gl = srgbToLinear(g / 255f)
    val bl = srgbToLinear(b / 255f)

    // sRGB D65 → XYZ
    val x = rl * 0.4124f + gl * 0.3576f + bl * 0.1805f
    val y = rl * 0.2126f + gl * 0.7152f + bl * 0.0722f
    val z = rl * 0.0193f + gl * 0.1192f + bl * 0.9505f

    // XYZ → Lab (D65 reference white)
    val fx = labF(x / 0.95047f)
    val fy = labF(y / 1.00000f)
    val fz = labF(z / 1.08883f)

    return floatArrayOf(
        116f * fy - 16f,
        500f * (fx - fy),
        200f * (fy - fz)
    )
}

srgbToLinear is the standard piecewise gamma function; labF is the cube-root-with-linear-tail. Both fit in five lines. The whole conversion is ~20 lines of Kotlin and runs in microseconds.
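For reference, both helpers are the textbook definitions — a sketch with the standard sRGB and CIE constants (thresholds and exponents are from the specs, not anything app-specific):

```kotlin
// Undo the sRGB gamma curve: linear tail below ~0.04045, power curve above.
fun srgbToLinear(c: Float): Float =
    if (c <= 0.04045f) c / 12.92f
    else Math.pow(((c + 0.055f) / 1.055f).toDouble(), 2.4).toFloat()

// CIE Lab f(t): cube root above the (6/29)^3 ≈ 0.008856 threshold, linear tail below.
fun labF(t: Float): Float =
    if (t > 0.008856f) Math.cbrt(t.toDouble()).toFloat()
    else 7.787f * t + 16f / 116f
```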

ΔE 76 vs ΔE 2000: why we kept the simple one

The ΔE formula has gone through three revisions. ΔE 76 is plain Euclidean distance in Lab. ΔE 94 weights chroma and hue. ΔE 2000 (CIEDE2000) adds rotation terms to fix asymmetries in the blue region and is the modern standard for graphic-arts work.

We tried 2000. We shipped with 76.

The reason is honest: the camera is a much bigger source of error than the metric. Auto white balance shifts the color cast by ΔE 5–10 from frame to frame. Sensor noise on a single pixel is ΔE 2–4. The difference between ΔE 76 and ΔE 2000 across the 213 RAL Classic colors is, in practice, a few edge-case calls — enough to matter for a print shop, not enough to matter when the user is holding a phone three feet from a couch under fluorescent light.

Spending CPU on CIEDE2000 to shave error the camera can't deliver was a misallocation. We'll revisit if we add tristimulus colorimeter input — but that's a different product.
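ΔE 76 plus the linear scan is short enough to show whole — a sketch, where RalColor and the table contents are illustrative stand-ins, not our actual data structures:

```kotlin
import kotlin.math.sqrt

// Illustrative entry type; the real app pre-computes Lab once at app start.
data class RalColor(val code: String, val lab: FloatArray)

// ΔE 76: plain Euclidean distance in Lab.
fun deltaE76(a: FloatArray, b: FloatArray): Float {
    val dl = a[0] - b[0]; val da = a[1] - b[1]; val db = a[2] - b[2]
    return sqrt(dl * dl + da * da + db * db)
}

// Linear scan over the table; at 213 entries this beats any fancier index.
fun nearestRal(sample: FloatArray, table: List<RalColor>): RalColor =
    table.minByOrNull { deltaE76(sample, it.lab) }!!
```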

Sampling: one pixel is a lie

A single pixel under the crosshair is too noisy to read straight. JPEG compression artifacts, sensor noise, and the fact that real-world surfaces have sub-pixel grain mean that two consecutive frames on the same target can return colors that are visibly different.

Our v1 averaged a 5×5 patch under the reticle. Better, but still wrong on textured surfaces — woodgrain, brushed metal, fabric weave. The patch averages across high-frequency detail and drifts toward gray.

The current heuristic: take the central pixel and a few neighbors at a fixed offset, drop the highest and lowest L* outliers, then average. It's not statistically rigorous — it's a cheap noise filter that respects the user's intent ("I pointed at that spot, not that spot blurred with its neighbors"). The CameraX preview pipeline gives us the frame as a YUV_420_888 image; we sample directly from the Y/U/V planes rather than converting the whole frame to RGB and then sampling, which would be orders of magnitude more work for a 5-point read.
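The trim-and-average step reduces to a few lines — a sketch of the trimmed mean over Lab samples (the sample count and the "sort by L*" choice mirror the heuristic described above; nothing here is our exact production code):

```kotlin
// Drop the highest and lowest L* samples, average what's left.
// A cheap trimmed mean, not robust statistics; assumes at least three samples.
fun trimmedLabMean(samples: List<FloatArray>): FloatArray {
    require(samples.size >= 3) { "need at least 3 samples to trim both ends" }
    val kept = samples.sortedBy { it[0] }      // sort by L*
        .subList(1, samples.size - 1)          // drop the min and max L* outliers
    val out = FloatArray(3)
    for (s in kept) { out[0] += s[0]; out[1] += s[1]; out[2] += s[2] }
    for (i in 0..2) out[i] /= kept.size
    return out
}
```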

The next iteration on the v1.5 backlog is multi-point sampling — three sample points in a small triangle under the crosshair, with the user able to see the auxiliary points. Average the three Lab values, return the match. It trades a small UI complication for noticeably more stable readings on textured material. The matching code already accepts a list of Lab samples, so the change is mostly UI.

The RAL table is small enough that everything is free

213 colors. Pre-converted to Lab at app start, that's 213 × 3 floats = ~2.5 KB in memory. The full RAL Classic name table with multilingual names is the larger asset — DE/IT/FR/EN/TR labels per code, around 30 KB in resources.

Match cost: 213 × (3 subtractions + 3 squarings + 1 sqrt) per measurement. On a Pixel 6 it's well under a millisecond. The bottleneck for us is camera preview throughput (~30 fps), not the matcher.

This is worth saying because it would be easy to over-engineer. A k-d tree in Lab space sounds clever and would be slower than the linear scan at this size — branch mispredictions and cache misses on a tree traversal lose to a tight loop over a contiguous float array. Bigger RAL sets — RAL Design (1625 colors), RAL Effect (490) — are on the v1.5+ backlog. Even at 1625, a linear scan over pre-computed Lab values is sub-millisecond. We won't reach for spatial indexing until we have a reason.
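The "tight loop over a contiguous float array" version looks like this — a sketch of the flat layout, not necessarily what we ship (comparing squared distances lets the loop skip the sqrt entirely):

```kotlin
// Flat layout: entry i occupies labs[3*i], labs[3*i+1], labs[3*i+2].
fun nearestIndexFlat(l: Float, a: Float, b: Float, labs: FloatArray): Int {
    var best = 0
    var bestD = Float.MAX_VALUE
    var i = 0
    while (i < labs.size) {
        val dl = labs[i] - l; val da = labs[i + 1] - a; val db = labs[i + 2] - b
        val d = dl * dl + da * da + db * db   // squared distance; sqrt not needed for argmin
        if (d < bestD) { bestD = d; best = i / 3 }
        i += 3
    }
    return best
}
```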

RAL → HEX/RGB/CMYK: the inverse problem (sort of)

In v1.4 we added a detail sheet: tap any RAL Classic card and you get HEX, RGB, and CMYK values for that code, each one-tap copyable.

HEX and RGB are direct — RAL Classic ships with sRGB equivalents in its specification. CMYK is harder, and we ship it with a disclaimer.

The honest answer: there is no single "correct" CMYK for a RAL code without a target ICC profile. CMYK is a process color space; the actual output depends on the printer, the ink set, the paper, and the rendering intent. What we ship is an indicative sRGB-to-CMYK conversion using a standard naive transform — useful for "I need a starting CMYK in Illustrator," not for "I'm going to color-manage a press run from this." The detail sheet shows the disclaimer next to the CMYK row. A designer who needs a press-accurate CMYK will run the swatch through their own ICC pipeline. We're not pretending to replace that.
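The naive transform in question is the textbook max-K conversion — a sketch, indicative only, exactly as the disclaimer says:

```kotlin
import kotlin.math.max

// Naive sRGB → CMYK: K = 1 - max(R', G', B'), then scale C/M/Y by the remaining range.
// Indicative only — press-accurate output needs an ICC profile and a rendering intent.
fun rgbToCmyk(r: Int, g: Int, b: Int): FloatArray {
    val rf = r / 255f; val gf = g / 255f; val bf = b / 255f
    val k = 1f - max(rf, max(gf, bf))
    if (k >= 1f) return floatArrayOf(0f, 0f, 0f, 1f)   // pure black: avoid divide-by-zero
    val c = (1f - rf - k) / (1f - k)
    val m = (1f - gf - k) / (1f - k)
    val y = (1f - bf - k) / (1f - k)
    return floatArrayOf(c, m, y, k)
}
```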

CameraX, not Camera2

Native Camera2 gives more control but the wiring is a small project. CameraX with the Preview + ImageAnalysis use-case binding is one screen of setup, handles lifecycle automatically, and gives us the YUV_420_888 analysis frames we need without writing a state machine.

The one rough edge worth flagging: when you bundle play-services-ads (we added an AdMob banner in v1.3) alongside CameraX, Gradle dependency resolution tries to merge com.google.guava:guava:31.x (pulled in by ads) with CameraX's com.google.guava:listenablefuture stub, and the build fails with Cannot access ListenableFuture.

The fix is to use play-services-ads-lite instead of the full play-services-ads. The lite variant drops the transitive mediation dependencies, including the conflicting Guava artifacts. Banner ads work fine; mediation networks would need the full dependency, but we don't run mediation. One-line fix in build.gradle.kts, but it took an evening to find — leaving it here so the next person doesn't burn the same evening.
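In build.gradle.kts the swap is a single line — the version number here is illustrative, not a pinned recommendation:

```kotlin
dependencies {
    // implementation("com.google.android.gms:play-services-ads:23.0.0")   // pulls Guava, clashes with CameraX
    implementation("com.google.android.gms:play-services-ads-lite:23.0.0") // banners only, no mediation deps
}
```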

R8 is off, on purpose, for now

We deploy from a 12 GB VPS. R8 minification under the standard Android Gradle Plugin defaults runs out of heap on this box — the daemon dies mid-pass, the build dies with it. AAB without R8 is ~13.8 MB; with R8 it would be 4–5 MB. Play's dynamic delivery softens the install-size impact, but it's still a real cost.

Two ways out, both on the v1.5 backlog: build on a beefier machine (WSL on a 32 GB workstation) or in CI (GitHub Actions with the Play service-account key as a secret), keeping the SCP deploy from the VPS either way. We'll likely move to the CI route — the 30-second AAB upload from the VPS is fine; the 4-minute Gradle build is not.

What's on the backlog

The next two cycles are aimed at things that move the matching needle, not surface area:

  • Multi-point sampling (described above) — bigger stability win than any algorithm change at this point.
  • Broader RAL sets — Classic is 213 colors and covers the most common asks; RAL Design (1625) is what professional painters actually want. Same pipeline, larger table, same sub-millisecond match.
  • Color export — CSV and Adobe .ase swatch export from history + favorites. Designer/painter workflow. The data structure is already there; it's a serializer + share intent.

We've avoided the temptation to chase ΔE 2000 or k-d trees. The error budget on this app is dominated by the camera, not the math, and the math is small enough that "linear scan over pre-computed Lab" is the right answer for the foreseeable future.


RAL Picker landing page: kavela.pro/apps/ralpicker (Play Store link + screenshots there). It's a single-purpose tool — camera in, RAL code out, no account, no upload, no telemetry beyond the AdMob banner. The full RAL Classic table works offline.

If you build perceptual color tools and want to compare notes on sampling strategies or larger RAL sets, drop a comment.
