The "16.7 million colors" number on every monitor's spec sheet is technically true and substantively misleading. It counts the number of unique color codes you can address, not the number of physical colors you can produce. The relevant number — how much of human-visible chromaticity your display can actually reproduce — is somewhere closer to a third.
So I want to make a more useful claim about color reproduction. Three primaries are not enough. They have never been enough. Wider gamuts shift the gap; they do not close it. And no realistic future display does either, for a reason that's geometric, not technological.
The horseshoe and the triangle
Plot the colors a human eye can distinguish on a 2D plane and you get a horseshoe-shaped region called the CIE 1931 chromaticity diagram. The curved boundary traces pure spectral light from violet around through green to red. The interior holds every mixture. The whole region is, definitionally, every chromaticity you can perceive.
A monitor reproduces colors by mixing three primaries: red, green, and blue subpixels with fixed chromaticities. The set of chromaticities you can produce by mixing three points is the triangle whose vertices are those three points. Any color outside the triangle is unreachable. The geometry is hard to escape: a triangle inscribed in a region with a curved boundary always leaves part of the region uncovered. There is no way to pick three points such that the triangle covers the whole horseshoe.
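The triangle test is literal: whether a display can show a chromaticity is a point-in-triangle check. Here is a minimal sketch using the standard sRGB primary coordinates and an approximate point on the spectral locus near 500 nm:

```python
def inside_triangle(p, a, b, c):
    """Point-in-triangle test via the signs of three cross products."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

# sRGB primary chromaticities (CIE 1931 xy, per IEC 61966-2-1)
R, G, B = (0.640, 0.330), (0.300, 0.600), (0.150, 0.060)

print(inside_triangle((0.313, 0.329), R, G, B))  # D65 white point: True
print(inside_triangle((0.008, 0.538), R, G, B))  # spectral ~500 nm cyan: False
```

The D65 white point lands comfortably inside the triangle; a saturated spectral cyan does not, which is exactly the "unreachable" region the text describes.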
This is why Sharp built TVs with a yellow fourth subpixel in 2010, why color-critical professional displays have shipped with extra primaries, and why several display companies have lost money trying to do better than three.
How small is sRGB, really
sRGB is the gamut your monitor probably claims to cover. According to TFTCentral's analysis, sRGB covers about 35.9% of the CIE 1931 xy chromaticity diagram. Roughly two-thirds of the area inside the horseshoe is outside the sRGB triangle. The "65%" in the headline of this piece is approximately right — the number depends on which standard you use and how you measure, but the order of magnitude isn't in dispute.
Wider gamuts do better. Adobe RGB covers 52.1%. DCI-P3, the digital-cinema standard, covers 53.6%. Rec. 2020, the UHDTV standard, covers 75.8% — and Rec. 2020 cheats, in a sense, because its primaries sit on the spectral locus itself (red at 630 nm, green at 532 nm, blue at 467 nm). It's the widest triangle you can draw inside the horseshoe with three monochromatic primaries.
It still misses ~24% of the diagram. Not because the engineers picked badly. Because three points cannot enclose a curved region.
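The triangle areas behind those percentages are one shoelace formula away. A quick sketch using the standard primary coordinates (note the published coverage figures also depend on how the horseshoe's own area is measured, so ratios of raw triangle areas won't match them exactly):

```python
def shoelace(pts):
    """Area of a polygon from its vertices (shoelace formula)."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

# Primary chromaticities (CIE 1931 xy)
srgb    = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
rec2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

print(round(shoelace(srgb), 3))     # ~0.112
print(round(shoelace(rec2020), 3))  # ~0.212
```

Rec. 2020's triangle is nearly twice sRGB's in raw xy area, and it is still a triangle.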
Why three primaries at all
The reason three primaries works as well as it does is biological, not optical. Human retinas have three types of cones, each sensitive to a broad range of wavelengths with peak sensitivities in roughly long, medium, and short wavelengths. The brain doesn't have direct access to a wavelength spectrum; it has access to three numbers — the response from each cone type. Anything that produces the same three numbers produces, perceptually, the same color. That phenomenon is called metamerism, and it is the entire reason a screen can convince a brain that two red phosphors and one green phosphor are "yellow."
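Metamerism can be demonstrated numerically. The sketch below uses toy Gaussian cone sensitivities, not the real CIE cone fundamentals, and solves for a green-plus-red mixture whose L and M responses exactly match a narrowband yellow; the S responses of all three lights are nearly zero at these wavelengths, so the mixture is a metamer of the yellow:

```python
import math

def cone_response(wl, peak, width=45.0):
    """Toy Gaussian cone sensitivity; NOT the real CIE cone fundamentals."""
    return math.exp(-((wl - peak) / width) ** 2)

def lms(wl):
    # Assumed peaks near the real L/M/S cones (~560/530/420 nm)
    return tuple(cone_response(wl, p) for p in (560.0, 530.0, 420.0))

yellow = lms(580)                # a narrowband "yellow" light
green, red = lms(540), lms(620)

# Solve w_g*green + w_r*red = yellow in the L and M channels (2x2 system)
det = green[0] * red[1] - red[0] * green[1]
w_g = (yellow[0] * red[1] - red[0] * yellow[1]) / det
w_r = (green[0] * yellow[1] - yellow[0] * green[1]) / det

mix = [w_g * g + w_r * r for g, r in zip(green, red)]
print(all(abs(m - y) < 1e-3 for m, y in zip(mix, yellow)))  # True: a metamer
```

Two physically different spectra, one set of cone responses: that is the trick every three-primary screen plays.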
A different visual system would punish three-primary displays brutally. The mantis shrimp has twelve to sixteen photoreceptor types; a screen tuned to satisfy a human's three-cone system would look spectrally, visibly wrong to a stomatopod. Most animals, which have fewer photoreceptor types than we do, are easier to fool, not harder.
A brief history of trying to do better than three
Three is the minimum to support color via metamerism. More than three lets you cover more of the horseshoe. More than three is also a content-distribution nightmare, because every video, photograph, and game is encoded for three.
Sharp's Quattron line, launched in 2010, added a yellow subpixel and made George Takei the spokesperson. The technical case is sound: yellow-green is one of the regions of the visible spectrum sRGB covers poorly, and a fourth primary helps there. The practical case never quite worked out, because RGB content has to be mapped into RGBY by inferring a yellow channel from the existing three. That inference is underdetermined, and the visible improvement on real video, as opposed to test gradients, was modest.
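One naive version of that inference, for illustration only (Sharp's actual mapping is proprietary and this toy ignores the primaries' real spectra): route the overlap of the R and G channels into Y. The free gain parameter is exactly the ambiguity the text describes.

```python
def rgb_to_rgby(r, g, b, yellow_gain=1.0):
    """Naive RGB -> RGBY mapping: move the overlap of R and G into a
    yellow channel. Illustrative only; not Sharp's algorithm."""
    y = min(r, g) * yellow_gain
    return (r - y, g - y, b, y)

print(rgb_to_rgby(200, 180, 0))       # (20.0, 0.0, 0, 180.0)
# Any yellow_gain in [0, 1] is a defensible answer for the same input:
# the inverse mapping is not unique, which is the pipeline problem.
print(rgb_to_rgby(200, 180, 0, 0.5))  # (110.0, 90.0, 0, 90.0)
```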
Multi-primary displays exist for color-critical professional work; they are expensive, the content pipeline is bespoke, and they aren't aimed at consumer compatibility.
Canon and Toshiba's SED program was one of the more expensive failed attempts. Surface-conduction electron-emitter displays were essentially flat CRTs: each subpixel had a microscopic electron emitter, the phosphors could be chosen freely, and the prototypes shown at CEATEC 2006 were well received in the press. A 2006 lawsuit from Applied Nanotech and the ensuing patent fight derailed the program; Canon bought out Toshiba's stake in 2007, LCD prices fell through the same window, and the consumer SED program was formally cancelled in 2010.
Laser projectors do close in. Christie's RGB laser cinema projectors reach over 90% of Rec. 2020 because a laser's spectral linewidth is sub-nanometer, against roughly 30–50 nm for LED and phosphor emitters. Lasers also produce speckle, an interference artifact that has absorbed a substantial body of projector-engineering effort: rotating diffusers, MEMS mirrors, and low-coherence design tricks.
HDR is not the same axis
If you've heard "HDR" and "wide gamut" used together, they are doing different work. HDR raises the brightness range — a 2,000-nit HDR display produces brighter highlights and deeper shadows than a 100-nit SDR display. It does not, by itself, expand the chromaticity triangle. A 2,000-nit display with sRGB primaries is brighter; it does not show new colors. A 100-nit display with DCI-P3 primaries shows more colors at lower peak brightness. The marketing campaigns for "HDR" and "wide color gamut" run at the same time because consumer-grade panels gained both at the same time, but they are independent properties of the panel.
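The independence is visible in the arithmetic: chromaticity coordinates divide luminance out, so scaling a stimulus's brightness leaves its xy position unchanged. A sketch using the standard XYZ tristimulus values of the sRGB red primary:

```python
def xyz_to_xy(X, Y, Z):
    """Chromaticity from tristimulus values: luminance divides out."""
    s = X + Y + Z
    return (X / s, Y / s)

base = (41.24, 21.26, 1.93)            # XYZ of the sRGB red primary (x100)
bright = tuple(20 * v for v in base)   # the same stimulus at 20x luminance

bx, by = xyz_to_xy(*base)
hx, hy = xyz_to_xy(*bright)
print(abs(bx - hx) < 1e-9 and abs(by - hy) < 1e-9)  # True: same chromaticity
```

Twenty times the luminance, the identical point on the diagram: brightness and chromaticity really are separate axes.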
The asymptote
You cannot fix this with more bits. 10-bit and 12-bit panels carve the same triangle into finer slices; they do not extend the triangle. You cannot fix it with HDR, which moves brightness, not chromaticity. You cannot fix it with three primaries, period — the geometry is binding.
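A few lines make that concrete. Per-channel bit depth sets the number of addressable codes (the spec sheet's 16.7 million is 2^24) and the quantization step between them, but the normalized full-scale primary, the triangle's corner, is identical at every depth:

```python
def addressable_codes(bits):
    """Number of addressable RGB triplets at a per-channel bit depth."""
    return (2 ** bits) ** 3

# More bits slice the same triangle finer; the corners don't move.
for bits in (8, 10, 12):
    levels = 2 ** bits
    step = 1 / (levels - 1)  # quantization step inside the SAME gamut
    # Normalized full-scale red is (1.0, 0.0, 0.0) at every bit depth:
    # the same triangle vertex, only smaller steps between codes.
    print(bits, addressable_codes(bits), round(step, 6))
```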
You can fix it with more primaries, narrower-spectrum primaries, or both. Each one costs content compatibility, manufacturing complexity, or both. None is on a clear path to consumer rollout.
The colors you can see and the colors a monitor can show are different sets, and the difference is most of the horseshoe. Most consumer marketing is structured to keep that distinction blurry. The CIE 1931 diagram is not.
The next time a spec sheet promises "true color" or "16.7 million colors," read those numbers as a count of addresses rather than as a count of colors you'll see. The colors you can see are bounded by physics — the wavelengths your three cone types respond to. The colors your screen produces are bounded by geometry — three primaries, one triangle inside a horseshoe. Those two regions have never coincided, and no path on the consumer roadmap closes the gap.
