Arthur

Posted on • Originally published at pickles.news

Every JPEG Is a Model of Your Retina

There is, sitting on every disk and every phone and every web server in the world, a thirty-year-old standard that contains, layer by careful layer, an engineering description of how the human visual system works. The standard is JPEG. The description is mostly invisible because the engineers who wrote it did their work in 1992 and then everybody else moved on to writing CRUD apps. The interesting fact about JPEG is not that it compresses images. The interesting fact about JPEG is that it does it by quietly running a model of the user's eye.

I want to take that observation seriously rather than file it under "history of imaging." The thirty years between the JPEG specification's CCITT approval on 18 September 1992 and the present moment have produced WebP, AVIF, JPEG XL, and a long tail of also-rans, every one of which is a refinement of the same trick. The trick is biology. The codecs are getting better at doing what JPEG did first, which is to say, they are getting better at modelling the user. There is something worth saying about an industry whose dominant image format is, by design, a working theory of perception that almost nobody outside imaging research thinks of as such.

What JPEG does first

When a JPEG encoder receives an image, the very first thing it does is throw away the colour space the image arrived in. RGB is how cameras capture light and how monitors emit it. JPEG converts to a different space called YCbCr, borrowed from the digital-video standard ITU-R Recommendation BT.601, in which Y is luminance — how light or dark a pixel is — and Cb and Cr encode the colour as two difference channels: blue minus luminance, red minus luminance.

The conversion formula is the part that gives the game away. The luminance channel is calculated as Y = 0.299 × R + 0.587 × G + 0.114 × B. Three coefficients, summing to one, weighted heavily towards green and only lightly towards blue. The coefficients are not arbitrary. They track the photopic luminosity function of the human eye, which peaks around 555 nanometres in the yellow-green region of the spectrum, the same wavelengths a primate retina is most efficient at distinguishing. Evolution shaped that curve to help an arboreal ancestor pick ripe fruit out of foliage. The 1992 standards committee copied it.
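To make the weights concrete, here is a minimal numpy sketch of the full-range conversion as the JFIF file format specifies it; only the coefficients come from the standard, the function name is mine:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range RGB -> YCbCr, JFIF convention (BT.601 luma weights)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b            # luminance, weighted towards green
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128  # blue-difference channel
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128  # red-difference channel
    return np.stack([y, cb, cr], axis=-1)
```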

There is a second reason for the conversion. The retina itself is asymmetric. The human eye contains roughly 120 million rod cells, which carry luminance information, and roughly six million cone cells, which carry colour — a twenty-to-one ratio of light-sensitive hardware to colour-sensitive hardware. RGB treats the three channels as equally important. The retina does not. The very first thing JPEG does is split the signal along the same line your eye splits it.

Throwing away seventy-five percent of the colour

After the conversion, the encoder has the option to store the colour channels at lower resolution than the luminance channel. The default mode for most encoders is called 4:2:0 chroma subsampling, in which each two-by-two block of luminance pixels shares a single colour pixel. Three quarters of the colour information, by storage volume, is discarded.
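A minimal sketch of what 4:2:0 amounts to, assuming a chroma plane held in a numpy array; averaging each two-by-two block is one common choice of decimation, not the only one:

```python
import numpy as np

def subsample_420(chroma):
    """Average each 2x2 block of a chroma plane (Cb or Cr) down to one sample.

    The luminance plane stays at full resolution; only the colour planes shrink,
    which is where the 'three quarters of the colour information' goes."""
    h, w = (chroma.shape[0] // 2) * 2, (chroma.shape[1] // 2) * 2  # crop to even size
    c = chroma[:h, :w].astype(np.float64)
    return (c[0::2, 0::2] + c[0::2, 1::2] + c[1::2, 0::2] + c[1::2, 1::2]) / 4.0
```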

Almost nobody notices. The reason is that the cone cells are not distributed evenly across the retina but concentrated in a small central region called the fovea, which subtends about two degrees of the visual field. Outside the fovea, peripheral colour vision is dramatically worse than peripheral luminance vision. The visual system has, in effect, been doing 4:2:0 subsampling on its own input for the entire history of vertebrate vision. JPEG is not throwing away the colour data the user sees. It is throwing away the colour data the user's retina did not sample in the first place.

A reproducible experiment, easy enough to perform in any image editor that supports the YCbCr colour model: take a sharp photograph, split it into Y, Cb, and Cr channels, and view each in isolation. The luminance channel looks like a clean black-and-white version of the image. The two colour channels look like blurry, low-contrast smudges. The user's first reaction is usually that the photo "could not really have been like that." It was, in fact, exactly like that. The visual system was assembling the appearance of a sharp colour image from a sharp luminance signal and two soft colour signals all along.
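If an image editor is not to hand, the same split takes a few lines with Pillow; the filenames here are placeholders:

```python
from PIL import Image

img = Image.open("photo.jpg").convert("YCbCr")
y, cb, cr = img.split()   # luminance plus the two colour-difference channels
y.save("y.png")           # looks like a sharp black-and-white photograph
cb.save("cb.png")         # soft, low-contrast smudge
cr.save("cr.png")         # soft, low-contrast smudge
```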

The discrete cosine transform and the visual cortex

The compression part of JPEG, the part most engineers can name without looking it up, divides the image into eight-by-eight blocks and applies a discrete cosine transform to each block. The DCT decomposes the block into sixty-four spatial-frequency components, ranging from a smooth gradient (the lowest frequency) to fine alternations of light and dark (the highest). The mathematics of the DCT was introduced in a 1974 paper by Ahmed, Natarajan, and Rao. The reason JPEG uses it is older than the maths.
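The mechanics first, as a sketch using SciPy's DCT routines; JPEG level-shifts pixel values by 128 before transforming, so the sketch does too, and the function names are mine:

```python
import numpy as np
from scipy.fft import dctn, idctn  # type-II DCT and its inverse

def block_dct(block):
    """Forward 2-D DCT of one 8x8 block of pixel values."""
    return dctn(block.astype(np.float64) - 128.0, norm="ortho")

def block_idct(coeffs):
    """Inverse transform, back to pixel values."""
    return idctn(coeffs, norm="ortho") + 128.0
```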

In 1959, David Hubel and Torsten Wiesel inserted a microelectrode into the primary visual cortex of an anaesthetised cat and projected various patterns onto a screen in front of the animal. They were trying to understand how V1 neurons responded to visual stimulation. They discovered that individual neurons, the cells they later called simple cells, fired most strongly in response to oriented bars and edges at specific spatial frequencies. Different cells preferred different orientations. Different cells preferred different frequencies. The cortex, they argued, was performing a localised decomposition of the incoming image into something resembling a Fourier basis. The work earned Hubel and Wiesel the 1981 Nobel Prize in Physiology or Medicine, shared with Roger Sperry.

The DCT is not the same operation as the one V1 simple cells perform. It is, however, the closest computationally cheap approximation that fit on the hardware available in 1992. The JPEG committee did not have the model of V1 we have now. They had the psychophysics of spatial-frequency perception, fifty years of contrast-sensitivity experiments, and the practical observation that an eight-by-eight block of frequencies behaved, when reassembled, like a unit of perception. The choice of basis was, structurally, a guess at what the visual cortex was doing. It was a good guess.

The quantization table is your contrast sensitivity function

After the DCT, JPEG has sixty-four coefficients per block. To compress, the encoder divides each coefficient by an entry from a quantization table and rounds the result. Coefficients corresponding to perceptually important frequencies are divided by small numbers and survive almost intact. Coefficients corresponding to frequencies the eye barely resolves are divided by large numbers and round to zero.

The standard luminance quantization table that ships with most JPEG encoders is shaped, when laid out as an eight-by-eight grid, like a low-pass filter. The top-left corner — DC and the lowest frequencies — has small entries. The bottom-right corner — the highest frequencies — has the largest. The shape of that filter mirrors the human contrast sensitivity function, which peaks somewhere between three and six cycles per degree of visual angle, falls off steadily below one cycle per degree, and falls off rapidly above roughly sixty cycles per degree — the upper limit of human spatial resolution. The numbers in the table were established by psychovisual experiments on real people, not derived from first principles. They are an engineering reading of the species' visual response curve.
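Here is that table as the spec prints it (Table K.1), along with a sketch of the divide-and-round step; the helper names are mine:

```python
import numpy as np

# Example luminance quantization table from Annex K of the JPEG spec (Table K.1).
# Small divisors top-left (low spatial frequencies), large divisors bottom-right.
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(dct_coeffs, table=Q_LUMA):
    """Divide each DCT coefficient by its table entry and round to the nearest integer.
    Most high-frequency coefficients round to zero; that is where the compression is."""
    return np.rint(dct_coeffs / table).astype(np.int32)

def dequantize(quantized, table=Q_LUMA):
    """What the decoder does: multiply back. The rounding error is the lost detail."""
    return quantized * table
```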

The quality slider, which most users have moved at some point without thinking about it, multiplies the entries in the quantization table by a scaling factor. Higher quality means smaller divisors and finer preservation of high-frequency detail. Lower quality means larger divisors and more aggressive zeroing. There is a soft cliff in the curve, usually somewhere around quality 75 or 80, where the algorithm crosses from "throwing away frequencies you cannot see" into "throwing away frequencies you can." Below the cliff the artifacts become visible, and the visibility is itself a measurement of the contrast sensitivity function. The cliff is the boundary of human spatial vision, mapped in JPEG units.
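The mapping from slider to divisors is not defined by the standard, but the convention libjpeg established is widely copied, and it looks roughly like this, assuming a base table such as the Annex K one above:

```python
import numpy as np

def scale_quant_table(base, quality):
    """Scale a base quantization table the way libjpeg's quality setting does:
    quality 50 uses the table roughly as-is, higher qualities shrink the divisors,
    lower qualities inflate them."""
    quality = min(max(int(quality), 1), 100)
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    scaled = (base.astype(np.int64) * scale + 50) // 100
    return np.clip(scaled, 1, 255)  # baseline JPEG stores table entries as 1..255
```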

Annex K and the thirty-year refinement

A widely noted footnote in the JPEG specification is that Annex K, which contains the example quantization tables, presents them as illustrative rather than as defaults. Every encoder is permitted to substitute its own. MozJPEG takes that permission seriously, shipping perceptually tuned tables and spending extra CPU at encode time on trellis quantization to fit the perceptual model more tightly to the specific photograph. The quality at a given file size is measurably better. The compression target has not moved. The model of the user has been sharpened.

The successor formats follow the same pattern. WebP and AVIF use better entropy coders, support more colour primaries, and produce smaller files at equivalent perceptual quality, but their compression rationale is the same one JPEG had: throw away the parts of the signal the visual system never resolves. AVIF goes further with a feature called film grain synthesis, which decodes film-style noise from a small parameter set on the receiver instead of storing the noise pixel by pixel. The justification is biology. The brain does not record the specific positions of grain particles in a photograph; it records their character. AVIF stores the character.

JPEG XL, finalised in 2022, goes furthest. Its compression engine includes Butteraugli, a perceptual difference metric Google built to model the human visual system, which scores potential reconstructions on a per-block basis against the original and chooses the one whose difference is least visible to a simulated human observer. The codec is, more or less explicitly, running a model of vision for every block it encodes. Chrome added JPEG XL support behind a flag in 2021 and then removed it in 2023, citing insufficient ecosystem traction — a quiet, sad story that is not the subject of this essay but is part of the long tail in which compression formats live and die on platform politics rather than perceptual merit.

The thirty years between the JPEG committee and Butteraugli have not changed the engineering target. They have refined the model of vision the codec uses to hit it.

What the codec is actually saying

Every time a JPEG encoder runs, it is making a series of explicit statements about how the user's visual system is structured. It states that luminance matters more than colour, by a factor of about four. It states that colour resolution drops off quickly outside the fovea, so colour can be subsampled. It states that the cortex decomposes images into spatial frequencies, so frequency-domain coding is the right substrate. It states that the user is more sensitive to mid-frequency variation than to either very low or very high frequencies, so the quantization table can be unevenly graded. Every one of these statements is a falsifiable claim about the user's biology, and every one of them was experimentally verified before being soldered into the standard.

The standard does not feel like a falsifiable claim about biology, because it presents itself as a small dialog box in an image editor with the word "quality" next to a slider. That is the camouflage. The claim is that the user's eyes are a particular kind of imperfect, and the algorithm exploits that imperfection in a particular kind of way. The claim has held up well enough that thirty years of better hardware and better mathematics have produced refinements rather than replacements.

What the trick was always about

JPEG is not an image-compression algorithm. It is a perception-compression algorithm. The interesting question about it is not how the maths works; the maths is well-described in textbooks and rarely revisited. The interesting question is why the 1992 specification got the model of vision right enough that nothing has displaced it. The successor formats are sharper instruments cutting along the same line. The line was drawn correctly the first time.

Every time a photograph is saved at quality eighty and uploaded to a web page that millions of strangers will scroll past, a small piece of software is making, in effect, a public statement about how those strangers' eyes work. The statement is right. The strangers do not notice the seams in the picture, because the seams have been placed where their visual systems were never going to look. There is something quietly remarkable about an industry whose most ubiquitous file format is, in its bones, a careful and accurate description of the people who use it.
