ndesmic

Exploring Color Math Through Color Blindness 3: Improving the Models

It seems my original article got shared around, and a helpful person who knows a lot more than I do commented on it. Apparently I got some stuff wrong because I was still a novice and looking at how other people were doing it. However, it turns out I'm not the only one; a lot of people get it wrong, including the filters built into Chrome devtools. So I suggest you take a look at this fantastic article about good and bad models for color vision simulation: https://daltonlens.org/opensource-cvd-simulation/#So-which-one-should-we-use. A very big thanks to @nburrus for writing this piece and helping me out!

So this time I thought I'd try to apply some of this advice and see what we get.

Since Last Time

Since last time I did a little bit of cleanup to the example page and also added WebGPU/WGSL implementations so you can see code for that too.

Getting some better comparisons

Since our comparison models were not very good in the original, I actually gave some bad advice, which was that the models should be interpolated in sRGB space. This was incorrect. I was directed to GIMP, which has some CVD color modes. You need to load up an image and go to View -> Display Filters -> Color Deficient Vision -> Protanopia etc. This will produce a good simulation effect that we can compare against. Unfortunately, I don't think you can actually export the image like that, it's just a filter for editing, so I grabbed some screenshots and will be using those instead.

Using Linear RGB Space

The transforms to LMS are supposed to happen in linear RGB space, not sRGB space like I stated last time. This means that, at least for SVG, all we need to do is change the color-interpolation-filters attribute to linearRGB. Even though by the spec this is supposed to be the default, there's a bug in Chromium where it uses sRGB by default. The outcome is different: on the left is the protanopia filter correctly applied in linear RGB space, on the right is our incorrect one applied in sRGB space:

[Image: protanopia filter applied in linear RGB space (left) vs sRGB space (right)]
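
For reference, the filter markup looks roughly like this (a minimal sketch using the protanopia matrix from the canvas code further down; feColorMatrix takes a 4x5 matrix where the rows are the R, G, B and A outputs):

<svg width="0" height="0">
    <filter id="protanopia" color-interpolation-filters="linearRGB">
        <feColorMatrix type="matrix" values="
            0.1120 0.8853 -0.0005 0 0
            0.1126 0.8897 -0.0001 0 0
            0.0045 0.0001 1.00191 0 0
            0      0      0       1 0" />
    </filter>
</svg>

It then gets applied to the image with filter: url(#protanopia) in CSS.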

For canvas-based operations things are murkier and documentation is harder to find. First off, we can tell that in canvas the operations took place in sRGB space because they match the SVG filters before we changed them. The tricky part then becomes: how exactly do we deal with it? The first way I thought of was to change the color space of the canvas. This is a fairly new feature at the time of writing, but you can pass it in when you get the context:

const context = canvas.getContext("2d", { colorSpace: "srgb" }); //srgb is the default

The problem here is linearRGB doesn't seem to be an option (it would be nice if it was!). However, using the very modern rec2020 color space instead of srgb, it looks pretty close:

[Image: protanopia filter applied on a rec2020 canvas]

The problem here is I don't actually know why this works. Does rec2020 simply not have that non-linearity because it has way more space to work with? Or maybe I can't see the issues on my monitor, which is not a calibrated super-ultra-pro display? Or maybe it doesn't work properly at all? I tried with the p3 space as well and it still looked off, so this must be specific to rec2020. This didn't feel good at all, so I figured a manual approach was best.

Last time I gave a simple conversion of raising each color component to the power of 2.2 to convert sRGB to linear RGB. But there are slightly better ones; in fact, the ones used in CSS are given here: https://www.w3.org/TR/css-color-4/#predefined.

We'll need both the conversion and its reverse:

//color is an array of components (eg rgba)
export function srgbToRgb(color) {
    return color.map(component => {
        const sign = component < 0 ? -1 : 1; 
        const abs = Math.abs(component); 
        return abs <= 0.04045
            ? component / 12.92
            : sign * Math.pow((abs + 0.055) / 1.055, 2.4);
    });
}

export function rgbToSrgb(color) {
    return color.map(component => {
        return component < 0.0031308
            ? component * 12.92
            : 1.055 * Math.pow(component, 1/2.4) - 0.055;
    });
}

We need to decode from sRGB, apply the transform and then re-encode as sRGB to get the final result:

const protanopia = [
    [0.1120, 0.8853, -0.0005, 0],
    [0.1126, 0.8897, -0.0001, 0],
    [0.0045, 0.0001, 1.00191, 0],
    [0, 0, 0, 1]
];
function srgbToRgb(color) {
    return color.map(component => {
        const sign = component < 0 ? -1 : 1;
        const abs = Math.abs(component);
        return abs <= 0.04045
            ? component / 12.92
            : sign * Math.pow((abs + 0.055) / 1.055, 2.4);
    });
}

function rgbToSrgb(color) {
    return color.map(component => {
        return component < 0.0031308
            ? component * 12.92
            : 1.055 * Math.pow(component, 1/2.4) - 0.055;
    });
}
const linear = srgbToRgb(color);
const linearResult = Matrix.crossMultiplyMatrixVector(linear, protanopia);
const srgbResult = rgbToSrgb(linearResult);
return srgbResult;

Here's a comparison of some of the methods:

[Image: side-by-side comparison of the different methods]

rec2020 and the srgb decoding both look about right. For a more precise comparison we can look at the final "red" color:

type             value
Original         rgb(255, 0, 0)
JS (sRGB)        rgb(28, 28, 1)
SVG (linear)     rgb(95, 95, 13)
JS (rec2020)     rgb(90, 90, 21)
JS (p3)          rgb(71, 71, 30)
JS (sRGB decode) rgb(95, 95, 15)

So if we take the SVG filter to be the most correct, then sRGB decoding gets us the closest, which is expected. What is interesting is that the color isn't an exact match. This could be a precision loss issue in the conversion, or it could be that SVG filters use slightly different values. Either way the result is satisfyingly close.

GLSL

GLSL is roughly similar but somewhat different to express. We could use loops and if statements like we did with JavaScript, but this is very inefficient; generally speaking, branches in shader code are bad. However, we can express this in a way that doesn't have branches, which I found here: https://gamedev.stackexchange.com/questions/92015/optimized-linear-to-srgb-glsl

Two things to note about this equation versus the one we used in JavaScript:

1) It doesn't do the absolute value handling because it assumes all the color values are positive, which should always be the case for us, but if that's not true in your case you'll need to adapt the other code.
2) It doesn't actually work in WebGL's OpenGL ES implementation.

The second issue can be a little confusing. Basically, full GLSL will accept mix(vec4, vec4, bvec4) but WebGL's GLSL will not, so we need to convert manually:

vec4 srgbToRgb(vec4 color){
    bvec4 bcutoff = lessThan(color, vec4(0.04045));
    vec4 cutoff = vec4(float(bcutoff[0]), float(bcutoff[1]), float(bcutoff[2]), float(bcutoff[3]));
    vec4 higher = pow((color + vec4(0.055)) / vec4(1.055), vec4(2.4));
    vec4 lower = color / vec4(12.92);
    return mix(higher, lower, cutoff);
}

Here bcutoff is a vector of booleans and cutoff is a vector of floats containing either 0.0 or 1.0 (true is 1.0 and false is 0.0). The way this works is that we use the lessThan function to do the comparison across all components of the vector at once. We then precalculate both branches (I didn't actually benchmark, but this is assumed to be faster than branching) and mix (linearly interpolate) between them. Since the t value is either 0 or 1, you get either the higher or the lower expression and can hence switch between the "branches."

The conversion back to sRGB is the same. Here's the whole shader:

precision highp float;
uniform sampler2D u_image;
varying vec2 vTextureCoordinate;
mat4 protanopia = mat4(
    0.1120, 0.8853, -0.0005, 0,
    0.1126, 0.8897, -0.0001, 0,
    0.0045, 0.0001, 1.00191, 0,
    0, 0, 0, 1);
vec4 srgbToRgb(vec4 color){
    bvec4 bcutoff = lessThan(color, vec4(0.04045));
    vec4 cutoff = vec4(float(bcutoff[0]), float(bcutoff[1]), float(bcutoff[2]), float(bcutoff[3]));
    vec4 higher = pow((color + vec4(0.055)) / vec4(1.055), vec4(2.4));
    vec4 lower = color / vec4(12.92);
    return mix(higher, lower, cutoff);
}
vec4 rgbToSrgb(vec4 color){
    bvec4 bcutoff = lessThan(color, vec4(0.0031308));
    vec4 cutoff = vec4(float(bcutoff[0]), float(bcutoff[1]), float(bcutoff[2]), float(bcutoff[3]));
    vec4 higher = vec4(1.055) * pow(color, vec4(1.0 / 2.4)) - 0.055;
    vec4 lower = color * vec4(12.92);
    return mix(higher, lower, cutoff);
}
void main() {
    vec4 source = texture2D(u_image, vTextureCoordinate);

    vec4 linearColor = srgbToRgb(source);
    vec4 linearTarget = linearColor * protanopia;
    vec4 srgbTarget = rgbToSrgb(linearTarget);

    gl_FragColor = srgbTarget;
}

WGSL

Pretty much the same thing as GLSL but with slightly different syntax. There is no lessThan function; the normal comparison operator works component-wise:

[[group(0), binding(0)]] var my_sampler: sampler;
[[group(0), binding(1)]] var my_texture: texture_2d<f32>;
struct VertexOut {
    [[builtin(position)]] position : vec4<f32>;
    [[location(0)]] uv : vec2<f32>;
};
[[stage(vertex)]]
fn vertex_main([[location(0)]] position: vec2<f32>, [[location(1)]] uv: vec2<f32>) -> VertexOut
{
    var output : VertexOut;
    output.position = vec4<f32>(position, 0.0, 1.0);
    output.uv = uv;
    return output;
}
fn srgb_to_rgb(color: vec4<f32>) -> vec4<f32> {
    var bcutoff = color < vec4<f32>(0.04045);
    var cutoff = vec4<f32>(f32(bcutoff[0]), f32(bcutoff[1]), f32(bcutoff[2]), f32(bcutoff[3]));
    var higher = pow((color + vec4<f32>(0.055)) / vec4<f32>(1.055), vec4<f32>(2.4));
    var lower = color / vec4<f32>(12.92);
    return mix(higher, lower, cutoff);
}
fn rgb_to_srgb(color: vec4<f32>) -> vec4<f32> {
    var bcutoff = color < vec4<f32>(0.0031308);
    var cutoff = vec4<f32>(f32(bcutoff[0]), f32(bcutoff[1]), f32(bcutoff[2]), f32(bcutoff[3]));
    var higher = vec4<f32>(1.055) * pow(color, vec4<f32>(1.0 / 2.4)) - 0.055;
    var lower = color * vec4<f32>(12.92);
    return mix(higher, lower, cutoff);
}
[[stage(fragment)]]
fn fragment_main(fragData: VertexOut) -> [[location(0)]] vec4<f32>
{
    var protanopia = mat4x4<f32>(
        vec4<f32>(0.1120, 0.8853, -0.0005, 0.0),
        vec4<f32>(0.1126, 0.8897, -0.0001, 0.0),
        vec4<f32>(0.0045, 0.0001, 1.00191, 0.0),  
        vec4<f32>(0.0, 0.0, 0.0, 1.0)
    );
    var source = textureSample(my_texture, my_sampler, fragData.uv);
    var linearColor = srgb_to_rgb(source);
    var linearTarget = linearColor * protanopia;
    var srgbTarget = rgb_to_srgb(linearTarget);
    return srgbTarget;
}

Fixing Tritanopia

In my first post I lamented that I could not obtain usable results using the matrix math given in the papers I saw. It seems as if these were just wrong (rolls eyes) and that we might not even be able to deal with it using a linear transformation. The tip I got was to use the results of the following paper: http://vision.psychol.cam.ac.uk/jdmollon/papers/Dichromat_simulation.pdf.

The main idea here isn't any different: we convert RGB to LMS, apply the tritanopia transform in LMS space and then convert back to RGB.

Once in LMS space we do some stuff (due to the bad formatting it's easy to confuse commas with prime symbols, lol math notation):

[Image: the tritanopia transform from the paper; it's hard to read because the commas look like prime symbols]

So to summarize, the left side has L_Q prime, M_Q prime and S_Q prime (lol math notation) which are how the person with tritanopia would perceive the stimulus in LMS space. L_Q, M_Q and S_Q (without the prime) are the LMS values of the stimulus we converted from RGB. So we need to calculate a, b and c.

[Image: the equations for a, b and c from the paper]
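
Since the screenshots are hard to read, here's the whole relationship spelled out in the same notation as the code further down (E is the white point, A is the anchor and Q is the input stimulus):

a = E_M * A_S - E_S * A_M
b = E_S * A_L - E_L * A_S
c = E_L * A_M - E_M * A_L

L'_Q = L_Q
M'_Q = M_Q
S'_Q = -((a * L_Q) + (b * M_Q)) / c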

Now we need some new LMS values for points E and A.

E is not well explained. The paper says "E represents the brightest possible metamer of an equal-energy stimulus on this monitor." That's fine and good, but what is it? Given the description it's a neutral color that will stimulate all the cones the same amount, which sounds like white. But it's not exactly white as produced by the monitor, it's white as perceived by the viewer, and the graphs seem to suggest there's a lot of difference between true white and this point E. Still, I think we can just go with this as a close-enough value. So if we take white [255, 255, 255] (RGB white) and convert it to LMS we get [65.5178, 34.4782, 1.68427], which will be our E.

The paper gives us A (at least in terms of nanometer wavelength):

[Image: the anchor wavelengths for A given in the paper]

We need to convert this back to LMS. Strangely, there does not appear to be a ready-made way to do this. Instead we need to use a set of observational data. This comes from a Stiles and Burch paper from 1959 where they tested a whole 10 people on color matching (there's been nothing better than this since?). Stockman and Sharpe in 2000 later figured out an average observer from those 10 and that's what we're going to use. If you go to http://www.cvrl.org/ and click on "cone fundamentals" you can get a dataset; you want the units to be linear. To prove to myself this data was what I think it is, I made a chart (note that the S column ends and there is no more data, but you can just replace the rest with 0s since the values were already pretty much 0):

[Image: chart of the cone fundamentals dataset]

This looks almost exactly like the chart on the Wikipedia page for LMS, so I think that's the data I want. In the end we find the two wavelengths:

660nm => 0.0930085L, 0.00730255M, 0.0S
485nm => 0.163952L, 0.268063M, 0.290322S
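
Just to illustrate how those numbers come out of the downloaded data (the exact file name and step size depend on which dataset you grab, so treat this as a sketch), the lookup is something like:

//Sketch: look up the LMS cone fundamentals for a wavelength from the cvrl.org CSV
//assumes rows of "wavelength,L,M,S" where the S column is empty at long wavelengths
import { readFileSync } from "node:fs";

function getLms(wavelength, csvText) {
    const rows = csvText.trim().split("\n").map(line => line.split(",").map(parseFloat));
    const [, l, m, s] = rows.find(([wl]) => wl === wavelength);
    return [l, m, s || 0]; //treat missing S values as 0
}

const csv = readFileSync("./cone-fundamentals.csv", "utf8"); //whatever you saved the dataset as
console.log(getLms(660, csv)); //roughly [0.0930085, 0.00730255, 0]
console.log(getLms(485, csv)); //roughly [0.163952, 0.268063, 0.290322]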

So then we can put everything together and get:

const e = [65.5178, 34.4782, 1.68427, 1]; //RGB white in LMS space
function lmsColorToTritanopia([l, m, s, alpha]) { //input "Q"
    const anchor = (m / l) < (e[1] / e[0]) // anchor "A"
        ? [0.0930085, 0.00730255, 0.0] //660nm
        : [0.163952, 0.268063, 0.290322] //485nm

    const a = (e[1] * anchor[2]) - (e[2] * anchor[1]);
    const b = (e[2] * anchor[0]) - (e[0] * anchor[2]);
    const c = (e[0] * anchor[1]) - (e[1] * anchor[0]);
    return [l, m, -((a * l) + (b * m)) / c, alpha ];
}

We just take L, M and alpha and spit them back out; S (blue) undergoes a transform depending on the ratio of M_Q to L_Q.

So what's actually going on here? LMS color can be thought of as a 3D space with axes L, M and S. A color vision deficiency maps that space onto a 2D plane, and we want to find that plane. As the paper explains, they construct a line segment of neutral color, that is, one that goes from black to white with grey in the middle. This should be constant across vision deficiencies (we all see grays the same) and so that line will always lie in the plane. However the plane itself is not flat, it's slightly folded along that line and split into two half-planes, one per anchor wavelength, which is why we need the branch condition; without it the results will be a bit skewed. In vector terms, (a, b, c) is the cross product of E and A, that is, the normal of the half-plane spanned by the neutral axis and the anchor. A color Q lies on that half-plane when a * L_Q + b * M_Q + c * S_Q = 0, and solving that for S_Q is exactly the projection the function performs.

Still the result is not satisfying (but it's a lot better):

[Image: tritanopia simulation result using the Smith-Pokorny matrices]

Everything seems to have a more reddish hue to it. I checked over the math a couple times and it looked correct to me. The only thing that might have been different is the LMS conversion or if I perhaps picked a bad value for E.

LMS conversion

The two matrices I used for RGB to LMS conversion:

const rgbToLms = [
    [17.8824, 43.5161, 4.1193, 0],
    [3.4557, 27.1554, 3.8671, 0],
    [0.02996, 0.18431, 1.4700, 0],
    [0, 0, 0, 1]
];

const lmsToRgb = [
    [0.0809, -0.1305, 0.1167, 0],
    [-0.0102, 0.0540, -0.1136, 0],
    [-0.0003, -0.0041, 0.6932, 0],
    [0, 0, 0, 1]
];

These were based on Smith and Pokorny data. Apparently there are better ones like Hunt-Pointer-Estevez. In both cases you can derive the matrices by composing RGB to XYZ with XYZ to LMS, and inverting the result to go the opposite direction.

//Hunt-Pointer-Estevez
const rgbToLms2 = [
    [31.3989492, 63.95129383, 4.64975462, 0],
    [15.53714069, 75.78944616, 8.67014186, 0],
    [1.77515606, 10.94420944, 87.25692246, 0],
    [0, 0, 0, 1]
];

const lmsToRgb2 = [
    [0.0547, -0.0464, 0.0017, 0],
    [-0.0112, 0.0229, -0.0017, 0],
    [0.0002, -0.0019, 0.0116, 0],
    [0, 0, 0, 1]
];
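
For what it's worth, here's a sketch of that derivation for the Hunt-Pointer-Estevez matrix above. The sRGB-to-XYZ (D65) and HPE XYZ-to-LMS matrices below are the commonly published values rounded to 4 decimal places, and the x100 just matches the scale used above, so the result only agrees to a couple of decimal places:

//Sketch: derive rgbToLms by composing linear sRGB -> XYZ with the Hunt-Pointer-Estevez XYZ -> LMS matrix
const rgbToXyz = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505]
];
const xyzToLms = [
    [0.4002, 0.7076, -0.0808],
    [-0.2263, 1.1653, 0.0457],
    [0.0, 0.0, 0.9182]
];
function multiplyMatrix(a, b) { //3x3 matrix multiply: a * b
    return a.map(row =>
        b[0].map((_, j) => row.reduce((sum, value, k) => sum + value * b[k][j], 0))
    );
}
const rgbToLms = multiplyMatrix(xyzToLms, rgbToXyz).map(row => row.map(value => value * 100));
console.log(rgbToLms); //roughly [[31.39, 63.96, 4.65], [15.53, 75.79, 8.67], [1.77, 10.94, 87.28]]
//lmsToRgb is then just the matrix inverse of rgbToLms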

Using the new LMS conversion matrices I got what appeared to be a more accurate result:

[Image: tritanopia simulation result using the Hunt-Pointer-Estevez matrices]

You can see that the blue is now pushed toward the green part of the spectrum and is a lot less blue, which makes sense if you can't see short wavelengths. Tritanopia in general just seems hard to work with, and I'm sure that's in part due to the smaller affected population, so hopefully this is good enough.

Here's the full JS shader:

function srgbToRgb(color) {
    return color.map(component => {
        const sign = component < 0 ? -1 : 1; 
        const abs = Math.abs(component);
        return abs <= 0.04045 
            ? component / 12.92 
            : sign * Math.pow((abs + 0.055) / 1.055, 2.4); 
    }); 
}

function rgbToSrgb(color) { 
    return color.map(component => {
        return component < 0.0031308 
            ? component * 12.92 
            : 1.055 * Math.pow(component, 1/2.4) - 0.055 
    }); 
}
const e = [65.5178, 34.4782, 1.68427, 1]; //RGB white in LMS space
function lmsColorToTritanopia([l, m, s, alpha]) { //input "Q"
    const anchor = (m / l) < (e[1] / e[0]) // anchor "A"
        ? [0.0930085, 0.00730255, 0.0] //660nm
        : [0.163952, 0.268063, 0.290322] //485nm
    const a = (e[1] * anchor[2]) - (e[2] * anchor[1]);
    const b = (e[2] * anchor[0]) - (e[0] * anchor[2]);
    const c = (e[0] * anchor[1]) - (e[1] * anchor[0]);
    return [l, m, -((a * l) + (b * m)) / c, alpha];
}
//Hunt-Pointer-Estevez
const rgbToLms = [
    [31.3989492, 63.95129383, 4.64975462, 0],
    [15.53714069, 75.78944616, 8.67014186, 0],
    [1.77515606, 10.94420944, 87.25692246, 0],
    [0, 0, 0, 1]
];
const lmsToRgb = [
    [0.0547, -0.0464, 0.0017, 0],
    [-0.0112, 0.0229, -0.0017, 0],
    [0.0002, -0.0019, 0.0116, 0],
    [0, 0, 0, 1]
];

const linearColor = srgbToRgb(color);
const lmsColor = Matrix.crossMultiplyMatrixVector(linearColor, rgbToLms);
const lmsTritanopia = lmsColorToTritanopia(lmsColor);
const rgbTritanopia = Matrix.crossMultiplyMatrixVector(lmsTritanopia, lmsToRgb);
const srgbResult = rgbToSrgb(rgbTritanopia);
return srgbResult;

And the conversion to GLSL is straightforward:

precision mediump float;
uniform sampler2D u_image;
varying vec2 vTextureCoordinate;
vec4 e = vec4(65.5178, 34.4782, 1.68427, 1); //RGB white in LMS
vec4 lmsColorToTritanopia(vec4 lmsColor) {
    vec4 anchor = (lmsColor[1] / lmsColor[0]) < (e[1] / e[0]) // anchor "A" 
        ? vec4(9.30085E-02, 7.30255E-03, 0.0, 0.0) //660nm 
        : vec4(1.63952E-01, 2.68063E-01, 2.90322E-01, 0.0); //485nm 

    float a = (e[1] * anchor[2]) - (e[2] * anchor[1]);
    float b = (e[2] * anchor[0]) - (e[0] * anchor[2]);
    float c = (e[0] * anchor[1]) - (e[1] * anchor[0]);
    return vec4(lmsColor[0], lmsColor[1], -((a * lmsColor[0]) + (b * lmsColor[1])) / c, lmsColor[3]);
}
vec4 srgbToRgb(vec4 color){
    bvec4 bcutoff = lessThan(color, vec4(0.04045));
    vec4 cutoff = vec4(float(bcutoff[0]), float(bcutoff[1]), float(bcutoff[2]), float(bcutoff[3]));
    vec4 higher = pow((color + vec4(0.055)) / vec4(1.055), vec4(2.4));
    vec4 lower = color / vec4(12.92);
    return mix(higher, lower, cutoff);
}
vec4 rgbToSrgb(vec4 color){
    bvec4 bcutoff = lessThan(color, vec4(0.0031308));
    vec4 cutoff = vec4(float(bcutoff[0]), float(bcutoff[1]), float(bcutoff[2]), float(bcutoff[3]));
    vec4 higher = vec4(1.055) * pow(color, vec4(1.0 / 2.4)) - 0.055;
    vec4 lower = color * vec4(12.92);
    return mix(higher, lower, cutoff);
}
//Hunt-Pointer-Estevez
mat4 rgbToLms = mat4(
    31.3989492, 63.95129383, 4.64975462, 0.0,
    15.53714069, 75.78944616, 8.67014186, 0.0,
    1.77515606, 10.94420944, 87.25692246, 0.0,
    0.0, 0.0, 0.0, 1.0
);
mat4 lmsToRgb = mat4(
    0.0547, -0.0464, 0.0017, 0.0,
    -0.0112, 0.0229, -0.0017, 0.0,
    0.0002, -0.0019, 0.0116, 0.0,
    0.0, 0.0, 0.0, 1.0
);
void main() {
    vec4 source = texture2D(u_image, vTextureCoordinate);
    vec4 linearColor = srgbToRgb(source);
    vec4 lmsColor = linearColor * rgbToLms;
    vec4 lmsTarget = lmsColorToTritanopia(lmsColor);
    vec4 rgbTarget = lmsTarget * lmsToRgb;
    vec4 srgbTarget = rgbToSrgb(rgbTarget);
    gl_FragColor = srgbTarget;
}

And finally WGSL (WGSL doesn't support ternaries so you need to use the select function, which is the same thing but with weird argument ordering):

[[group(0), binding(0)]] var my_sampler: sampler;
[[group(0), binding(1)]] var my_texture: texture_2d<f32>;
struct VertexOut {
    [[builtin(position)]] position : vec4<f32>;
    [[location(0)]] uv : vec2<f32>;
};
[[stage(vertex)]]
fn vertex_main([[location(0)]] position: vec2<f32>, [[location(1)]] uv: vec2<f32>) -> VertexOut
{
    var output : VertexOut;
    output.position = vec4<f32>(position, 0.0, 1.0);
    output.uv = uv;
    return output;
}
fn srgb_to_rgb(color: vec4<f32>) -> vec4<f32> {
    var bcutoff = color < vec4<f32>(0.04045);
    var cutoff = vec4<f32>(f32(bcutoff[0]), f32(bcutoff[1]), f32(bcutoff[2]), f32(bcutoff[3]));
    var higher = pow((color + vec4<f32>(0.055)) / vec4<f32>(1.055), vec4<f32>(2.4));
    var lower = color / vec4<f32>(12.92);
    return mix(higher, lower, cutoff);
}
fn rgb_to_srgb(color: vec4<f32>) -> vec4<f32> {
    var bcutoff = color < vec4<f32>(0.0031308);
    var cutoff = vec4<f32>(f32(bcutoff[0]), f32(bcutoff[1]), f32(bcutoff[2]), f32(bcutoff[3]));
    var higher = vec4<f32>(1.055) * pow(color, vec4<f32>(1.0 / 2.4)) - 0.055;
    var lower = color * vec4<f32>(12.92);
    return mix(higher, lower, cutoff);
}
fn lms_color_to_tritanopia(lms_color: vec4<f32>) -> vec4<f32> {
    var e = vec4<f32>(65.5178, 34.4782, 1.68427, 1.0);
    var anchor = select(
        vec4<f32>(0.163952, 0.268063, 0.290322, 0.0),
        vec4<f32>(0.0930085, 0.00730255, 0.0, 0.0),
        (lms_color[1] / lms_color[0]) < (e[1] / e[0])
    );

    var a = (e[1] * anchor[2]) - (e[2] * anchor[1]); 
    var b = (e[2] * anchor[0]) - (e[0] * anchor[2]); 
    var c = (e[0] * anchor[1]) - (e[1] * anchor[0]); 
    return vec4<f32>(lms_color[0], lms_color[1], -((a * lms_color[0]) + (b * lms_color[1])) / c, lms_color[3]);
}
[[stage(fragment)]]
fn fragment_main(fragData: VertexOut) -> [[location(0)]] vec4<f32>
{            
    //Hunt-Pointer-Estevez
    var rgb_to_lms = mat4x4<f32>(
        vec4<f32>(31.3989492, 63.95129383, 4.64975462, 0.0),
        vec4<f32>(15.53714069, 75.78944616, 8.67014186, 0.0),
        vec4<f32>(1.77515606, 10.94420944, 87.25692246, 0.0),
        vec4<f32>(0.0, 0.0, 0.0, 1.0)
    );
    var lms_to_rgb = mat4x4<f32>(
        vec4<f32>(0.0547, -0.0464, 0.0017, 0.0),
        vec4<f32>(-0.0112, 0.0229, -0.0017, 0.0),
        vec4<f32>(0.0002, -0.0019, 0.0116, 0.0),
        vec4<f32>(0.0, 0.0, 0.0, 1.0)
    );
    var source = textureSample(my_texture, my_sampler, fragData.uv);
    var linear_color = srgb_to_rgb(source);
    var lms_color = linear_color * rgb_to_lms;
    var lms_target = lms_color_to_tritanopia(lms_color);
    var linear_target = lms_target * lms_to_rgb;
    var srgb_target = rgb_to_srgb(linear_target);
    return srgb_target;
}

But what about SVG? Well, at least at the outset it doesn't seem possible with just SVG filters. We could perhaps build a linear approximation, which is what others have tried, or maybe there's a clever way to structure the operations to get the "if" behavior. It's not clear, but for now we'll have to concede it.

Code: https://github.com/ndesmic/cvd-sim/tree/model-improvement


Top comments (2)

Roberto

Hi!

I'm one of the developers of an app focused on helping students with color blindness see, for example, chemical titration in labs. We used your article as a base for our correction. Can we cite your article as a reference? Would you like your real name in the credits?

ndesmic

Absolutely you can cite it. I'm not writing here under my real name so you don't need to add it.