I have been playing with GLSL lately as part of an exploration for another project. One thing that is sorely lacking is modern, easy-to-use debug and test tooling.
I have found that it is possible to compare values for assertions, and that thanks to glslify we can "export" and "require" code. That gives us two of the three ingredients of unit testing: modularity and assertion. The piece still missing is a way to evaluate and report the result of a test.
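As a rough sketch of those two ingredients, here is what a shared assertion helper could look like with glslify's export/require pragmas. The helper name `assertEq` and the tolerance value are my own invention for illustration, not an established convention:

```glsl
// --- assert-eq.glsl (hypothetical helper module) ---
float assertEq(float actual, float expected) {
  // Return 1.0 on pass, 0.0 on fail, within a small tolerance,
  // since exact float equality is unreliable on the GPU.
  return step(abs(actual - expected), 0.0001);
}

#pragma glslify: export(assertEq)

// --- test.frag (consumer) ---
#pragma glslify: assertEq = require('./assert-eq.glsl')

void main() {
  float result = assertEq(1.0 + 1.0, 2.0); // 1.0 means the assertion passed
  gl_FragColor = vec4(vec3(result), 1.0);
}
```

glslify resolves the `require` at build time by inlining and renaming the helper, so the test shader compiles as plain GLSL.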
Historically, developers have used colour to debug a shader, printing a bright colour to convey meaning. I am not a firefly; I don't understand this photometric communication. So what about text? It's hard to get data back from the GPU to the CPU, but what we can do is render a bitmap font. If that font rendered information about the evaluated value, the line number and so on, we would have test output as an image!
OCR is a technology that can read text from an image, and JS already has a few libs that can do this. What I propose is that the generated test output is read by OCR and then piped to a test runner, probably a framework written in Node that can drive a headless browser.
Top comments (1)
Just a note that the OCR idea is a little complex.
Given that you would need 26 shades of any given colour channel to represent the Western alphabet, you could encode a letter per pixel shade, snapshot the canvas, read the image data back and translate it into text, similar to hidden codes in images. Given that there are 4 channels per pixel, more complex alphabets could be represented.
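A minimal JavaScript sketch of that encoding, assuming an 8-bit colour channel and the 26-letter Latin alphabet; the helper names here are my own, not an existing library:

```javascript
// Map each letter 'a'..'z' to one of 26 evenly spaced shades in a
// single 8-bit colour channel, then decode it back after reading the
// pixels out of a snapshot.

const STEP = Math.floor(255 / 26); // spacing between shades (9)

// Encode a lowercase letter as a channel intensity.
// In practice this value would be written by the shader.
function encodeLetter(letter) {
  const index = letter.charCodeAt(0) - 97; // 0..25
  return index * STEP;
}

// Decode a channel intensity read back from the image data.
// Rounding tolerates small colour drift from the snapshot.
function decodeShade(shade) {
  return String.fromCharCode(97 + Math.round(shade / STEP));
}

const encoded = [...'pass'].map(encodeLetter);
const decoded = encoded.map(decodeShade).join('');
console.log(decoded); // → "pass"
```

The 9-value gap between shades means the decoder survives minor rounding introduced when the GPU writes and the canvas reads back the pixel values.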