Andrea Roveroni

Playing Bad Apple! by throwing Javascript errors

Inspired by all those beautiful videos playing Bad Apple! on, well, literally everything, I started thinking about contributing to the effort.
But where to play Bad Apple! on? After some time I got the idea: play Bad Apple! on a Node process (or multiple processes, as I explain later), not by using random console.log calls, but rather by throwing errors and letting them display their stack traces on the terminal.

So there it is:

Bad Apple but it's Javascript errors

Bad Apple video played on terminal.

Note: if you want to follow along with the code, please check my repository on GitHub, and if you enjoyed this small project, consider leaving a star (no pressure, I Promise 😄✌️. Ok, that joke was terrible).
You can also check out this video I recorded.

Let's start

Each frame is a NodeJS process that tries to execute some bugged functions, throwing an Error. The stack trace is printed on the console, resulting in a single frame. By spawning multiple processes on different scripts, the complete Bad Apple video can be rendered.

How it works

When an Error is thrown and never caught, Node prints its stack trace to stderr, which is normally the console.
For example, this is a stack trace consisting of just one function call:

$ node
> function myBuggedFunction() { return null.toString()}
> myBuggedFunction()

Uncaught TypeError: Cannot read properties of null (reading 'toString')
    at myBuggedFunction (REPL10:1:42)

We can nest it to create a stack trace with two rows:

> function myBuggedFunction2() { return myBuggedFunction()}
> myBuggedFunction2()

Uncaught TypeError: Cannot read properties of null (reading 'toString')
    at myBuggedFunction (REPL1:1:42)
    at myBuggedFunction2 (REPL2:1:38)

By naming each function carefully, we can draw some ASCII art with the stack trace:

> function ____$____(){return null.toString()}
> function ___$$$___() {____$____()};
> function __$$$$$__() {___$$$___()};
> function _$$$$$$$_() {__$$$$$__()};
> 
> _$$$$$$$_()
Uncaught TypeError: Cannot read properties of null (reading 'toString')
    at ____$____ (REPL1:1:34)
    at ___$$$___ (REPL2:1:23)
    at __$$$$$__ (REPL3:1:23)
    at _$$$$$$$_ (REPL4:1:23)

The only adjustment needed is to raise the maximum number of stack frames that Node keeps, so that all the rows we need get printed:

// Allows to keep and print 50 function calls in the stack trace
Error.stackTraceLimit = 50;

The main.js process loads all the bmp files and, for each of them, writes the corresponding js file, consisting of one function per image row.
It then spawns a child process via fork for each frame, piping its stderr to the console. It handles all the timings and clears the console between frames.
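As a minimal sketch, forking a single frame script and forwarding its stderr could look like this (the file path is an illustrative placeholder, not the repository's actual layout):

const { fork } = require("child_process");

// silent: true pipes the child's stdio back to the parent instead of inheriting it,
// so its stderr (the stack trace, i.e. the frame) can be forwarded explicitly
const child = fork("./frames/frame_0001.js", [], { silent: true });

console.clear();                   // wipe the previous frame
child.stderr.pipe(process.stderr); // print the new frame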

From bmp to error stack trace

Assuming we already have a decoded bmp file (representing a single frame), what we need is to generate some code that, when run by Node, builds a call stack whose trace looks like the bmp frame and then throws an error (see the example above).

So we need to name each function after the corresponding row in the bitmap.
There are some constraints on which characters can be used in function names, so I decided to go with the dollar sign $ for white pixels and underscores _ for black ones. In order to have unique function names even when generating identical rows, a suffix is added to each function name.

So each row is converted like so:

function_name = <row_encoding>__<row_index>
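As an illustration, a generator for a single frame could look roughly like this (the helper names and the exact script layout are assumptions, not the repository's actual code); the frame is an array of rows, each row an array of booleans where true means a white pixel:

function rowToName(row, rowIndex) {
  // '$' for white pixels, '_' for black ones, plus a suffix to keep names unique
  const encoding = row.map((px) => (px ? "$" : "_")).join("");
  return `${encoding}__${rowIndex}`;
}

function frameToScript(frame) {
  const names = frame.map(rowToName);
  const lines = [`Error.stackTraceLimit = ${frame.length};`];

  // The innermost function is the one that actually throws; since the stack
  // trace prints innermost first, it corresponds to the top row of the frame
  lines.push(`function ${names[0]}() { return null.toString(); }`);
  for (let i = 1; i < names.length; i++) {
    lines.push(`function ${names[i]}() { return ${names[i - 1]}(); }`);
  }
  // Calling the outermost function produces a stack trace that draws the frame
  // (in the real project this call is triggered by a message from the parent, see below)
  lines.push(`${names[names.length - 1]}();`);
  return lines.join("\n");
}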

After all the js scripts have been generated, the main process spawns multiple subprocesses (one for each script). Since spawning takes some time, children are created in advance and put in a queue. Each child then waits for a message from the parent process before calling the bugged function chain. With the right timing, the parent process sends the start message to each child, resulting in the corresponding error being thrown and the frame being printed to the console.
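A sketch of that handshake, assuming the start signal is just the string "start" (the actual message format and timing code may differ):

const { fork } = require("child_process");

const FRAME_DELAY_MS = 50; // 20 fps
const frameScripts = [/* ...paths of the generated frame scripts... */];

// Fork every frame script up front, so spawn latency doesn't affect playback
const children = frameScripts.map((script) => fork(script, [], { silent: true }));

// ...then release them one by one with the right timing
children.forEach((child, i) => {
  setTimeout(() => {
    console.clear();
    child.stderr.pipe(process.stderr); // the stack trace is the frame
    child.send("start");               // tell the child to throw now
  }, i * FRAME_DELAY_MS);
});

On the child side, each generated frame script would end with something like:

// Wait for the go signal, then kick off the bugged function chain
process.on("message", (msg) => {
  if (msg === "start") {
    _$$$$$$$_(); // outermost function of the chain for this frame
  }
});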

Resolution VS fps

You can customize both the video resolution and framerate when using ffmpeg.

  • To customize the video resolution, change the scale=W:H property.
  • To customize the framerate, you need to adjust both the -r VALUE property and the target fps when calling the runAll() function. The first value changes the number of frames generated by ffmpeg, while the second one changes the rate at which child processes are executed. You need to set both to the same value to keep the video at 1x speed (see the example command below).
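For example, a conversion command could look roughly like this (the input and output names are placeholders; check the repository for the exact command used):

ffmpeg -i bad_apple.mp4 -vf scale=50:50 -r 20 -pix_fmt rgb0 frames/frame_%04d.bmp

Here scale=50:50 sets a 50x50 resolution and -r 20 extracts 20 frames per second, so runAll() should be called with a matching target of 20 fps.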

Increasing the resolution also increases the size of the buffer to print on the console for each frame. Writing to the console takes some time, so with a high resolution and a high framerate it might happen that the next frame starts before the previous one has been fully printed. This causes the console to clear in the middle of a frame, producing visual flickering. The best solution would be to delay the next frame until the previous one has been completely printed, thus decreasing the video framerate; however, I didn't manage to find a reliable way to tell when the stderr buffer has been completely flushed to the console, so for now you need to manually decrease the fps if you experience flickering.

With my hardware, I found that a 50x50 resolution works well with a 20 fps video, which is 1000ms / 20 frames/s = 50ms between frames. Lower resolutions (30x30 or 20x20) work well at 30 and even 60 fps.

From .mp4 to .bmp (optimization needed)

Converting from mp4 to bmp is done via ffmpeg. Since each pixel is only black or white, I tried to generate bitmaps with as few bits per pixel (bpp) as possible, like -pix_fmt monow, which uses a single bit (representing just black or white for each pixel), or -pix_fmt gray, which uses 8 bits (256 shades per pixel). Unfortunately, those bitmaps were generated with some sort of compression or encoding algorithm which I didn't manage to decode, and I didn't want to use any kind of external library to help me.

So I switched to -pix_fmt rgb0, which uses one byte for each color channel of a pixel (rgb, but no alpha). This pixel format isn't compressed (although the bitmap format spec says it might be), so I managed to make it work, but now I need to reserve 24 bits just to store a single black or white pixel! This is surely something that can be optimized.
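For reference, here is a rough sketch of decoding such an uncompressed bitmap by hand into rows of booleans (true = white). The header offsets follow the standard BITMAPINFOHEADER layout; the function name and the brightness threshold are illustrative, not the repository's actual code:

const fs = require("fs");

function bmpToRows(path) {
  const buf = fs.readFileSync(path);

  const dataOffset = buf.readUInt32LE(10);   // where the pixel array starts
  const width = buf.readInt32LE(18);
  const height = buf.readInt32LE(22);        // positive height = rows stored bottom-up
  const bitsPerPixel = buf.readUInt16LE(28);

  const bytesPerPixel = bitsPerPixel / 8;
  // Each bmp row is padded to a multiple of 4 bytes
  const stride = Math.ceil((width * bitsPerPixel) / 32) * 4;

  const rows = [];
  for (let y = 0; y < height; y++) {
    // Read from the bottom up so rows[0] is the top of the frame
    const rowStart = dataOffset + (height - 1 - y) * stride;
    const row = [];
    for (let x = 0; x < width; x++) {
      // The frames are pure black/white, so checking a single channel is enough
      const value = buf[rowStart + x * bytesPerPixel];
      row.push(value > 127);
    }
    rows.push(row);
  }
  return rows;
}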
