ndesmic
Building a Digital Synthesizer Part 1: Making Some Noise

A synthesizer has the foundations for digital sound that can be used to make all sorts of other things. We're gonna take raw waveforms and turn them into (somewhat) pleasant noise. I'll preface this post by saying I have exactly 0 musical knowledge. I've never played an instrument and I can't read sheet music, so I'll probably make some mistakes, especially with music terminology. Please excuse me. Also note that I don't take responsibility for any hearing or equipment damage you might encounter from bugs or anything else; I recommend starting at a low volume when testing and adjusting upward.

Audio Worklet

While there were previous ways to do sound generation, like ScriptProcessorNode, those ran on the main thread. With the addition of the Audio Worklet we can now do sound generation off the main thread, and we're all better for it. As such I will not talk about the old API.

Let's start by setting up an empty audio worklet:

//tone-processor.js
class ToneProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
  }

  process(inputList, outputList, parameters) {
    return true;
  }
};

registerProcessor("tone-processor", ToneProcessor);

First things first: this has to be a separate script, as worklets cannot be written inline. I called it tone-processor.js. The class must extend AudioWorkletProcessor and it overrides one method, process, which takes 3 parameters and returns a boolean. The return value true is important. It basically says we're still using the worklet. If you return false it means you are no longer using it and it can be garbage collected. registerProcessor then registers the worklet under a name, in a very similar way to a custom element.

To use the worklet, back in the main script file, we need to make a new audio context.

this.context = new AudioContext();

And then add the script to the context:

await this.context.audioWorklet.addModule("./js/worklet/tone-processor.js");

That just means the audio context can use it and has it registered. To actually use it we need to create a new audio node.

this.toneNode = new AudioWorkletNode(this.context, "tone-processor");

A node needs the context it's a part of as well as the type which matches up with the name we registered in the worklet file when we called registerProcessor.

We'll worry about how exactly to use the AudioWorkletNode we created in a bit.

A sin wave

Perhaps the most basic thing to do is to create a sin wave. This is given by a function:

function getSinWave(frequency, time, amplitude = 1){
  return amplitude * Math.sin(frequency * Math.PI * 2 * time);
}

It's the sin wave you might have learned in geometry: frequency is in hertz (cycles per second), time is in seconds, and amplitude controls how loud it is.
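
To make the relationship between time and samples concrete, here's a rough sketch of sampling this function at discrete points; the 48 kHz sample rate, 440 Hz frequency and 0.5 amplitude are just example values, and the sample index divided by the sample rate gives the time in seconds.

//a sketch: sampling the wave at discrete points (example values, not part of the worklet yet)
const sampleRate = 48000;
const frequency = 440;
const samples = new Float32Array(128);
for (let i = 0; i < samples.length; i++) {
    const time = i / sampleRate; //sample index -> seconds
    samples[i] = getSinWave(frequency, time, 0.5);
}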

Generating audio

In order to actually use this function to generate noise we need to go to the worklet's process method. The first parameter is inputs, which is an array of inputs. I think of an "input" like a device. Each input is an array of channels. A channel is simply one audio stream; specifically it's used for things like spatial audio, where each speaker is one channel. Ordinarily we'd use inputs to mix incoming sound; however, since we're generating sound, we don't need to concern ourselves with it. What we care more about is the second parameter, outputs. Like inputs it's also an array of outputs, with each being an array of channels. These will generally match up 1:1 with the inputs.

Each channel in turn is a buffer of floats. By default the buffer is 128 elements long; supposedly it can change, so check the length, but in general that's what you'll get.
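
To get a rough picture of the shapes involved, here's how I think of it inside process (assuming the typical single output with two channels, which is an assumption, not something guaranteed):

//a rough picture of the shapes inside process(inputs, outputs, parameters)
//outputs       -> [output0, ...]             usually a single output
//outputs[0]    -> [channel0, channel1, ...]  one entry per channel (e.g. left/right)
//outputs[0][0] -> Float32Array(128)          the sample buffer we fill with values between -1 and 1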

So our job here with sound generation is to fill the buffer with 128 samples of the sin wave.

process(inputs, outputs, parameters){
  outputs[0].forEach(channel => {
    for(let i = 0; i < channel.length; i++){ //channel is a buffer
      channel[i] = getSinWave(parameters.frequency[0], this.#index / parameters.sampleRate[0]);
      this.#index++;
    }
  });
  return true;
}

process will be called automatically each time the sample buffer is exhausted, which at a 48 kHz sample rate is roughly every 2.7 ms (128 / 48000 seconds).

Audio Worklet Parameters

We also need to define some parameters:

class ToneProcessor extends AudioWorkletProcessor {
    #index = 0;
    static parameterDescriptors = [
        {
            name: "sampleRate",
            defaultValue: 48000
        },
        {
            name: "frequency",
            defaultValue: 220
        },
        {
            name: "type",
            defaultValue: 0
        }
    ];
}

This is a little weird, at least to me. Much like custom element attributes, we need to define what sorts of parameters we take in. These can have a minValue, maxValue (not shown) and defaultValue plus a name. As you might expect, you can access them from process's third parameter, parameters. What's less expected is that the value is an array, not a scalar, which is why you see parameters.frequency[0] and parameters.sampleRate[0]. The reason is that some parameters can vary over time, in which case you'll instead get an array of 128 values. Also note you can't use non-numbers or Infinity as values for parameters.
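
Since a parameter array can have either 1 entry (constant for the whole block) or 128 (one per sample frame), a defensive way to read it inside the loop would look something like this sketch (not something we need yet, just illustrating the shape):

//inside process: handle a parameter that may be constant (length 1) or vary per sample frame (length 128)
for (let i = 0; i < channel.length; i++) {
    const frequency = parameters.frequency.length > 1
        ? parameters.frequency[i]  //value changes over the block
        : parameters.frequency[0]; //single value for the whole block
    //...use frequency to compute sample i
}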

Setting the parameters is equally odd.

this.toneNode.parameters.get("sampleRate").value = this.context.sampleRate;

Instead of indexing like a lot of other APIs, you use the get method with the parameter name and then you can set the value. Again, this is setting the scalar value. There's also setValueAtTime(value, time), which is how you set the time-varying kind. We don't have a use for this yet so let's ignore it.
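
For reference, scheduling a value change would look something like this (the numbers here are just examples):

//schedule the frequency to change to 440Hz one second from now
this.toneNode.parameters.get("frequency").setValueAtTime(440, this.context.currentTime + 1);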

At last we should have everything we need to play a sound:

//wc-synth-player.js
class WcSynthPlayer extends HTMLElement {
    static observedAttributes = [];
    #isReady;
    constructor() {
        super();
        this.bind(this);
    }
    bind(element) {
        element.attachEvents = element.attachEvents.bind(element);
        element.cacheDom = element.cacheDom.bind(element);
        element.render = element.render.bind(element);
        element.setupAudio = element.setupAudio.bind(element);
        element.play = element.play.bind(element);
        element.stop = element.stop.bind(element);
    }
    render() {
        this.attachShadow({ mode: "open" });
        this.shadowRoot.innerHTML = `
                <button id="play">Play</button>
            `;
    }
    async setupAudio() {
        this.context = new AudioContext();
        await this.context.audioWorklet.addModule("./js/worklet/tone-processor.js");

        this.toneNode = new AudioWorkletNode(this.context, "tone-processor");
        this.toneNode.parameters.get("sampleRate").value = this.context.sampleRate;
    }
    async connectedCallback() {
        this.render();
        this.cacheDom();
        this.attachEvents();
    }
    cacheDom() {
        this.dom = {
            play: this.shadowRoot.querySelector("#play")
        };
    }
    attachEvents() {
        this.dom.play.addEventListener("click", async () => {
            if (!this.#isReady) {
                await this.setupAudio();
                this.#isReady = true;
            }

            this.isPlaying
                ? this.stop()
                : this.play();
            this.isPlaying = !this.isPlaying;
        });
    }
    onKeydown(e){
        switch(e.code){
            default:
                console.log(e.which);
        }
    }
    async play() {
        this.dom.play.textContent = "Stop";
        this.toneNode.connect(this.context.destination);
    }
    async stop() {
        this.dom.play.textContent = "Play";
        this.toneNode.disconnect(this.context.destination);
    }
    attributeChangedCallback(name, oldValue, newValue) {
        this[name] = newValue;
    }
}

customElements.define("synth-player", WcSynthPlayer);

The only interesting thing here is how we actually play sound. We do so by hooking the worklet node to the context's destination. destination is basically the current audio output device. The Web Audio API at its core is a graph of nodes. You can chain them together with connect (and disconnect) and eventually you need to output the final waveform (samples) to the output device.
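
For example, instead of connecting straight to the destination we could route through a gain node as a master volume. This isn't part of the component above, just a sketch of how the graph composes:

//a sketch: worklet -> gain -> output device
const gainNode = new GainNode(this.context, { gain: 0.5 }); //master volume
this.toneNode.connect(gainNode);
gainNode.connect(this.context.destination);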

And the worklet:

//tone-processor.js
class ToneProcessor extends AudioWorkletProcessor {
    #index = 0;
    static parameterDescriptors = [
        {
            name: "sampleRate",
            defaultValue: 48000
        },
        {
            name: "frequency",
            defaultValue: 440
        },
        {
            name: "type",
            defaultValue: 0
        }
    ];

    process(inputs, outputs, parameters){
        const output = outputs[0];

        output.forEach(channel => {
            for(let i = 0; i < channel.length; i++){ //channel is a buffer
                channel[i] = getSinWave(parameters.frequency[0], this.#index / parameters.sampleRate[0]);
                this.#index++;
            }
        });
        return true;
    }
}

registerProcessor("tone-processor", ToneProcessor);

function getSinWave(frequency, time) {
    return 0.5 * Math.sin(frequency * 2 * Math.PI * time);
}

You might wonder why I use #index for the time. This is because the time needs to be consistent across invocations of process, that is, the last sample frame of the last call should flow directly into the first sample frame of the current call. If we did not do this you'd get all sorts of nasty distortion as each 128-sample block would have a jagged transition at its boundary.
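
To see why, imagine a hypothetical broken version that resets the counter on every call (using the default 440Hz / 48kHz values):

//hypothetical broken version: restarting time at 0 every call
//block n ends near      0.5 * sin(2 * Math.PI * 440 * 127 / 48000)  (somewhere mid-cycle)
//block n + 1 starts at  0.5 * sin(2 * Math.PI * 440 * 0) = 0
//the waveform snaps back to 0 every 128 samples, which you hear as clicks and buzzing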

Keyboard

All that's left to do is add keyboard support.

async onKeydown(e){
        if (!this.#isReady) {
            await this.setupAudio();
            this.#isReady = true;
        }
        switch(e.code){
            case "KeyA":
                this.play(220); //A
                break;
            case "KeyS":
                this.play(233); //A#
                break;
            case "KeyD":
                this.play(247); //B
                break;
            case "KeyF":
                this.play(261); //C
                break;
            case "KeyG":
                this.play(277); //C#
                break;
            case "KeyH":
                this.play(293); //D
                break;
            case "KeyJ":
                this.play(311); //D#
                break;
            case "KeyK":
                this.play(329); //E
                break;
            case "KeyL":
                this.play(349); //F
                break;
            case "Semicolon":
                this.play(370); //F#
                break;
            case "Quote":
                this.play(392); //G
                break;
            case "Slash":
                this.play(415); //G#
                break;
        }
    }

This is pretty straightforward. I've set up the middle row of keys like the keys on a piano starting at 220Hz which I'm told is A3. You could calibrate this as you'd like or use different keys. You might also notice the little initialization step at the top. That's because we can't just play audio; the browser doesn't allow this, for security and annoyance reasons, until there has been a user gesture. Pressing a key is a valid gesture so we can use this event to set up the audio context if it doesn't already exist.
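
For this to work, play needs to accept the frequency and push it into the worklet's parameter before connecting, and the component needs a keydown listener wired up. Here's a minimal sketch of those changes to the component above (remember to bind onKeydown like the other methods):

//in wc-synth-player.js
async play(frequency) {
    //update the worklet's frequency parameter before (re)connecting the node
    this.toneNode.parameters.get("frequency").value = frequency;
    this.dom.play.textContent = "Stop";
    this.toneNode.connect(this.context.destination);
}

//and somewhere in attachEvents (or connectedCallback):
document.addEventListener("keydown", this.onKeydown);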

Instruments

Our instruments will be basic waves (oscillators) for the time being. We already have a sin wave, but what else can we do?

Sin

(plot of a sine wave)

Sin is what we already have and perhaps the most basic. I find it to sound rather shrill.

function getSinWave(frequency, time) {
    return 0.5 * Math.sin(frequency * 2 * Math.PI * time);
}

Square

(plot of a square wave)

Square waves are my favorite. These give that distinctive chip-tune sound, like you're playing with an NES. This is mostly because that's exactly how the sound hardware worked: it had mostly binary noise makers, which produce square waves, since sin requires either a large lookup table or expensive math.

function getSquareWave(frequency, time) {
    const sinWave = Math.sin(frequency * 2 * Math.PI * time);
    return sinWave > 0.0
        ? 0.2
        : -0.2;
}

All you do is threshold the sin wave. The 0.2 is the amplitude I gave it (because it gets loud).

Triangle

(plot of a triangle wave)

Triangles are a bit mellower than a square.

function getTriangleWave(frequency, time) {
    return Math.asin(Math.sin(frequency * 2 * Math.PI * time));
}

It's the arcsin of the sin wave. Again, multiply by the amplitude if you want (I've kept it at 1).
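
One thing to note: Math.asin returns values between -π/2 and π/2 (about ±1.57), so this wave comes out a little hotter than the others. If you wanted it in the usual -1 to 1 range you could normalize it, something like:

//optional: scale the arcsin output (±π/2) down to ±1
function getNormalizedTriangleWave(frequency, time) {
    return (2 / Math.PI) * Math.asin(Math.sin(frequency * 2 * Math.PI * time));
}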

Saw-tooth

(plot of a sawtooth wave)

These are pretty harsh. Almost like digital trumpets. It also has the familiar 8-bit synth feel.

function getSawWave(frequency, time){
    return 2 * frequency * (time % (1 / frequency)) - 1;
}

This is just a modulus over the wave length: time % (1 / frequency) ramps from 0 up to one period, multiplying by the frequency scales that to the 0 to 1 range, and doubling it and subtracting 1 shifts it down so it climbs from -1 up to 1 before snapping back.

By the way, you can do a reverse saw tooth:

function getRSawWave(frequency, time) {
    return 2 * (1 - frequency * (time % (1 / frequency))) - 1;
}

But it seems to sound identical (it's the same wave flipped in time, so the frequency content is the same).

Playing some music

We should have a semblance of an instrument now. I tried playing Mary Had a Little Lamb:

E D C D E E E
D D D E G G
E D C D E E E
E D D E D C

It's recognizable. Perhaps the key placement could be optimized but since I don't know what I'm doing we'll call it a day.

Code can be found here: https://github.com/ndesmic/web-synth/tree/v0.1

