Building a Digital Synthesizer Part 2: Octaves, Power and Chords

Octaves

Each octave of the audible spectrum is divided into 12 segments called semitones, and the starting frequency of each octave is twice the frequency of the previous one. So if we start at A4 (440Hz), then A3 is 220Hz and A5 is 880Hz. Since there are 12 semitones per octave, each semitone is a factor of 2^(1/12) above the one below it, which means a note n semitones above a base frequency is just base * 2^(n/12). I want to build the note set programmatically so that we can increment or decrement octaves and get a little more range. The first thing I want to do is stop dealing directly in frequencies, because even if I'm starting to intuit them a little better, they're not super useful for playing music.

Let's define some constants:

//wc-synth.js
const frequencyPowerBase = 2 ** (1 / 12); //ratio between adjacent semitones
const noteIndex = {
    "A" : 0,
    "A#" : 1,
    "Bb" : 1, 
    "B" : 2,
    "C" : 3,
    "C#" : 4,
    "Db" : 4,
    "D" : 5,
    "D#" : 6,
    "Eb" : 6,
    "E" : 7,
    "F" : 8,
    "F#" : 9,
    "Gb" : 9,
    "G" : 10,
    "G#" : 11,
    "Ab" : 11
};

We're mapping notes to semitone indices. A sharp (#) is the same pitch as the flat (b) of the next note up (A# and Bb are the same semitone), which is just an idiosyncrasy of the Western scale.
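To make the mapping concrete, here's a tiny sketch of how a note name becomes a frequency. The getFrequency helper is just for illustration (it's not part of wc-synth.js) and assumes the frequencyPowerBase and noteIndex constants above are in scope.

//scratch sketch, not part of the component
function getFrequency(note, baseFrequency = 440) {
    //each semitone above the base multiplies the frequency by 2^(1/12)
    return baseFrequency * frequencyPowerBase ** noteIndex[note];
}

getFrequency("A");  //440
getFrequency("C");  //~523.25
getFrequency("G#"); //~830.61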

And one instance property:

#frequencyBase = 440;

I can then update play to play based on notes:

//wc-synth.js
async onKeydown(e){
    if (!this.#isReady) {
        await this.setupAudio();
        this.#isReady = true;
    }
    switch(e.code){
        case "KeyA":
            this.play("A");
            break;
        case "KeyS":
            this.play("A#");
            break;
        case "KeyD":
            this.play("B");
            break;
        case "KeyF":
            this.play("C");
            break;
        case "KeyG":
            this.play("C#");
            break;
        case "KeyH":
            this.play("D"); 
            break;
        case "KeyJ":
            this.play("D#");
            break;
        case "KeyK":
            this.play("E");
            break;
        case "KeyL":
            this.play("F");
            break;
        case "Semicolon":
            this.play("F#");
            break;
        case "Quote":
            this.play("G");
            break;
        case "Slash":
            this.play("G#");
            break;
        case "Digit1":
            this.changeInstrument("sin");
            break;
        case "Digit2":
            this.changeInstrument("square");
            break;
        case "Digit3":
            this.changeInstrument("triangle");
            break;
        case "ArrowLeft":
            this.#frequencyBase = this.#frequencyBase * frequencyPowerBase ** -12;
            this.dom.frequencyBase.textContent = this.#frequencyBase; //this is just writing to a div for display
            break;
        case "ArrowRight":
            this.#frequencyBase = this.#frequencyBase * frequencyPowerBase ** 12;
            this.dom.frequencyBase.textContent = this.#frequencyBase; //this is just writing to a div for display
            break;
        default:
            console.log(e.code);
    }
}
async play(note) {
    this.#isPlaying = true;
    const frequency = this.#frequencyBase * frequencyPowerBase ** noteIndex[note];
    this.dom.note.textContent = `Note: ${note} (${frequency.toFixed(2)}Hz)`;
    this.toneNode.parameters.get("frequency").value = frequency;
    this.toneNode.connect(this.context.destination);
}

Voila! We can now use the left and right arrow keys to shift the octave down and up (multiplying by frequencyPowerBase ** ±12 is the same as multiplying by 2 ** ±1, so each press halves or doubles the base frequency)... Although, if we're going to do all that, maybe we can just move it into the ToneProcessor? Well, unfortunately, due to the nature of audio parameters we can't take in string values like "A#"; anything passed in has to be a number. We could create an integer mapping but that seems like way more trouble than it's worth to me.

Power

You might notice that some of the instruments, like the square wave, sound a lot louder than others. This is due to the wave's power. If you look at the waveforms, the square wave has a lot more area under the curve, so it will seem louder to us even though its peak amplitude is the same as all the rest (this is why I specifically gave it a lower amplitude). We can fix this though; we just need to make the area under the curve the same for all the instruments.

I think the easiest way to deal with this is to start with the sin wave. What is the area under the curve of sin(x)? If you remember your calculus, we can integrate it to get -cos(x), and we want to take it from 0 to PI. Why not 2*PI? Partly because we don't really need to, since all the functions have the same peaks we can just match one half cycle, and partly because the negative half would cancel out the positive one. I'm not going to take frequency into account as it will just scale accordingly. The area under the curve of sin(x) from 0 to PI is 1.

So what does that mean for a square wave? Well, a square wave of the same frequency needs to have an area of 1. So if the width is PI then the height must be 1 / PI.

And since the others are triangles (isosceles and right), the area under the curve is just 0.5 * h * w. So if the width is PI then the height must be 2 / PI. Keep in mind that for the saw waves we also need to set the right vertical offset (half the amplitude) to center them vertically.

The final equations look like this:

//tone-processor.js
function getSinWave(frequency, time, amplitude = 1) {
    return amplitude * Math.sin(frequency * 2 * Math.PI * time);
}

function getSquareWave(frequency, time, amplitude = 1) {
    const sinWave = Math.sin(frequency * 2 * Math.PI * time);
    return sinWave > 0.0
        ? amplitude / Math.PI
        : -(amplitude / Math.PI);
}

function getTriangleWave(frequency, time, amplitude = 1) {
    return (2 * amplitude / Math.PI) * Math.asin(Math.sin(frequency * 2 * Math.PI * time));
}

function getSawWave(frequency, time, amplitude = 1){
    return (4 * amplitude / Math.PI) * (frequency * Math.PI * (time % (1 / frequency))) - (2 / Math.PI  * amplitude);
}

function getRSawWave(frequency, time, amplitude = 1) {
    return (4 * amplitude / Math.PI) * (1 - (frequency * Math.PI * (time % (1 / frequency)))) - (2 / Math.PI * amplitude);
}

And now they should all sound about the same in volume.
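If you want to double-check the balancing with numbers rather than ears, here's a rough scratch sketch that computes the RMS (a crude stand-in for perceived loudness) of one period of each generator. It assumes the generator functions above are pasted into the same file; it's not part of the synth itself.

//loudness-check.js (scratch file, assumes getSinWave etc. are copied in)
function rms(generator, frequency = 440, sampleRate = 48000) {
    const samplesPerPeriod = Math.floor(sampleRate / frequency);
    let sumOfSquares = 0;
    for (let i = 0; i < samplesPerPeriod; i++) {
        const value = generator(frequency, i / sampleRate);
        sumOfSquares += value * value;
    }
    return Math.sqrt(sumOfSquares / samplesPerPeriod);
}

console.log("sin:", rms(getSinWave).toFixed(3));
console.log("square:", rms(getSquareWave).toFixed(3));
console.log("triangle:", rms(getTriangleWave).toFixed(3));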

Chords

Chords are a few notes played at the same time. This is where a lot of the complexity of music comes from as different chords and progressions of chords can give different feelings. In order to make chords we need to be able to press multiple keys at the same time.

Keyboard Ghosting

This is an issue that's usually of interest for PC gamers but could come up if you are following along. Cheap keyboards typically only allow you to reliably press 3 keys at the same time. Others might allow more but might be centered on the WASD set of keys popular in games. You can test your keyboard here:

https://drakeirving.github.io/MultiKeyDisplay/

I'm using a gaming keyboard so it can handle a lot but if you're on a laptop or something you could be limited in how many keys will register.


Back to chords: we now need a way to figure out all the pressed keys, pass all of the corresponding notes to the tone processor, and then generate the waveform.

Pressing multiple keys

So now, instead of indiscriminately checking if a key is pressed or not, we need to track the currently pressed keys. What we really want to track is the playing notes, not the keys, though. The easiest way I could think of was to create a key-to-note mapping. The switch we had previously already did this, but a map is even easier to modify if we want to change the key layout. Bonus!

//wc-synth.js
const keyToNote = {
    "KeyA": "A",
    "KeyS": "A#",
    "KeyD": "B",
    "KeyF": "C",
    "KeyG": "C#",
    "KeyH": "D",
    "KeyJ": "D#",
    "KeyK": "E",
    "KeyL": "F",
    "Semicolon": "F#",
    "Quote": "G",
    "Slash": "G#"
};

And we need to update our switch:

//wc-synth.js
async onKeydown(e){
    if (!this.#isReady) {
        await this.setupAudio();
        this.#isReady = true;
    }
    switch(e.code){
        case "Digit1":
            this.changeInstrument("sin");
            break;
        case "Digit2":
            this.changeInstrument("square");
            break;
        case "Digit3":
            this.changeInstrument("triangle");
            break;
        case "Digit4":
            this.changeInstrument("saw");
            break;
        case "Digit5":
            this.changeInstrument("reverseSaw");
            break;
        case "ArrowLeft":
            this.#frequencyBase = this.#frequencyBase * frequencyPowerBase ** -12;
            this.dom.frequencyBase.textContent = this.#frequencyBase;
            break;
        case "ArrowRight":
            this.#frequencyBase = this.#frequencyBase * frequencyPowerBase ** 12;
            this.dom.frequencyBase.textContent = this.#frequencyBase;
            break;
        default:
            if(keyToNote[e.code]){
                this.play(keyToNote[e.code]);
            } else {
                console.log(e.code);
            }
    }
}

There's probably even more refactoring that could be done, but we'll leave it like this for now. And while it's not finished, since we have no way to pass in multiple notes yet, here's the in-progress play along with onKeyup and stop:

//wc-synth.js
onKeyup(e){
    if(keyToNote[e.code]){
        this.stop(keyToNote[e.code]);
    }
}
async play(note) {
    this.#playingNotes.push(note);
    const frequency = this.#frequencyBase * frequencyPowerBase ** noteIndex[note];
    this.dom.note.textContent = `Note: ${note} (${frequency.toFixed(2)}Hz)`;
    this.toneNode.parameters.get("frequency").value = frequency;
    this.toneNode.connect(this.context.destination);
}
async stop(note) {
    this.#playingNotes = this.#playingNotes.filter(n => n != note)
    if(this.#playingNotes.length === 0){
        this.toneNode.disconnect(this.context.destination);
    }
}

The real change is that we're tracking the key by the note it plays. We add it to the #playingNotes array on play and remove it on stop.
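One small detail not shown above: #playingNotes needs to exist as a private field on the element for this to work. Something like the following, alongside #frequencyBase (the exact placement is up to you):

//wc-synth.js
#playingNotes = [];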

Passing the data

As far as I can tell there are two ways we can do this: we can continue to use the existing audio params, or we can resort to message passing. While I don't like the idea of message passing, it's interesting because:

A) It's an aspect of the AudioWorklet we haven't covered.
B) We can make things more robust as it's not limited to what kinds and shapes of data we can pass in.

If we stuck with the first idea of using audio params, we'd essentially create a bunch of params that look exactly like frequency, but we'd have a hard cap on the number of notes that could be played because we'd need a defined audio param for every key that can be pressed at once. That sounds worse overall, so let's do message passing.

We'll need to add some things to tone-processor.js:

//tone-processor.js
#playingNotes = [];
constructor(){
    super();
    this.bind(this);
    this.port.onmessage = this.onMessage;
}
bind(processor){
    processor.onMessage = processor.onMessage.bind(processor);
}
onMessage(e){
    switch(e.data.type){
        case "playNotes":
            this.#playingNotes = e.data.notes;
            break;
    }
}

We now need to keep track of the notes to play, and we need a constructor in order to attach the event listener. The event is also weird and non-standard: instead of the worklet itself, we attach listeners to an object on the worklet called "port". Registering addEventListener("message", handler) also didn't work for me even though port inherits from EventTarget (likely because a MessagePort only starts delivering messages once start() is called, which assigning onmessage does implicitly), so onmessage it is. Whatever. I also bind onMessage, otherwise this would be the port object, which is useless.

Back in the synth, we need to change play and stop to just postMessage on the port, with the #playingNotes array as the payload. This way we don't need multiple message types; all that matters is whether there is at least one note in the array.
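As a sketch of what that looks like for stop (the updated play is shown in full near the end of the post), assuming the same #playingNotes bookkeeping as before:

//wc-synth.js (sketch)
async stop(note) {
    this.#playingNotes = this.#playingNotes.filter(n => n !== note);
    //send the updated list; an empty array means nothing should sound
    this.toneNode.port.postMessage({ type: "playNotes", notes: this.#playingNotes });
    if (this.#playingNotes.length === 0) {
        this.toneNode.disconnect(this.context.destination);
    }
}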

Oh, and there's been a bug hiding unnoticed this whole time that we now need to fix. In the onKeydown method we need a guard at the top:

//wc-synth.js
onKeydown(e){
  if(e.repeat) return;
  //previous code

The repeat property tells us the key is auto-repeating because it's being held down. In most setups it will be, so onKeydown fires constantly, which is not what we want and would flood the array with duplicate notes (since we used to handle just one note at a time we never noticed). By bailing out when repeat is set, we ensure each physical key press only triggers one update.

Making the waveform

At this point we have a list of notes to play. And since we now deal in notes rather than frequencies, we'll start pushing more of the code into tone-processor.js:

//tone-processor.js
#baseFrequency = 440;
onMessage(e){
    switch(e.data.type){
        case "playNotes":
            this.#playingNotes = e.data.notes;
            break;
        case "shiftBaseFrequency":
            this.#baseFrequency = this.#baseFrequency * frequencyPowerBase ** e.data.semitoneCount;
            break;
        default:
            throw new Error(`Unknown message sent to tone-processor: ${e.data.type}`);
    }
}

We now need a new event to change the base frequency. After mulling this over a bit, I felt it made the most sense to shift up and down by a count of semitones. However, I won't be surprised if we eventually need a specific setBaseFrequency event that lets the outside world set it directly (there's a sketch of what that might look like a little further down). We can also pull in more code:

//tone-processor
const frequencyPowerBase = 2 ** (1 / 12);
const noteIndex = {
    "A": 0,
    "A#": 1,
    "Bb": 1,
    "B": 2,
    "C": 3,
    "C#": 4,
    "Db": 4,
    "D": 5,
    "D#": 6,
    "Eb": 6,
    "E": 7,
    "F": 8,
    "F#": 9,
    "Gb": 9,
    "G": 10,
    "G#": 11,
    "Ab": 11
};

This hasn't changed but we can delete it from wc-synth.
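Since I mentioned a possible setBaseFrequency event above, here's roughly what that case might look like if it ever becomes necessary. This is hypothetical; it isn't part of this version of the code.

//tone-processor.js (hypothetical extra case in onMessage)
case "setBaseFrequency":
    //trust the caller to hand us a sensible frequency in Hz
    this.#baseFrequency = e.data.baseFrequency;
    break;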

Now in the switch statement we can update the code that shifts octaves:

//wc-synth.js onKeydown
case "ArrowLeft":
    this.shiftBaseFrequency(-12);
    break;
case "ArrowRight":
    this.shiftBaseFrequency(12);
    break;
//wc-synth.js
shiftBaseFrequency(semitoneCount){
    this.toneNode.port.postMessage({ type: "shiftBaseFrequency", semitoneCount });
}

I also deleted the display for frequency and baseFrequency since we no longer have the means to calculate them in wc-synth. If it's still useful in the future, we can always post a message back to wc-synth from tone-processor on the same port with the current frequency, but now that we're getting into chords it's not going to be as useful. We can also remove the audio parameter for frequency from tone-processor.
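For reference, a round trip like that might look something like the sketch below. The "baseFrequencyChanged" message type is made up for illustration; none of this is in the current version.

//tone-processor.js (hypothetical): report the new value back on the same port
case "shiftBaseFrequency":
    this.#baseFrequency = this.#baseFrequency * frequencyPowerBase ** e.data.semitoneCount;
    this.port.postMessage({ type: "baseFrequencyChanged", baseFrequency: this.#baseFrequency });
    break;

//wc-synth.js (hypothetical): listen for it and update the display
this.toneNode.port.onmessage = e => {
    if(e.data.type === "baseFrequencyChanged"){
        this.dom.frequencyBase.textContent = e.data.baseFrequency.toFixed(2);
    }
};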

Speaking of deleting parameters let's convert the instrument switching code as well:

//tone-processor.js
onMessage(e){
    switch(e.data.type){
        case "playNotes":
            this.#playingNotes = e.data.notes;
            break;
        case "shiftBaseFrequency":
            this.#baseFrequency = this.#baseFrequency * frequencyPowerBase ** e.data.semitoneCount;
            break;
        case "changeInstrument":
            this.#instrument = e.data.instrument;
            break;
        default:
            throw new Error(`Unknown message sent to tone-processor: ${e.data.type}`);
    }
}

The rest of the code in wc-synth should be trivial to update, so I'm not going to bother showing all of it. I guess we can keep sampleRate as an audio param since it never changes anyway.
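For completeness, here's a sketch of what the updated changeInstrument might look like. The name-to-number mapping is a guess made to line up with the numeric codes the process method below switches on; the real code may differ.

//wc-synth.js (sketch)
changeInstrument(instrument){
    //process() switches on a numeric code, so translate the instrument name first
    const instrumentIndex = ["sin", "square", "triangle", "saw", "reverseSaw"].indexOf(instrument);
    this.toneNode.port.postMessage({ type: "changeInstrument", instrument: instrumentIndex });
}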

Now we can finally generate the waveform of a chord:

//tone-processor.js
process(inputs, outputs, parameters){
    const output = outputs[0];

    let generatorFunction;
    switch(this.#instrument){
        case 1:
            generatorFunction = getSquareWave;
            break;
        case 2:
            generatorFunction = getTriangleWave;
            break;
        case 3:
            generatorFunction = getSawWave;
            break;
        case 4:
            generatorFunction = getRSawWave;
            break;
        default:
            generatorFunction = getSinWave;
    }

    output.forEach(channel => {
        for(let i = 0; i < channel.length; i++){
            //one frequency per currently playing note
            const frequencies = this.#playingNotes.map(n => this.#baseFrequency * frequencyPowerBase ** noteIndex[n]);
            //sum the samples of every playing note
            const outValue = frequencies.reduce((value, frequency) => value + generatorFunction(frequency, this.#index / parameters.sampleRate[0]), 0);
            channel[i] = outValue;
            this.#index++;
        }
    });

    return true;
}

All we do is take the notes in the #playingNotes array, map them to frequencies, generate a sample for each with the instrument of choice, and then add up the values. That's it, just wave1 + wave2 + wave3 + ... + waveN. To be honest the raw sin wave doesn't sound great, but the square wave shows off the effect a little more.

There's a final little bit of touch-up to the UI to show all the playing notes instead of just one:

async play(note) {
    this.#playingNotes.push(note);
    this.#playingNotes.sort();
    this.dom.note.textContent = `Note: ${this.#playingNotes.join(", ")}`;
    this.toneNode.port.postMessage({ type: "playNotes", notes: this.#playingNotes });
    this.toneNode.connect(this.context.destination);
}

Nothing fancy, but I sort the notes so they always display in alphabetical order rather than the order they were pressed, since the latter makes the display jumpy.

Anyway, that was a bit meandering; I hope it was understandable. I wanted to show off a little of the thought process and refactoring that goes on behind the scenes, not just because it's a useful skill, but because documenting it will also help me understand why I went the direction I did if I have to check back in the future.

You can find the full code for this post here: https://github.com/ndesmic/web-synth/tree/v0.2
