As a musician studying software development, I have been fortunate to discover some interesting resources for building DAWs and audio sandboxes in the browser. While most browser audio resources focus on capturing audio and video from the user's microphone and camera, there are also sandboxes, or developer playgrounds, readily available for curious programmers like myself who want to understand different and effective approaches to building these tools.
One such resource is react-music. While it is not a library in the traditional sense, I find it a helpful tool for building that understanding. Not only is this repository open-source and free, it also ships with a demonstration of what can be constructed with the Web Audio API. If you are unfamiliar with this built-in browser tool, I highly recommend clicking the link below and discovering what the API is capable of. If you are familiar, then let's move on to discussing react-music and how it can help you develop more intuitive audio programming.
So what is react-music?
React-music, at its core, is an open-source, free repository available for cloning, editing, and sandboxing with web browser audio. Released to the public in 2016, it was designed to present intuitive, well-structured audio programming through the Web Audio API and React.js, if the name did not make that part obvious. Surprisingly, I found that this repository's main dependencies are not audio related; most are React dependencies such as react-dom and react-router-dom. The only library imported for audio programming is tunajs, a JavaScript effects library built on top of the Web Audio API. It alleviates some of the potentially complex workflow and helps developers understand web audio without getting bogged down in unfamiliar terminology that can disrupt focus on the programming itself.
As you read through the files, you will find the components designed in an established hierarchy that abstracts away the complexities of building out the DAW: starting from the Song component, down to the Sequencer component, and on to the instruments and the effects those instruments can be chained and processed through.
So what does that look like? Thankfully, the developers provide an example ready to play back, which can either be edited or built from scratch with the pre-built components.
Syntax and Structure
Starting out
Before getting started, you will need to pull react-music into your local environment by cloning the repository, or by running the install command:
npm install react-music
Keep in mind that the components live in different files and must be imported before you can access their functionality.
// for example
import {Song, Sequencer, Synth} from '../src';
Root structure
Next, you will notice the Song component, which acts as the controller for the entire beat or musical excerpt you wish to program.
<Song tempo={120}>
</Song>
What is important to understand about the tempo prop is that it dictates the speed at which the music plays, and it can be set to your preference. Tempo, in music, simply refers to beats per minute; in the snippet above I have it set to 120, a standard starting point for most writing processes.
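As a quick sanity check on what the tempo prop means (plain JavaScript, not part of react-music), the BPM value maps directly to a beat duration:

```javascript
// Convert a tempo in beats per minute to the duration of one beat in seconds.
function secondsPerBeat(bpm) {
  return 60 / bpm;
}

console.log(secondsPerBeat(120)); // 0.5 — at 120 BPM, each beat lasts half a second
```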
Now that we have our backbone established, next in the hierarchy is the Sequencer, which takes two props: resolution and bars. Resolution refers to the number of steps in each bar of the sequence, the grid where musical notes can be placed (16 corresponds to sixteenth notes in 4/4), and bars indicates how many bars play before the sequence loops.
<Song tempo={120}>
<Sequencer resolution={16} bars={1}>
</Sequencer>
</Song>
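To build intuition for how resolution, bars, and tempo interact, here is a small timing helper of my own (not part of react-music), assuming the library's fixed 4/4 meter:

```javascript
// Duration of one sequencer step in seconds, assuming 4 beats per bar (4/4 time).
function secondsPerStep(bpm, resolution) {
  const secondsPerBar = (60 / bpm) * 4; // 4 beats per bar in 4/4
  return secondsPerBar / resolution;
}

// Total length of the looped sequence in seconds.
function sequenceLength(bpm, resolution, bars) {
  return secondsPerStep(bpm, resolution) * resolution * bars;
}

console.log(secondsPerStep(120, 16));    // 0.125 — each step is a sixteenth note
console.log(sequenceLength(120, 16, 1)); // 2 — one bar at 120 BPM lasts two seconds
```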
Adding instruments
With the Song and Sequencer components in place, we can now add instruments and samples to actually create audio to play back. One of the more common examples is the Synth component, which creates an oscillator and plays within the steps provided by the Sequencer component's resolution prop. The type prop sets the type of audio wave the synth produces; there are four options: sine, square, triangle, and sawtooth. A step's note can be either a single pitch or an array of pitches if a chord is desired. If you wish to shape how each note is played, the envelope prop gives you access to those settings.
Keep in mind that other synth instruments, such as Monosynth, can only play one note at a time, so the harmonic structure you have in mind should guide which instrument you choose and how you want your music to sound.
<Song tempo={120}>
  <Sequencer resolution={16} bars={2}>
    <Synth
      type="triangle"
      steps={[
        // [ step, duration, note || [notes] ]
        [0, 4, "c4"],
        [6, 2, ["f4", "a4", "c5"]]
      ]}
      envelope={{
        attack: 0.1,
        sustain: 0.3,
        decay: 20,
        release: 0.5
      }}
    />
  </Sequencer>
</Song>
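The note names in the steps array ("c4", "a4", and so on) correspond to standard pitches. If you are curious how such names relate to actual frequencies, here is a small standalone converter of my own (a sketch, not how react-music parses notes internally), using equal temperament with A4 = 440 Hz:

```javascript
// Semitone offset of each pitch class within an octave, starting from C.
const SEMITONES = {
  c: 0, "c#": 1, d: 2, "d#": 3, e: 4, f: 5,
  "f#": 6, g: 7, "g#": 8, a: 9, "a#": 10, b: 11
};

// Map a note name like "a4" or "d#3" to its frequency in Hz.
function noteToFrequency(name) {
  const match = name.toLowerCase().match(/^([a-g]#?)(\d)$/);
  if (!match) throw new Error(`Unrecognized note: ${name}`);
  const [, pitchClass, octave] = match;
  // Semitone distance from A4 (octave 4, pitch class index 9).
  const distance = SEMITONES[pitchClass] + (Number(octave) - 4) * 12 - 9;
  return 440 * Math.pow(2, distance / 12);
}

console.log(noteToFrequency("a4")); // 440
console.log(noteToFrequency("c4")); // ≈ 261.63 (middle C)
```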
Effects
Assuming you want to chain your instrument through effects, react-music offers refined effect components that behave similarly to the effect nodes found in the Web Audio API. The difference is that the terms used to describe the audio effects in react-music are more straightforward, while still being structured around those basic effect nodes.
The effects provided in the repository are Bitcrusher, Chorus, Compressor, Delay, Filter, Gain, Moogfilter, Overdrive, PingPong, and Reverb. For the sake of this blog, I'll provide an example using just a couple of them. Keep in mind that each effect has its own unique character and, when chained together, they create interesting interactions that produce creative results in the audio.
To utilize these effects, the order in which you nest them will affect the resulting sound. This is where sandboxing with this repository becomes quite interesting, as the many possible combinations of these effects can help developers create more unique results in their programmed music.
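To build intuition for why order matters, here is a toy numeric analogy of my own (plain JavaScript, not real signal processing): if you treat each effect as a function applied to a sample value, chaining the same two functions in different orders produces different results:

```javascript
// Toy stand-ins for effects: each transforms a single sample value.
const boost = (x) => x * 2;          // crude "gain" stand-in
const limit = (x) => Math.min(x, 1); // crude "compressor" stand-in

const sample = 0.8;

// Limiting after boosting: the boosted signal gets clamped at the ceiling.
console.log(limit(boost(sample))); // 1

// Boosting after limiting: the original signal was already under the ceiling.
console.log(boost(limit(sample))); // 1.6
```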
For example:
<Song tempo={87}>
  <Sequencer resolution={16} bars={1}>
    <Compressor>
      <Reverb>
        <Synth
          type="square"
          steps={[
            [0, 2, "d#3"],
            [4, 4, "a#3"]
          ]}
        />
      </Reverb>
    </Compressor>
  </Sequencer>
</Song>
In the above snippet, I set the first effect in the chain to be a Compressor, which reduces the audio signal's dynamic range to help maintain a consistent volume level, and Reverb provides an acoustic effect that smooths out the audio in the form of "reflections," adding a sense of ambiance to the sound. An interesting experiment could be as simple as switching the order of the two effects, or adding to the effects chain and reordering it for different outcomes.
The intuitive naming conventions of the instrument and effect components reduce the time spent researching what filters and audio waves are, and help developers understand the purpose these components serve in manipulating audio. Combined with a basic understanding of music theory, this paves the way for fun exploration of what is possible with audio construction and manipulation in the browser.
Conclusion
In conclusion, while this repository has not been updated in a while and web audio has advanced over the last few years, react-music remains a helpful starting point for understanding how to build a DAW in the web browser, and for general musical understanding. One limitation of the code base is that there isn't much flexibility with time signatures; only 4/4 is supported. However, there shouldn't be much need for time-signature complexity when first studying web browser audio. If you are interested in checking out react-music, I have left links below that walk you through the basics of adding to the existing example or building out your own creation.