We will talk about the nuances of using the Web Audio API and Web MIDI API for sound synthesis in the browser, the methods of databending and sonification, the UX of using the keyboard and mouse for musical purposes, and why the ungoogled-chromium browser is better than Google Chrome.
Table of Contents
- Introduction
- Cultural studies or how I came to this
- Converting the binary code to hertz
- Nuances of the Web Audio API and a little bit about granular synthesis
- Nuances of the Web MIDI API
- The UX solution
- The best browser
- Conclusions
Introduction
For the last two and a half years, in my spare time, I have been developing a small instrument called Binary synth, which translates any file into sound or into a sequence of MIDI messages. You can find the demo here, the source code here, and hear how it might sound in the video playlist on YouTube and on Bandcamp below:
Musically, I won't surprise you much: terms like ambient and IDM have been around for a long time (and I no longer understand their meaning very well), but in general, you can generate various sound textures with this thing.
While working on this instrument, I came across the nuances of the APIs mentioned above and gained some experience and specific knowledge that I would like to share.
Very briefly about the stack: all the code lives entirely in a Vue 3 frontend, and all assets such as styles, scripts, icons and fonts are inlined into a single HTML file during the Vite build (as Base64), so the instrument can be saved as index.html and will work completely offline, without the internet.
The article is not about the framework or the basics of the APIs used, but rather about the practice of creating an instrument and using browser technologies fully for musical purposes.
Cultural studies or how I came to this
I would like to immediately answer the question of why convert files to sound at all, and where this idea came from in the first place.
We can start with the idea that in all music, except for vocals, a musician needs an external technical object: an instrument. In a sense, all music is divided into vocals and everything else, because with vocals you yourself are the instrument. In all other cases, the musician needs some kind of technology, that is, some object created by someone: maybe by the musician himself, maybe by industry, by corporations, by enthusiasts. One way or another, a musician depends on instrument manufacturers and on their technical evolution. Even in the conservatory, vocalists and so-called instrumentalists are taught separately, and there are also choral and orchestral conductors.
At the same time, it is clear that you can play the same so-called note on very different instruments: there are notes on the violin, on the piano, everywhere. And yet we always distinguish very clearly, through timbre, where one instrument sounds and where another does. I would not say that the word timbre has a clear definition, but I explain it this way. A note is just the frequency of a sound. But we have a physical instrument, and if, for example, it is acoustic, it consists of parts made of metal, wood, and so on. When this frequency is released, the entire instrument rattles. There are many uncontrolled processes going on inside it, which manifest themselves as additional sounds. And this totality, this sum, is the timbre, which, in general, is what is appreciated. In a sense, timbre is all the, roughly speaking, garbage that forms around a note, around a frequency. And the frequency, in fact, is just an input impulse that sets off an element of chaos inside.
And if we talk about instruments that depend on electricity, they can be analog or digital. There is a big dispute about analog versus digital, probably more or less known to everyone; in short, it comes down to the idea that there is an abyss in every resistor. That is, with an analog instrument, you just pass electricity through some components; you have an element in your hands that you are trying to control. And this is highly appreciated, because analog instruments can often behave unpredictably. There is, for example, an instrument called the Polyvox, which has recently become very popular in certain circles. It is a late-Soviet synthesizer made from military spare parts, and musicians really appreciate it precisely because it is out of control, because in a sense it doesn't work very well. In general, all analog instruments depend on almost everything: fluctuations in the mains voltage, temperature, age, and so on. And this is not a bug but a feature, because this is what generates depth, complex timbres born from this whole relationship, and a different feeling from the process of creation.
But if we are talking about digital instruments, about digital synthesis, where we have a computer, it very quickly becomes obvious that digital synthesis and computer music in general usually sound more sterile, toothless, bland. Therefore, many digital synthesizers try to find a source of chaos that could diversify and enrich the timbre, make it more lively and interesting, less robotic. It is chaos that makes the sound more listenable, interesting, dramatic.
And one of the ways to get such a source of chaos and entropy is databending. This is an artistic and musical practice that has spread since roughly the end of the 2000s. In short, the point is that we take, for example, some picture and open it in a text editor, where a huge pile of seeming garbage falls on us; we make some point changes and get glitch effects in the picture. This is done with videos, photos, and music. For example, a common technique is to use the Audacity audio editor, which allows you to open graphic images and get an audio track. In essence, databending is the manipulation of a file by a program that is not designed for that kind of file. And a special case of databending is sonification: taking non-audio data and translating it into sound. I think many people have heard the NASA playlist with "sounds of the planets"; they have a whole section about sonification. The most pragmatic example is a Geiger counter.
Actually, the whole idea is to use files and their binary code as a source of chaos for sound synthesis, using the methods of databending and sonification.
Converting the binary code to hertz
Once again, briefly: I made an instrument that translates any file into sound. All files on a computer are just a set of zeros and ones. Essentially texts, with a two-letter alphabet: zero and one. When we look at files this way, they are all the same. That is, we sort of destroy the meaning of all files when we look at them under such a microscope. There is no more mp3, docx, video, etc.
In fact, even if you take a photo with a digital camera twice in a row, the result will always be slightly different in terms of zeros and ones, and exactly how, we don't know. That is, from a human point of view, we see the same thing, but under the microscope it is completely different.
The instrument implements something called live electronics: the computer synthesizes sound from the text of zeros and ones, or generates MIDI messages that control external devices, which then generate the sound.
We can take any file and upload it to the instrument. And then the interface will look like this:
We have a control panel on the left, and a screen on the right, where the contents of the uploaded file are displayed.
And the principle of operation and translation is quite simple. We divide the solid stream of zeros and ones into words of 8 or 16 characters (one or two bytes), convert them to frequencies in hertz and read them in turn. If we reach the end, we either stop or start over in a circle. We can adjust the frequency range over which, having 256 or 65,536 combinations of zeros and ones, the frequencies in hertz are evenly distributed. For example, if our range is 0-256 Hz, then in 8-bit mode it will work like this:
00000000 — 0 hertz,
00000001 — 1 hertz,
00000010 — 2 hertz,
…
We can get the binary code of the files using FileReader and Uint8Array / Uint16Array. You just need to remember that if the file has an odd number of bytes, you cannot create a Uint16Array directly; the missing byte has to be padded with zero. This can be done using ArrayBuffer.transfer(), which is not yet universally supported, but here is a polyfill:
// Polyfill for ArrayBuffer.transfer
if (!ArrayBuffer.transfer) {
    ArrayBuffer.transfer = function (source, length) {
        if (!(source instanceof ArrayBuffer)) throw new TypeError('Source must be an instance of ArrayBuffer')
        if (length <= source.byteLength) return source.slice(0, length)
        let sourceView = new Uint8Array(source)
        let destView = new Uint8Array(new ArrayBuffer(length))
        destView.set(sourceView)
        return destView.buffer
    }
}
const reader = new FileReader()
reader.addEventListener('loadend', async (event) => {
    isLoading.value = false

    if (event.target.result.byteLength <= 499) {
        status.startAndEndOfList[1] = event.target.result.byteLength - 1
    } else {
        status.startAndEndOfList = [settings.fragment.from, settings.fragment.to]
    }

    // For files with an odd number of bytes we cannot create a Uint16Array
    // So we can fill the missing with zeros
    let binary8 = new Uint8Array(event.target.result)
    let binary16 = null

    if (event.target.result.byteLength % 2) {
        let transferedBuffer = ArrayBuffer.transfer(event.target.result, event.target.result.byteLength + 1)
        binary16 = new Uint16Array(transferedBuffer)
    } else {
        binary16 = new Uint16Array(event.target.result)
    }

    file.$patch({
        binary8: binary8,
        binary16: binary16,
        loaded: true,
    })
})
The frequency calculation is also trivial. We have a so-called continuous mode, when frequencies are translated continuously, and a tempered mode, when we reduce the frequencies to notes within a 12-tone equal temperament system. And we have 8-bit and 16-bit modes, so there are four modes in total. First, we calculate the coefficient for each mode, which is the width of the frequency range divided by the number of combinations:
const frequencyCoefficients = computed(() => {
    return {
        continuous8: (settings.frequenciesRange.to - settings.frequenciesRange.from) / 256,
        continuous16: (settings.frequenciesRange.to - settings.frequenciesRange.from) / 65536,
        tempered8: (settings.notesRange.to - settings.notesRange.from) / 256,
        tempered16: (settings.notesRange.to - settings.notesRange.from) / 65536,
    }
})
And then we multiply the coefficient by the decimal representation of the binary word and add an offset equal to the minimum value of the range; in tempered mode, we simply get the ordinal number of the note from a pre-prepared array of all notes:
export function getFrequency(byte, bitness, mode, coefficients, minimumFrequency, minimumNote) {
    if (mode === 'continuous') {
        if (byte === 0) return 0.01 + minimumFrequency
        if (bitness === '8') return coefficients.continuous8 * byte + minimumFrequency
        if (bitness === '16') return coefficients.continuous16 * byte + minimumFrequency
    }

    if (mode === 'tempered') {
        if (bitness === '8') return notes[Math.floor(coefficients.tempered8 * byte) + Math.round(minimumNote)]
        if (bitness === '16') return notes[Math.floor(coefficients.tempered16 * byte) + Math.round(minimumNote)]
    }
}
Looking ahead a bit, please note that when byte === 0, we always add a very small value of 0.01. This is done so that the oscillator never generates exactly 0 hertz, otherwise there are unpleasant clicks in the sound.
In the MIDI generation mode, things are somewhat different. In tempered mode, we simply return the ordinal number of a note from a pre-prepared array of all notes. In continuous mode, since it is impossible to play arbitrary frequencies in MIDI, we find the nearest lower note to our frequency and use pitch bend to raise it to the desired one:
export function getMIDINote(byte, bitness, mode, coefficients, minimumFrequency, minimumNote) {
    // Returns note number + pitch
    // 1. Calculate the frequency
    // 2. Find the nearest lower note in the array to this frequency
    // 3. Calculate the difference between this note and the original frequency
    // 4. Convert this difference into a pitch value
    let frequency = null
    let nearbyValues = null
    let percent = null
    let pitchValue = null

    if (mode === 'continuous') {
        // 1.
        if (byte === 0) frequency = minimumFrequency
        if (bitness === '8') frequency = coefficients.continuous8 * byte + minimumFrequency
        if (bitness === '16') frequency = coefficients.continuous16 * byte + minimumFrequency
        // 2.
        nearbyValues = getNearbyValues(frequency, notes)
        // 3.
        percent = toFixedNumber(((frequency - nearbyValues[0]) / (nearbyValues[1] - nearbyValues[0])) * 100, 1)
        // 4.
        // The pitch value in MIDI is from 0 to 16383, 8191 is the normal state (middle)
        // 8192 divisions are two semitones, so one semitone is 4096 divisions
        // We want a smooth transition between semitones, so we define a shift of up to 4096
        pitchValue = Math.floor((percent / 100) * 4096) + 8191

        if (notes.indexOf(nearbyValues[0]) < 0) {
            return [0, pitchValue]
        } else {
            return [notes.indexOf(nearbyValues[0]), pitchValue]
        }
    }

    // Only the note number is returned
    if (mode === 'tempered') {
        if (bitness === '8') return [Math.floor(coefficients.tempered8 * byte) + minimumNote]
        if (bitness === '16') return [Math.floor(coefficients.tempered16 * byte) + minimumNote]
    }
}
Actually, at this stage we can receive frequencies or note numbers. Next, we'll talk about how to use this data in the Web Audio API and Web MIDI API.
Nuances of the Web Audio API and a little bit about granular synthesis
There are many articles about the API, and after all, there is MDN, so there is no need to describe the basics here.
The Binary synth audio graph is very simple: an oscillator with several wave types, a low-pass filter, an LFO for amplitude modulation, and a panner for controlling panning between the left and right channels. The graph is quite trivial, and the sound would be just as trivial if it were not possible to increase the file reading speed and run multiple tabs.
// Connection
filter.value
    .connect(gain.value)
    .connect(masterGain.value)
    .connect(panner.value)
    .connect(audioContext.value.destination)

lfoDepth.value.connect(masterGain.value.gain)

// The oscillator is connected to the filter separately each time you press play
// I'll also add that all the elements of the graph are refs, so we use .value
There's a thing called granular synthesis: we take a sound, split it into small pieces, and do something with those pieces; we scatter them over the listener, recombine them somehow, and so on.
And there is a very interesting, entertaining thing about sound, related to the fact that sound takes some time to "take place". Any note is a frequency. Frequency is the number of repetitions per second. Accordingly, we have a certain period, that is, the amount of time one oscillation takes, so that this wobble can then be repeated and perceived by us as a sound. Take a look at the picture below:
The instrument has the ability to play not the entire file, but only a fragment of it, and we can make this fragment very small and repeat it quickly. Let's say we have two commands (i.e. frequencies) that are looped, and each frequency is given readingSpeed time (essentially the reading speed) to be reproduced. If this time is less than the period of the wave, then the wave does not have time to fully take place. A piece of it remains, which is called a wavelet. And when we quickly repeat these pieces, we get a new timbre. Each element of the repeat can be called an acoustic pixel, using the terms of granular synthesis.
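To make the numbers concrete, here is a tiny back-of-the-envelope sketch (the values and variable names are illustrative, not from the instrument's source):

// How much of one wave cycle fits into a single reading step
const frequency = 100          // Hz, so one period lasts 1 / 100 = 10 ms
const readingSpeed = 0.002     // seconds per command, i.e. 2 ms

const period = 1 / frequency                  // 0.01 s
const fractionOfCycle = readingSpeed / period // 0.2: only a fifth of the wave "takes place"

console.log(
    fractionOfCycle < 1
        ? 'the wave is cut off into a wavelet and a new timbre appears'
        : 'the wave has time to complete at least one full cycle'
)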
Moreover, the browser has a tab metaphor that allows you to run multiple instances of the instrument in parallel. And their sound streams will mutually influence each other, even if they generate frequencies in an inaudible range.
To optimize calculations, the instrument divides the file into chunks (lists) of 500 commands, which are scrolled through. The time until the next flip is scheduled via setTimeout, which is a very inaccurate way to plan timing, but in the context of synthesizing drone/noise/ambient textures this inaccuracy is actually useful at high reading speeds.
However, there is a problem: after the blur event (if we have left the tab or minimized the browser), setTimeout and setInterval start executing at most about once per second. For example, if we have an interval of 200 ms, then in blur mode 5 intervals will be triggered all at once, once per second; that is, they accumulate and run in a batch. This is a big problem if you need to switch between the browser and a DAW. To solve it, I used the worker-timers package, which replaces native timers with timers running in workers, which are not subject to such throttling.
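The swap itself looks roughly like this (a minimal sketch assuming the worker-timers drop-in API; the callback body is illustrative):

// worker-timers exports functions with the same names, but they run inside a Web Worker
// and are therefore not throttled when the tab loses focus
import { setTimeout, clearTimeout } from 'worker-timers'

const timeoutID = setTimeout(() => {
    // plan the next chunk of commands here
}, 200)

// cancel it exactly like a native timeout
// clearTimeout(timeoutID)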
Switching between commands (from one frequency to another) can be immediate, linear, or exponential. For this, the oscillator's frequency parameter has the methods setValueAtTime, linearRampToValueAtTime and exponentialRampToValueAtTime, as shown in the code below. Please note that with an instant change, we don't just assign oscillator.frequency directly, but use the setValueAtTime method so that there are no clicks. Moreover, when you press play or stop, the volume is brought to the target value over a short time, again to neutralize clicks from the abrupt start and end of sound synthesis (a sketch of such a ramp follows after the scheduler code below).
import { getFrequency } from '../assets/js/getFrequency.js'
import { getRandomTimeGap } from '../assets/js/helpers.js'

export function useOscillatorScheduler(settings, audioContext, oscillator, bynaryInSelectedBitness, frequencyCoefficients) {
    function computeFrequency(binaryValue) {
        return getFrequency(
            binaryValue,
            settings.bitness,
            settings.frequencyMode,
            frequencyCoefficients.value,
            settings.frequenciesRange.from,
            settings.notesRange.from
        )
    }

    function scheduleOscillatorValue(command, targetTime) {
        // At high reading speeds, there are unacceptable values
        const isExponential = settings.transitionType === 'exponential'
        const safeValue = isFinite(command) ? command : isExponential ? 0.01 : 0

        switch (settings.transitionType) {
            case 'immediately':
                oscillator.value.frequency.setValueAtTime(safeValue, targetTime)
                break
            case 'linear':
                oscillator.value.frequency.linearRampToValueAtTime(safeValue, targetTime)
                break
            case 'exponential':
                oscillator.value.frequency.exponentialRampToValueAtTime(safeValue, targetTime)
                break
        }
    }

    function planOscillatorList(startOfList, endOfList) {
        for (let binaryID = startOfList, index = 0; binaryID <= endOfList; binaryID++, index++) {
            const command = computeFrequency(bynaryInSelectedBitness.value[binaryID])
            const time = audioContext.value.currentTime + (index * settings.readingSpeed + getRandomTimeGap(settings.isRandomTimeGap, settings.readingSpeed))
            scheduleOscillatorValue(command, time)
        }
    }

    return {
        getRandomTimeGap,
        computeFrequency,
        scheduleOscillatorValue,
        planOscillatorList,
    }
}
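And the click-free start/stop mentioned earlier can be sketched like this (a hypothetical helper assuming the masterGain node from the graph above; the instrument's actual code may differ):

// Ramp the gain instead of jumping it, so play/stop never clicks
function fadeTo(gainNode, context, target, rampTime = 0.03) {
    const now = context.currentTime
    gainNode.gain.cancelScheduledValues(now)
    gainNode.gain.setValueAtTime(gainNode.gain.value, now)   // pin the current value
    gainNode.gain.linearRampToValueAtTime(target, now + rampTime)
}

// on play:  fadeTo(masterGain.value, audioContext.value, 1)
// on stop:  fadeTo(masterGain.value, audioContext.value, 0)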
If you want to process the sound from the instrument in a DAW, you can use virtual cables, such as this one; on Mac it's BlackHole. You need to output all system sounds to the virtual cable, and in the DAW set the input device to the virtual cable's output.
To remove the latency on Windows, you will have to install an ASIO driver; the most popular is ASIO4All, but I recommend FlexASIO. There are no such problems on Mac.
Another interesting thing is the sample rate. With different values of this parameter, you can get quite different sounds with the same settings:
3000 Hz
44100 Hz
250000 Hz
768000 Hz
The sample rate can be used as an additional synthesis parameter, but there is one problem: it is apparently impossible to find out the range of acceptable sample rates from JS. If any reader knows how, please let me know. So I wrote a little hack:
// There is probably no API for getting the range of possible sample rates.
// We use a hack: intentionally create an error (sampleRate = 1 is not allowed),
// catch it and parse the error text, from which the range can be extracted
// Returns a { minimum, maximum } object
function getSampleRateRange() {
    let sampleRateRange = null

    try {
        new AudioContext({ sampleRate: 1 })
    } catch (nativeError) {
        const error = String(nativeError)
        sampleRateRange = error
            .slice(error.indexOf('[') + 1, error.indexOf(']'))
            .split(', ')
            .map((rate) => Number(rate))
    }

    return { minimum: sampleRateRange[0], maximum: sampleRateRange[1] }
}
We generate an error, catch it, and extract the acceptable range from its text. The range on my laptop goes up to 768000 Hz, which is in no way achievable on the laptop's built-in Realtek chip, and not every audio card is capable of it; but from a pragmatic point of view, changing values in this range changes the sound, which is a good opportunity for creative exploration. Technically, the API lets you generate inaudible frequencies without raising any errors, but in fact you just get aliasing.
Another caveat is that you cannot change the sample rate on the fly: you need to completely recreate the AudioContext and the entire audio graph.
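In practice, changing the sample rate looks roughly like this (a hypothetical sketch; the real rebuild also re-applies all the current settings to the new nodes):

// The old context cannot be re-tuned, so close it and build everything again
async function changeSampleRate(newSampleRate) {
    await audioContext.value.close()
    audioContext.value = new AudioContext({ sampleRate: newSampleRate })
    // ...then recreate and reconnect every node:
    // oscillator, filter, gain, masterGain, LFO, panner -> destination
}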
Nuances of the Web MIDI API
I'll say right away that in order to use MIDI on Windows, you need to install a virtual port, for example loopMIDI. There are no such problems on the Mac; you can easily find instructions on how to create a virtual MIDI port through the settings. You can test MIDI through this monitor, and you can send MIDI messages to a virtual analog of the DX7.
Now to the code. Here, the scheduling of the list relies entirely on timers, so the absence of worker-timers would be a disaster. Using the Web MIDI API requires some understanding of the MIDI protocol itself. You can find the official documentation on it, but I personally needed only these commands:
// MIDI messages
export default {
    noteOff(note, velocity, port, channel) {
        port.send([0x80 + Number(channel), note, velocity])
    },
    noteOn(note, velocity, port, channel) {
        port.send([0x90 + Number(channel), note, velocity])
    },
    pitch(value, port, channel) {
        port.send([0xe0 + Number(channel), value & 0x7f, value >> 7])
    },
    allSoundOff(port, channel) {
        port.send([0xb0 + Number(channel), 0x78, 0])
    },
    modulation(value, port, channel) {
        port.send([0xb0 + Number(channel), 0x01, value])
    },
}
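A quick usage sketch of this module (the import path and the port lookup are illustrative; in the instrument itself the port comes from the connection code shown a bit further down):

import sendMIDIMessage from './sendMIDIMessage.js' // hypothetical path to the module above

const midiAccess = await navigator.requestMIDIAccess()
const port = [...midiAccess.outputs.values()][0]   // take the first available output

if (port) {
    sendMIDIMessage.noteOn(69, 100, port, 0)                          // A4 at velocity 100 on channel 0
    setTimeout(() => sendMIDIMessage.noteOff(69, 0, port, 0), 500)    // release it half a second later
}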
Channels are quite convenient, because you can open several instances of the instrument in different tabs and control different MIDI devices in parallel. That is, you connect the browser to a virtual MIDI port, some Ableton listens to this port, and different instruments on different tracks listen to different channels.
It is essential to remember that when we attach to a port, we must send it an initial modulation value:
if (outputs.value[0]) {
    settings.midi.noMIDIPortsFound = false
    settings.midi.port = midi.outputs.get(outputs.value[0].id)
    port.value = outputs.value[0].id
    sendMIDIMessage.modulation(settings.midi.modulation, settings.midi.port, settings.midi.channel)
} else {
    settings.midi.noMIDIPortsFound = true
}
The MIDI mode algorithm itself is also simple: we schedule an array of timeouts that call noteOn and noteOff one after another:
function planMidiList(startOfList, endOfList, indexOffset = 0) {
    clearMidiTimeouts()

    for (let binaryID = startOfList, index = 0; binaryID <= endOfList; binaryID++, index++) {
        commands.value[index] = getMIDINote(
            bynaryInSelectedBitness.value[binaryID],
            settings.bitness,
            settings.frequencyMode,
            frequencyCoefficients.value,
            settings.frequenciesRange.from,
            settings.notesRange.from
        )

        const timeoutedNote = playNote.bind(null, index)
        const delay = ((index + indexOffset) * settings.readingSpeed + getRandomTimeGap(settings.isRandomTimeGap, settings.readingSpeed))
        midiTimeoutIDs.value[index] = setTimeout(timeoutedNote, delay * 1000)
    }
}
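playNote itself is not shown in the article; a hedged guess at its shape, based on what getMIDINote returns ([noteNumber] or [noteNumber, pitchValue]) and the message helpers above, might look like this (settings.midi.velocity is an assumed field):

// Hypothetical sketch: send the pitch bend (if any), start the note,
// and release it before the next command is due
function playNote(index) {
    const [note, pitchValue] = commands.value[index]

    if (pitchValue !== undefined) {
        sendMIDIMessage.pitch(pitchValue, settings.midi.port, settings.midi.channel)
    }

    sendMIDIMessage.noteOn(note, settings.midi.velocity, settings.midi.port, settings.midi.channel)
    setTimeout(
        () => sendMIDIMessage.noteOff(note, 0, settings.midi.port, settings.midi.channel),
        settings.readingSpeed * 1000
    )
}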
When working with MIDI in the browser, I came across an unpleasant thing. I have a MIDI controller, and if the browser connects to MIDI first, the controller refuses to work; but if you connect the controller first and then the browser, everything is fine. I haven't found a solution for this yet, and I've noticed the same problem in other web projects with MIDI.
The UX solution
Any normal instrument has physical buttons or controls that create a particular tactile sensation. With a computer, we only have a keyboard and a mouse. We could use some kind of controller, but I wanted to think about how the instrument could be controlled using only a keyboard and mouse. I call my solution interactive input.
Many input fields have a keyboard shortcut. If you press the corresponding key, focus is immediately brought to that input. And if you hold down this key and move the mouse at the same time, the value changes with the mouse movement. The problem is that some quantities need to be changed quickly and others very slowly. For that there is a multiplier: pressing Shift or Ctrl multiplies the change by 10, 100, 1000 or 0.1, 0.01, and so on. Then we can dramatically increase a value with a small movement of the hand, or, on the contrary, control it very subtly.
function activateInteractiveMode(event) {
    if (!isInteractiveMode.value && event.code === props.keyCode && !event.ctrlKey) {
        input.value.focus()
        document.addEventListener('mousemove', mousemoveHandler)
        isPressed = true
    }

    // Shift increases the factor, Ctrl decreases it
    if (isPressed) {
        if (event.code === 'ShiftLeft') inputValueFactor.value *= 10
        if (event.code === 'ControlLeft') inputValueFactor.value /= 10
    }
}
But there is a problem: when the mouse hits the edge of the screen, we can no longer increase the value of the input field. I used the Pointer Lock API to solve this. In short, we simply hide the cursor and count only the delta from the initial position. This is used, for example, in games like shooters, where there is no cursor and you can rotate the camera endlessly.
if (!isMoved) {
    if (!document.pointerLockElement) {
        await input.value.requestPointerLock({
            unadjustedMovement: true,
        })
    }

    isInteractiveMode.value = true
    initialInputValue = inputValue.value
    isMoved = true
}

currentX += event.movementX
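The rest of the handler then turns the accumulated delta into a value, roughly like this (a hypothetical continuation; the real component also clamps the result and emits it to the parent for validation):

// One pixel of mouse movement changes the value by step, scaled by the Shift/Ctrl factor
const newValue = initialInputValue + currentX * Number(props.step) * inputValueFactor.value
inputValue.value = Number(newValue.toFixed(2))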
Interactive input is used like this:
<InteractiveInput
    :validValue="settings.frequenciesRange.to"
    @valueFromInput="validateFrequenciesRangeTo($event)"
    step="0.1"
    keyCode="KeyS"
    letter="S"
/>

function validateFrequenciesRangeTo(newValue) {
    if (isNaN(newValue)) {
        return
    } else if (newValue > (settings.midiMode ? 12543 : settings.sampleRate / 2)) {
        settings.frequenciesRange.to = settings.midiMode ? 12543 : settings.sampleRate / 2
    } else if (newValue <= settings.frequenciesRange.from) {
        settings.frequenciesRange.to = settings.frequenciesRange.from + 1
    } else {
        settings.frequenciesRange.to = newValue
    }
}
The values come from the component; we validate them externally and then mutate the state.
Thus, the mouse becomes an important controller for the sound. However, it is advisable to have a decent mouse. I don't believe in gaming mice and think they're marketing; you just need a not-the-cheapest mouse, preferably a laser one with a DPI of 3000-4000. Well, all the tactility of the instrument ends up on the table, so I bought an Ikea mouse pad.
The best browser
There will be more subjective impressions here than strict measurements; I didn't bother with specialized software for measuring CPU and used the Windows Task Manager and the Performance monitor from DevTools, so if someone wants to measure things more rigorously, that would be cool. The results here are rough.
If you look at the Performance monitor, it's not entirely clear how this tool measures CPU. I noticed that when it shows a figure like 70% CPU, consumption in the Windows Task Manager jumps by about 7%, and so on in that proportion. Measurement is further complicated by the fact that when DevTools is open, CPU consumption in the Task Manager starts jumping from 0 to 10%. So, going by the Task Manager metrics and the subjective feel of using the instrument, the results are as follows.
Firefox is not recommended: average consumption is about 4.2% CPU, but there are occasional clicks and freezes in the sound, and the sound differs slightly from Chromium-based browsers.
Chromium-based browsers consume about 7-8% of the CPU at high speeds, but subjectively everything works better. Neighboring tabs and extensions affect the result, so it's better to use incognito mode, where consumption drops to an average of 6% on my laptop.
There is also the consideration that if we want to use a computer as a musical instrument, especially for live electronics, that is, generating sound on the fly, then to increase stability and reduce possible latency it becomes important to save every bit of computer resources, since other software like Ableton may be running in parallel.
Chrome, as you know, consumes a lot of resources, so I was looking for a Chromium-based alternative and found the ungoogled-chromium project. In essence, this is open-source Chromium with all traces of Google removed. By my metrics, this browser consumes slightly less CPU and significantly less RAM, so I can recommend it for musical needs (yes, I've reached the point where I have a separate browser for my instrument).
Conclusions
Is using JavaScript for musical purposes a perversion or not? I think it's obvious that for strict, predictable audio tasks where maximum accuracy is needed, you need other languages and, of course, should not depend on the browser. IRCAM seems to have implemented the Web Audio API for Node.js, so in theory you can opt out of the browser, but the performance still won't be top-notch, of course.
But since my instrument is designed for, let's say, artistic purposes, where "roughness" and some degree of unpredictability are allowed or even required in musical genres such as noise or ambient, the instrument is generally suitable for its purposes. Moreover, one of the advantages of developing this way is distribution: the application is launched simply by clicking a URL, it is easy to download, requires nothing additional, and works on any OS.
But if someone wants to implement something similar in other languages — welcome.


