SEN LLC
Why setTimeout Is a Bad Metronome — and What to Use Instead

I built a web metronome with setInterval(fireClick, 60000/bpm) and it drifted audibly within two bars. Here is why — and how the ~60-line scheduler I replaced it with stays sample-accurate forever.

📦 GitHub: https://github.com/sen-ltd/metronome
🎵 Demo: https://sen.ltd/portfolio/metronome/


The bug I kept running into

You want a click every beat. The naive version is three lines:

```javascript
const ctx = new AudioContext();
const everyBeat = () => click(ctx, ctx.currentTime);
setInterval(everyBeat, 60000 / bpm);
```

At 120 BPM that's a click every 500 ms. Put it next to a real metronome and within a dozen beats the two have audibly diverged. At 240 BPM the drift is obvious within four beats.

The reason is that setInterval's callback schedule has no connection to the audio hardware clock. The browser fires "around 500 ms from now" on the main thread, which is busy doing whatever else a browser does — layout, GC, another task, some extension content script. Each fire can arrive 5–50 ms late, and the error accumulates. The audio clock, meanwhile, advances exactly at the sample rate.
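The accumulation is easy to see in a toy simulation (the jitter numbers below are invented for illustration, not measured): with relative scheduling, each late fire pushes every subsequent beat back, while absolute scheduling computes each beat from its index and never inherits the error.

```javascript
// Toy model: how per-tick lateness accumulates with relative scheduling
// but not with absolute scheduling. Jitter values are made up.
const jitterMs = [5, 12, 3, 30, 8, 15, 2, 20]; // lateness of each timer fire
const periodMs = 500;                          // 120 BPM

// Relative: each fire schedules the next, so every error carries forward.
let t = 0;
const relativeTimes = jitterMs.map(j => (t += periodMs + j));

// Absolute: each beat is beatIndex * period, so errors never compound.
const absoluteTimes = jitterMs.map((_, i) => (i + 1) * periodMs);

const drift = relativeTimes[relativeTimes.length - 1]
            - absoluteTimes[absoluteTimes.length - 1];
console.log(drift); // 95 ms behind after only 8 beats
```

Eight beats of single-digit-to-moderate lateness is already almost a tenth of a second, which is why the naive version loses to a hardware metronome within two bars.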

The fix, in one sentence

Don't let the main thread decide when each click plays. Tell Web Audio the exact currentTime at which you want each beat and let the audio thread render the sample there.

```javascript
osc.start(t)  // t is on the AudioContext.currentTime clock, not wall time
```

Whatever jitter the main thread has, the sample at offset t comes out at audio-clock time t. Period.

The lookahead scheduler

The pattern is Chris Wilson's A Tale of Two Clocks from 2013. Two knobs, one loop:

  • scheduleAheadSec — how far into the future you're willing to queue beats. I use 0.1 seconds.
  • lookaheadMs — how often the JS timer wakes up to peek. I use 25 ms.

The invariant: any beat that will play within the next scheduleAheadSec seconds has already been handed to Web Audio.

```javascript
export function createScheduler({
  audioCtx, bpm, beatsPerBar, onBeat,
  scheduleAheadSec = 0.1,            // how far ahead beats are queued
  lookaheadMs = 25,                  // how often the JS timer peeks
  setIntervalFn = setInterval,       // injectable so tests can stub the timer
  clearIntervalFn = clearInterval,
}) {
  let nextBeatTime = 0;
  let beat = 0;
  let timer = null;

  const secondsPerBeat = () => 60 / bpm;

  function tick() {
    // Queue every beat that falls inside the lookahead window.
    while (nextBeatTime < audioCtx.currentTime + scheduleAheadSec) {
      onBeat({ beat, time: nextBeatTime });
      nextBeatTime += secondsPerBeat();
      beat = (beat + 1) % beatsPerBar;
    }
  }

  return {
    start() {
      beat = 0;
      nextBeatTime = audioCtx.currentTime + 0.05; // small offset so the first beat isn't already in the past
      tick();
      timer = setIntervalFn(tick, lookaheadMs);
    },
    stop() { clearIntervalFn(timer); timer = null; },
    setBpm(next) { bpm = next; },
    _tick: tick, // exposed for tests
  };
}
```

onBeat is where the actual sound goes:

```javascript
function click(ctx, time, accent) {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = accent ? 1500 : 1000;
  osc.type = "square";
  gain.gain.setValueAtTime(0, time);
  gain.gain.linearRampToValueAtTime(0.4, time + 0.001);
  gain.gain.exponentialRampToValueAtTime(0.0001, time + 0.05);
  osc.connect(gain).connect(ctx.destination);
  osc.start(time);
  osc.stop(time + 0.06);
}
```

Note that time flows through. The JS timer could wake up 20 ms late, 40 ms late, 80 ms late — it doesn't matter, because every beat whose timestamp lies inside the 100 ms lookahead window is pre-queued. The audio thread has everything it needs and the main thread has 75 ms of slack before the next tick needs to do anything.

Why a window instead of scheduling everything up front

The obvious question: why not just pre-schedule the entire bar (or the next ten seconds)?

Because the user will change something. Nudge the BPM slider from 120 to 140 and you want the next beat to be the new tempo, not the one after ten pre-scheduled slow beats have already played out. The 100 ms window is short enough that tempo changes feel immediate and long enough to absorb any realistic main-thread jitter.
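That "next beat uses the new tempo" behavior falls out of the while-loop itself. Here is a standalone sketch of just that loop (hypothetical names, with the audio clock passed in as a plain number) showing that after a BPM change, the very next queued beat already has the new spacing:

```javascript
// Minimal model of the scheduler's while-loop, with the audio clock
// passed in as a number so it runs anywhere.
function makeLoop(bpm) {
  let nextBeatTime = 0;
  const queued = [];
  return {
    setBpm(next) { bpm = next; },
    // currentTime stands in for audioCtx.currentTime
    tick(currentTime, aheadSec = 0.1) {
      while (nextBeatTime < currentTime + aheadSec) {
        queued.push(nextBeatTime);
        nextBeatTime += 60 / bpm;
      }
      return queued;
    },
  };
}

const loop = makeLoop(120);    // 0.5 s per beat
loop.tick(0);                  // queues the beat at t = 0
loop.setBpm(140);              // user nudges the slider
const beats = loop.tick(0.9);  // next gap is already 60/140 ≈ 0.43 s
console.log(beats[2] - beats[1]);
```

Only one old-tempo beat was ever committed, because only one fit inside the 100 ms window when the slider moved.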

The part that surprised me

I wrote unit tests against a fake AudioContext and a fake setInterval:

```javascript
import { test } from "node:test";
import assert from "node:assert";
import { createScheduler } from "./scheduler.js";

function fakeCtx(startAt = 0) { return { currentTime: startAt }; }

test("beats are exactly 60/bpm apart", () => {
  const ctx = fakeCtx(0);
  const beats = [];
  const sched = createScheduler({ audioCtx: ctx, bpm: 120, beatsPerBar: 4,
                                  onBeat: b => beats.push(b),
                                  setIntervalFn: () => 1, clearIntervalFn: () => {} });
  sched.start();
  ctx.currentTime = 2;    // fast-forward the audio clock
  sched._tick();
  for (let i = 1; i < beats.length; i++) {
    const gap = beats[i].time - beats[i - 1].time;
    assert.ok(Math.abs(gap - 0.5) < 1e-9);
  }
});
```

The gap is exactly 0.5 to machine precision, not "roughly 500 ms with some jitter". Because we never read the wall clock and only do arithmetic on nextBeatTime, there is no jitter source to begin with. The test catches the class of bug where someone refactors and accidentally starts using Date.now() or performance.now() inside the scheduling math. The moment that slips in, the gap stops being exact.

This is the quiet upside of using AudioContext.currentTime: it is both the actual play time and the arithmetic you plan against, so your scheduling logic is trivially testable.

Tap tempo falls out for free

Given the above, tap tempo is a few lines:

```javascript
const taps = [];
function tap() {
  const now = performance.now();
  taps.push(now);
  while (taps.length > 5) taps.shift();
  // Drop taps older than 2.5 s — user is starting a new measurement
  while (taps.length && taps[0] < now - 2500) taps.shift();
  if (taps.length < 2) return;
  const intervals = taps.slice(1).map((t, i) => t - taps[i]);
  const avgMs = intervals.reduce((a, b) => a + b) / intervals.length;
  setBpm(60000 / avgMs);
}
```

Rolling average of the last 4 intervals, auto-expiring gaps. performance.now() is fine here because we're measuring user intent, not scheduling audio. The result feeds setBpm() and the running scheduler picks up the new cadence on its next beat.
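The averaging arithmetic itself is worth a quick worked example. With synthetic timestamps standing in for performance.now() values (the numbers below are made up):

```javascript
// Four taps, slightly uneven, in milliseconds.
const stamps = [0, 480, 990, 1530];
const gaps = stamps.slice(1).map((t, i) => t - stamps[i]); // [480, 510, 540]
const avgMs = gaps.reduce((a, b) => a + b) / gaps.length;  // 510
console.log(Math.round(60000 / avgMs));                    // 118 BPM
```

Averaging over several intervals is what makes the readout stable even though no human taps with millisecond precision.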

What I would not do again

  • Don't use a Web Worker for the timer. You might read that workers are needed because setInterval is throttled in background tabs. For an audio context that's a non-issue — when the tab is backgrounded the AudioContext suspends anyway, and resuming it re-syncs currentTime from where it left off. The added complexity of a worker pays off for DAW-grade applications, not for a metronome.
  • Don't ignore ctx.state === 'suspended' on first click. Browsers suspend audio contexts until a user gesture. The first play button press needs await ctx.resume() or your scheduler ticks happily while nothing is audible.
  • Don't skip the 0.05 s "start in the near future" offset. If you call osc.start(ctx.currentTime) directly, the first click is often lost — by the time the audio thread picks it up, its timestamp is already slightly in the past.
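The resume guard from the second bullet can be sketched as a tiny helper. ensureRunning is a hypothetical name; taking the context as a parameter lets it be exercised against a stub outside a browser:

```javascript
// Resume a suspended AudioContext from inside a user-gesture handler.
// Browsers keep the context suspended until the first gesture.
async function ensureRunning(ctx) {
  if (ctx.state === "suspended") {
    await ctx.resume();
  }
  return ctx.state;
}

// In the real page this would be wired to the play button, e.g.:
// playButton.addEventListener("click", () =>
//   ensureRunning(audioCtx).then(() => scheduler.start()));
```

Checking `ctx.state` first keeps the helper idempotent, so it is safe to call on every press of play.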

Takeaways

  • setTimeout/setInterval is fine for UI polling. It is wrong for audio scheduling, because the JS event loop has no relationship to the audio clock.
  • AudioContext.currentTime is both the clock your samples actually play on and the variable your scheduler should do arithmetic against.
  • A two-clock, lookahead design gives you drift-free timing with a 25 ms polling loop and 100 ms of slack — roughly 60 lines of code.
  • The scheduler is trivially unit-testable because its only dependency is audioCtx.currentTime; stub that and you can fast-forward time in assertions.

Code (MIT, no framework, single static page): github.com/sen-ltd/metronome.
