Rahul Ramteke

Saving cost with timeout latches in JavaScript

I maintain an app which essentially live streams what song I'm listening to on Spotify and lets users join in and sync with me.
It doesn't actually stream audio buffers, just the song title.

Demo

And the architecture is pretty straightforward:

  • Poll the Spotify API in the background
  • Maintain state in memory
  • Push state changes to clients via WebSockets
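
To make that concrete, here's a rough sketch of the always-on setup. The Spotify call (`fetchCurrentTrack`) and the state shape are stand-ins, not the actual app code:

```javascript
import { WebSocketServer, WebSocket } from 'ws';

const wss = new WebSocketServer({ port: 8080 });
let currentTrack = null; // state maintained in memory

// Background poll: ask Spotify what's playing, forever.
setInterval(async () => {
  const track = await fetchCurrentTrack(); // hypothetical Spotify API wrapper
  if (track === currentTrack) return;
  currentTrack = track;

  // Push the state change to every connected client.
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(JSON.stringify({ track: currentTrack }));
    }
  }
}, 1000);
```

This loop, plus the cluster it runs on, is what keeps the meter running even when nobody is listening.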

And it works quite well: acceptable delay, no rate limits hit (hard to hit them when it's just my user getting polled).
But every month, I get an invoice of $25 from Digital Ocean.

That happens because I use their k8s cluster and a load balancer, which I know is overkill for something like this.

And that's why I decided not only to move to something simpler and cheaper, but also to explore: can we pay only for the work that we actually do?

Architecture Overhaul

Firstly, we need to move away from "always on" mode to "on demand" mode.
Does the tree in the forest fall only when we observe it? In our case, yes it does, because then we don't have to render or calculate anything unnecessarily.

So we remove the socket layer, and let the client poll our system.

Now we have two polls:

  • Client polling our system
  • System polling Spotify

Is it okay to take a shortcut and couple these two polls? That is, call the Spotify API only when a client calls our system.

That can work in theory, but then our rate of API requests to Spotify gets coupled with the number of clients. One client is okay, maybe even 10, but 100? 100,000?

That's when we would have problems, like rate limits and quota exhaustion.
Also, let's say I am listening to a song for a few minutes. Does it even make sense to call the Spotify API 100,000 times just to check the current track?

With this, we see that coupling those two polls isn't a good idea. So what do we do next?
We need to poll Spotify only when there's intent, not unnecessarily, but without tightly coupling it to incoming requests.

Weak coupling?
Let's say we have a timer for polling Spotify, set to 10s. Our system will poll Spotify every second for 10s and then stop. Unless a request comes in, in which case the timer gets reset to 10s and the cycle starts again.
If 100,000 requests hit the system, the most a request can do is reset the timer. It doesn't affect the poll rate for Spotify, it just extends the window.
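
A minimal sketch of that idea, using nothing but a deadline timestamp (names like `checkCurrentTrack` are illustrative, not the actual implementation):

```javascript
const POLL_WINDOW_MS = 10_000;  // keep polling for 10s after the last request
const POLL_INTERVAL_MS = 1_000; // poll Spotify once per second

let deadline = 0;
let polling = false;

async function pollLoop() {
  polling = true;
  while (Date.now() < deadline) {
    await checkCurrentTrack(); // hypothetical call that refreshes in-memory state
    await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS));
  }
  polling = false; // intent has expired, stop doing work
}

// Called on every incoming client request.
function onClientRequest() {
  deadline = Date.now() + POLL_WINDOW_MS; // the most a request can do: reset the timer
  if (!polling) pollLoop();               // restart the cycle if it had stopped
}
```

Whether one client or 100,000 clients hit `onClientRequest`, Spotify still sees at most one request per second.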

It's best described visually here:

Timeout latch / iostreamer | Observable

It's a leaky bucket which leaks at a constant rate unless reset by some action. In this example, clicking! Check out the project here.


Timeout latch

It's straightforward to model this with simple timers. For example, we start a setTimeout, and whenever we want to reset, we clear that timeout and start again.

But I didn't want the overhead of creating and deleting timers just to reset the clock.

So I created a custom scheduler which ticks every 1ms. On the other side, we have a latch, essentially just an object with a counter.
The scheduler's job is to decay the latches by decrementing their counters.
In this universe, the scheduler creates time using ticks, and latches experience that time through their counters.
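
A rough sketch of the latch side (illustrative, not the published API): it's just a counter plus a callback, and it never owns a timer of its own.

```javascript
class Latch {
  constructor(ticks, onTimeout) {
    this.initial = ticks;   // lifetime, measured in scheduler ticks
    this.remaining = ticks;
    this.onTimeout = onTimeout;
    this.done = false;
  }

  // Called by the scheduler on every tick: experience one unit of time.
  tick() {
    if (this.done) return;
    this.remaining -= 1;
    if (this.remaining <= 0) {
      this.done = true;
      this.onTimeout();
    }
  }

  // Resetting is just restoring the counter, no timers created or destroyed.
  reset() {
    this.remaining = this.initial;
    this.done = false;
  }
}
```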

This is fairly simple and straightforward, but as I mentioned before, I wanted to explore whether we can pay only for the actual work we do.

Running a scheduler indefinitely still means we are in an "always on" mode. What's the point of creating time if there are no entities to observe it?

And that's the final thing we needed to do: stop the scheduler once all latches are done or cancelled.

If a new latch is added, or an old one gets reset, the engine starts once again.
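
And a matching sketch of the scheduler (again illustrative): it ticks every 1ms, decays each registered latch, and stops itself the moment there's nothing left to observe.

```javascript
class Scheduler {
  constructor(tickMs = 1) {
    this.tickMs = tickMs;
    this.latches = new Set();
    this.interval = null;
  }

  // Adding (or re-adding after a reset) a latch wakes the engine up.
  add(latch) {
    this.latches.add(latch);
    if (!this.interval) {
      this.interval = setInterval(() => this.tick(), this.tickMs);
    }
  }

  tick() {
    for (const latch of this.latches) {
      latch.tick();
      if (latch.done) this.latches.delete(latch); // finished latches fall away
    }
    // No latches left: stop creating time entirely.
    if (this.latches.size === 0) {
      clearInterval(this.interval);
      this.interval = null;
    }
  }
}
```

The real timeout-latch implementation differs in its details, but the pay-only-for-work-you-do property comes from exactly this stop/start behaviour.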

The code is maintained here and published to npm as well.

iostreamer-X / timeout-latch

timeout-latch

A simple timeout latch. Like a reverse leaky bucket.

It's a simple callback-based mechanism to get notified when a timeout has occurred, and to reset that timeout if the need arises. Check out the visual demo here.

Why not plain timeout?

Plain timeouts don't handle this that easily. For example, one might start a timeout and manage state around it:

```javascript
function run() {
    setTimeout(
        () => {
            // your callback
        },
        3000
    )
}

function reset() { // when you want to reset the timer
    run();
}
```

But we realise that resetting this way only works after the timeout has completed its work. If we reset before that, then we simply have two timeouts!

To counter this, we can clear the previous timer and start again. That works, but it has the overhead of cancelling and creating new timers. Also, it's a timer…
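
That clear-and-recreate variant looks roughly like this:

```javascript
let timer = null;

function run(callback, ms = 3000) {
  timer = setTimeout(callback, ms);
}

function reset(callback, ms = 3000) {
  clearTimeout(timer); // throw away the previous timer...
  run(callback, ms);   // ...and allocate a brand new one
}
```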

Why is this better?

We already established that in this case "on demand" is better than "always on".
And even in "on demand" mode, we want to be truly on demand, that is, expend resources only when necessary.

With the current setup, especially with timeout-latch, we are in a state where nothing runs unless necessary, and it halts if the intent to run isn't there anymore.

This property is extremely beneficial if we look at serverless or edge functions.
That's a model where you do pay for whatever you execute.

Even for simple apps on Digital Ocean, there's a minimum of $5 you'd have to pay.
But on a platform that takes the "server" away from you, one can truly embrace the bursty nature of the work.

And this serves as an example where we transformed polling (very continuous) into something that's bursty and "on demand".

And that concludes my cost-cutting process. The things you have to do in this economy 😁
