Ever had that moment? You’re staring at a live stock ticker, perhaps a blinking crypto chart, and suddenly your computer’s fans kick into overdrive, a desperate mechanical sigh as if it’s being asked to calculate the meaning of life for every flickering pixel. It’s a sound many of us have come to associate with, well, the modern web. And frankly, it’s an indictment. We need to talk about the horrendously poor performance of browsers when it comes to conceptually simple drawing tasks.
Take, for a particularly egregious example, a site like crypto.com with its live TradingView chart for any individual ticker. The experience can be so sluggish that even your mouse cursor seems to wade through digital treacle. Pop open your developer tools, and you'll witness a horror show: multiple socket connections opening and a veritable flood of data – tens, sometimes hundreds of kilobytes – gushing in. Every. Single. Minute. And for what? A single ticker view. One can’t help but suspect that these sites, in their infinite wisdom, are just flinging a verbose, massive JSON blob at your browser, a digital buffet where 99% of the dishes go uneaten, destined only to clog the pipes. Even that remaining 1% is so verbose it could win awards for inefficiency.
Couldn't this data be, say, compressed with something like msgpack, turning text-based bloat into a svelte binary stream? Or, dare we dream, could they transmit just the changes – a diff, or for the truly ambitious, a diff of diffs (yes, that’s a thing)? It makes a perverse kind of sense when you’re viewing an exchange overview with every token under the sun. But 40KB every three seconds for one ticker? TradingView itself isn't entirely innocent here either, often exhibiting similar symptoms of digital gluttony.
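To make it concrete, here's a minimal sketch of what a saner wire format could look like, assuming the `@msgpack/msgpack` npm package; the diff shape and field names are entirely illustrative, not anyone's actual protocol:

```typescript
import { encode, decode } from "@msgpack/msgpack";

// Hypothetical per-ticker delta: only what changed since the last message.
interface TickerDiff {
  t: number; // ms elapsed since previous update
  p: number; // price delta
  v: number; // volume delta
}

const diff: TickerDiff = { t: 3000, p: -0.42, v: 1200 };

// Binary-pack the diff instead of shipping a verbose JSON snapshot.
const packed: Uint8Array = encode(diff);        // a couple dozen bytes
const restored = decode(packed) as TickerDiff;  // lossless round trip

console.log(packed.byteLength, restored);
```

A three-field binary diff weighs a few dozen bytes. The 40KB snapshot it could replace weighs on my patience.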
It seems today's web platforms suffer from a severe, almost willful, misunderstanding of efficient data transfer and basic speed. Have we forgotten the simple joy of a snappy, responsive site, built on small, targeted data packets? What is this data frenzy that possesses developers, compelling them to bombard users with bytes, 99% of which are destined for the digital void? Sure, bandwidth isn't the bottleneck it once was, but that's no excuse for abandoning the fundamental principle of coding to perform and scale. It's like being given a firehose and deciding to water a single potted plant with it – overkill, messy, and slightly insane.
And the web isn't doing any better when it comes to actually displaying this onslaught. All these trading platforms are CPU resource hogs! And for what, ultimately? Displaying a damned colored rectangle, or a series of them. We were doing high-performance graphics for simpler tasks than this back in the 90s, probably on machines that would now be considered quaint doorstops.
Now, to be fair, there are some things browsers genuinely struggle with performantly, even if they seem simple. Try drawing a rectangle with a specific background and then fading only that background out. Technically, there's a fair bit happening under the hood for that animation. Do this a few times, say, animating cell backgrounds in a sprawling HTML table, and your CPU will spin up very quickly, very happily. From a browser's perspective, isolating a single table cell for truly independent, high-performance GPU-backed animation (beyond simple opacity or transform tweaks) can be notably challenging. While CSS tricks like promoting elements to their own compositing layer exist, tables have an intricate shared layout. An effect that might be trivial on a standalone element can become a performance headache in a table cell, pushing it closer to CPU-bound work or requiring careful, game-dev-like optimization that feels out of place for a simple HTML table. So yes, dear developers, I'll give you that one; it's an okay issue to not always have a perfect workaround for within the standard DOM.
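That said, there's a common mitigation worth sketching: don't animate `background-color` on the cell at all; fade the opacity of a throwaway overlay instead, since opacity animations can run on the compositor. The helper below is my own illustrative sketch, not a library API:

```typescript
// Flash a table cell by fading an overlay's opacity (compositor-friendly)
// rather than animating background-color (main-thread paint work).
function flashCell(cell: HTMLTableCellElement, color = "rgba(255, 200, 0, 0.8)"): void {
  const overlay = document.createElement("div");
  overlay.style.cssText =
    "position:absolute; inset:0; pointer-events:none; will-change:opacity;";
  overlay.style.background = color;

  cell.style.position = "relative"; // make the cell the containing block
  cell.appendChild(overlay);

  overlay
    .animate([{ opacity: 1 }, { opacity: 0 }], { duration: 600, easing: "ease-out" })
    .finished.then(() => overlay.remove());
}
```

It's still a workaround, not a fix – which is rather the point.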
However, a chart is a different beast entirely! Those are typically rendered using the `<canvas>` element. And canvas is GPU accelerated. It has been for many years in all major browsers (even if "all major browsers" increasingly means "Chromium and its cousins"). While modern 2D canvas operations are generally GPU accelerated, it's not an unconditional free pass: certain patterns, frequent pixel readbacks in particular, can push rendering back to the CPU or introduce overhead. So the excuse of browser limitation largely evaporates here, provided you know what you're doing.
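One concrete way to lose that acceleration: per-frame pixel readbacks. `getImageData` forces a GPU-to-CPU copy, and if you truly must read pixels often, there's a context hint for it. A sketch (actual benefit varies by browser):

```typescript
const canvas = document.querySelector("canvas")!;

// willReadFrequently hints the browser to keep the backing store in CPU
// memory, trading some raw draw speed for cheap pixel reads.
const ctx = canvas.getContext("2d", { willReadFrequently: true })!;

// Without that hint, a readback like this in a hot loop can stall the
// whole GPU pipeline:
const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
```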
Yet, even with the power of canvas, there are still myriad ways to screw things up. And so, people do. They screw up with an almost artistic dedication. For a bit of dark amusement, run the Chrome performance monitor on a seemingly static candlestick chart. Observe the furious background activity, the digital churning, even when the candle hasn't so much as twitched. This, right here, is where good developers distinguish themselves from the… others. A good developer will meticulously hunt down and eliminate that background noise, paring operations down to the absolute minimum. A bad developer? Well, just listen to your CPU. Hear that whine? That’s the sound of bad developer code being executed.
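What does paring things down to the minimum look like? At the very least, a dirty-flag scheduler: queue a frame only when data actually changed, so a static chart costs literally nothing. A sketch, with `drawChart` as a hypothetical stand-in for your renderer:

```typescript
let framePending = false;

// Call this from your data handler; it coalesces bursts of updates
// into a single frame.
function scheduleRedraw(): void {
  if (framePending) return;
  framePending = true;
  requestAnimationFrame(() => {
    framePending = false;
    drawChart();
  });
}

// Hypothetical renderer: repaint (or partially repaint) the chart.
function drawChart(): void {
  /* ... */
}
```

When no candle moves, no callback is queued, and the performance monitor shows the flat line it should have been showing all along.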
How would one efficiently update a candlestick chart on a canvas? Would you, for every single update, redraw the entire chart from scratch? I certainly wouldn't. It’s a bit of a trick question, as there are levels to this:
- The Brute Force: Update all your data values, then redraw the entire canvas. This is the CPU-heavy approach, the digital equivalent of repainting your entire house because one picture frame is crooked.
- The Promising Novice: Shift your dataset (e.g., old data out, new data in), then redraw the whole canvas. Better. It shows you're thinking, you have promise.
- The Lost Art: Besides shifting your dataset, you shift your view canvas. Imagine literally sliding the existing chart image one "candle width" to the left, then only drawing the new, now-empty, one-candle-wide slice with your latest data. Technically, you’d do this on an off-screen pixmap (a buffer) the size of your chart view – move, clip, update the new bit – and then `bitblit` (or `drawImage`, in canvas parlance) the result onto your visible canvas. It's fast! Gloriously, beautifully fast. But it's undeniably harder to implement correctly than a full repaint. An off-by-one pixel error can lead to wonderfully bizarre visual artifacts. Perhaps for this reason, or the allure of simpler code, this meticulous approach often seems less common in everyday practice than one might hope, especially when performance isn't (initially) a burning fire. (See the sketch below.)
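Here's roughly what that third level looks like on a 2D canvas. It's a sketch under assumptions: a fixed candle width, a price axis that isn't rescaling, and a hypothetical `drawCandle` helper:

```typescript
interface Candle { open: number; high: number; low: number; close: number; }

const CANDLE_W = 8; // px per candle; illustrative

const view = document.querySelector("canvas")!;
const viewCtx = view.getContext("2d")!;

// Off-screen buffer the size of the visible chart.
const buffer = document.createElement("canvas");
buffer.width = view.width;
buffer.height = view.height;
const bufCtx = buffer.getContext("2d")!;

function pushCandle(candle: Candle): void {
  // 1. Slide the existing image one candle width to the left (self-blit).
  bufCtx.drawImage(buffer, -CANDLE_W, 0);
  // 2. Clear and repaint only the newly exposed right-hand slice.
  bufCtx.clearRect(buffer.width - CANDLE_W, 0, CANDLE_W, buffer.height);
  drawCandle(bufCtx, candle, buffer.width - CANDLE_W);
  // 3. Blit the buffer onto the visible canvas in a single call.
  viewCtx.clearRect(0, 0, view.width, view.height);
  viewCtx.drawImage(buffer, 0, 0);
}

// Hypothetical: map prices to pixels and draw wick + body at column x.
function drawCandle(ctx: CanvasRenderingContext2D, c: Candle, x: number): void {
  /* ... */
}
```

The catch: the moment a new extreme forces the price axis to rescale, the old pixels are wrong and you fall back to a full repaint. That's fine – rescales are rare, ticks are not.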
The supreme irony here is that Canvas and WebGL in the browser are about as performant as you can get without rewriting the browser itself. Here's a nuance: while WebAssembly can scream for heavy computations, if your 'rendering loop' is primarily a sequence of calls into the browser's Canvas or WebGL APIs, using Wasm to make those same API calls via JavaScript interop isn't a magic bullet for the drawing commands themselves. The overhead of crossing that Wasm-JS bridge for each tiny draw call can negate benefits if the core logic isn't computationally bound. The browser's graphics APIs are already highly optimized C++. The problem? It's often the mentality and skill of today's web developers, who sometimes don't seem to give a flying fig about high performance when there's a new shiny bullshit feature to be rolled out the door by Friday. The tools are there! So fucking use them!
And how do we end up in this slow, simmering pot of inefficiency? Ideally, everyone would run a continuous integration pipeline, complete with regression tests and, crucially, performance test suites after every itsy bitsy tiny puny commit—or at least, far more frequently than 'almost never'. But, you know, you're forgiven for not doing that. Most don't. If they did, they'd see the insidious creep, the slow degradation as change after change makes everything just a little bit slower, a little more bloated. But hey, at least it has a lot of features! (That probably nobody will use.)
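Even the crudest performance gate beats none. A sketch assuming a Jest-style runner; `renderFrame` and the 4 ms budget are placeholders for your own hot path and your own pain threshold:

```typescript
// Hypothetical hot path: one frame of the chart's render loop.
function renderFrame(): void {
  /* ... */
}

test("chart frame stays under budget", () => {
  const ITERATIONS = 200;
  const start = performance.now();
  for (let i = 0; i < ITERATIONS; i++) {
    renderFrame();
  }
  const msPerFrame = (performance.now() - start) / ITERATIONS;
  expect(msPerFrame).toBeLessThan(4); // fail the build, not the user
});
```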
So, the next time your computer groans under the weight of a simple chart, remember: it’s often not the browser, nor the task itself, but a cascade of questionable data practices, rendering shortcuts, and a development culture that prioritizes speed of deployment over the speed of, well, anything else. And we, the users, are left listening to the fans.