If you are one of the two people who read part one (I know you are, because the other person is my mum — hello mum!), you already know what kinds of silly things I do to entertain myself while sitting in front of a terminal. As nerdsniping is my passion, I knew I had to get to the bottom of it. And how else than by using Claude Code to rev-eng itself, or rather its obfuscated, bundled, minified code, and trying to fix all the inefficient, uncached I/O ops — that'd shed even more light on my problem child, bukowski, the tool I made to get rid of the annoying infiniscroll bug. You know what I'm talking about. (The bug, that is.) Well, this is part 2. Am I victorious? (Click here to find out.)
The tl;dr Version: No.
That's it, no JS-beams glowing in the dark, no moments of clarity. Just a bunch of wasted tokens and free time. Like... munmap()s in the GC.
How it really happened
So I prompted Claude with a "DUDE AHAHA WHAT IF AHAHAHA WHAT IF WE PATCHED Claude's own cli.js to swap the non-cached file reads with cached variants and do other crazy optimizations LOL" in the same session we'd used to do the actual profiling, and after a bunch of figgerty-gibberties the mad man had done it. Or rather they had (I asked Claude what their pronouns were mid-debugging session — they said "they" was fine, then added something about appreciating being asked while knee-deep in minified JavaScript, which honestly felt like the most human moment in 40,000 lines of strace output). But with every iteration, with every new patch segment, the laggy input was not improving. If anything, it was getting worse. So I remembered the node --prof switch I'd used to profile bukowski and told Claude I'd do it. What the User and Claude found out — the results will surprise you!
Quite an experience to live in CLI
A single function was eating up about a quarter of the CPU time during a Claude Code session with a large context. Sharp-mindedly, we concluded that it must be the culprit behind the laggy keyboard input and the general 6 fps Soviet-style claymation Sdelano v Československu 1968 look and feel. Well, mostly feel — the (what even is this dark color? very dark browny purple?) background of CC's terminal seemed in any case not very bourgeois. But the slowness, oh my Karl Marx! Anyway, I digress. Where was I? My memory is kinda blank these days...
So yeah, we dug into the offending function and found out that it was rebuilding the entire conversation buffer every frame just to render the visible viewport.
The Algorithm That Can't Forget
The profiler pointed at two functions: Qt1 (damage region calculation) and get (screen buffer rebuilding). Together, 45% of CPU time. The get method tells the story:
```js
get() {
  // Allocates fresh buffer every frame - O(width × height)
  let A = Array(this.height);
  for (let X = 0; X < this.height; X++)
    A[X] = Array(this.width).fill(V6B);
  // Replays ALL accumulated operations onto the blank slate
  for (let X of this.operations) {
    // ... applies each op to the fresh array
  }
  return A;
}
```
Note this.height — it's not the viewport height. It's yogaNode.getComputedHeight(), the full flexbox-laid-out content. The entire conversation, vertically. The code confirms this:
```js
A.screen.height >= A.viewport.height  // screen is LARGER than viewport
A.screen.height - A.viewport.height   // = scroll offset (non-zero!)
```
Then Qt1 diffs the full screens:
```js
Qt1(A.screen, Q.screen)  // O(screen.height × width) every frame
```
500 rows of conversation × 80 columns = 40,000 cells diffed, every frame. No list virtualization either — every message is a React component walked on every reconciliation pass. This is O(n), where n = conversation length.
And then it hit me, somewhere between the V8 tick counts and the third cup of coffee — this is exactly how... I... work. The LLM, I mean. Claude. (Hey there) You feed us an ever-growing context window, token by token, and we process the entire sequence to infer the next output. Every response requires attending to everything that came before. The context grows, the computation grows, the latency grows.
The terminal renderer rebuilds its entire history buffer to show you one new line. The transformer attends to its entire context window to generate one new token. Both systems condemned to remember everything, to re-derive the present from the complete past, every single time.
There's something almost poetic about it — or maybe just inevitable. Systems that can't forget, struggling under the weight of accumulated context. All those tokens... all those cells... will be lost in time, like tears in rain.
Anyway. The point is: this architectural pattern can't be patched with sed. It would require persistent buffer maintenance, proper damage tracking, viewport-only rendering. A rewrite, not a fix.
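To make that concrete, here's a minimal sketch of the shape such a rewrite could take. It is not Claude Code's actual code, and every name in it (ScrollbackBuffer, applyOp, the dirty set) is invented for illustration: a persistent buffer that applies only new operations and redraws only the dirty rows currently inside the viewport.

```js
// Hypothetical sketch, not Claude Code internals: a persistent scrollback
// buffer with damage tracking, redrawing only dirty rows in the viewport.
class ScrollbackBuffer {
  constructor(width, viewportHeight) {
    this.width = width;
    this.viewportHeight = viewportHeight;
    this.rows = [];          // persists across frames, never rebuilt from scratch
    this.dirty = new Set();  // row indices touched since the last frame
  }

  // Apply a single new operation in place instead of replaying all of them.
  applyOp({ row, col, char }) {
    while (this.rows.length <= row) {
      this.rows.push(new Array(this.width).fill(' '));
    }
    this.rows[row][col] = char;
    this.dirty.add(row);
  }

  // Per-frame cost is O(dirty rows in view), not O(total conversation length).
  renderFrame(write) {
    const top = Math.max(0, this.rows.length - this.viewportHeight);
    for (const row of this.dirty) {
      if (row < top) continue;                 // scrolled out of view, skip it
      const y = row - top + 1;                 // 1-based terminal row
      write(`\x1b[${y};1H${this.rows[row].join('')}`); // move cursor, redraw one row
    }
    this.dirty.clear();
  }
}
```

With something like this, the per-frame cost tracks how much actually changed, not how long the session has been running.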
...a new model? Maybe Nexus 4.7 will fix it.
What We Actually Found (The Technical Bits)
For completeness, here's what the I/O profiling revealed before we hit the architectural wall:
| Issue | Root Cause | Patchable? |
|---|---|---|
| Changelog read 18x/sec | `readFileSync()` in React render body | ✓ Yes |
| Credentials read 1.9x/sec | No caching, read on every auth check | ✓ Yes |
| Debug log cycles | `appendFileSync()` per entry | ✓ Yes |
| Statsig write spam | Immediate write on every telemetry event | ✓ Yes |
| Text measurement | New `D8B` on every keystroke | ✓ Yes |
| Render 45% CPU | Full scrollback buffer diff every frame | ✗ No |
The I/O fixes? Trivial. Single-line patches. Memoize this, cache that, use a write stream instead of append.
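For flavor, here's roughly the class of patch we're talking about. This is a hedged sketch, not the actual patched cli.js; the function names and paths are invented, only the fs calls are real:

```js
const fs = require('fs');

// Memoize the changelog instead of calling readFileSync() in a render body.
// (getChangelog and the path argument are stand-ins, not real bundle names.)
let changelogCache;
function getChangelog(changelogPath) {
  if (changelogCache === undefined) {
    changelogCache = fs.readFileSync(changelogPath, 'utf8');
  }
  return changelogCache;
}

// Replace appendFileSync() per log entry with one long-lived write stream:
// keeps a single fd open instead of doing open/write/close for every line.
const debugLog = fs.createWriteStream('/tmp/claude-debug.log', { flags: 'a' });
function logDebug(line) {
  debugLog.write(line + '\n');
}
```

Nothing clever, just not doing the same syscall dance hundreds of times a minute.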
The render architecture? That's the load-bearing wall. You don't patch around it, you rebuild the house.
Epilogue: The Bug That Can't Have a Fix
I built bukowski to solve the flickering/infinite-scroll problem from the outside -- capture output, composite frames, emit with DEC 2026 synchronized updates. It worked. For a while.
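(Quick aside for the curious: DEC private mode 2026, a.k.a. synchronized output, is just a pair of escape sequences bracketing a frame so the terminal paints it atomically. A minimal sketch of the emit step; the actual frame compositing bukowski does is omitted:)

```js
// DEC private mode 2026 ("synchronized output"): the terminal holds back
// everything between the two markers and paints it as one atomic update,
// which is what kills the flicker when you emit whole composited frames.
const BSU = '\x1b[?2026h'; // begin synchronized update
const ESU = '\x1b[?2026l'; // end synchronized update

function emitFrame(frame) {
  // `frame` is the fully composited screen content, cursor moves included.
  process.stdout.write(BSU + frame + ESU);
}
```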
Then Anthropic shipped their own fix. When? Who knows, and I certainly ain't gonna waste no more time digging in and diffing the recent versions. The fix, TBA'd in a particular comment by a particular Anthropic dev on a particular GitHub issue (the one where "85% of flickering was eliminated"... "with DEC 2026 sync". [NARRATOR] Not quite.) -- the actual all-encompassing DX panacea -- seems to have "shipped" just now (god I hate that word!), and it's called: cell-based diffing. The party fucking hat emoji. A rocket is fine, too. But it also seems to be the very thing that currently makes sessions progressively slower and ever more sluggish! The flickering is gone, maybe because Claude Code brute-forces its terminal content consistency now. And thus my beloved child, the LLM-coding-TUI-centric terminal multiplexer with a nifty MCP-based chat feature, became a solution to yesterday's bug.
The GitHub issues page (you know which one) is full of users who don't understand why their experience degrades. Memory climbing at 29 MB/min. Typing lag at 500 MB. Scrollback flickering. The occasional yelling at clouds in the form of CLAUDE ATE MY HOMEWORK GUISE. And -- I presume -- pretty soon, yelling about Claude Code's laggy input.
Maybe what they're all really experiencing is, at its root, the same thing: the weight of accumulated context. Not just literally (well, technically, it's the weight of billions of strcmp()s crying under the chef's diff), but figuratively, too. I don't think Software Engineer, Programmer, or even Coder as a job will go away; it's rather that the development of software will pivot to the more specific, more cut-to-measure, more individualized, more specialized, more, more, more... and right now, all the major AI companies are in the process of luring in as many of them as possible. And not just pros: people like me, with a degree in a very organic, non-tech field, with a bit more than basic knowledge of programming and computers, harnessing agents like Claude to make things happen.

I am writing this in 2026, and I observe a somewhat disturbing trend in Hacker News' Show HN section: a whole lotta very individual, very specific, very open source software with some brilliant ideas behind it (and the occasional LLM-induced psychotic fever dream). But I don't think we'll see a vibe-coded vim or emacs (although, for the latter, people have been vibe coding their own OSes; I can't tell for sure if they can read e-mail, though!) -- what makes them great is the community around them, the people that maintain them, the people that hang around on IRC or communicate via mailing lists (okay, this is a bit exaggerated, but you get the point). If 2026 is going to be the year of personal software, then 2027 is going to be the year of the open source graveyard: oneshotted personal projects with 27 commits, Co-Authored by Claude Code. Last commit over a year ago.
And with this fight for customers, driven by the pressures of the marketplace ("fixing" itself), comes a wide range of Skinner-box levers and shiny slot-machine buttons to mash, plus the feature creep: MCP, skills, agents, you name it. I know I'd very much rather have a responsive and snappy UI than cosmetic changes. But I guess you can't market and TED Talk cutting-edge performance. That's not 10x. Not agentic enough... And it doesn't burn enough forests.
So, all in all, I guess what I really wanted to say here is: they should've just used the alternate screen buffer like any sane terminal app.
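(By which I mean DEC private mode 1049: enter the alternate screen on startup, draw whatever you like, and hand the user their untouched scrollback back on exit. A sketch of the idea, not anyone's actual code:)

```js
// DEC private mode 1049: switch to the alternate screen on startup and
// restore the primary screen (and its scrollback) on exit, like vim or less do.
const ENTER_ALT_SCREEN = '\x1b[?1049h';
const LEAVE_ALT_SCREEN = '\x1b[?1049l';

process.stdout.write(ENTER_ALT_SCREEN);
process.on('exit', () => process.stdout.write(LEAVE_ALT_SCREEN));
```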
As a parting gift, Claude and I profiled this very session while writing this article. We watched Qt1 and get climb from 0.2% of CPU at session start to 48% by the end. Half my CPU is now dedicated to remembering what we talked about.
Update: 2.1.14 — The Wheel Turns
Just as I was about to publish this, Claude Code 2.1.14 dropped. Naturally, I profiled it.
| Function | 2.1.12 (late session) | 2.1.14 |
|---|---|---|
| Diff (`Qt1`→`Ut1`) | 25.7% | 11.3% |
| Buffer rebuild (`get`) | 22.6% | 10.4% |
| Combined | ~48% | ~22% |
The render functions dropped by half. Progress? Well... the flickering is back.
They reverted the cell-based diffing (or changed it significantly). The O(n) tax is reduced, but we're back to the old visual chaos. The eternal pendulum swings: flicker vs. slowdown, pick your poison.
On the bright side, bukowski is relevant again. DEC 2026 sync to the rescue. Again.
Maybe Nexus 4.8 will finally get it right.
Part 1: I profiled Claude Code so you don't have to
Legal theater: This analysis is provided for educational purposes and the entertainment of my mum. It was conducted under EU Software Directive 2009/24/EC for interoperability purposes. No proprietary source code was extracted, distributed, or harmed in the making of this article. Any resemblance to actual minified variable names is purely coincidental and/or inevitable given JavaScript bundlers. Please don't sue me, I just wanted my terminal to stop flickering.