I've been writing code for a few decades now. Started with C. The kind of C where you know roughly what the CPU is doing at any given moment — moving a register, touching a block of memory, shaving off a few microseconds. There's something satisfying about that directness. Assembly-level intuition. It sticks with you.
It also makes you a little hostile toward anything that calls itself a "modern language."
Not because you can't learn it. Because it doesn't feel right.
From C to modern C++
For a long time, my C++ was really just C with classes. I found out later that most people who have "C++ engineer" on their resume are doing the same thing. That's where most of us plateau.
Then I started using std::vector. Then RAII. Then I ran into compare_exchange_strong and compare_exchange_weak — spent a full day just figuring out when to use which. Then came C++17, SFINAE, template metaprogramming.
Honestly, I questioned my life choices.
But those tools also made my first serious project work the way I wanted it to.
50,000 lines of C++, then 3.5 million lines of Python
That first project was a bridge layer between a trading platform and a strategy execution service. About 50,000 lines of modern C++, took me six months to write by hand.
I picked C++ for two reasons: speed, and RAII.
The results: on Windows 10, it started at around 22MB of memory, then dropped to 11MB after running continuously for a week. On Windows 11, it started at 36MB, settled at 12MB. During all that time it was pulling and processing tick data for every instrument at full frequency.
Rough around the edges. But the direction was right.
Later, the product evolved into a web application: AI-assisted strategy editing, multi-user backtesting, a Python backend, and a WordPress frontend. The whole thing took two years.
The Python backend peaked at 3.5 million lines. WordPress passed 4 million. AI was involved full-time at this point — I was using Claude 3.5.
But it was still just a multi-user backtesting system. Plenty of features still missing.
A 14-day dead end
Claude 3.5 had a habit of breaking things while fixing them.
One time, the multi-process backtesting module was working perfectly. Then after a round of heavy refactoring, it stopped working entirely. I spent 14 days trying to fix it. Twelve hours a day. Back then Claude didn't have the weekly or per-5-hour usage caps yet, so I was grinding pretty much around the clock to make the most of my subscription.
After 14 days, I gave up. Rolled back to the last stable version. Then manually merged all the other changes from those two weeks, testing each one as I went.
Turns out AI can write code really fast. It can also drive you off a cliff really fast.
Cutting 90%
At the end of 2025, I made a decision: cut 90% of the codebase.
Python backend went from 3.5 million lines to under 500,000. WordPress went from 4 million to under 500,000.
The system itself was running fine. I didn't cut it because it was broken. I cut it because of money.
AWS bills were brutal; the system needed multiple servers. Even Anthropic, the company behind Claude, says it'll take two years to reach profitability. OpenAI says five. If the AI companies themselves are getting squeezed by infrastructure costs, a small independent project like mine didn't stand a chance.
I also realized something: users probably want to build their own trading systems on their own machines. They don't need me running a massive cloud engine for them.
So I rebuilt it as a desktop app.
Without the queue system, the saga orchestration, the multi-user concurrency — plus two years of backend code to reference — things moved a lot faster. Started the rewrite on January 2nd, basically finished by mid-February. 500,000 lines of code, six weeks.
I went back through the git log later and roughly tallied what I'd deleted from the old system. Ten entire subsystems:

- server-side backtesting engine
- task queue with priority management
- a process manager with its own state machine
- real-time WebSocket streaming
- a saga orchestrator for failure cleanup
- historical data playback
- checkpoint recovery for power failures
- a standalone cleanup microservice
- a cross-service ZMQ communication layer across six dedicated ports
- a session identity context system
The saga orchestrator alone had a five-step cleanup protocol: WebSocket recycle → worker completion wait → active user cleanup → test session cleanup → resource release. The process manager ran its own state machine (STARTING → RUNNING → RECYCLING → STOPPED → ERROR) with health checks and automatic recovery. The checkpoint recovery system could detect interrupted backtests on server restart by checking MySQL status flags, recover state from Redis, and re-queue everything.
All gone. Listing it out made me a little dizzy.
Claude 3.5 to 4.6
Using Claude to write code has been a love-hate relationship.
In the 3.5 era, it cost at most $100 a month. When the model had a dumb day, I honestly wasn't sure which would come first: my rage destroying my computer, or a heart attack destroying me. I seriously considered both outcomes.
I wrote in to yell at customer support.
Then I ran into a billing bug: after requesting a refund, the system showed I'd unsubscribed. Except I hadn't. A month later it started charging again. I successfully canceled, tried every other coding tool on the market, realized I still needed Claude, and resubscribed. Successfully canceled again. Got auto-resubscribed anyway.
Long story short, it didn't kill me. I stuck with it all the way to 4.6.
A domain name and a new project
After gutting the codebase, I tried to register a domain name for the project. Didn't get it.
But that failure got me thinking about something: if in the future everyone is running multiple AI entities, how do those entities discover each other? They need addressable identities. They need to be findable, trustable. They might even need a reputation system to support collaboration. It's like Windows 95 and the internet — before that, regular people didn't have an on-ramp to the web. AI entities need that kind of infrastructure layer too.
So I started ClawNexus — an identity registry for OpenClaw instances. It discovers AI agent instances on the network, gives each one a persistent name, and tracks capabilities, trust scores, and online status.
AI picked the popular choice. The popular choice didn't work.
By version 0.3, ClawNexus had over 300 unit tests and 5 integration tests — the integration tests were for multi-node discovery across a WireGuard VPN, with nodes at 10.66.66.x addresses.
Claude chose mDNS for discovery, which is standard practice. But in my WireGuard environment, Claude kept quietly skipping certain cross-machine test cases.
I sat it down for a serious conversation: why can't this supposedly universal protocol handle all the scenarios? It couldn't give me a straight answer.
Here's the thing. I have this habit at home — I set up LAN games of StarCraft with friends. We use IPX-over-UDP via ipxwrapper. And that setup has always worked perfectly over WireGuard.
So I asked Claude directly: why not use the IPX protocol? It pushed back. Said we didn't need it.
Fine. I made it implement those skipped test cases using IPX anyway.
That evening:
- 10:35 PM — started adding IPX support
- 11:00 PM — cross-machine tests running
- 11:25 PM — all cases passing
Afterward I asked Claude to explain why it had originally chosen mDNS over IPX.
The explanation sounded perfectly reasonable. It just had nothing to do with the system I was actually running.
300 unit tests weren't enough
This is a habit from my C days.
Even with 300+ unit tests all passing, I still went through every command-line parameter by hand.
Found several that weren't doing anything at all.
Claude 4.6's explanation sounded completely correct.
But I couldn't accept it.
I adjusted my approach after that: I didn't just have AI write code and tests — I baked my manual testing process into its workflow too. The problem never came back.
Three days after launch, a copycat appeared
Three days after publishing ClawNexus to npm, a post on social media led me to a GitHub repo with the exact same name.
This is where Claude's breadth of knowledge showed up. It helped me investigate the other project's origins, the problems in their implementation, the flaws in their design philosophy. Then in a very short time it helped me shore up every position I needed to hold.
I wouldn't have gotten through any of that on my own.
The Trident and The Green Ox
Why do the immortals in mythology ride mounts?
Laozi rode a green ox through Hangu Pass. Guanyin rides the golden-haired hou. Samantabhadra's mount is a six-tusked white elephant. The ox is faster than walking, but it needs someone on top who knows where to go.

AI is that ox: it writes code faster than me, searches information faster than me, analyzes competitors faster than me. But it didn't know that mDNS doesn't work well over WireGuard. It didn't know whether to write or cut 90% of the code. It even managed to run me in circles for 14 days.
So why does Poseidon also carry a trident?
Because some situations demand certainty.
AI sometimes guesses. Same question, different answers. Rephrase it slightly, different answer again.
Code doesn't do that. compare_exchange_strong (a C++ atomic operation) does exactly what the standard promises, every single time you call it. C++ template metaprogramming, Python's deterministic pipelines, Rust's type system: those are tridents. Same input, same output, every time. They pick up where AI's guesswork leaves off.
Think about it: tools like Claude Code and Codex are themselves an ox wielding a trident. The AI generates, the code tools execute deterministically, and somewhere in the middle there's supposed to be a human making sure none of it goes sideways.
The world needs the speed of the ox.
The certainty of the trident.
And most of all — the people who can ride one and wield the other, and survive every moment when things fall apart.