This is my first post.
Not because I had nothing to say — I've been sitting on this for a while. But I don't write unless I have something worth reading. If you've been building software long enough, you know the difference between content and noise. I had no interest in adding noise.
Now I have something worth saying.
I hit a wall not long ago. The kind of wall where you question whether you're actually good at this job or just very good at pretending.
I was leading multiple engineering squads across different time zones. Reviewing PRs in Go in the morning, debugging PHP after lunch, writing TypeScript before dinner, and untangling Kubernetes configs at night. Every context switch felt like rebooting my entire brain.
I was using AI tools — same as everyone. Paste code, ask a question, copy the answer. It helped, but not enough. I was still drowning.
Then I remembered something.
I've been working with AI since before most people knew it existed — back when it was still an API, not a chatbot. Before the hype, before the Twitter threads, before everyone became a "prompt engineer" overnight. Back then, you had to actually think about how to communicate with a machine that had no memory and no context.
Years of that kind of thinking changes how you see these tools.
So I stopped using AI the way everyone uses it. I went back to first principles. And I built something.
I'm not going to tell you what it is. But I'll tell you what it did.
Before
The week that almost broke me: a tech lead on one of our squads resigned mid-sprint. Overnight I inherited a codebase I'd never seen, with business rules nobody documented, and a deployment pipeline held together with hope.
I was supposed to be reviewing PRs by Monday.
AI couldn't help. Not because it wasn't smart — it was. But smart without context is just noise. Ask it to review code and you get generic advice. "Consider error handling." Great, thanks. We have 200+ custom error types, but sure, let me consider error handling.
It didn't know us. It didn't know our patterns. It didn't know what good code looked like in THIS project.
Most engineers accept that. They shrug and move on.
I don't accept limitations well. Never have.
Something changed
I can't tell you exactly what I built. Partly because the how isn't the interesting part — the thinking behind it is. And partly because if I gave you the blueprint, you'd try to copy it, and it wouldn't work.
Not because it's complicated. It's embarrassingly simple, actually. But simple in the way that a chess master's opening move looks simple to a beginner. The move isn't the skill. The decades of pattern recognition behind the move is.
Here's what I will tell you:
The gap between how most engineers use AI and how it CAN be used is enormous. Not 10% better. Not 2x. I'm talking about a fundamental shift in what the tool becomes when you approach it differently.
Most people treat AI like a search engine with attitude. I turned mine into something else entirely. Something that knows my codebases, my team's patterns, my deployment pipelines. Something that catches the real bugs — the ones that are specific to how WE build software, not how Stack Overflow builds software.
The results (this part I'll share)
Code review: dramatically faster. Not because I skip things — because the first pass actually catches pattern violations that matter to our team. Not generic advice. Specific, contextual, useful catches.
Context switching: nearly gone as a problem. I still write four languages in a day and each one stays in its lane. The cross-contamination stopped. I don't write Go-flavored TypeScript anymore.
Onboarding: that codebase I inherited? I was reviewing PRs like a veteran by the next morning. A teammate asked how long I'd been on the project. "Since yesterday." They thought I was joking.
And the strangest part — my system accidentally became the best onboarding documentation our team has ever had. New engineers who encountered it got productive in days instead of weeks. Nobody had to explain anything.
What it's NOT
Let me save you some rabbit holes.
It's not fine-tuning. I'm not training custom models. I don't have GPU clusters in my closet.
It's not a chatbot. It's not an app. It's not something I can package and sell you (well — I could, but the packaging isn't the value).
It's not prompt engineering, at least not the way most people think about it. If you're still thinking in terms of "better prompts," you're thinking too small.
It's something in between. Something that lives in the gap between how these tools work and how engineers actually work. A gap that almost nobody is exploring because they're too busy arguing about whether AI will replace developers.
Spoiler: it won't. But it will separate engineers who know how to wield it from those who just use it.
Why I'm telling you this without showing you
Because I want you to think.
Not "think about how to copy this." Think about the actual problem:
Why does your AI assistant give you generic advice when it's clearly smart enough to be specific?
What is it actually missing?
And if you could give it what it's missing... what would that change?
Sit with those questions. The answer isn't hidden in a framework or a tool or a repo. It's hidden in how deeply you understand your own work.
Every engineer I've shown this to had the same reaction. First: "That's it? That's embarrassingly simple." Then, five minutes later: "Wait — why didn't I think of this?"
Because you were thinking like a user. Not like an engineer.
Engineers don't just use tools. They reshape them. They bend them. They make them do things the manufacturer never intended.
Start there. If you're good enough, you'll figure it out. And when you do — you'll wonder why it took you this long.
I'm a tech lead managing distributed engineering squads across Southeast Asia. I've been bending AI tools since before they were mainstream.
This was my first post. I don't plan on making it my last.