I genuinely believe programming languages are becoming assembler — and I don't mean that metaphorically.
C abstracted assembly. C++ abstracted C. JavaScript abstracted memory management. Each layer let humans think one level higher. The same leap just happened — and the new abstraction layer is English.
The language I work in now is English. Whether the target is TypeScript, Rust, or Python — source code is a lower-level artifact generated from higher-level intent. I write close to 0% of code by hand. I write documentation and sophisticated prompts.
I explored this in depth with a full walkthrough of the AfterPack case study. Here I want to focus on what actually changed in the workflow and what it means for anyone in software — whether you write code, lead teams, or make architecture decisions.
The Mental Model
Every AI tool boils down to one equation:
Input + Context → LLM → Output
The quality of output is directly proportional to the quality of input. I've watched the same LLM give a junior mediocre code and a senior production-ready architecture. The difference was entirely in the context provided. This applies to everyone — whether you're prompting for code, architecture decisions, or product strategy.
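A minimal sketch of that equation. Nothing here is a real LLM API — `build_prompt` is a hypothetical helper that just shows how the senior's context (architecture, constraints, a definition of "good") enriches the input before it ever reaches the model:

```python
# Hypothetical sketch: same task, same model — the difference is the
# context assembled into the input. build_prompt is not a real library
# call; it only illustrates Input + Context -> LLM input.

def build_prompt(task: str, context: list[str]) -> str:
    """Assemble Input + Context into the actual text sent to the LLM."""
    sections = ["## Context"] + [f"- {c}" for c in context]
    sections += ["## Task", task]
    return "\n".join(sections)

# A junior's prompt: just the task, no context.
junior = build_prompt("Add caching to the API", context=[])

# A senior's prompt: the same task, plus architecture, constraints,
# and what "good" looks like.
senior = build_prompt(
    "Add caching to the API",
    context=[
        "Existing stack: FastAPI + Postgres, Redis already deployed",
        "Constraint: cache invalidation must be event-driven",
        "Definition of done: p95 latency under 50 ms",
    ],
)
```

Both prompts go to the same model; only the second one gives it a chance to produce production-ready architecture.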
The Full Lifecycle Changed
If you're only using AI for code autocomplete, you're capturing maybe 10% of what's possible.
Ideation and Research: Pressure-test ideas across multiple models (Claude, Gemini, GPT — trained on different data, different blind spots). Competitive landscape analysis. Demand validation with Google Ads keyword data. What used to take a team weeks now takes hours.
Planning and Architecture: This is where I spend the most time — significantly more than on implementation. Load context into the LLM: existing architecture, constraints, business requirements. Ask for multiple approaches. Have it argue against its own recommendations. Planning mode is king — days in planning, then execution in ~30 minutes. By the time I get to QA, it mostly just works — if the planning was right.
Design: I skip Figma for many use cases now. Give the LLM your existing UI and design system — 15 minutes later, new pages match your patterns. Pro tip: Gemini Pro creates better initial visuals, but Claude handles long-term code evolution far better.
Implementation: The fastest part of the cycle now. Refactoring 100+ files to a new design pattern can be one-shot in 10 minutes — previously a week of tedious work.
Testing and QA: AI writes tests, runs them, interprets failures, fixes code. But on real products, you spend most time here — verifying outputs. Taste and judgment are still yours.
Your Brain Goes TikTok
Here's what nobody warned me about. When AI handles implementation this fast, you become the bottleneck. So you try to keep up — I run 3-5 Claude terminals in parallel. One is auditing a codebase, another implementing an API, a third writing product specs. You're constantly jumping between contexts, reviewing outputs, catching mistakes.
It's incredibly brain-intensive. Coding used to be more relaxing — you'd hold one idea for 10-20 minutes, type it out, debug, move on. Now you're a rapid context-switcher. I joke that TikTok was training us for AI-assisted engineering all along.
The Bottleneck Is Human Synchronization
Your role becomes judge, quality controller, architect deciding what to build and why. But the deeper bottleneck isn't you individually — it's human synchronization. Meetings, handoffs, miscommunication, waiting for approvals. When execution takes 30 minutes, spending 3 days aligning on requirements becomes the dominant cost.
Who thrives now:
- Solo builders shipping entire products alone
- Small teams with clear ownership and fewer meetings
- Senior generalists who can be PM, PO, QA, and engineer all in one
One person carries the work that previously required a team. For larger orgs, you still need role separation — but each person's blast radius is dramatically larger.
The AfterPack Story
What this looks like in practice: I'm building AfterPack — a Rust-based JavaScript obfuscator. I spent three weeks on core architecture without writing a single line of Rust.
- Documentation first. 20+ spec files. Different AI agents worked on different parts — no Rust, just careful system design.
- JavaScript prototyping. Prototyped obfuscation transforms in JS first. Multiple iterations. Agents tried to break the prototypes and suggested improvements.
- Adversarial testing. Claude Code running overnight: "keep iterating until a fresh Claude instance with no context can't deobfuscate the output."
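The overnight loop in that last bullet can be sketched like this. Both `obfuscate` and `try_deobfuscate` are hypothetical stand-ins for the real agent calls — one agent hardens the obfuscator each round, a fresh agent with no context tries to reverse the output:

```python
# Hypothetical sketch of the adversarial loop. Neither function is a
# real API — they stand in for LLM agent calls in the real workflow.

def obfuscate(source: str, strength: int) -> str:
    """Stand-in for the hardening agent; strength grows each round."""
    return f"<obfuscated:{strength}>{source[::-1]}"

def try_deobfuscate(payload: str, skill_ceiling: int = 3) -> bool:
    """Stand-in for a fresh, context-free attacker agent.

    Returns True if the attacker successfully reverses the payload.
    In this toy model the attacker fails past a fixed skill ceiling.
    """
    strength = int(payload.split(":")[1].split(">")[0])
    return strength <= skill_ceiling

def adversarial_iterate(source: str, max_rounds: int = 10) -> int:
    """Keep hardening until a context-free attacker can't reverse it."""
    for strength in range(1, max_rounds + 1):
        candidate = obfuscate(source, strength)
        if not try_deobfuscate(candidate):
            return strength  # this round survived the attack
    raise RuntimeError("attacker won every round; rethink the design")

rounds_needed = adversarial_iterate("function hello() {}")
```

The real version replaces the stubs with agent invocations and runs unattended; the structure — harden, attack with a clean context, repeat until the attack fails — is the same.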
I'm shipping production Rust — a language I'd barely touched before this project. My actual working language is English: I focus on architecture, data structures, and algorithms, and the LLM handles the Rust.
The previous generation of JavaScript obfuscators took 2-3 years. I'm shipping something more advanced in a fraction of that time.
Working Effectively with AI
- Context is everything. Write documentation before anything else. What you want, why, constraints, what "good" looks like.
- Start in planning mode. Multiple approaches. Model argues against itself. Prototype in a simpler language first.
- Use multiple models. Claude, Gemini, GPT — different training data, different perspectives. Critical for planning and review.
- Own the architecture. AI implements any architecture you describe. Your job is describing the right one — that's the part that won't be automated anytime soon.
Where to Start
- New to software: get an AI assistant, describe what you want in plain English. You'll learn from watching it reason.
- Not using AI yet: build your next feature entirely AI-assisted. Planning, implementation, testing — all of it.
- Already using AI daily: focus on context quality. Better documentation, more planning mode.
- Team lead or exec: the biggest productivity gains aren't in individual speed — they're in reducing coordination overhead. Rethink whether every process and meeting is still necessary when execution is this fast.
Originally published on nikitaeverywhere.com
