Last week, I hit a personal milestone that forced me to rethink how we define developer productivity. In just three days, I opened a Pull Request containing 10,000 lines of code.
To be clear: this wasn’t boilerplate garbage. It was fully functional, battle-tested code, complete with unit tests. When I looked at that number, I realized two things:
- I could never have done this manually.
- The definition of "coding" has fundamentally changed.
There is a narrative circulating in our industry right now that AI coding assistants don't actually increase productivity—that they just generate bugs faster. I strongly disagree. AI massively amplifies productivity, but with a caveat: your output is no longer limited by your typing speed, but by your attention span.
The ceiling of your productivity is now defined by how many AI agents you can effectively supervise, review, and guide simultaneously. We are transitioning from writers of code to architects of logic and managers of synthetic intelligence.
If you are an engineer or a founder, here is how I’ve adapted my workflow to survive and thrive in this new reality, along with the specific rules and tools I use to stay ahead.
The Philosophy: Acceptance vs. Obsolescence
Whether we like it or not, the future has arrived. The role of the programmer as a "writer of syntax" is shrinking rapidly.
There is a segment of the industry currently in denial—people rejecting these tools out of fear, skepticism, or a bad experience three months ago (which, in AI time, is a decade). The harsh reality is that the "human-only" loop is becoming uncompetitive.
Humans aren't going anywhere, but the required competency is shifting. We are moving away from syntax memorization toward AI orchestration. The sooner you accept this and start mastering the tools, the more secure your professional future will be.
The Practical Guide: How to Actually Control Agents
Using AI agents effectively isn't about typing "build me a website" into a chatbox. It requires a rigorous methodology. Over years of trial and error (and being an early adopter of tools like Cursor), I’ve distilled my process down to two non-negotiable rules.
Rule #1: Set Strict Boundaries (The AGENTS.md Method)
The biggest mistake developers make is assuming the agent "knows" the context. It doesn't. And worse, no agent will stop to ask you for boundaries; it will just guess, often incorrectly.
You must create a "box" for the AI to work within.
I enforce this by maintaining a file often called AGENTS.md in my repositories. This file contains:
- The Tech Stack: Pinned versions of languages and frameworks.
- Methodology: How we handle errors, logging, and state management.
- Structural Know-How: Patterns specific to this project that a generic model wouldn't know.
- Constraints: What the agent is forbidden from doing.
This file acts as a reusable instruction manual. Major players are now adopting this pattern (see https://agents.md), allowing you to standardize the AI's behavior across the team. Never let an agent touch your code without first ingesting these boundaries.
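To make this concrete, here is a minimal sketch of what such a file might look like. The stack, folder names, and rules below are placeholders invented for illustration; your own file should encode your project's actual conventions:

```markdown
# AGENTS.md

## Tech Stack
- TypeScript 5.x, Node 20, React 18 (do not introduce other frameworks)
- PostgreSQL via Prisma; no raw SQL in application code

## Methodology
- Errors are returned through our Result helper; never throw across module boundaries
- All logging goes through src/lib/logger.ts; no console.log in committed code

## Structural Know-How
- Feature code lives under src/features/<name>; shared utilities only in src/lib
- State management is Zustand; do not add Redux

## Constraints
- Never modify files under migrations/ or .github/
- Never add a dependency without flagging it in the PR description
- Do not touch authentication code without explicit instruction
```

The point is not the specific rules but that they are written down once and ingested at the start of every session, instead of being re-explained (or forgotten) in every prompt.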
Rule #2: Be Verbose and Concrete (The "Vacation" Metaphor)
I have found that ambiguity is the enemy of AI performance. To get SOTA (State of the Art) results, I treat the agent like a junior engineer I am handing a task to before I leave for a two-week vacation without internet access.
If I don't explain the edge cases, the prerequisites, and the potential pitfalls now, the task will fail.
To achieve this level of detail without slowing down, I use Voice Dictation.
Instead of typing short, vague prompts, I dictate comprehensive instructions. I explain:
- The exact context and goal.
- The prerequisites.
- The boundary conditions.
- Anything I would tell a human colleague to ensure they don't mess up while I'm gone.
Modern flagship models are excellent at parsing semantic meaning from long, unstructured spoken rambles. A one-page prompt generated from 2 minutes of speaking will yield infinitely better code than a two-sentence typed command.
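To illustrate the difference, here is a hedged, made-up example; the feature, limits, and file references are invented purely to show the level of detail I aim for when dictating:

```text
Vague (what most people type):
"Add rate limiting to the API."

Verbose (what I dictate):
"Add rate limiting to the public REST endpoints only, not the internal admin
routes. We already use Redis for sessions, so reuse that connection instead of
adding a new dependency. Limits are per API key, 100 requests per minute,
returning HTTP 429 with a Retry-After header. Edge cases: requests without an
API key fall back to a stricter anonymous limit, and health-check endpoints
are exempt. Add unit tests for the limit, the reset window, and the exempt
routes. Do not change the existing middleware order."
```

The second version takes about two minutes to speak and removes almost every decision the agent would otherwise have guessed at.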
The Tooling Landscape: What to Use Right Now
The market is flooded, and loyalty to a single tool is a weakness. You must use what is currently State of the Art. Here is my current loadout:
The Models:
We are seeing a golden age of inference.
- The Heavy Hitters: OpenAI Codex Max, Google Gemini 3 Pro, and Anthropic Claude 4.5 Opus/Sonnet.
- The Challengers: Do not sleep on Chinese open-source and proprietary models. Kimi K2 (Thinking), GLM 4.6, and Minimax M2 are catching up rapidly and offer incredible performance per dollar.
The IDEs:
- Cursor: The current king of AI-native editors.
- Windsurf & Zed: Excellent alternatives focusing on speed and flow.
- Google Antigravity: A new, promising IDE built on top of VS Code.
- Kiro IDE: A strong contender for specific workflows.
- Visual Studio Code (Classic): If you prefer the vanilla experience, you must augment it. I recommend the Claude Code extension, GitHub Copilot (which has become surprisingly powerful and cost-efficient recently), and plugins for Codex.
- JetBrains Ecosystem: If you are locked into IntelliJ/PyCharm, look at their internal agent Junie, or bridge the gap with the Claude Code SDK.
Infrastructure & Terminal:
- Inference Providers: For speed and alternative models, I look to Cerebras (insanely fast inference) and Zhipu AI (z.ai) coding plans. OpenRouter is essential for aggregated access (see the short sketch after this list).
- Terminal: I use Warp. It’s an intelligent terminal that integrates seamlessly with this workflow.
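What "aggregated access" buys you: OpenRouter exposes an OpenAI-compatible endpoint, so one client can reach models from many vendors by swapping a model string. A minimal Python sketch, assuming an OPENROUTER_API_KEY environment variable; the model slug is an example, so check OpenRouter's catalog for exact IDs:

```python
import os
from openai import OpenAI  # OpenRouter speaks the OpenAI-compatible API

# One client, many vendors: point the OpenAI SDK at OpenRouter's endpoint.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed env var, for illustration
)

# Swapping the model string routes the same request to a different provider.
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4.5",  # example slug; verify against the catalog
    messages=[{"role": "user", "content": "Summarize the constraints in AGENTS.md."}],
)
print(response.choices[0].message.content)
```

This is why tool loyalty is a weakness: when a new model tops the benchmarks, switching is a one-line change rather than a new integration.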
The Verdict
If you tried coding with ChatGPT six months ago and were unimpressed, your data has expired. The rate of evolution here is staggering.
Don't be the developer who refuses to use the power drill because they are "good enough" with a screwdriver. The "Code Typist" is dying, but the "System Architect" has never been more powerful.
Set your boundaries, speak to your agents, and don't be afraid to switch tools as the landscape shifts. The future belongs to those who adapt.