Nikolai Noskov

I Don't Write Code Anymore. My Income Tripled. Here's the System Behind It.

Note: This article was originally written in Russian and adapted for an international audience. The original version is on Habr.

Let me start with something that might get me roasted: I don't read code. I don't write it either, except for occasional debug logs. Everything I ship is generated by LLMs.

Before you close this tab, hear me out. This isn't another "AI will replace developers" hot take. It's the opposite, actually.

Over the last six months, my income nearly tripled. I brought on two junior developers. I'm handling bigger projects than ever. And paradoxically, I'm doing more engineering work than before, not less.

People keep calling this "vibe coding." I hate that term. What I do requires more discipline, more architectural thinking, and more systematic rigor than when I wrote every line myself.

Let me show you the system.


Quick background

I'm a freelance developer, two years in. Mostly Telegram bots and mini-apps; sometimes landing pages, smart contracts, or pentests. I work through freelance platforms (think Upwork/Fiverr equivalents).

Six months ago, I went all-in on LLM-assisted development. Not as an experiment. As my entire workflow.

Here's what happened:

  • Income: ~3x growth
  • Team: Found 2 junior devs who now work the same way
  • Project size: Went from small gigs to multi-month enterprise projects
  • Client satisfaction: Not a single complaint about quality

But getting here wasn't straightforward. I burned through a lot of failed approaches first.


The evolution: why chat-based AI didn't scale

Phase 1: The naive approach

Like everyone else, I started using ChatGPT for isolated tasks. Generate a function here, debug an error there. It worked... for small stuff.

As projects grew, this fell apart fast. I was constantly copy-pasting context, reminding the AI what files existed, re-explaining project structure. It was like having a brilliant colleague with amnesia who needed everything re-explained every 10 minutes.

Phase 2: Cursor, better but still broken

Switching to Cursor (an AI-native IDE) solved the context problem. The AI could see my files. Great.

New problem: project structure started drifting. The AI would fix one thing and break another. Old bugs came back. Without explicit guidance, the codebase slowly turned into spaghetti.

That's when it clicked: the AI needed constraints. It needed a source of truth that wasn't the code itself.


The system: documentation as foundation

Here's the insight that changed everything: documentation isn't just for humans anymore. It's the primary interface between you and the AI.

I now create three layers of documentation for every project:

Layer 1: Client-facing specs

Stages broken down by what the client can actually see and touch. Not "backend first, frontend second": clients can't evaluate a backend. Instead: "User authentication flow," "Dashboard with live data," "Export feature."

Each stage has a price and timeline. Transparent. Clients love it.
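To make that concrete, here's roughly how one stage entry might read (the feature, price, and timeline here are made up for illustration):

```markdown
## Stage 2: Dashboard with live data

**What you see:** a dashboard inside the bot showing your cabinet stats,
refreshed every 5 minutes. Click through it, poke at it, verify it works.

**Price:** $400 | **Timeline:** 1 week
```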

Layer 2: Technical specs

Architecture decisions, data flows, integration points. Written in plain language, no code examples. This matters.

Why no code? Because code will change constantly. If your documentation contains code snippets, it becomes outdated the moment the AI adapts something. You end up with conflicting sources of truth.

Everything described in words. If I can't explain it without code, I don't understand it well enough.

Layer 3: Stage-specific deep dives

For each stage, I create ~4 documents:

  • Stage objectives and mechanics
  • Error log (more on this later)
  • API documentation (external services we're hitting)
  • Data structures and sources

For a 2-month project with 5 stages, I end up with ~20 markdown files. Sounds like overkill? It's not. This is what keeps the AI, and my own memory, on track.

Here's what a component checklist looks like in practice:

```markdown
## Implemented Components

### Backend:
- `Cabinet` model with `spreadsheet_id` field
- `ExportService` with table update methods
- API endpoints for saving and updating
- Celery tasks for automatic export
- Google Sheets API integration

### Bot:
- "Export to Google Sheets" button in main menu
- Table binding handler
- FSM states for workflow
- Manual table updates
- User instructions

### Infrastructure:
- Celery Beat for scheduled runs
- Service Account configured
- Environment variables set
- Logging for all operations
```

The AI references this constantly. When it drifts, I point it back here.
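For a sense of how one checklist line maps to real infrastructure: the "Celery Beat for scheduled runs" item could correspond to something like this (a sketch; the broker URL, schedule, and task names are my illustrative assumptions, not the project's actual config):

```python
# Sketch: scheduled export via Celery Beat.
# Broker URL, schedule, and task names are illustrative.
from celery import Celery

app = Celery("bot", broker="redis://localhost:6379/0")

# Celery Beat reads this schedule and enqueues the task every hour.
app.conf.beat_schedule = {
    "export-cabinets-hourly": {
        "task": "tasks.export_all_cabinets",
        "schedule": 3600.0,  # seconds
    },
}

@app.task(name="tasks.export_all_cabinets")
def export_all_cabinets():
    # Would iterate over cabinets that have a bound spreadsheet_id
    # and push fresh data to Google Sheets.
    ...
```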


The secret weapon: error documentation

This one technique probably saved me dozens of hours.

When a bug doesn't resolve in two or three iterations and I see the AI starting to loop (trying the same fixes, going in circles), I stop and create an error document.

```markdown
## Bug: Export fails silently on large datasets

### Known facts:
- Works fine for <1000 rows
- Fails without error message for >5000 rows
- Memory usage spikes before failure

### Attempted solutions:
- Increased timeout (no effect)
- Batch processing with 500-row chunks (partial fix, still fails at 8000+)
- Streaming approach (testing now)

### Hypotheses:
- Memory limit on serverless function?
- Google Sheets API rate limiting?
```

This document becomes the reference point. The AI won't suggest already-failed solutions because they're documented. I won't forget what we tried. The loop breaks.

I keep these even after fixing the bug. They save time on future projects with similar issues.
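For illustration, the "batch processing with 500-row chunks" attempt from that doc might look like this (a sketch assuming gspread with a service account; every name here is illustrative):

```python
# Sketch of the 500-row-chunks attempt logged in the error doc.
# Assumes gspread with service-account credentials in the default location.
import gspread

CHUNK_SIZE = 500

def export_in_chunks(spreadsheet_id: str, rows: list[list]) -> None:
    gc = gspread.service_account()
    worksheet = gc.open_by_key(spreadsheet_id).sheet1
    # Append in small batches so one giant request can't blow up memory
    # or hit the Sheets API payload limits.
    for start in range(0, len(rows), CHUNK_SIZE):
        chunk = rows[start:start + CHUNK_SIZE]
        worksheet.append_rows(chunk, value_input_option="RAW")
```

Per the doc, this was only a partial fix, and that's exactly the kind of detail the error log preserves.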


TDD: your safety net when you don't read code

Here's my time breakdown on a typical project:

  • 5-10% actual code generation (AI does this fast, but it's always buggy)
  • 25% writing documentation
  • 15% writing tests
  • 50% fixing errors, manual testing, client feedback

Notice something? Tests are the second source of truth, alongside documentation.

Documentation describes what should happen. Tests verify what actually happens. When these two align, I have confidence the code works, even though I haven't read it line by line.

TDD isn't new. But with LLMs, it becomes necessary. You're not reviewing every line of code. Tests catch what your eyes don't.
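Here's a minimal sketch of what that looks like, assuming a hypothetical `ExportService` shaped like the one in the checklist earlier (the module, constructor, and result object are all made up for illustration):

```python
# test_export.py -- tests assert the documented behavior,
# not the implementation details of the generated code.
import pytest
from export_service import ExportService  # hypothetical module

@pytest.fixture
def service():
    return ExportService(spreadsheet_id="test-sheet-id")

def test_small_dataset_exports_completely(service):
    rows = [{"id": i} for i in range(100)]
    assert service.export(rows).exported_count == 100

def test_large_dataset_survives_batching(service):
    # The spec (and the error doc) say >5000 rows must work.
    rows = [{"id": i} for i in range(8000)]
    assert service.export(rows).exported_count == 8000

def test_empty_dataset_is_a_noop(service):
    assert service.export([]).exported_count == 0
```

If the docs say "exports must handle 8000 rows" and this test passes, I don't need to read the implementation to trust it.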


Context management: the art of fresh starts

LLMs degrade over long conversations. They start hallucinating, forgetting earlier decisions, contradicting themselves. You've seen this.

My solution: aggressive context compartmentalization.

  • New stage = new chat/agent
  • New feature within a stage = new chat
  • New bug = new chat
  • Architecture decisions = dedicated "architect" chat

Creating new chats is painless when you have documentation. Just feed the relevant .md files to a fresh agent, and it's up to speed in seconds.
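With an API-driven agent, "feed the relevant .md files" can be as small as this (a sketch using the anthropic Python SDK; the file names, model id, and task wording are illustrative):

```python
# Sketch: boot a fresh agent with only the docs that matter for this stage.
# Assumes ANTHROPIC_API_KEY is set; file names and model id are illustrative.
from pathlib import Path
from anthropic import Anthropic

STAGE_DOCS = [
    "docs/stage3_objectives.md",
    "docs/stage3_errors.md",
    "docs/api_google_sheets.md",
    "docs/data_structures.md",
]

context = "\n\n".join(Path(p).read_text() for p in STAGE_DOCS)

client = Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute whatever model you use
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": f"Project docs:\n\n{context}\n\n"
                   "Task: implement the manual table-update handler "
                   "described in the stage objectives.",
    }],
)
print(response.content[0].text)
```

In Claude Code or Cursor, the same move is just pointing the fresh session at those files. The principle is identical: small, curated context beats one long, degraded conversation.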

The loop-breaking technique:

When an AI gets stuck in a loop, it's usually because it's churning through the same context over and over. The fix isn't better prompting; it's injecting new context.

Options:

  1. Google the problem, paste in a Stack Overflow answer or docs page
  2. Ask a different LLM for suggestions, then feed those to your main chat
  3. Throw in a wild hypothesis; even if it's wrong, it can shake loose new thinking

The more atomic your work units, the less this happens. Small, focused tasks = fewer catastrophic loops.


My current toolstack

For documentation and planning:

  • ChatGPT, Claude, Grok: I use whatever's available
  • Long conversations to hash out specs before touching code
  • Output: .md files that become project foundation

For implementation:

  • Claude Code (current favorite): works well in the terminal, and its usage limits reset regularly
  • Cursor + Gemini CLI: Gemini is free but loops more often; sometimes it just... gives up mid-task
  • Cursor with Claude Sonnet for complex stuff, auto-select for simple tasks

Hot take on LLM spending:

If you know how to work with LLMs effectively, don't cheap out. My rough numbers: an 80% increase in LLM costs accompanied a 60% increase in income. Other factors contributed, sure, but the correlation is real.

Good tools pay for themselves fast.


What still sucks

Let's be honest about the limitations.

Smart contracts, especially TON: LLMs struggle hard here. Not enough training data, too many edge cases. I still have to dig through documentation manually and find examples myself.

TypeScript vs Python: Models are noticeably weaker with TypeScript. More hallucinated types, more incorrect generics.

The 1-in-5 problem: About 20% of my projects hit these rough spots. They all reach release eventually, but some require significantly more manual intervention.

My workarounds:

  • Keep documentation from old projects, reuse solutions that worked
  • Maintain a personal library of code patterns for tricky areas
  • Know when to stop fighting the AI and just write the code yourself

The team angle

I found two juniors who now work this way. One is still ramping up. The other has been with me for three months and it's working great.

Funny thing: at first he was embarrassed about using LLMs. Felt like "cheating." Then I showed him the documentation system, the testing discipline, the context management. He realized this isn't easier; it's just different.

Now he thinks like an engineer and architect, not a code typist. That's the shift.

What clients think: Not once has a client asked if I use AI. I don't hide it, but they don't care. They care that the product works, it's delivered on time, and the quality is solid. All true.


The real skill: engineering hasn't gone anywhere

Let me be clear: this methodology doesn't work if you don't understand development.

You still need to know:

  • How databases actually work
  • How requests flow through a system
  • Why one architecture is better than another for a specific use case
  • What trade-offs you're making

LLMs don't replace this knowledge. They leverage it. The AI is an amplifier: if you feed it garbage thinking, you get garbage code at scale.

The difference? I'm no longer a human compiler, translating logic into syntax. I'm a full-time architect who happens to have a very fast (and somewhat unreliable) junior dev typing for me.

If you don't understand what's happening under the hood, no amount of documentation or Claude subscription will save you. The system works because there's an engineer behind it.


So why isn't this "vibe coding"?

"Vibe coding" implies casual, low-effort, go-with-the-flow development. Vibes.

What I actually do:

  • Maintain 20+ documentation files per project
  • Write tests for code I haven't read
  • Constantly manage and reset AI context
  • Architect systems at a higher level than ever before
  • Debug through systematic elimination, not intuition

This requires more concentration than traditional coding, not less. The mental overhead (keeping documentation accurate, asking the right questions, catching AI drift before it compounds) is exhausting in a different way.

You know what actual vibe coding is for me now? When I have free time, no deadlines, and I just... write code. By hand. For fun. Feel the keystrokes. Watch the logic flow.

That's become the luxury. The meditative escape from real work.

Traditional coding went from "the job" to "the hobby."

I'm not sure how to feel about that yet.


TL;DR

  1. Documentation is your contract with the AI: no code examples, just clear descriptions of logic and architecture
  2. TDD isn't optional: tests are your eyes when you're not reading code
  3. Compartmentalize aggressively: new task = new chat, feed it the relevant docs
  4. Document your errors: break loops by creating a reference point for failed solutions
  5. You need engineering fundamentals: AI amplifies competence, including incompetence
  6. Invest in good tools: if you're effective with LLMs, premium subscriptions pay for themselves

Let's connect

I write more about LLM-assisted development and building a freelance dev business on my Telegram channel: @post_devcore (content is in Russian, but I'm considering English posts too).

What's your experience with AI-assisted development? Have you found workflows that actually scale, or is it still chaos? Drop a comment, I'm genuinely curious how others are handling this.


First time posting on dev.to; feedback on the article itself is also welcome!
