Mangabo Kolawole
I think you should let AI write your code

I was skeptical about AI coding when it started gaining traction in 2022. Then I changed my mind: I went from writing most of my code by hand to having AI generate 90% of it. And honestly, I like the velocity.

In 2023, I was generating approximately 40% of my code using ChatGPT. The experience was frustrating. I'd spend more time explaining my codebase to ChatGPT than actually coding. The LLM would give me generic solutions that barely fit my project's structure. I mainly copied and pasted boilerplate and tweaked values.

However, ChatGPT excelled at backend work. When I built my first Python package, drf-api-key, ChatGPT handled about 60% of the work. It figured out Fernet encryption, structured the code properly, and saved me hours of research.
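
For context, Fernet (from Python's cryptography package) is symmetric encryption with a deliberately small API. Here's a minimal sketch of the kind of usage involved, just to illustrate; it's not the package's actual code:

from cryptography.fernet import Fernet

# Generate a secret once and keep it in settings or an env var, never in code
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt an API key before storing it, decrypt it when validating a request
token = fernet.encrypt(b"my-plain-api-key")
assert fernet.decrypt(token) == b"my-plain-api-key"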

The real breakthrough came during my 8-month sabbatical, when I discovered Cursor. It helped me quickly spin up boilerplate startup backends and handle infrastructure work. Those first sessions felt like magic.

Cursor understood existing codebases in ways ChatGPT never could. It could read patterns, maintain consistency, and actually improve code instead of just generating it.

But even Cursor had limits. It struggled with newer frameworks and couldn't access updated documentation (this is now possible with Context7). I'd still write configurations manually because Cursor would generate completely wrong setups.

Then November 2024 happened. MCP servers launched, Anthropic improved their coding models, Claude Code arrived, and Gemini 2.5 turned out to be remarkably efficient. Now 90% of my code comes from AI tools, and I've found ways to use them beyond just writing functions.

If you're not fully leveraging AI for coding yet, here's why you should start.

Before we dive in: I write about AI-assisted development, building products, and software engineering. If you want more content like this, subscribe to my newsletter.

AI coding feels like magic, but you are the magician

After 12 months of using AI for coding and watching other developers struggle with it, I've realized something uncomfortable: your experience with AI coding is a mirror of how good you actually are as a software engineer.


And I'm not just talking about your coding skills. I'm talking about everything:

  • If you're not methodical, your instructions to the AI won't work. The AI will generate garbage because your requirements are garbage.
  • If you don't understand your own codebase, you can't guide the AI to maintain consistency. You'll get code that works in isolation but breaks everything else.
  • If you skip preparation and planning, the AI will build the wrong thing perfectly. You'll waste hours debugging solutions to problems you never properly defined.
  • If you're impatient and want instant results, you'll give up after the first failed prompt instead of iterating toward the right solution.
  • If you have weak code review habits, you'll ship AI-generated bugs because you assumed the AI got it right.

I started with "magic eyes," thinking the AI should know what I wanted. That led to frustration: I'd prompt the LLM once, get mediocre results, then blame the tool.

The reality: AI agents are like junior developers who are extremely fast but need clear direction. You wouldn't tell a junior dev "make this feature work" and walk away. You'd explain the codebase, show them patterns, and give specific requirements.

The 80/20 rule applies perfectly here. Software engineering is generally 80% preparation, 20% coding. Most developers skip the preparation and jump straight to prompting. Then they wonder why the AI generates garbage.

My workflow flips this: I use AI to help with the 80% (planning), which makes the 20% (coding) trivial.

When I get a task, I don't jump straight to coding. I involve the AI in the preparation phase. I give it context about the problem, the codebase, and the constraints. Then I let it help me think through the approach by drafting a detailed plan.

I let the agent draft detailed PRDs (Product Requirements Documents) for me. I don't write the PRD myself – I provide the agent with context, and it creates the plan. This forces me to think through what I actually need, and the agent structures it in a way that makes the implementation obvious.

My workflow:

  1. Give the agent your ticket description, Figma designs, and documentation.
  2. Have it draft a PRD and implementation plan.
  3. Review and refine the plan together.
  4. Let the agent cook, and iterate on the work.

PRD drafting workflows

The preparation phase is where you're actually being the magician. You're teaching the AI what you need, how you think, and what good looks like. The coding becomes almost mechanical after that.

The tools that changed everything for me

My current AI coding stack:

Cursor

My primary coding environment. It excels at understanding large, existing codebases.

I use Claude Sonnet 4.5 for actual coding and Gemini 2.5 Pro for PRD drafting and debugging complex issues. Cursor still struggles in some areas, such as large-scale refactoring across multiple files: it can change the code, but it doesn't always consider the broader architectural implications.

MCP Servers

This is the game-changer for 2025. MCP (Model Context Protocol) connects LLMs to external tools and APIs, giving them real capabilities beyond text generation.

My MCP setup:

  • Figma MCP - The agent reads designs directly and understands component structure.
  • Context7 - Gives agents access to up-to-date documentation for any framework.
  • AWS CloudWatch, Sentry, Resend - For infrastructure monitoring and notifications.

Claude Code

It handles infrastructure and deployment work.

I keep a CLAUDE.md file in my project root that defines my infrastructure standards and deployment workflow. Here's what the top looks like:

# Deployment Workflow
- Create timestamped database backup in /root/.backups-db/
- Pull latest changes from git (main branch)  
- Rebuild and restart Docker containers
- Run Django migrations and collect static files
- Verify all containers are healthy and services respond

## Critical Rules
- Never expose secrets in code or commits
- Always backup before deployment
- Use placeholders in documentation

When I deploy, Claude Code reads this file and follows the workflow exactly. It backs up the database, deploys the necessary updates, runs health checks, and emails me a report.

I also have a cron job running health checks every 6 hours. The entire observability pipeline is managed by Claude Code and markdown instructions.

Where AI excels (and where it struggles)

Backend: AI's sweet spot

LLMs handle backend development exceptionally well. APIs, database schemas, business logic: these are structured problems with clear patterns, which is precisely where AI excels.

The key is to give your AI agent complete context about your codebase. Document your patterns, coding standards, and architecture decisions. When the agent understands how you structure projects, it maintains consistency across features.

For integrations, I feed the API documentation to my coding agent. It reads the docs, understands the authentication flow, and implements the integration. If the vendor provides an MCP server (like Sentry does), the agent can research implementation patterns directly. Otherwise, Context7 gives agents access to documentation for any framework.

The caveats

Always review what the agent generates. It might create code that works but doesn't scale, or overengineer simple tasks when explicit constraints haven't been set.

Tell the agent what to do, but more importantly, what NOT to do. Otherwise, it tends to overcomplicate things and generate unnecessary solutions. Be specific about your constraints: "Use the existing authentication middleware, don't create a new one," or "Keep this under 50 lines."

Frontend: Great, but not that great yet

Frontend development with AI is complicated. I code frontend 5x faster now, but the risk of creating unmaintainable code is higher.

My approach

I start small. I have the agent build individual components, not entire features. Build a button component, understand how it works, then compose it into larger structures.

For complex features, I use Figma's MCP server to provide the agent with visual context. But don't expect it to understand complete design systems from a single screenshot. Break designs into components and implement them piece by piece.

Frontend with AI requires more iteration than backend work. You need to review the generated code carefully and be willing to refine your prompts along the way. If that sounds tedious, stick to building components manually.

Infra: Surprisingly powerful

This surprised me the most. AI agents excel at infrastructure tasks.

When I ask Claude Code or Cursor to create VPCs, configure security groups, or write CloudFormation templates, it gets the details right more often than I do manually. Infrastructure is complex, and even experienced engineers rarely get deployments right on the first try.
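
To make that concrete, here's the kind of script an agent typically produces for a VPC plus a locked-down security group. Treat it as a sketch: the names and CIDR ranges are illustrative, and it assumes boto3 with AWS credentials already configured.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC and tag it so it's easy to find later
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
ec2.create_tags(
    Resources=[vpc["VpcId"]],
    Tags=[{"Key": "Name", "Value": "demo-vpc"}],
)

# Security group that only allows HTTPS in
sg = ec2.create_security_group(
    GroupName="demo-web",
    Description="Allow HTTPS only",
    VpcId=vpc["VpcId"],
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)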

What I've actually built

An automated trading system (7 hours)

I've been studying market movements since 2018 and recently took courses on algorithmic trading. The real work in algo trading is developing your own strategy: understanding indicators, backtesting parameters, and finding what actually works in live markets. That takes months of focused study.

But I wasn't going to wait 6 months to start trading.

Here's my approach: build the infrastructure first (money management, execution, monitoring), use third-party signals as training wheels, then swap in my own algorithm once I've developed it.

The problem with building trading infrastructure manually: the sheer number of integrations. Telegram API, message parsing, MetaAPI execution, email monitoring, database tracking, and deployment automation. Each one could take days to implement correctly.

With AI, I built it all in 7 hours. Now, while the system runs with external signals, I'm studying technical indicators and developing my own strategy. When I'm ready, I just plug in my algorithm, and the entire infrastructure is already battle-tested.

I'm building the boring parts now so I can focus on the interesting part: the actual trading strategy.

What it does:

  • Reads Telegram messages from trading channels.
  • Parses signals into the format I want using an OpenAI agent (sketched after this list).
  • Validates trades before execution.
  • Places trades automatically via MetaAPI.
  • Monitors positions and adjusts stop-loss/take-profit levels.
  • Sends health reports every 6 hours.
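
Here's a simplified sketch of the parse-and-validate step, assuming the OpenAI Python SDK. The prompt, the model name, and the place_trade() stub (MetaAPI handles execution in the real system) are illustrative, not my production code:

import json
from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


@dataclass
class Signal:
    symbol: str
    side: str  # "buy" or "sell"
    entry: float
    stop_loss: float
    take_profit: float


def parse_signal(raw_message: str) -> Signal:
    # Ask the model to turn a free-form Telegram message into strict JSON
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": "Extract symbol, side, entry, stop_loss, take_profit as JSON."},
            {"role": "user", "content": raw_message},
        ],
    )
    return Signal(**json.loads(response.choices[0].message.content))


def validate(signal: Signal) -> bool:
    # Reject obviously broken signals before any money moves
    if signal.side == "buy":
        return signal.stop_loss < signal.entry < signal.take_profit
    if signal.side == "sell":
        return signal.take_profit < signal.entry < signal.stop_loss
    return False


def place_trade(signal: Signal) -> None:
    # Placeholder for the MetaAPI execution call
    print(f"Placing {signal.side} on {signal.symbol} at {signal.entry}")


signal = parse_signal("GOLD BUY @ 2315, SL 2300, TP 2350")
if validate(signal):
    place_trade(signal)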

My trading application architecture

How AI helped

Cursor built the Django backend (PostgreSQL + Celery) in a few hours. I provided the architecture documents and requirements, and it generated the API, database models, and background tasks.
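
For a sense of what "generated the API, database models, and background tasks" looks like, the output is ordinary Django and Celery code. A trimmed, illustrative sketch, not my actual schema:

from celery import shared_task
from django.db import models


class TradeSignal(models.Model):
    # Raw Telegram message plus the parsed fields the executor needs
    raw_message = models.TextField()
    symbol = models.CharField(max_length=20)
    side = models.CharField(max_length=4, choices=[("buy", "Buy"), ("sell", "Sell")])
    entry = models.DecimalField(max_digits=12, decimal_places=5)
    executed = models.BooleanField(default=False)
    created_at = models.DateTimeField(auto_now_add=True)


@shared_task
def execute_pending_signals():
    # Scheduled by Celery beat; actual order placement lives in its own service
    for signal in TradeSignal.objects.filter(executed=False):
        ...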

Claude Code handles deployments and monitoring. I have a cron job that runs health checks:

claude --dangerously-skip-permissions --print "Perform a health check for the trading application and send an email report. Read /root/MONITORING_EMAIL_PROMPT.md to understand the format and information required. Use the sender email onboarding@resend.dev and recipient koladev32@gmail.com. Important: Approve all file read operations and email sending automatically."

Here's an example email I received:

Email received from Claude Code

Claude Code reads the monitoring instructions, checks system status, and emails me detailed reports. The entire observability pipeline runs without manual intervention.

I even built an MCP server hosted on Gram. I just needed my OpenAPI document with the endpoints I wanted to expose, and Gram generated the MCP server.

Gram Toolsets

Now I can monitor or place trades directly from Claude Desktop.

Claude Desktop response

What I had to fix

Letting Claude Code handle deployments created some interesting issues. I receive signals from Telegram using Telethon, which creates a session file. On each deployment, that file was being deleted, causing my cron jobs to fail when trying to read messages.

It turns out that Claude Code was cleaning the file to maintain a clean state. Lost about an hour debugging that one. My solution: move the session file outside the project directory and copy it in during container builds.
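
If you hit the same problem, the fix is just to point Telethon at a session file that lives outside the repo. A minimal sketch; the environment variable names and the path are my own choices, not anything Telethon requires:

import os
from telethon import TelegramClient

# Keep the session outside the project so deploy-time clean-ups can't delete it
SESSION_PATH = os.environ.get("TELEGRAM_SESSION_PATH", "/root/.telegram/trader")

client = TelegramClient(
    SESSION_PATH,  # Telethon appends .session to a string path automatically
    int(os.environ["TELEGRAM_API_ID"]),
    os.environ["TELEGRAM_API_HASH"],
)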

Other issues I hit:

  • Hallucinated functions: Cursor invented validation functions that don't exist. I had to force it to read the actual docs through Context7 and fix the code.
  • Wrong environment config: Claude set USE_SQLITE=true in production, so my data was going to a file instead of PostgreSQL. Thankfully, I run backups before every deploy and restored the data (see the settings guard sketched after this list).
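
A small guard in settings.py makes that class of mistake much harder to repeat. A sketch of the pattern, assuming the same USE_SQLITE flag plus a DJANGO_ENV variable of my own naming (BASE_DIR is the usual value defined at the top of settings.py):

import os

USE_SQLITE = os.environ.get("USE_SQLITE", "false").lower() == "true"

# Fail fast if someone flips the flag in production
if USE_SQLITE and os.environ.get("DJANGO_ENV") == "production":
    raise RuntimeError("USE_SQLITE must not be enabled in production")

if USE_SQLITE:
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": BASE_DIR / "db.sqlite3",
        }
    }
else:
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": os.environ["POSTGRES_DB"],
            "USER": os.environ["POSTGRES_USER"],
            "PASSWORD": os.environ["POSTGRES_PASSWORD"],
            "HOST": os.environ.get("POSTGRES_HOST", "db"),
            "PORT": os.environ.get("POSTGRES_PORT", "5432"),
        }
    }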

MCP configuration generator (45 minutes)

I built a lot of MCP servers and got tired of manually writing configuration files for different tools.

MCP configuration generator project

I gave Cursor the MCP specification and examples of configs I'd written. It generated a tool that outputs valid configurations for any MCP server I need. 45 minutes of work that saves me 15-20 minutes every time I start a new project.
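
The core of the generator fits in a few lines. Here's a simplified sketch that emits the mcpServers block that Claude Desktop and Cursor currently read; the server name and command below are just examples:

import json


def mcp_config(name: str, command: str, args: list[str], env: dict | None = None) -> dict:
    # Build the standard client configuration block for one MCP server
    entry = {"command": command, "args": args}
    if env:
        entry["env"] = env
    return {"mcpServers": {name: entry}}


config = mcp_config(
    name="context7",
    command="npx",
    args=["-y", "@upstash/context7-mcp"],
)
print(json.dumps(config, indent=2))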

The rules I never break

I use AI for everything: building POCs for technical writing, developing backends and frontends, and deploying infrastructure. Here are the two rules I always follow:

Rule 1: Understand the project, codebase, and requirements deeply

You can't give good directions if you don't know where you're going. AI agents amplify your understanding; they don't replace it. If you don't understand your architecture, your coding agent will generate code that works today but breaks tomorrow.

Rule 2: Never give AI agents destructive permissions

I configure IAM policies so my coding agents can create and update resources, but never delete them. When something goes wrong (and it will), the agent will try several solutions first. But after multiple failed attempts, it defaults to the nuclear option: delete and recreate. That's how you lose data.
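
In AWS terms, that rule is an explicit Deny on destructive actions attached to the agent's IAM user. A sketch with boto3; the user name is illustrative and the action list is trimmed to a few examples:

import json
import boto3

NO_DELETE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDestructiveActions",
        "Effect": "Deny",
        "Action": [
            "ec2:Delete*",
            "ec2:TerminateInstances",
            "rds:Delete*",
            "s3:DeleteBucket",
            "s3:DeleteObject",
        ],
        "Resource": "*",
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="coding-agent",
    PolicyName="deny-destructive-actions",
    PolicyDocument=json.dumps(NO_DELETE_POLICY),
)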

I make one exception: sudo access on servers for infrastructure automation. It's risky but necessary for deployment scripts. Never in production, though; there I create specific user roles and scope their permissions.

Final thoughts

Writing code has never been my favorite part of software engineering. Building solutions to problems is what I care about. AI lets me spend more time on that and less time on boilerplate.

I spent 7 hours building a trading system that would've taken me weeks manually. Now I spend my time refining the strategy, not debugging Django models. That's what AI coding actually gives you: less busywork, more focus on the stuff that matters.

Pick something you've been putting off because it feels like too much setup work. Let AI generate the structure. Then make it yours. You'll figure out pretty quickly where AI excels and where you need to step in. The tools are here. The question is whether you're willing to change how you work.

And if you're already using AI to write a significant portion of your code, please share your experience in the comments: your approaches, tools, and what you've learned.

If you enjoyed this article and want more insights like this, subscribe to my newsletter for weekly tips, tutorials, and stories delivered straight to your inbox!

Resources

Tools mentioned in this article:

  • Cursor – AI-powered code editor.
  • Claude Code – Infrastructure automation with Claude.
  • Context7 – Documentation access for AI agents.
  • MCP Protocol – Model Context Protocol specification.
  • Gram – MCP server hosting.
