Learn AI Resource

The AI Coding Workflow That Finally Clicked

I tried using AI for coding for six months. Mostly failed.

Not because the tools were bad. Because I was using them wrong.

I'd ask ChatGPT to "write me a function" and get code that looked right but broke in weird ways. I'd paste errors into Claude and get explanations that didn't actually fix anything. I'd generate entire components that I had to rewrite anyway.

Then I changed one thing about how I used AI, and suddenly it actually worked.

The shift: Stop asking AI to write code. Start using it to think through problems.

What I Was Doing Wrong

Before: "Write me a React component that fetches user data and displays it in a table with sorting."

What I got: 200 lines of code that:

  • Used old syntax
  • Had no error handling
  • Didn't match my existing patterns
  • Broke when I tried to integrate it

What I did next: Spent 30 minutes fixing it. Could've written it from scratch in 20.

That's not productivity. That's just extra steps.

The Workflow That Actually Works

I stopped treating AI like a code generator. Started treating it like a senior developer I could rubber duck with.

Step 1: Explain the Problem First

Instead of asking for code, I explain what I'm building.

Bad prompt:

```
Write a function to validate email addresses
```

Good prompt:

```
I'm building a signup form. Need to validate emails before sending to API.
Requirements:
- Block obviously fake emails (test@test.com)
- Allow + in email addresses
- Should feel fast (no external API calls)
- Needs to work with our existing form validation library (Yup)

What should I consider before implementing this?
```

What AI gives me:

  • Edge cases I forgot (internationalized domains, subaddresses)
  • Tradeoffs (regex vs library)
  • Security considerations
  • Actual recommendations

Now when I write the code myself, I avoid 5 bugs I would've hit later.

Time saved: 2 hours of debugging next week.
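For what it's worth, the function I ended up with looked roughly like this. A sketch, not gospel: the blocklist and regex are my own choices, made after that conversation.

```javascript
// Obviously fake addresses to reject outright (my own list; extend as needed)
const FAKE_EMAILS = new Set(["test@test.com", "example@example.com"]);

function validateEmail(email) {
  // Conservative shape check: local@domain.tld, with "+" allowed in the local part
  const pattern = /^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$/;
  if (!pattern.test(email)) return false;
  return !FAKE_EMAILS.has(email.toLowerCase());
}
```

It's synchronous, so it satisfies the "no external API calls" requirement, and it drops into a Yup schema via `.test()`.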

Step 2: Use AI for Architecture, Not Implementation

Ask "how should I structure this?" before asking "write this for me."

Example: Building a file upload feature with progress tracking.

My question:

```
I need to add file uploads to our app. Users upload large files (100MB+),
need progress bars, should handle failures gracefully, and allow resume.

What's the best architecture for this? What are the gotchas?
```

What AI tells me:

  • Use chunked uploads (didn't know that was a thing)
  • Need backend presigned URLs (security)
  • Client-side: track chunks, handle retries per chunk
  • Consider libraries (Uppy, Resumable.js)
  • Suggests specific AWS S3 multipart upload pattern

What I do: Read the recommendations. Pick Uppy. Read their docs. Implement with their patterns. Done in 2 hours instead of spending a day figuring out chunked uploads from scratch.

AI didn't write the code. It showed me the path. I walked it.
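To make "chunked uploads" concrete, here's the core idea as a tiny helper. This is my own illustration, not Uppy's API: split the file's byte range into fixed-size pieces, then upload, retry, or resume each piece independently.

```javascript
// Compute the byte ranges for a chunked upload. Each range becomes one
// request; a failed range can be retried without re-sending the rest.
function chunkRanges(fileSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push({ start, end: Math.min(start + chunkSize, fileSize) });
  }
  return ranges;
}

// A 100 MB file with 5 MB chunks becomes 20 independent requests,
// which is also what makes per-chunk progress and resume possible
const ranges = chunkRanges(100 * 1024 * 1024, 5 * 1024 * 1024);
```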

Step 3: Debug by Asking "Why" Not "Fix"

When something breaks, don't paste the error and beg for a solution.

Bad:

```
Getting error: "Cannot read property 'map' of undefined"
Fix this code: [paste 100 lines]
```

Good:

```
Getting "Cannot read property 'map' of undefined" on this line:
  data.users.map(user => ...)

The API returns: { success: true, data: { users: [...] } }
My code expects: data.users

Why is 'users' undefined? What am I missing about the response structure?
```

What AI does:

  • Points out I'm accessing data.users but should access data.data.users
  • Explains the nested structure
  • Suggests adding console.log to verify
  • Recommends optional chaining: data?.data?.users?.map()

What I learn: The actual mental model of the problem. Next time I see similar issues, I fix them myself in 10 seconds.
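The fix, condensed, using the same response shape as the example above:

```javascript
// The API wraps the payload in a "data" key, so the users array is one level deeper
const response = { success: true, data: { users: [{ name: "Ada" }] } };

// Wrong: response.users is undefined, so .map throws
// Right: go through the nested "data" key, with optional chaining as a guard
const names = response?.data?.users?.map((user) => user.name) ?? [];
```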

Step 4: Use AI for Code Review

After writing code, I paste it and ask for review.

My prompt:

```
Review this function for bugs, edge cases, and improvements:

[paste code]

Context: This runs in a Next.js API route and handles user authentication.
```

What AI catches:

  • Missing error handling
  • Race condition with async operations
  • Suggests using try/catch
  • Points out I'm not validating input
  • Recommends rate limiting (didn't even think about it)

Time saved: Catching bugs before they hit production.
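As a rough illustration of what those reviews change, here's the shape of an "after" version. The handler and the `authenticate` callback are made up for the example: validate input up front, wrap the async work in try/catch, and don't leak the raw error.

```javascript
// Sketch of a reviewed auth handler: input validation and try/catch added.
// "authenticate" is injected as a callback so the logic stays testable.
async function loginHandler(body, authenticate) {
  if (!body || typeof body.email !== "string" || typeof body.password !== "string") {
    return { status: 400, error: "email and password are required" };
  }
  try {
    const user = await authenticate(body.email, body.password);
    return { status: 200, user };
  } catch {
    // Don't expose the underlying failure to the client
    return { status: 401, error: "authentication failed" };
  }
}
```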

Step 5: Learn Patterns, Not Solutions

When AI shows me something new, I don't just copy it. I ask why.

Example: AI suggested using AbortController for fetch requests.

My follow-up:

```
Why use AbortController here? What problem does it solve?
When should I NOT use it?
```

What I learned:

  • Cancels in-flight requests when component unmounts
  • Prevents memory leaks in React
  • Avoids updating state on unmounted components
  • Not needed for server-side requests

Now I recognize when to use it. That pattern becomes part of my toolkit.
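Outside React, the pattern boils down to this standalone demo. In a component you'd call `controller.abort()` from the effect's cleanup function instead; `cancellableDelay` here just stands in for a slow request.

```javascript
// Minimal AbortController demo: start slow work, cancel it, observe AbortError
function cancellableDelay(ms, signal) {
  return new Promise((resolve, reject) => {
    if (signal.aborted) {
      return reject(new DOMException("aborted", "AbortError"));
    }
    const id = setTimeout(resolve, ms);
    signal.addEventListener("abort", () => {
      clearTimeout(id); // stop the pending work
      reject(new DOMException("aborted", "AbortError"));
    });
  });
}

const controller = new AbortController();
const pending = cancellableDelay(1000, controller.signal);
controller.abort(); // the equivalent of the component unmounting mid-request
```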

What Changed

Before: Ask AI to write code → get messy code → spend time fixing → slower than doing it myself

After: Ask AI to explain concepts → understand the problem better → write cleaner code → catch issues early → actually faster

The difference is subtle but huge.

Real Example: API Integration

I needed to integrate with a payment API (Stripe).

Old approach:

  • "Write Stripe checkout integration"
  • Get 400 lines of example code
  • Copy-paste
  • Doesn't work because it's generic
  • Spend 2 hours debugging

New approach:

Me: "I'm integrating Stripe checkout. Need to handle subscriptions, multiple price tiers, and trial periods. What's the recommended architecture?"

AI: Explains webhook pattern, suggests using Stripe checkout sessions, recommends storing customer IDs in DB, warns about webhook replay attacks.

Me: Reads Stripe docs based on recommendations. Implements step by step. Asks AI specific questions when stuck ("Why does Stripe recommend using webhook secrets?").

Result: Working integration in 3 hours with proper security. Understand how it works. Can debug it myself.

The Pattern

  1. Explain context → get better advice
  2. Ask for architecture → understand structure
  3. Write code yourself → maintain quality
  4. Use AI for review → catch bugs early
  5. Ask "why" → learn patterns

AI is not a code generator. It's a thinking partner.

Stop trying to skip the learning. Use AI to learn faster.

Tools I Actually Use

For quick questions: ChatGPT (GPT-4) or Claude

For code context: GitHub Copilot (I accept or reject its inline suggestions; I don't have it generate features from scratch)

For exploration: Perplexity (search + AI combined)

For review: Paste into Claude with full context

What I don't use: AI to write entire features. That never works.

Try This Tomorrow

Next time you're about to ask AI to "write me a function," stop.

Instead, ask:

  • "What should I consider when building this?"
  • "What's the standard pattern for this problem?"
  • "What are common mistakes people make here?"

Then write the code yourself.

Use AI's knowledge. Keep your craftsmanship.

That's the workflow that finally clicked.

Want More Like This?

I write about real AI workflows, productivity techniques, and developer tools that actually work.

Subscribe to LearnAI Weekly — practical tips delivered every week. No fluff, just stuff you can use immediately.


What's your AI coding workflow? Still generating full functions or have you found something better? Drop a comment.
