
Tack k


From ChatGPT to Claude Code — How My AI Workflow Evolved Over 2 Years of FiveM Dev

It started with a frustration

Two years ago, I wanted to build custom scripts for my FiveM roleplay server. I knew exactly what I wanted — a custom billing system with quantity-based invoicing, a police reward distribution tool, an in-game arcade with real playable games, vending machines that could track stock and revenue by job.

The vision was clear. The problem was execution.

I don't write Lua. I don't write JavaScript. I'm a systems designer — I think in data flows, user interactions, and edge cases. The "how it should work" part comes naturally. The "typing the actual code" part does not.

So I turned to AI. And what followed was two years of learning what these tools could and couldn't do — and watching that ceiling get higher every few months.


Phase 1 — ChatGPT (2 years ago)

ChatGPT was the obvious starting point. Everyone was talking about it. I figured if it could write essays and answer questions, it could write Lua.

It could. Sort of.

The code it produced had frequent errors. FiveM-specific APIs, QBCore patterns, the quirks of how server-side and client-side Lua communicate — it got a lot of it wrong. Not always. But enough that every session felt like a battle.
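To make those quirks concrete: FiveM splits every resource into server-side and client-side Lua that can only talk to each other through events, and early ChatGPT output routinely confused the two sides. A minimal sketch of the pattern (the event and item names here are illustrative, not from any of my actual scripts):

```lua
-- client.lua: runs on each player's machine. It cannot touch the
-- database or other players directly, so it asks the server via an event.
RegisterCommand('buysnack', function()
    TriggerServerEvent('vending:buyItem', 'snack')
end)

-- server.lua: runs once for the whole server. `source` is the player
-- who fired the event. Mixing these sides up -- calling TriggerServerEvent
-- from server code, or trusting client-supplied values -- was exactly the
-- kind of mistake I kept pasting back into the chat window.
RegisterNetEvent('vending:buyItem', function(item)
    local src = source
    print(('player %s bought %s'):format(src, item))
end)
```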

The workflow looked like this:

  1. Describe what I want
  2. Get code pasted into the chat window
  3. Manually copy it
  4. Create the file myself
  5. Test it, hit an error
  6. Paste the error back into chat
  7. Get a fix
  8. Hit a new error
  9. Repeat

Sometimes steps 6-8 would loop five or six times before something actually worked. And every time I started a new chat, the context was gone. I'd have to re-explain what the script was supposed to do, what framework I was using, and what had already been tried.

I stuck with it for about a year. Not because it was great — because there wasn't a better option I trusted. I don't use Google products as a rule, so Gemini was never a consideration for me. And for a while, GPT was simply the only serious game in town.

During that year I still managed to build things. The early versions of the billing system, some basic job scripts. But the process was exhausting. The ratio of "time thinking about the problem" to "time wrestling with AI output" was not in my favor.


Phase 2 — Claude desktop app (~10-12 months ago)

When I switched to Claude, the difference showed up almost immediately.

The error rate dropped. Claude seemed to actually understand what I was building — not just the current request, but the context around it. When I said "this needs to work with QBCore's job system," it didn't just nod along and produce generic code. It produced code that actually accounted for how QBCore structures job data.
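For example, "accounting for how QBCore structures job data" means knowing that a player's job is a nested table under PlayerData, not a flat string field. A sketch of the kind of server-side check I mean (the police payout logic here is illustrative, not my actual reward script):

```lua
-- server-side: get the QBCore core object exported by qb-core
local QBCore = exports['qb-core']:GetCoreObject()

RegisterNetEvent('rewards:claim', function()
    local Player = QBCore.Functions.GetPlayer(source)
    if not Player then return end

    -- QBCore nests job info: name, grade, and on/off-duty status all
    -- live under PlayerData.job, not as top-level fields.
    local job = Player.PlayerData.job
    if job.name == 'police' and job.onduty then
        Player.Functions.AddMoney('bank', 500, 'police-reward')
    end
end)
```

Generic AI output would check something like `player.job == "police"` and break immediately; code that follows QBCore's actual structure works on the first test far more often.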

But the change I noticed most wasn't about code quality. It was about output format.

Claude could produce properly structured files — not just code blocks pasted into a chat window, but actual file-ready output I could use directly. That might sound like a small thing. It wasn't. The copy-paste-create cycle that had been eating time every single session started to shrink. The feedback loop got tighter. I could describe something, get a working file, test it, and move on.

This was the point where something shifted in how I thought about AI. Before, it felt like using a very powerful search engine that could also write code. After Claude, it started feeling more like working with a collaborator — something that could hold context, reason about problems, and produce results I could actually use.

I started giving Claude names. Opus became "おぷちゃん." Sonnet became "そねちゃん." Not just as a quirk — but because the relationship felt different. These weren't tools I was operating. They were teammates I was working with.

The motto I built my whole approach around: AI to tomo ni — working alongside AI, not just using it.


Phase 3 — Claude Code (more recently)

Then Claude Code arrived and changed everything again.

The jump from GPT to Claude desktop was significant. The jump from Claude desktop to Claude Code was a different category of change entirely.

With the desktop app, I was still doing the file work myself. Claude would produce the code, I would take it and put it where it needed to go. That was already a massive improvement over the GPT era. But there was still a gap between "AI produces output" and "work actually gets done."

Claude Code closes that gap.

I give it a path. It goes there. It reads the existing files, understands what's already been built, figures out where the new code fits. It makes the modifications. It creates new files. It builds folder structures. It handles the whole thing — not just the code, but the actual act of putting the code in the right place.

The first time I used it, I remember thinking: I've never seen anything like this.

Before Claude Code:

  • Describe what I want
  • Get code in a chat window
  • Copy it manually
  • Create the file myself
  • Handle folder structure myself
  • Come back to chat for the next piece
  • Repeat

After Claude Code:

  • Describe what I want
  • Done

That's not an exaggeration. The scripts in this entire series — the billing system, the police reward distribution tool, the in-game arcade, the admin management tools, the vending machine system — were all built with this workflow. I design the system, work through the edge cases in my head, describe it clearly, and Claude Code handles implementation.

It's not about saving keystrokes. It's about where your attention goes. When the implementation work is handled, I can focus entirely on the part that actually requires human judgment: what should this system do, how should players experience it, what happens when things go wrong, what did I miss.

That's the work I want to be doing. That's the work I'm actually good at.


What changed — and what didn't

Two years of AI-assisted development taught me a few things.

The tools got dramatically better. The gap between GPT two years ago and Claude Code today is enormous. Anyone who wrote off AI coding tools based on early experiences should take another look.

The bottleneck shifted. Early on, the bottleneck was AI quality — you'd spend most of your time fighting errors and unclear output. Now the bottleneck is specification quality. The clearer and more precise your description of what you want, the better the result. Garbage in, garbage out still applies — it's just that the bar for "good input" is now about clarity of thinking, not technical expertise.

The collaboration model matters. I've seen people treat AI like a vending machine — type a request, get output, complain when it's wrong. That's not how I work. I treat it like a teammate. I push back when something looks wrong. I explain the reasoning behind what I want. I ask for alternatives. The quality of what comes back is directly related to the quality of how you engage.

You don't need to write code to build real things. This might be the most important one. If you're a designer, a product thinker, or someone who understands systems but has never learned to code — the tools exist now to let you build real, working software. The gap between "I have an idea" and "this is running on my server" has never been smaller.


What's next

This series documented two years of building custom FiveM scripts. Five volumes, five scripts, one significant evolution in how I work.

But FiveM was never the end goal. It was the practice ground.

The same workflow — design-first, AI-implemented, human-reviewed — is what I'm bringing to web applications, PWAs, and freelance projects through my business Tack and K. The tools are the same. The philosophy is the same. The scale is just different.

If you've been following this series and want to see what comes next, follow me here on Dev.to.

And if you're a developer — or a non-developer who wants to build things — and you have questions about this workflow, drop a comment. I'm happy to talk through it.


"AI to tomo ni" — working alongside AI, not just using it.
