Michael Masterson

Posted on • Edited on • Originally published at m2s2.io

Using Claude to Move Faster Without Cutting Corners

There's a version of AI-assisted development that looks like this: you describe a feature, the AI writes the code, you paste it in, it mostly works, you move on. That's one way to use it. It's not the most valuable way.

I use Claude daily — for architecture decisions, code review, writing, debugging, and building features. The productivity gains are real, but they don't come from offloading thinking. They come from having a capable collaborator available at every point in the process, one that never gets tired of context, never judges a half-formed question, and can hold an entire codebase in mind while you work through a problem together.

Here's what that actually looks like in practice.

Claude as a thinking partner, not an autocomplete engine

The frame that changed how I use AI tools: Claude isn't a faster way to type. It's a faster way to think.

When I'm designing a new system, I'll describe the constraints and ask for pushback on my assumptions before writing a line of code. When I'm debugging something subtle, I'll walk through what I know and ask what I might be missing. When I'm about to make an architectural decision, I'll ask for the tradeoffs I haven't considered yet. None of that is code generation — it's structured thinking with a collaborator who has broad context and no ego.

This distinction matters because developers who treat AI as autocomplete get autocomplete-level value. Developers who treat it as a thinking partner get something closer to having a senior engineer always in the room.

What actually works

Architecture and design. Before writing code, I describe what I'm building — the constraints, the scale, the tradeoffs I'm aware of. Claude is good at surfacing tradeoffs I haven't considered and poking holes in approaches that seem solid but have hidden failure modes. This has saved me from several decisions that would have been expensive to reverse.

Boilerplate and scaffolding. Claude handles the mechanical parts of development quickly and correctly: setting up a new Lambda handler, wiring a new route, writing a test harness. This isn't the interesting part of the work, and not spending mental energy on it leaves more for the parts that matter.
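To make "mechanical" concrete, here's a minimal sketch of the kind of scaffolding I mean: wiring a route and exercising it with an in-process harness. This isn't code from the project; it uses only Go's standard library, and the names (healthHandler, probeHealth) are hypothetical.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// healthHandler is the kind of scaffolding that takes no real thought:
// a route that returns a JSON status payload.
func healthHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
}

// probeHealth wires the route into a mux and exercises it in-process,
// without binding a real port -- a minimal test harness.
func probeHealth() (int, string) {
	mux := http.NewServeMux()
	mux.HandleFunc("/health", healthHandler)

	req := httptest.NewRequest(http.MethodGet, "/health", nil)
	rec := httptest.NewRecorder()
	mux.ServeHTTP(rec, req)
	return rec.Code, strings.TrimSpace(rec.Body.String())
}

func main() {
	code, body := probeHealth()
	fmt.Println(code, body)
}
```

None of this is hard, and none of it is interesting. Having it drafted in one shot, reviewed rather than typed, is where the time savings come from.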

Code review and refactoring. I'll share a function and ask what I'm missing — edge cases, error conditions, performance issues, readability problems. It catches things I'm too close to see. It's not infallible, but the hit rate is high enough that I treat it as a mandatory step before pushing anything significant.

Writing. Documentation, commit messages, technical explanations, posts like this one. Having a collaborator who can help sharpen an argument or tighten a paragraph without changing the voice is genuinely useful.

I don't vibe code

There's a style of AI-assisted development that's become popular — describe what you want, accept whatever comes out, repeat. I don't work that way.

My process is closer to: prompt, plan, review. Before any significant piece of work, I'll use Claude's plan mode to lay out the approach — what files change, what the logic looks like, what the edge cases are. I read the plan. I push back on parts that don't look right. I ask questions. Only once I understand and agree with the direction do I let it execute.

The same applies to output. Every piece of code Claude produces gets reviewed before it lands. Not a quick skim — an actual read. I want to understand what it did and why. If I can't explain it, I ask until I can. This isn't distrust, it's ownership. Code I don't understand is a liability I'm carrying.

The developers who get burned by AI tooling are usually the ones who skipped this step. The ones who get consistent value treat the AI output as a strong first draft from a collaborator — not a finished answer from an authority.

The skill that actually determines your results

Prompting is a real skill, and it compounds. The developers getting the most out of Claude aren't the ones with the most AI experience — they're the ones who communicate precisely, provide relevant context, and ask specific questions instead of vague ones.

"Fix this function" gets worse results than "This function is supposed to do X, it's currently doing Y in edge case Z, here's what I've already tried." The more precisely you can describe what you know, what you want, and what constraints matter, the better the output.

The fastest way to improve: treat every unsatisfying response as a prompt quality problem first. Reframe the question, add context, be more specific about what you actually need. You'll get better at it quickly.

The meta point

The platform this blog runs on — the Angular frontend, the Go Lambda API, the CDK infrastructure — was built with Claude as a constant collaborator. Not generated wholesale, but developed in a genuine back-and-forth: I set the direction and made the decisions, Claude helped me move faster and caught things I missed. The result is code I understand completely and can maintain confidently.

That's the model I'd advocate for. AI in service of your judgment, not a replacement for it.

At M²S² Engineering Group, we build with the best tools available — including AI. If you're figuring out how to integrate these workflows into your team's practice, let's talk.



Top comments (1)

Jill Mercer
using claude as a thinking partner is the only way i've found to keep context drift from killing a project. cursor has more features than i know what to do with—still finding my way around it. moving fast is easy—shipping something that doesn't fall apart in a month is the real trick. austin taught me: just start the thing.