Micah Breedlove (druid628)

AI Didn’t Make Me Faster. It Made Me Think Better.

I wrote recently about some concerns I have with how our industry is starting to treat AI.

This isn’t that.

Because the truth is, AI has also made me better at what I do. Not faster. Not necessarily more productive in the way people like to measure it.

Better.

And the more I’ve thought about it, the closest analogy I’ve found isn’t in software at all. It’s in the forge.

When I was learning blacksmithing, I wasn’t just being shown how to hit steel. I was learning how to think about it.

How heat changes behavior. How structure forms over time. How small decisions early on show up later in ways you can’t undo. And more than anything, I learned by asking questions. Not just "How do I do this?" but "What would happen if I did it this way instead?"

Sometimes the answer came from the person teaching me.

Sometimes the answer came from trying it and seeing what happened.

But the learning came from having a place where those questions could exist.

Most of my career has been spent in backend engineering — working in ecosystems like Symfony, .NET, Spring. Environments where structure matters, where decisions compound, and where “we’ll fix it later” usually turns into “we’re still dealing with this six months from now.”

A lot of the real work in that world isn’t typing code.

It’s thinking.

It’s weighing trade-offs. It’s deciding which problems you’re solving today and which ones you’re intentionally leaving for later.

And historically, that kind of thinking has been… a little isolated.

You either grab time with another senior engineer, or you just sit with it yourself.

AI changed that for me.

The best way I can describe it is this:

AI hasn’t felt like a replacement. It’s felt like something closer to an apprenticeship mirror.

A way to throw ideas out and see them reflected back — sometimes clearly, sometimes distorted — but almost always enough to make you stop and think.

There’s something incredibly valuable about being able to just… nerd out over a design.

No meeting. No calendar. No worrying about whether you’re taking up someone’s time.

Just throwing ideas out there:

"What if I structure it this way?" "What happens if this scales?" "Is this clever, or is this going to bite me later?"

And getting something back that isn’t trying to impress you — just trying to be useful.

It’s not just solutions, either.

Some of the most useful conversations I’ve had haven’t been about solving a specific problem at all. They’ve been about exploring different approaches, frameworks, and even languages.

Just nerd-talking through things.

Comparing trade-offs. Challenging assumptions. Asking "why not this?" instead of defaulting to "this is how I’ve always done it."

That’s led to something I didn’t expect.

Honestly, some of my favorite conversations with AI have been around those exact moments — revisiting ideas, technologies, or approaches I had already written off.

It’s made me more willing to reconsider tools and approaches I might have dismissed in the past.

Not because AI told me I was wrong — but because it gave me a space to actually think through those decisions instead of relying on instinct or habit.

There’s another angle to this that I didn’t expect.

In the past, when I was working through a design or a tricky decision, the best thing you could do was grab another engineer — or a couple of them — and effectively red-team it.

"Alright, poke holes in this."

And if you had good people around you, that was incredibly valuable.

The problem is, time is expensive.

Everyone’s busy. Calendars are full. And pulling someone into that kind of deep, focused thinking isn’t always easy.

What I’ve found is that AI fills that gap surprisingly well.

Not as a replacement for good engineers — but as a low-friction way to pressure-test your thinking. You can throw an idea out there and say:

"What am I missing?"

"Where does this break?"

"What happens if this assumption isn’t true?"

And you’ll get something back that isn’t trying to be clever. It’s not trying to win the argument. It’s just trying to work through the problem.

It’s not perfect. It won’t catch everything.

But it’s fast. It’s available. And it’s often enough to surface the kinds of questions you should be asking anyway.

One thing I’ve come to appreciate is how often the best answer isn’t the most clever one. It’s the one that’s easiest to understand six months from now.

More than once, I’ve ended up in a conversation where the feedback essentially boils down to:

"Future you is going to hate current you in six months if you do this."

And it’s usually right.

That’s the part that’s been the most valuable. Not the answers. The feedback loop.

Sometimes the best part of using AI isn’t the answer it gives you — it’s the question it makes you ask. Or the hesitation it introduces. Or the moment where you stop and think:

"Why am I doing it this way?"

That kind of interruption in your own thinking is hard to come by when you’re working alone.

None of this replaces experience.

If anything, it makes experience more valuable.

Because the better your understanding is, the better you can evaluate the feedback you’re getting. The better you can separate useful pushback from noise. The better you can recognize when something is technically correct… but practically wrong.

And sometimes, just wrong.

That’s something I think is important — especially for engineers coming up behind me.

It’s okay to push back on AI. It’s not always right.

AI is useful — but only if you stay in charge of the thinking.

I ran into this just yesterday. I moved a conversation into a new chat, and in doing that, a lot of the context got lost. The suggestions I got back were clean. Reasonable. Technically sound.

But they were based on assumptions that weren’t true anymore. If I had just taken that answer and run with it — no pushback, no clarification — it would have worked just well enough to get through the moment…

…and then turned into an absolute nightmare a year or two down the road.

It only got better once I pushed back.

"That won’t work here because…"

"You’re missing this constraint…"

"This part of the system behaves differently…"

And once the context was corrected, the answer changed. That’s the difference. The value isn’t in getting an answer.

It’s in being able to challenge it.

Back at the forge, the person teaching me didn’t just give answers. They gave me space to ask better questions. To try things. To think things through. To understand not just what worked — but why.

That’s what this feels like.

I still think there’s real risk in how our industry is starting to position AI. I still think we need to be careful about what we think it is. But at a personal level, the impact has been hard to ignore.

AI hasn’t replaced my thinking.

It’s made me more deliberate about it.

It’s made me question decisions a little more.

It’s made me a little less attached to being “right” and a little more focused on getting it right. And in a field where small decisions compound over time…

That might be the most valuable thing it can do.

In the forge, iron sharpens iron. We make better tools through pressure, repetition, and refinement. I think there’s something similar happening here.

We can make AI better by how we use it. But we have to be just as intentional about letting it make us better in return. Because at the end of the day, the tool doesn’t define the craft.

The craftsman always has.

Crossposted: LinkedIn | Medium
