<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: James Guidaven</title>
    <description>The latest articles on DEV Community by James Guidaven (@jkguidaven).</description>
    <link>https://dev.to/jkguidaven</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3888051%2Fea759144-f64f-412b-81c8-4130f741e0a9.png</url>
      <title>DEV Community: James Guidaven</title>
      <link>https://dev.to/jkguidaven</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jkguidaven"/>
    <language>en</language>
    <item>
      <title>I Worry I'm Not Developing</title>
      <dc:creator>James Guidaven</dc:creator>
      <pubDate>Mon, 20 Apr 2026 02:02:27 +0000</pubDate>
      <link>https://dev.to/jkguidaven/i-worry-im-not-developing-5eeb</link>
      <guid>https://dev.to/jkguidaven/i-worry-im-not-developing-5eeb</guid>
      <description>&lt;p&gt;A few weeks ago I fixed a bug in CI that had been bothering me for a while. Intermittent failures. Only on some runners. The kind of thing where you can't reproduce it on demand and you can't rule out flakiness.&lt;/p&gt;

&lt;p&gt;It turned out to be a pnpm caching issue. We were caching &lt;code&gt;node_modules&lt;/code&gt; directly, but pnpm uses hardlinks into its content store, and those hardlinks don't survive being cached and restored across different runners. The fix was to cache the pnpm store instead and let pnpm rebuild the links on each run. Small change. CI went green. I moved on.&lt;/p&gt;
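&lt;p&gt;For anyone hitting the same thing, here's the shape of that fix sketched as a CI config. The post doesn't name the CI system, so this assumes GitHub Actions; the step names and cache key are illustrative, not taken from the actual pipeline:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Don't cache node_modules directly: pnpm hardlinks files from its
# content-addressable store, and those links break when the directory
# is archived on one runner and restored on another.
# Cache the store itself and let pnpm relink on every run.
- name: Get pnpm store directory
  id: pnpm-store
  run: echo "dir=$(pnpm store path)" &gt;&gt; "$GITHUB_OUTPUT"

- uses: actions/cache@v4
  with:
    path: ${{ steps.pnpm-store.outputs.dir }}
    key: pnpm-store-${{ runner.os }}-${{ hashFiles('pnpm-lock.yaml') }}

- run: pnpm install --frozen-lockfile
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Relinking from a restored store is cheap, so this keeps most of the caching speedup without depending on hardlinks surviving the cache round-trip.&lt;/p&gt;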

&lt;p&gt;A day later I was thinking about it and realized I couldn't remember the moment it clicked. I remember having a conversation with AI about it. I remember the answer being there at some point. But the actual click — the "oh, it's the hardlinks" moment — I couldn't find it in my head.&lt;/p&gt;

&lt;p&gt;That bothered me more than the bug did.&lt;/p&gt;

&lt;p&gt;And it made me admit something I'd been avoiding for a while: I'm not sure I'm still developing as an engineer.&lt;/p&gt;

&lt;p&gt;Not in the "I need to learn a new framework" sense. In the deeper sense. The one where you ship stuff, people use it, it works, and you still have this quiet feeling that you didn't really do the work. That the AI did, and you were along for the ride.&lt;/p&gt;

&lt;p&gt;I don't think I'm the only one feeling this. A lot of engineers using AI heavily feel some version of it, and mostly don't say it out loud. Partly because the tools are genuinely good and complaining sounds ungrateful. Partly because saying it sounds like admitting you might be a fraud.&lt;/p&gt;

&lt;p&gt;So I want to write about it honestly.&lt;/p&gt;

&lt;h2&gt;The takes I don't buy&lt;/h2&gt;

&lt;p&gt;One take says AI is making engineers obsolete. Anyone can ship anything now. Skill is dead.&lt;/p&gt;

&lt;p&gt;I don't believe this. I watch AI confidently suggest architectures that would fall over under real load. I watch it hallucinate APIs. I watch it fix symptoms and miss causes. The difference between AI as a collaborator and AI as an autopilot is judgment — which is supposedly the thing that's dead.&lt;/p&gt;

&lt;p&gt;The other take, the one I hear more often from people like me, says: AI is just a tool, stop overthinking it. You wouldn't feel guilty about using a compiler. Ideas come from somewhere for everyone.&lt;/p&gt;

&lt;p&gt;This one is closer to right, but I think it brushes off the worry too fast. Because there's a part of the worry that I think is actually correct.&lt;/p&gt;

&lt;h2&gt;The part that's right&lt;/h2&gt;

&lt;p&gt;Skills develop through struggle. The friction of not knowing, trying something, failing, figuring it out. When AI removes that friction, the output still ships, but the skill doesn't build.&lt;/p&gt;

&lt;p&gt;This is most dangerous in a specific zone.&lt;/p&gt;

&lt;p&gt;For problems way below your skill level, AI is fine. No development was happening anyway. For problems way above your skill level, AI is scaffolding you need. You'd be blocked without it.&lt;/p&gt;

&lt;p&gt;The risk zone is the middle. Problems you could solve yourself with effort, where AI makes it easy enough that you skip the effort and skip the growth.&lt;/p&gt;

&lt;p&gt;So the real question isn't "am I using AI too much?" It's "am I using AI in the zone where I should be struggling?"&lt;/p&gt;

&lt;p&gt;Honestly, I don't always know. That's what makes this hard.&lt;/p&gt;

&lt;p&gt;Take the CI bug. I don't know if I would have figured it out alone. Maybe I would have. Maybe it would have taken me a day instead of two hours, and I'd have learned something permanent about how pnpm works. Or maybe I'd have given up and asked someone. I'll never know, because I didn't run the experiment.&lt;/p&gt;

&lt;p&gt;That unknowability is the actual cost. Not that you're not learning. That you can't tell whether you are.&lt;/p&gt;

&lt;h2&gt;Tests that actually work&lt;/h2&gt;

&lt;p&gt;I've been trying to find ways to measure this that aren't just "do I feel like I'm growing." A few hold up:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Are problems that stumped me six months ago obvious now?&lt;/strong&gt; If I'd diagnose the same thing faster today, I developed — even if AI helped the first time. Retention is the evidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I predict what AI will suggest before it suggests it?&lt;/strong&gt; When I'm actually learning, I start anticipating the answer. When I'm just consuming outputs, every suggestion feels new. Shifting from the second to the first is what growth looks like in this era.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Am I taking on harder problems than I used to?&lt;/strong&gt; My ceiling should be rising. If this year's ambitious project is meaningfully harder than last year's, that's development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I teach it?&lt;/strong&gt; This is the hard one. If I can explain the mental model to another engineer — why it works, when it wouldn't — I own the knowledge. If I can't, I borrowed it.&lt;/p&gt;

&lt;p&gt;The first time I ran that last test on the CI fix, the answer was basically no. I could describe what happened. I couldn't confidently teach someone else how hardlinks interact with CI caches without looking it up. The AI and I had solved the problem. Only the AI had kept it.&lt;/p&gt;

&lt;p&gt;That wasn't a great feeling. But it was useful data.&lt;/p&gt;

&lt;h2&gt;The discipline&lt;/h2&gt;

&lt;p&gt;The failure mode I actually want to avoid isn't "using AI." It's using AI to avoid sitting with not knowing.&lt;/p&gt;

&lt;p&gt;That discomfort of being stuck — no answer, feeling slightly dumb, cursor blinking — is where development happens. If I reach for AI the second that feeling shows up, I'm buying relief from a state I should be sitting in.&lt;/p&gt;

&lt;p&gt;The fix is mechanical. Before I ask AI anything on a problem that matters, I try to form my own answer first. Write it down, even if it's wrong. Then ask. Then compare.&lt;/p&gt;

&lt;p&gt;I'm still bad at this. The instinct now is to ask, and the instinct has to be retrained. But when I do it — when I sit with the problem for twenty minutes before typing anything — two things happen. The AI's answer feels different, because I have something to compare it to. And sometimes I don't need the AI at all.&lt;/p&gt;

&lt;h2&gt;What I actually think&lt;/h2&gt;

&lt;p&gt;After sitting with this for a while, here's what I believe:&lt;/p&gt;

&lt;p&gt;The engineers who survive this era won't be the ones who used AI the most or the least. They'll be the ones who keep asking whether they're still developing, and who are honest about the answer.&lt;/p&gt;

&lt;p&gt;The ones who stop asking are the ones in trouble. Not because they're using AI, but because they've stopped noticing what it's doing to them. You can't fix a drift you're not tracking.&lt;/p&gt;

&lt;p&gt;So the worry I started this post with — I used to think it was a problem. Now I think it's the thing keeping me honest. The day I stop feeling it is probably the day I should worry most.&lt;/p&gt;

&lt;p&gt;I'm going to keep feeling it. And once a year, I'm going to take one meaningful project with AI deliberately turned off, just to find out what I can still do alone. Not as a stunt. As a measurement.&lt;/p&gt;

&lt;p&gt;If you're using AI heavily and you've been feeling some version of this, I'd suggest the same. Not to prove anything to anyone else. To prove it to yourself, so you stop wondering.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>productivity</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
