Vatsal Trivedi
My Pinch of Salt: Did AI and LLMs Kill the "First-Time Correct" Engineer?

I remember a time, not too long ago, when the goal was to be First-Time Correct. It was a badge of honor. It meant you spent hours whiteboarding, meticulously planning your logic, and debating data structures before a single line of code was written. Committing something broken to the main branch was a walk of shame. The aim was craftsmanship.

Today, something feels different. We're all about velocity. The pressure is to push, to merge, to keep the CI/CD pipeline green and buzzing. And our new AI coding buddies—our Copilots and LLM assistants—are more than happy to help.


The Chatbot Workflow

Think about how we interact with a tool like ChatGPT. We ask a question. It gives an answer. It's not quite right. We tweak the prompt. Ask again. Still not there. We rephrase, add context, and ask a third time. Voilà, the perfect solution.

Now, look at our Git history. Does it look familiar?

A commit is pushed. The build fails. A quick fix is pushed. It passes tests but breaks something downstream. Another commit is pushed. This cycle continues, a rapid-fire conversation with the codebase, until we stumble upon the correct solution. We're essentially mimicking our chatbot interactions in our development process. The feedback loop for writing code is nearly instantaneous, but the feedback loop for shipping working software is getting longer, messier, and riddled with the ghosts of failed attempts.


Redefining "Versions"

Some argue that we can never be "First-Time Correct," and I agree. Perfection is a myth. But we used to have a system for imperfection. We had v1, v2, and so on. These were deliberate releases, accompanied by clear notes on what was working, what was failing, and what was intentionally skipped. It was a structured approach to iteration.

Now, our versioning feels… different. It's becoming:

  • v1: The code the AI generated in five seconds.
  • v1.1 through v1.9: The flurry of commits I had to make myself to fix all the things the AI missed, misunderstood, or hallucinated.

Our measure of success is subtly shifting from "Is it correct?" to "How fast did the AI help me fix the things it got wrong?" We've traded the discipline of forethought for the convenience of AI-powered hindsight. We're becoming expert editors of machine-generated code rather than architects of robust systems.

We gained speed, no doubt. But in our race for velocity, are we forgetting the art of being right? 🤔


So here's my open-ended question for you: Are we trading the enduring satisfaction of craftsmanship for the fleeting high of a passing pipeline?
