DEV Community

Coco

Strategies to combat adverse effects of AI coding for human engineers

Generative coding is burning out engineers. One of the many ways that manifests: the very people who built last year's software don't have the motivation to build this year's. In the software engineering community there is growing discussion of how the rise of generative AI coding can dampen the very reasons to build that brought us here in the first place.

How is it happening?

A subtle effect of generative AI within AI-assisted software engineering goes like this:

One of the stated goals of generative coding was to make coding easier, faster, stronger.

  • AI has made creating code by volume easier.
    • Creating code is easier, but evaluation and analysis (code review, security review) is not easier, and now takes longer than before.
    • The amount of code being created by teams (and by side agents) has increased by one to two orders of magnitude.
    • The difficulty of hand-coding has not increased, but the motivation to hand-code versus generate has.

So now human engineers have a choice on their hands: code by hand, or generate. The opportunity cost is that hand coding takes much longer initially; it even starts to look daunting next to the lure of fast initial prototypes that look like success, as long as you don't look too closely at them.

So there is a motivational bias towards generative coding, because of ease if nothing else, and in engineering, as in art, the loss of motivation or drive is toxic to what is essentially a creative act in a creative field.

The Cost

So what is the cost? What did hand coding get us really anyway?

  • Learning: We learn better when we do it ourselves.
  • Slower: Ironically, the limitation of working slower almost certainly means that we create more carefully.
  • Circumspection: Building slowly, piece by piece, we are more likely to end up with something whose security we can examine and whose every change we can carefully consider.

I may be preaching to the choir here; many engineers have voiced the phenomenon:

  1. After the initial personal enthusiasm about vibe coding/AI coding wears off, it can be demotivating and can confuse our desire to code ourselves.
  2. What if the AI can copy and paste code that is better than what we could write from scratch?
  3. How do we justify the 1000 minutes it would take us to hand-code vs the 3 minutes to vibe code it?
  4. Once the work has been shifted right into the realm of review volume, review and QA become the new bottleneck.

To briefly use cooking as an analogy, the advent of the microwave brought a time saving into the kitchen: why cook in the oven for an hour as opposed to putting a TV dinner in the microwave for 7 minutes? But what room is there to be a creative chef when you are hitting four buttons on a microwave?

So what do we do, how do we change our thinking, what strategies can we use to counteract that AI effect on our engineering motivation, learning, and sustainability?

Awareness first

First, there is a constant and evolving conversation we should be having with other people, and internally with ourselves, on a daily basis to develop better "coping" strategies: being realistic and pragmatic about our skills and the limits on our drive and cognitive load. Imagine an AI agent pulling some of the smartest engineers' code off of GitHub, combining it in a custom way, and then throwing the result at our forebrain in an extremely high-complexity pull request; it should be obvious why the cognitive load, the complexity fatigue, is going to be extreme. All of a sudden, instead of reviewing our teammates' PRs, we're reviewing a machine's remix of strangers' code, at a volume no one signed up for.

So we need to think of our motivation and creative energy as a limited resource to be guarded.

Reframing of hand coding as high level self training

Professional software engineers generally think of coding as the work, the output of what we do. When we then compare our output to machine output at orders of magnitude greater scale, we think something is broken. "I can write 30 lines in the time it can generate 100,000?" Stop playing chess against chess engines and expecting to win, and stop playing the game of output volume against AI-assisted coding. The lines-of-code game has been won, and it has been won by the machines.

So you need to play to your strengths, and reframe your own hand coding as training and learning, and be willing to argue for your need to train and learn more slowly and deliberately. The AI has no body, it cannot speak to the people using the software, it really doesn't care if the app works or not. You don't need to write the code, but you do need to understand the code.

"What advantage is there to our coworker coding vs. AI?"
"Well, our coworker can learn."
"Do we have time to incorporate learning and training of the employee into our process?"
"Well, if we don't take that time, we will never truly move forward, we'll just build a tower of bricks destined to topple."

Review as play

Reviews of incoming code can be a chore. The ratio of generated code to reviewer reading speed just got really, really skewed in the direction of generated code. Because AI isn't rate limited, reviewing AI additions often feels like a firehose aimed at a teacup. There is an opportunity to make reviewing a game: not just being scathing and direct in feedback, but making a simple abstract point system for finding problems, for finding mistakes.
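As a sketch of what such a point system could look like, here is a minimal Python tally. The finding categories, point values, and reviewer names are all hypothetical; a real team would pick its own taxonomy and weights.

```python
from collections import defaultdict

# Hypothetical point values -- tune these for your team.
POINTS = {
    "bug": 5,        # logic error that would ship broken behavior
    "security": 10,  # injection, leaked secret, unsafe deserialization
    "dead_code": 2,  # generated code that is never actually called
    "style": 1,      # naming, formatting, unclear comments
}

def score_review(findings):
    """Sum points per reviewer from (reviewer, finding_kind) pairs."""
    totals = defaultdict(int)
    for reviewer, kind in findings:
        totals[reviewer] += POINTS.get(kind, 0)
    return dict(totals)

# Example: two reviewers logging what they caught in one PR.
findings = [
    ("ana", "bug"),
    ("ana", "style"),
    ("raj", "security"),
]
print(score_review(findings))  # {'ana': 6, 'raj': 10}
```

Even something this crude changes the framing: a reviewer who catches a security hole in a wall of generated code "scores", instead of just slogging.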

Personal projects as play

I hesitate to even mention this, because engineering culture already has a tradition of personal projects as a dubious way to "unwind". Still, it makes sense that circumscribing areas, features, or at least whole personal projects as AI-free can help bring the human play instinct back into your process. Have one project for silly prototypes, and one project where painstaking hand coding is the objective. Or even silly projects with no loss condition.

Grading on quality as a teacher would

It has always been tricky to review other team members' code. There's a saying to "write your code so you won't hate yourself when you read it in 3 months". Now we have the added difficulty that the code in front of us may not have been written by a person at all.

It's time to start figuring out your own personal tooling for checking quality. Build tooling to quantify and surface quality metrics, and put it into the flow of CI/CD. If you are interested, I have another article about that topic as a whole that I will cross-link. Suffice it to say here that we all get to start acting as teachers of an 8th-grade classroom have acted for centuries: develop your own personal grading rubric and start giving PRs a letter grade. Start building checks for the intangibles, like clarity, now, so that you have at least a rough quality gate in the short term.
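The rubric idea can be sketched very simply. Assume we have already extracted a few cheap signals from a PR (for example from `git diff --shortstat` in CI); the thresholds and deductions below are illustrative, not a standard.

```python
def grade_pr(lines_changed, files_touched, test_lines):
    """Map a few quantifiable PR signals to a letter grade.

    Thresholds are placeholder values for illustration; calibrate
    them against your own team's history.
    """
    score = 100
    if lines_changed > 400:
        score -= 30   # huge diffs can't be reviewed carefully
    elif lines_changed > 150:
        score -= 15
    if files_touched > 10:
        score -= 10   # sprawling changes hide side effects
    if test_lines == 0:
        score -= 25   # no tests shipped with the change
    for grade, floor in (("A", 90), ("B", 80), ("C", 70), ("D", 60)):
        if score >= floor:
            return grade
    return "F"

print(grade_pr(lines_changed=120, files_touched=3, test_lines=40))   # A
print(grade_pr(lines_changed=5000, files_touched=40, test_lines=0))  # F
```

A CI step that posts this grade as a PR comment turns the rubric from a private judgment into a shared, visible signal, and makes a 5,000-line generated dump get the "F" it deserves before a human ever opens the diff.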

Manage complexity at a high level: use qualitative analysis tooling to summarize incoming PRs in context.

Finally

Finally, I have a confession to make. Part of the reason I am writing this article is that I want to hear what you do to combat this, and whether you are experiencing this type of burnout at all. I don't have the, uhhh, definitive answer to this subtle problem, but I expect it is one that many of us in the software development space need to combat. So collectively we need to take a breath and start developing very human and humane ways to be kind to ourselves and realistic: ways to counteract the new type of burnout AI-assisted coding causes, while acknowledging that, like most time-saving devices, it can initially create more work and stress instead of less during this rocky adjustment period.
