morganllewellynjones

AI writes pretty good code these days and it doesn't really matter

AI Baseline

Given that it is impossible to escape conversations about AI, I thought I might as well add to them. I assume that I am not the only one hearing discourse about AI that looks like this:

"AI is an incredibly powerful tool, but I'm worried people might use it incorrectly."

"Who knows what AI will be able to accomplish in a few years?"

"For a text transformer, the output of generative AI is shockingly good."

I definitely agree that LLMs can be a valuable tool, but there are many other developer tools that I consider more valuable (which I will get into) and that do not come with the enormous environmental and societal costs that AI does. I think we as developers should evaluate our relationship with AI in the context of our relationship with other tools, and we often fail to do this. This is perhaps because, as useful as LLMs can be now, most of the excitement about them remains optimism about their future impact: an optimism that is not well earned, given that there are known limits to the current training strategies used for LLMs, and we can already witness their growth stagnating.

AI is good for a middle schooler

At this point, developers have a better-understood baseline for what LLMs can accomplish than they did a few years ago. Most developers find LLMs useful, at least some of the time. Even developers who are highly skeptical of AI (of which there are many) will generally concede that LLMs are a powerful tool, and that the code (and advice) they generate is impressive given the fundamental limitations of the algorithms behind the curtain. Granted, most senior engineers can easily spot flaws and issues in AI-generated code, and studies of AI-generated code have found a much higher churn rate for code written by AI than for code written by humans. The idea that LLM code is "surprisingly impressive" for a text transformer is a bit like saying a drawing is "surprisingly good" for a middle schooler. Even when the code an LLM generates is flawless, however, it fails to substantially improve my workflow as a developer, and here is why.

As an enterprise developer, most of my job is not writing code.

The vast majority of my daily responsibilities include:

  • Meeting with supervisors and stakeholders to discuss current and future work.
  • Reviewing details presented in a Jira ticket, asking for clarifications and designing around ambiguities.
  • Identifying which area of the codebase would best serve as a launching point for extending to add a new feature.
  • Refactoring legacy code to make extension possible. This often involves deleting more code than is written, not an LLM's strength.
  • Connecting endpoints and interfaces in a few key places.
  • Testing and reviewing the end result and refactoring code as needed.
  • Tracing bugs.
  • Updating documentation.
  • Providing QA with steps to test and verify the new feature or bug fix.

There are times when I will write a new feature that involves several files' worth of new code. But even then, LLMs are usually not a good fit, because the new code needs to fit cleanly within the existing architecture of a large legacy codebase, and LLMs struggle mightily with context rot and complex type systems. Where I have found LLMs most helpful is with the following tasks:

  • Writing quick throwaway scripts for testing or one-off data transformations (see the sketch after this list).
  • Talking to the duck.
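
For illustration, here is the sort of disposable script I mean: a minimal sketch in Python, where the file names and columns (`users.csv`, `id`, `email`) are entirely made up. The point is that the code is throwaway.

```python
# One-off transformation: reshape a CSV export into JSON test fixtures.
# "users.csv" and its columns are hypothetical; this is throwaway code.
import csv
import json

with open("users.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Keep only the fields the test fixture cares about, normalized.
fixtures = [
    {"id": int(row["id"]), "email": row["email"].strip().lower()}
    for row in rows
]

with open("fixtures.json", "w") as f:
    json.dump(fixtures, f, indent=2)
```

If an LLM gets a script like this subtly wrong, the blast radius is one test run, which is exactly why this is the niche where it earns its keep.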

These uses have value, and even if these are the only use cases I have for generative AI, my efficiency and experience as a developer are still improved by its existence. But there is a catch.

The cost of AI

AI has dramatic environmental, economic, and political impacts that make me deeply uncomfortable, both as a developer and as a human. If the end result of generative AI is that massive amounts of drinking water are taken from poor communities in Arizona, Brazil, and elsewhere so that I can occasionally solve a problem in one minute with an LLM that would otherwise have taken me five, it doesn't feel worth it. Especially given the many other developer tools that have a more positive impact on my day-to-day work as a developer and don't come with any existential threats.

Tools more valuable than AI

To put this in context, here are several tools that do not require generative AI and that I have found infinitely more helpful. These tools have become such a core part of the developer experience that they are often taken for granted. They are subtle, working their magic in the background to make research and development an easier, safer, and faster experience. While consistently appreciated, these tools are rarely called "amazingly powerful" or talked about as transformative to the developer experience the way LLMs are.

LSPs

Go-to-definition, renaming tools, integrated debuggers, and documentation lookups save me far more time than generative AI. Since code is read much more frequently than it is written, strong tools for navigating large codebases save me time much more often than a code generator does. I am aware that LSPs often integrate with generative AI to make coding suggestions, but I have found this to be among the least valuable features an LSP has to offer, as these suggestions save only a few keystrokes and are wrong more often than not.
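
To make concrete what an editor and a language server are actually doing: they exchange JSON-RPC messages defined by the Language Server Protocol. A go-to-definition request looks roughly like the sketch below; the file URI and cursor position are hypothetical, but the message shape is what the spec defines.

```python
# Sketch of the JSON-RPC request an editor sends for go-to-definition
# under the Language Server Protocol. Positions are zero-based.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/definition",
    "params": {
        # The URI and position here are hypothetical examples.
        "textDocument": {"uri": "file:///home/me/project/app.py"},
        "position": {"line": 41, "character": 17},
    },
}
print(json.dumps(request, indent=2))
```

The server answers with the location(s) where the symbol is defined and the editor jumps there. No generation involved, just indexing, which is why it is fast and reliably correct.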

Documentation

Developer documentation has gotten much more thorough and accessible in the last couple of decades. The breadth of documentation available for most modern enterprise APIs is the very thing AI relies on to give good recommendations. You'll learn these technologies even more deeply as a developer if you go to the source and read the documentation yourself. Development is thought work, and understanding how the technologies you use truly work saves a lot of time and misery.

Type systems

A strong type system saves me from debugging countless errors at runtime by catching them at compile time. Type-safe autocompletion gives me confidence in my code and reduces keystrokes.
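
A trivial sketch of the kind of bug this catches, using Python type hints checked with a tool like mypy (the function and values are made up; the same idea applies in any statically typed language):

```python
# A type checker flags the call below before this code ever runs.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

# mypy: Argument 1 to "apply_discount" has incompatible type "str"
total = apply_discount("19.99", 10)
```

At runtime this would raise a TypeError somewhere down the line; the checker points at the exact call site before the code ever ships.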

Linting tools

Linters save me a lot of time by surfacing common bugs, typos, deprecated libraries, and other issues before I even compile.
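
For example, the snippet below parses and runs, but a linter such as flake8 or ruff flags every marked line long before I get anywhere near the bug (exact rule codes vary by tool and configuration):

```python
import os  # flagged: imported but unused

def is_ready(status):
    if status == None:  # flagged: comparison to None should use "is"
        return False
    result = True  # flagged: local variable assigned but never used
    return status == "ready"
```

Each of these is harmless here and a production incident somewhere else, which is the whole value of catching them mechanically.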

Modern language APIs

These often cut boilerplate to a fraction of what older language patterns required, and provide better safety for variable references and the like.
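
Python's dataclasses are one illustration of the pattern: the modern declaration below generates the same constructor and equality behavior that the older class spells out by hand.

```python
from dataclasses import dataclass

# Older pattern: constructor and equality written out by hand.
class PointOld:
    def __init__(self, x: float, y: float) -> None:
        self.x = x
        self.y = y

    def __eq__(self, other: object) -> bool:
        return isinstance(other, PointOld) and (self.x, self.y) == (other.x, other.y)

# Modern API: the same behavior derived from the declaration.
@dataclass
class Point:
    x: float
    y: float
```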

Summary

Compared to other developer tools, generative AI offers only a marginal benefit. The developer ecosystem's contributions to language servers, detailed documentation, linters, formatters, and other everyday tools save me time constantly in the kind of work I actually do. But they do not get the earth-shattering credit that LLMs do. This is probably because many people still believe that LLMs will evolve into AGI and write code autonomously in the future.

But I've seen my co-workers use AI to generate Jira tickets, and it hallucinates all over the place. If an AI cannot summarize a simple feature request, it has little hope of autonomously implementing the change. AI is stagnating, and there is currently no roadmap for it becoming anything other than a tool. As tools go, it is valuable, but not especially so, and not compared to other developer tools. I think we need to be honest about where AI is at and evaluate it for what it is, because the cost is enormous and the benefit is small.

I don't expect LLMs to be dropped. Clearly, they have a place and are here to stay. But for as long as we continue to venerate AI and feed the hype bubble, we give OpenAI and others social and economic capital that is leveraged to break actual laws and regulations and destroy real communities in their reckless quest for power and economic domination. This is not a future I want to encourage.
