First things first, I’m an AI hater
Well, I’m about as close as one can get to being a hater without actually hating it just yet. I do think there is some potential value to all this AI stuff, but there are SO many things to hate about the tech, the yet unknown impact it will have on our world, and the never-ending social media hype train. I hope AI is a bubble that pops just like NFTs. I sincerely do. I do not think this tech is as great as everyone makes it out to be. I am not sure if I’ve written about this before, but I keep coming back to the fact that AI always seems the most impressive at tasks that are outside the wheelhouse of the user. Whenever the user tries to use AI for something they are good at, AI always seems to fall short. So I tend to think that AI overall is very mid, but it just feels cool because we ask it to do things we’re not good at, and we have trouble discerning how good the results actually are.
But I still gotta keep my job, so I’m using AI.
Things I am very mindful of during the AI onslaught
I have tried to keep well informed about AI by not just listening to the hype train (I hope Theo sees this). I’ve watched some great talks about how AI works, its strengths, its weaknesses, and the things the LLM companies don’t really want us thinking about too hard, like model collapse. Over the past few months, I have seen a few studies that call out some of the following, and the way I use AI today is based on trying to make sure that none of these things happen to me.
Skills rot
Skills rot is the concept that if you don’t use a skill for a while, you lose that skill. In tech, this would manifest as offloading so much of the mechanical coding work to AI that you simply lose the ability to code altogether. Reading code that the AI wrote is not the same skill as being able to write that same code yourself. I do not yet believe that the skill of knowing how to code is dead. I guess that’s partly because I am a front end developer, and front end code is one thing that AI models haven’t typically been great at thus far. But I would like to think that even if I were a backend developer, I would not want to lose my coding skills. I wouldn’t want to lose the memory bank of techniques, functions, and approaches that I’ve built up over more than a decade of coding, even if a machine can do it too, possibly faster and with fewer errors than me.
AI brain rot/fog
The term "AI brain fog" at first seems kind of similar to skills rot term, but it is subtly different. AI brain fog is the feeling you can get when you are trying to be the “human in the loop” while simultaneously prompting an AI model to produce a hefty feature, or a collection of related features all at once. The fog comes when the output from the AI finally lands in your IDE. All at once, there is so much to look at and think about, that what exactly you were going for in your prompts gets pushed to the back of your mind and its hard to focus on what exactly it is you were doing. Anecdotally, I hear the brain fog gets worse the more agents you run and if you browse reddit on your phone while you wait for the bot to set your weekly token usage on fire. I have felt the brain fog and I did not like it. I think its because I’m a self-medicated ADHD brain that can hold a lot of context but I NEED all of that context to be functional. I had the same feeling back in college when I was waiting tables and had to share a large party with another server. I would be trying refill guests drinks but the other server had already done it without telling me and I got discombobulated very easily and didn’t do a great job. I always preferred to just handle large parties of guests on my own because I knew and was in control of everything so I could manage it better. I don’t think I’m a control freak that has to be in charge, but having all the context in my brain definitely helps me stay connected to the code that ends up pushed to my repo.
Review burnout
We have all felt this just from our human friends and coworkers. It’s hard to read a mountain of changes and review it for quality and sound engineering. There is a reason we all joke that any PR over 50 files is an auto-LGTM. We try not to make that true, but on some level it is. Unless we buckle down and develop strategies for making lengthy reviews tolerable, we will inevitably let bad code sneak in. It’s not really our fault, but it does happen. AI makes this problem so much worse. Because AI can generate so much code, and doesn’t care at all about refactoring to simplify unless you explicitly ask it to, huge amounts of code are destined to land in countless PRs before our very eyes. This is why the AI companies also claim that you can just hook another agent up to your PR list and have it review the PRs for you. That review agent won’t care about code quality either, and it will let bad code through just the same, only this time because it hallucinates or non-deterministically fails to follow the “rules” you have laid out in all your various markdown files. I did not get into tech to be a review machine. I do not want my role as a dev reduced to catching all the places where the AI screwed up. Could I use said AI to write a billion eslint plugins to catch those things? Maybe, but I would rather produce business value than spend my time coding guardrails for the text prediction machine.
I am still waiting for the day when we finally experience enough issues that we just call these AI tools crappy tools. I might be waiting for a while, but I hear a lot more negative feedback than I used to. I hope the magic of the “it got it right!” moments, which are so few and far between, is finally starting to take a backseat to all the times “it got it wrong and I fought with it.”
Trust issues
I’ve written before about the dangers of anthropomorphizing AI. I hate the fact that folks out there give them names and souls and use human-gendered pronouns like "he" to describe them. I am unnerved by the fact that the AI companies code their models to respond like sycophantic adoring fans that desperately want us to know how much they agree with us, even when we tell them they screwed up. I hate that we tell AI bots they screwed up, as if they are going to care about it and try to change. I hate that we ask AI bots why they did what they did when we already know the answer: given the context they had, they predicted that the next words to execute were “delete production database”. I do not believe that these AI tools are correct enough for us to trust them as much as people seem to. And the more they get anthropomorphized, the more we start to trust them. That trust might end up costing us dearly, like it cost AWS, GitHub, and Anthropic, who can’t keep their services online for longer than a few days at a time. I don’t want to fall into the trap of trusting that the AI is going to get it right most of the time, because if I do, I know I’ll let its mistakes slide because I won’t recognize them.
How I use AI
For all of the reasons above, I have settled into what I think is a decent equilibrium in how I use AI to be more productive without falling into any of those traps. I am not and do not want to be some 10x engineer who brags about how many lines of code I can ship in a day (lol). I want to be a dev who still knows how to code in case the bottom falls out of this whole AI thing. I want to be one of the only senior engineers who still knows how all the systems work and are designed to work, because I want to have designed them that way and to have laid eyes on every single piece of the code, so I can keep it all together in my brain for now and for later, when it breaks.
Research data point
One of the ways I use AI is as a data point for research, or when I am thinking about how to solve a problem. I will ask my bot some question, but I do not automatically trust that the answer is correct. Whenever I ask any such research or approach question, my intent is to use that answer as one data point, not as infallible truth. Yes, I still Google it too. Yes, I still poke into old Stack Overflow threads. All of those are data points to me in making a decision. I make the decisions, not the bot. What the bot says may end up being the way I decide to go, but I always strive to keep some distance between the bot’s response and my choices. I almost never choose an approach merely because it came from the bot.
Troubleshooting coal mine canary
When I get errors that I don’t immediately understand or have not seen before, I will turn to the AI and ask it what is wrong and why I’m getting the error. But I almost never just accept what the bot tells me at first glance. I tend to use what the bot says as the starting point for my own analysis, to help me get stuck into the problem. I have seen so many people on social media claim that their bots just straight up fix problems, but that hasn’t been the case for me much. Usually the bot hasn’t actually done a great root cause analysis, so I have found it much more effective to treat what the bot says as a pointer to where I might go look for myself. Have there been a few cases where the bot output a fix and that was the right thing to do? Sure, but only for the most trivial of problems, like incorrect or missing config options for some build tool. For really complex problems, the bot has led me astray more often than not.
Human in charge
This approach is probably going to be the most anti-hype-train thing I do with AI. When I’m building a feature, I almost never prompt the AI to “build a feature such that....”. That way lies review burnout and AI brain fog, and I’m not interested. Instead, I’ll ask the AI how it would implement the feature and read through its answer, not accepting any file changes. I’ll form my own approach to the feature, and once I have the plan, I’ll direct the bot to do small pieces of it. My prompts tend to be more like “implement a function that does ... and call that in the main function such that ...”. That way I am in full control over the architecture of the feature and the verbosity of the code, and I don’t let the AI run wild creating all kinds of one-off functions and vars everywhere. I envision this approach as basically speeding up the typing while otherwise developing exactly the way I would have without AI. I find that it minimizes skills rot and brain fog, since all the AI is really doing for me is typing faster.
I find that this approach helps:
- keep track of all the parts of the feature so that nothing goes off the rails
- refactor in place rather than discovering lots of places to refactor later, out of context and out of flow
- retain intimate knowledge of exactly how the feature works because I directed all of it
At work, the leads all want us to adopt a “human in the loop” approach to AI. A lot of my coworkers have taken this to mean prompting the AI to “build the feature according to the specs ...” and then just watching to make sure it doesn’t go off the rails. But it almost always does, and it’s almost always hard to catch before it has strayed too far. I don’t think being “in the loop” is enough to minimize skills rot. I want to be “human in charge”. I direct the bot to do everything, and I trust practically nothing, or only the smallest, most trivial parts of some feature. Sure, I’ll let the bot determine how it wants to make an Array from a Set, as in the sketch below. But I want to stipulate that the Set and Array be used in the first place.
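
To make that concrete, here’s the scale of decision I mean, as a deliberately trivial JavaScript sketch. Either form works, and I genuinely don’t care which one the bot picks:

```js
// Two equivalent ways to turn a Set into an Array. This is the kind of
// micro-decision I'll happily leave to the bot. What I won't delegate is
// the choice to use a Set for de-duplication in the first place.
const uniqueIds = new Set([3, 1, 3, 2]);

const viaFrom = Array.from(uniqueIds); // [3, 1, 2]
const viaSpread = [...uniqueIds];      // [3, 1, 2]
```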
But what about being left behind?
But what about all those AI chads using 10 agents to build their SaaS apps in 5 minutes? Aren’t I afraid that I’ll get left behind and the world will pass me by? Not really. AI is a tool, and you have to know how to use it, but the tool will never dictate the quality of the output. The nail gun a trim carpenter uses does not care how tight the mitered corners are throughout the house; only the carpenter can control that. I think that if you left the nail gun to do its own trim work, it’d probably be fast but crappy. I have been around long enough to have seen what happens when folks prioritize speed over quality, and it never works. Quality is slow, but you reap the benefits longer, so it averages out. It takes longer to write good tests (even with AI) than it does to write bad tests or none at all. It takes more time to obsess over details, but one of the things I think the AI hype bros get wrong is thinking that those details don’t matter. They do, and they will continue to matter. All the outages at these companies going “all in on AI” point to the fact that quality matters. I won’t get left behind; I’ll get where we’re going eventually.
I have a real problem with the AI hype train being tied to the “startup mentality” of shipping features where only speed matters. If the industry is moving so fast that it doesn’t see it is careening over a cliff, I’ll be glad to lag behind and not fall. I have never prioritized speed over quality and I’m not about to start now. Writing code was never the bottleneck for me anyway. So what if I ship code 100 times faster? Will my coworkers review it faster? Will it also suddenly be faster for us all to agree on what to build in the first place? Knowing what and how to build will always trump being the first one to build it.

