DEV Community

mrmujo

Don't Let AI Make You Stupid

TLDR: AI makes you stupid but productive, and it is a degenerative slope: eventually you lose your ability to ascertain what good code is. I have to fix this for myself; I have far too many years left to sit by the side and watch programming go by. My solution is to match the amount of delegation to how much I care to learn, depending on how interesting the topic is and how much my future self needs the knowledge. It is about describing a problem in my own words and reading the docs or the API before coding. Agents get to do the boring stuff.

How we used to learn

Before AI, we had Stack Overflow. To use it in any meaningful way you had to first understand the problem at hand well enough to describe it in general terms. From there, either you searched and tried to adapt a general solution to your codebase, or you posted, which meant stripping away the specifics of your codebase and asking a question a stranger without insight into your solution could answer.

That process was educational. It forced you to think through the problem, and often you solved the issue just by rubber-ducking it this way; if you did not, the eventual answer still required some thinking and forced the learning.

Levels of delegation

There is a ladder of how much thinking you hand off to AI. Each step is more productive and less educational.

You describe the problem in your own words. This is the Stack Overflow way. You know what you are talking about, you have diagnosed the issue, and you are mentally prepared for a solution. AI is a reference book here, though one specifically tied to your problem.

You paste the broken code and ask what's happening. You are using the AI as a teacher now. Less good, but you are still not asking for the solution straight up. This works well for the parts you know reasonably well but need a pointer on. You will probably remember this.

You paste the broken code and ask for a fix. Now you have skipped diagnosis. You might learn from the answer, but let's be honest, you probably will not.

You give an agent your whole repo and a task. Now you are a manager. You might understand what changed, but the longer you do this the duller your programming tools become, to the point where one day you will not be able to ascertain whether the code the agent suggests is any good.

The pressure is always to slide down the ladder. You will not see the knowledge loss until the AI is subtly wrong and you cannot see it, cannot understand what is off.

Current system

So here is what I try to do now.

Boring work gets full delegation. I maintain a legacy .NET 4.8 solution. I have already learned everything it has to teach me. Letting AI handle it will not make much difference to my knowledge bank. I will have to own the result but I am fine with being a manager here.

Interesting work gets the Stack Overflow treatment. I describe the problem in general terms. If I cannot formulate it that way, that is a signal that I do not have a good enough grasp of the problem yet; either I go back and try to understand, or I paste the problematic code and ask the AI to explain, not to fix.

New territory starts with documentation. When I am learning something new, I read the API/docs/random blog before writing any code. I load them on my Kindle. Reading documentation without the ability to immediately try things forces you to build a mental model. It also has the added benefit of being time off the computer. I value moments away from the screen highly; I love the part of my job that is reading and understanding, and when I get to do that comfortably from my favorite armchair I feel like I'm winning. I come back to the keyboard with a clearer picture and an intent.

Local models for thinking, cloud models for labor. If what I am doing is asking general questions, then I do not need the full cloud model with all its agentic bells and whistles. A local LLM through Ollama, like qwen3-coder-next:80b, is more than enough. Privacy is free (except for the ridiculously expensive MacBook work bought me), and the limited capability is a feature: it removes the temptation to hand everything over to an agent and forces a clear mental model of the task at hand.
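For the curious, the local setup above is just a couple of Ollama commands. This is a usage sketch, not a definitive recipe: it assumes `ollama` is installed with its server running, and the model tag is the one mentioned above.

```shell
# Pull the local model once (large download; an 80b model needs plenty of RAM)
ollama pull qwen3-coder-next:80b

# Ask a general question interactively, Stack Overflow style:
# a problem described in your own words, no repo attached.
ollama run qwen3-coder-next:80b \
  "Explain, without writing a fix, when a .NET finalizer runs and why relying on it for cleanup is risky."
```

No agent harness, no repo access: just a question and an answer, which is exactly the constraint being argued for.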

The point

The trend seems to be to maximize AI capability, but the cost is too high. We need to remain sharp, balance the productivity gains against the knowledge lost, direct the learning path where we want it to head, and delegate the things that do not point in that direction. Constrain the AI to match the level of engagement you want to maintain.

AI is a tool. Tools shape their users. A man with GPS forgets how to read a map. That is fine if the signal never drops. But it does, and when it does, you are lost.

Rules

  • Delegate boring work fully. Automate what you have already mastered.
  • Describe interesting problems in your own words before asking AI.
  • If you cannot describe it in general terms, ask AI to explain, not to fix.
  • Read documentation before writing code. Do it away from the computer for added awesome.
  • Use local models for thinking. Save cloud agents for labor.
  • Constrain the tool to match the engagement you want to keep.
