DEV Community

/dev/scratchpad
We’re Not Solving Problems Anymore

I've been developing for two decades: web things, applications, games, scripts, plugins... whatever my curiosity led me to. What always drove me was creating. As in, there was nothing, and now there is something I can use.

I also like puzzles. As, I guess, most developers do. How to architect code. How to put things together. How to do X when Y happens. These are things we (used to?) do every day.

That's what I liked about development: figuring things out. And lately, I feel like that part is disappearing.


I tend to think about how things will work together before writing any line of code. Earlier in my career, planning could get me lost in a maze of notebook pages and poorly drawn UML diagrams.

This mindset also pushed me to keep learning. New skills. New languages. Keeping up with what's going on.

Need to send messages between two apps? Check this RabbitMQ clone some random guy made, it's great.
Need to compute complex coordinate distances? Here is an NPM package that I promise hasn't been hacked... yet.
Need a one-page website? Use Rust: you won't have memory leaks.

I'm being cheeky, but learning and staying in the loop is a big part of the job.

I've been fortunate to work at a company that encourages writing technical specifications before implementing features, with validation before touching code.

That gave me space to think things through. To discuss design with colleagues. To improve projects together. It also taught me not to overdo it: clients have budgets, and at some point, things need to ship.

Nowadays, fast iteration is standard. It wasn't always like that.

As I became more senior, I spent a lot of time doing code reviews. Even a few months ago, it was a third of my workload. I'm also given time to stay up to date and share what I learn through an internal newsletter (for others to enjoy, or to send straight to the trash).


I'll say it right away: I'm not anti-AI.

AI is a great tool, even if the crypto-consortium-turned-AI-bro-lobby is trying to turn it into something it's not.

Like everyone else, I ask ChatGPT things I'm too lazy to Google. I've even asked it to put googly eyes on pictures of friends and family (Google can't do googly eyes; that's a missed opportunity), while feeling slightly guilty about the environmental cost.

As someone who enjoys building systems, I find AI great for writing code I've already solved dozens of times. I don't care about CRUD, date handling, or email validation. Hell, I don't even want to open a Swagger file just to find an API route. AI can do that.

What I care about is how the pieces fit together.

For a while, I used AI as a rubber duck. I would iterate on ideas, get a clear view of the system, then use an agent to generate code step by step.

In that setup, AI is a tool. I'm still the one using it.


Recently, my company adopted a full agentic AI workflow. Everything from business analysis to code review is handled through internal tools: AI agents, skills, commands, etc.

The goal is simple:

  • take a client need and split it into detailed use cases.
  • turn use cases into technical specifications.
  • turn specifications into implementation steps.
  • execute those steps.
  • review the generated code.
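
The pipeline above is essentially a chain of stages where each agent's output becomes the next agent's input. A minimal sketch, with entirely hypothetical names (these are not my company's actual internal tools, and real agents would call an LLM instead of formatting strings):

```python
# Hypothetical sketch of the agentic pipeline: each stage consumes the
# previous artifact and produces the next one. Stage bodies are stubs.
from typing import Callable

Stage = Callable[[str], str]

def use_cases(need: str) -> str:
    return f"use cases for: {need}"

def specification(cases: str) -> str:
    return f"spec from: {cases}"

def implementation_steps(spec: str) -> str:
    return f"steps from: {spec}"

def execute(steps: str) -> str:
    return f"code from: {steps}"

def review(code: str) -> str:
    return f"reviewed: {code}"

PIPELINE: list[Stage] = [
    use_cases, specification, implementation_steps, execute, review,
]

def run(need: str) -> str:
    artifact = need
    for stage in PIPELINE:
        # The developer's role: read the output, then type "next".
        artifact = stage(artifact)
    return artifact
```

Structurally it's just function composition; the human sits between the arrows, validating each hop.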

Here is what a developer is expected to do:

  • If you're a lead dev: read a 300-line use case markdown file. Re-prompt and AFK for a coffee, until validation.
  • If you're a lead dev: read a 500-line specification. Re-prompt and AFK again, until validation.
  • If you're a dev: ask AI to split the spec into tasks. AFK.
  • Ask AI to code each task. Type “next” until it's done. AFK.
  • Ask AI to review, fix, and push.

Wife: What did you do today?
Me: I spent my day writing nothing but "next" in a conversation with a Turing-test passing LLM.

I don't know if you noticed, but there's something missing here.

There is no figuring things out. No puzzle solving. One could even say there is not much thinking involved at all. Just babysitting the AI.

Sure, you might step in if something looks wrong. During the first weeks of this workflow, you will course-correct. You fix prompts. But you don't really engage with the problem anymore.

We're not solving problems. We're validating outputs.


Now imagine doing this every day.

After some time, you become out of touch with the code. Your brain gets lazy. You start missing issues, or you don't notice that some random color the client chose has been forgotten.

Shipping faster. Touching less code. With your whole team adding code like this, everyone drifts further from the codebase, understanding less and less of the system.

Or worse: you join a project you've never worked on.

You read specifications, but you don't really know how the system behaves globally. At some point, you just begin to trust the AI's word.

And despite what the Twitter-now-X-AI-conglomerate would like you to believe, you shouldn't trust the AI's word.

AI won't intentionally break your app of course. I'm not talking about crazy things the AI could do like giving access to sensitive data by accident (that would never happen right? RIGHT?). Nowadays, LLMs are smarter and don't hallucinate as much as they did. But it will break your app in subtle ways. And you won't catch everything because you're no longer close to the code, no longer paying attention.

Even if you have great prompting skills, you can't fit everything into a prompt or a context window. That feature mentioned casually by the client in a meeting that will come with v2? AI wasn't there. You were.

You're the one who can shape the system so future changes don't break everything.

At least, you used to.


The nails in the coffin were two comments from my team lead and universally-beloved-scrum-master.

First, he rejected a junior developer's goal of becoming “senior” in a frontend framework. His reasoning: AI writes the code and knows better. Learning it is unnecessary. This makes me mad.

Second, he told me I don't need human code reviews anymore. AI already does that. I can auto-merge. This also makes me mad.

Both statements are fundamentally wrong.

Learning is how you get better and faster at puzzle solving. It's also how you spot issues in what AI produces. Code reviews are one of the best places to learn.


That said, this workflow is not all bad.

A company exists to increase revenue. A tool that helps ship faster - even if it introduces issues, as long as fixing them is still faster than doing it all "by hand" - is attractive. Understandably so.

But it raises questions for the future.

Will we still read the code we ship?
Will large AI-generated code bases be maintainable?
Will adding features be smooth or a constant fight?
Should I even keep learning? (yes)
Do I still bring value?

I don't have the answers.

Maybe AI will become so good that no issues ever arise. Maybe we'll find new kinds of problems to puzzle over.


I used to be too mentally drained after work to touch personal projects.

Now, with this workflow, I have energy again.

Because I'm not thinking all day.

And I'm not sure that's a good thing.
