
Grant B

I have a theory about AI (just like everyone else)

As someone who has used an absolute shit ton of AI, I'm not going to tell you it's not impressive. It is. It is so impressive that sometimes it scares me. There are times it's not impressive, and times it's sheer idiocy. All of that is true too.

AI is great at solving well-defined problems. It's great at it, and I think what we don't realize, or at least what I haven't realized, is how well defined most work actually is. Not that the work is easy or the designs obvious, but that you can usually build processes around most work to make it somewhat routine. As a person who makes generators and focuses on the meta, I've found that if you use well-defined structures, you can configure them into all sorts of unique designs that can be solved, and even generated, programmatically. The outcomes can be so impressive they boggle the mind. Common pieces become more than their parts. But my question is this: are truly unique problems always directly built on what came before? My argument for humanity is that this is not always true, that sometimes what we see is not an extrapolation but a rupture that destroys a false peak. This is my honest-to-goodness hope for what humanity has to offer.

At any rate, my theory is that AI is great at solving many problems because many problems are composed of smaller problems that can be individually defined and solved by common patterns, extrapolations on what existed before. What I think may be the truth is that AI does not participate in that rupture of the norm; rather, it is deeply entrenched in it. It can get so good that it can solve problems with many layers of defined patterns, at least eventually (I'm not saying that's where we are today). Which leads to my theory: we will not get the new technology we were promised. We will just get optimization and an abundance of the same, which, in fields where we have a genuine shortage, will be good for some but not most. I think we will see some great problems solved, composed of many common shapes. But that will be it. And that will be good enough to kill most knowledge work, or at least to convince everyone it's not worthwhile. Then forever we will have an abundance of what is, but never more than that. No boom. Nothing new. Just the same shapes we know today, sometimes in spectacular configurations.

I want to be wrong. And if you say I'm exaggerating, I want you to be right. Someday, maybe even today, I hope to be ridiculed for this opinion because I am so obviously wrong. I'm scared. I think everyone is, or should be at some level. My hope is that we see something in humanity, the spark of what is new, and realize that it's worthwhile. That we don't cheapen ourselves by deciding the same is all we can ever hope for just because it can be created cheaply and at great speed.

Now I hit publish, and here comes a potential torpedo for my career. I hope when I get to my luddite farm they can at least accept me there. Good luck everyone, and happy coding.

Top comments (1)

wong2 kim

I think you're onto something important here, and I don't think it's a luddite take at all.

Coming from 10 years of manufacturing engineering before switching to software, I've seen this pattern firsthand. Factory workflows are "well-defined problems" — and AI absolutely crushes them (vision inspection, DAQ monitoring, predictive maintenance). That's where I started.

But the moment I shifted to building consumer apps — things like a pregnancy tracking app or an ADHD planner — the "well-defined" framing breaks down fast. User needs are messy, emotional, and context-dependent. AI helps me write the code, but the judgment about what to build and why still comes from lived experience.

Your theory maps well to what I see: AI is a force multiplier for execution, but the problem-definition layer is still fundamentally human. And honestly, that's what makes building things exciting.