I recently had an interesting conversation with an investor.
I was explaining a very concrete technical problem and the solution behind it.
At some point he asked: “Won’t AI solve this in a few years? If so, doesn’t that make your solution irrelevant?”
That question stuck with me — because my instinctive reaction was: why would that make it irrelevant?
If a problem can be solved without AI, that solution is always:
- faster
- cheaper, or even zero-cost
- deterministic
- easier to reason about
- easier to trust
In other words, it has fewer moving parts. Using AI to solve a deterministic problem feels a bit like using ChatGPT as a calculator. Yes, it can tell you that 2 + 2 = 4. But the calculator still wins — every time.
I’m starting to think that the real winners won’t be “AI-powered everything”, but systems where AI is used only where the problem is actually probabilistic or ambiguous.
So I’m curious how others see this: Do non-AI solutions become less valuable just because AI could solve the same problem? Or do simple, deterministic solutions actually become more important in an AI-heavy world?
Top comments (15)
Yes, that's the principle of least power, plus the principle of least surprise. 100% agree that simple, deterministic solutions actually become more important in an AI-heavy world. And so does human intelligence based on real-world experience, learned on the job.
Well said.
Did you ask ChatGPT, or do you write like an AI agent in your everyday life?
Before LLMs, people commented on my grammar mistakes.
Now they comment on whether an LLM was used.
Same reflex.
Different excuse.
Still no engagement with the argument.
This is the same reason why even a basic NES-powered chess engine beats ChatGPT at chess, decades after Deep Blue beat the world champion. It isn't that AI got "worse", it's that these are apples to oranges once you look past the term "AI".
Pulling away all the hype and marketing, LLM-powered AI systems are like any other system: they are good at the specific things they are built for and bad at everything else.
The distinction with modern AI is that it's great at creating the perception of being able to accomplish almost anything. Except that in practice it's bad at basically everything and not able to get any better, but at a glance it looks fantastic and completely capable.
So depending on what you're doing, it's possible AI could do it partially, never, or tomorrow. Practically, it will probably do a partial job right now, or it will do a crap job forever.
Definitely something to ponder. As an engineer I agree with you that solving a problem without AI has its upsides, which you mentioned in the post... But at the same time my main struggle is the communication gap: how do we convince non-engineers that "AI-powered" isn't always the superior architecture?
I guess you could say to your investor: think of AI like a smart person. They can do a really large set of arithmetic operations using pencil and paper and get the right answer, or they can use a calculator. Even though the calculator can't do anything except arithmetic, it's massively more efficient than a human with pencil and paper. The same is still true for AI: it's like the person, capable of lots of things, but better when aided by tools and deterministic methods that already exist.
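To make that analogy concrete, here is a minimal Python sketch of the routing idea: the well-specified work goes to a deterministic tool, and only the ambiguous remainder would ever reach a model. The `calculator`, `ask_model`, and `answer` names are hypothetical stand-ins, not any particular framework's API.

```python
import ast
import operator

# Deterministic arithmetic: same input, same output, every time.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str):
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)

def ask_model(question: str) -> str:
    # Hypothetical stand-in for a probabilistic model call.
    return "model answer for: " + question

def answer(question: str) -> str:
    try:
        return str(calculator(question))   # cheap, deterministic path first
    except (ValueError, SyntaxError, KeyError):
        return ask_model(question)         # fuzzy path only when the tool can't apply

print(answer("2 + 2"))          # "4", from the calculator, never the model
print(answer("plan my week"))   # falls through to the model stub
```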
This is a really interesting discussion.
If there is just one possible 'correct' outcome, then AI could still help find the most efficient way to reach that outcome.
If there is more than one possible 'correct' outcome, then AI could still help find other outcomes that you may not have thought of.
No one is sure AI will be around in a few years 🙂 Now seriously — it depends. If your solution is cheaper and better than AI — why not?
Technical perspective: AI does not replace determinism — it competes with uncertainty
If a problem can be solved deterministically, introducing AI does not inherently make the solution better. In many cases, it does the opposite.
Deterministic solutions provide properties that AI-based approaches fundamentally cannot guarantee:
- predictable behavior
- bounded failure modes
- explainability by construction
- stable performance across contexts
- clear accountability when something goes wrong
AI excels in domains where the problem space is probabilistic, ambiguous, or poorly specified. Using it outside those domains often increases system complexity without increasing system value.
From a systems architecture perspective, the question is not “Can AI solve this?” but rather:
- Does the problem require inference under uncertainty?
- Does variability add value, or does it introduce risk?
- What happens when the system is wrong?
In safety-critical, financial, or governance-sensitive systems, replacing deterministic logic with probabilistic behavior is rarely an upgrade. It is a trade-off — and often a costly one.
As AI becomes more widespread, deterministic systems do not become obsolete. They become more important as anchors of reliability, trust, and control within increasingly complex architectures.
The real challenge is not making everything AI-driven, but knowing precisely where AI should stop.
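One way to picture "where AI should stop" is a deterministic guard around anything a model proposes. The sketch below is illustrative only: the limit, the profile, and `propose_limit` are made-up placeholders, not a real policy or model API.

```python
# Probabilistic component proposes, deterministic rule decides.
MAX_DAILY_TRANSFER = 10_000  # hard, auditable business rule

def propose_limit(customer_profile: dict) -> float:
    # Hypothetical stand-in for a model suggestion based on fuzzy signals.
    return 12_500.0

def approved_limit(customer_profile: dict) -> float:
    proposal = propose_limit(customer_profile)
    # Deterministic clamp: predictable behavior, a bounded failure mode,
    # and a clear place to point to when something goes wrong.
    return min(proposal, MAX_DAILY_TRANSFER)

print(approved_limit({"id": "c-42"}))  # 10000, never above the hard rule
```

The probabilistic part can improve the outcome, but the properties listed above (predictability, bounded failure, accountability) still come from the deterministic layer.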
I worry about the same issue too.
I think AI can work better than a person and get things close to perfect, but a person's imperfection can bring creativity too!!
So this is key to people's future