Anna

Entering the World of AI Mechanics

I’ve been thinking a lot lately about how working with LLMs is spooky, as in “spooky action at a distance” spooky, and a few parallels to the world of physics keep coming to mind.

We’re going through a paradigm shift in tech right now, which is a dramatic statement to make at any time and is often incorrect (ahem, blockchain). But given the size of the current #genAI bandwagon, it feels not far off to say that things will not be the same once LLMs are widely adopted.

Being somewhat of a physics buff, I can’t help but see comparisons between what’s currently happening in genAI and what happened when quantum mechanics was introduced into a classical mechanics world, as both brought about a dramatic change in thinking.

In classical mechanics, if you roll a ball from point A, and you know the exact force it was pushed with and the amount of resistance, you’ll know when it will reach point B. If it doesn’t reach point B when you expected it to, something is wrong with your calculations, not with the laws of physics.

The world of classical computing is much like the world of classical mechanics: input A should always result in output B. Sure, there are complexities and race conditions, but for the most part, whatever code you’re writing is likely to be buggy because you didn’t think of some side effect, not because the logic suddenly changed on you.

Not so with LLMs. Input A sometimes results in output B, sometimes in output C, and sometimes in “I’m sorry, I can’t answer that question right now”. And so we enter the quantum world of probabilities, where an atom is X% likely to be in a given position, but you will never be 100% sure until you measure it.

We can give LLMs safeguards and engineer our prompts in specific ways, but the chance that an answer is what we expect will always be a probability, not a guarantee; we’re never sure of the output until it’s measured by the user’s reaction.
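
To make that concrete, here’s a rough sketch of what those safeguards often boil down to in practice: check the output and retry when it fails. Both `call_llm` and `looks_valid` below are hypothetical placeholders rather than any real API; the point is that retries raise the odds of a valid answer but never make it a certainty.

```python
import random

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call: assume it gives us
    # what we expect ~90% of the time and something off-target otherwise.
    return "valid answer" if random.random() < 0.9 else "unexpected output"

def looks_valid(answer: str) -> bool:
    # Hypothetical guardrail: a schema check, a keyword check, etc.
    return answer == "valid answer"

def ask_with_retries(prompt: str, max_attempts: int = 3) -> str | None:
    # Retrying improves the odds (roughly 99.9% here for three attempts
    # at 90% each), but it never reaches 100%.
    for _ in range(max_attempts):
        answer = call_llm(prompt)
        if looks_valid(answer):
            return answer
    return None  # still possible: every attempt missed
```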

That means that, as engineers, we need to change our mindset: from building in a world of known laws to building in a world of probabilities, optimizing for the best average outcome or the most consistent result.
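
One way to optimize for the most consistent result is to sample the model several times and keep the answer it gives most often, an approach sometimes called majority voting or self-consistency. Again, `call_llm` below is a hypothetical placeholder, not a specific library:

```python
import random
from collections import Counter

def call_llm(prompt: str) -> str:
    # Hypothetical model call: sampled answers that usually, but not
    # always, agree with each other.
    return random.choice(["42", "42", "42", "41", "43"])

def most_consistent_answer(prompt: str, samples: int = 5) -> str:
    # Sample several times and keep the most common answer. We're no
    # longer asking what the answer is, but which answer the model is
    # most likely to give: a probabilistic question.
    answers = [call_llm(prompt) for _ in range(samples)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

print(most_consistent_answer("What is 6 * 7?"))
```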

We also need to realize that, for the average user, this will initially look like a degradation: we’ve gone from presenting predictable outputs to widely varying outcomes for the same input, which can be jarring at best and a poor experience at worst.

Rolling out half-baked products without a sufficiently high probability of valid results is a good way to frustrate users; no disclaimer will make up for a bad first experience. Most users still live in the classical world, and rather than meeting them where they are and easing them into quantum outputs, we’re pulling the rug out from under them and hoping we’ve engineered our prompts correctly, when “correct” is actually a percentage and not a bool.

There’s a final parallel, though: quantum effects only dominate at the subatomic scale. Once you have a mass of atoms comprising, say, a ball, it behaves in a thoroughly classical way. Perhaps masses of software, at the enterprise or infrastructure level, should likewise behave in a classical, predictable, and repeatable way.

That means there are places and use cases for LLMs, but there are also areas that should very much stay classical, where probability is detrimental to the experience and the blanket “put an AI on it” push is counterproductive. We’re still learning where that line is, and until we find it, maybe some use cases should remain in the classical world.
