Micheal Angelo

What Should LLMs Decide — and What Should We Decide First?

Are LLMs Solving the Right Problem — or Just the Easy Part?

I’ve been thinking about something while working with LLM-based systems and RAG-style pipelines, and I’m genuinely unsure whether my intuition is right or wrong.

So I’m putting this out as a question, not a conclusion.

I’d really like to know how others think about this.


A Subtle Shift in Where the Hard Part Lives

With modern LLMs, something interesting has happened:

  • Writing code has become relatively easy
  • Forming clear, correct logic is still hard

If the logic is solid, an LLM can usually translate it into working code remarkably well.

If the logic is unclear, the output might still look convincing — but be quietly incorrect.

This feels like a shift in where most of the real effort now lives.


This Isn’t a New Problem — Just a New Surface

Even before LLMs, many of us experienced this during:

  • Competitive programming
  • Algorithms courses
  • System design interviews

Often:

  • The hardest part was deciding what to do
  • The easiest part was writing how to do it

LLMs compress that second step almost entirely.

Once intent and structure are clear, implementation often follows effortlessly.


The Constraint We Can’t Ignore: Limited Context

In real systems, context is not infinite.

You can’t:

  • Dump an entire codebase into a model
  • Ask it to infer everything end-to-end
  • Treat it as a black box that “just knows”

You’re forced to decide:

  • What information matters
  • What must remain explicit
  • What can safely be inferred

This alone requires strong upfront reasoning.
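
To make that concrete, here's a rough sketch of the kind of upfront decision I mean: choosing which chunks fit into a fixed context budget before the model ever sees them. Everything here is illustrative; the keyword-overlap scoring and whitespace token count are naive stand-ins for a real retriever and tokenizer.

```python
# A minimal sketch: decide what goes into the context before calling the model.
# The relevance score and token estimate are naive placeholders, not a real
# embedding-based retriever or tokenizer.

def estimate_tokens(text: str) -> int:
    # Rough approximation: whitespace-separated word count.
    return len(text.split())

def relevance(query: str, chunk: str) -> float:
    # Placeholder scoring: plain keyword overlap between query and chunk.
    q_words = set(query.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words) / (len(q_words) or 1)

def select_context(query: str, chunks: list[str], budget: int) -> list[str]:
    """Pick the most relevant chunks that fit inside a fixed token budget."""
    ranked = sorted(chunks, key=lambda c: relevance(query, c), reverse=True)
    selected, used = [], 0
    for chunk in ranked:
        cost = estimate_tokens(chunk)
        if used + cost > budget:
            continue  # explicit decision: whatever doesn't fit is left out
        selected.append(chunk)
        used += cost
    return selected

if __name__ == "__main__":
    docs = [
        "Invoices are validated against the ledger before posting.",
        "The deployment pipeline runs integration tests on every merge.",
        "Ledger entries are immutable once the month is closed.",
    ]
    print(select_context("how are invoices validated against the ledger?", docs, budget=20))
```

The scoring itself is beside the point. What matters is that the selection and the budget are explicit decisions in our code, not something we hope the model works out on its own.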


Reasoning Doesn’t Disappear — It Moves

One thing I’ve noticed is that reasoning never actually goes away.

It either:

  • Happens before the model is called
  • Or happens inside the model, implicitly

When it happens implicitly, small misunderstandings can compound:

  • A slightly wrong assumption feeds the next step
  • Which feeds another
  • Until the final output looks coherent but is structurally flawed

The system didn’t fail loudly — it failed smoothly.

That makes me cautious.
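
A back-of-the-envelope way to see why: if each implicit step has even a small chance of a quiet misreading, and we (optimistically) treat those slips as independent, the odds that the whole chain stays sound drop fast.

```python
# Toy model: each of n implicit reasoning steps has a small, independent
# probability p of a quiet misreading. The chance the whole chain is sound
# is then (1 - p) ** n.
p = 0.05
for n in (1, 3, 5, 10):
    print(f"{n:2d} steps, 5% slip each -> {(1 - p) ** n:.0%} chance the chain is fully sound")
```

Ten steps at a 5% slip rate already leaves roughly a 60% chance of a chain with no quiet error in it, and nothing in the output tells you which case you got.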


Delegation vs Abdication

LLMs are extremely capable, but I’ve started thinking of them less as autonomous decision-makers and more as powerful collaborators.

They’re excellent at:

  • Pattern completion
  • Translation of intent into form
  • Filling in well-defined gaps

They struggle when:

  • The structure itself is ambiguous
  • The boundaries of the problem are unclear
  • Too much responsibility is delegated without constraints

Some burden still needs to stay on our side.
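
Here's roughly what I mean by keeping that burden on our side, as a sketch rather than a recipe: the model fills a well-defined gap (structured extraction), but the acceptance criteria live in our code. `call_llm`, the prompt, and the field list are all made up for illustration; swap in whatever client and schema you actually use.

```python
import json

# Sketch: delegate a well-defined gap to the model, keep the boundary explicit.
# call_llm is a stand-in for a real client; here it returns a canned response.

REQUIRED_FIELDS = {"customer_id": str, "amount": float, "currency": str}

def call_llm(prompt: str) -> str:
    return '{"customer_id": "C-1042", "amount": 99.5, "currency": "EUR"}'

def extract_invoice(text: str) -> dict:
    prompt = (
        "Extract customer_id, amount and currency from the text below. "
        "Reply with JSON only.\n\n" + text
    )
    raw = call_llm(prompt)
    data = json.loads(raw)  # fails loudly if the reply isn't even JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"model output failed validation on '{field}': {data!r}")
    return data

if __name__ == "__main__":
    print(extract_invoice("Invoice for customer C-1042: 99.50 EUR"))
```

The interesting part isn't the validation itself; it's that deciding what counts as acceptable output happened before the model was ever called.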


Complexity as a Smell (Sometimes)

I’ve seen systems grow increasingly elaborate:

  • Multiple agents
  • Deep orchestration
  • Grammars layered on grammars
  • Heavy abstraction to “guide” the model

Sometimes this is necessary.

But sometimes I wonder:

  • Are we compensating for unclear logic with infrastructure?
  • Are we externalizing reasoning instead of simplifying it?
  • Are we building systems that look robust but are hard to reason about?

I don’t think complexity is inherently bad — but it should earn its place.


The Question I Can’t Quite Resolve

So here’s what I’m genuinely unsure about:

Should LLM-based systems maximize autonomy —

or should they operate inside carefully reduced, deterministic boundaries?

In other words:

  • Do better systems come from asking models to reason freely?
  • Or from deciding as much as possible before the model is involved?

I can argue both sides — and that’s the problem.


Why I’m Sharing This

My intuition leans toward:

  • Clear structure over clever prompting
  • Deterministic steps where correctness matters
  • Simpler pipelines that are easier to reason about

But I also know:

  • Real-world systems are messy
  • Edge cases are real
  • What looks “simple” often hides complexity

So I’m not claiming certainty.


An Open Question

If you’re working with LLMs or RAG systems:

  • Where do you draw the line between logic and delegation?
  • How do you prevent subtle errors from compounding?
  • Have simpler approaches worked better for you — or failed?

I’d love to hear perspectives, especially those that disagree.

Sometimes the hardest problems aren’t about tools at all —

they’re about where we choose to think, and where we choose to let the system think.
