Gervais Yao Amoah
Prompt Engineering Is Mostly Guessing (And That's Okay)

We need to talk about prompt engineering.

Not because it’s useless—it clearly works. But because we’ve started treating it like a craft you can “master,” the way you’d master React hooks or database indexing. There are courses, certifications, LinkedIn titles, and even job postings.

Here’s the uncomfortable truth: prompt engineering is mostly structured guessing with good communication skills.

And honestly? That’s fine.


The Problem With Calling It “Engineering”

When we say engineering, we imply a few things:

  • Precision
  • Repeatability
  • Predictability

If you write a function today, it behaves the same tomorrow. If you build a bridge, it doesn't arbitrarily decide to do something else during lunch.

Prompts… do not share these qualities.

The same prompt can yield:

  • a perfectly reasoned answer on Monday
  • a hallucinated detour on Tuesday
  • a policy refusal on Wednesday after a model update

Try this prompt across three major models and compare:

Explain recursion to a beginner programmer using a real-world analogy.
Keep it under 100 words.

One model uses nesting dolls. Another picks infinite mirrors. A third invents a chef following a self-referencing recipe. All “correct,” all completely different.

Here’s Claude’s take:

[Image: Claude's answer, using the mirror analogy]

And here’s ChatGPT giving not one but two separate analogies:

[Image: ChatGPT's answer, with two analogies]

And that’s exactly the problem: you can’t predict any of this.


What We’re Actually Doing (If We’re Honest)

The real workflow looks something like this:

  1. Write a prompt
  2. Get something mediocre
  3. Add “think step by step”
  4. Get something slightly better
  5. Add “you are an expert”
  6. Get something different
  7. Tweak wording 13 more times
  8. Eventually land on something you can use

This isn’t engineering. It’s linguistic debugging—poking a very polite black box until the vibes are right.
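Sketched as code, that loop is just trial and error over prompt variants. This is a toy, not a real implementation: `call_model` is a hypothetical stub standing in for an actual LLM API call, and `score` is a crude word-count proxy for what is, in practice, you squinting at the output.

```python
def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call. This toy version
    # just rewards prompts containing the usual magic words, so the
    # loop has something to "optimize."
    detail = prompt.count("step") + prompt.count("expert")
    return "analogy " * (1 + detail)

def score(output: str) -> int:
    # In real life, this function is your own judgment. Here: word count.
    return len(output.split())

base = "Explain recursion to a beginner using a real-world analogy."
variants = [
    base,
    base + " Think step by step.",
    base + " You are an expert teacher. Think step by step.",
]

# "Linguistic debugging": try each wording, keep whatever scores best.
best = max(variants, key=lambda p: score(call_model(p)))
```

Notice that nothing here explains *why* the winning variant wins — you just keep the one that happened to work. That is the whole critique in four lines of code.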

And that’s okay! But let’s call it what it is.


Why Prompting Does Work

Prompts work not because we’re exploiting deep model secrets, but because we’re applying the same principles you’d use when explaining something to a junior developer:

  • Be clear.
  • Be structured.
  • Give context.
  • Set constraints.

These aren’t engineering techniques. They’re communication techniques.

If you can explain a complex idea cleanly to a human, you can write a good prompt.


The Real Skill Isn’t Prompting—It’s Knowing What You Want

The best “prompt engineers” I’ve met aren’t great because they can craft clever incantations. They’re great because they can:

  • define problems clearly
  • evaluate whether an answer is good or bad
  • iterate toward a solution
  • understand their domain deeply

Notice what’s missing?

Prompt tricks.

If you don’t know what “good” looks like, even the perfect prompt won’t save you.


The Future: Less Prompting, More Goal-Setting

Here’s the other reason I think the hype will fade: modern models are getting better at interpreting messy, natural language. They’re starting to:

  • ask clarifying questions
  • correct themselves
  • handle multi-step reasoning
  • infer intent even from vague queries

We’re moving toward systems where you specify a goal—

“Build me a dashboard that tracks X”

and the agent handles the internal prompting for you.

In that world, prompt engineering is less like a core skill and more like knowing how to tune a carburetor: still useful in niche cases, but irrelevant for most people.


So What Do We Call It?

If it’s not engineering, what is it?

Maybe AI communication.

Maybe prompt shaping.

Maybe prompt vibing (my personal favorite).

Because that’s what’s actually happening—we’re learning how to talk to a probabilistic conversational partner that sometimes nails it and sometimes confidently makes things up.

It’s a useful bridge skill while the tools mature — but probably not a job title that survives the next decade.


The Bottom Line

Prompt engineering works. But it’s not engineering, and pretending it is sets the wrong expectations.

The long-term skills that actually matter are:

  • Critical thinking — spotting wrong or shaky outputs
  • Domain expertise — knowing what “right” looks like
  • Problem decomposition — breaking tasks into solvable steps

Master those, and you’ll thrive—prompts or no prompts.


Try this experiment: Take your most "engineered" prompt and run it through three different models. I bet you'll get three viable but completely different answers. That's not a bug—it's just how language models work.

What do you think?

Is prompt engineering a real discipline, or are we all just winging it with nice formatting and good vibes? I’d love to hear your take.
