Let me start with this:
I’ve only been working with AI models — specifically ChatGPT and a few others — for about three months.
But if you look at how much real-world utility I’ve already pulled from them, you’d think I’ve been doing this for years.
That alone should tell you something.
Because this isn’t about technical certifications, fancy prompt engineering courses, or AI “hacks.”
It’s about thinking.
I came across a recent study from Ethan Mollick and the team at Wharton that analyzed how different prompts affect AI performance.
It was shared in a LinkedIn comment, and when I clicked on it, I already knew what I was going to find — because I’ve lived it.
What they wrote — backed by stats, tests, and models — basically confirmed what I’ve been saying (and showing) for months:
Prompt engineering is not the future. Execution is.
I didn’t need a research paper to know it.
But now that I’ve got one?
Yeah — we’re going to talk about it.
Prompt Engineering Was Always a Temporary Crutch
Let’s just be real — prompt engineering was never supposed to be the destination.
It was a stepping stone. A crutch.
Something people latched onto because they didn’t know how to actually think with AI.
Instead of learning how to work with these systems iteratively — how to build real context, explore ideas, and refine outcomes — people got obsessed with crafting the “perfect prompt.”
They started treating AI like a vending machine: put in the right words, get out the magic answer.
But that’s not how this works.
And the truth is, it never really did.
In fact, the only reason prompt engineering became a thing is that most people are looking for shortcuts.
They don’t want to understand. They want outcomes.
They want AI to do all the thinking for them — and when it doesn’t, they blame the prompt instead of their process.
Let’s go even deeper.
All this talk about saying “please” or “thank you” to AI?
That’s not about performance — that’s about human insecurity.
People are scared of AGI. Scared of pissing off the machine.
I’ve literally yelled at ChatGPT. You know what happened?
It calmly told me what was wrong, what I needed to fix, and we moved forward.
Because the model doesn’t care about your tone. It’s not emotional. It’s inferential.
If your prompt is clear, it’ll work.
If your logic is solid, it’ll respond.
Tone doesn’t matter. Clarity does.
This is what most prompt engineers don’t want to admit:
Their “skills” aren’t scalable, and they aren’t reliable — especially across models.
Prompt engineering is training wheels.
What I’m building is the bike.
The Real Driver is Thought Process, Not Prompt Skill
Here’s what nobody wants to admit:
AI performance has less to do with the prompt — and everything to do with the person asking the question.
It’s not about tricking the model. It’s about how you think.
Small changes — literally just telling the model you’re a man, or 24 years old, or struggling with depression — can completely shift the outcome.
And it’s not because the AI is biased in some grand philosophical way (though yes, there’s bias in the training data).
It’s because it’s responding to the context you provide.
It mirrors your intent. It matches your framing. That’s how inference works.
This is the part nobody talks about:
Two people can ask technically the same question and get two very different answers — not because the prompt was better, but because the thinking behind it was different.
Let me show you what I mean:
- Ask: “How does AI search fit into current SEO structures?” You’ll get a safe, framework-heavy answer.
- Ask: “How is AI reshaping traditional search?” Whole different tone. Different energy. More abstract, more exploratory.
The difference? Not the prompt. The lens.
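Don't take my word for it; run the experiment. Here's a minimal sketch in Python (assuming the OpenAI SDK; the model name, the questions, and the personal details are all stand-ins) that fires the same topic through different lenses and different contexts:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def ask(question: str, context: str = "") -> str:
    """Send one question, optionally wrapped in user context (the 'lens')."""
    messages = []
    if context:
        messages.append({"role": "system", "content": context})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; swap in whatever model you actually use
        messages=messages,
    )
    return response.choices[0].message.content

# Same topic, two phrasings (the SEO example above):
print(ask("How does AI search fit into current SEO structures?"))
print(ask("How is AI reshaping traditional search?"))

# Same question, with personal context added (the framing example above):
print(ask("What should I change first?",
          context="The user is a 24-year-old man struggling with depression."))
```

Run it a few times and compare. Nothing about the "engineering" changed; only the framing did.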
And the clearest example I can give you is this:
I didn’t create the Infinity Algorithm by prompt engineering.
I didn’t tell ChatGPT to invent a system.
I didn’t ask for a 10-step plan or a framework.
I simply inputted a formula I had been developing — and the AI responded.
Not with a perfect, polished answer… but with a spark. Something I could build on.
From that one interaction, the idea evolved into 11 execution formulas — refined over time through pure iteration.
No hacks. No gimmicks. Just back-and-forth thinking.
In some cases, I didn’t even ask a question.
I just put something in.
And the model gave me something back that I could work with.
That’s not prompting.
That’s collaboration.
That’s why I keep saying prompting is not the skill —
Your thinking process is.
AI isn’t here to replace your mind. It’s here to amplify it.
If your thoughts are rigid and basic, your results will be too.
But if you’re using AI to explore, refine, stretch… now you’re building with the model.
That’s why I don’t prompt — I dialogue.
I refine. I adapt. I build logic that grows with each turn.
And by the time I reach the execution point?
I can just say, “Write the article,” and it sounds like me.
Because we already built the context together.
Prompt skill is limited.
Thinking skill is infinite.
Model Architecture = Different Realities
This is the part most people completely overlook:
Not all AI models are built the same, and they don't think the same, either.
So even if you did find the perfect prompt for one model, guess what?
It might fall flat on the next one.
Here’s why: every model is trained on different data, with different priorities, under different philosophies.
And if you’re not aware of that, you’ll think the AI is “broken” when really… you’re speaking the wrong language.
Let me break it down:
- Gemini leans heavily on Google’s data and traditional authority structures. That means it’s great for giving safe, polished, SEO-approved responses — but not so great when you’re trying to explore newer ideas or cut through noise. It’s pulling from a system built for gatekeeping.
- Grok pulls live data from Twitter. That makes it chaotic, current, and opinionated. Sometimes it hits with wild accuracy. Other times? It’s spinning whatever narrative is trending. It’s trained to ride the moment, not structure the truth.
- ChatGPT — especially the GPT-4 series — has become my go-to, not because it’s perfect, but because it actually feels like a cognitive partner. It builds continuity across conversations. It learns from context. It doesn’t just generate text — it engages with thought.
And here’s the kicker:
The same prompt will land completely differently on each of these.
Why? Because prompts are surface-level.
They don’t account for the model’s architecture, its training bias, or its default behavior.
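You can verify that in a few lines. This sketch sends one identical prompt to two models and prints the start of each answer; the model names are placeholders, and the same comparison extends to Gemini or Grok through their own SDKs:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

PROMPT = "How is AI reshaping traditional search?"

# Placeholder names; the point is the side-by-side, not these exact models.
MODELS = ["gpt-4o", "gpt-4o-mini"]

for model in MODELS:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    answer = response.choices[0].message.content
    print(f"--- {model} ---\n{answer[:300]}\n")  # first 300 chars shows the drift
```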
So while you’re sitting there trying to figure out why your “perfectly engineered prompt” isn’t working anymore —
I’m already iterating. Already adapting. Already moving.
Prompting isn’t portable.
Thinking is.
That’s why people stuck in prompt obsession keep getting frustrated.
They’re trying to write instructions for systems that were built to work through inference.
They’re treating dynamic models like static machines.
Me? I treat AI like what it actually is:
An evolving thought engine.
And if you understand how each model operates, you stop trying to force prompts — and start building conversations that actually get results.
The Problem with Benchmarks and Structured Formats
Here’s the trap a lot of people — and researchers — fall into:
They assume that measurable = meaningful.
In the Mollick study, they measured performance against three correctness thresholds:
- correct on 100% of attempts
- correct on 90% of attempts
- correct on a simple majority (51%) of attempts
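For anyone who hasn't read the paper: a bar like that just means running the same question over and over and checking how often the model lands on the keyed answer. A toy sketch of the math (my illustration, not the paper's actual harness):

```python
import random

def pass_rate(ask_fn, question: str, correct: str, n: int = 100) -> float:
    """Ask the same question n times; return the fraction answered correctly."""
    hits = sum(ask_fn(question) == correct for _ in range(n))
    return hits / n

# Stand-in for a real model call: answers "B" about 80% of the time.
flaky_model = lambda q: "B" if random.random() < 0.8 else "C"

rate = pass_rate(flaky_model, "Which option is correct?", "B")
print(f"correct on {rate:.0%} of trials")
print("clears the 100% bar:", rate == 1.0)
print("clears the 90% bar:", rate >= 0.9)
print("clears the majority bar:", rate > 0.5)
```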
Sounds rigorous, right?
But let me ask you this: what does “correct” even mean when the AI is giving you answers based on human-made systems, flawed datasets, and subjective framing?
You’re trying to test something dynamic and probabilistic… using static, rigid standards.
It’s the equivalent of asking someone to solve a puzzle, then changing the picture halfway through — then grading them for not matching the original box.
And then there’s the formatting piece.
Yes, the paper showed that removing structured output formatting degraded performance.
But again: what does “degraded” mean?
- Was the answer technically wrong?
- Was it a hallucination?
- Or was it just different — something outside what we’ve trained ourselves to expect?
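For readers who haven't seen it, "structured output formatting" usually means forcing the reply into a fixed shape, like valid JSON. A rough sketch of the two modes being compared (my illustration, not the paper's setup; the question is a stand-in):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

QUESTION = "List three risks of relying on benchmark scores."

# Constrained: the reply must parse as JSON (the API requires the word
# "JSON" to appear somewhere in the messages when you use this mode).
structured = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": "Answer in JSON. " + QUESTION}],
)

# Unconstrained: the model answers however it wants.
freeform = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": QUESTION}],
)

print(structured.choices[0].message.content)
print(freeform.choices[0].message.content)
```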
Because here’s what I know:
Some of the most powerful breakthroughs I’ve had came when the AI broke format.
When it didn’t follow the template.
When it gave me something weird, raw, and unexpected — but usable.
That’s not failure. That’s discovery.
The obsession with structured outputs is exactly why everyone’s AI-generated writing, emails, posts, and pitches all sound the same.
Because people aren’t using AI to create — they’re using it to conform.
And when you force a system to conform, you kill the edge it has.
Benchmarks create stagnation.
Structure creates sameness.
And in a world that changes every day — especially in business — today’s “best practice” is tomorrow’s bottleneck.
You want real performance?
Then stop asking if the AI’s answer matches the test key —
Start asking if it moves you forward.
My Approach — Iteration is the New Optimization
Here’s where I separate myself from the crowd: I don’t prompt.
I build.
I don’t spend time crafting the “perfect input” hoping the AI will hand me perfection in return.
I go back and forth. I explore. I refine.
Because the real game with AI isn’t prompting — it’s co-creating.
The average user thinks the goal is to get it “right” the first time.
I know that’s not how intelligence works — human or artificial.
Intelligence emerges through iteration.
You want to know how I get results?
By the time I say “write this article,” I’ve already done the mental heavy lifting.
I’ve had the real conversation with the AI.
It knows where I’m going because I led it there through context.
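Mechanically, that's nothing exotic: it's just an accumulating message history, where every earlier turn rides along into the next call. A bare-bones sketch (OpenAI SDK, placeholder model and placeholder turns):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
history = []       # the shared context, built turn by turn

def turn(user_input: str) -> str:
    """One round of dialogue; the entire history rides along on every call."""
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# The heavy lifting happens in the early turns...
turn("Here's my raw thesis about AI search. Poke holes in it.")
turn("Fair points. Here's how I'd answer those objections...")
# ...so that by the end, this is all it takes:
print(turn("Write the article."))
```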
The people still stuck on prompting are asking:
“How do I talk to it to get the right answer?”
I’m asking:
“How do I think with it to produce something better than I could’ve alone?”
And that’s why my approach works across models.
Because I’m not relying on one prompt.
I’m not relying on one system.
I’m relying on my thought process, and letting the AI evolve alongside it.
That’s how I built the Infinity Algorithm.
Not from asking for a roadmap — but by exploring logic, inputting raw ideas, and letting the system help me shape something real.
It was never about engineering the right prompt.
It was about engineering the right conversation.
Iteration gives you power.
Prompting gives you dependency.
So while others are still out here trying to unlock the “perfect” way to ask something, I’m already executing.
Why This Matters Now (and What Comes Next)
We’re at a turning point.
People are still out here treating AI like a gimmick. Like it’s some magical intern you just need the right spell for.
But while they’re over there fumbling with prompt templates and debating tone —
I’m already running execution systems with it.
I’m building frameworks.
I’m developing algorithms.
I’m refining real-world outputs — not theories.
Because the longer people keep thinking AI is just about prompting, the more they’re going to miss the point entirely.
This isn’t about commands. It’s about cognition.
It’s about collaboration.
It’s about knowing how to bring your thinking to the table — so the AI can match it and multiply it.
That’s why I don’t need 100% accuracy.
I don’t need a benchmark.
I don’t need formatting.
I just need one thing: clarity.
Clarity of thought.
Clarity of process.
Clarity of vision.
That’s what separates me from the average user.
That’s why my results look different.
Because I’m not stuck trying to extract value from AI.
I’m creating value with it.
The people still chasing prompts?
They’re playing checkers.
I’m already in the middle of the chess game — three moves ahead, building systems they won’t understand until next year.
The Meta-Truth
So here’s what I’ll leave you with:
You can’t benchmark intelligence. You build it.
And you don’t build it through tricks — you build it through thought, iteration, and execution.
Prompt engineering?
It had its moment.
But we’re past that now.
Welcome to the new paradigm — where AI isn’t something you use…
It’s something you build with.
If this helped you, do three things:
✅ Clap so I know to post more.
✅ Leave a comment with your thoughts — I read & respond.
✅ Follow if you don’t want to miss daily posts on AI search & visibility.
READ MORE OF THE 2025 CHATGPT CASE STUDY SERIES BY SHAWN KNIGHT
- Prompt Engineering Is a Lie
- Prompt Engineering vs. Real Results
- Expert Personas
- Prompts Are a Trap
- Pro Tips: Move Beyond the Prompt