DEV Community

Robert Kirkpatrick

Originally published at Medium

I Stopped Writing Prompts and Started Writing Systems. The Results Weren't Even Close.

Everyone is still typing one question at a time into ChatGPT like it's a search engine with feelings.

I was doing the same thing six months ago. Ask a question. Get an answer. Ask another question. Get a slightly worse answer. Paste the whole mess into a doc and pretend it was useful.

Then I started looking at what the AI was actually doing under the hood. Not the answer it gave me... the process it used to get there. And I realized something that changed every project I've touched since.

The prompt you type is not the most important prompt in the conversation.

There's a Prompt Before Your Prompt

Every time you open ChatGPT, Claude, or Gemini, there's an invisible layer of instructions already loaded before you say a word. The industry calls it a system prompt. It's the AI's job description. It tells the model who to be, what rules to follow, how to format responses, and what to prioritize.

When you type "write me a marketing plan," the AI doesn't just process those six words. It processes those six words through a filter of instructions you never wrote and probably never thought about.

That filter determines everything. Tone. Depth. Whether it gives you a surface-level outline or an actual strategy. Whether it pulls from current information or regurgitates something from its training data. Whether it checks its own work or just keeps going.

Most people are fighting the default filter without knowing it exists.
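To make the invisible layer concrete: in chat-style APIs, the system message is loaded ahead of anything the user types. Here's a minimal sketch using the common chat-completions message shape; the instruction text itself is invented for illustration.

```python
# The "prompt before your prompt": in chat-style APIs, the system message
# sits in the payload before anything the user types. The message shape
# follows the common chat-completions convention; the instruction text
# below is purely illustrative.

SYSTEM_PROMPT = (
    "You are a senior marketing strategist. "
    "Verify claims against provided sources before asserting them. "
    "Structure every plan as: audience, positioning, channels, metrics."
)

def build_messages(user_input: str) -> list[dict]:
    """Assemble the full conversation payload the model actually sees."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # the invisible filter
        {"role": "user", "content": user_input},       # the six words you typed
    ]

messages = build_messages("write me a marketing plan")
print(messages[0]["role"])  # system
```

The model never sees your six words in isolation; it sees both entries, and the first one wins ties.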

The 500-Word Prompt Problem

There's a viral post going around right now. A guy with a massive following said to stop writing 500-word prompts. He said a 29-word prompt outperforms all of them.

He's half right.

A 500-word prompt dumped into a single chat message is a mess. It's like handing someone a novel and asking them to cook dinner from it. Too much information, no hierarchy, no structure. The AI picks up some of it, ignores the rest, and gives you something that feels incomplete because it is.

But the answer isn't to write shorter prompts. The answer is to stop thinking about prompts at all.

The answer is to think about systems.

What a System Prompt Actually Does

A system prompt isn't a question. It's architecture.

It tells the AI what role to play before you even show up. It defines what good output looks like. It sets constraints so the model doesn't wander into generic territory. And when it's built right, it does something most people don't realize is possible.

It forces the model to work before it answers.

That's the part nobody talks about. A well-built system doesn't just shape the AI's tone or format. It shapes the AI's process. It can require the model to research before responding. To analyze before summarizing. To evaluate its own output against criteria before delivering a final version.

The difference between a single prompt and a system prompt is the difference between asking someone a question on the street and hiring someone with a job description, a checklist, and a performance review built in.
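The "checklist and performance review" part can be pictured as a quality gate: score a draft against explicit criteria and only release it above a threshold. This is a toy sketch of the pattern, not any particular product; the rubric checks and the threshold are made up.

```python
# A toy quality gate: score a draft against explicit criteria and only
# deliver it if it clears the bar. The rubric and threshold are invented
# for illustration; a real system encodes criteria like these in the
# system prompt and has the model grade its own draft against them.

RUBRIC = {
    "has_audience": lambda text: "audience" in text.lower(),
    "has_metrics": lambda text: "metric" in text.lower(),
    "long_enough": lambda text: len(text.split()) >= 50,
}
THRESHOLD = 1.0  # every criterion must pass

def grade(draft: str) -> float:
    """Fraction of rubric criteria the draft satisfies."""
    passed = sum(check(draft) for check in RUBRIC.values())
    return passed / len(RUBRIC)

def deliver(draft: str):
    """Release the draft only if it clears the bar; otherwise withhold it."""
    return draft if grade(draft) >= THRESHOLD else None
```

The point isn't these particular checks; it's that the pass/fail decision happens before anything reaches you.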

Why the "Prompt Engineering Is Dead" Crowd Is Wrong (and Right)

There's a growing narrative that prompt engineering is over. That AI models are getting smart enough that you don't need to be clever with your wording anymore.

They're right about one thing. The era of trying to trick the AI with magic words is over. You don't need to say "act as a world-class expert" or "take a deep breath" or whatever hack was trending last month. Models in 2026 are past that.

But they're dead wrong about the bigger picture.

Prompt engineering isn't dying. It's splitting in two. Andrej Karpathy, the former head of AI at Tesla and one of the most respected voices in the field, put it this way: the real skill isn't prompting anymore. It's context engineering. His analogy: if the AI model is a CPU, then the context window is RAM, and your job is to be the operating system that decides what goes into working memory for the next step.

That's not writing a clever question. That's building a system.
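Karpathy's operating-system analogy can be sketched in a few lines: something has to decide which pieces of context fit into the model's working memory for the next step. This is a deliberately simplified illustration; token counts are approximated by word counts, and a real system would use the model's actual tokenizer.

```python
# Context engineering in miniature: the "operating system" that decides
# what goes into the model's working memory (the context window) for the
# next step. Token cost is approximated by word count here; a real
# implementation would use the model's tokenizer.

CONTEXT_BUDGET = 120  # illustrative token budget for the next step

def assemble_context(chunks: list[tuple[int, str]],
                     budget: int = CONTEXT_BUDGET) -> str:
    """Greedily pack the highest-priority chunks that fit in the budget."""
    selected, used = [], 0
    # Higher priority number = more important to keep in "RAM".
    for priority, chunk in sorted(chunks, key=lambda c: -c[0]):
        cost = len(chunk.split())
        if used + cost <= budget:
            selected.append(chunk)
            used += cost
    return "\n".join(selected)
```

Swap in smarter selection (summarization, retrieval scores) and you have the skeleton of real context engineering.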

The people getting real results with AI aren't writing better questions. They're building better instruction sets. Persistent ones. Ones that survive across conversations and enforce quality every single time.

One researcher analyzed over fifteen hundred academic papers on prompt engineering and found that most of the advice floating around online is actively counterproductive. The biggest finding? Structure matters more than wording. Well-structured prompts outperformed verbose alternatives while cutting API costs by 76%. In real numbers, that's the difference between spending over a million dollars a year on API calls and spending a quarter of that for the same quality output.

Andrew Ng demonstrated something even more striking. He took GPT-3.5, a model most people had already written off, and wrapped it in an agentic workflow. A system. That older, "weaker" model hit 95.1% on a coding benchmark, nearly doubling its standalone performance. Not a better model. A better system around the model.

That's not prompt engineering dying. That's prompt engineering growing up.

The Gap Between a Prompt and a System

Here's what a typical prompt interaction looks like.

You type: "Write me a blog post about AI trends."

The AI writes you something. It's fine. It's generic. It sounds like every other AI-written blog post on the internet. You tweak it, paste in some edits, ask for a rewrite. Three rounds later you've got something passable that still doesn't sound like you wrote it.

Now here's what a system-level interaction looks like.

Before you ever type a word, the AI already knows your voice. It knows what patterns to avoid. It knows to check current data before making claims. It knows to grade its own output against a scoring rubric and flag anything that falls below the bar. It knows that if the answer isn't backed by real analysis, it shouldn't deliver the answer at all.

You type the same six words. You get something completely different.

Not because the AI is smarter. Because the AI has better instructions.

What Changes When You Build Systems Instead of Prompts

When I stopped writing one-off prompts and started building full instruction sets, three things happened immediately.

First, consistency. The AI stopped giving me wildly different quality from one conversation to the next. The floor went up. Not every output was a home run, but nothing was garbage anymore either.

Second, speed. I stopped spending 20 minutes workshopping a single prompt. The system handled the guardrails. I just pointed it at the task.

Third, depth. The outputs started pulling from current information instead of defaulting to whatever was baked into the training data. The analysis got sharper because the system required analysis, not just summarization.

That third one is the one most people miss entirely. The default behavior of every major AI model is to answer from memory. It's fast, it's confident, and it's often outdated or shallow. A system prompt can change that default behavior. It can make the model work harder before it opens its mouth.

That's not a minor improvement. That's a fundamentally different kind of output.

Why Most AI "Tips and Tricks" Content Fails You

Scroll through any social media feed right now and you'll find thousands of posts sharing "the best AI prompts" for every use case imaginable. Resume writing. Marketing plans. Business strategy. Content creation.

Most of them are single-shot prompts. Copy, paste, get a result.

And most of them produce output that sounds exactly like what it is. AI-generated filler that checks the boxes without actually doing the work. They're templates for mediocre output, dressed up as productivity hacks.

The reason is structural. A single prompt can't do what a system does. A single prompt is one instruction. A system is an entire workflow. It's the difference between giving someone a task and giving someone a job.

The posts that go viral with "10 prompts that replace a $500/hour consultant" are popular because they promise a shortcut. And shortcuts sell. But anyone who's actually tried to run a business strategy through a single copied prompt knows the output reads like a college student's first assignment, not like a consultant's deliverable.

The missing layer is the system. The invisible instructions that tell the AI how to think, not just what to say.

Sixty-eight percent of companies have stopped hiring dedicated prompt engineers entirely. They realized it's not a job title. It's a skill that needs to be embedded into everything. The firms getting ahead aren't the ones with a "prompt guy" on staff. They're the ones whose tools have systems built into them from the start.

What We Build and Why It's Different

Let me be clear about something. Raw prompt stacking isn't the answer. Piling more instructions into a single message is verbal vomit for your computer. It's cumbersome, it's clunky, and the model gets confused trying to sort through the mess. That's what most people get wrong when they hear "use longer prompts" or "add more context."

A long prompt isn't inherently bad. But try reading the dictionary cover to cover and you'd call it a terrible book. It has every word you need, and none of them are in an order that means anything. You have to assemble those words deliberately before they make sense. Same with AI instructions. More isn't better. More structured is better.

That's the difference between prompt engineering and what we call system prompting. Prompt engineering throws instructions at the model and hopes for the best. System prompting builds an architecture around the model so every instruction has a purpose, a sequence, and a quality gate. Think of it as the difference between handing someone a dictionary and handing them a novel. Same words. Completely different outcome.

That's what we built at TotalValue Group. We didn't abandon the idea behind prompt engineering. We escalated it into something hyper-efficient and systematically repeatable.

Every system we ship does something most prompt collections don't. It changes how the AI processes your request before it starts writing. It's not just telling the AI what to produce. It's telling the AI how to work. How to prepare. What to verify. What standards to hold itself to.

I won't lay out the full architecture here because, frankly, it took months of testing and iteration to develop. That architecture is the reason our tools produce output that doesn't read like AI wrote it. But I will say this: if your prompt doesn't include instructions for how the AI should gather, analyze, and validate information before it starts generating an answer... you're getting the AI's first draft when you could be getting its best work.

Our systems don't give you the first draft.

Bulletproof Writer is a prompt system with over 90 built-in analytical modules. It doesn't just help you write. It scores what you write, identifies weaknesses at the sentence level, and tells you what to fix before you publish. Version 3.3 includes eight new engines for gap analysis, pacing evaluation, and reader drop-off prediction.

Make AI Recommend You is a 12-module system for generative engine optimization. It structures your online presence so AI platforms like ChatGPT and Gemini can actually find and recommend your business. 122 prompts. Schema markup. Content architecture. Cross-platform authority building.

AI Signature Scrub strips every detectable AI pattern from your writing. Not by adding fluff or filler, but by identifying the specific structural, syntactic, and stylistic markers that make AI-generated text recognizable and systematically eliminating them.

Every product in our catalog works across ChatGPT, Claude, and Gemini. No subscriptions, no monthly fees. You buy the system and it's yours.


Full product catalog: TotalValue Group on Gumroad

Free diagnostic tool: AI Signature Scrub (Free Version) — No email required. No upsell wall. Paste your text in and see what AI patterns are hiding in your writing.

Website: totalvalue.com


Robert Kirkpatrick is the founder of TotalValue Group LLC and builds AI prompt systems that replace work you'd normally pay a consultant to do. He's a data analyst by trade who got tired of watching people fight AI tools that were designed to help them.
