You have a task. You open ChatGPT. You paste in some code, type a vague instruction, and hope for good output.
When it's wrong, you paste more code. Add more context. Try again. Eventually you get something usable — 40 minutes later.
This is the copy-paste loop, and it's how most developers use AI assistants. It works, barely. But it doesn't scale, it's not repeatable, and it fails silently on complex tasks.
Here's what I do instead: I build prompt pipelines.
What's a Prompt Pipeline?
A prompt pipeline is a sequence of focused prompts where each step's output feeds into the next. Instead of one giant "do everything" prompt, you break the work into stages.
Think of it like a Unix pipeline: each stage does one thing well, and you compose them.
analyze → plan → implement → verify
Each stage has a clear input, a defined output format, and a quality check before moving forward.
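The stage-chaining idea can be sketched in code. This is a minimal illustration, not a real client: `callModel` is a stub standing in for whatever LLM API you use, and the `Stage` shape is my own invention for the sketch.

```typescript
// A stage turns the previous stage's output into the next prompt.
type Stage = {
  name: string;
  buildPrompt: (previousOutput: string) => string;
};

// Thread each stage's output into the next, Unix-pipeline style.
// In real use you'd pause here to review each output before continuing.
async function runPipeline(stages: Stage[], initialContext: string): Promise<string> {
  let output = initialContext;
  for (const stage of stages) {
    const prompt = stage.buildPrompt(output);
    output = await callModel(prompt);
  }
  return output;
}

// Stub model call so the sketch is self-contained.
async function callModel(prompt: string): Promise<string> {
  return `[model response to: ${prompt.slice(0, 40)}...]`;
}
```

The point of the structure is the explicit boundary between stages: each `buildPrompt` is a reviewable artifact, and you can swap or rerun one stage without touching the others.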
A Real Example: Adding a Feature
Let's say I need to add rate limiting to an Express API. Here's how the copy-paste approach looks:
Add rate limiting to my Express API. Here's my server.js:
[pastes 200 lines of code]
Use Redis. Make it configurable.
And here's the pipeline approach:
Stage 1: Analyze
I need to add rate limiting to an Express API.
Current setup:
- Express 4, TypeScript
- Redis already available (ioredis)
- Routes defined in /routes/*.ts
- Auth middleware at /middleware/auth.ts
Analyze: What are my options for rate limiting middleware?
For each option, list: library name, Redis support, TypeScript types, maintenance status.
Recommend one with reasoning.
Output: A comparison table and recommendation. I review it, confirm the choice, move on.
Stage 2: Plan
We're going with rate-limiter-flexible, backed by Redis.

Write a technical plan for integration:
1. What files need to change?
2. What's the middleware signature?
3. What configuration options do we need?
4. How do we handle rate limit exceeded responses?
Output as a numbered checklist I can review before any code is written.
Output: A concrete plan. I check it against my architecture, adjust if needed.
Stage 3: Implement
Implement the rate limiting plan above.
Constraints:
- Follow our existing middleware pattern (see auth.ts structure)
- Config goes in /config/rateLimit.ts
- Tests go in /middleware/__tests__/rateLimit.test.ts
- Use our existing error response format: { error: string, code: number }
Write the implementation files one at a time. Start with the config.
Output: Focused code files, one at a time, following my patterns.
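As a concrete illustration, here's roughly what the first file out of stage 3 might look like. This is a hypothetical sketch of `/config/rateLimit.ts`: the env variable names and defaults are mine, not from the post, and your project's conventions may differ.

```typescript
// Hypothetical sketch of /config/rateLimit.ts. Env var names are illustrative.
export interface RateLimitConfig {
  points: number;      // requests allowed per window
  durationSec: number; // window length in seconds
}

export function loadRateLimitConfig(
  env: Record<string, string | undefined> = process.env
): RateLimitConfig {
  const points = Number(env.RATE_LIMIT_POINTS ?? 100);
  const durationSec = Number(env.RATE_LIMIT_DURATION_SEC ?? 60);
  // Reject the edge cases stage 4 will check for: zero, negative, non-numeric.
  if (!Number.isFinite(points) || points <= 0) {
    throw new Error("RATE_LIMIT_POINTS must be a positive number");
  }
  if (!Number.isFinite(durationSec) || durationSec <= 0) {
    throw new Error("RATE_LIMIT_DURATION_SEC must be a positive number");
  }
  return { points, durationSec };
}
```

Validating at load time means a bad config fails at startup, not on the first rate-limited request.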
Stage 4: Verify
Review the rate limiting implementation for:
1. Race conditions under concurrent requests
2. Redis connection failure handling
3. Missing test cases
4. Configuration edge cases (negative values, zero, etc.)
List issues found, then fix each one.
Output: Bug catches and fixes before I even run the code.
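Two of those verify checks, Redis failure handling and the error response format, can be made concrete in a middleware sketch. This is illustrative, not the post's actual implementation: the limiter is injected as a plain function, and the request/response types are stripped down so the sketch stands alone.

```typescript
// A limiter answers: is this key allowed another request?
// Backed by Redis in practice; injected here so failures are explicit.
type Limiter = (key: string) => Promise<boolean>;

// Minimal stand-ins for Express's req/res, enough for the sketch.
type Req = { ip: string };
type Res = { status: (n: number) => { json: (body: object) => void } };

function rateLimit(limiter: Limiter) {
  return async (req: Req, res: Res, next: () => void): Promise<void> => {
    try {
      if (await limiter(req.ip)) return next();
      // Existing error response format from the stage 3 constraints.
      res.status(429).json({ error: "Too many requests", code: 429 });
    } catch {
      // Redis unreachable: fail open so an infra outage doesn't block traffic.
      next();
    }
  };
}
```

Whether to fail open or fail closed on a Redis outage is exactly the kind of decision stage 4 should surface for review rather than leave implicit.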
Why Pipelines Beat Copy-Paste
Debuggability. When something goes wrong in stage 3, I know the issue is in the implementation — not in a confused understanding of requirements. Each stage has a clear boundary.
Reusability. The analyze → plan → implement → verify structure works for any feature. I reuse the same stage templates across projects.
Quality. Each stage produces a reviewable artifact. I catch problems early instead of discovering them after 200 lines of code are written.
Context management. Instead of cramming everything into one prompt, each stage sends only what's relevant. The assistant stays focused.
Building Your First Pipeline
Start with this minimal three-stage pipeline for any coding task:
Stage 1: Scope
I need to [task]. My project uses [stack].
What are the key decisions I need to make before writing code?
List them as questions with your recommended answer for each.
Stage 2: Build
Based on the decisions above, implement [task].
Constraints: [your project's rules]
Write tests alongside the implementation.
Stage 3: Check
Review the implementation for bugs, missing edge cases,
and deviations from the constraints I specified.
List issues, then fix them.
That's it. Three prompts instead of one. Each takes 30 seconds to write, and the total time is lower than with the copy-paste loop because you eliminate rework.
Saving Pipelines for Reuse
I keep my pipeline templates in a prompts/ directory in each project:
prompts/
  add-feature.md    # analyze → plan → implement → verify
  fix-bug.md        # reproduce → diagnose → fix → regression-test
  code-review.md    # summarize → critique → suggest
  refactor.md       # inventory → plan → migrate → verify
Each file has the stage templates with placeholders. When I need one, I fill in the blanks and run through the stages.
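Filling in the blanks can even be scripted. Here's a small sketch of substituting `[placeholder]` markers in a stage template; the function name and placeholder syntax are my own for illustration.

```typescript
// Replace [placeholders] in a stage template with concrete values.
// Unknown placeholders are left intact so missing values are visible.
function fillTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\[([a-z' ]+)\]/gi, (match, key) => values[key.trim()] ?? match);
}

// Example: the "Scope" stage from the minimal pipeline above.
const scopeStage = "I need to [task]. My project uses [stack].";
const filled = fillTemplate(scopeStage, {
  task: "add rate limiting",
  stack: "Express + TypeScript",
});
```

Leaving unmatched placeholders untouched is deliberate: a leftover `[task]` in the prompt is an obvious signal that you forgot to fill a blank.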
The Shift
Copy-pasting is improvisation. Pipelines are engineering.
One produces inconsistent results that depend on your mood, your context window, and how well you described the problem at 11 PM.
The other produces repeatable, debuggable, reviewable work — every time.
Pick one task you do regularly with AI. Turn it into a three-stage pipeline. Run it twice.
You'll never go back to copy-paste.