Junkyu Jeon

Posted on • Originally published at bivecode.com

How to Write Better Prompts for Bolt, Lovable, and Cursor

You and your friend both use Cursor. You both ask for "a simple to-do app." Three hours later, your friend has a working app with auth, persistence, and a clean UI. You have a pile of files, a broken login, and a database that won't connect.

Same model. Different prompt.

People talk about prompt engineering like mysticism. It isn't. It's closer to a checklist. Here's the checklist — plus templates you can copy.

Why Most Prompts Fail

Bad prompts share a few patterns. The AI isn't guessing maliciously — it's filling blanks you left empty.

  • No stack specified. "Add a database" — which one? SQLite? Supabase? Postgres? AI picks. Often picks something that doesn't fit.
  • Whole-app requests. "Build me an Airbnb clone." 40 files of plausible-looking code, none of which fit together.
  • Vague verbs. "Make it better." "Refactor this." No target — so it changes everything, including parts that were fine.
  • No examples. "Format the date nicely." AI guesses what "nicely" means.
  • Missing constraints. No edge cases, no max length, no expected error behavior. AI handles the happy path and ignores the rest.

Each blank is a decision the AI makes for you — usually not the one you'd have made.

The Anatomy of a Good Prompt

Five parts. You don't need every part every time, but the more you fill in, the less the AI guesses.

1. Context

What is this project? What stack? What conventions? If you have a CLAUDE.md or .cursorrules, this is already covered. If not, lead with one sentence: "This is a Next.js 15 + Supabase + Tailwind app. We use shadcn/ui and follow App Router conventions."

2. Goal

What specifically do you want? One concrete change. "Add a button on the dashboard that exports the visible projects to CSV" — not "add export functionality."

3. Constraints

Must do? Must not do? "Must work for up to 10,000 rows. Must not block the UI. Use the existing supabase client from lib/supabase.ts."

4. Output Format

How do you want the answer? Code only? Diff only? Specify it.

5. Verification

How will you know it worked? Tell the AI. "After this, I should be able to click Export, get a CSV download named projects-{date}.csv, and open it in Excel without warnings."

When the AI knows what success looks like, it writes code aimed at that result — not at "something CSV-ish."

5 Patterns That Make Prompts Work

1. Specify the Stack and Version

"Use Next.js" is not enough. Next.js 13 Pages Router and Next.js 15 App Router are different worlds.

Bad: "Add a server-side API route."
Good: "Add a Next.js 15 App Router route handler at app/api/export/route.ts. Use the new Web Request/Response API, not the old req/res."

2. Show, Don't Tell

Examples beat adjectives every time.

Bad: "Format the price nicely."
Good: "Format as USD with dollar sign and 2 decimals: '$1,234.56'. Under $1, show 4 decimals: '$0.0042'."
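To see why the example beats the adjective, here's a sketch of what the "good" prompt pins down — a hypothetical `formatPrice` helper (the name and placement are my own; only the behavior comes from the prompt above):

```typescript
// Hypothetical sketch of the behavior the "good" prompt specifies.
// Amounts under $1 get 4 decimal places; everything else gets 2.
function formatPrice(amount: number): string {
  const decimals = amount < 1 ? 4 : 2;
  return (
    "$" +
    amount.toLocaleString("en-US", {
      minimumFractionDigits: decimals,
      maximumFractionDigits: decimals,
    })
  );
}

// formatPrice(1234.56) → "$1,234.56"
// formatPrice(0.0042)  → "$0.0042"
```

With the vague version, the AI has to guess all four decisions this function encodes: currency symbol, locale separators, decimal count, and the sub-dollar special case.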

3. One Feature at a Time

Resist the mega-prompt. Split it:

  1. "Create /profile route. Fetch current user from Supabase, display name/email/joined date."
  2. "Add Edit button. Toggles name/email to inputs."
  3. "Add Save button. Updates user via Supabase. Shows success toast."
  4. "Add avatar upload. Stores in Supabase Storage. Updates avatar_url."

Each step verifiable. When something breaks, you know which step. With a mega-prompt, you don't.

4. Constrain the Data Shape

Most bugs come from data the AI didn't expect.

Bad: "Calculate the order total."
Good: "Inputs: array of { price: number, quantity: number } (price in cents, quantity positive integer), and optional discountPercent (0-100, default 0). Return cents. Empty array → return 0. Negative quantity → throw."

5. Ask for Tests Explicitly

AI rarely writes tests unless asked. Almost never thinks about edge cases unless told.

Add one line: "Also write a test file covering: empty input, single item, max items, zero discount, 100% discount, one negative input that should throw."

The test file does double duty: forces edge-case thinking, and gives you a regression net.
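Put patterns 4 and 5 together and the data-shape prompt plus the one-line test request might yield something like this (a sketch — the function name is an assumption, the behavior is the spec from pattern 4):

```typescript
// Sketch of the order-total spec from pattern 4 (names are hypothetical).
type OrderItem = { price: number; quantity: number }; // price in cents

function calculateOrderTotal(items: OrderItem[], discountPercent = 0): number {
  if (items.length === 0) return 0; // empty array → 0, per the spec
  const subtotal = items.reduce((sum, { price, quantity }) => {
    if (!Number.isInteger(quantity) || quantity <= 0) {
      throw new Error(`invalid quantity: ${quantity}`); // negative quantity → throw
    }
    return sum + price * quantity;
  }, 0);
  // discountPercent is 0-100, default 0; result stays in cents
  return Math.round(subtotal * (1 - discountPercent / 100));
}

// The edge cases the one-line test request covers:
// calculateOrderTotal([])                                  → 0
// calculateOrderTotal([{ price: 250, quantity: 2 }])       → 500
// calculateOrderTotal([{ price: 1000, quantity: 1 }], 100) → 0
// calculateOrderTotal([{ price: 100, quantity: -1 }])      → throws
```

Notice that every edge case in the test list maps to an explicit line of code. Without the list, the happy-path `reduce` is probably all you'd get.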

5 Anti-Patterns to Avoid

1. "Make it better"

The vaguest possible prompt. Name what you want: "Reduce the number of state variables. Right now there are 7 useState calls; consolidate where possible."
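For a sense of what "consolidate" might look like in practice, here's a sketch of several `useState` values folded into a single reducer — the field names are invented for illustration, and the reducer itself is a pure function you can test without React:

```typescript
// Hypothetical sketch: scattered useState values folded into one reducer.
// The field and action names are invented for illustration.
type FormState = { name: string; email: string; saving: boolean; error: string | null };

type FormAction =
  | { type: "edit"; field: "name" | "email"; value: string }
  | { type: "saveStart" }
  | { type: "saveDone" }
  | { type: "saveFailed"; error: string };

function formReducer(state: FormState, action: FormAction): FormState {
  switch (action.type) {
    case "edit":
      return { ...state, [action.field]: action.value };
    case "saveStart":
      return { ...state, saving: true, error: null }; // clear stale errors on retry
    case "saveDone":
      return { ...state, saving: false };
    case "saveFailed":
      return { ...state, saving: false, error: action.error };
  }
}
```

In the component, the separate `useState` calls collapse to one `useReducer(formReducer, initialState)` — which is exactly the kind of concrete target "make it better" never communicates.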

2. "Add authentication"

Auth isn't one feature. Sign-up, sign-in, sign-out, password reset, session, route protection, email verification — at minimum. Pick a flow:

"Add Supabase email/password sign-in. One page at /login. On success, redirect to /dashboard. On failure, show Supabase error inline. Don't add sign-up or password reset yet."

3. Pasting whole logs without context

200-line stack trace + "fix this" = AI guessing what part matters and what your code looks like.

Instead, paste the relevant error line, the function it points to, and a one-line description of what you were doing.

4. "Refactor this"

Refactor for what? Performance? Readability? Reuse? Each goal points at different changes.

Better: "Extract the price-formatting logic from this component into a function in lib/format.ts. Don't change behavior. Update component to import and use the new function."

5. Mid-stream stack swaps

Three hours into a Supabase project. You read a tweet about Drizzle ORM. "Switch the database to use Drizzle." AI tries. Half-rewrites a few files, leaves Supabase imports scattered, nothing works.

Stack swaps mid-project are surgery, not a prompt. Plan it as its own deliberate session.

Real Before / After

Same goal, two prompts.

Before:

Add export to CSV.

What the AI does: invents a button somewhere, picks a CSV library you don't have, exports fields it guesses are interesting, ignores filtering, and blocks the UI on large datasets.

After:

Context: Next.js 15 App Router project, Supabase backend.
Dashboard at app/dashboard/page.tsx shows a table of "projects"
filtered by search input and status dropdown.

Goal: Add an "Export CSV" button next to the search input that
downloads the *currently visible* (filtered) rows as CSV.

Constraints:
- No new dependencies. Build the CSV string manually.
- Columns: id, name, status, created_at (ISO string), owner_email.
- Handle commas and quotes correctly (RFC 4180).
- Filename: projects-{YYYY-MM-DD}.csv.
- Must work for up to 10,000 rows without freezing.

Output: Diff to dashboard/page.tsx and any new helper file. No explanation.

Verification: With the dashboard filtered to status=active, I click
Export CSV → a download with only active projects, named
projects-2026-04-21.csv, that opens cleanly in Excel.

Second prompt: 60 seconds longer to write. Saves 30 minutes of back-and-forth.
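The RFC 4180 constraint in the prompt above is a real decision, not boilerplate — naive `join(",")` breaks on the first project name containing a comma. A sketch of what "handle commas and quotes correctly" means (helper names are my own):

```typescript
// Sketch of RFC 4180 escaping: quote any field containing commas, quotes,
// or newlines, and double embedded quotes. Names are hypothetical.
function csvEscape(value: string): string {
  return /[",\n]/.test(value) ? `"${value.replace(/"/g, '""')}"` : value;
}

function toCsv(rows: Array<Record<string, string>>, columns: string[]): string {
  const header = columns.map(csvEscape).join(",");
  const lines = rows.map((row) =>
    columns.map((col) => csvEscape(row[col] ?? "")).join(",")
  );
  return [header, ...lines].join("\n");
}

// csvEscape('Acme, Inc.') → '"Acme, Inc."'
// csvEscape('say "hi"')   → '"say ""hi"""'
```

Spelling out the standard in the prompt is what gets you this escaping logic instead of a naive join that corrupts the file.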

Steal These Templates

New Feature

Context: [stack + relevant existing code]
Goal: [one specific user-visible change]
Constraints:
- [must do X]
- [must not do Y]
- [data shape, edge cases]
Output: [code only / diff only / explain first]
Verification: After this, I should be able to [observable thing].

Bug Fix

Repro:
- Steps: [1, 2, 3]
- Input: [exact values]
- Expected: [what should happen]
- Actual: [what happens, including error]

Suspect file: [path]

Don't change behavior elsewhere. Show your hypothesis before
the fix, then minimal change to address the root cause.

Data Model

Add a Supabase table "[name]" with:
- id: uuid primary key
- [column]: [type, nullable?, default?]
- created_at: timestamptz default now()
RLS: [who reads, who writes]
Generate SQL migration under supabase/migrations/ with
timestamp prefix, and TypeScript type in lib/types/[name].ts.

When Prompting Alone Isn't Enough

For a single feature, a good prompt is enough. For a whole app, you need a sequence — and getting the sequence right is its own skill. Data model before UI. Auth before features that need it. Deploy setup before you have anything to deploy.


Better prompts aren't about being clever. They're about leaving fewer blanks. Every blank is a decision the AI makes for you — and it doesn't know your project, your users, or what you actually want.

The skill compounds. Every well-crafted prompt teaches you what to specify next time. Within a few weeks, your hit rate moves from "sometimes works, mostly weird" to "ships first try, most of the time."

Steal the templates. Add to them as you find what your stack needs. Your AI is the same as everyone else's — your prompts don't have to be.
