Gleno

5ive ways to make good AI code exceptional

Why this matters

I'm interested to hear how other vibe-coders are getting the best out of whatever model they're using. For me, working with Claude, I've been reading everything I can find to improve my already pretty good outcomes and build more robust software and applications.

I’ve found that the difference between average AI-assisted coding and genuinely impressive output often has less to do with the model itself and more to do with how you direct it. In other words, the biggest improvement usually comes from becoming a better editor, product owner, and critic of the work.

This article is basically my current thinking on how to get better results. Not just more code, but better code. Better UX. Better structure. Better judgment. Less fluff. Less fake completeness. Less "looks great in a screenshot, falls over in real life."

1. Keep scope brutally clear

The first big improvement for me was realising that the model does best when the task is:

  • narrow
  • concrete
  • sequenced
  • judged against a clear standard

What hurts quality is asking for too much at once:

  • coding
  • product decisions
  • architecture
  • UX redesign
  • future planning

all bundled into one giant prompt.

The best pattern I’ve found is:

  • one task
  • one success definition
  • one summary at the end

Good prompt:

Improve the dashboard filter UX only. Do not add scope. Focus on clarity, spacing, active filter visibility, and reducing clicks. After changes, summarize what improved and what tradeoffs remain.

Less good prompt:

Make the whole app more modern, smarter, and production-ready.

That second prompt sounds ambitious, but it usually produces mush. The model starts solving five different problems badly instead of one problem well.

2. Give your model a stronger quality bar

Another thing that helped me a lot was stopping myself from only telling the model what to do and instead telling it what good looks like.

Don’t settle for technically correct if what you actually want is product-quality work.

Quality bar prompt:

Aim for senior product-quality work, not just technically correct implementation.

Bar for quality:

  • obvious, calm, low-friction UX
  • strong visual hierarchy
  • consistent naming and spacing
  • no unnecessary complexity
  • no fake completeness
  • preserve user familiarity where helpful
  • improve common workflows, not edge-case cleverness

For UI work, I’ve also found it helps to be even more direct:

UI quality prompt:

Do not settle for generic dashboard SaaS. Make this feel immediately understandable to experienced users, but cleaner, calmer, and easier to scan.

That little shift makes a big difference. Otherwise the model often gives you something that is fine in a technical sense but generic in every other sense.

3. Make it critique itself before and after

This is one of the best tricks I’ve found.

Before it changes anything, get it to identify weak spots. After it changes things, get it to critique the result honestly.

That helps push it out of “task completed” mode and into “quality review” mode.

Before coding:

Before coding, identify:

  1. the 3 weakest parts of the current implementation for this task
  2. the biggest risk of making it worse
  3. the standard you will use to judge success

After coding:

After coding, critique your own work:

  1. what improved materially
  2. what still feels weak or generic
  3. what a strong human product designer would probably still want changed

That has been incredibly useful for me because the model will often otherwise sound pleased with itself far too early.

4. Separate build passes from polish passes

One thing I’ve had to learn is not to expect the first pass to also be the best pass.

The model can build quickly, but quality usually comes from doing the work in layers.

My preferred sequence is:

  1. build it
  2. verify it
  3. refine UX
  4. clean code
  5. document next gaps

That feels much closer to how good teams actually work.

Refinement pass prompt:

Do not add features.

Now do a refinement pass on the existing implementation only.

Improve:

  • spacing
  • hierarchy
  • wording
  • empty states
  • action discoverability
  • friction in common interactions

Do not redesign the product. Tighten what already exists.

That distinction between build pass and polish pass has improved my results a lot.

5. Make it work against a reference standard

The model does better when it has something to aim at.

If you already know the kind of experience you want, say so clearly and repeatedly. Don’t assume the model will infer your taste.

For me, this often means defining a few non-negotiables around familiarity, clarity, calmness, cognitive load, and usability.

Design standard prompt:

Design standard:

  • preserve familiar mental models where helpful
  • reduce clutter
  • make active state obvious
  • improve scannability
  • reduce clicks
  • make common actions easier to find
  • use calmer, cleaner visual hierarchy

Prefer familiarity for core workflows and innovation only where it clearly improves speed, confidence, or clarity.

This helps stop the model wandering off into novelty for novelty’s sake.

The playbook

Here’s the practical version of how I now try to work.

Review every summary properly

When the model says it has completed something, I ask myself:

  • Did it solve the real user problem?
  • Did it stay in scope?
  • Did it introduce unnecessary complexity?
  • Does it feel generic?
  • Would a busy user understand this quickly?
  • Does it still feel like the kind of product I’m trying to make?

If the answer is “technically yes, emotionally meh,” that usually means it needs a refinement pass.

Ask for rationale on UI tasks

Not chain of thought. Just design rationale.

UI rationale prompt:

For each significant UI change, explain:

  • what stayed familiar
  • what improved
  • why it reduces cognitive load
  • any tradeoff between clarity and familiarity

That helps surface whether the model is making deliberate choices or just decorating things.

Keep a quality debt list

This has been useful too.

I like having the model maintain a small file of things that are still weak, awkward, or unfinished, such as:

  • awkward labels
  • generic empty states
  • spacing that still feels off
  • interactions that are too hidden
  • things that need real user testing

Quality debt prompt:

Maintain a short docs/quality-debt.md with only meaningful remaining UX/code quality issues. Keep it concise and prioritized.

That stops “good enough” from becoming invisible.
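As a sketch, such a file might look something like this (the entries here are purely hypothetical examples, not from any real project):

```markdown
# Quality debt

Prioritized list of meaningful remaining UX/code issues.

1. Empty state on the reports page is generic placeholder text
2. "Sync" vs "Refresh" naming is inconsistent across the toolbar
3. Bulk actions are hidden behind an unlabeled overflow menu
4. Filter chips wrap awkwardly at narrow widths
5. Needs real user testing: onboarding flow, export dialog
```

Keeping it short and prioritized matters more than completeness; once an item is fixed, delete it rather than archiving it.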

Make it do a cleanup pass

Before a task is truly finished, I often want one more sweep for consistency and simplification.

Cleanup pass prompt:

Before you finish this task, do a cleanup pass for:

  • naming consistency
  • dead code
  • duplicate logic
  • awkward wording
  • spacing inconsistencies
  • unnecessary abstractions

Force human-grade restraint

This one matters more than people think.

The model loves adding layers, helpers, hooks, abstractions, and future-proofing when they are not actually needed.

Restraint prompt:

Do not add abstractions, helper layers, hooks, or configuration unless they clearly reduce present complexity. Prefer simple, readable code over future-proofing theater.

That line alone can save a surprising amount of nonsense.

For UI/UX specifically

If I want the best UI/UX possible, I try to push the model toward these principles:

  • 10-second scannability
  • recognition over recall
  • fewer competing focal points
  • obvious primary actions
  • visible active state
  • trustworthy summary-to-detail flows
  • calm density, not empty Dribbble fluff
  • tables and controls that are genuinely usable

Full UI polish prompt:

Do not add scope.

Refine the current UI to improve:

  • scannability in under 10 seconds
  • visual hierarchy
  • spacing consistency
  • filter clarity
  • table readability
  • action discoverability
  • trust and calmness of the interface

Keep the core mental model familiar.
Do not turn this into a flashy SaaS dashboard.
Explain what changed, why it is better, and what still feels weak.


That prompt has been a good one for me because it pushes the model toward clarity instead of visual showing off.

The biggest unlock

One of the best prompts I’ve used is asking the model what a great human would still dislike about the work.

Critique from multiple perspectives:

Critique this implementation from the perspective of:

  • a strong senior engineer
  • a strong product designer
  • a skeptical internal business user

What would each still dislike or question?
Then improve the top issues that are in scope.

That often gets you a better result than simply saying “make it better.”

My role in all this

The way I think about my job now is this:

My role is to stop the model becoming:

  • too broad
  • too clever
  • too generic
  • too pleased with itself

and keep pushing it toward being:

  • clearer
  • tighter
  • calmer
  • more honest
  • more product-quality

That has made a massive difference.

My strongest recommendation

If I had to pick one thing that consistently improves output, it would be adding an excellence pass after each meaningful chunk of work.

The excellence pass prompt:

Do not add scope.

Now do an excellence pass on the existing implementation.

Raise the quality of the work without changing the product scope.

Focus on:

  • code clarity
  • naming consistency
  • removal of dead or unnecessary complexity
  • UI hierarchy
  • spacing and readability
  • action discoverability
  • empty/loading states
  • reduction of cognitive load

Then critique the result honestly:

  • what is now strong
  • what still feels average
  • what still needs human judgment

That has probably been the single best quality multiplier for me.

Final thought

I’d genuinely love to hear how other people are getting the best out of their models.

What are you doing that consistently improves outcomes?

Are you using:

  • tighter prompting
  • staged passes
  • self-critique
  • design standards
  • test-first workflows
  • something else entirely?

Because the more I do this, the more I think the real skill in vibe-coding is not just getting the model to produce code.

It’s getting it to produce work you’d actually be proud to ship.
