Jolene Langlinais

Vibe Coding Best Practices

AI coding tools are powerful accelerators, but only if used with intention. Seeing real gains requires structured workflows and making AI part of your discipline, not a shortcut.

Here’s how to get the most out of tools like Cursor and Claude on a serious engineering team:

Plan

Draft a plan and iterate with AI to improve it:

  1. Ask clarifying questions about edge cases
  2. Have it critique its own plan for gaps
  3. Regenerate an improved version

Save the final plan in a temporary file and reference it in every prompt
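
For example, a later implementation prompt might reference the saved plan like this (the filename and step are illustrative):

```
Read @tmp/feature-plan.md before making any changes.
Implement step 2 of the plan only; do not start on later steps.
```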

✅ Prompting with a well-defined plan eliminates the vast majority of "AI got confused halfway through" cases

TDD

Implement code with AI in a TDD loop:

  1. Prompt AI to write a failing test that captures the desired goal
  2. Review the test and ensure it captures the correct behavior
  3. Prompt AI to write code to make the test pass
  4. Iterate through running the test and fixing failures
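
A minimal sketch of steps 1–3 in Rust, assuming a hypothetical `parse_duration` function (the stub keeps the test compiling while it fails, which is the red phase the loop starts from):

```rust
use std::time::Duration;

// Step 3 replaces this stub; until then, the test below fails (red phase).
fn parse_duration(_input: &str) -> Result<Duration, String> {
    Err("not implemented".to_string())
}

#[test]
fn parses_minutes_suffix() {
    // Step 1: an AI-written test capturing the desired behavior,
    // reviewed by a human (step 2) before any implementation exists.
    assert_eq!(parse_duration("5m"), Ok(Duration::from_secs(300)));
}
```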

✅ Prompting with human-reviewed tests results in implemented code passing the correct requirements

Reasoning

Keep this in your prompts: "Explain your approach step-by-step before writing any code"

You need to walk before you can run, and putting adequate time into planning will make the implementation smoother

✅ Prompting for reasoning results in fewer mistakes and surfaces the model's assumptions

Context Curation

Large projects and indiscriminate context dumps break AI attention. Be intentional about what you include in the context.

Look into these solutions for context curation:

  • Context7 keeps your docs up to date without needing to re-paste snippets
  • GitIngest curates codebases into summaries digestible for models

Git Safety

Rely on git fundamentals to create a fallback point before implementing anything with AI. Commit granularly and don't let uncommitted changes pile up.
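
A typical checkpoint flow (the commit message is illustrative):

```
# Checkpoint before letting the AI touch anything
git add -A
git commit -m "checkpoint: before AI refactor of auth module"

# If the change goes wrong, roll back cleanly
git restore .     # discard uncommitted AI edits
git revert HEAD   # or undo the last committed AI change
```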

✅ Prompting with a clean git state results in easier isolation and rollback of AI-introduced bugs

Prompt Focusing

❌ A poorly formed prompt:

"Here's my entire codebase. Why doesn't authentication work?"

Specific problems generate specific solutions.

Vague problems generate hallucinations.

Use specific code terminology in prompts:

  • Reference the exact identifiers from your codebase, not generic business terms
  • e.g., say createOrder() and processRefund() instead of 'place order' or 'issue refund'

This precision helps the AI apply the correct abstractions and avoids mismatches between your domain language and code.

✅ A well-formed prompt:

```
@src/auth.ts:85-90 panics on None when JWT is malformed.
Fix this and add proper error handling.
```

File References

Reference files with @src/database.ts instead of pasting code blocks
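
For example (the second file path is illustrative):

```
Compare the retry logic in @src/database.ts with @src/queue.ts and flag any inconsistencies.
```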

✅ Prompting with direct file references results in up-to-date context, fewer tokens used, and a more readable prompt history

Specifications

Prompt exactly what to test:

```
For the new `validate_email` function, write tests for:
- Valid email formats (basic cases)
- Invalid formats (no @, multiple @, empty string)
- Edge cases (very long domains, unicode characters)
- Return value format (should be `Result<(), ValidationError>`)
```

✅ Prompting specific test cases results in good test boilerplate generation
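
Given that spec, the generated boilerplate might look roughly like this sketch (the `validate_email` implementation and `ValidationError` variants are hypothetical):

```rust
#[derive(Debug, PartialEq)]
enum ValidationError {
    Empty,
    MissingAt,
    MultipleAt,
}

// Deliberately minimal implementation; the point is the shape of the tests below.
fn validate_email(input: &str) -> Result<(), ValidationError> {
    if input.is_empty() {
        return Err(ValidationError::Empty);
    }
    match input.matches('@').count() {
        0 => Err(ValidationError::MissingAt),
        1 => Ok(()),
        _ => Err(ValidationError::MultipleAt),
    }
}

#[test]
fn accepts_basic_format() {
    assert_eq!(validate_email("user@example.com"), Ok(()));
}

#[test]
fn rejects_empty_string() {
    assert_eq!(validate_email(""), Err(ValidationError::Empty));
}

#[test]
fn rejects_multiple_at_signs() {
    assert_eq!(validate_email("a@b@c.dev"), Err(ValidationError::MultipleAt));
}
```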

Debugging

When stuck, prompt for a systematic breakdown:

```
Generate a diagnostic report:
1. List all files modified in our last session
2. Explain the role of each file in the current feature
3. Identify why the current error is occurring
4. Propose 3 different debugging approaches
```

✅ Prompting specific instructions results in systematic thinking instead of guess-and-check

Style Guidelines

Prompt with a style guide to get consistent code quality:

```
Code style rules:
- Use explicit error handling, no unwraps in production code
- Include docstrings for public functions
- Prefer composition over inheritance
- Keep functions under 50 lines
- Use `pretty_assertions` in tests
- Be explicit about lifetimes in Rust
- Use `anyhow::Result` for error handling in services and repositories
- Create domain errors using `thiserror`
- Never implement `From` for converting domain errors; convert them manually
```

✅ Prompting consistent rules results in consistent code quality
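
The last three rules translate to a pattern like this (a sketch; `OrderError` and `find_order` are hypothetical):

```rust
use anyhow::{Context, Result};
use thiserror::Error;

// Domain error defined with `thiserror`, per the rules above.
#[derive(Debug, Error)]
enum OrderError {
    #[error("order {0} not found")]
    NotFound(u64),
}

fn find_order(id: u64) -> std::result::Result<String, OrderError> {
    Err(OrderError::NotFound(id))
}

// The service layer returns `anyhow::Result`; the domain error is converted
// explicitly instead of through a blanket `From` impl.
fn load_order(id: u64) -> Result<String> {
    find_order(id)
        .map_err(anyhow::Error::new) // manual, visible conversion
        .context("loading order in the service layer")
}
```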

Code Review

Treat every AI change like a junior developer's PR/MR and check for risks such as:

Security Review:

  • Injection risks and unvalidated input
  • Hardcoded secrets, keys, or credentials
  • Missing authentication or authorization checks

Performance Review:

  • N+1 query patterns and expensive loops
  • Algorithm complexity and efficiency
  • Unnecessary allocations and other performance bottlenecks

Correctness Review:

  • Does it handle edge cases correctly?
  • Verify error handling and null-handling
  • Are there off-by-one errors or logical flaws?

Maintainability, Scalability, and Readability Review:

  • Does the code adhere to team standards and established design patterns?
  • Is the code overly complex? Are names clear?
  • Should this code even be added?

✅ Reviewing with the expectation that AI is smart but not wise results in asking whether each feature or change is truly necessary and well-designed
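
One way to operationalize this review is a standing prompt (wording is illustrative):

```
Review this diff as a skeptical senior engineer.
Check for: injection risks, leaked secrets, N+1 queries, unhandled errors,
off-by-one mistakes, and unnecessary complexity.
Flag anything you would block in a human PR, and explain why.
```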

Antipatterns

The "Magic Prompt" Fallacy

❌ No prompt is perfect; AI will make mistakes

✅ Prompting with good workflows results in good implementations

Expecting Mind-Reading

❌ Prompting without stated requirements does not result in correct inferences

✅ Prompting with specifics results in good implementations

Trusting AI with Architecture Decisions

❌ Prompting AI to make high-level system design decisions results in poor implementations

✅ Prompting with the architecture already designed results in good implementations

Ignoring Domain-Specific Context

❌ Prompting without context results in hallucinations

✅ Prompting with direct references to business logic, deployment constraints, and/or team conventions results in good implementations

Pair Programming

For most implementation tasks, treat AI as a pair programming driver

✅ AI has no ego or judgement; instead it has infinite patience and a good memory

❌ AI doesn’t always catch logic errors and rarely pushes back on bad ideas

TL;DR

AI coding tools can significantly boost productivity, but only if you use them systematically. The engineers seeing massive gains aren't using magic prompts; they're using disciplined workflows.

Plan first, test everything, review like your production system depends on it (because it does), and remember two things:

  • AI is your intern, not your architect
  • AI is a tool that is only as good as the way you use it

ℹ️ The material here comes from both my own experience and a blog I found on the subject.
