DEV Community

Phaneendra Kanduri

An Engineer's Practical Manual for Using AI Code Assistants

AI code assistants are everywhere now. But most engineers either treat them like magic oracles or ignore them entirely. Neither approach works.

Here's a practical manual for getting actual value from AI tools without letting them wreck your codebase.


TL;DR

  1. Treat AI like a smart intern who is good at execution, bad at judgment
  2. Set your communication style upfront in custom instructions
  3. Define non-negotiables AI must remember (tech stack, conventions, compliance)
  4. End every task with "Ask questions if you have gaps"
  5. For long projects, ask AI to document progress in a file
  6. Monitor actively. AI will go out of bounds
  7. Double-check everything before shipping, then check it again
  8. AI accelerates scaffolding, doesn't replace decision-making

1. Treat AI Like a Smart, Overenthusiastic Intern

This is the most important mental model.

AI tools are not senior engineers. They don't understand system constraints, political context, or why a "simple refactor" will break 6 downstream consumers. But they're excellent at scaffolding, repetitive transforms, and generating first drafts.

What this means in practice:

  • You still own the architecture
  • You still own the tradeoffs
  • AI handles the grunt work: boilerplate, repetitive edits, documentation templates, test scaffolding

Example: During a platform migration, AI scaffolded 200+ component files and saved ~40 hours of manual work. But the engineer still designed the directory structure, schema, and migration sequence.


2. Set Your Communication Style Upfront

AI adapts to how you speak to it. If you're vague, you'll get vague output. If you're direct, you'll get direct output.

Set custom instructions once:

Direct, structured, blunt. No filler, no apologies.
Flag gaps in my request if they exist.
When code has edge cases I didn't mention, list them before generating.

This cuts back-and-forth by ~50%. Instead of:

"Sure! I'd be happy to help. Let me create a component for you. Would you like me to include props for styling?"

You get:

"Missing: error state handling, loading state, TypeScript types for onSubmit. Proceeding with basic implementation."

Set the tone once. Enforce it consistently.


3. Define Non-Negotiables That AI Must Remember

Some context applies to every task. Don't repeat yourself: encode it once.

Examples:

  • "Always use functional components with TypeScript. No class components."
  • "Accessibility is non-negotiable. Include ARIA labels and keyboard nav by default."
  • "When writing tests, use Enzyme, not React Testing Library."
  • "Performance matters. Flag unnecessary re-renders or heavy computations."

This prevents AI from generating outdated patterns or ignoring your team's conventions.


4. Always End Tasks With: "Ask Questions If You Have Gaps"

AI will confidently generate wrong code if you don't give it permission to push back.

Bad prompt:

"Build a dropdown component with keyboard navigation."

Better prompt:

"Build a dropdown component with keyboard navigation (arrow keys, Enter, Escape). Must be WCAG 2.1 AA compliant. Ask questions if you need clarification on interaction patterns or edge cases."

The second version produces:

  • "Should focus be trapped inside the dropdown when open?"
  • "Should Escape close and return focus to the trigger?"
  • "Should arrow keys wrap at list boundaries?"

Now you're reviewing a solution that handles real-world behavior, not a demo-quality stub.
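To make that review concrete, here is a minimal, framework-agnostic sketch of the keyboard-state logic those questions are probing. Every name in it (`DropdownState`, `handleKey`) is illustrative, not taken from any real component library:

```typescript
// Illustrative sketch of dropdown keyboard-navigation state logic.
// Deliberately framework-agnostic: wire it to your own component code.

type DropdownState = {
  open: boolean;
  activeIndex: number; // -1 when nothing is highlighted
};

function handleKey(state: DropdownState, key: string, itemCount: number): DropdownState {
  if (!state.open) {
    // Enter or ArrowDown opens the list and highlights the first item
    if (key === "Enter" || key === "ArrowDown") {
      return { open: true, activeIndex: 0 };
    }
    return state;
  }
  switch (key) {
    case "ArrowDown":
      // Wrap at the bottom of the list
      return { ...state, activeIndex: (state.activeIndex + 1) % itemCount };
    case "ArrowUp":
      // Wrap at the top of the list
      return { ...state, activeIndex: (state.activeIndex - 1 + itemCount) % itemCount };
    case "Escape":
      // Close and clear the highlight; focus should return to the trigger
      return { open: false, activeIndex: -1 };
    default:
      // Selection on Enter-while-open is omitted to keep the sketch short
      return state;
  }
}
```

Keeping this logic in a pure function also makes each interaction pattern trivial to unit-test without rendering anything.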


5. For Multi-Week Projects: Make AI Document Progress

When working on long-running tasks (e.g., framework migrations, large refactors), ask AI to maintain a progress log.

Prompt template:

"We're upgrading our framework. This will take 3 weeks. After each session, update PROGRESS_LOG.md with:

  • What we completed today
  • Blockers encountered
  • Next steps

When I return, I should be able to read the log and resume without context loss."

Example log entry:

## 2024-01-15: Upgraded core packages

- Completed: Core packages updated to latest version
- Blockers: 14 components using deprecated APIs
- Next: Migrate deprecated API usage
- Estimate: 2 days for API migration

This is critical. AI has no memory between sessions. If you don't externalize progress, you're starting from scratch every time.


6. AI Will Go Out of Bounds. Always Monitor Actively

AI optimizes for "solving the prompt," not "solving the right problem."

Real example:

An engineer asked AI to optimize a Python script. It rewrote the entire script using a different parsing library, which:

  • Broke compatibility with existing plugins
  • Introduced a new dependency
  • Required changes to CI/CD

The AI's solution was faster, but architecturally wrong.

How to prevent this:

  • Set constraints upfront: "Do not introduce new dependencies without asking."
  • Review diffs before running code
  • Test in isolated environments first

AI will go off-script. Your job is to catch it before it hits production.


7. Double-Check Everything. Then Check It Again.

AI-generated code is never production-ready on the first pass.

Standard review checklist:

  • Does it handle edge cases? (Empty arrays, null values, network errors)
  • Is it accessible? (Keyboard nav, ARIA, focus management)
  • Is it performant? (Unnecessary re-renders, large bundle size)
  • Does it follow team conventions? (Naming, file structure, test patterns)
  • Will it break in 6 months? (Deprecated APIs, hardcoded assumptions)
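The first checklist item is where AI drafts fail most often. Here is a before/after sketch of the kind of edge-case hardening a review pass adds (both functions are hypothetical examples, not code from any real project):

```typescript
// Hypothetical example: computing an average, as an AI draft often writes it,
// versus the hardened version a human review pass produces.

// AI draft: returns NaN on an empty array and throws on null input
function averageDraft(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// After review: null/undefined input and empty arrays are handled explicitly
function averageReviewed(values: number[] | null | undefined): number {
  if (!values || values.length === 0) {
    return 0; // or throw, depending on your team's convention
  }
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}
```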

Real scenario: AI generated GraphQL queries that worked but over-fetched data, pulling entire content trees when only leaf nodes were needed.

Fixing this required:

  1. Understanding the schema
  2. Knowing the API's performance characteristics
  3. Rewriting queries with proper field and depth limits

AI got 70% of the way there. The last 30% required domain knowledge.
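To illustrate what "proper field and depth limits" means in practice, here is a hedged sketch with an invented schema; the real fix depends entirely on which fields your UI actually renders:

```typescript
// Hypothetical GraphQL queries to illustrate over-fetching.
// The schema, paths, and field names are invented for this example.

// AI draft: pulls the whole content tree, nested children included
const overFetchingQuery = `
  query {
    page(path: "/products") {
      title
      body
      children {
        title
        body
        children { title body }
      }
    }
  }
`;

// After review: only the leaf fields the UI renders, no nested tree
const trimmedQuery = `
  query {
    page(path: "/products") {
      title
    }
  }
`;
```

The trimmed query is what domain knowledge buys you: knowing the nested `children` levels were never read by the client.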


8. AI Accelerates Scaffolding. It Doesn't Replace Decision-Making

Where AI excels:

  • Migrations: Generated files, parsers, layouts → saved ~40 hours
  • PWA setup: Service worker boilerplate, cache strategies → saved ~8 hours
  • Accessibility audits: Scanned for missing ARIA labels, generated fixes → saved ~12 hours

Where AI fails and humans must step in:

  • Deciding between micro-frontends vs. monolith (architectural tradeoff)
  • Evaluating framework options (performance + DX tradeoff)
  • Negotiating with designers on performance vs. aesthetics (subjective judgment)

What AI Is Actually Good For

Excellent:

  • Boilerplate generation (repos, components, configs)
  • Repetitive transforms (file renames, import updates, data migrations)
  • Documentation (README templates, API docs, guides)
  • Test scaffolding (basic unit test structure, mock setup)
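As an example of the test-scaffolding sweet spot, here is a sketch of the kind of hand-rolled fake AI sets up well; `UserClient` and `greetUser` are hypothetical names invented for illustration:

```typescript
// Sketch of mock setup: a minimal fake for a data client, no mocking library.
// The interface and function names are hypothetical.

interface UserClient {
  fetchName(id: number): string;
}

function greetUser(client: UserClient, id: number): string {
  return `Hello, ${client.fetchName(id)}!`;
}

// Test scaffold: a hand-rolled fake standing in for the real client
const fakeClient: UserClient = {
  fetchName: (id) => (id === 1 ? "Ada" : "unknown"),
};
```

Generating this kind of boilerplate is exactly where the tool earns its keep; deciding what behavior is worth testing still isn't.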

Mediocre:

  • Complex refactors (high risk of breaking changes)
  • Performance optimization (requires profiling and domain knowledge)
  • Debugging (suggests fixes, but often misses root cause)

Bad:

  • Architecture decisions (tradeoffs require human judgment)
  • Security-critical code (high stakes, low tolerance for error)
  • Organizational dynamics (AI doesn't understand politics or people)

Final Take

AI tools are productivity multipliers, not replacements.

They can cut scaffolding time by ~60%. They eliminate entire classes of grunt work. But they haven't changed the fact that someone still needs to know what to build, why to build it, and how to evaluate whether it's correct.

That someone is you.

Use AI to move faster. But never stop thinking.


What's your experience with AI code assistants? What patterns have worked (or failed spectacularly) for you?

Top comments (1)

Mike Taylor

Agree overall, but I believe it's actually decent at debugging if it has access to the code, the logs, and Chrome's in-browser console.

Here are some tips, I use with Claude Code:

  • Have a claude.md and, at the start of each session, ask it to read it.
  • Update claude.md (or equivalent) after each big new task.
  • Use the plan feature each time you build a feature.
  • Ask the AI to write plenty of tests, from simple to complex, so that after each implementation it can check whether it broke something.
  • Ask it to use and test with Playwright to figure out if something is wrong.