I just needed to complete a simple task. Create a basic Express.js app, nothing fancy, just a small REST API scaffold for a project demo. So I did what any modern developer would:
I opened Continue, my AI coding assistant, inside VS Code, and typed:
Create an Express app.
Seconds later, a few lines of boilerplate code appeared like magic:
```javascript
const express = require('express');
const app = express();

// Start server
app.listen(4000, () => {
  console.log('Server running on http://localhost:4000');
});
```

Everything looked familiar until I ran it.
Then, boom: an error.

I’d forgotten that I was using ES modules, not CommonJS. The AI didn’t know that.
I hadn’t told it.
Although it was a subtle oversight, in that moment, I realized something important: the accuracy of my coding assistant had nothing to do with its intelligence.
It had everything to do with how I talked to it.
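
For reference, the whole mismatch came down to a single field in my project's package.json (the file below is illustrative, not my exact one): when `"type": "module"` is set, Node.js treats `.js` files as ES modules, so `require` isn't available and the generated boilerplate throws.

```json
{
  "name": "demo-api",
  "type": "module",
  "main": "index.js"
}
```

With that field present, the generated code's first line would have needed to be `import express from 'express';` instead of the `require` call.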
When AI “Understands” without Understanding
Every developer who uses AI coding assistants has seen this happen: a technically incorrect yet perfectly confident answer.
That is AI hallucination: the model produces incorrect, nonsensical, or fabricated information with complete confidence, as if it were true. It happens because language models predict the most statistically likely next word based on the data they were trained on, rather than truly understanding the information.
So when you tell your assistant, “create an Express app”, it gives you what’s most common — not what fits your setup.
No wonder it missed the ES module part. It didn't misunderstand me. It just filled in the blanks I left empty.
Turning Point - Writing Better Prompts
After a few more “why won’t this run?” moments, I decided to treat Continue like a junior developer, not a search bar.
Instead of one vague sentence, I gave it real instructions:
Create a simple Express.js app using ES modules.
Use `import` instead of `require`.
Add routes for `/` and `/health`, and include a middleware that logs the request method and URL.
Use port 4000.
Continue generated this:
```javascript
import express from 'express';

const app = express();

// Middleware: log every request
app.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next();
});

// Routes
app.get('/', (req, res) => res.send('Hello from Continue.dev!'));
app.get('/health', (req, res) => res.json({ status: 'ok' }));

// Start server
app.listen(4000, () => {
  console.log('Server running on http://localhost:4000');
});
```
And this time, it ran perfectly.
No rewrites. No fixes. Just clean, working code.
Why This Works
Large language models like the ones Continue supports don't think; they predict. The clearer your prompt, the narrower the prediction range, and the higher your accuracy.
When you give Continue context, constraints, and intent, you help it eliminate wrong guesses.
In practice, it looks like this:
| Prompt Element | Example | Why it matters |
|---|---|---|
| Context | “Using ES modules in Node.js…” | Avoids incompatible syntax |
| Constraints | “Only include routes / and /health” | Keeps code focused |
| Intent | “I just want a lightweight scaffold for testing APIs.” | Prevents over-engineering |
Coding assistants like Continue also gain an edge by reading your actual codebase in VS Code, so they don't need to guess your environment if you let them see the files. That means your next prompt becomes part of a conversation rather than an isolated question.
Best Practices For High Accuracy
Generally, you should follow these guidelines to get the most accurate responses from your coding assistant:
Write Clear, Goal-oriented Prompts
- Start with the problem statement (“Write a function that validates requests”)
- Specify the language and framework (“in JavaScript using Joi”)
- Describe constraints and expected behaviour (“must validate all requests, authorized or unauthorized”)
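
As a sketch of what a prompt like that might produce (the field name `userId` is hypothetical, and plain checks stand in for Joi so the snippet runs on its own):

```javascript
// Sketch of a request-body validator of the kind the prompt describes.
// The expected fields are made up for illustration; a real project would
// more likely express these constraints as a Joi schema.
function validateRequest(body) {
  if (typeof body !== 'object' || body === null) {
    return { valid: false, error: 'body must be an object' };
  }
  if (typeof body.userId !== 'string' || body.userId.length === 0) {
    return { valid: false, error: 'userId is required' };
  }
  return { valid: true, value: body };
}
```

With Joi, the same rules would collapse into a declarative schema, which is exactly the kind of detail worth naming in the prompt up front.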
Provide Adequate Context
- Reference related functions if needed (“use the existing database schema to check authorized or unauthorized requests”)
Break Tasks Down
- Ask for smaller, testable parts instead of entire modules. For example, ask for “the SQL query first” before “the full Express route handler.”
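
To make that two-step idea concrete, here is a hedged sketch (the `users` table, its columns, and the `db` stub are all hypothetical): pin down the query first, then ask for the handler logic that wraps it.

```javascript
// Step 1: ask for, and review, just the SQL query.
// (The users table and its columns are made up for illustration.)
const FIND_USER_SQL = 'SELECT id, name FROM users WHERE id = $1';

// Step 2: once the query looks right, ask for the logic around it.
// db.query is a stand-in for a real client such as pg or mysql2.
const db = {
  async query(sql, params) {
    const table = { 1: { id: '1', name: 'Ada' } }; // fake data
    const row = table[params[0]];
    return { rows: row ? [row] : [] };
  },
};

async function getUser(id) {
  const { rows } = await db.query(FIND_USER_SQL, [id]);
  return rows[0] ?? null;
}
```

Reviewing the query on its own before asking for the handler gives you two small checkpoints instead of one large blob to debug.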
Refine And Improve
- If the output is off, say what went wrong in your next prompt (“It doesn’t handle empty input; fix that”).
- Each feedback loop improves accuracy significantly.
With a coding assistant like Continue, you can configure models for various roles in the extension. This makes it easy to switch between chatting, autocompleting code suggestions, applying edits to a file, and more. When you describe what you want (e.g., “add authentication middleware” or “optimize this query”) and switch to the desired role, it understands your intent and acts accordingly.
Overall, AI coding assistants can only amplify your skill; they don’t replace it. So when working with them, always read and reason through the generated code before running it. The key to 90% accuracy isn’t automatic; it’s clarity, validation, and iteration.