As a developer juggling multiple side projects, I've learned firsthand how to harness VS Code Copilot (and I assume much of this applies to Cursor as well) to streamline workflows and avoid common pitfalls. In this guide, I'll share actionable strategies to maximize its potential, drawing from my experiences and mistakes so you can avoid repeating them. By setting up your environment thoughtfully and maintaining disciplined practices, you can supercharge your development process.
1. Tailor Your Copilot Instructions to Your Project
One of the most common mistakes is using a generic or copied template for your Copilot instructions. Instead, craft instructions specific to your project’s structure and goals.
For example, in one of my projects, I maintained a single repository for both frontend and backend code but created separate Product Requirement Documents (PRDs) for each. This setup gave the AI clear context for both components, making it easier to generate relevant code and suggestions. The result? Faster development and fewer misaligned outputs.
Tip: Spend time writing detailed Copilot instructions that reflect your project’s architecture. Specify whether you’re working with a monorepo, microservices, or separate frontend/backend stacks to ensure the AI understands your context.
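As a rough illustration, here is the kind of thing I mean, written as a `.github/copilot-instructions.md` file. The project layout, stack, and paths below are hypothetical; adapt them to your own repo:

```markdown
# Copilot Instructions

This is a single repository containing both a React frontend and a FastAPI backend.

- Frontend code lives in `frontend/` (TypeScript, React). Follow the patterns in `frontend/src/components/`.
- Backend code lives in `backend/` (Python, FastAPI). New endpoints go in `backend/app/routers/`.
- The frontend PRD is `docs/prd-frontend.md`; the backend PRD is `docs/prd-backend.md`. Consult the relevant PRD before generating code.
- Never mix frontend and backend changes in a single suggestion unless explicitly asked.
```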
2. Treat Your PRD as the Source of Truth
Ever wondered why your coding agent suddenly seems to have become dumb and its responses no longer align with your expectations? Then this section is for you. Your PRD is the backbone of your project, and letting the AI manage it without oversight can lead to issues like context poisoning (hallucinated or incorrect information creeping into the context) or context distraction (irrelevant details derailing the model). These are critical failure modes outlined in this insightful blog, which I highly recommend reading. Believe me, I've spent hours, even days, just cleaning up my PRD and tasks to remove unnecessary information that might have been degrading model performance. None of the tasks below are meant to be done by hand (i.e., you editing the file yourself); let the agent handle the edits while you supervise the process.
What to do:
- Manually review and update your PRD to ensure it’s free of conflicting or repetitive content. Use the agent to question things like “Do you think this section is required?” and, based on its response, decide whether to keep or remove it.
- Avoid letting the AI overwrite critical sections without your approval.
- Regularly check for context engineering issues, such as outdated dependencies or ambiguous requirements, which can degrade model performance.
- Have separate PRDs for the frontend and backend if you keep both codebases in a single repository.
- Define a proper directory structure with filenames, plus metadata describing each file and its purpose (see the sketch below). I’ve seen scenarios where the agent created a new file even though one already existed for that exact purpose.
By keeping your PRD clean and focused, you ensure the AI has a reliable foundation to work from.
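To make the last two points concrete, here is a minimal sketch of what a directory-structure and file-metadata section of a PRD might look like. The paths and descriptions are made up purely for illustration:

```markdown
## Directory Structure & File Metadata

backend/
  app/main.py        # FastAPI entry point; wires up routers and middleware
  app/routers/       # One router module per resource (users.py, tasks.py)
  app/models.py      # SQLAlchemy models; single source of truth for the schema
frontend/
  src/api/client.ts  # The only place HTTP calls are made; do not call fetch elsewhere
  src/components/    # Reusable UI components; one component per file

Rule for the agent: before creating a new file, check this list and reuse the
existing file if one already covers the purpose.
```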
3. Embrace Continuous Learning and PRD Evolution
A PRD isn’t a “set it and forget it” document. As you work with AI tools, you’ll notice occasional missteps—code that doesn’t align with your expectations or repeated errors. Use these as opportunities to refine your PRD.
For instance, if the AI generates incorrect API calls, document the correct behavior in your PRD. You can even ask the AI to update the PRD for you with a prompt like: “Update the PRD to include this API endpoint correction to prevent future errors.” This iterative process ensures the AI learns from its mistakes and improves over time.
Pro Tip: Treat your PRD as a living document. Each time the AI deviates from your desired outcome, update the PRD to include specific guidance, reducing the chance of repeated errors.
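For example, a correction captured in the PRD might look something like this (the endpoint and details are invented just to show the pattern):

```markdown
## Corrections & Lessons Learned

- 2024-05-12: The agent generated calls to `GET /api/v1/user/{id}`.
  The correct endpoint is `GET /api/v1/users/{id}` (plural) and it requires
  the `Authorization: Bearer <token>` header. Always use the plural form.
```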
4. Set Up Model Context Protocol (MCP) in VS Code
The Model Context Protocol (MCP) is a game-changer for AI-assisted development. It allows your AI to interact with external tools, APIs, or data sources, making it far more powerful than relying on its internal knowledge alone. Setting up MCP in VS Code is straightforward and can save you hours of manual verification.
For example, if you’re unsure whether an API endpoint is correct (especially if the AI might hallucinate it), you can configure an MCP server to fetch real-time data from the web. Instead of manually checking endpoints in a browser, you can ask the AI: “Verify this API endpoint by fetching its documentation.” This is particularly useful for staying up-to-date with rapidly changing APIs.
How to set it up:
- Follow the VS Code MCP setup guide to configure MCP in your workspace.
- Explore the MCP server repository for a list of available servers, such as those for GitHub, file systems, or web scraping.
- Add relevant MCP servers to your `.vscode/mcp.json` file. For instance, to use a web-fetching server:
```json
{
  "servers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```
This setup allows your AI to fetch and verify external data, reducing errors and speeding up development; the server in this example is Puppeteer, which drives a headless browser to load pages.
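Many servers also need credentials. My understanding is that VS Code’s `mcp.json` supports an `inputs` section so you can be prompted for secrets instead of hard-coding them; a sketch for the GitHub MCP server might look like the following (treat the exact fields and the environment variable name as assumptions to verify against the current VS Code and server docs):

```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "github-token",
      "description": "GitHub Personal Access Token",
      "password": true
    }
  ],
  "servers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${input:github-token}"
      }
    }
  }
}
```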
5. Don’t Blindly Trust AI—Do Your Own Research
While Copilot and Cursor often provide solid suggestions, they’re not infallible. Over-relying on AI can lead to suboptimal architectural decisions or missed opportunities to leverage better tools and frameworks.
During the architecture phase, conduct your own research to validate the AI’s suggestions. For example, if the AI recommends a specific library, cross-check its documentation or explore alternatives. This ensures you’re not locked into a suboptimal solution just because the AI suggested it.
Example: In one project, while building an LLM workflow, the AI kept suggesting that I implement every LLM API call by hand, and the workflow grew increasingly complex. The entire time it never offered an alternative, even when asked. Only after I researched LangChain myself and mentioned it did the AI respond with “You’re absolutely right, we should use this framework!” So always do your own research.
Tip: When the AI’s suggestions feel off, prompt it with alternative approaches: “What if we used [alternative tool] instead? Compare the two.” This encourages the AI to rethink its recommendations.
6. Automate Subflows with Reusable Prompts
One of the most powerful features of VS Code Copilot is the ability to create reusable prompts to automate repetitive tasks. By setting up prompts, you can streamline subflows like creating branches, running tests, committing code, and raising pull requests.
For example, I created a prompt that triggers when I start a new task:
- Creates a new branch based on the task name.
- Executes the task described in `tasks.md`.
- Commits changes with a standardized message.
- Pushes the branch to the remote repository and raises a pull request.
How to set it up:
In VS Code, create a prompt file with the Chat: New Prompt File command in the Command Palette. This creates a .prompt.md file in the .github/prompts folder at the root of your workspace. For more information, see the Reusable Prompts documentation.
Trigger the prompt in agent mode by typing `#newTask` in the chat input.
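To give you an idea, a prompt file for this kind of automation might look roughly like the sketch below. The front matter fields are based on my setup, and the branch and commit conventions are just examples; double-check the details against the Reusable Prompts docs:

```markdown
---
mode: 'agent'
description: 'Start a new task: branch, implement, commit, push, open a PR'
---
Start work on the task I name in the chat:

1. Read the task description from `tasks.md`.
2. Create a new branch named `task/<short-task-name>` from `main`.
3. Implement the task, updating the PRD if anything in it turns out to be wrong.
4. Run the test suite and fix any failures before proceeding.
5. Commit with the message `feat(<task-name>): <one-line summary>`.
6. Push the branch and open a pull request that links back to the task in `tasks.md`.
```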
This automation saves time and ensures consistency across your workflow.
Conclusion
By following these strategies—tailoring Copilot instructions, maintaining a robust PRD, embracing continuous learning, setting up MCP, conducting your own research, and automating subflows—you can maximize the power of AI tools like VS Code Copilot or Cursor. These practices have helped me avoid common pitfalls like context poisoning and over-reliance on AI, and I hope they’ll do the same for you.
Happy coding, and may your side hustles thrive with AI by your side!