Cursor is a powerful IDE that extends the normal capabilities of the AI models you likely already use. It's built on a fork of VS Code, so if you've ever used that, the interface should feel familiar. I've only been using it for a few weeks, but in that time I've come to establish a routine that is consistently giving me higher quality results than when I started.
What We'll Cover
- Start with a Plan - Creating detailed planning documents
- Agree on a Directory Structure - Avoiding structural chaos
- Commit (Often) - Using Git for safety nets
- Rules, Rules, Rules - Configuring AI guidelines
- Beyond Code - Research and ideation workflows
- What Else? - Community discussion
Start with a Plan
It's sometimes quite amusing to see how easily AI can deviate from what you've discussed, especially in a lengthy conversation or long-running task that eats up context. And it only takes a tiny deviation to quickly end up in a hole, since AI can make changes in the editor about 100x faster than a human.
I avoid this by creating a detailed plan, usually through natural conversation with the AI. I tell it my initial ideas, share any thoughts on technologies and architecture, and have a back and forth as I might if collaborating with someone on a project. The details get drawn out in the conversation. Once I feel like we've covered all the key areas, I have the AI turn the conversation into a detailed planning document. Depending on the complexity of the project, a single document might be enough; in other cases, we (the AI) will write more detailed documents for individual aspects that require breaking down further.
Once the plans are written, I review. If there are errors, I correct them or tell the AI. These plans then serve as a consistent reference point that we come back to again and again, checking things off as they get done, or updating if we alter any decisions along the way.
A written plan is one of the most reliable ways I've found to ensure the AI doesn't get lost in the development process.
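To give you an idea, here's roughly the skeleton my planning documents end up with (a simplified sketch; the exact headings vary from project to project):

```markdown
# Project Plan: <project name>

## Goals
- What we're building and why

## Architecture
- Key technologies and how the pieces fit together

## Milestones
- [ ] Milestone 1: ...
- [ ] Milestone 2: ...

## Open Questions
- Decisions we've deferred and need to revisit
```

The checkbox milestones are what we tick off as work gets done, which keeps the document honest as a reference point.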
Agree on a Directory Structure
This might seem like overkill, but you won't think so when you're 50+ actions into your vibe-coded project and you're suddenly like:
"Wait, why is there another `src` directory containing only services?"
Did I mention that AI can make a lot of changes very rapidly? This is even more likely if you live on the edge like I do. I'll often just set it to work and walk away to do something else. But if you do this without agreeing on a clear structure for your project, you may come back to all sorts of gnarly decisions made in your absence.
Don't wait for the WTF Moment™. Agree on the project structure, then have the AI create it or write it to a doc, perhaps one of the planning documents you created earlier. Personally, I do both.
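As an illustration, for a monorepo like the one I snippet later, the structure doc might start out as simply as this (a hypothetical sketch, not a recommendation):

```text
packages/
  shared-utils/    # code shared across services
workers/
  api/             # Cloudflare Worker services
web/               # Cloudflare Pages front end
docs/
  plans/           # the planning documents from earlier
```

Even a map this small removes most of the guesswork about where new files should go.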
Commit (Often)
Before you put the AI to work on your detailed plan, I strongly advise that you initialise a Git repository for the project. Cursor has its own feature that creates "checkpoints" before each set of changes. You can then revert the project to any of these checkpoints if you run into problems (and you will).
But I've found it's fairly easy to lose track of changes when relying solely on automatic checkpoints. It's safer and easier to just commit regularly, especially after a feature has been implemented, a bug fixed, or some other satisfactory outcome from the conversation. If you want to guarantee you can always get back to a working state, commit early and often.
Rules, Rules, Rules
An important feature of Cursor is its Rules. These are basically prewritten instructions that can be sent along with requests. You can have both global and project-specific rules, as well as have rules added to every request or to specific types of requests via pattern matching. None are configured by default, so it's up to you to add the instructions that matter. I find they're good for a variety of project preferences:
- coding guidelines
- testing strategy
- commit frequency & style
- tooling
You can find the rules in Settings > Cursor Settings.
At the time of writing, there are two main sections.
User Rules: [...] sent to the AI on all chats, composers and Command+K sessions. Basically, they're global, persist between sessions and windows, and get sent with every chat message.
Project Rules: As implied, they only apply to the current project. You set them by creating `.mdc` files in `.cursor/rules`, either manually or through the Project Rules interface. It's a good place to put rules about project-specific coding conventions. I usually add these files to my repo to make life easier for anyone else who might work on them.
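To give a sense of the shape, here's a minimal `.mdc` file. The frontmatter fields (`description`, `globs`, `alwaysApply`) are what Cursor documents at the time of writing, though the format may change; the glob and the conventions themselves are just made-up examples:

```markdown
---
description: Conventions for TypeScript source files
globs: src/**/*.ts
alwaysApply: false
---

# TypeScript Conventions

- Use named exports; avoid default exports.
- Prefer `async/await` over raw Promise chains.
```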
Cursor User Rules
But what exactly should you put in these rules? That depends on what matters to you. Here's what I put in mine.
# Rules of Development (RoD)
These are general guidelines to follow for writing, modifying or refactoring code.
1. **Modularity:** Keep functions and modules small and focused on a single responsibility.
2. **Organisation:** Adhere to the established folder structure.
3. **Readability & Maintainability:** Write clean and understandable code. Only add comments where the code's purpose is not clear from context (e.g. complex logic, a counter-intuitive implementation).
4. **Modifications & Refactors:** If you are proposing changes, always check with the user first before implementing, unless already explicitly told to do so. ONLY make changes relevant to the specific task at hand.
5. **User Confirmation for Installations:** Do not install any software, packages, or dependencies using terminal commands without explicit confirmation from the user first.
You'll notice that I've given them a heading. This allows me to draw attention back to them simply by punctuating instructions with something like, "Remember the RoD".
You might also have noticed that there aren't many of them. Perhaps it's just me, but I find the AI can get "overwhelmed" or confused when there are too many rules. This varies from model to model, of course, but I've found that less is more and that it's better to target specific things with each rule. Each of the above was added to address a specific undesirable behaviour.
As an example, before I included these explicit instructions in the rules, I found the AI was prone to creating massive functions and extremely long files. Classic Junior Dev. But now... It still creates massive functions and extremely long files—but it does so far less often ;-)
Jokes aside, I get far better results with the rules than without them. This makes them essential. Try for yourself and see what works.
Cursor Project Rules
It's a chore, but you should absolutely add rules for each project you work on. Just like in the real world, the rules and tooling often change from project to project. The Project Rules allow you to highlight the project-specific details so that the AI doesn't have to guess. Below are some things I've found useful to add in the project rules files.
Coding Guidelines
Any specific formatting preferences, I'll add here: naming conventions, indentation, curly brace position, take your pick. Linters can fix some of this stuff, but I'd rather start with something close to my preferred style than have to do lots of cleanup later on.
Git Branching Strategy
Branching strategies can vary between organisations, teams and projects. Whether you prefer Git Flow or some other strategy, if you want the AI to safely make changes to your repo, you'd better let it know how you like to manage it.
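For example, a branching rule in a project rules file might read something like this (a sketch that matches the flow shown in the snippet further down):

```markdown
**Branching Strategy**:
- Never commit directly to `master`
- Branch from `master` as `feature/<short-description>`
- Merge back to `master` only after all tests pass
```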
Git Commit Style
I started a project recently without any project rules, and was so confused when the AI was trying to cram a 10-line commit message into what should have been the commit title. Plus, it wasn't adding a subject. Of course, I realised the project rules were missing, so it didn't know that I prefer Conventional Commits.
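If you've not used the format before, a Conventional Commits message has a typed subject line and an optional body, something like this (the scope and wording here are invented):

```text
feat(auth): add passwordless login flow

Implements magic-link sign-in using the existing email service.
Falls back to password login when the feature flag is disabled.
```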
Testing Strategy
I always include information about testing because AIs will often write massive amounts of code without a thought for any tests. And when you ask them to add some, they'll respond like, "Good idea!"
Deployment Strategy
Yes, I live so wild I'm willing to let AI commit, push and deploy changes to live environments. But only when I know it has the playbook on how to do so.
Here's a snippet from one of my project's `.mdc` files:
...
**Testing Setup**:
- **Framework**: Vitest across all packages
- **Workers**: `@cloudflare/vitest-pool-workers` with Miniflare for Cloudflare APIs
- **Environment**: Node.js for shared-utils, Cloudflare Workers environment for workers
**Testing Strategy**:
- **Priorities**: Unit tests prioritised over integration, comprehensive service layer testing
- **Creation**: Create or update unit/integration tests for all new or modified features and fixes
- **Frequency**: Always run all tests after completing changes
**Package Management**: PNPM with workspace dependencies (`workspace:*`)
**Deployment Strategy**:
- **Flow**: `master` → feature branch → work → commit → push → merge to master → auto-deploy
- **Staging**: Deploy branch directly before master merge
- **Production**: Master branch triggers deployment
- **Platform**: Cloudflare Pages (web), Cloudflare Workers (services)
- **Environments**: local, staging, production with environment-specific configs
...
While there's more detail in this rules file, each section and point is still highly focused. With this kind of detail provided, I've found the AI models I work with in Cursor are less likely to get stuck, or use the wrong tools or strategies.
Project Structure
Remember that directory structure I mentioned earlier? I'll often include it in a rules file. I haven't quite settled on a specific approach for this; I'm still experimenting with providing just a high-level overview versus a detailed file-level map.
One method I've used is to have a script that creates a directory tree diagram in plain text. I then have a pre-commit hook run the script and output the diagram to a dedicated rules file, which also gets added to the commit. This works well for smaller projects, but, as one might imagine, it's not so well suited for large projects due to the size of the map that is generated.
A variation of this approach could work, though, if the traversal depth and per-directory file count are limited. This would create a smaller map with enough detail to give the AI useful clues as to where to find/put particular types of files.
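Here's a minimal sketch of that variation in Python. The depth and entry limits, the ignore list, the output path and the frontmatter are all my own assumptions, so tune them to your project:

```python
#!/usr/bin/env python3
"""Write a depth-limited directory tree into a Cursor rules file.

A sketch of the approach described above; adjust the constants to taste.
"""
from pathlib import Path

MAX_DEPTH = 3      # don't descend beyond this many levels
MAX_ENTRIES = 20   # truncate very large directories
IGNORE = {".git", "node_modules", "dist", ".cursor"}


def walk(directory: Path, prefix: str = "", depth: int = 0) -> list[str]:
    """Return indented lines describing the contents of `directory`."""
    lines: list[str] = []
    entries = sorted(p for p in directory.iterdir() if p.name not in IGNORE)
    for entry in entries[:MAX_ENTRIES]:
        lines.append(f"{prefix}{entry.name}{'/' if entry.is_dir() else ''}")
        if entry.is_dir() and depth + 1 < MAX_DEPTH:
            lines.extend(walk(entry, prefix + "  ", depth + 1))
    if len(entries) > MAX_ENTRIES:
        lines.append(f"{prefix}... ({len(entries) - MAX_ENTRIES} more entries)")
    return lines


if __name__ == "__main__":
    out = Path(".cursor/rules/project-structure.mdc")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(
        "---\n"
        "description: Auto-generated project directory map\n"
        "alwaysApply: true\n"
        "---\n\n"
        "# Project Structure\n\n"
        "Auto-generated directory map. Do not edit by hand.\n\n"
        + "\n".join(walk(Path("."))) + "\n"
    )
```

The pre-commit hook then just needs to run the script and stage the output, e.g. `python scripts/gen_tree.py && git add .cursor/rules/project-structure.mdc` (hypothetical paths).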
It's worth noting that Cursor did release a beta feature back in April that provides project structure in the context, though I've not seen that it actually works as intended.
It's Not Overkill
This might seem like a lot of extra work and support in order to get the desired results when using Cursor with your chosen model(s), but in reality it's not that different from working with people. All of the above are things you'd need to share, in some form or another, when onboarding new team members onto a project. Why would an AI engineer need any less?
Beyond Code
What has really surprised me about Cursor is how useful it has been beyond unlocking rapid engineering. I've increasingly been using it as both an ideation and a research tool.
Research
The true benefit of using Cursor for research is context. Whether you're starting with research or in the midst of a project and need to break out into a research tangent, I've found it's both extremely convenient and effective to do it within Cursor.
To give an example, while experimenting with an idea, I realised we needed a better approach to creating realistic skin textures for some aquatic wildlife. Because the AI already had access to the overview and planning documents within the project, as well as all of the existing experimental code, I was able to set it on a research task with a single-sentence request. It searched several websites and forums, compiled a report in Markdown with links back to relevant sources, and added it to the project ready for viewing.
As I noted in a previous post, I also use Manus and ChatGPT for research. But if I'd turned to them on this occasion, I would have had to answer follow-up questions and explain far more.
Ideation
When I'm starting with nothing more than a vague notion that I'd like to explore and flesh out, I create an empty directory and open it with Cursor. I start the conversation with that vague notion and let it unfold. What naturally emerges from this discussion are samples, prototypes, and—if the original nebulous notion starts to really take form—overview and planning documents, architecture diagrams, etc. The project grows organically, and whichever model I decide to use has full access to this history.
Knowing where we're going and where we've been makes the AI less likely to unintentionally revisit old and dismissed ideas. It helps the AI understand the why of the current project state. It's all about context.
What Else?
The routines I've described are just what I've learned so far. I'm always experimenting, tweaking, chopping and changing my approach. It's better not to remain too attached to one way of doing things, especially in the age of AI. And Cursor tends to ship new features at least once a month, so things you have to do manually today might be automated tomorrow. Ain't that the truth?
What routines have you developed when working with Cursor? I'd love to know what's working for you, so please share in the comments!