DEV Community

Jamaica Pitts

Detailed Note Taking Boosts AI Editor Performance

Anyone else been let down by AI coding editors? As soon as the complexity crosses a certain threshold, they create more problems than they solve. Especially when changes interact across multiple files with complex flows due to events, queues, or multiple services, they entirely fall apart.

I’ve seen this first-hand while working on practiceproblems.org, a site that indexes STEM problems from YouTube and layers on a bunch of tagging and filtering logic. Most of the complexity comes from the custom CMS tooling I built for managing the content. I had hoped to just vibe code my way through the remaining features with help from Cursor (my current AI code editor), but it wasn’t quite up to the task.

Eventually, I found a simple trick (yes, really) that helped a lot: keeping detailed notes.

I have a habit of taking extensive notes while coding, usually in a file like dev-notes.md. I write down thoughts on how to solve problems, what bugs I encounter, and what to begin working on the next day. Originally, I had this file in .gitignore, thinking it was just temporary scaffolding for myself. Turns out that meant Cursor wasn’t sending it to the LLM when trying to autocomplete or generate code. I only realized this by accident. On a new project, I checked the notes file into git by default. The difference in AI code completion was immediate.

Suddenly, the autocomplete suggestions felt smart. Cursor wasn’t just guessing; it seemed to know exactly what I was doing. It would anticipate changes I had written down in the notes file, like updating a certain TypeScript interface and adding a corresponding validator, and autocomplete by autocomplete it would flesh out exactly what I wanted without me even touching the heavy prompts.
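For illustration, here’s the kind of paired change I mean, an interface update plus a matching runtime validator. (The names here are hypothetical, not the actual types from the site.)

```typescript
// Suppose the notes file says: "add a difficulty field to Problem
// and update its validator to match."

interface Problem {
  id: string;
  title: string;
  difficulty: "easy" | "medium" | "hard"; // newly added field
}

// Runtime type guard kept in sync with the interface above.
function isProblem(value: unknown): value is Problem {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.title === "string" &&
    ["easy", "medium", "hard"].includes(v.difficulty as string)
  );
}
```

With the plan written in the notes, the editor can suggest the second half of a change like this as soon as you finish the first.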

Now I make sure my notes file is part of the repo. I still don’t push it to main, but I check it in while I’m actively working so it can be used as context. I’ll also make sure the notes include high-level info like what a feature is supposed to do, which files it will touch, and what’s still broken.
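A sketch of what one of those entries might look like (file names and feature details here are made up, not from the actual project):

```markdown
## Feature: difficulty filtering (in progress)
- Goal: let users filter indexed problems by difficulty tag
- Files: src/types/problem.ts, src/validators/problem.ts, src/pages/browse.tsx
- Still broken: validator rejects legacy entries with no difficulty field
- Tomorrow: write a backfill script for untagged problems
```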

From what I can tell, the entire file is parsed and relevant snippets are passed along to the model depending on what you're asking for. So when you open a file and start working on a task, the LLM might already know from your notes what that task is, which makes a big difference.

It’s clear to me that context is everything. The people building AI editors want it to feel like magic, something you don’t have to think about, but that’s not where we’re at yet. Until these tools get a lot better at guessing what’s relevant, I think the best move is to manually control what context gets fed into the model. In my experience, a well-written notes file has made more of an impact on AI editor performance than anything else I’ve tried.
