
Rostislav Dugin


I vibecoded an extension for VS Code so I wouldn't have to vibecode with Claude Code (and an example of how to develop with AI)

For the past year and a half, I have been using Cursor IDE as my primary development environment. At first, I used it as an IDE with comfortable autocompletion (compared to GitHub Copilot). Then the AI became smarter and more convenient, a planning mode appeared, and Claude became a staple of my workday.

A week ago, my open source project Databasus (a tool for backing up PostgreSQL, ~6k stars ⭐️ and ~275k Docker pulls) received support from Anthropic as part of their OSS program, and now I have free Claude Code Max for the next six months.

An email from Anthropic

So, I switched to it... and realized that I was very used to the UX in Cursor IDE 😐. The smartest unlimited models are, of course, great. But convenience and control over changes are my priority.

So I took Opus and vibecoded an extension for VS Code that brings interaction with CLI agents closer to the Cursor IDE experience: you actually see the changes and can correct them precisely. The couple of hours I spent made my work for the next six months significantly more comfortable.

What I did and how I did it is described below.

Disclaimers

There is a big difference between "vibecoding" and "using AI as a tool":

Vibecoding is creating a program through prompts, without looking at what exactly the AI has changed in the code and without evaluating whether the solution "under the hood" is sound, and doing it in large chunks. In other words, the person is not actually responsible for the quality of the code.

Using AI as a tool is when AI helps to plan, write routine code, double-check decisions, and perform actions that a person tells it to. In other words, the person controls every change in the code and is responsible for it.

I use the second approach in my work and in open source. This is because even the most intelligent AI cannot cope with "understanding requirements" and "assessing consequences" (for example, that the code will become unmaintainable in a couple of days), although it radically increases the speed and efficiency of work.

When you need to assemble a PoC, a simple script, or something for testing, vibecoding is fine. Like, for example, with the extension described in this article.

By the way, at the same time, I still don't know and don't understand how you can "orchestrate agents" by letting them work for hours. My brain is only capable of keeping one task (i.e., chat) in context at a time 🥲.

And for reference:

1) I have been developing for ~11 years and, in my opinion, I am good at writing code by hand.

I have managed to:

  • develop an eye for code while producing millions of lines of bullshit code;
  • ask stupid questions on StackOverflow (it was mainstream back then, he-he);
  • spend months reading references and boring books, and I also know how to access JDK documentation from the IDE and still don't mind digging through library source code;
  • learn how to evaluate the trade-offs of solutions, because I've seen the consequences of bad decisions many times;
  • install printer drivers in Debian when ChatGPT didn't exist yet (now I'm even afraid to remember that);
  • spend many, many years writing tests by hand and understanding what needs to be tested and what doesn't.

In general, I have enough expertise to understand the strengths and weaknesses of AI. For me, AI is a valuable tool that complements my own expertise, rather than compensating for its absence.

2) Databasus has a disclaimer right in README.md about how AI is used in the project and how AI is NOT used in the project. You can read it here. It talks about the code's quality gates, requirements (for humans and AI), etc. There is a separate point that vibecoders are not welcome there.

How do I work with AI?

First, a little context on what my work looks like when I write code with AI (and let me clarify right away that I only use the latest paid Thinking models from Claude and OpenAI):

1) I decide what problem I need to solve and how to solve it. Sometimes I don't know how to solve the problem, or I have doubts that my idea is the most rational one. Then I go to Claude or ChatGPT and simply communicate with them in the format "I want to do X in way Y, but I think Z is possible. What do you think?" or "How does X work, or how is Y usually done? Is there any useful documentation or examples?".

After that, I form an idea of which direction to move in.

2) I take the task I need (let's say, feature X) and break it down into logically separate subtasks. These can be done and tested separately. For example, "add model X to the code, repository, service, and controller, write migration, write tests Y and Z."

3) I ask the AI to prepare a plan for solving the task (with input from me, if necessary). I add to the context of the plan which files relate to the task and how I see the solution, so that the AI understands what to read, what to change, and what to interact with and how.

4) I refine and detail the plan until I am confident that nothing important has been overlooked. By the way, this is the most important point in working with AI (which takes up ~40% of the time).

Usually, those who complain that "AI generates stupid code for me" do not write a plan or do not bring it to an adequate level of detail.

In the process, I ask AI to double-check its own plan.

If I see that the plan is too big and too many files are changing, I go back to the task decomposition step. Not too much should change at once, otherwise I will lose context and won't be able to understand what the AI has done.

5) I send the AI agent to write code according to the plan. Here, it changes the code, runs tests, etc.

6) I review and correct the changes made by the agent. Here, I go through each line of code and make sure that the code does exactly what I need it to do.

If I see that the naming is wrong, there is code duplication, patterns are missing, or something can be done better, I refactor it manually. If I see that the result is not what I need or I realize that the solution is flawed, I cancel the changes and return to the stage of preparing the plan or even decomposing the task.

This step also takes about 40% of the time, roughly the same as planning.

Here, it is important to reject what you have if it is not quite what you need. Or if you feel that the solution seems to work, but will create maintainability issues in the future.

A significant part of development is subjective, depending on the developers' own experience, insight, and understanding of the project context (from a product perspective, not the code). And this is where AI falls short.

7) If the task is related to security or reliability, I ask AI to double-check that everything is reliable, nothing has been overlooked, nothing needs to be improved, and no test scenarios have been missed.

All these steps are convenient to go through in Cursor IDE. There are tools for all of this, and they are really intuitive and convenient.

What did I find lacking in Claude Code?

Claude Code has a CLI interface. It can plan and write code. It even has extensions for VS Code and JetBrains IDE (I initially attempted to return to GoLand, but did not because it is convenient to open both the front and back ends in one place).

However, I missed two features:

1) The ability to drag files directly into the AI chat (point 3, “planning,” above) to set the context during planning, suggest where I need changes, and what to take into account. Neither the terminal nor the extension has a drag-and-drop area.

Manually writing the path to the files each time is time-consuming and inconvenient (when you are used to the convenient Cursor). In addition, in Go, it is customary to name files with 1-2 words, and even in my small projects, there are 30+ model.go, service.go, controller.go files with identical names. This means that without the full path to the package, the desired file is rarely highlighted.

2) Highlighting of diff changes directly in the file, plus the ability to accept/reject a chunk of changes after Claude Code has made them (step 6, "review the changes", above). That is, there is no such highlighting directly in the file:

Cursor IDE shows roughly the same thing.

Yes, Claude Code shows changes in the chat like this:

Changes directly in the chat

But it's hard for me to understand the context of the code when I can't see the whole file and there are many files. I can't:

  • navigate through references;
  • see variable usage;
  • see syntax highlighting;
  • see the surrounding code (more precisely, above and below).

This means that I cannot properly review changes from the AI (more precisely, it takes much more time), which in practice means I work inefficiently.

P.S. It is also inconvenient to use Git diff because I usually have more than one iteration of changes from AI within a commit.

How did I make the extension?

There is a nuance in tracking changes through the extension: I cannot directly access Claude Code from the outside and understand what exactly it is changing. Therefore, I need to take a snapshot of the code base and compare it with what has changed. This tracking finds edits made by AI, but also highlights my own changes. So I decided to make an "on/off" button.
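The snapshot idea can be sketched roughly like this (a simplified, in-memory model with illustrative names, not the extension's actual code; the real extension has to read files from disk and react to filesystem events):

```typescript
// Sketch of snapshot-based tracking: capture file contents on "start
// tracking", then diff against current contents to find what changed.
export class SnapshotTracker {
  private snapshot = new Map<string, string>(); // path -> content baseline
  private tracking = false;

  start(files: Map<string, string>): void {
    this.snapshot = new Map(files); // copy the baseline
    this.tracking = true;
  }

  stop(): void {
    this.tracking = false;
    this.snapshot.clear();
  }

  // Files whose content differs from the baseline (created or modified).
  changedFiles(current: Map<string, string>): string[] {
    if (!this.tracking) return [];
    const changed: string[] = [];
    for (const [file, text] of current) {
      if (this.snapshot.get(file) !== text) changed.push(file);
    }
    return changed;
  }
}
```

Note that anyone's edits between "start" and "stop" look the same to this comparison, which is exactly why the on/off button is needed: I turn tracking on only while the agent works.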

I took Opus 4.6 Thinking and started writing requirements.

I needed:

  • a "start tracking changes" / "stop tracking changes" button, so that changes from the AI are highlighted but my own changes are not;
  • highlighting of changes directly in the editor (so the entire file is visible, with the changed sections highlighted inside it);
  • the ability to apply/reject changes in chunks;
  • arrows to navigate through the changed files, plus a counter (to make sure I've checked all of them);
  • buttons to accept/reject all changes in a file at once;
  • a "Copy as @ references" option in the context menu when I select multiple files with Ctrl.
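For the in-editor highlighting requirement, the underlying problem is computing which lines changed between the snapshot and the current text. A deliberately naive sketch (a real implementation would use an LCS/Myers-style diff so that a single inserted line doesn't mark everything below it as changed):

```typescript
// Naive line-by-line comparison for illustration only: returns the 0-based
// indices of lines in the new text that differ from the old text.
export function changedLines(before: string, after: string): number[] {
  const a = before.split("\n");
  const b = after.split("\n");
  const changed: number[] = [];
  const max = Math.max(a.length, b.length);
  for (let i = 0; i < max; i++) {
    if (a[i] !== b[i]) changed.push(i);
  }
  return changed;
}
```

In a VS Code extension, ranges like these would then be rendered with editor decorations (e.g. `TextEditor.setDecorations`), which is the mechanism Cursor-style inline highlighting builds on.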

After the requirements were formalized, Opus went off to write the code. It created the structure, wrote the files, gave me instructions on how to install the extension, and I went to test it.

Problems arose:

1) Even after three attempts, it was not possible to create attractive highlighting for modified (as opposed to added) code: it is still displayed as an addition plus a red inline annotation containing the "before" lines joined together:

Collapsed changes

It's moderately convenient, but much better than when the code jumped around the screen, got re-rendered, or changes weren't displayed at all. I don't know exactly how to fix it yet. When I have time, I'll open the CodeLens documentation and figure out how code rendering works in VS Code; here I will have to be the one telling the AI what to do.

2) If there are a lot of binaries in the project (which is typical for Databasus), the extension took a long time loading them into RAM. So I added a restriction: only files up to 1 MB in size are tracked, and files listed in .gitignore are skipped.
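That filter can be sketched as a small predicate (the pattern matching here is deliberately simplified and the names are illustrative; real .gitignore semantics are considerably richer):

```typescript
// Decide whether a file should be tracked: skip anything over the size
// limit and anything matched by simple .gitignore-style patterns.
const MAX_TRACKED_SIZE = 1024 * 1024; // 1 MB

export function shouldTrack(
  filePath: string,
  sizeBytes: number,
  ignorePatterns: string[]
): boolean {
  if (sizeBytes > MAX_TRACKED_SIZE) return false;
  for (const pattern of ignorePatterns) {
    // Treat each pattern as a path segment, e.g. "dist/" or "node_modules".
    const p = pattern.replace(/\/$/, "");
    if (
      filePath === p ||
      filePath.startsWith(p + "/") ||
      filePath.split("/").includes(p)
    ) {
      return false;
    }
  }
  return true;
}
```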

3) The extension itself only installed after ~5 attempts and changes to package.json. I don't understand why this happened either; I just went through Opus's suggestions by trial and error. Such are the costs of vibecoding.

What was the result?

Now the extension can copy links to selected files so that I can paste them into a chat with Claude Code using @. It's not drag-and-drop yet, but at least I don't have to type in the paths manually.

Code as References for Claude Code
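The "Copy as @ references" behavior boils down to a small pure function (a sketch with illustrative names, not the extension's actual API): strip the workspace root from each selected file's path and prefix the result with @.

```typescript
// Turn absolute paths of selected files into space-separated
// "@relative/path" references that can be pasted into the Claude Code chat.
export function toAtReferences(workspaceRoot: string, files: string[]): string {
  const root = workspaceRoot.endsWith("/") ? workspaceRoot : workspaceRoot + "/";
  return files
    .map((f) => "@" + (f.startsWith(root) ? f.slice(root.length) : f))
    .join(" ");
}
```

In the extension itself, a string like this would then go to the clipboard via the VS Code API (`vscode.env.clipboard.writeText`), ready to be pasted into the agent's chat.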

The extension tracks changes, highlights modified code snippets in the file, and allows you to accept or reject changes:

Changes highlight like in Cursor

So, before launching Claude Code into the code base, I click "Start tracking" and VS Code starts showing me everything that the agent has changed. Now I can review the changes relatively comfortably, move through the changed files, and see the context around the changed sections of code (albeit not as conveniently as in Cursor).

Conclusion

A couple of hours of vibecoding helped me bring the Claude Code experience closer to that of Cursor IDE. Now that I can see what exactly AI is changing, I can work productively with Claude Code. If this option weren't available, I wouldn't be able to review changes effectively and would return to Cursor for ~$200 a month, trading unlimited Opus for a good UX with Sonnet.

This is yet another reminder for me that small tools can now be made very quickly. Two years ago, the same extension would have taken me probably a week. And now, a working (albeit rough) solution is available in just ~3-4 hours.

The extension is available on GitHub — https://github.com/RostislavDugin/diffus

And don't forget: use AI wisely, it's just a tool like an IDE, not an opportunity to slack off and think less. It's just that now you can think with higher efficiency per unit of time 🙂.

If you liked the article, don't forget to take a look, give it a try, or at least star my GitHub repo for the PostgreSQL backup tool: https://github.com/databasus/databasus
