Let's address the elephant in the room: I am NOT a programmer. At least not a professional one, although I learned BASIC at the age of 16 (which was 40 years ago, yeah... that's the mammoth in the room).
Later, when I was 29, I ran a web development agency as a side business and built a dozen websites myself, so I guess I am a (kind of) developer after all. But it's more like a hobby, frankly.
It's a long story, actually, but I'll do my best to make it short. For a year or so I've been quite interested in AI development and have watched lots of videos on the subject, plus some courses on Udemy and Scrimba.
I am quite used to VS Code as my coding IDE, but Cursor and Antigravity are often mentioned as better alternatives for AI dev.
That was the /init for the main topic. This morning, in the middle of a Cursor course on Udemy, an idea struck me and I asked myself:
"Do I really need this course, and do I REALLY need ANOTHER coding IDE?"
Then came the idea: to put this question not just to myself, but to the leading LLMs: Perplexity, ChatGPT, Google Gemini, Anthropic's Claude, DeepSeek, Grok, Kimi, Qwen and Z.ai (GLM-5).
I use the free plans on all of them and often ask them all the same question, then compare their answers before making up my mind on a topic.
So, here is what I've asked them:
"I've been using VS Code as my main IDE for my (mainly frontend) and other software projects for several years. Now that I'm into AI (agentic) development, I often hear about Cursor and Antigravity as alternative IDEs for AI dev. Is it worth exploring either of these alternatives in depth, keeping in mind that I do not want to pay any more monthly fees for AI subscriptions and prefer to use open-source models locally? I also chose OpenCode over Claude Code and other paid alternatives."
Usually I read all the answers thoroughly after that, but this time I had a second idea: what if I asked OpenCode to do that for me?
And that's how this experiment turned into a small project. I created a new directory and put my question, and each model's answer, into corresponding MD files.
Afterwards I started OpenCode in a new terminal, fired the /init command to let it review all the files (and correct any Markdown errors, such as stray blank lines), and then gave it a simple task: read all the answers and deliver a final verdict.
It took OpenCode 1 minute and 10 seconds to fire out the verdict. It cost me about $1.
It's not that I'm lazy, but reading it all myself would have taken hours. So I decided to use the time saved to share this experience with you.
You might find it stupid or boring, but someone else might find it useful or even inspiring.
Do not let laziness, prejudice or anything else deprive you of your birthright to have fun. While learning...
Wanna read the final verdict? Here it is:
# Final Verdict: Stay With VS Code, Supercharge It Locally
The nine model writeups land on one message: keep VS Code (plus
OpenCode in the terminal) and pour effort into a local-first extension
stack instead of switching to Cursor or Antigravity.
## Why Cursor And Antigravity Miss The Mark
1. **Subscription pressure** – Cursor’s best agent modes (Composer, Auto,
higher request caps) hide behind $20+/month plans, so a free workflow
throttles fast (chatgpt.md, claude-sonnet-4-5.md, perplexity.md,
z_ai_glm-5.md).
2. **Cloud dependency** – Both IDEs expect hosted Claude/Gemini/GPT
models; local hooks exist but feel brittle or unsupported (grok.md,
kimi.md, qwen3_5-plus.md).
3. **Operational overhead** – Pointing Cursor at self-hosted models needs
tunnels or proxies and still routes code through their servers,
undermining privacy goals (gemini.md, perplexity.md).
4. **Antigravity’s trade-off** – The preview is free, but most of the
“manager view” polish depends on Gemini. Switching to your own models
drops key features, and future pricing is unknown (deepseek.md,
grok.md).
## What To Do Instead
1. **Stay on VS Code** – Every extension targets it first, and your
muscle memory is built there. Forks like OpenCode can lag upstream,
so keep stock VS Code as home base (qwen3_5-plus.md, z_ai_glm-5.md).
2. **Adopt a local agent extension** – Roo Code/Cline and Continue.dev
speak OpenAI-compatible APIs. Point them at Ollama or LM Studio for
Cursor-style agents without fees (gemini.md, qwen3_5-plus.md,
z_ai_glm-5.md, perplexity.md).
3. **Pair with a CLI agent** – OpenCode (already in use) or Aider handle
repo-wide edits, auto-commits, and terminal workflows fully offline
(kimi.md, qwen3_5-plus.md).
4. **Standardize the model runner** – Ollama is the simplest host for
DeepSeek Coder, Qwen 2.5/3, or Llama 3. Point every tool at
`http://localhost:11434` to keep data air-gapped (gemini.md,
qwen3_5-plus.md).
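To make step 2 and step 4 concrete: wiring a VS Code agent extension such as Continue.dev to a local Ollama server takes only a few lines of config. This is a minimal sketch, not a definitive recipe; the exact config keys can vary between extension versions, and the model name here is just an example of a coding model Ollama can serve:

```json
{
  "models": [
    {
      "title": "Local coding model (via Ollama)",
      "provider": "ollama",
      "model": "qwen2.5-coder:7b",
      "apiBase": "http://localhost:11434"
    }
  ]
}
```

Because Ollama exposes an OpenAI-compatible-style local API on port 11434, the same `http://localhost:11434` base URL can be reused across Roo Code/Cline, Continue.dev, and CLI agents, which is exactly what "standardize the model runner" means in practice: one local endpoint, many tools, no data leaving the machine.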
## Optional Experiments
1. **Try Antigravity briefly** – Sample the agent “manager view” while
the preview is free, but treat it as research. Once hosted quotas run
out, you fall back to your local stack (deepseek.md, grok.md).
2. **Weekend trial of Cursor** – Useful only to benchmark UX. Expect to
hit free limits quickly, and keep sensitive repos away because calls
transit their cloud (chatgpt.md, perplexity.md).
## Final Advice
Invest in a VS Code + Ollama + agent-extension workflow. You will reach
most of Cursor’s automation (and much of Antigravity’s planning) without
subscriptions or code exfiltration. Treat other IDEs as curiosities
until they can run first-class on local, open-source models.
Now you know. Non-hateful comments are highly welcome.