JetBrains polled 906 developers in January 2026. Four AI coding tools took all the oxygen: Copilot at 29%, Cursor at 18%, Claude Code at 18%, Windsurf at 8%. By April, two of those four are the same product: OpenAI closed its acquisition of Windsurf in March. If you are still paying for everything on the list, you are probably paying twice for the same product.
Here is the April 2026 cut of what's worth your subscription, what is not, and the one tool I keep renewing without thinking about it.
The rules for this post: I've sat with each of these in real work, prices are what you pay in April 2026, and the verdict at the end of each section is the one I'd give a friend starting a new project tomorrow.
GitHub Copilot — keep it only if someone else is paying
Copilot is 29% of the market because it ships on by default when your employer gives you a laptop with VS Code on it. That is its story. The product itself has improved: Copilot Chat got better in 2025, agent mode (generally available since late 2025) handles multi-file edits, and the GPT-5 and Claude 4.6 model options in the picker close most of the quality gap with Cursor.
Pricing in April 2026:
- Individual: $10/month
- Business: $19/user/month
- Enterprise: $39/user/month
Where it wins: inline tab completion on common patterns. A CRUD handler, a Zod schema, a standard SQL join. If you are writing new code line by line and your patterns are well-represented in the training data, Copilot still feels magical at that specific task.
Where it loses: anything past the current file. Project-wide context is thin compared to what Cursor does with .cursorrules or Claude Code does with CLAUDE.md. Agent mode works, but you can feel that it was retrofitted onto a tab-completion product. The UX is fine. The ceiling is lower than the competitors'.
Verdict: If your company pays, keep it. If you are paying $10 out of pocket, the money buys you more at Cursor or Claude Code. I cancelled my personal subscription in February. I still use Copilot on my work laptop because it is free to me.
Cursor — worth the $20 if you live in an editor
Cursor shipped version 3 in April 2026 with Composer 2, their own frontier coding model trained from scratch. Not a fine-tune of Claude. Their own pretraining run on code, their own kernels, 200+ tokens per second on their infrastructure. That changed the economics of the product. Parallel agents, an always-on BugBot reviewer, a local-only Ghost Mode — all of that ships because Anysphere now owns their inference stack.
Pricing in April 2026:
- Hobby: free, limited
- Pro: $20/month
- Business: $40/user/month
Where it wins: multi-file edits and refactors. The Composer panel reads ten files, plans an edit across all of them, applies diffs, runs your tests. You watch it work. When it goes wrong, you see where it went wrong and can stop it. That visibility is the thing the other IDE tools do not match.
Where it loses: terminal-heavy workflows. If your day involves running a build, grepping logs, editing a config, rerunning a migration, and then writing code, Cursor's agent is one step removed from all of that. You end up alt-tabbing to a terminal anyway. Also: the $20 plan has a "fast request" cap. Heavy users hit the slower model mid-month, and the vibes change when that happens.
Verdict: Worth it if your work is mostly in-editor. If you are shipping a TypeScript app, a React frontend, a Next.js backend — Cursor earns the $20. I keep a Pro subscription active for the design-heavy projects where Composer 2 and Design Mode pull their weight.
Windsurf — wait out the bundle
Windsurf is OpenAI's now. The acquisition closed in March 2026, and the product roadmap is getting folded into OpenAI's coding stack alongside TBPN (the terminal tool OpenAI bought in April). The standalone Windsurf Pro at $15/month still exists, but the signal from OpenAI is loud: expect a ChatGPT Plus + Windsurf bundle in the $25–30 range before Q3. If that lands, the standalone $15 tier becomes silly overnight.
Pricing in April 2026:
- Free: limited
- Pro: $15/month
- Teams: $35/user/month
Where it wins: if you already pay for ChatGPT Plus, this is the coding surface that ships first with every new OpenAI model. GPT-5.4 support was in Windsurf the day the model shipped. That is a distribution advantage Cursor cannot match on OpenAI models specifically.
Where it loses: the rest of the time. Against Cursor, Windsurf's Cascade agent is slightly behind on multi-file reasoning. Against Claude Code, it is not close on long-running agentic tasks. The only reason to be here today is the ChatGPT tie-in, and that reason gets stronger after the bundle lands, not before.
Verdict: Do not subscribe standalone in April 2026. Wait for the bundle. If you must use an OpenAI-first coding tool, the free tier covers most of what the $15 tier does anyway.
Claude Code — the one I can't live without
This is the verdict I've been circling. Claude Code is the only one of these tools I run every day and would pay double for if I had to. It is a terminal CLI from Anthropic, it runs on Sonnet 4.6 and Opus 4.6, and it does something the IDE-native tools structurally cannot: long-running agent sessions that read a repo, make a plan, execute across many files, run the test suite, and fix the failures.
Pricing in April 2026:
- Claude Pro (includes Claude Code with Sonnet 4.6 default): $20/month
- Max plan (higher limits, Opus 4.6 for agent work): $100/month or $200/month tiers
- API usage, metered: what heavy users actually pay on top
Where it wins: agentic work. "Add rate limiting to all public endpoints with Redis, per-user tier limits, tests passing." That prompt in Claude Code gets an actual working implementation across fifteen files in twenty minutes. No IDE tool in this list runs that same loop at the same quality. The CLAUDE.md file in the repo root persists project context across sessions, so the agent stops asking you the same questions about your stack on turn one of every new task.
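To make the CLAUDE.md mechanism concrete, here is a minimal sketch of what one might contain. Everything in it is illustrative (the project name, stack, and commands are invented, not from a real repo); the point is that the file sits in the repo root and the agent reads it at the start of every session, so project facts only get written down once.

```markdown
# Project: payments-api (illustrative example)

## Stack
- Go 1.23, chi router, PostgreSQL 16, Redis for rate limiting
- Run tests with `go test ./...`; migrations with `make migrate`

## Conventions
- HTTP handlers live in internal/http, one file per resource
- Never edit generated code under gen/ — regenerate it instead

## Agent instructions
- Run the full test suite before declaring a task done
- Ask before adding any new third-party dependency
```

A file like this is why the "add rate limiting to all public endpoints" prompt works on turn one: the agent already knows the router, the database, and the test command without being told.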
Where it loses: flow-state line-by-line coding. There is no tab completion. There is no inline prediction. If your current task is "type a function, see the next three lines suggested, hit tab," Claude Code is not for you in that moment. The context switch to the terminal costs real seconds, and those seconds are felt when the task is small.
The pricing reality is worth being honest about. On the Max $100 tier, heavy Opus 4.6 users report all-in costs around $150–250 per month across subscription plus metered API usage. That is more than any other IDE subscription in the category. It is also cheaper than a single day of a junior engineer's time, and for the work it does, that comparison is the one that matters.
Verdict: This is the one. If I had to pick a single AI coding subscription and cancel everything else, Claude Code on the Max plan is it. The agent work is structurally different from what the IDE tools do, and nothing else ships that capability at this quality in April 2026.
Full disclosure: I'm building Hermes IDE around Claude Code as the engine. The reason Hermes exists is that Claude Code is good enough to be the compute layer, and the UI layer around it is still a fragmented mess of terminal sessions, tmux panes, and custom hooks. That is the gap Hermes is filling. If you already use Claude Code and you want an editor that is built from scratch to make the agent-first workflow feel native, that is the project. If you are happy in a terminal, carry on.
The also-rans: Cline, Cody, Continue
Three tools that show up in every comparison post and are worth a quick honest take.
Cline (formerly Claude Dev)
Open-source VS Code extension that wraps Claude and GPT models as an agentic coding tool inside the editor. Free to use, you bring your own API key. The feature set overlaps with Cursor's Composer but lives inside regular VS Code without the fork.
Where it wins: you already own a Claude or OpenAI API key, you do not want to leave VS Code, and you are comfortable paying per-token for inference. No subscription.
Where it loses: UX polish compared to Cursor. The agent panel works, the diffs show up, but the rough edges are real. API usage costs add up fast if you run long sessions.
Verdict: Solid free option if you have API credits and patience. Not the tool I reach for, but not a bad choice for the price.
Cody (Sourcegraph)
Cody's pitch is codebase-wide search and context. It reads your entire repo via Sourcegraph's indexing, which in theory means better answers on large codebases than tools that rely on context-window stuffing.
Pricing: Free tier, Pro at $9/month, Enterprise custom.
Where it wins: genuinely large monorepos where semantic search across thousands of files matters more than model quality on a single task. The Sourcegraph index is real infrastructure, not a prompt trick.
Where it loses: everywhere else. On a normal 50-file repo, the context advantage does not show up, and the model choice and agent loop are weaker than Cursor or Claude Code.
Verdict: If you work in a 10,000-file monorepo at a big company, Cody earns a look. Otherwise, skip.
Continue
Open-source extension for VS Code and JetBrains, configurable model backend. Think of it as the "build your own Cursor" kit.
Pricing: Free, you bring models.
Where it wins: flexibility. Point it at a local Ollama model for air-gapped work, point it at Claude 4.6 for heavier tasks, configure your own prompts and agents.
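As a sketch of what that flexibility looks like: Continue is configured through a file in your home directory, with one entry per model backend. The shape below follows the JSON config format older Continue releases used; the schema changes between versions (newer releases use YAML), and the model names and placeholder key here are illustrative, so check the current docs before copying it.

```json
{
  "models": [
    {
      "title": "Local Llama (air-gapped)",
      "provider": "ollama",
      "model": "llama3.1:8b"
    },
    {
      "title": "Claude for heavy tasks",
      "provider": "anthropic",
      "model": "claude-sonnet-4-6",
      "apiKey": "sk-ant-..."
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```

The design trade is visible in the file itself: every line is a decision you made, which is exactly the "you are the integrator" cost the verdict below describes.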
Where it loses: configuration cost. You are the integrator. If you want something that works out of the box, this is not it.
Verdict: Good if you enjoy tuning your own dev environment. Not a product for the average paying developer.
What I actually pay for in April 2026
Cutting through all of it, here is the subscription stack I run:
- Claude Code Max plan: $100/month, plus metered Opus usage on top
- Cursor Pro: $20/month, for the editor work that is not agentic
- Copilot: kept on the work laptop because employer pays, cancelled on personal
Total out of pocket on coding tools: about $120–200/month depending on how much agent work I run through Opus. For the work it does, that number is fine. For the alternative of not having these tools, that number is a rounding error.
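The arithmetic behind that $120–200 range is simple enough to spell out. The fixed costs come straight from the stack above; the swing factor is metered Opus usage, which this sketch assumes runs $0–80 a month on top of the Max subscription (an assumption implied by the range, not a figure from any bill).

```python
# Monthly coding-tool stack from the post (April 2026 prices).
claude_max = 100   # Claude Code Max subscription
cursor_pro = 20    # Cursor Pro
copilot = 0        # employer-paid, so $0 out of pocket

# Metered Opus API spend is the variable; $0-80/month is an
# assumption that reproduces the stated $120-200 range.
metered_low, metered_high = 0, 80

low = claude_max + cursor_pro + copilot + metered_low
high = claude_max + cursor_pro + copilot + metered_high
print(low, high)  # 120 200
```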
The tools I dropped in the last six months: Windsurf (waiting on the OpenAI bundle), Cody (no monorepo need), personal Copilot (Cursor covers the same ground). The tool I renewed without thinking: Claude Code.
If the JetBrains survey rerun in January 2027 still has four names on it, my money is on Claude Code being one of them, and on the 8% for Windsurf having shifted into whatever OpenAI calls the bundled product. The two names I am less sure about are Copilot (incumbent inertia is real but not infinite) and Cursor (Composer 2 is a real moat, but the $20 tier is the kind of thing a tighter budget year cuts first).
Pick one agent-first tool. Pick one editor-first tool if you need it. Skip the rest. That is the 2026 cut.
If this was useful
Picking AI tools is the first half of the problem. The second half shows up when the AI-powered features you build with those tools hit production and the dashboards stay green while the users get angry. The book walks through how to instrument LLM applications so that the blind spots APM tools leave behind — hallucinations, silent drift, agent loops, cost blowups — show up on a graph before they show up in a support ticket.
And if you already live in Claude Code, Hermes IDE is where I'm pushing the envelope on what the UI layer around an agent-first tool can look like.
- Book: Observability for LLM Applications · Ebook from Apr 22
- Also by me: Thinking in Go — Book 1: Go Programming + Book 2: Hexagonal Architecture
- Hermes IDE: hermes-ide.com — an IDE for developers who ship with Claude Code and other AI coding tools
- Me: xgabriel.com | GitHub