DISCLAIMER! The best AI coding tool is the one available to you that gives you the best model and reasonable token limits. From the text below it...
Great writeup. Very relatable initial premise.
hit this exact wall. cursor's fast-apply mode eats tokens insanely fast on anything with large files - one decent refactor session and you're done for the day. copilot's subscription model removes that anxiety completely which is genuinely underrated. but i kept missing cursor's codebase awareness - the way it just knows what you're working on without you having to explain context every time. ended up going back and being way more deliberate about when i trigger expensive operations
The config portability problem Ned mentioned is what kills me. I've been burned by this enough that I now keep a tool-agnostic AGENTS.md at the project root — both Cursor and Copilot pick it up, and if I need to bail to Claude Code or something else, the context carries over.
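For anyone setting this up, a minimal tool-agnostic AGENTS.md might look like the sketch below. The stack, commands, and paths are made up for illustration, not taken from the article:

```markdown
# AGENTS.md — project conventions for any coding agent

## Stack
- TypeScript (strict mode), Node 20, pnpm

## Conventions
- Run `pnpm test` before proposing a commit
- Prefer small, single-purpose modules; no default exports

## Context
- API handlers live in `src/api/`; shared types in `src/types/`
```

The point is that the file is plain markdown at the repo root, so any agent that reads AGENTS.md gets the same context without tool-specific config.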
Your point about Copilot's plan mode being a "gray piece of misery" made me laugh. I tried it once, got a wall of text from a subagent that basically restated my prompt, clicked Proceed, and it just... started over. Never used it again. Cursor's structured .MD plan with the Build button is genuinely one of the best features in any coding tool right now.
One thing I'm curious about — with nearly 1T tokens used in 2025, what's your monthly Cursor bill looking like? I've been hesitant to go all-in on agentic loops because the token burn gets wild fast, especially with subagents. Do you find the productivity gain justifies it vs being more deliberate with prompts?
Last year I had a $150 monthly credit in Cursor, and by the end of the year I was chronically hitting the limit... Now at $300 and it's still not enough to last even two weeks, so I'm relying on my GH Copilot subscription more because of that.
I’ve been debating making the switch myself because of the token limits. The breakdown of the 'Checkpoint' reliability is a huge factor I hadn't considered. Definitely leaning towards keeping both installed for different use cases now. 🚀 Great write-up!
The token anxiety is real. I've started thinking of Cursor credits the same way I think about AWS costs — you don't realize how much you're burning until the bill comes.
The interesting thing in your comparison: Copilot's advantage is the deep VS Code integration. When it works, it feels like the IDE itself understands what you're trying to do. But Cursor's context handling for multi-file changes is noticeably better. I end up using both depending on the task type.
What's your usage pattern — mostly completions or more chat-based generation?
Don't do completions at all, mostly agentic loops, relying more on subagents to squeeze more scope into a single thread
Token cost tracking needs to be built in from day one, not added after the first surprise invoice. The asymmetry is that costs scale with task complexity, not user count — which breaks the mental model most people have from pricing cloud compute.
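One way to bake that in from day one is a tiny ledger that records every agent call's token usage and running cost. A minimal sketch, with assumed per-million-token rates (not any vendor's real pricing):

```python
# Minimal token-cost ledger: record each agent call as it happens,
# so the first "surprise invoice" never arrives.
from dataclasses import dataclass, field

@dataclass
class TokenLedger:
    # Prices are USD per 1M tokens; illustrative numbers only.
    input_price: float = 3.00
    output_price: float = 15.00
    calls: list = field(default_factory=list)

    def record(self, label: str, input_tokens: int, output_tokens: int) -> float:
        # Compute this call's cost and append it to the running log.
        cost = (input_tokens * self.input_price
                + output_tokens * self.output_price) / 1_000_000
        self.calls.append((label, input_tokens, output_tokens, cost))
        return cost

    @property
    def total_cost(self) -> float:
        return sum(c[3] for c in self.calls)

ledger = TokenLedger()
ledger.record("refactor plan", 120_000, 8_000)   # one big multi-file pass
ledger.record("subagent: tests", 90_000, 12_000)
print(f"${ledger.total_cost:.2f}")  # → $0.93
```

Because costs scale with task complexity rather than seat count, a per-call log like this surfaces which workflows (e.g. subagent fan-out) are the expensive ones.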
Great comparison. The plan mode difference really stood out to me — I use Cursor's plan mode heavily for multi-step refactors and the structured .MD output + "Build" button workflow is a game changer. Getting a generic paragraph back from Copilot's plan mode sounds frustrating.
The "Apple vs Microsoft" analogy is spot on. I've noticed the same pattern across the AI tooling space — the startups that dogfood their own products aggressively just ship better UX. The dialog branching feature alone saves me hours per week when context windows get long.
One thing I'd add: the image file viewing gap in Copilot is a bigger deal than people realize. I've been building a project that involves analyzing financial document screenshots, and not being able to reference saved images in the workspace is a dealbreaker. Curious if you've tried any workarounds beyond pasting into chat?
Thanks for the feedback! For agent image viewing in Copilot, what I found works is using Claude Code through the Copilot subscription :) I wanted to look for MCP workarounds, though I never got around to it
Oh nice, Claude Code through Copilot is a solid workaround! I hadn't considered that route for image viewing. For MCP, there are a few community servers that handle multimodal input now - worth checking the awesome-mcp-servers list on GitHub if you ever get around to it. The ecosystem's moving fast.
The plan mode difference is the most telling signal in this comparison. Structured output with an explicit build step versus a paragraph response maps directly to how well each tool fits into an existing workflow versus expecting you to adapt to it. The deeper question is whether the IDE integration advantage compounds over time or whether the two converge as agents become more autonomous.
switching between tools sucks because none of the config transfers. i've got .cursor/rules files tuned for how i work, and if i switch to copilot or claude code for a week, all of that context is gone. i have to re-explain my project conventions from scratch every session.
i built a linter for cursor rules partly because of this. wanted to at least know if my rules were valid before i invested time tuning them for a tool i might have to abandon next month. the whole ecosystem feels like it's one pricing change away from forcing a migration nobody's config is ready for.
Cursor deprecated rules and proposed migrating to agent skills; putting skills in .claude/skills makes them discoverable by major tools. Same goes for AGENTS.md as an alternative to copilot-instructions
rules aren't deprecated though. cursor's docs still have a full active page for them with four types (project rules, user rules, team rules, AGENTS.md), and the v2.4 changelog explicitly positions skills as complementary: "compared to always-on, declarative rules, skills are better for dynamic context discovery and procedural how-to instructions."
there is a /migrate-to-skills command, but from what i can tell it converts dynamic rules and slash commands into skills, not all rules. the use cases are different: rules are declarative and always-on ("use TypeScript strict mode"), skills are procedural and on-demand ("here's how to deploy to AWS").
you're right about .claude/skills/ being cross-tool discoverable though. cursor auto-discovers .claude/skills/, .codex/skills/, and .cursor/skills/. that part of the ecosystem is converging. but for the kind of stuff most people put in .cursor/rules/ (coding conventions, style enforcement, framework patterns), rules are still the right tool.
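The split you're describing can be sketched with two hypothetical files (contents invented for illustration):

```markdown
<!-- .cursor/rules/style.mdc — declarative, always applied -->
Use TypeScript strict mode; never introduce `any`, prefer `unknown` plus narrowing.

<!-- .claude/skills/deploy-staging/SKILL.md — procedural, discovered on demand -->
# Deploying to staging
1. Run `pnpm build` and verify the bundle size report
2. Run `pnpm deploy --env staging` and check the health endpoint
```

The rule shapes every generation passively; the skill is only pulled in when the agent decides the task matches it, which is why converting always-on conventions into skills would be the wrong direction.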