I've been trying out a bunch of AI coding tools lately, and I'm sure I'll be trying out a bunch more going into 2026. Started with Claude, Copilot, Windsurf, then Cursor, and somewhere along the way I realized I'd tested like a dozen of these things trying to figure out which ones actually help versus which ones I had to babysit to get them to cooperate.
Turns out they're all pretty different from each other. Some are autocomplete with extra features. Others try to be more like a coding partner or at least a junior programmer. A few let you describe what you want and they try to build it, which sometimes works and sometimes goes sideways.
The ones I actually use:
Cursor is what I use most days. The codebase chat is useful for understanding how old code works. Agent mode is inconsistent but when it works it can save you serious time. $20/month feels like a lot until you see how much faster things get.
GitHub Copilot is solid. $10/month, works in most editors, does what it says. Not as flashy as Cursor but it's reliable. If you have GitHub Pro you might already have access.
Some I found interesting:
Windsurf is free if you bring your own API keys, which is pretty smart. Being newer means less help online when stuff breaks, but the actual tool works well.
Aider is a terminal tool that works with any editor. Free and open source. Really good at making changes across multiple files. No autocomplete though, so it's more for bigger refactoring tasks. It's also more involved to get up and running.
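If you want to give it a shot, setup looks roughly like this (going from memory, so double-check the Aider docs; the project and file names here are just placeholders):

# install the CLI
pip install aider-chat
# set the API key for whichever provider you're using, e.g. Anthropic
export ANTHROPIC_API_KEY=your-key-here
# run it from inside your git repo and name the files you want it to edit
cd your-project
aider app.py utils.py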
Claude Code (VS Code extension) is what I'd use for small, self-contained tasks; it's not great at planning a large project. It's also not great at autocomplete, but the reasoning is solid. The $20 plan gets you a lot and resets every 5 hours or so.
Others I tried:
There are more. Amazon Q is AWS-focused. Supermaven is fast. Lovable and Bolt are for building apps without code. They all do different things, which is why asking "what's the best" doesn't really work.
Things I figured out:
The free options (Aider, Cline, Windsurf with your keys) are actually good. The paid ones are more polished and better at autocomplete but it's not a huge gap.
Being specific helps. Don't just drop an error in there. Give context, say what you tried, explain what you're working on. For example, instead of pasting a bare stack trace, say which function it comes from, what input triggers it, and what you've already ruled out. More info means better output.
Big projects with lots of dependencies still confuse these tools, like I was saying with Claude. They handle focused tasks well but get lost when too much is happening at once. Cursor does better with complexity, especially using the planning agent, but you still need to guide it.
If you want to try one:
GitHub Copilot if you're paying. $10/month, stable, lots of resources.
Cursor gives you a free week where you can use any model on what feels like an unlimited basis, but once the week is over you'd best hit the brakes on the expensive models. If you don't, you'll get throttled or even hit the limit, although I never have.
Windsurf or Continue if you want something free. You need your own API keys, but then there's no limit.
Just pick one and use it for a week. You'll know pretty quick if it fits how you work.
Full comparison here:
I put together a breakdown of 18 tools with scores, pricing, what they're good at: Best AI Coding Tools Comparison 2026
Hope this helps, but please let me know what's working for you.