
Paul

Cursor vs Trae vs Kiro vs GitHub Copilot: My Honest 2-Year Review of 4 AI IDEs

Two Years, Four Mainstream AI IDEs, and Honest Paid-Subscriber Feedback — Not an Ad, Just a Real-World Pitfall Guide

This article is about my real experience with four tools: Cursor, Trae, Kiro, and GitHub Copilot. I’ll walk through how I got into each one, what feels great in day-to-day use, and where each one breaks down. If you’re deciding which IDE to use next—or you’re already unhappy with your current setup—I hope this hard-earned experience helps.

That said, tools are always personal. What works for me may not work for you. If your experience is different, or you think I missed a better setup, feel free to challenge me—discussion is exactly how we all get better.

I: My AI Coding Journey

GitHub Copilot: From “Old-School Typing” to “Tab-Driven Coding”

When people talk about AI coding, Copilot is hard to ignore. In 2023, when it moved from preview into mainstream use, I was an early adopter. It felt revolutionary at the time: type a few characters and it could complete an entire line, block, or function. I jumped from “manual typing” to “trust the Tab key” almost overnight.

I’m a heavy JetBrains user, and VS Code never felt like home to me. Copilot’s strength is that it works across environments: not only VS Code, but also an official IntelliJ IDEA plugin. That gave my existing workflow a smooth AI upgrade. In 2023–2024, “IDEA interaction + Copilot completion quality” was my productivity sweet spot. I paid for it for two straight years, and it became foundational to my workflow.

Completion tip: Copilot pushed me to write clearer comments. The clearer I described intent in natural language, the better its completions became. That also improved my code readability.
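To make the tip concrete, here is the kind of intent-first comment that tends to produce good completions. The comment and function are my own illustration, not Copilot output — in practice you write the comment and the signature, and the assistant fills in the body:

```python
from datetime import date

# Parse an ISO-8601 date string and return the number of whole days from
# today until that date (negative if the date is already in the past).
def days_until(iso_date: str) -> int:
    target = date.fromisoformat(iso_date)
    return (target - date.today()).days
```

Given a comment that precise, the completion usually needs no further guidance — and the comment doubles as documentation for the next reader.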

Cursor: Shifting from Manual to Automatic

Cursor represented a new paradigm: not just completion, but conversation-driven development.

Talk to me,
Code for me.

When this capability first appeared, it was a shock to the industry. While Copilot was still optimizing Tab completion, Cursor was already handling higher-level reasoning and generating large code blocks from natural language prompts.

I started using Cursor in 2024. At that time there was no IDEA plugin, so I had to work in VS Code, which took some adjustment. My prompting skills were also weak at the beginning, so it didn’t click immediately. But after adapting to the workflow, I hit a second major productivity jump.

Kiro: Following the Leader, with Growing Pains

Because Cursor could be unstable in some network environments, I looked for a backup. That’s when I found Kiro. It started as Amazon Q and later became a standalone AI IDE. As an early user, I had a long free period, and when Cursor failed, Kiro often saved me.

Kiro is also VS Code-based, so onboarding was easy after using Cursor. The purple theme grew on me too. As an AWS product, it appears feature-complete: code completion, chat coding, diagnostics, and more.

But “has features” and “feels good to use” are very different things. Kiro has had more basic product bugs than any AI IDE I’ve used. I’m not talking about model hallucinations; I mean UI and interaction bugs in the product itself. That’s especially frustrating when expectations are high for a company known for infrastructure reliability.

Trae: A Late Entrant with Strong Engineering Discipline

I had seen Trae ads for a long time, but only started using it deeply in early 2026. I use the international version and subscribed with a local Visa card. By then, I was fully comfortable with VS Code-like IDEs.

My first impression: Trae is extremely polished in the fundamentals. The UI is clean, responses are fast, and localization is excellent. There are reports that the team rewrote major VS Code internals in Rust. True or not, large projects feel very smooth. After months of use, I can say this: touching so much low-level architecture while staying this stable is genuinely impressive. (Yes, this is also a subtle shot at Kiro 😏)

Of course, I’ve used Trae for less time than the others, so my view may evolve.

II: Deep Comparison — The Devil Is in the Details

Enough background. Here’s a direct breakdown of strengths and weaknesses from daily, high-intensity use.

GitHub Copilot: The Best Tab Experience

  • Positioning & price: $10/month. Its biggest advantage is ecosystem reach. You can use it in VS Code, the IntelliJ suite, Vim, and Neovim.
Subscribe once, tab everywhere
  • Core strengths:
    1. Completion quality: After years of iteration, its line- and block-level completion remains top-tier in both speed and accuracy.
    2. Ecosystem fit: Deep GitHub integration gives it strong context awareness for repos, style, and team habits. In JetBrains, it still feels best-in-class.
  • Core weakness (Agent mode):
    1. Slow/inaccurate context retrieval: It often misses the right files unless you repeatedly guide it.
    2. Whole-file rewrite behavior: This is my biggest pain point. Instead of focused edits, it tends to regenerate full files, which causes:
      • Unwanted formatting/style churn: comments, spelling, and style get changed even when not requested.
      • Higher breakage risk: full-file rewrites raise the chance of introducing regressions.

Bottom line: Copilot is an elite “Tab accelerator.” But if you want an autonomous project-aware partner for complex tasks, it can feel behind newer AI IDE workflows.

Kiro: Backed by a Giant

  • Positioning & price: $20/month for the entry tier (1,000 points). There is no free fallback model once points run out, so AI features simply stop working.
  • Core strengths:
    1. Claude-based intelligence: Good reasoning and instruction following for most engineering tasks.
    2. Spec-driven mindset: Useful for teams that prioritize strict, document-first workflows.
  • Core weaknesses (frustrating details):
    1. Visible product bugs: From completion edge cases to UI glitches.
    2. Painful file reference UX: In chat, you often have to manually type file names. In large projects with similar names, this is a productivity killer.
    3. Fragile image handling: The model may not support images, yet the IDE still lets you paste them; accidentally pasting rich text that contains images can break the session.
    4. Long-context slowdown: It starts strong, then degrades into very slow “one-line edit, one-step think” behavior in late-stage refinements.
    5. Expensive token economy: $20/1000 points can disappear quickly on long-context tasks. Several full-time users around me report ~$40/month as the practical minimum.

What Kiro taught me: Control context aggressively. Split weakly related tasks into separate sessions. Precisely scope files with @ references instead of relying on broad retrieval.

Trae: An Elegant Swiss-Army Knife

  • Positioning & price: $10/month entry plan with excellent value.
  • Core strengths:
    1. Excellent fundamentals: UI, performance, stability, and localization are consistently strong.
    2. Efficient workflow design: Its agent tends to follow a consistent pipeline: Find rules -> Retrieve code -> Read context -> Think -> Edit -> Check -> Fix -> Summarize. It uses retrieval and proactive context compression, which helps reduce token waste.
    3. Strong Chinese/local adaptation: Especially useful for Chinese-language workflows and local ecosystems.
  • Core weaknesses (trade-offs of its philosophy):
    1. Overkill for tiny tasks: The full process can feel heavy for simple edits or translations.
    2. Code-first bias: Outside coding tasks, it may ask for project structure before proceeding.
    3. Session concurrency limits: Compared with some competitors, parallel multi-chat workflows feel more constrained.
    4. “Prepare first” tendency: Even with precise instructions and file targeting, it may still perform extra retrieval before acting.
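The pipeline described above can be sketched as a simple loop. This is purely my illustration of the observed behavior — Trae's real implementation is not public, and every step function here is a hypothetical stand-in:

```python
# Illustrative sketch of a retrieval-first agent loop like the one Trae
# appears to follow. All step functions are hypothetical stand-ins.

def find_rules():           return ["prefer-small-diffs"]          # project rule files
def retrieve_code(task):    return [f"snippet-for:{task}"]         # search, not whole files
def compress(items):        return items[:5]                       # keep context small
def think(task, ctx):       return {"task": task, "ctx": ctx}
def edit(plan):             return {"plan": plan, "ok": False, "tries": 0}
def check(edits):           return edits["ok"]
def fix(edits):
    edits["tries"] += 1
    edits["ok"] = True                                             # pretend one pass suffices
    return edits
def summarize(edits):       return f"done after {edits['tries']} fix pass(es)"

def run_agent(task: str) -> str:
    rules = find_rules()                     # 1. find rules
    snippets = retrieve_code(task)           # 2. retrieve code
    context = compress(rules + snippets)     # 3. read + compress context
    plan = think(task, context)              # 4. think
    edits = edit(plan)                       # 5. edit
    while not check(edits):                  # 6. check
        edits = fix(edits)                   # 7. fix
    return summarize(edits)                  # 8. summarize
```

The key design choice — retrieving and compressing context *before* thinking — is what keeps token usage down on large projects, and also why the flow feels heavy for one-line edits.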

My take on Trae: Polished, powerful, and methodical. Great for substantial tasks; less ideal for lots of micro-edits.

Cursor: The Gold-Standard All-Rounder

  • Positioning & price: $20/month (with more flexible plans now).
  • Why it still feels best overall: Most pain points from other tools are either absent or better handled.
    • Project-level understanding: Strong repository awareness without constant manual file tagging.
    • Fast and focused edits: Usually edits targeted blocks instead of rewriting full files.
    • Reliable baseline experience: Fewer product-level bugs in day-to-day use.
    • Flexible model fallback: Auto mode can bridge gaps when premium quotas are exhausted.
    • Smooth interaction model: Chat, edit, and file operations feel tightly integrated.
  • Biggest drawback: In mainland China, reliable access to top overseas models (like Claude) usually requires a stable proxy/VPN setup.

III: Summary & Selection Guide — Find Your Coding Partner

So which one should you choose?

  1. GitHub Copilot: Your Tab Booster

    • Best for: Developers who want AI enhancement inside their existing IDE workflow.
    • One-liner: If you want stronger coding flow without changing your toolchain, Copilot’s cross-IDE plugin ecosystem is hard to beat.
  2. Cursor: Your General-Purpose AI Coding Partner

    • Best for: Developers ready to embrace conversation-first coding and able to handle connectivity requirements.
  3. Kiro: Your Process-Heavy Contractor

    • Best for: Teams that are deeply tied to AWS and strongly prefer spec-driven, documentation-first delivery.
  4. Trae: Your Automated Workshop Lead

    • Best for: China-based developers, especially those on local stacks, who care about cost-efficiency and structured automation.
    • One-liner: Excellent fundamentals, strong localization, and highly automated flow.

I welcome discussion—and disagreement.

Extra Notes (Detailed Comparison)

Here’s the short practical version from real usage:

  • Copilot is still the best pure Tab completion experience, especially if you use multiple IDEs.
  • Its Agent mode is weaker: context lookup can miss, and full-file rewrites often create noisy diffs.
  • Kiro can solve real tasks with Claude-class reasoning, but UX issues and point-based billing can make long sessions expensive.
  • Kiro taught me to aggressively manage context and explicitly scope files to save tokens.
  • Trae is very strong on product quality and token efficiency, but can feel heavy for tiny, non-coding tasks.
  • For larger tasks, Trae’s structured flow is reliable and cost-effective.
