Jim L

I Tested Cursor 3 Glass for a Week — The Agent-First IDE Is Real, But Not for Everyone

Cursor shipped version 3 on April 2 under the codename Glass, with a rebuilt interface that moves the code editor into the passenger seat.

The pitch: you describe tasks in natural language, AI agents write the code, and you orchestrate. It sounds like marketing copy until you actually open the Agents Window and see three parallel tasks running across different repos simultaneously.

What Actually Changed

The old Cursor was a VS Code fork with an AI sidebar. Version 3 is something else entirely. The Agents Window is a separate workspace where each task gets its own context, its own file access, and its own execution thread. You can run local agents or cloud agents — the cloud ones persist even when you close your laptop.

Design Mode is the other big addition. You can point at a UI element and describe what you want changed. It generates the code, previews the result, and you approve or reject. For React and Next.js projects, this worked surprisingly well in my testing. For anything with complex state management, it struggled.

The Good Parts

Parallel execution is genuine. I ran a test where Agent 1 was refactoring a data layer while Agent 2 was building a new API endpoint. They didn't conflict. The context isolation means each agent sees a consistent snapshot of the codebase, and Cursor handles merging the changes.
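To make the snapshot idea concrete, here is a minimal sketch of snapshot-style isolation and merging. This is my own illustration of the concept, not Cursor's actual implementation; the file map and merge logic are hypothetical.

```typescript
// Hedged sketch of snapshot-based context isolation (not Cursor's actual
// implementation): each agent edits its own copy of the file map, and
// non-overlapping changes merge cleanly at the end.
type FileMap = Record<string, string>;

function snapshot(repo: FileMap): FileMap {
  return { ...repo }; // each agent gets an independent copy
}

function merge(base: FileMap, ...snapshots: FileMap[]): FileMap {
  const merged = { ...base };
  for (const snap of snapshots) {
    for (const [path, content] of Object.entries(snap)) {
      if (content !== base[path]) {
        // Take only the files each agent changed; a real merge would
        // also detect and resolve conflicting edits to the same file.
        merged[path] = content;
      }
    }
  }
  return merged;
}

// Two "agents" work on disjoint files from the same base state.
const repo: FileMap = { "src/data.ts": "v1", "src/api.ts": "v1" };
const agent1 = snapshot(repo);
agent1["src/data.ts"] = "refactored data layer";
const agent2 = snapshot(repo);
agent2["src/api.ts"] = "new endpoint";

const result = merge(repo, agent1, agent2);
// result now contains both agents' changes, with no conflict
```

Because the two agents touched disjoint files, the merge is trivial; the interesting (and harder) case Cursor has to handle is when two agents edit the same file.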

Multi-repo support. You can open multiple repositories in a single workspace and run agents across them. For monorepo-heavy teams, this matters.

The prompt box as primary interface. Instead of navigating menus and panels, you describe what you want. "Add error handling to all API routes in /src/api" — and an agent spins up, creates a plan, and starts executing. This felt natural after about 20 minutes.
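For a sense of what a prompt like that produces, here is a sketch of the kind of edit an agent might make. The handler shape and helper name are hypothetical, not Cursor's actual output: the common pattern is wrapping each route in a shared error boundary instead of repeating try/catch.

```typescript
// Hedged sketch of the kind of change "add error handling to all API
// routes" might produce. Handler type and names are illustrative.
type Handler = (req: unknown) => Promise<{ status: number; body: unknown }>;

function withErrorHandling(handler: Handler): Handler {
  return async (req) => {
    try {
      return await handler(req);
    } catch (err) {
      // Log and return a 500 instead of letting the route crash.
      console.error("route failed:", err);
      return { status: 500, body: { error: "internal error" } };
    }
  };
}

// Example route before the agent's change: any thrown error escapes.
const getUser: Handler = async () => {
  throw new Error("db unavailable");
};

// After: each exported route in /src/api gets wrapped.
const safeGetUser = withErrorHandling(getUser);
```

The appeal of the prompt-box workflow is exactly this kind of mechanical, repetitive edit across many files; the risk is that you still need to review what the agent actually wrapped.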

The Honest Problems

Context window limits hit fast. Large codebases — anything over roughly 50K lines — caused agents to lose track of earlier instructions. I had to break tasks into smaller chunks manually, which somewhat defeats the purpose.
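The manual workaround amounts to plain batching: split the target files into groups small enough to fit in context, then prompt the agent once per group. A minimal sketch (the batch size of 10 is illustrative, not a Cursor limit):

```typescript
// Hedged sketch of the chunking workaround: batch files so each agent
// prompt stays within the context window. Batch size is illustrative.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Instead of one prompt over the whole codebase, issue one per batch.
const files = Array.from({ length: 25 }, (_, i) => `src/module${i}.ts`);
const batches = chunk(files, 10);
const prompts = batches.map(
  (batch) => `Add error handling to: ${batch.join(", ")}`
);
// 25 files in batches of 10 -> 3 smaller tasks instead of one oversized one
```

Having to do this by hand is the complaint: the orchestration layer should be doing the splitting for you.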

Cloud agents are slow. Local agents respond in seconds. Cloud agents take 30-90 seconds to start, and they run on Cursor's infrastructure. If their servers are loaded (which happened twice during my week of testing), everything stalls.

Pricing moved upmarket. Pro is still $20/month, but the Business tier at $40/month is where you get unlimited cloud agent hours. The free tier is now almost unusable for real work: you get five agent sessions per day.

It is no longer a code editor. If you want fine-grained control over your code, Cursor 3 fights you. The interface prioritizes agent delegation over manual editing. Some developers will hate this.

Who Should Care

If you manage a team shipping features on tight timelines, Cursor 3's parallel agents could save real hours. If you are a solo developer who enjoys writing code, this might feel like a solution to a problem you don't have.

The $2 billion ARR number tells you Cursor found its market. Whether that market includes you depends on how much of your coding you are willing to hand off to agents that are good — but not perfect.


I test AI coding tools as part of my workflow. Previously covered Claude Code, Windsurf, and OpenCode. All opinions are from actual project use, not benchmark screenshots.


For my full review of Cursor 3 Glass and how it compares to other AI coding tools, see this detailed comparison.
