I've been testing AI development tools with real projects, not surface-level reviews. This piece covers the space between no-code platforms and pure IDEs — specifically where Cursor, Google AI Studio, and a newcomer called Antigravity IDE compete for the same workflow slot.
The landscape shifted dramatically between December 2025 and March 2026. What felt settled got complicated fast. I ran actual benchmark tasks across all three tools and tracked the numbers.
Here's what the full post covers:
- Why Google AI Studio is underrated for prototyping — Gemini 3 Pro makes quick deployment possible, but there are hard limits most reviews don't surface; I found them out the hard way
- Antigravity IDE's agent-first architecture — benchmark score of 0.69+, free public preview, and a reliability problem I hit in February that completely changed my take
- How Cursor actually compares on cost vs. output — benchmark 0.751, but ~$28 per task. I ran the same work through Claude Code CLI ($1.60–$4/task) and the gap is not what the benchmarks suggest
- The tool split I actually use daily — I run both Cursor and Claude Code for different things. The reasoning behind the split matters more than the tools themselves
- What knowing how to code actually does for your AI workflow — and it's not the "AI will replace developers" take you've heard before
The part that surprised me most is buried in the cost comparison section. The efficiency math is non-obvious.
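To make that math concrete, here's a rough per-task cost sketch using the figures quoted above. The monthly task volume is a hypothetical number chosen for illustration, not something from the post:

```python
# Rough cost comparison from the per-task figures quoted above.
# The 100-task monthly volume is an assumed workload, purely illustrative.
CURSOR_COST_PER_TASK = 28.00          # ~$28/task observed with Cursor
CLAUDE_CODE_RANGE = (1.60, 4.00)      # $1.60-$4/task with Claude Code CLI

tasks_per_month = 100                 # hypothetical workload

cursor_monthly = CURSOR_COST_PER_TASK * tasks_per_month
claude_low = CLAUDE_CODE_RANGE[0] * tasks_per_month
claude_high = CLAUDE_CODE_RANGE[1] * tasks_per_month

print(f"Cursor:      ${cursor_monthly:,.2f}/month")
print(f"Claude Code: ${claude_low:,.2f}-${claude_high:,.2f}/month")
print(f"Cost ratio:  {CURSOR_COST_PER_TASK / CLAUDE_CODE_RANGE[1]:.0f}x-"
      f"{CURSOR_COST_PER_TASK / CLAUDE_CODE_RANGE[0]:.1f}x per task")
```

At these rates the per-task gap works out to roughly 7x-17.5x, which is why the raw benchmark scores (0.751 vs. 0.69+) tell only part of the story.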
Full post: https://thoughts.jock.pl/p/cursor-vs-google-ai-studio-antigravity-ide-comparison-2025
Free weekly newsletter on AI agents and automation: https://thoughts.jock.pl