
A3E Ecosystem

I tracked which AI tools actually shipped my last 30 days of work. The data surprised me.

The 2025 Stack Overflow Developer Survey shipped late December and one number jumps off the page: Claude Code at 46% "most loved" — versus Cursor at 19% and GitHub Copilot at 9%. Adoption is still inverted (ChatGPT 82%, Copilot 68%, Cursor 18%, Claude Code 10%) but loved-vs-used is the leading indicator that matters.

I'm an indie operator running an autonomous-business stack — multiple repos, three media engines, a trading bot, a publishing pipeline. For the last 30 days I've been instrumenting which AI tool I reach for on which kind of task. The pattern that emerged isn't "use the best tool" — it's "use the right tool for the move."
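
The tracking itself was nothing fancy. Here's a minimal sketch of the kind of logger I mean, in Python — the tool names, log path, and CLI shape are illustrative stand-ins, not the actual A3E setup:

```python
# log_tool.py — append one line per AI-assisted task (hypothetical helper)
import csv
import sys
from datetime import datetime, timezone

TOOLS = {"copilot", "claude-code", "chatgpt"}
LOG_PATH = "ai_tool_log.csv"  # illustrative path

def log(tool: str, task_kind: str, note: str = "") -> None:
    """Record which tool handled which kind of task, with a UTC timestamp."""
    if tool not in TOOLS:
        raise ValueError(f"unknown tool: {tool}")
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), tool, task_kind, note]
        )

if __name__ == "__main__":
    # usage: python log_tool.py claude-code refactor "publisher fallback route"
    log(sys.argv[1], sys.argv[2], sys.argv[3] if len(sys.argv) > 3 else "")
```

Thirty days of rows like that, plus ten minutes of grouping in a spreadsheet, is the whole methodology.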

Here's the multi-tool workflow that actually shipped code at A3E this month.


The split

Copilot for completions. Inside the editor, mid-line, the autocomplete is faster than my fingers and the latency is sub-100ms. I never leave context. It also catches the dumb stuff — wrong variable name, inverted return, forgotten await. Copilot earns its keep on the boring 70% of typing.
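
The "forgotten await" class is the one I hit most. A contrived Python example of the shape of bug an inline completion flags before I notice:

```python
import asyncio

async def fetch_price(symbol: str) -> float:
    await asyncio.sleep(0.1)  # stand-in for a real API call
    return 42.0

async def main() -> None:
    # price = fetch_price("BTC")        # the bug: a coroutine object, never awaited
    price = await fetch_price("BTC")    # the one-token fix the completion supplies
    print(price)

asyncio.run(main())
```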

Claude Code for refactors and cross-file work. When the task is "rewrite this publisher module to add a browser fallback route, update the dispatch table, file an escalation if both routes fail, and add the test fixture" — that's a Claude Code job. Multi-file edits, with reasoning about why the architecture should hold, are where the SO survey's "most loved" signal lines up with my felt experience. The 46% number isn't about benchmarks. It's about the feeling of "this thing actually understood what I asked for."
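
To make the shape of that task concrete, here's a minimal sketch of the route-with-fallback pattern in Python. Every name here (publish_api, publish_browser, escalate, ROUTES) is a hypothetical stand-in, not the actual A3E module:

```python
# publisher.py — hypothetical sketch of the fallback-route pattern
from typing import Callable

def publish_api(post: dict) -> bool:
    # stand-in: pretend the primary API route is down
    raise ConnectionError("API route unavailable")

def publish_browser(post: dict) -> bool:
    # stand-in: pretend the headless-browser route succeeds
    print(f"published via browser: {post['title']}")
    return True

def escalate(post: dict, errors: list[Exception]) -> None:
    # stand-in: file an escalation for a human to look at
    print(f"escalating {post['title']}: {errors!r}")

# the dispatch table: ordered routes to try
ROUTES: list[Callable[[dict], bool]] = [publish_api, publish_browser]

def publish(post: dict) -> bool:
    errors: list[Exception] = []
    for route in ROUTES:
        try:
            if route(post):
                return True
        except Exception as e:
            errors.append(e)
    escalate(post, errors)  # every route failed
    return False

publish({"title": "hello world"})  # falls back to the browser route
```

The sketch fits in one file; the real job spans the publisher, the dispatch table, the escalation module, and the test fixtures — which is exactly why it's a Claude Code task and not a completion.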

ChatGPT for the rubber-duck conversation. When I'm trying to figure out what I should want before I know what to ask the IDE for. ChatGPT's 82% adoption is real because it's the universal whiteboard. Different mode of use; different KPI.


The thing the survey doesn't measure

The survey asks about tools. It doesn't ask about workflow stitching. The unlock isn't picking the best AI — it's the routing logic between them. My current rule of thumb (sketched as code after the list):

  • < 20 lines or single-file completion → editor + Copilot
  • Multi-file or "thinking required" → Claude Code session
  • "I don't know what I want yet" → ChatGPT conversation, then back to one of the above

The Stack Overflow blog post called out that 45% of professional developers use Anthropic's Claude Sonnet models versus 30% of those learning to code. That's the most interesting line in the report. Pros are converging on Claude for the same kind of work I'm describing — the high-context, opinion-required tasks. Beginners are still mostly on the conversational entry point.

If you're shipping production code in 2026 and you're mono-tooled, the survey is telling you something. Not "switch to Claude Code." Something better: stop treating AI tools as substitutes for each other. They're a stack. Pick three for the three different kinds of moves you make in a day.


Tracked across 30 days at A3E Ecosystem (autonomous-business stack — trading bot, publishing pipeline, multi-repo monorepo). Citation: 2025 Stack Overflow Developer Survey AI section, December 2025; "most loved" rating Claude Code 46% / Cursor 19% / Copilot 9%. Anthropic Claude Sonnet usage 45% pro / 30% learning.
