I've been bouncing between free AI coding tools for about eight months now. Not because I enjoy the constant context-switching — I don't — but because the free tiers keep changing. One month Copilot's generous, the next they throttle completions. So you adapt.
Last week I finally settled on a setup I'm actually happy with, and the catalyst was Gemini Code Assist dropping its paid wall entirely.
My old setup (and why it fell apart)
For most of late March I was running Copilot's free tier inside VS Code. It worked. The completions were decent for boilerplate — React components, SQL queries, that sort of thing. But I kept hitting the monthly suggestion cap at the worst possible moments. Always mid-sprint, always on a Friday afternoon.
I tried supplementing with Codeium. Fine for autocomplete, weak on multi-file reasoning. If I asked it to refactor a service layer that touched three files, it'd confidently edit two and hallucinate imports from a package that didn't exist.
Gemini Code Assist going free changed everything
Google quietly made Gemini Code Assist free for individual developers in its March update. No waitlist, no credit card, just install the extension.
I was skeptical. Google's track record with developer tools is... uneven. But after two weeks of daily use, here's where I've landed:
What actually works well:
- Multi-file context. It handles monorepo navigation better than anything else I've tried at this price point (free). I pointed it at a Next.js app with 40+ route files and it correctly traced a bug through three layers of middleware.
- The chat interface is fast. Sub-second responses for most questions. Copilot's chat feels sluggish by comparison.
- Gemini 2.5 Pro under the hood means the reasoning quality on architecture questions is genuinely solid.
What doesn't:
- Inline completions are hit-or-miss. Sometimes brilliant, sometimes it autocompletes a function signature that doesn't match the types I defined six lines up. Copilot still edges it out for pure autocomplete speed and accuracy.
- No terminal integration. I can't ask it to run tests or explain error output without copy-pasting. Minor, but it adds up.
- Extension conflicts. If you're running both Copilot and Gemini Code Assist, VS Code occasionally freezes for 2-3 seconds while they fight over who gets to suggest first.
I wrote a detailed breakdown of Gemini Code Assist's free tier after my first week with it, if you want the full picture.
Where cloud agents fit in (and where they don't)
The other big shift I've been watching is autonomous coding agents — tools like OpenAI's Codex and Anthropic's Claude Code that execute code on your behalf: Codex in a sandboxed cloud environment, Claude Code from your own terminal.
I tested both for a side project: migrating a Flask API to FastAPI. The kind of tedious, well-defined task that should be perfect for an autonomous agent.
Codex handled the route conversion cleanly but choked on the async database layer. It kept generating synchronous SQLAlchemy calls wrapped in asyncio.to_thread(), which technically works but defeats the purpose. Claude Code got further — it actually rewrote the DB layer properly — but burned through my free credits in about 40 minutes of back-and-forth.
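To make the antipattern concrete, here's a minimal sketch of what Codex kept producing versus what the migration actually wants. The database calls are stubbed with plain functions so it runs standalone — the function names and return values are illustrative, not from the real project:

```python
import asyncio

# Stand-in for a blocking SQLAlchemy query (stubbed so this runs
# without a database; in the real code this would hit Postgres).
def fetch_user_sync(user_id: int) -> dict:
    return {"id": user_id, "name": "ada"}

# The pattern Codex generated: a sync call pushed onto a worker
# thread. The event loop stays unblocked, but every query still ties
# up a thread, so you keep the old sync driver's scaling limits.
async def get_user_threaded(user_id: int) -> dict:
    return await asyncio.to_thread(fetch_user_sync, user_id)

# What the FastAPI migration actually wants: a call that is async all
# the way down (e.g. SQLAlchemy's AsyncSession with an async driver),
# stubbed here as a coroutine.
async def fetch_user_async(user_id: int) -> dict:
    await asyncio.sleep(0)  # placeholder for a real awaitable DB call
    return {"id": user_id, "name": "ada"}

async def main() -> None:
    print(await get_user_threaded(1))
    print(await fetch_user_async(1))

asyncio.run(main())
```

Both versions return the same data, which is exactly why the to_thread version "technically works" — it just forfeits the concurrency win that justified the migration in the first place.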
The fundamental tradeoff: cloud agents are powerful but expensive, and the free tiers are thin. Local tools like Gemini Code Assist give you unlimited usage but less autonomy. For my workflow, I use the cloud agents for one-off complex refactors and keep Gemini running for daily coding.
I found a solid comparison of cloud vs local coding agents that maps out these tradeoffs in more detail than I can here.
My current stack (April setup)
After all the experimentation, this is what I'm running daily:
- Primary: Gemini Code Assist (VS Code extension) — unlimited completions, good multi-file reasoning, free chat
- Backup autocomplete: Copilot free tier — I keep it installed but disabled by default, toggle it on when Gemini's suggestions feel off
- Heavy lifting: Claude Code via terminal — reserved for large refactors, maybe twice a week
Total monthly cost: $0.
Is it as good as paying $19/month for Copilot Pro or Claude Pro? No. The completions are slower, the context windows are smaller, the autonomy is limited. But for solo projects and learning, it's more than enough.
What I'd tell someone starting fresh
Skip the paid tiers until you've actually hit a wall with the free ones. I spent three months paying for Copilot Pro before realizing I used maybe 30% of its features. The free tools have gotten genuinely capable — not because any single one is perfect, but because you can layer them.
The one thing I'd watch out for: don't install five AI extensions simultaneously and expect VS Code to behave. Pick two, max. Your RAM will thank you.
I write about developer tools and the weird economics of AI pricing. If you found this useful, I'm @jimliu_dev on Dev.to.