
Ross Peili


My AI Experience in Russia as a European🤯

This is a story about how I built a fully local AI dev setup (and why you should too).

Moving to Moscow from the EU felt like a grand adventure, until I tried to open my laptop and actually get some work done. I’m a casual GCP ecosystem user. Nothing fancy: the usual Gemini API, Vertex AI Studio, Antigravity, the occasional Claude call. I had three enterprise clients waiting for custom AI solutions, a handful of personal projects, and the blind confidence that “it’ll just work.”

Needless to say, it didn’t.

Since April 15, 2026, Russia has not only banned VPNs, it has gotten scary good at hunting them down. We’re talking a near-99% insta-kill rate on commercial VPNs the moment your device touches the network. Sophisticated custom VPS setups might still work, according to some TG groups I’ve been digging through, but only if you built them before landing. Unlike me.

And so began the frantic thought of "how bad can this be?"

The VPN graveyard

I tried everything. Every provider I could think of, every protocol, every “guaranteed to work in Russia” whisper on Reddit. No dice. A couple of mobile-only solutions survived occasionally before getting sniped again. As for my laptop? A ghost town of connection timeouts. Forget about it. My only partner was my own ΌΨΗ (arpa.chat) on an Advanced Plan, the only western model still accessible without a VPN. She helped me test what came next.

Qoder, GigaIDE, and other dead ends

ΌΨΗ suggested I forget about Antigravity unless I set up my own VPS, so, desperate, I pivoted to alternative IDEs. Qoder, the Qwen-powered IDE, looked promising at first glance. It’s Chinese, so surely sanctions wouldn’t apply, right? Wrong! Part of their deal to sell in the EU and US means no service in Russia. Blocked with a hard stop.

Then I tried GigaIDE, built around GigaChat, Sberbank’s Russian ChatGPT equivalent trained on a DeepSeek architecture. I wanted to like it. I really did. But the UI, performance, and output quality made me actively miss Gemini 3.1 Pro like a lost limb. Everything felt sluggish, hollow, and about three steps behind what I was used to.

Next up, I tried VSCode with KODA, a Russian plugin. It talks. It answers. Exclusively in Russian. I could hardcode system instructions in all caps and it would still reply “Конечно, но я расскажу тебе по-русски” (“Sure, but I’ll tell you in Russian”). Not exactly what I needed for enterprise clients.

Bringing my own brain (on an SSD)

So I did what any dev backed into a corner does: pull out the big guns. I’d had the foresight to bring offline models on a portable SSD. Gemma 4, Qwen 2.5 Coder 3B, Qwen 3.5 9B, DeepSeek Coder 7B, and a few more. Old friends by now. I downloaded Ollama, followed arpa.chat’s instructions, fired up my terminal, and served them locally.
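For the curious, the import step is roughly this. A minimal sketch, assuming Ollama is already installed and the SSD is mounted; the GGUF path, filename, and model name below are hypothetical placeholders for whatever you actually carry.

```bash
# Point a Modelfile at a GGUF sitting on the portable SSD
# (path and filename are placeholders, swap in your own)
cat > Modelfile <<'EOF'
FROM /mnt/ssd/models/qwen2.5-coder-3b.Q4_K_M.gguf
PARAMETER temperature 0.2
EOF

# Register the model with Ollama, then take it for a spin
ollama create qwen-coder-local -f Modelfile
ollama run qwen-coder-local "Write a function that merges two sorted lists."
```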

The easiest, and honestly most beautiful, path I found was VSCode + the Continue plugin + Ollama. I went deep into the config.yaml, assigning different models to autocomplete, chat, and code generation. Different prompts, different temperature settings, different context windows. I tweaked. I cursed. I tweaked again. I ran everything on CPU and RAM because my VRAM situation was laughable, and renting GPUs from western vendors was obviously not an option.
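If you want to replicate that split, here’s roughly what the config.yaml can look like. This follows Continue’s YAML config schema as I understand it; the model names, tags, temperature, and context length are example values, so double-check them against your own Ollama models and Continue version.

```yaml
name: local-stack
version: 0.0.1
models:
  # Small, fast model for inline autocomplete
  - name: qwen-autocomplete
    provider: ollama
    model: qwen2.5-coder:3b
    roles: [autocomplete]
  # Bigger model for chat and code generation, kept deterministic
  - name: deepseek-coder-chat
    provider: ollama
    model: deepseek-coder:6.7b
    roles: [chat, edit]
    defaultCompletionOptions:
      temperature: 0.2
      contextLength: 8192
```

Since everything ran on CPU, the tiny autocomplete model is what keeps keystroke latency bearable; the heavier model only wakes up for chat and edits.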

(Image: the SSD carrying my offline AI models)

And then… after several iterations, it worked. Not just barely. With hardcore fine-tuning, I hit acceptable, stable performance. The kind that makes you lean back in your chair and laugh, because you just MacGyvered your entire development environment out of spite and a handful of GGUF files. The agents would now understand the repos I presented them with, plan, work in steps and phases, evaluate themselves, solve fairly complex multi-step tasks, manage git, and run tests across all ops. On top of that, I installed Skillware and used its prompt-rewriter skill to compress my token usage as much as possible while keeping the same context and results.

Conclusion

I don’t think I’m going back to paid AI subs anytime soon. Not because I can’t, but because this whole mess taught me something crucial: restrictions force you outside the box. When you lose access to the polished, corporate, one-click wonders, you learn how to build your own stack: how to collect models like Pokémon, configure local inference, tailor models to specific tasks, and make peace with the terminal.

It was frustrating af, but it was also fun, intriguing, and deeply educational. I now have a fully offline AI development setup that no sanctions body, no VPN crackdown, and no corporate policy can take away from me.

So here’s my unsolicited advice: if you’re addicted to commercial AI models and cloud IDEs, take a weekend and imagine they disappear tomorrow. Set up a local model. Learn how to fine-tune a small coding model for your stack. Bring an SSD full of open-weight models if you ever travel to a place like Russia (and maybe set up that custom VPS before you fly).

PS. By the time I finished my local setup, I realized that Cursor works just fine here, but only with Cursor’s auto agent (not Gemini, Claude, etc.). In case you find yourself in a similar situation and want to save some hassle. :D
