A solo builder's post-mortem on over-engineering cloud security, losing everything, and rebuilding the right way.
There's a specific kind of silence that hits when you realize the command you just ran worked perfectly — and destroyed everything in the process.
That was me, staring at a terminal that had no response. No SSH prompt. No connection. Just... nothing. My VM was running. My n8n instance was technically alive. But I had sealed it so tightly that not even I could get in anymore. Not even Gemini Cloud Assist — the AI I was relying on to help me navigate GCP — could reach it.
The only path forward? Hit the Project Delete button and start over.
This is that story.
The Heartbreak of "I Was Trying to Be Secure"
I wasn't being reckless. I was being careful — or so I thought.
I had a public-facing Ubuntu VM running n8n, and I knew enough to be worried about it. Open ports. Bot scans. The usual internet noise. So I did what seemed logical: I started locking things down. Removed the external IP. Tightened the firewall rules. Stripped away anything that felt like unnecessary exposure.
What I didn't account for was that I had also stripped away my own access path. No external IP meant no standard SSH. The firewall rules I'd tightened had quietly cut off 35.235.240.0/20 — the IP range Google uses for Identity-Aware Proxy, which is the very backbone of GCP's secure management tunnel. Without it, Gemini Cloud Assist couldn't see my instance. I couldn't SSH in. I was locked outside a door I had just welded shut from the inside.
I spent an hour trying everything. Nothing worked. So I deleted the project, took a breath, and asked myself the only useful question in that moment:
What does "doing this right" actually look like?
The Post-Mortem: What I Did Wrong
Looking back, the failure had one root cause: I tried to harden a bad architecture instead of starting with a good one.
The original setup had a public IP by default — that's the GCP standard. And instead of rethinking that decision from the ground up, I tried to bolt security on top of it after the fact. I removed the public IP mid-flight without first establishing an alternative management path. I closed firewall ports without understanding which ones Google's own tooling needed to stay open.
The lesson isn't "don't be too secure." The lesson is: security has to be designed in, not added on. Retrofitting is where the lockouts happen.
The Rebuild: A "Private First" Architecture
The second time around, I flipped the entire mental model.
Instead of starting with a public VM and removing access, I started with a private VM and added only the access I needed. The machine — an e2-medium on Ubuntu 24.04 LTS — was provisioned with no external IP address from the very first click. It is, by design, invisible to the public internet. No address means no surface. No surface means no bot scans, no brute-force attempts, no port probes. Nothing.
But a private VM still needs to talk to the outside world — to pull Docker images, receive package updates, and run workflows that hit external APIs. That's where Cloud NAT comes in. Paired with a Cloud Router, it gives the VM outbound internet access without exposing any inbound surface. The VM can reach the internet; the internet cannot reach the VM.
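The two pieces above can be sketched as gcloud commands. This is a minimal sketch, not my exact setup: the instance, router, and NAT names and the region/zone are hypothetical placeholders — substitute your own.

```shell
# 1. Create the VM with no external IP. --no-address is the key flag:
#    without it, GCP attaches an ephemeral public IP by default.
gcloud compute instances create n8n-vm \
  --machine-type=e2-medium \
  --image-family=ubuntu-2404-lts-amd64 \
  --image-project=ubuntu-os-cloud \
  --zone=us-central1-a \
  --no-address

# 2. Cloud Router + Cloud NAT give the VM outbound-only internet access
#    (Docker pulls, apt updates, external APIs) with no inbound surface.
gcloud compute routers create n8n-router \
  --network=default --region=us-central1
gcloud compute routers nats create n8n-nat \
  --router=n8n-router --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```

The NAT handles address translation for every subnet in the region, so the VM never needs — and never gets — a public address of its own.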
For administration — SSH, file transfers, running gcloud commands — I used Identity-Aware Proxy (IAP). Instead of opening Port 22 to the world, I opened it exclusively to 35.235.240.0/20, which is Google's IAP tunnel range. This means every SSH session is authenticated through Google's identity layer before a single packet reaches my VM. The command looks like this:
gcloud compute ssh --tunnel-through-iap <instance-name>
Simple. Audited. No public port.
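The firewall side of this is one rule. A sketch, assuming the default VPC network (the rule name is a hypothetical placeholder):

```shell
# Allow inbound SSH only from Google's IAP tunnel range.
# 35.235.240.0/20 is the documented source range for IAP TCP forwarding —
# this is the exact range I had accidentally blocked the first time.
gcloud compute firewall-rules create allow-ssh-from-iap \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:22 \
  --source-ranges=35.235.240.0/20 \
  --network=default
```

With this in place, port 22 is open to exactly one client: Google's identity-checked tunnel.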
And for the n8n dashboard itself — the UI I actually use to build workflows — I set up Tailscale. Tailscale creates a private mesh network between my devices using WireGuard under the hood. My VM gets a stable Tailscale IP, and I connect to the dashboard at http://<tailscale-ip>:5678 from any device on my Tailscale network. No SSL certificate configuration needed, no reverse proxy, no public DNS entry. Just a VPN tunnel that works.
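Getting the VM onto the tailnet is two commands (run over the IAP SSH session above). This uses Tailscale's official install script; the auth step happens in a browser on any device where you're already logged in:

```shell
# Install Tailscale via the official script, then join the tailnet.
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up        # prints an auth URL; approve it from another device

# Grab the VM's Tailscale IPv4 address for the dashboard URL.
tailscale ip -4
```

That last address is the `<tailscale-ip>` in the dashboard URL — stable, private, and reachable only from devices on your own tailnet.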
The Security Stack:
- No External IP → Zero public attack surface
- Cloud NAT → Outbound-only internet access
- IAP on Port 22 → Google-authenticated SSH, no open port
- Tailscale → Private dashboard access via mesh VPN
The Small Fixes That Saved the Setup
Two things tripped me up during the Docker setup that are worth documenting, because they're the kind of issues that don't show up in tutorials.
Volume Permissions. The n8n container was crash-looping on startup. The culprit was ownership on the ~/.n8n directory — Docker's internal n8n user expects 1000:1000 ownership, and it wasn't getting it. One chown command fixed it:
sudo chown -R 1000:1000 ~/.n8n
The Secure Cookie Flag. By default, n8n sets N8N_SECURE_COOKIE=true, which means it will only send the session cookie over HTTPS. Since my Tailscale access is over plain HTTP (a private IP, no cert), this caused silent login failures. Setting N8N_SECURE_COOKIE=false in the Docker environment resolved it without introducing any real risk — the traffic never leaves the private VPN, so it's not exposed to the public internet.
These are exactly the kinds of subtle issues where Gemini Cloud Assist earned its keep. Describing the crash loop in natural language and getting back the precise diagnosis — volume permissions, not a misconfiguration — saved me from an hour of Docker log archaeology.
Why Every Developer Running Automation Should Consider This
If you're self-hosting anything — n8n, Zapier alternatives, AI pipelines, bots — and you have a public IP on that machine, you are being scanned right now. Not maybe. Now.
The "Private First" architecture isn't just for enterprises with security teams. It's practical for solo builders. It costs the same on GCP (an e2-medium sits comfortably within the $300 free credit tier). It takes maybe 30 extra minutes to set up compared to a standard public VM. And it gives you something that's genuinely hard to buy with money: the ability to stop thinking about your infrastructure's attack surface.
You can focus on what you're actually building.
The Outcome
My n8n instance now runs on a machine that doesn't exist, as far as the internet is concerned. No public presence. No bot noise in the logs. No anxiety about exposed ports. Gemini Cloud Assist has full visibility through IAP. My workflows run 24/7 — Apify lead pulls, AI processing, email drafts, WhatsApp messages — and I access the dashboard from my laptop over Tailscale like it's a local app.
Fort Knox style, as I like to call it. But it took destroying the first fort to build it properly.
If you're setting up a self-hosted automation stack and want to avoid the project-delete moment — feel free to reach out. The setup is simpler than it looks once you understand the architecture.
Tags: #n8n #GoogleCloud #DevOps #CloudSecurity #Automation #SoloFounder #SelfHosted #Tailscale #BuildInPublic


