Putting Your Site Behind Google Cloud CDN: An Unapologetically Detailed How-To

Hello, I'm Maneshwar. I’m building LiveReview, a private AI code review tool that runs on your LLM key (OpenAI, Gemini, etc.) with highly competitive pricing -- built for small teams. Do check it out and give it a try!

Our GitLab instance was running on a VM outside GCP (OVH, AWS, whatever) and while it worked, it felt… sluggish.

By putting it behind Google Cloud CDN with a global HTTPS load balancer, we cut page load times by ~50%, thanks to caching, compression, and Google’s edge network. And the best part? We didn’t have to mess with GitLab’s existing HTTPS or any other setup at all.

This guide shows exactly how we pulled it off, why each step was necessary, and the sneaky landmines we avoided along the way (DNS loops, backend type mistakes, etc.).

Before (slow GitLab):

After (50% faster):

TL;DR (global overview)

We placed a Global External HTTPS Load Balancer (GCLB) + Cloud CDN in front of our external GitLab VM.

The LB terminates client TLS (Google-managed cert), then proxies traffic to the origin VM over HTTPS. To do that safely we:

  • created a helper DNS name that points to the origin VM IP (so the NEG does not resolve the LB IP — avoids loops);
  • created an Internet NEG that points to that origin (IP:443);
  • created a backend service (HTTPS) with Cloud CDN enabled;
  • built a global external HTTP(S) load balancer and wired host/path rules to the backend service;
  • added an HTTPS frontend with a Google-managed cert for gitlab.apps.dev.to;
  • updated gitlab.apps.dev.to in Cloud DNS to point to the LB public IP.

All caching and invalidation are configured on the backend service (Cloud CDN) and in the URL map on the LB.

Architecture Overview

  • Origin: VM at IP 00.105.05.200 running GitLab Omnibus, managing its own HTTPS.
  • DNS: *.apps.dev.to managed in Google Cloud DNS.
  • CDN + SSL: Google Cloud Load Balancer with Cloud CDN enabled.

Step 0 — Pre-flight: why this is not “just another LB”

Goal: Put Cloud CDN (a global edge cache) in front of a remote VM without changing the VM’s GitLab HTTPS config.

Why this needs extra care:
Cloud CDN sits at the Google edge (the load balancer).

For the LB to forward traffic to an origin that’s outside GCP you must use an Internet NEG (Network Endpoint Group) or a backend that points to that external IP/FQDN.

If you instead point the backend at a GCS bucket, you'll serve the wrong content; and if you accidentally point an NEG at the same hostname the LB will use, you create a DNS/routing loop and everything breaks.

So we plan and test carefully.

Step 1 — DNS housekeeping: create an origin helper name

What we did: created a helper A record in Cloud DNS:

gl.apps.dev.to.    A    00.105.05.200   (your origin VM IP)

Global overview / Why this step matters

  • The NEG needs to know how to reach your origin.
  • If you give the NEG the same hostname that you intend to point at the load balancer (for example gitlab.apps.dev.to), once you repoint that hostname to the LB IP, the NEG will start pointing at the LB — a loop.
  • The helper record (e.g. gl.apps.dev.to) points directly to the origin; it’s private infrastructure used only by the load balancer configuration. The public hostname (gitlab.apps.dev.to) will later point to the LB IP.

Pro tip: choose a helper name that is obviously “origin-only” so future ops don’t accidentally re-use it publicly.
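
If the zone lives in Cloud DNS, a minimal sketch of creating the helper record with gcloud might look like this (the managed-zone name apps-dev-to is an assumption; substitute your own):

# Assumption: the Cloud DNS managed zone is named "apps-dev-to"
gcloud dns record-sets create gl.apps.dev.to. \
  --zone="apps-dev-to" \
  --type="A" \
  --ttl="300" \
  --rrdatas="00.105.05.200"

A short TTL (300s) makes it painless to correct the record while you're still testing.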

Step 2 — Create an Internet NEG (Network Endpoint Group)

What we did: created gitlab-ovh-neg-https, an Internet NEG that includes a single endpoint: 00.105.05.200:443.

Global overview / Why this step matters

  • A NEG is a set of endpoints the LB can forward traffic to. For external origins, use an Internet NEG. It allows the GCLB to target external IPs/FQDNs.
  • This is the bridge between Google’s global frontend and your remote origin. Without it the LB wouldn’t know where to send requests.

Common mistakes

  • Creating a NEG that points at the same hostname you’re exposing publicly (loop).
  • Trying to use a backend bucket or instance group (these aren’t valid for an external VM).
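
For reference, a hedged gcloud sketch of the same NEG setup (names and IP taken from the steps above):

# Global Internet NEG that holds external IP:port endpoints
gcloud compute network-endpoint-groups create gitlab-ovh-neg-https \
  --network-endpoint-type=internet-ip-port \
  --global

# Add the origin VM as the single endpoint (IP:443)
gcloud compute network-endpoint-groups update gitlab-ovh-neg-https \
  --global \
  --add-endpoint="ip=00.105.05.200,port=443"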

Step 3 — Create the Backend Service and attach the NEG

What we did: created gitlab-ovh-backend-service (protocol HTTPS) and attached the gitlab-ovh-neg-https NEG. We enabled Cloud CDN on this backend and left backend TLS verification disabled (so the LB will accept the origin’s existing certificate, even if self-signed).

Global overview / Why this step matters

  • The backend service is the LB’s logical target: it defines protocol, timeouts, balancing policy, and whether Cloud CDN caches the response.
  • Enabling Cloud CDN on the backend service is the switch that says: “cache static responses from this origin at Google’s edge.”
  • Disabling backend certificate validation is the pragmatic choice when your origin has an internal/self-signed cert; it saves you from re-issuing origin certs during migration.

Important config decisions

  • Protocol: set to HTTPS, because GitLab Omnibus is serving HTTPS and we are not touching it.
  • Cloud CDN: Enabled, Cache mode = Cache static content (recommended for apps like GitLab).
  • Backend TLS verification: Off for now to avoid failing on self-signed origin certs.
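
If you prefer the CLI, roughly the same backend service can be sketched like this (a sketch, not the exact console output; only the flags that matter for HTTPS + CDN are shown):

# Backend service over HTTPS with Cloud CDN enabled (cache static content)
gcloud compute backend-services create gitlab-ovh-backend-service \
  --global \
  --protocol=HTTPS \
  --enable-cdn \
  --cache-mode=CACHE_ALL_STATIC

# Attach the Internet NEG from Step 2
gcloud compute backend-services add-backend gitlab-ovh-backend-service \
  --global \
  --network-endpoint-group=gitlab-ovh-neg-https \
  --global-network-endpoint-group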

Step 4 — Create the Global External HTTP(S) Load Balancer

This is the piece customers connect to: a global IP + an HTTPS frontend + routing rules that forward requests to the backend service.

What we did:

  • Created gitlab-cdn-1 (Global External Application Load Balancer).
  • Initial frontend listening on 33.149.6.84:80 (HTTP) — we later add HTTPS :443 and attach the certificate.
  • Routing rules point gitlab.apps.dev.to (or “All unmatched”) to gitlab-ovh-backend-service.


Step 4a — Frontend configuration (HTTPS)

Why this step matters

  • The frontend is how clients connect to your app. We want https://gitlab.apps.dev.to to present a valid Google-managed certificate (so clients don’t see the origin’s cert).
  • The LB terminates TLS, optionally applies HTTP/2 or QUIC, and then forwards requests to the backend.

Form we filled (example)

Name: git-frontend-lb
Protocol: HTTPS (with HTTP/2)
IP address: gl-https-frontend (a reserved global IP)
Port: 443
Certificate: gitlab-frontend-loadbalancer-created-certificate
SSL policy: GCP default
Enable HTTP→HTTPS redirect: optional (we used redirect)

(We added the cert here manually because it didn’t appear in the dropdown at first.)
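
For completeness, a hedged gcloud sketch of the same frontend pieces (the proxy name gitlab-https-proxy is hypothetical; gitlab-url-map is the URL map name used in the invalidation example later and must exist before the proxy is created):

# Google-managed certificate for the public hostname
gcloud compute ssl-certificates create gitlab-frontend-loadbalancer-created-certificate \
  --domains=gitlab.apps.dev.to \
  --global

# HTTPS proxy tying the URL map and certificate together (proxy name is hypothetical)
gcloud compute target-https-proxies create gitlab-https-proxy \
  --url-map=gitlab-url-map \
  --ssl-certificates=gitlab-frontend-loadbalancer-created-certificate

# Frontend forwarding rule on the reserved global IP, port 443
gcloud compute forwarding-rules create git-frontend-lb \
  --global \
  --address=gl-https-frontend \
  --target-https-proxy=gitlab-https-proxy \
  --ports=443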

Step 4b — Backend binding

Select the previously created backend service (gitlab-ovh-backend-service) and attach it.

Step 4c — Routing rules

Add a host rule for gitlab.apps.dev.to (or use “All unmatched”) and route it to the same backend service (gitlab-ovh-backend-service).

Global overview / Why this step matters

  • The URL map determines how different hostnames & URL prefixes get routed. This is also where you can have different backends for different paths (e.g., static assets from a bucket, dynamic traffic to VM).
  • We kept it simple: all traffic for gitlab.apps.dev.to goes to the NEG-backed service (the origin VM).
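
The same routing can be sketched with gcloud (gitlab-url-map matches the name used in the cache-invalidation example later; the path-matcher name is hypothetical):

# URL map whose default backend is the NEG-backed service
gcloud compute url-maps create gitlab-url-map \
  --default-service=gitlab-ovh-backend-service

# Explicit host rule for gitlab.apps.dev.to (path-matcher name is arbitrary)
gcloud compute url-maps add-path-matcher gitlab-url-map \
  --path-matcher-name=gitlab-matcher \
  --default-service=gitlab-ovh-backend-service \
  --new-hosts=gitlab.apps.dev.to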

Step 5 — Certificates & HTTPS frontend

Short note (skipping the cert-provisioning wait):
We created and attached a certificate during the LB update because the auto-managed cert wasn’t yet present in the UI dropdown. That cert is bound to the HTTPS frontend, so clients see a valid cert for gitlab.apps.dev.to. The LB handles client TLS; LB→origin continues to use the origin’s HTTPS.

Why do TLS at the LB?

  • Clients get a trusted Google-managed cert (zero hassle for users).
  • Google can offer HTTP/2 or QUIC at the edge.
  • You still get end-to-end encryption if the LB forwards to https://origin:443. This matches our requirement to leave GitLab’s internal SSL untouched.

Step 6 — Final DNS switch (public hostname → LB IP)

What we did: updated the public A record in Cloud DNS for the hostname:

gitlab.apps.dev.to.   A   33.149.6.84   (the LB frontend IP)

We kept the helper origin record:

gl.apps.dev.to.       A   00.105.05.200   (origin IP – used by NEG)

Global overview / Why this step matters

  • This flips public traffic to the LB so requests go to the global edge and then to the origin via the NEG.
  • Keeping the helper record ensures the NEG still resolves the real origin IP and doesn’t accidentally resolve to the LB.
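
If the record is in Cloud DNS, the flip is a single command (zone name is an assumption, as before):

# Point the public hostname at the LB frontend IP
gcloud dns record-sets update gitlab.apps.dev.to. \
  --zone="apps-dev-to" \
  --type="A" \
  --ttl="300" \
  --rrdatas="33.149.6.84"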

Testing checklist

  • curl -I https://gitlab.apps.dev.to → header should include Via: 1.1 google or a Cloud CDN header.
  • git ls-remote https://gitlab.apps.dev.to/… → verify Git operations over HTTPS.
  • If something fails, revert gitlab.apps.dev.to A record to the origin IP to point clients directly back at the origin.
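
A couple of quick checks from a shell (the repo path group/project.git is a placeholder; use a real project):

# LB/CDN responses carry "Via: 1.1 google"; "Age" appears on cache hits
curl -sI https://gitlab.apps.dev.to | grep -iE "via|age|cache"

# Confirm Git over HTTPS still works (placeholder repo path)
git ls-remote https://gitlab.apps.dev.to/group/project.git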

Step 7 — Cache & CDN settings (what we chose and why)

Settings we used

  • Cache mode: Cache static content (respect origin Cache-Control) — this caches CSS/JS/images while leaving dynamic pages alone. This is safe and effective for apps like GitLab which fingerprint assets.
  • Client TTL: 1 hour
  • Default TTL: 1 hour
  • Maximum TTL: 1 day
  • Compression: Automatic
  • Serve while stale: Disabled (safer)
  • Restricted content: Public (no signed URLs)
  • Negative caching: Disabled (sane default)

Global overview / Why these choices matter

  • GitLab serves dynamic and private content. You should not “force cache all content” or you risk serving stale or private pages. Let the origin decide caching for static assets (via Cache-Control), and let Cloud CDN obey that.
  • TTLs of 1 hour (default) balance freshness and edge performance; fingerprinted asset filenames change with every release, so updated assets show up immediately without waiting for the cache to expire.
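
These settings can also be applied from the CLI; a sketch under the assumption that the flags below match what we set in the console (serve-while-stale and negative caching are simply left at their disabled defaults):

# Cache static content, 1h client/default TTLs, 1d max TTL, automatic compression
gcloud compute backend-services update gitlab-ovh-backend-service \
  --global \
  --cache-mode=CACHE_ALL_STATIC \
  --client-ttl=3600 \
  --default-ttl=3600 \
  --max-ttl=86400 \
  --compression-mode=AUTOMATIC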

Step 8 — Invalidation strategy (how to purge caches safely)

Cloud CDN supports invalidation by host + path, so we invalidate by path when needed.

Examples

  • After upgrading assets: invalidate /assets/* for gitlab.apps.dev.to.
  • Avatars not refreshing: invalidate /uploads/* or the specific avatar path.
  • Emergency / last-resort: /* (very heavy — avoid unless necessary).

CLI example

gcloud compute url-maps invalidate-cdn-cache gitlab-url-map \
  --path "/assets/*" \
  --host "gitlab.apps.dev.to"

Why this step matters

  • Invalidation prevents users from being stuck with stale assets after upgrades. It’s rate-limited (up to 500 invalidations/min) so batch where possible.

Step 9 — Health checks, monitoring & logging (don’t skip these)

Health check: create an HTTPS health check that targets something like /users/sign_in (GitLab login page) on port 443. Associate it with the backend service so LB can mark backends healthy/unhealthy.

Monitoring & logging: enable backend logging and LB logs (initially low sample rate if you want) — this helps troubleshoot cache misses or request failures.

Why these matter

  • Without an appropriate health check GCLB can consider the backend unhealthy (or you won’t be alerted when the origin is down).
  • Logs show whether traffic hits the CDN or goes to the origin, and whether the backend is returning errors.
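
A hedged sketch of the health-check and logging pieces (the health-check name gitlab-https-hc is hypothetical):

# HTTPS health check against the GitLab login page
gcloud compute health-checks create https gitlab-https-hc \
  --global \
  --port=443 \
  --request-path=/users/sign_in

# Turn on request logging for the backend service (low sample rate to start)
gcloud compute backend-services update gitlab-ovh-backend-service \
  --global \
  --enable-logging \
  --logging-sample-rate=0.1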

Step 10 — Troubleshooting & rollback plan (practical and fast)

If something breaks after switching DNS

  • Revert gitlab.apps.dev.to A record back to the origin IP (00.105.05.200). That sends traffic directly to GitLab and restores normal service instantly.
  • Check LB logs and backend logs to see what the LB saw (status codes, origin handshake issues).
  • Common culprits:
    • NEG endpoint misconfigured (points at LB IP, causing loop). Fix: ensure NEG uses origin IP/FQDN.
    • Backend protocol mismatch — you created a backend service with HTTP but origin requires HTTPS (or vice versa). Fix by editing backend service protocol.
    • SSL verification on backend enabled while origin uses self-signed cert. Fix: disable backend verification or install a trusted cert on origin.

Rollback commands (example DNS revert)

  • Edit the A record in Cloud DNS or with your registrar and point gitlab.apps.dev.to back to 00.105.05.200.
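
Or, if the zone is in Cloud DNS, as a single command (zone name is an assumption):

# Roll back: point the public hostname straight at the origin again
gcloud dns record-sets update gitlab.apps.dev.to. \
  --zone="apps-dev-to" \
  --type="A" \
  --ttl="300" \
  --rrdatas="00.105.05.200"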

Final notes, tips & advanced ideas (for when you want to clean up)

  • Future clean-up (optional): once you’re ready, terminate TLS at the LB and have the origin serve plain HTTP (change GitLab to listen on 80). That’s simpler and slightly more efficient (no double encryption), but it requires touching the GitLab config.
  • Cloud Armor: If you want WAF-like protection, add Cloud Armor policies to the LB.
  • Origin certs: long-term, a valid origin cert + backend TLS validation gives you end-to-end strict TLS (recommended for maximum security).
  • Logging & metrics: enable Cloud Logging & Monitoring (formerly Stackdriver) for LB metrics (cache hit ratio, backend latency, 5xx errors).

Appendix — Full ordered checklist (copy/pasteable)

  1. Create origin helper A record: gl.apps.dev.to → 00.105.05.200 (short TTL for testing).
  2. Create Internet NEG gitlab-ovh-neg-https pointing at 00.105.05.200:443.
  3. Create backend service gitlab-ovh-backend-service with protocol=HTTPS, attach NEG, enable Cloud CDN, disable backend TLS verification for now.
  4. Create global external HTTP(S) LB gitlab-cdn-1.
    • URL map: host gitlab.apps.dev.to → backend service.
    • Frontend: add HTTPS listener with a managed certificate attached (and optional HTTP redirect).
  5. Update Cloud DNS: gitlab.apps.dev.to → <LB IP>. Keep gl.apps.dev.to → origin IP.
  6. Configure HTTPS health check (port 443, path /users/sign_in).
  7. Test curl -I and git ls-remote against gitlab.apps.dev.to.
  8. If needed, invalidate caches (e.g. /assets/*) via Cloud Console or gcloud CLI.
  9. Enable LB logging and monitor for errors.

Closing thoughts

Putting an external GitLab behind GCP’s CDN is a little like building a toll booth on the highway that politely checks tickets and hands out free maps to tourists — the travelers get faster rides, you get the analytics, and the origin server gets fewer late-night support calls.

The trade-offs are small: a bit of setup overhead (NEG, backend, cert binding), but the payoff is global speed and a smoother experience for everyone who clones, pushes, and browses your repos.

LiveReview helps you get great feedback on your PR/MR in a few minutes.

Saves hours on every PR by giving fast, automated first-pass reviews.

If you're tired of waiting for your peer to review your code or are not confident that they'll provide valid feedback, here's LiveReview for you.
