DEV Community

Brian Michalski

Dynamic nginx config on Cloud Run

Google Cloud Run has quickly become one of my favorite cloud products. Not only is it super simple to set up (*cough* k8s *cough*), but it also has a generous free tier and lets you run nearly any stateless container. It also supports custom domains: with a small CNAME tweak you can start serving foo.your-domain.com over HTTPS without any extra work. Where have you been all my life?

This got me thinking -- could I just run an nginx container on Cloud Run and, with a reverse proxy config, glue together a bunch of microservices and static resources under a single domain? I could effectively use Cloud Run as a frontend server / poor-man's load balancer (without the actual balancing features).

I wanted to avoid baking a hardcoded nginx.conf file into each Docker image. Launching a new microservice on my domain should require a simple config change, not building, pushing, and deploying a whole new Docker image. To do that, we have to use the one editable thing Cloud Run provides: environment variables.

With some clever bash-fu, we can build a Dockerfile which takes a $CONFIG environment variable and injects it into a file before starting nginx. We can change $CONFIG as many times as we want without needing to rebuild the Docker image each time.

FROM nginx
# Remove the stock config so our generated one is the only site served
RUN rm /etc/nginx/conf.d/default.conf
COPY site.template /etc/nginx/conf.d/site.template
# At startup, substitute environment variables (like $CONFIG) into the
# template, then run nginx in the foreground
CMD /bin/bash -c "envsubst < /etc/nginx/conf.d/site.template > /etc/nginx/conf.d/site.conf && exec nginx -g 'daemon off;'"

[Dockerfile on GitHub]
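The site.template file itself isn't shown above. A minimal version could be little more than a server block wrapping the placeholder; this is a sketch, and the real file in the repo may differ. Cloud Run injects a $PORT environment variable that your container must listen on, so it's handy to let envsubst fill that in too:

```nginx
# site.template -- a sketch, not necessarily the file in the repo.
# envsubst fills in both ${PORT} (injected by Cloud Run) and ${CONFIG}
# (our pasted-in location blocks) at container startup.
server {
  listen ${PORT};
  ${CONFIG}
}
```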


With that in place, configuring our nginx instance is as easy as pasting an nginx config blob into the Cloud Run UI. The UI doesn't support multi-line strings, so everything gets mashed onto one line, but nginx doesn't care about line breaks when it parses the config.
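If you'd rather script this than paste into the UI, the same one-line mashing can be done with `tr`, and the variable pushed with the gcloud CLI. A sketch (the file name site.conf and service name my-nginx are placeholders, not from this post):

```shell
# Write a sample config fragment to a local file.
cat > site.conf <<'EOF'
location / {
  proxy_pass http://www.google.com;
}
EOF

# Flatten it: tr turns every newline into a space, which nginx parses fine.
CONFIG="$(tr '\n' ' ' < site.conf)"
echo "$CONFIG"

# Then update the running service without touching the image, e.g.:
#   gcloud run services update my-nginx --update-env-vars "CONFIG=${CONFIG}"
```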

As an example, you can run a very simple reverse proxy by pasting this into a CONFIG variable:

Cloud Run Config section

location / {
  proxy_http_version 1.1;
  proxy_set_header Connection "";
  proxy_buffering off;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_pass http://www.google.com;
}

Or something a bit fancier that mixes in some Cloud Storage resources (a great way to expose them over HTTPS on a custom domain on the cheap). Note the trailing slash on the Cloud Storage proxy_pass URL: nginx swaps the matched /static/ prefix for the bucket path, so a request for /static/logo.png fetches cloud-run-fun-static-test/logo.png:

location /static/ {
  proxy_http_version 1.1;
  proxy_set_header Connection "";
  proxy_buffering off;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_pass https://storage.googleapis.com/cloud-run-fun-static-test/;
}

location / {
  proxy_http_version 1.1;
  proxy_set_header Connection "";
  proxy_buffering off;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_pass http://www.google.com;
}


In the Cloud Run UI, this looks a little bit like:

Cloud Run Details section


If you want to quickly try this out yourself, you can deploy my nginx image from gcr.io/cloud-run-fun/nginx:latest or jump right to the Console:

Run on Google Cloud

Full code @ https://github.com/bamnet/cloud-run-fun.

Top comments (3)

Matt Welke

Clever use of Cloud Run. I'd like it more if it had the ability to connect to VPCs, which as far as I know is still a feature in progress for Cloud Run. Could use a Cloud Function instead, since they can connect to VPCs, but then you can't serve it on a custom domain. Ah well, can't have it all I guess. :P

Brian Michalski

Yeah, I've been following this issue to see when VPC support will land.

Cloud Functions is a good workaround; I hadn't considered that! Having each request spin up a new nginx instance feels like a bit of overhead, but workable. There might be a messy hybrid approach pointing Cloud Run -> Cloud Function using something lighter weight than a full nginx instance on Cloud Functions.

Matt Welke

Cloud Functions would only spin up a new backing container if it needs to scale up. The first nginx instance would keep running until traffic warrants a second instance being spun up. Thanks for the issue link! I'll be watching that closely too.