Piotr Pabis

Posted on • Originally published at pabis.eu
Serverless Picture Gallery on Google Cloud - Part 1

Today, I want to walk you through a simple project I decided to build to better learn and understand Google Cloud while preparing for the Professional Cloud Architect (PCA) exam. I wanted to use many of the services Google offers and leave myself room to troubleshoot - when you break and fix something, you understand it better. In this project, we will create a picture gallery where the pictures will be labeled by the Vision API and augmented using Nano Banana. The other services we will use are: Cloud Run, Cloud Run functions, Global Load Balancer, Cloud Armor, Storage buckets and Firestore.

Diagram of the solution with part 1 highlighted

In this episode of the project we will only focus on the highlighted part. We will create a publicly readable bucket for our images. A Global External Load Balancer will serve them (along with CDN caching) and also route us to a Cloud Run service, currently serving a dummy website with placeholders. We will also hide the load balancer behind Cloud Armor so that only our IP subnet can access the service. In my setup I also added SSL, but this is optional - you will need a domain for SSL to work, so you can just use HTTP with the IP directly.

Find the code on GitHub: ppabis/gcp-photos-gallery

Prerequisites

Before we start, be sure that you have the following:

  • Google Cloud account (obviously),
  • OpenTofu v1.11.0 or newer,
  • gcloud CLI configured and connected to your project,
  • Docker runtime such as Docker Desktop or OrbStack.

Enabling APIs and configuring Docker

We will also enable the necessary APIs (the list might not be complete) and configure Docker credentials for use with Artifact Registry. Run the following commands to set everything up.

gcloud services enable vision.googleapis.com
gcloud services enable eventarc.googleapis.com
gcloud services enable cloudfunctions.googleapis.com
gcloud services enable certificatemanager.googleapis.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable aiplatform.googleapis.com
# You can change region here if you want, just accept the prompt
gcloud auth configure-docker europe-west4-docker.pkg.dev

Providers and basic configuration

In this part of the project we will also import all the Terraform providers we will need later and create the necessary basic configuration. The region of my choice is europe-west4 but you can pick any other you want.

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 7.15.0"
    }

    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }

    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }

    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.4"
    }
  }

  required_version = ">= 1.11.0" # OpenTofu only!
}

variable "project_id" {
  type        = string
  description = "The ID of the Google Cloud project"
}

provider "google" {
  project = var.project_id
  region  = "europe-west4"
  zone    = "europe-west4-a"
}

provider "docker" {
  registry_auth {
    address     = "europe-west4-docker.pkg.dev"
    config_file = pathexpand("~/.docker/config.json")
  }
}

Bucket for storing images

Let's start with the simplest thing: a bucket that will store the images shown on the website. We will use a random_string resource, as bucket names have to be globally unique (just like in AWS S3). My bucket will be multi-region in the "EU" location. I will also enable uniform bucket-level access to prevent ACLs from being used on the objects.

resource "random_string" "bucket_name" {
  length  = 10
  special = false
  upper   = false
  lower   = true
  numeric = true
}

resource "google_storage_bucket" "photos_bucket" {
  name                        = "photos-bucket-${random_string.bucket_name.result}"
  location                    = "EU"
  storage_class               = "STANDARD"
  uniform_bucket_level_access = true
  force_destroy               = true
}
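For intuition, here is a small Python sketch (not part of the deployment) of what the random_string resource above generates - ten lowercase alphanumeric characters used to make the bucket name globally unique:

```python
import random
import string

# Mirrors the random_string resource above: 10 characters drawn from
# lowercase letters and digits only (special = false, upper = false).
def bucket_suffix(length: int = 10) -> str:
    alphabet = string.ascii_lowercase + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

suffix = bucket_suffix()
print(f"photos-bucket-{suffix}")
```

Note that, like random_string with these settings, this only draws from the allowed character set; it does not guarantee at least one digit or letter.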

Additionally, we want the bucket to be publicly accessible and reachable via the Load Balancer later. I will create two resources for that. The first grants the object viewer role to allUsers. The second creates a backend for the bucket (the AWS equivalent is a Target Group).

resource "google_storage_bucket_iam_member" "public_read_access" {
  bucket = google_storage_bucket.photos_bucket.name
  role   = "roles/storage.objectViewer"
  member = "allUsers"
}

resource "google_compute_backend_bucket" "photos_backend_bucket" {
  name        = "backend-bucket-${random_string.bucket_name.result}"
  bucket_name = google_storage_bucket.photos_bucket.name
  enable_cdn  = true
  # edge_security_policy = ... We will revisit it later
}

After you apply, you should be able to upload any object to the bucket via the Console or gsutil and see it when accessing it in the browser.
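As a side note, publicly readable objects are served from the storage.googleapis.com endpoint. A tiny Python helper (the bucket name below is a made-up example - substitute your own random suffix) shows the URL shape to expect:

```python
# Public objects in a GCS bucket are reachable at
# https://storage.googleapis.com/<bucket>/<object>.
def public_object_url(bucket: str, object_path: str) -> str:
    return f"https://storage.googleapis.com/{bucket}/{object_path.lstrip('/')}"

print(public_object_url("photos-bucket-abc123def4", "images/test.jpg"))
# https://storage.googleapis.com/photos-bucket-abc123def4/images/test.jpg
```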

Object publicly visible

Global External Application Load Balancer

As you can see from the title, Google has an amazing naming convention for its products. But just as long as the name is the list of its functionalities: this service essentially combines AWS's Application Load Balancer, CloudFront and Global Accelerator into one.

We will reserve a new public IPv4 address from Google and use it as a frontend for our load balancer. This frontend will lead to a URL map that, for now, will route everything to the storage bucket (we will revisit it later). In this example, it only has an HTTP endpoint. In the repository I also left an example for HTTPS if you happen to have a domain. (You will need to add a CNAME record to validate domain ownership.)

resource "google_compute_global_address" "lb_ip" {
  name = "lb-external-ip"
}

resource "google_compute_url_map" "url_map" {
  name            = "global-app-lb"
  default_service = google_compute_backend_bucket.photos_backend_bucket.id

  host_rule {
    hosts        = ["*"]
    path_matcher = "path-matcher"
  }

  path_matcher {
    name            = "path-matcher"
    default_service = google_compute_backend_bucket.photos_backend_bucket.id

    path_rule {
      paths   = ["/images", "/images/*"]
      service = google_compute_backend_bucket.photos_backend_bucket.id
    }
  }
}

resource "google_compute_target_http_proxy" "http_proxy" {
  name    = "lb-http-proxy"
  url_map = google_compute_url_map.url_map.id
}

resource "google_compute_global_forwarding_rule" "forwarding_rule" {
  name                  = "lb-forwarding-rule"
  ip_protocol           = "TCP"
  load_balancing_scheme = "EXTERNAL_MANAGED"
  port_range            = "80"
  target                = google_compute_target_http_proxy.http_proxy.id
  ip_address            = google_compute_global_address.lb_ip.id
}

Now, if you go to the load balancer via its IP address, it should also show the image. The /images/ prefix is not stripped when requesting the object from the bucket, so for example http://1.2.3.4/images/test.jpg will fetch gs://photos-bucket-123456/images/test.jpg. This will still be the case when we later change the default route to Cloud Run.
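To make that mapping concrete, here is a minimal Python sketch of it (the bucket name is hypothetical):

```python
# The backend bucket does not rewrite the path: the full request path
# becomes the object name in the bucket.
def lb_path_to_object(bucket: str, request_path: str) -> str:
    return f"gs://{bucket}/{request_path.lstrip('/')}"

print(lb_path_to_object("photos-bucket-123456", "/images/test.jpg"))
# gs://photos-bucket-123456/images/test.jpg
```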

Image visible via Load Balancer

Creating an index website

First, let's create a dummy website that will simply display some placeholder photos. I will use Python with FastAPI. The site will be built with Docker, so you have a very wide choice of languages and SDKs. Below is a simple app that serves a templated HTML page from the templates/ directory. For brevity I will skip the actual template, but this can simply be a vibe-coded HTML page.

import fastapi, random, uvicorn, os
import fastapi.templating

app = fastapi.FastAPI()
templates = fastapi.templating.Jinja2Templates(directory="templates")

@app.get("/")
def index(request: fastapi.Request):
    # HTML response based on Jinja template
    return templates.TemplateResponse(
        "index.html", {
            "request": request,
            "number": random.randint(5, 20)
        }
    )

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 8080))
    uvicorn.run(app, host="0.0.0.0", port=port)

Now it's time for a Dockerfile. I will install FastAPI, Uvicorn and Jinja directly with pip, but in real scenarios it's better to use an actual requirements.txt.

FROM    python:3.14-alpine
ENV     PORT=8080
WORKDIR /app/
RUN     pip install fastapi uvicorn jinja2
COPY    . /app/
CMD     ["sh", "-c", "uvicorn main:app --host 0.0.0.0 --port ${PORT}"]

Now we can use the docker provider in Terraform to build and push our application. To make it react to changes, I will also use a variable and bump the version with each change. But before we can push, let's also define a new repository in Artifact Registry.

variable "app_version" { type = string }

resource "google_artifact_registry_repository" "website_repo" {
  location      = "europe-west4"
  repository_id = "website-repository"
  description   = "Docker repository for the website image"
  format        = "DOCKER"
  docker_config { immutable_tags = false }
}

resource "docker_image" "website_image" {
  name = "${google_artifact_registry_repository.website_repo.registry_uri}/website-image:${var.app_version}"
  build {
    context  = "./website"
    platform = "linux/amd64"
  }
}

resource "docker_registry_image" "website_image" {
  name          = docker_image.website_image.name
  keep_remotely = true
}

Now, as you apply, the new image from the website/ directory should be built and pushed to Artifact Registry in Google Cloud. This way we can go on to create a service that will serve this image.

Creating Cloud Run service

First I will create a basic Cloud Run service with the image created above. Let's approach this step by step. As we will later use this service to connect to other services, we will attach a service account in advance (a Service Account is the equivalent of an AWS IAM Role). I will also configure the port on which it should listen; the same port will be exposed in the PORT environment variable. To make the Cloud Run service accessible only via the Load Balancer later (and not via the .run.app link), I set the ingress configuration to INGRESS_TRAFFIC_INTERNAL_LOAD_BALANCER. Also, scaling will be limited to at most 3 instances.

resource "google_service_account" "cloud_run_sa" {
  account_id   = "cloud-run-website-sa"
  display_name = "Service Account for Cloud Run Website"
}

resource "google_cloud_run_v2_service" "website_service" {
  name                = "website-service"
  location            = "europe-west4"
  ingress             = "INGRESS_TRAFFIC_INTERNAL_LOAD_BALANCER"
  deletion_protection = false
  depends_on          = [docker_image.website_image, docker_registry_image.website_image]

  template {
    service_account = google_service_account.cloud_run_sa.email
    containers {
      image = "${google_artifact_registry_repository.website_repo.registry_uri}/website-image:${var.app_version}"
      ports { container_port = 8080 }
    }
  }

  scaling {
    min_instance_count    = 0
    max_instance_count    = 3
  }
}

Before we can expose the Cloud Run app in the load balancer, we still need to create two things. First, we need to allow anyone to use the Service without authentication. Second, we need to define a network endpoint group that will expose our Cloud Run Service for this region (you can use a single Load Balancer Backend with multiple Network Endpoint Groups to serve the same Service in multiple regions).

resource "google_cloud_run_v2_service_iam_member" "noauth" {
  location = google_cloud_run_v2_service.website_service.location
  project  = google_cloud_run_v2_service.website_service.project
  name     = google_cloud_run_v2_service.website_service.name
  role     = "roles/run.invoker"
  member   = "allUsers"
}

resource "google_compute_region_network_endpoint_group" "serverless_neg" {
  name                  = "serverless-neg"
  network_endpoint_type = "SERVERLESS"
  region                = "europe-west4"
  cloud_run { service = google_cloud_run_v2_service.website_service.name }
}

resource "google_compute_backend_service" "website_backend" {
  name                  = "website-backend"
  load_balancing_scheme = "EXTERNAL_MANAGED"
  protocol              = "HTTP"
  backend { group = google_compute_region_network_endpoint_group.serverless_neg.id }
  # security_policy = ... we will revisit it later
}

Accessing our website from Load Balancer

Now we can attach the new backend to the Load Balancer. We will replace the default routes in the URL map and keep /images/ served by the bucket backend.

resource "google_compute_url_map" "url_map" {
  name            = "global-app-lb"
  default_service = google_compute_backend_service.website_backend.id
  # Changed the following 👆
  # ...

  path_matcher {
    # And this one too 👇
    name            = "path-matcher"
    default_service = google_compute_backend_service.website_backend.id

    path_rule {
      paths   = ["/images", "/images/*"]
      service = google_compute_backend_bucket.photos_backend_bucket.id
    }
  }
}

The root of the load balancer should now serve your new Cloud Run website like on the image below.

Index of our project

Protecting the site with Cloud Armor

If you go to the logs explorer in the Google Console, you will see a lot of requests coming from malicious individuals running scanners against IPs. This is an unfortunate fact about IPv4 - the address space is so small that scanning all of it is feasible, and it makes sense for hackers who want to steal your .env.

Bad actors in the logs

In general you would block individual IPs or ranges, but in our case we can simply use our own IP as an allowlist. That's what Cloud Armor is for (it can actually do much more, like blocking requests for .env, for example 🤩). The rules are applied to the backends, and because we have two types of backends - compute for our Cloud Run and bucket for the Storage Bucket - we also need two types of policies. They will actually look exactly the same, with only one argument changed. The default behavior will be to deny and return HTTP error code 403.

variable "ip_range" {
  type        = string
  default     = "0.0.0.0/0" # Set this to your IP range
}

resource "google_compute_security_policy" "policy" {
  name = "backend-allowed-ips"
  type = "CLOUD_ARMOR"

  rule {
    action   = "allow"
    priority = "1000"
    match {
      versioned_expr = "SRC_IPS_V1"
      config { src_ip_ranges = [var.ip_range] }
    }
    description = "Allow access from my IP range"
  }

  rule {
    action   = "deny(403)"
    priority = "2147483647"
    match {
      versioned_expr = "SRC_IPS_V1"
      config { src_ip_ranges = ["*"] }
    }
    description = "Default deny rule"
  }
}

resource "google_compute_security_policy" "edge_policy" {
  name = "edge-allowed-ips"
  type = "CLOUD_ARMOR_EDGE"

  # Exactly same rules as above!
  # ...
}
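To build intuition for how these rules interact, here is a rough Python sketch (my own simplification, not Cloud Armor's actual implementation) of first-match evaluation in ascending priority order:

```python
import ipaddress

# Simplified model: rules are checked in ascending priority order
# (lower number wins) and the first matching rule decides the action.
def evaluate(rules, client_ip):
    ip = ipaddress.ip_address(client_ip)
    for rule in sorted(rules, key=lambda r: r["priority"]):
        for rng in rule["src_ip_ranges"]:
            if rng == "*" or ip in ipaddress.ip_network(rng):
                return rule["action"]
    return "allow"  # unreachable here: the priority-2147483647 rule matches everything

rules = [
    {"priority": 1000, "src_ip_ranges": ["208.81.0.0/16"], "action": "allow"},
    {"priority": 2147483647, "src_ip_ranges": ["*"], "action": "deny(403)"},
]
print(evaluate(rules, "208.81.188.10"))  # allow
print(evaluate(rules, "1.2.3.4"))        # deny(403)
```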

Now we can apply the new policies to our backends. Let's revisit the resources for the Cloud Run backend service and for the images bucket.

resource "google_compute_backend_service" "website_backend" {
  name                  = "website-backend"
  load_balancing_scheme = "EXTERNAL_MANAGED"
  protocol              = "HTTP"
  backend { group = google_compute_region_network_endpoint_group.serverless_neg.id }
  # Changed this one 👇, use normal CLOUD_ARMOR policy here
  security_policy = google_compute_security_policy.policy.id
}

resource "google_compute_backend_bucket" "photos_backend_bucket" {
  name        = "backend-bucket-${random_string.bucket_name.result}"
  bucket_name = google_storage_bucket.photos_bucket.name
  enable_cdn  = true
  # Changed this one 👇, use CLOUD_ARMOR_EDGE policy here
  edge_security_policy = google_compute_security_policy.edge_policy.id
}

If you try to access the load balancer, for example via a VPN, it should now show Forbidden, but it should still work from your home IP. If you don't know what your IP is, do the following: go to https://api.ipify.org/, replace the last two octets with 0 and add /16. That way the policy will be flexible enough if you have a dynamic IP. So 208.81.188.10 should become 208.81.0.0/16.
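If you prefer to compute the range programmatically, Python's standard ipaddress module can do the rounding for you - a minimal sketch:

```python
import ipaddress

# Round an address down to the /16 network that contains it.
# strict=False allows host bits to be set in the input.
def to_slash16(ip: str) -> str:
    return str(ipaddress.ip_network(f"{ip}/16", strict=False))

print(to_slash16("208.81.188.10"))  # 208.81.0.0/16
```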

Access forbidden for VPN connections

Going further

Phew! That was quite a long write-up. In the next part we will focus on the upload process and on triggering a Cloud Run function which will describe our image with the Vision API.
