Jérôme Dx
GCP-Nuke to clean your sandbox projects

GCP Nuke is a tool you can use to clean up your GCP projects. It’s a Go client, released under the MIT license, that removes every resource in a project, or only those you select with filters.

As the name “nuke” suggests, it can be a very destructive tool.

Equivalents exist for AWS and Azure, provided by the same author.

It is particularly interesting, for example, if you have a sandbox project you want to clean regularly, for cost reasons or even sustainability considerations.

I will show you concretely how you can use it for this kind of setup.

A basic Sandbox usage

First, you can install it simply with Homebrew:

brew install ekristen/tap/gcp-nuke

Ensure you have all these services enabled:

gcloud services enable cloudresourcemanager.googleapis.com
gcloud services enable compute.googleapis.com
gcloud services enable storage.googleapis.com
gcloud services enable run.googleapis.com
gcloud services enable cloudfunctions.googleapis.com
gcloud services enable pubsub.googleapis.com
gcloud services enable iam.googleapis.com

# to check everything is correct :
gcloud services list --enabled

You can customize the tool’s behavior with a config file.

The project documentation lists all the resource types handled by the tool.
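You can also ask the CLI itself — the tools in this family expose a subcommand that prints the registered resource types, and gcp-nuke appears to follow the same convention (verify against `gcp-nuke --help` for your version):

```shell
# List the resource types the tool knows how to delete
# (check `gcp-nuke --help` if this subcommand differs in your build)
gcp-nuke resource-types
```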

For my usage, I chose to target a subset of resources:

  • Cloud Functions
  • CloudRun
  • GKE
  • Compute Instances & disks
  • Storage Buckets
  • BigQuery Datasets
  • CloudSQL instances

It will scan the three regions I use most, and it lets me keep resources that carry these labels:

  • managed_by: terraform
  • gcp-nuke: ignore
regions:
  - global
  - us-east1      # US: South Carolina
  - europe-west1  # EU: Belgium
  - europe-west9  # EU: Paris

accounts:
  my-gcp-project:
    includes:
      - CloudFunction
      - CloudFunction2
      - CloudRun
      - GKECluster
      - ComputeInstance
      - ComputeFirewall
      - ComputeForwardingRule
      - ComputeSSLCertificate
      - ComputeDisk
      - StorageBucket
      - BigQueryDataset
      - CloudSQLInstance

    filters:
      __global__:
        - property: label:gcp-nuke
          value: "ignore"
        - property: label:managed_by
          value: "terraform"
      ComputeFirewall:
        - property: "name"
          value: "default-*"
          invert: true

To authenticate, you can create a service account and set the corresponding environment variable, or sign in with gcloud auth login:

export GOOGLE_APPLICATION_CREDENTIALS=creds.json
# or
gcloud auth login
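If you go the service-account route, a minimal sketch looks like this (the account name `sa-gcp-nuke-local` is a hypothetical example; grant it only the roles your project policy allows):

```shell
# Create a dedicated service account (hypothetical name)
gcloud iam service-accounts create sa-gcp-nuke-local \
  --display-name="gcp-nuke local runs" \
  --project=<current-gcp-project-id>

# Export a JSON key and point Application Default Credentials at it
gcloud iam service-accounts keys create creds.json \
  --iam-account=sa-gcp-nuke-local@<current-gcp-project-id>.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=creds.json
```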

You can do a dry run to see what would be destroyed (dry-run is the default, to limit the risks):

gcp-nuke run --config config.yml --no-prompt --project-id <current-gcp-project-id>

When you’re sure, you can launch the “nuke” 💥 :

gcp-nuke run --config config.yml --no-prompt --no-dry-run --project-id <current-gcp-project-id>
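The `__global__` filters key off labels, so anything you want to survive the nuke just needs the right label. As a sketch, with hypothetical resource names:

```shell
# Label a Compute instance so the __global__ filter keeps it
gcloud compute instances update my-vm \
  --zone=europe-west1-b \
  --update-labels=gcp-nuke=ignore

# Same idea for a Storage bucket
gcloud storage buckets update gs://my-bucket \
  --update-labels=gcp-nuke=ignore
```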

Automation

If you want, you can define a recurring job that cleans up the resources, once a day at 2 a.m. for instance.

This can be achieved with a Cloud Run job.

Here is the Dockerfile :

FROM docker.io/library/golang:1.23 AS builder

# Install gcp-nuke
RUN go install github.com/ekristen/gcp-nuke@latest

FROM docker.io/library/debian:bookworm-slim

# Install ca-certificates for HTTPS requests and gcloud CLI
RUN apt-get update && \
    apt-get install -y ca-certificates curl gnupg && \
    echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && \
    apt-get update && \
    apt-get install -y google-cloud-cli && \
    rm -rf /var/lib/apt/lists/*

COPY --from=builder /go/bin/gcp-nuke /usr/local/bin/gcp-nuke

WORKDIR /app
COPY config.yml .
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh

CMD ["./entrypoint.sh"]

And the entrypoint.sh :

#!/bin/bash
set -e

echo "Setting project..."
gcloud config set project ${PROJECT_ID}

echo "Running gcp-nuke..."
gcp-nuke run --config config.yml --no-prompt --no-dry-run --project-id ${PROJECT_ID}

echo "gcp-nuke completed."

Push it to Artifact Registry:

gcloud artifacts repositories create gcp-nuke-repo \
  --repository-format=docker \
  --location=europe-west9 \
  --description="Docker Repo for gcp-nuke"

podman build -t europe-west9-docker.pkg.dev/<your-project-id>/gcp-nuke-repo/gcp-nuke-job:latest .

gcloud auth configure-docker europe-west9-docker.pkg.dev
podman push europe-west9-docker.pkg.dev/<your-project-id>/gcp-nuke-repo/gcp-nuke-job:latest
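Before wiring up the scheduler, you may want to smoke-test the image locally. Be careful: `entrypoint.sh` bakes in `--no-dry-run`, so this really deletes resources. On Cloud Run the job will authenticate through its service account; locally you would need to mount credentials (e.g. via `GOOGLE_APPLICATION_CREDENTIALS`) for it to work:

```shell
# Careful: the entrypoint runs with --no-dry-run, so this is a real nuke
podman run --rm \
  -e PROJECT_ID=<your-project-id> \
  europe-west9-docker.pkg.dev/<your-project-id>/gcp-nuke-repo/gcp-nuke-job:latest
```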

Here is the Terraform for deployment; it creates the Cloud Run job, the service account, and the Cloud Scheduler job:

# ----------------------------
# Cloud Run Job
# ----------------------------
resource "google_cloud_run_v2_job" "gcp_nuke_job" {
  name     = "gcp-nuke-job"
  location = var.region

  deletion_protection = false

  template {
    template {
      service_account = google_service_account.gcp_nuke.email

      containers {
        image = "${var.region}-docker.pkg.dev/${var.project_id}/gcp-nuke-repo/gcp-nuke-job:latest"

        env {
          name  = "PROJECT_ID"
          value = var.project_id
        }

        resources {
          limits = {
            cpu    = "1"
            memory = "512Mi"
          }
        }
      }

      max_retries = 3
      timeout     = "300s"
    }

    parallelism = 1

    labels = {
      managed_by = "terraform"
    }
  }
}

# ----------------------------
# Service account
# ----------------------------

resource "google_service_account" "gcp_nuke" {
  account_id   = "sa-gcp-nuke"
  display_name = "Service Account for gcp-nuke"
}

resource "google_project_iam_member" "gcp_nuke_owner" {
  project = var.project_id
  role    = "roles/owner"
  member  = "serviceAccount:${google_service_account.gcp_nuke.email}"
}

# ----------------------------
# Cloud Scheduler
# ----------------------------
resource "google_cloud_scheduler_job" "gcp_nuke_schedule" {
  name        = "gcp-nuke-schedule"
  description = "Run gcp-nuke every day at 2am"
  schedule    = "0 2 * * *"
  time_zone   = "Europe/Paris"

  http_target {
    http_method = "POST"
    uri         = "https://${var.region}-run.googleapis.com/apis/run.googleapis.com/v1/namespaces/${var.project_id}/jobs/${google_cloud_run_v2_job.gcp_nuke_job.name}:run"

    oauth_token {
      service_account_email = google_service_account.gcp_nuke.email
    }
  }
}

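Once deployed, you don't have to wait for the 2 a.m. schedule to check that everything is wired correctly; you can trigger the job on demand:

```shell
# Run the Cloud Run job immediately and wait for it to finish
gcloud run jobs execute gcp-nuke-job --region=<region> --wait
```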

GCP-Nuke is a convenient tool to clean up a GCP project; you now have the keys to use it locally or to automate it.

Regularly cleaning up a sandbox project is a case where it is particularly suitable: it reduces both financial cost and ecological footprint, with the same underlying idea of consuming only what you really need.
