## Introduction
Modern observability pipelines require more than just moving data from point A to point B; they require enterprise-grade security. When running Vector.dev on Google Cloud Platform (GCP), many engineers fall into the trap of using static JSON Service Account keys. These keys are a security liability.
In this tutorial, I’ll show you how to implement a more secure approach using GCP Workload Identity, allowing Vector to authenticate natively and securely.
## The Architecture
Our setup involves:

- Vector.dev running on Kubernetes (GKE).
- A Google Service Account (GSA) with narrowly scoped permissions (e.g., Pub/Sub Publisher).
- Workload Identity to link the Kubernetes Service Account (KSA) to the GSA.
## Step 1: Create the Google Service Account
First, create a dedicated service account for Vector and grant it only the necessary permissions:
```bash
gcloud iam service-accounts create vector-aggregator \
  --display-name="Vector Aggregator Service Account"

gcloud projects add-iam-policy-binding [YOUR_PROJECT_ID] \
  --member="serviceAccount:vector-aggregator@[YOUR_PROJECT_ID].iam.gserviceaccount.com" \
  --role="roles/pubsub.publisher"
```
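To confirm the role actually landed, you can query the project's IAM policy. A quick check, assuming `gcloud` is authenticated against the same project:

```shell
# List every role bound to the vector-aggregator service account
gcloud projects get-iam-policy [YOUR_PROJECT_ID] \
  --flatten="bindings[].members" \
  --filter="bindings.members:vector-aggregator@[YOUR_PROJECT_ID].iam.gserviceaccount.com" \
  --format="table(bindings.role)"
```

You should see `roles/pubsub.publisher` in the output, and ideally nothing broader.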
## Step 2: Bind Kubernetes to GCP IAM
Now, we allow the Kubernetes service account to act as the Google service account:
```bash
gcloud iam service-accounts add-iam-policy-binding \
  vector-aggregator@[YOUR_PROJECT_ID].iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:[YOUR_PROJECT_ID].svc.id.goog[vector-namespace/vector-ksa]"
```
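The IAM binding is only half of the link: on the Kubernetes side, the KSA must carry the `iam.gke.io/gcp-service-account` annotation pointing back at the GSA, or GKE won't exchange tokens for it. A sketch, assuming the `vector-namespace` namespace and `vector-ksa` service account names from the binding above:

```shell
# Create the KSA referenced in the Workload Identity binding
kubectl create serviceaccount vector-ksa --namespace vector-namespace

# Annotate it so GKE impersonates the GSA for pods using this KSA
kubectl annotate serviceaccount vector-ksa \
  --namespace vector-namespace \
  iam.gke.io/gcp-service-account=vector-aggregator@[YOUR_PROJECT_ID].iam.gserviceaccount.com
```

Your Vector pods must then run with `serviceAccountName: vector-ksa` in their spec (the Vector Helm chart exposes this via its `serviceAccount` values).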
## Step 3: Configure Vector to Use the Identity
In your `vector.yaml`, you don't need to set `credentials_path` at all. When no credentials are configured, Vector falls back to the environment's Application Default Credentials, which Workload Identity supplies automatically:
```yaml
sinks:
  gcp_pubsub:
    type: gcp_pubsub
    inputs:
      - your_log_source
    project: "[YOUR_PROJECT_ID]"
    topic: "vector-logs-topic"
    # No credentials_path needed!
```
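Before rolling out, it's worth verifying the token exchange from inside a pod that uses the annotated KSA. A sketch, assuming the config is mounted at `/etc/vector/vector.yaml` and the pod runs with `serviceAccountName: vector-ksa`:

```shell
# Sanity-check the Vector config (health checks may reach out to GCP)
vector validate /etc/vector/vector.yaml

# From inside the pod: the metadata server should report the GSA's email,
# proving Workload Identity is impersonating vector-aggregator
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"
```

If the `curl` call returns the node's default compute service account instead of `vector-aggregator@...`, the KSA annotation or the Workload Identity binding is misconfigured.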
## Conclusion
By removing static keys, you reduce the risk of credential leakage and align your infrastructure with Zero-Trust principles. This setup is scalable, secure, and easier to manage in large-scale GCP environments.