Deploy a Polyglot Stack on Google Cloud
Deploying multiple services written in different languages can be tricky, but with Google Cloud managed services, you can run a polyglot stack efficiently. In this guide, we’ll deploy:
- Backend: Spring Boot (Java)
- AI/Service: FastAPI (Python)
- Database: Cloud SQL (PostgreSQL)
- Cache: Memorystore (Redis)
- Networking: Serverless VPC Access Connector
- CI/CD: Cloud Build + Artifact Registry
- Hosting: Cloud Run (serverless)
This setup ensures production-grade isolation, scalability, private networking, and clean CI/CD pipelines.
All commands use placeholders like PROJECT_ID and REGION. Replace them with your own values. Store sensitive values in Secret Manager, never in shell history.
1. Install the Google Cloud CLI
To install the Google Cloud CLI on your system, follow the official guide for your platform: Google Cloud SDK Installation
Initialize:
gcloud init
2. Set Environment Variables
export PROJECT_ID="your-gcp-project-id"
export REGION="asia-southeast1"
export ARTIFACT_REGION="asia-southeast1"
# Networking
export NETWORK="default"
export SUBNET="default"
export VPC_CONNECTOR="serverless-connector-01"
export VPC_CONNECTOR_RANGE="10.8.0.0/28"
# Cloud SQL (Postgres)
export SQL_INSTANCE_NAME="app-postgres"
export SQL_TIER="db-f1-micro"
export SQL_DB_BACKEND="backend_db"
export SQL_DB_AI="ai_db"
export SQL_USER="app_user"
export SQL_PASS="$(openssl rand -base64 24)" # store in Secret Manager
# Memorystore (Redis)
export REDIS_INSTANCE="app-redis"
export REDIS_TIER="BASIC"
export REDIS_SIZE_GB=1
# Artifact Registry
export REPO_BACKEND="apps-backend"
export REPO_AI="apps-ai"
# Images and services
export BACKEND_IMAGE="${ARTIFACT_REGION}-docker.pkg.dev/${PROJECT_ID}/${REPO_BACKEND}/backend:$(git rev-parse --short HEAD)"
export AI_IMAGE="${ARTIFACT_REGION}-docker.pkg.dev/${PROJECT_ID}/${REPO_AI}/ai:$(git rev-parse --short HEAD)"
export SERVICE_BACKEND="backend"
export SERVICE_AI="ai"
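Before moving on, it helps to fail fast if any of these variables are still empty. The require_vars helper below is this guide's own convenience function (it uses bash indirect expansion), not a gcloud command:

```shell
# Warn early if any of the variables above are still empty.
require_vars() {
  local missing=0
  for v in "$@"; do
    [ -n "${!v}" ] || { echo "Missing: $v" >&2; missing=1; }
  done
  return $missing
}

require_vars PROJECT_ID REGION ARTIFACT_REGION SQL_INSTANCE_NAME REDIS_INSTANCE \
  || echo "Set the variables above before continuing."
```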
Enable required APIs:
gcloud services enable run.googleapis.com sqladmin.googleapis.com redis.googleapis.com \
artifactregistry.googleapis.com vpcaccess.googleapis.com secretmanager.googleapis.com
3. Create Cloud SQL (PostgreSQL)
# Create instance
gcloud sql instances create ${SQL_INSTANCE_NAME} \
--project=${PROJECT_ID} \
--region=${REGION} \
--database-version=POSTGRES_15 \
--tier=${SQL_TIER}
# Create databases
gcloud sql databases create ${SQL_DB_BACKEND} --instance=${SQL_INSTANCE_NAME}
gcloud sql databases create ${SQL_DB_AI} --instance=${SQL_INSTANCE_NAME}
# Create user
gcloud sql users create ${SQL_USER} --instance=${SQL_INSTANCE_NAME} --password="${SQL_PASS}"
# Get connection name for Cloud Run
export SQL_CONNECTION_NAME=$(gcloud sql instances describe ${SQL_INSTANCE_NAME} --format='value(connectionName)')
Tip: Use Private IP for production and connect via a VPC connector for better security.
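The deploy commands in step 8 read two secrets, sql-db-password and jwt-secret, so create them now. The replication policy and the grant to the default compute service account (Cloud Run's default runtime identity) are this guide's choices; adjust if you use a dedicated service account:

```shell
# Store the generated DB password and a fresh JWT signing key in Secret Manager.
printf '%s' "${SQL_PASS}" | gcloud secrets create sql-db-password \
  --replication-policy=automatic --data-file=-

openssl rand -base64 32 | tr -d '\n' | gcloud secrets create jwt-secret \
  --replication-policy=automatic --data-file=-

# Allow the Cloud Run runtime service account to read both secrets.
export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT_ID} --format='value(projectNumber)')
for s in sql-db-password jwt-secret; do
  gcloud secrets add-iam-policy-binding "$s" \
    --member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"
done
```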
4. Create Memorystore (Redis)
gcloud redis instances create ${REDIS_INSTANCE} \
--size=${REDIS_SIZE_GB} \
--region=${REGION} \
--tier=${REDIS_TIER} \
--network=${NETWORK}
export REDIS_HOST=$(gcloud redis instances describe ${REDIS_INSTANCE} --region=${REGION} --format='value(host)')
export REDIS_PORT=$(gcloud redis instances describe ${REDIS_INSTANCE} --region=${REGION} --format='value(port)')
5. Set Up Serverless VPC Access Connector
gcloud compute networks vpc-access connectors create ${VPC_CONNECTOR} \
--region=${REGION} \
--network=${NETWORK} \
--range=${VPC_CONNECTOR_RANGE}
This allows Cloud Run services to reach private resources like Cloud SQL and Redis.
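Connector provisioning takes a few minutes; confirm it reports READY before the first deploy:

```shell
# The connector must be in state READY before Cloud Run can use it.
gcloud compute networks vpc-access connectors describe ${VPC_CONNECTOR} \
  --region=${REGION} \
  --format='value(state)'
```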
6. Artifact Registry
gcloud artifacts repositories create ${REPO_BACKEND} \
--repository-format=docker \
--location=${ARTIFACT_REGION}
gcloud artifacts repositories create ${REPO_AI} \
--repository-format=docker \
--location=${ARTIFACT_REGION}
gcloud auth configure-docker ${ARTIFACT_REGION}-docker.pkg.dev
7. Build and Push Docker Images
- Spring Boot Backend:
docker build -t ${BACKEND_IMAGE} -f backend/Dockerfile.cloudrun backend
docker push ${BACKEND_IMAGE}
- FastAPI AI Service:
docker build -t ${AI_IMAGE} -f backend-ai/Dockerfile.cloudrun backend-ai
docker push ${AI_IMAGE}
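Optionally confirm that both images landed in Artifact Registry before deploying:

```shell
# List pushed images and their tags in each repository.
gcloud artifacts docker images list \
  ${ARTIFACT_REGION}-docker.pkg.dev/${PROJECT_ID}/${REPO_BACKEND} --include-tags
gcloud artifacts docker images list \
  ${ARTIFACT_REGION}-docker.pkg.dev/${PROJECT_ID}/${REPO_AI} --include-tags
```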
8. Deploy on Cloud Run
Use VPC connector, Cloud SQL, and Secret Manager for secure connections.
Spring Boot Backend
gcloud run deploy ${SERVICE_BACKEND} \
--image=${BACKEND_IMAGE} \
--region=${REGION} \
--platform=managed \
--allow-unauthenticated \
--min-instances=0 \
--vpc-connector=${VPC_CONNECTOR} \
--vpc-egress=all-traffic \
--add-cloudsql-instances=${SQL_CONNECTION_NAME} \
--set-env-vars=SPRING_PROFILES_ACTIVE=prod \
--set-env-vars=CLOUD_SQL_CONNECTION_NAME=${SQL_CONNECTION_NAME} \
--set-env-vars=SPRING_DATASOURCE_URL="jdbc:postgresql:///${SQL_DB_BACKEND}?cloudSqlInstance=${SQL_CONNECTION_NAME}&socketFactory=com.google.cloud.sql.postgres.SocketFactory" \
--set-env-vars=SPRING_DATASOURCE_USERNAME=${SQL_USER} \
--set-secrets=SPRING_DATASOURCE_PASSWORD=sql-db-password:latest \
--set-env-vars=SPRING_DATA_REDIS_HOST=${REDIS_HOST} \
--set-env-vars=SPRING_DATA_REDIS_PORT=${REDIS_PORT} \
--set-secrets=SPRING_CUSTOM_SECURITY_JWTSECRET=jwt-secret:latest
FastAPI AI Service
gcloud run deploy ${SERVICE_AI} \
--image=${AI_IMAGE} \
--region=${REGION} \
--platform=managed \
--allow-unauthenticated \
--min-instances=0 \
--vpc-connector=${VPC_CONNECTOR} \
--vpc-egress=all-traffic \
--add-cloudsql-instances=${SQL_CONNECTION_NAME} \
--set-env-vars=DEBUG=False \
--set-env-vars=DATABASE_URL="postgresql+psycopg2://${SQL_USER}:<from-secret>@/${SQL_DB_AI}?host=/cloudsql/${SQL_CONNECTION_NAME}" \
--set-env-vars=REDIS_URL="redis://${REDIS_HOST}:${REDIS_PORT}/0" \
--set-secrets=JWT_SECRET_KEY=jwt-secret:latest \
--set-env-vars=JWT_ALGORITHM=HS256 \
--set-env-vars=JWT_ACCESS_TOKEN_EXPIRE_MINUTES=60
Replace <from-secret> with a Secret Manager reference.
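Once both services are up, a quick smoke test catches misconfigured networking or secrets early. The health paths below are assumptions — /actuator/health exists only if Spring Boot Actuator is on the classpath, and /health only if the FastAPI app defines such a route:

```shell
# Fetch the deployed URLs and probe each service.
export BACKEND_URL=$(gcloud run services describe ${SERVICE_BACKEND} --region=${REGION} --format='value(status.url)')
export AI_URL=$(gcloud run services describe ${SERVICE_AI} --region=${REGION} --format='value(status.url)')

curl -fsS "${BACKEND_URL}/actuator/health"   # Spring Boot Actuator, if enabled
curl -fsS "${AI_URL}/health"                 # a typical FastAPI health route
```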
9. CI/CD with Cloud Build
Automate builds and deployment using Cloud Build triggers on the main branch.
- Backend cloudbuild.yaml (template):
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build','-t','${_ARTIFACT_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPO_BACKEND}/backend:$COMMIT_SHA','-f','backend/Dockerfile.cloudrun','backend']
  - name: gcr.io/cloud-builders/docker
    args: ['push','${_ARTIFACT_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPO_BACKEND}/backend:$COMMIT_SHA']
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args: ['run','deploy','backend','--image','${_ARTIFACT_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPO_BACKEND}/backend:$COMMIT_SHA','--region','${_REGION}','--platform','managed']
options:
  logging: CLOUD_LOGGING_ONLY
timeout: '1200s'
- AI cloudbuild.yaml (template):
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build','-t','${_ARTIFACT_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPO_AI}/ai:$COMMIT_SHA','-f','backend-ai/Dockerfile.cloudrun','backend-ai']
  - name: gcr.io/cloud-builders/docker
    args: ['push','${_ARTIFACT_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPO_AI}/ai:$COMMIT_SHA']
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args: ['run','deploy','ai','--image','${_ARTIFACT_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPO_AI}/ai:$COMMIT_SHA','--region','${_REGION}','--platform','managed']
options:
  logging: CLOUD_LOGGING_ONLY
timeout: '1200s'
Ensure the Cloud Build service account has access to Artifact Registry and Cloud Run.
Create a Cloud Build trigger
Create a GitHub-based trigger that runs on main and uses your cloudbuild.yaml:
gcloud beta builds triggers create github \
--name="polyglot-deploy" \
--repo-owner="GITHUB_OWNER" \
--repo-name="GITHUB_REPO" \
--branch-pattern="^main$" \
--build-config="cloudbuild.yaml" \
--substitutions=_REGION=${REGION},_ARTIFACT_REGION=${ARTIFACT_REGION},_REPO_BACKEND=${REPO_BACKEND},_REPO_AI=${REPO_AI}
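You can also run the same pipeline once by hand to validate the config before pushing to main. This sketch assumes the backend cloudbuild.yaml sits at the repository root; note that built-in substitutions like COMMIT_SHA are not populated for manual builds, so it is passed explicitly:

```shell
# Dry-run the backend pipeline manually; COMMIT_SHA must be supplied by hand.
gcloud builds submit . \
  --config=cloudbuild.yaml \
  --substitutions=_REGION=${REGION},_ARTIFACT_REGION=${ARTIFACT_REGION},_REPO_BACKEND=${REPO_BACKEND},COMMIT_SHA=$(git rev-parse HEAD)
```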
10. Best Practices
- Use Secret Manager for DB passwords, JWTs, API keys.
- Prefer Private IP for Cloud SQL and Redis; connect via VPC connector.
- Lock down Cloud Run ingress and enforce IAM roles.
- Tune min/max instances and concurrency to balance cost and cold starts.
- Monitor using Cloud Logging and Cloud Monitoring dashboards.
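As a concrete example of locking down ingress: if only the backend needs to be public, the AI service can be taken off the open internet and restricted to callers inside the VPC or behind a load balancer. This is a sketch — adjust it to your topology:

```shell
# Restrict the AI service to internal traffic only.
gcloud run services update ${SERVICE_AI} \
  --region=${REGION} \
  --ingress=internal

# Drop public invocation and rely on IAM (roles/run.invoker) instead.
gcloud run services remove-iam-policy-binding ${SERVICE_AI} \
  --region=${REGION} \
  --member="allUsers" \
  --role="roles/run.invoker"
```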
With this setup, you have a scalable, secure, and production-ready polyglot deployment with CI/CD and private networking. Both Java and Python services can now run efficiently on GCP serverless infrastructure.