Alvaro David

GCP Cloud SQL Secure connection

Update

It's Google I/O '21 and new features are coming :D

One of them is about Cloud Run and Secret Manager, so here comes the update:

  • Yep, ENV_VARs are a little bit dangerous because, one way or another, they end up in plain text somewhere (like the Cloud Run console, or in a file if you use volumes).

But now we can set our envs directly from Secret Manager, which means the secrets are no longer visible in the console (or in a file).

# Just add those secrets as envs when you deploy your service.
# For this example I'm using 'latest', it's recommended to use a specific version. 

gcloud beta run deploy pets-api \
  --image gcr.io/Project-A/pets-api:v0.1 \
  --update-secrets=db_password=db_password_secret:latest

Another tip I want to share comes from Wietse Venema's book, Building Serverless Applications with Google Cloud Run.

"To add another barrier to prevent direct connections [to our Cloud SQL instance], you can request a SSL/TLS for those [connections]:"

gcloud sql instances patch my-pets-instance --require-ssl

That's it!
Remember to follow the best practices from the documentation.


A secure connection is mostly a networking topic because it involves infrastructure tasks. GCP has resources and tools that we, as devs, can use to solve those tasks.

In this case, we have an API that returns a list of pets from a MySQL database.

Our objective is to connect our API in Project A to a MySQL database in Project B using the ready-to-use tools from GCP.

Sure, there are other ways to connect to a DB instance, like a VPN, a VPC with private IPs (for serverless resources such as Cloud Functions you can use the Serverless VPC Access connector), firewall rules, and so on. However, as developers we can take advantage of GCP's managed resources and not worry about infrastructure, just code!

By the way, this kind of connection can also be used within a single GCP project.

Architecture

In this architecture we have two GCP Projects (A and B)

[Architecture diagram]

As you can see in the image:

  • Our API is running on Cloud Run in Project A.
  • Our Cloud SQL instance (MySQL) is running on Project B.

Both are managed by GCP, so we are cool. Here come the security tools:

  1. Cloud SQL Proxy
  2. Secret Manager

Cloud SQL Proxy

The Cloud SQL Proxy allows you to authorize and secure your connections using Identity and Access Management (IAM) permissions. The proxy validates connections using credentials for a user or service account, and wraps the connection in an SSL/TLS layer that is authorized for a Cloud SQL instance. Docs
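In this walkthrough Cloud Run runs the proxy for us, but for local development you can run it yourself and point the app at a local TCP port. A minimal sketch, assuming the v1 cloud_sql_proxy binary has been downloaded and your local Application Default Credentials have the Cloud SQL Client role:

# Authenticate locally (Application Default Credentials)
gcloud auth application-default login

# Start the proxy; it listens on 127.0.0.1:3306 and forwards to the Cloud SQL instance
./cloud_sql_proxy -instances=Project-B:southamerica-east1:my-pets-instance=tcp:3306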

Secret Manager

Secret Manager is a secure and convenient storage system for API keys, passwords, certificates, and other sensitive data. Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud. Docs

Time to code!

Enough theory, let's code a little bit :D

Database (On Project B)

Actually, creating a Cloud SQL instance with MySQL on GCP is very simple; just follow this Quickstart.
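If you prefer the CLI over the console, here is a sketch of creating the instance with gcloud; the tier and MySQL version below are just example values, pick what fits your project:

gcloud sql instances create my-pets-instance \
  --database-version=MYSQL_8_0 \
  --tier=db-f1-micro \
  --region=southamerica-east1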

Schema

A simple table for Pets

CREATE DATABASE petsbook;

USE petsbook;
CREATE TABLE pets (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, 
    name VARCHAR(255) NOT NULL
);

INSERT INTO pets (name) values ("Mel");


SELECT id, name from pets;
-- 1, Mel (My pet)
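One way to apply this schema is through gcloud sql connect, which temporarily allowlists your IP and opens a MySQL session (assuming the mysql client is installed locally):

# Open a session against the instance (prompts for the user's password)
gcloud sql connect my-pets-instance --user=root
# ...then paste the statements above at the mysql> prompt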

Pets API (On Project A)

main.py

from flask import Flask, jsonify
import sqlalchemy

app = Flask(__name__)


def init_connection_engine():

    db_user = "my-super-user"
    db_pass = "my-super-password"
    db_name = "my-pets-database"
    db_socket_dir = '/cloudsql'

    # <PROJECT-NAME>:<INSTANCE-REGION>:<INSTANCE-NAME>
    cloud_sql_connection_name = "Project-B:southamerica-east1:my-pets-instance"

    db_config = {
        "pool_size": 5,
        "max_overflow": 2,
        "pool_timeout": 30,  # 30 seconds
        "pool_recycle": 1800,  # 30 minutes
    }

    # Note: on SQLAlchemy 1.4+, use sqlalchemy.engine.url.URL.create() instead of calling URL() directly.
    pool = sqlalchemy.create_engine(
        sqlalchemy.engine.url.URL(
            drivername="mysql+pymysql",
            username=db_user,
            password=db_pass,
            database=db_name,
            query={
                "unix_socket": "{}/{}".format(
                    db_socket_dir,
                    cloud_sql_connection_name)
            }
        ),
        **db_config
    )

    return pool


# Note: before_first_request was removed in Flask 2.3; on newer Flask, create the engine at startup instead.
@app.before_first_request
def create_connection():
    global db
    db = init_connection_engine()


@app.route("/", methods=['GET'])
def get_pets():
    """
    This method returns a list of pets from DB on Cloud SQL
    """
    pets = []

    with db.connect() as conn:
        pets_result = conn.execute("SELECT id, name from pets;").fetchall()
        for row in pets_result:
            pets.append({"id": row[0], "name": row[1]})

    # Response
    return jsonify(pets), 200
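A quick note on dependencies: the snippet assumes Flask, SQLAlchemy and the PyMySQL driver are available in the container image, something like:

pip install flask sqlalchemy pymysql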

Implement the Cloud SQL Proxy

[On Project A] Create a Service Account.

# Create Service Account (on Project A)
gcloud iam service-accounts create pets-api-cred \
  --description "Service Account for Pets API Service" \
  --display-name "Pets API Service Account"

[On Project B] Grant the Cloud SQL Client role (Cloud SQL > Client in the console) to the service account you created.

# Grant Cloud SQL Client role (on Project B)
gcloud projects add-iam-policy-binding Project-B \
   --member=serviceAccount:pets-api-cred@Project-A.iam.gserviceaccount.com \
   --role="roles/cloudsql.client"

[On Project A] Build and deploy the API on Cloud Run

# Build image
docker build -t gcr.io/Project-A/pets-api:v0.1 . 

# Push to Container Register 
docker push gcr.io/Project-A/pets-api:v0.1 

# Deploy to Cloud Run
gcloud run deploy pets-api \
  --image gcr.io/Project-A/pets-api:v0.1 \
  --region southamerica-east1 \
  --platform managed \
  --allow-unauthenticated \
  --add-cloudsql-instances Project-B:southamerica-east1:my-pets-instance \
  --update-env-vars INSTANCE_CONNECTION_NAME="Project-B:southamerica-east1:my-pets-instance" \
  --service-account pets-api-cred@Project-A.iam.gserviceaccount.com

  • --add-cloudsql-instances and --update-env-vars indicate to Cloud Run where the Cloud SQL instance is located.

  • --service-account tells Cloud Run to run the service as that Service Account.

This is a powerful advantage, because we don't need to download any key for the Service Account (e.g. a JSON file).
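If you want to double-check which identity the service is running as, you can describe the deployed service; the format expression below is just one way to pull out that field:

gcloud run services describe pets-api \
  --region southamerica-east1 \
  --platform managed \
  --format="value(spec.template.spec.serviceAccountName)"
# pets-api-cred@Project-A.iam.gserviceaccount.com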

Let's try our Pets API

curl https://pets-api-[your-hash]-rj.a.run.app
# [{"id":1,"name":"Mel"}]

It's working!
But... wait a minute... the user and password are exposed in the code .-.

Yep, we could use environment variables to hide the information, but one way or another the user and the password would still end up in plain text somewhere.

The idea is that all sensitive information should be hidden from everyone, everywhere, even in the CI/CD pipeline.

Implement Secret Manager

All our sensitive information (e.g. user and password) should be protected; Secret Manager is a great resource for that.

First, create our Secrets

# Enable Secret Manager
gcloud services enable secretmanager.googleapis.com

# Create a secret for each sensitive information
gcloud secrets create db_user_secret --replication-policy="automatic"
gcloud secrets create db_password_secret --replication-policy="automatic"
gcloud secrets create db_name_secret --replication-policy="automatic"
gcloud secrets create cloud_sql_connection_name_secret --replication-policy="automatic"

# Create a version (contains the actual contents of a secret) for each secret
gcloud secrets versions add db_user_secret --data-file="db_user_secret.txt"
gcloud secrets versions add db_password_secret --data-file="db_password_secret.txt"
gcloud secrets versions add db_name_secret --data-file="db_name_secret.txt"
gcloud secrets versions add cloud_sql_connection_name_secret --data-file="cloud_sql_connection_name_secret.txt"

# Add a secret version from the contents of a file on disk.
# You can also add a secret version directly on the command line, 
# but this is discouraged because the plaintext will appear in your shell history.

# Grant the Secret Manager Secret Accessor role to our Service Account
gcloud projects add-iam-policy-binding Project-A \
   --member=serviceAccount:pets-api-cred@Project-A.iam.gserviceaccount.com \
   --role="roles/secretmanager.secretAccessor"

Then we have to access the information from code. Docs

...
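from google.cloud import secretmanager  # client library for Secret Manager

# PROJECT_ID is assumed to be defined, e.g. PROJECT_ID = "Project-A" (the project that holds the secrets)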
def init_connection_engine():
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()

    # Build the resource name of the secret version.
    db_user_secret = f"projects/{PROJECT_ID}/secrets/db_user_secret/versions/latest"
    db_password_secret = f"projects/{PROJECT_ID}/secrets/db_password_secret/versions/latest"
    db_name_secret = f"projects/{PROJECT_ID}/secrets/db_name_secret/versions/latest"
    cloud_sql_conn_name_secret = f"projects/{PROJECT_ID}/secrets/cloud_sql_connection_name_secret/versions/latest"

    # Access the secret version.
    db_user_response = client.access_secret_version(request={"name": db_user_secret})
    db_password_response = client.access_secret_version(request={"name": db_password_secret})
    db_name_response = client.access_secret_version(request={"name": db_name_secret})
    cloud_sql_conn_name_response = client.access_secret_version(request={"name": cloud_sql_conn_name_secret})

    db_user = db_user_response.payload.data.decode("UTF-8")
    db_pass = db_password_response.payload.data.decode("UTF-8")
    db_name = db_name_response.payload.data.decode("UTF-8")
    db_socket_dir = '/cloudsql'
    cloud_sql_connection_name = cloud_sql_conn_name_response.payload.data.decode("UTF-8")

    db_config = {
        "pool_size": 5,
        "max_overflow": 2,
        "pool_timeout": 30,  # 30 seconds
        "pool_recycle": 1800,  # 30 minutes
    }

    pool = sqlalchemy.create_engine(
        sqlalchemy.engine.url.URL(
            drivername="mysql+pymysql",
            username=db_user,
            password=db_pass,
            database=db_name,
            query={
                "unix_socket": "{}/{}".format(
                    db_socket_dir,
                    cloud_sql_connection_name)
            }
        ),
        **db_config
    )

    return pool


@app.before_first_request
def create_connection():
    global db
    db = init_connection_engine()

...
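After updating the code, rebuild the image and redeploy so the service picks up the Secret Manager client. A quick sketch, assuming you add google-cloud-secret-manager to your requirements and bump the tag to v0.2:

# Rebuild and push with a new tag
docker build -t gcr.io/Project-A/pets-api:v0.2 .
docker push gcr.io/Project-A/pets-api:v0.2

# Deploy the new revision (same flags as before)
gcloud run deploy pets-api \
  --image gcr.io/Project-A/pets-api:v0.2 \
  --region southamerica-east1 \
  --platform managed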

Great! Everything is set, our connection is secure and our sensitive information is protected :)

All the code is on GitHub
https://github.com/AlvarDev/Cloud-SQL-Connection

Hope it helps you!
