Alauddin Al Azad

Scaling PostgreSQL Cloud SQL Connections w/ PgBouncer & Kubernetes

We run an ag-tech service on GCP, built as microservices on Cloud SQL, Cloud Run, and Cloud Functions. From the beginning, as we developed and scaled these services, we kept hitting Cloud SQL connection limits. Since all of the services share a single Cloud SQL instance, we were looking for a cheap, scalable solution that could act as a global connection pooler.

While searching, I found an exciting solution in a three-part article from FutureTech Industries about using Helm, Kubernetes, PgBouncer, and Cloud SQL to drastically increase the number of connections a PostgreSQL database can handle. It was very informative, but since I was new to Kubernetes and not well versed in Helm, I went with a simpler setup using only Kubernetes.

So, what is PgBouncer actually?

PgBouncer is a lightweight connection pool manager for Greenplum and PostgreSQL. PgBouncer maintains a pool of connections for each database and user combination. It either creates a new database connection for a client or reuses an existing connection for the same user and database.
Source: https://www.pgbouncer.org/

Set up Cloud SQL

  1. You will need a Cloud SQL instance (PostgreSQL). This is easy to create, as GCP has plenty of documentation on it; a gcloud sketch follows below. Be sure to place the instance in the same region as your Kubernetes cluster!
  2. Create a DB user for PgBouncer.
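If you prefer the command line, here is a rough sketch with gcloud; the instance name, Postgres version, region, and tier are example values, so adjust them to your needs:

# Create a PostgreSQL instance in the same region as the cluster
gcloud sql instances create my-postgres \
    --database-version=POSTGRES_14 \
    --region=us-central1 \
    --tier=db-g1-small

# Create a dedicated DB user for PgBouncer
gcloud sql users create pgbouncer \
    --instance=my-postgres \
    --password=YOUR_DB_PASS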

Create a Kubernetes cluster

Go to Kubernetes Engine, create a cluster, and select "Autopilot: Google manages your cluster (Recommended)". You are good to go, as Autopilot mode will take care of node provisioning and management for you.

Connect to the Kubernetes cluster you created

gcloud container clusters get-credentials YOUR_CLUSTER_NAME --region us-central1

Now you can access your Kubernetes cluster with kubectl from your local machine.
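To confirm kubectl is pointed at the right cluster, you can run:

kubectl cluster-info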

Let's get started!

  • Create namespace
kubectl create namespace pgb-namespace 
  • Set namespace
kubectl config set-context --current --namespace=pgb-namespace
  • Store the service account JSON key file as a secret. Place the JSON file in your root folder, rename it to postgres-sql-credential.json, and run this command.
kubectl create secret generic cloudsql-instance-credentials \
   --from-file=credentials.json=postgres-sql-credential.json
  • Create a pgbouncer.ini file and paste in the following content.
[databases]
; Route every database through the Cloud SQL proxy on localhost
* = host=localhost port=5432 user=postgres password=YOUR_PASSWORD

[pgbouncer]
listen_port = 6432
listen_addr = 0.0.0.0
auth_file = /opt/bitnami/pgbouncer/conf/userlist.txt
auth_type = md5
pidfile = /opt/bitnami/pgbouncer/tmp/pgbouncer.pid
logfile = /opt/bitnami/pgbouncer/logs/pgbouncer.log
admin_users = postgres
client_tls_sslmode = disable
server_tls_sslmode = disable
; Return server connections to the pool after each transaction
pool_mode = transaction
server_reset_query = DISCARD ALL
ignore_startup_parameters = extra_float_digits
application_name_add_host = 1
; Connection limits (see the explanation below)
max_client_conn = 10000
autodb_idle_timeout = 3600
default_pool_size = 20
max_db_connections = 80
max_user_connections = 80


Explanation:

Suppose your database allows a maximum of 100 connections. Leaving 20% of them for the superuser, set max_db_connections = 80 and max_user_connections = 80.

max_client_conn = 10000: PgBouncer can accept up to 10,000 incoming client connections.
default_pool_size = 20: suppose you have 4 databases in Cloud SQL, so 4 * 20 = 80 = max_db_connections.
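If you are unsure what your instance's limit actually is, you can query it from any existing connection; a quick psql sketch (the host is a placeholder):

psql -h YOUR_DB_HOST -U postgres -c "SHOW max_connections;"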

  • Create a userlist.txt file and paste in this content (note the required "username" "password-hash" format):
"admin" "md545f2603610af569b6155c45067268c6b"

Explanation: This MD5 hash is built from the username and password combination: the literal prefix md5 followed by md5(password + username).
For this hash, the username is admin and the password is 1234.
Follow this doc to make your own, or use the shell sketch below.
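A minimal sketch for username admin and password 1234, assuming md5sum is available (Linux coreutils):

# hash = "md5" + md5(password + username), wrapped as "username" "hash"
echo "\"admin\" \"md5$(echo -n '1234admin' | md5sum | cut -d' ' -f1)\""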

  • Now store the newly created pgbouncer.ini and userlist.txt in a secret. Make sure both files are in your root folder.
kubectl create secret generic pgb-configuration \
   --from-file=pgbouncer.ini --from-file=userlist.txt
  • Store your DB username and password as a secret.
kubectl create secret generic db-credentials \
   --from-literal=username=postgres --from-literal=password=YOUR_DB_PASS
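At this point you should have three secrets in the namespace. A quick check:

kubectl get secrets -n pgb-namespace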
  • Create a kube_pgb_proxy.yaml file and paste in this content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgproxy
  namespace: pgb-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgproxy
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: pgproxy
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      containers:
        - name: cloudsql-proxy
          resources:
            requests:
              memory: "500Mi"
              cpu: "500m"
              ephemeral-storage: "1Gi"
            limits:
              memory: "1000Mi"
              cpu: "1000m"
              ephemeral-storage: "1Gi"
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command:
            [
              "/cloud_sql_proxy",
              "--dir=/cloudsql",
              "-instances=**YOUR_INSTANCE_NAME_STRING**=tcp:5432",
              "-credential_file=/secrets/cloudsql/credentials.json",
            ]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: cloudsql
              mountPath: /cloudsql
        - name: pgproxy
          env:
            - name: POSTGRESQL_HOST
              value: localhost
            - name: POSTGRESQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
            - name: POSTGRESQL_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
          volumeMounts:
            - name: pgb-configuration
              mountPath: /bitnami/pgbouncer/conf
              readOnly: true
          image: bitnami/pgbouncer:latest
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - killall -INT pgbouncer && sleep 120
          ports:
            - containerPort: 6432
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: pgb-configuration
          secret:
            secretName: pgb-configuration
        - name: cloudsql
          emptyDir: {}

---
apiVersion: v1
kind: Service
metadata:
  name: pgproxy
  namespace: pgb-namespace
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: pgproxy
  ports:
    - port: 6432
      targetPort: 6432


Leave all the values as they are, except the instance connection string (YOUR_INSTANCE_NAME_STRING). You will find it on your instance's overview page in the GCP Cloud SQL console; it has the form project-id:region:instance-name.


  • Now apply the YAML file you created.
kubectl apply -f kube_pgb_proxy.yaml

If everything works well, you will find one workload and one service in your Kubernetes Engine console.
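You can also verify from the command line; once the internal load balancer is provisioned, its IP appears in the service's EXTERNAL-IP column:

kubectl get pods,services -n pgb-namespace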


Congrats! Now you have your PgBouncer server running with the following connection details (your internal load balancer IP will differ):

db_host: 10.148.0.34

db_port: 6432

db_username: admin

db_pass: 1234
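As a quick end-to-end test, you can connect through PgBouncer with psql from a machine that can reach the internal load balancer (for example, a VM in the same VPC); YOUR_DB is a placeholder for an existing database:

psql "host=10.148.0.34 port=6432 user=admin dbname=YOUR_DB"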

  • Now that you have PgBouncer running, let's create a horizontal pod autoscaler. Create an hpa.yaml file and paste this content there.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pgb-hpa
  namespace: pgb-namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pgproxy
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75


Explanation:

Here minReplicas is 1, maxReplicas is 10, and targetCPUUtilizationPercentage is 75. So when your pods exceed 75% CPU utilization, the autoscaler creates another pod to meet the demand for connections. Keep in mind that each replica runs its own PgBouncer with its own pool, so max_db_connections applies per pod; size it so that maxReplicas * max_db_connections stays within your Cloud SQL connection limit.

  • Now execute this command to apply.
kubectl apply -f hpa.yaml
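You can watch the autoscaler react to load with:

kubectl get hpa pgb-hpa --watch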

Conclusion

I hope this article helps you!
