Gerson Morales

Automated RDS PostgreSQL 🐘 Restoration Using GitHub Actions Workflow_Dispatch

This document provides a step-by-step guide for integrating GitHub Actions to automate the restoration of a production AWS RDS PostgreSQL database for use in development environments.

Infrastructure

  1. EKS cluster.
  2. PostgreSQL database (AWS RDS in this example).
  3. GitHub repository.
  4. AWS IAM service.
  5. AWS S3 bucket.

Scenario

We have an Amazon RDS cluster running a PostgreSQL database, referred to as gerx_db_prod. There are also several lower-tier environments, such as gerx_db_dev_1, gerx_db_dev_2, and gerx_db_dev_3, that need to be kept up to date with the latest data from the production database. The goal is to enable on-demand synchronization of these environments through a GitHub Actions workflow.

To create a backup of your RDS PostgreSQL database, run pg_dump from a container that has network access to the RDS cluster:

pg_dump -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d gerx_db_prod -f gerx24_dump_$(date +%Y%m%d).sql

This generates a SQL dump file named with the current date, for example gerx24_dump_20250515.sql. Upload that file to the S3 bucket you want the restore job to read from. You can also place a privileges.sql script in the same bucket; one of the final job steps runs it to grant your database role privileges on all tables in all schemas:

DO $$
BEGIN
   -- Grant privileges on all tables in all schemas
   EXECUTE (
     SELECT string_agg('GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA ' || quote_ident(schemaname) || ' TO postgres_owner_role;', ' ')
     FROM pg_catalog.pg_tables
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
   );
END $$;
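Note that pg_tables returns one row per table, so the aggregated string contains one GRANT per table (repeating a schema's GRANT once per table is harmless). For a database whose only non-system schemas were, say, public and client_side, the executed statement would look something like:

```sql
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO postgres_owner_role; GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA client_side TO postgres_owner_role;
```

One caveat: if the database has no user tables yet, the inner SELECT returns NULL and EXECUTE raises an error, so run this only after the restore has created the tables.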
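Putting the backup steps together, here is a minimal sketch that builds the dated dump filename and prints the pg_dump and upload commands (the hostname and bucket are this article's examples; the commands are printed rather than executed so you can review them first):

```shell
#!/bin/sh
# Build the dated dump filename used throughout this guide.
ENV=prod
DATE=$(date +%Y%m%d)
DUMP_FILE="gerx24_dump_${DATE}.sql"

# Command to run from a container with network access to the RDS cluster:
echo "pg_dump -h rds.gerx24.usva.${ENV}.gersonplace.com -p 5432 -U root -d gerx_db_prod -f ${DUMP_FILE}"

# Upload the dump and the privileges script to the bucket the restore job reads from:
echo "aws s3 cp ${DUMP_FILE} s3://pgdump_my_bucket_name/${DUMP_FILE}"
echo "aws s3 cp privileges.sql s3://pgdump_my_bucket_name/privileges.sql"
```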

AWS IAM configuration

We require the use of AWS IAM to create two roles with an established trust relationship. Additionally, we need to define two IAM policies:

A policy granting permission to retrieve a pgdump file from an S3 bucket, which will be used for database restoration.

A policy providing access to an Amazon EKS cluster.

I wrote a separate guide on setting this up, Configure IRSA using EKS to access S3 from a Pod, but to keep things simple here I will show what the configuration looks like for each role.

The role that allows pulling objects from the S3 bucket looks like this.

Role name: postgres-db-dev-restore-IRSA

Trusted entities

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::XXXXXXXXXXXXXXXX:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/SFAFJFJAFKAFKAFKAFLLSFLAFLAFLAFA"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "oidc.eks.us-east-1.amazonaws.com/id/SFAFJFJAFKAFKAFKAFLLSFLAFLAFLAFA:sub": "system:serviceaccount:<EKS-NAMESPACE>:<EKS-SERVICE_ACCOUNT>",
                    "oidc.eks.us-east-1.amazonaws.com/id/SFAFJFJAFKAFKAFKAFLLSFLAFLAFLAFA:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}

Policy attached to the role postgres-db-dev-restore-IRSA

{
    "Statement": [
        {
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::pgdump_my_bucket_name",
                "arn:aws:s3:::pgdump_my_bucket_name/*"
            ]
        }
    ],
    "Version": "2012-10-17"
}

The role that allows access to the EKS cluster from the GitHub repository (main branch only) looks like this.

Role name: postgres-db-dev-eks-restore

Trusted entities

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::xxxxxxxxxxx:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:<GITHUB-ORG>/<GITHUB-REPO>:ref:refs/heads/main"
        }
      }
    }
  ]
}

Policy attached to the role postgres-db-dev-eks-restore

{
    "Statement": [
        {
            "Action": [
                "eks:DescribeCluster",
                "eks:ListClusters",
                "eks:AccessKubernetesApi"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:eks:us-east-1:xxxxxxxx:cluster/dev-usva-gerx24-cluster"
        },
        {
            "Action": [
                "sts:AssumeRole"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ],
    "Version": "2012-10-17"
}

With the two new IAM roles created—each configured with its respective trust relationship and attached policies—you will need to retrieve their ARNs. These ARNs are required for use in two specific areas, which I will outline next.
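If you have the AWS CLI configured, the two ARNs can be read back directly (role names as created above):

```
aws iam get-role --role-name postgres-db-dev-restore-IRSA --query 'Role.Arn' --output text
aws iam get-role --role-name postgres-db-dev-eks-restore --query 'Role.Arn' --output text
```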

The role ARN for postgres-db-dev-restore-IRSA (arn:aws:iam::xxxxxx:role/postgres-db-dev-restore-IRSA) should be associated with the Kubernetes service account responsible for retrieving the pg_dump file from S3. This file will be used to perform the database restoration.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: restore-db-sa
  namespace: restore-db
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxx:role/postgres-db-dev-restore-IRSA
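As an aside, if you use eksctl, the IAM role and the annotated service account can be created in one step instead of hand-writing the manifest. A sketch, assuming the S3 permissions exist as a managed policy (its ARN below is a placeholder):

```
eksctl create iamserviceaccount \
  --cluster dev-usva-gerx24-cluster \
  --namespace restore-db \
  --name restore-db-sa \
  --role-name postgres-db-dev-restore-IRSA \
  --attach-policy-arn arn:aws:iam::xxxxxx:policy/<S3-PGDUMP-POLICY> \
  --approve
```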

The role ARN for postgres-db-dev-eks-restore (arn:aws:iam::xxxxxx:role/postgres-db-dev-eks-restore) must be added to the EKS cluster by updating the aws-auth ConfigMap in the kube-system namespace. The modification should be as follows:

    - "groups":
      - "github-ci-group"
      "rolearn": "arn:aws:iam::xxxxxx:role/postgres-db-dev-eks-restore"
      "username": "github:db-restore"
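The ConfigMap can be opened for editing with kubectl; the entry above goes under its mapRoles key:

```
kubectl edit configmap aws-auth -n kube-system
```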

Finally, we need to configure RBAC to grant the role access exclusively to the namespace where the GitHub-triggered job responsible for database restoration will be deployed.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: restore-db
  name: postgres-db-restore-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: postgres-db-restore-rolebinding
  namespace: restore-db
subjects:
  - kind: User
    name: github:db-restore
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: postgres-db-restore-role
  apiGroup: rbac.authorization.k8s.io

At this stage, we should be ready to configure the GitHub Actions workflow to execute the database restoration process.

Github workflow configuration

Let’s begin by configuring a GitHub Actions workflow in the repository from which the process will be triggered. The workflow should allow selecting the lower-tier environment (e.g. dev) to be refreshed with the latest data from the production database.

name: db restore dev [gerx_db_prod]


on:
  workflow_dispatch:
    inputs:
      database:
        description: "gerx_db_dev_x"
        required: true
        type: string
      environment:
        description: "environment"
        default: int
        type: string
      date:
        description: "Backup date in yyyymmdd format, e.g. 20250512"
        required: true
        type: string

jobs:
  db-restore-int:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - name: 🔑 Get AWS Creds
        id: aws-creds
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-1
          role-to-assume: arn:aws:iam::xxxxxx:role/postgres-db-dev-eks-restore

      - name: Update kubeconfig for EKS
        run: |
          aws eks update-kubeconfig --name ${{ inputs.environment }}-usva-gerx24-cluster --region us-east-1

      - name: Deploy Job
        run: |
          export DB_NAME=${{ inputs.database }}
          export ENV=${{ inputs.environment }}
          export DATE=${{ inputs.date }}
          export PGPASSWORD=${{ secrets.PGPASSWORD }}

          cat <<EOF | envsubst | kubectl apply -f -
          apiVersion: batch/v1
          kind: Job
          metadata:
            name: db-restore-job-$DB_NAME
            namespace: restore-db
            labels:
              app: db-restore-job
          spec:
            ttlSecondsAfterFinished: 300
            template:
              metadata:
                name: db-restore-job
                labels:
                  app: db-restore-job
              spec:
                initContainers:
                - name: copying-pgdump
                  image: amazon/aws-cli
                  command:
                    - /bin/sh
                    - -c
                    - |
                      echo "Copying files from pgdump_my_bucket_name"
                      aws s3 cp s3://pgdump_my_bucket_name/ /pg-dump --recursive
                  volumeMounts:
                    - name: pg-dump
                      mountPath: /pg-dump
                containers:
                - name: db-restore
                  image: gerx24/centos-tools:3.0.0
                  env:
                    - name: PGPASSWORD
                      value: "$PGPASSWORD"
                    - name: DB_NAME
                      value: "$DB_NAME"
                    - name: ENV
                      value: "$ENV"
                    - name: DATE
                      value: "$DATE"
                  command: ["/bin/bash", "-c"]
                  args:
                    - |
                      echo "Dropping old database..."
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "ALTER DATABASE $DB_NAME OWNER TO root;"

                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '$DB_NAME' AND pid <> pg_backend_pid(); DROP DATABASE $DB_NAME;"

                      echo "Creating new database..."
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "CREATE DATABASE $DB_NAME;"

                      echo "Changing ownership..."
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "ALTER DATABASE $DB_NAME OWNER TO postgres_owner_role;"

                      echo "Restoring database..."
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d $DB_NAME -f /pg-dump/gerx24_dump_$DATE.sql

                      echo "Altering schema ownership..."
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d $DB_NAME -c "ALTER SCHEMA public OWNER TO postgres_owner_role; ALTER SCHEMA client_side OWNER TO postgres_owner_role; ALTER SCHEMA settings OWNER TO postgres_owner_role;"

                      echo "Running script"
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d $DB_NAME -f /pg-dump/privileges.sql
                  volumeMounts:
                    - name: pg-dump
                      mountPath: /pg-dump
                volumes:
                - name: pg-dump
                  emptyDir: {}
                restartPolicy: OnFailure
                serviceAccountName: restore-db-sa
          EOF

      - name: Wait for Job to Succeed [5 minutes check]
        run: |
          echo "Checking status of job db-restore-job-${{ inputs.database }}"
          for i in {1..30}; do
            STATUS=$(kubectl get job db-restore-job-${{ inputs.database }} -n restore-db -o jsonpath="{.status.conditions[?(@.type=='Complete')].status}")
            if [[ "$STATUS" == "True" ]]; then
              echo "✅ Job db-restore-job-${{ inputs.database }} completed successfully."
              exit 0
            fi

            FAILED=$(kubectl get job db-restore-job-${{ inputs.database }} -n restore-db -o jsonpath="{.status.failed}")
            if [[ "$FAILED" -ge 1 ]]; then
              echo "❌ Job db-restore-job-${{ inputs.database }} failed."
              exit 1
            fi

            echo "⏳ Job db-restore-job-${{ inputs.database }} not complete yet... waiting 10 seconds"
            sleep 10
          done

          echo "⏰ Timed out waiting for job to complete."
          exit 1


      - name: Delete Job
        run: |
          kubectl delete job db-restore-job-${{ inputs.database }} -n restore-db
          echo "Job db-restore-job-${{ inputs.database }} completed"
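As a design note, the 5-minute polling loop could also be written as a single kubectl wait call. The explicit loop is used here because kubectl wait --for=condition=complete does not exit early when the Job fails, whereas the loop also checks .status.failed:

```
kubectl wait --for=condition=complete job/db-restore-job-gerx_db_dev_1 -n restore-db --timeout=300s
```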

In short, the workflow above does the following:

A. The workflow defines inputs under the workflow_dispatch trigger, with sensible defaults, so the target environment can be selected at runtime, e.g.:

 export DB_NAME=gerx_db_dev_1
 export ENV=dev
 export DATE=20250515 (assuming you are using a dump created today)
 export PGPASSWORD=${{ secrets.PGPASSWORD }} (a repository secret holding the password used to connect to the database)

B. A new Kubernetes Job named db-restore-job-gerx_db_dev_1 will be created in the restore-db namespace. This Job will connect to the target database at rds.gerx24.usva.dev.gersonplace.com, execute the necessary psql commands, and restore the database using the latest backup.

The backup file will be retrieved from the S3 bucket pgdump_my_bucket_name by an initContainer, which downloads it to a shared volume. The main container then uses the mounted file during the restoration step.

C. Job status check: a step monitors the Kubernetes Job with kubectl, ensuring it completes successfully within a 5-minute window. Once the Job is confirmed to have succeeded, a final step removes it from the cluster with kubectl delete job.
