<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gerson Morales</title>
    <description>The latest articles on DEV Community by Gerson Morales (@gerson_morales_3e89188d50).</description>
    <link>https://dev.to/gerson_morales_3e89188d50</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2582641%2F904ed8f1-687b-4546-8638-546f86f08376.jpg</url>
      <title>DEV Community: Gerson Morales</title>
      <link>https://dev.to/gerson_morales_3e89188d50</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gerson_morales_3e89188d50"/>
    <language>en</language>
    <item>
      <title>Implementing AWS S3 Cross-Account Replication.</title>
      <dc:creator>Gerson Morales</dc:creator>
      <pubDate>Tue, 20 May 2025 15:22:43 +0000</pubDate>
      <link>https://dev.to/gerson_morales_3e89188d50/implementing-aws-s3-cross-account-replication-1eka</link>
      <guid>https://dev.to/gerson_morales_3e89188d50/implementing-aws-s3-cross-account-replication-1eka</guid>
      <description>&lt;p&gt;The purpose of this post is to offer a detailed, step-by-step guide to setting up an example of S3 Cross-Account Replication between buckets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;1- Two AWS accounts (Account A and Account B).&lt;br&gt;
2- Two Amazon S3 buckets (one in each account).&lt;br&gt;
3- Appropriate IAM policies for access control.&lt;br&gt;
4- IAM roles with necessary trust relationships and permissions.&lt;br&gt;
5- Terraform for infrastructure as code deployment.&lt;/p&gt;
&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;We have two AWS accounts: Account A (111111111111) and Account B (222222222222). Our objective is to replicate objects from the source bucket, &lt;strong&gt;gerx24-source-bucket&lt;/strong&gt; (in Account A), to the destination bucket, &lt;strong&gt;gerx24-destination-bucket&lt;/strong&gt; (in Account B), using Amazon S3 Cross-Account Replication.&lt;/p&gt;

&lt;p&gt;Additionally, we want to apply a lifecycle policy to both buckets to automatically expire replicated objects after 10 days.&lt;/p&gt;
&lt;h2&gt;
  
  
  Folder Structure
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gag86g2oqdqayceif9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gag86g2oqdqayceif9w.png" alt="Image description" width="317" height="359"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Terraform files [Module]
&lt;/h2&gt;

&lt;p&gt;Reusable Terraform module for provisioning Amazon S3 buckets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "4.9.0"
  bucket  = var.bucket_name

  control_object_ownership = true
  object_ownership         = "BucketOwnerPreferred"

  acl = "private"

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
  versioning = {
    status = "Enabled"
  }

  lifecycle_rule = var.lifecycle_rules

  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "AES256"
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "s3_bucket_id" {
  description = "Bucket ID"
  value       = module.s3_bucket.s3_bucket_id
}
output "s3_bucket_arn" {
  description = "Bucket ARN"
  value       = module.s3_bucket.s3_bucket_arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "environment" {
  default     = null
  description = "Environment"
  type        = string
}

variable "bucket_name" {
  default     = null
  description = "Bucket name"
  type        = string
}

variable "lifecycle_rules" {
  default     = {}
  description = "Lifecycle rules object"
  type        = any
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Terraform files [Source Bucket]
&lt;/h2&gt;

&lt;p&gt;S3 Cross-Account Replication source bucket.&lt;/p&gt;

&lt;p&gt;1- Creates the bucket [gerx24-source-bucket].&lt;br&gt;
2- Creates the role replication_role [source-replication-role].&lt;br&gt;
3- Creates the replication_policy [source-replication-policy].&lt;br&gt;
4- Creates replication_bucket_config and binds it to the [gerx24-source-bucket] bucket.&lt;br&gt;
5- Adds a lifecycle rule to [gerx24-source-bucket] that expires objects after 10 days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "source_replication_bucket" {
  source = "../module/bucket"
  environment        = "dev"
  bucket_name        = "gerx24-source-bucket"
  lifecycle_rules = [
    {
      id      = "Delete files older than 10 days"
      enabled = true
      expiration = {
        days = 10
      }
    }
  ]
}

locals {
  source_replication_bucket_arn  = module.source_replication_bucket.s3_bucket_arn
  bucket_name                    = module.source_replication_bucket.s3_bucket_id
  destination_bucket_arn         = "arn:aws:s3:::gerx24-destination-bucket"
  source_replication_policy_name = "source-replication-policy"
  source_replication_role_name   = "source-replication-role"


  common_tags = {
    environment = "dev"
    maintainer  = "gerx24"
  }
}

resource "aws_iam_role" "replication_role" {
  name = local.source_replication_role_name

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = "sts:AssumeRole",
        Principal = {
          Service = "s3.amazonaws.com"
        },
        Effect = "Allow",
        Sid = ""
      }
    ]
  })

  tags = local.common_tags
}

resource "aws_iam_policy" "replication_policy" {
  name = local.source_replication_policy_name

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = [
          "s3:GetReplicationConfiguration",
          "s3:ListBucket"
        ],
        Resource = [
          "${local.source_replication_bucket_arn}/*",
          "${local.source_replication_bucket_arn}",
        ]
      },
      {
        Effect = "Allow",
        Action = [
          "s3:GetObjectVersionForReplication",
          "s3:GetObjectVersionAcl",
          "s3:GetObjectVersionTagging"
        ],
        Resource = [
          "${local.source_replication_bucket_arn}/*",
          "${local.source_replication_bucket_arn}"
        ]
      },
      {
        Effect = "Allow",
        Action = [
          "s3:ReplicateObject",
          "s3:ReplicateDelete",
          "s3:ReplicateTags",
          "s3:ObjectOwnerOverrideToBucketOwner"
        ],
        Resource = [
          "${local.destination_bucket_arn}/*",
          "${local.destination_bucket_arn}"
        ]
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "replication" {
  role       = aws_iam_role.replication_role.name
  policy_arn = aws_iam_policy.replication_policy.arn
}

resource "aws_s3_bucket_replication_configuration" "replication_bucket_config" {
  depends_on = [module.source_replication_bucket]

  role   = aws_iam_role.replication_role.arn
  bucket = local.bucket_name

  rule {
    id = "cross-replication"
    delete_marker_replication {
      status = "Disabled"
    }
    source_selection_criteria {
      replica_modifications {
        status = "Enabled"
      }

    }
    filter {
      prefix = ""
    }

    status = "Enabled"

    destination {
      bucket        = local.destination_bucket_arn
      storage_class = "STANDARD"
      access_control_translation {
        owner = "Destination"
      }
      account = "222222222222"

      metrics {
        status = "Enabled"

        event_threshold {
          minutes = 15
        }
      }
      replication_time {
        status = "Enabled"

        time {
          minutes = 15
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;provider.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
  default_tags {
    tags = {
      terraform   = "true"
      application = "replication"
      environment = "dev"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Terraform files [Destination Bucket]
&lt;/h2&gt;

&lt;p&gt;S3 Cross-Account Replication destination bucket.&lt;/p&gt;

&lt;p&gt;1- Creates the bucket [gerx24-destination-bucket].&lt;br&gt;
2- Creates the destination_bucket_replication_policy and attaches it to the [gerx24-destination-bucket] bucket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "destination_replication_bucket" {
  source = "../module/bucket"

  bucket_name = "gerx24-destination-bucket"
  environment = "dev"
  lifecycle_rules = [
    {
      id      = "Delete files older than 10 days"
      enabled = true
      expiration = {
        days = 10
      }
    }
  ]
}

locals {
  destination_bucket_name = module.destination_replication_bucket.s3_bucket_id
  source_replication_role = "arn:aws:iam::111111111111:role/source-replication-role"
}

resource "aws_s3_bucket_policy" "destination_bucket_replication_policy" {
  bucket = local.destination_bucket_name

  policy = jsonencode({
    Version = "2012-10-17"
    Id      = ""
    Statement = [
      {
        Sid    = "AllowReplicationfromSourceBucket"
        Effect = "Allow"
        Principal = {
          AWS = local.source_replication_role
        }
        Action = [
          "s3:ReplicateObject",
          "s3:ReplicateDelete",
          "s3:ReplicateTags",
          "s3:ObjectOwnerOverrideToBucketOwner",
          "s3:List*",
          "s3:GetBucketVersioning",
          "s3:PutBucketVersioning"
        ]
        Resource = [
          "arn:aws:s3:::${local.destination_bucket_name}/*",
          "arn:aws:s3:::${local.destination_bucket_name}"
        ]
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;provider.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
  default_tags {
    tags = {
      terraform   = "true"
      application = "replication"
      environment = "dev"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://repost.aws/knowledge-center/s3-troubleshoot-replication" rel="noopener noreferrer"&gt;&lt;strong&gt;Troubleshooting doc&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Automated RDS PostgreSQL Restoration Using GitHub Action🐘</title>
      <dc:creator>Gerson Morales</dc:creator>
      <pubDate>Fri, 16 May 2025 17:20:09 +0000</pubDate>
      <link>https://dev.to/gerson_morales_3e89188d50/automated-rds-postgresql-restoration-using-github-action-273c</link>
      <guid>https://dev.to/gerson_morales_3e89188d50/automated-rds-postgresql-restoration-using-github-action-273c</guid>
      <description>&lt;p&gt;This document provides a step-by-step guide for integrating GitHub Actions to automate the restoration of a production AWS RDS PostgreSQL database for use in development environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;EKS cluster.&lt;/li&gt;
&lt;li&gt;RDS database running in AWS (just for this scenario).&lt;/li&gt;
&lt;li&gt;GitHub repository.&lt;/li&gt;
&lt;li&gt;AWS IAM service.&lt;/li&gt;
&lt;li&gt;AWS S3 bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;We have an Amazon RDS cluster running a PostgreSQL database, referred to as &lt;strong&gt;gerx_db_prod&lt;/strong&gt;. Additionally, there are several lower-tier environments, such as &lt;strong&gt;gerx_db_dev_1&lt;/strong&gt;, &lt;strong&gt;gerx_db_dev_2&lt;/strong&gt;, and &lt;strong&gt;gerx_db_dev_3&lt;/strong&gt;, that need to be kept up to date with the latest data from the production database. The goal is to enable on-demand synchronization of these environments by leveraging a GitHub Actions workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS IAM configuration
&lt;/h2&gt;

&lt;p&gt;We require the use of AWS IAM to create two roles with an established trust relationship. Additionally, we need to define two IAM policies:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First Policy:&lt;/strong&gt; Policy granting permission to retrieve a pgdump file from an S3 bucket, which will be used for database restoration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second Policy:&lt;/strong&gt; Policy providing access to an Amazon EKS cluster from a GitHub Actions workflow.&lt;/p&gt;

&lt;p&gt;I created this document on how to &lt;strong&gt;configure IRSA&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/gerson_morales_3e89188d50/configure-irsa-using-eks-to-access-s3-from-a-pod-513a"&gt;https://dev.to/gerson_morales_3e89188d50/configure-irsa-using-eks-to-access-s3-from-a-pod-513a&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The good news is that I will make this easy for you by showing what the code looks like for each role.&lt;/p&gt;

&lt;p&gt;The role that allows pulling objects from the S3 bucket looks like this:&lt;/p&gt;

&lt;p&gt;Role name: &lt;strong&gt;postgres-db-dev-restore-IRSA&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Trusted entities
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::XXXXXXXXXXXXXXXX:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/SFAFJFJAFKAFKAFKAFLLSFLAFLAFLAFA"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "oidc.eks.us-east-1.amazonaws.com/id/SFAFJFJAFKAFKAFKAFLLSFLAFLAFLAFA:sub": "system:serviceaccount:&amp;lt;EKS-NAMESPACE&amp;gt;:&amp;lt;EKS-SERVICE_ACCOUNT&amp;gt;",
                    "oidc.eks.us-east-1.amazonaws.com/id/SFAFJFJAFKAFKAFKAFLLSFLAFLAFLAFA:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Policy attached to the role &lt;strong&gt;postgres-db-dev-restore-IRSA&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Statement": [
        {
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::pgdump_my_bucket_name",
                "arn:aws:s3:::pgdump_my_bucket_name/*"
            ]
        }
    ],
    "Version": "2012-10-17"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The role that allows access to the EKS cluster from the GitHub repo using the &lt;strong&gt;main branch&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Role name: &lt;strong&gt;postgres-db-dev-eks-refresh&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Trusted entities
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::xxxxxxxxxxx:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:myrepo:ref:refs/heads/main"
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Policy attached to the role &lt;strong&gt;postgres-db-dev-eks-refresh&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Statement": [
        {
            "Action": [
                "eks:DescribeCluster",
                "eks:ListClusters",
                "eks:AccessKubernetesApi"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:eks:us-east-1:xxxxxxxx:cluster/dev-usva-gerx24-cluster"
        },
        {
            "Action": [
                "sts:AssumeRole"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ],
    "Version": "2012-10-17"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the two new IAM roles created, each configured with its respective trust relationship and attached policies, you will need to retrieve their ARNs. These ARNs are required in two specific places, which I will outline next.&lt;/p&gt;

&lt;p&gt;The role ARN for &lt;strong&gt;postgres-db-dev-restore-IRSA&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;(arn:aws:iam::xxxxxx:role/postgres-db-dev-restore-IRSA) should be associated with the Kubernetes service account responsible for retrieving the pgdump file (postgres_dump_$DATE.sql) from S3. This file will be used to perform the database restoration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: restore-db-sa
  namespace: restore-db
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxx:role/postgres-db-dev-restore-IRSA
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The role ARN for &lt;strong&gt;postgres-db-dev-eks-refresh&lt;/strong&gt; (arn:aws:iam::xxxxxx:role/postgres-db-dev-eks-refresh) must be added to the EKS cluster by updating the &lt;strong&gt;aws-auth&lt;/strong&gt; &lt;strong&gt;ConfigMap&lt;/strong&gt; in the &lt;strong&gt;kube-system&lt;/strong&gt; namespace. The modification should be as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - "groups":
      - "github-ci-group"
      "rolearn": "arn:aws:iam::xxxxxx:role/postgres-db-dev-eks-refresh"
      "username": "github:db-restore"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
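The aws-auth mapping above can be applied by hand-editing the ConfigMap (`kubectl -n kube-system edit configmap aws-auth`) or, as a sketch, with eksctl, reusing the account placeholder, cluster name, username, and group from this example:

```shell
# Hedged alternative to hand-editing aws-auth: let eksctl create
# the identity mapping (same values as the snippet above).
eksctl create iamidentitymapping \
  --cluster dev-usva-gerx24-cluster \
  --region us-east-1 \
  --arn arn:aws:iam::xxxxxx:role/postgres-db-dev-eks-refresh \
  --username github:db-restore \
  --group github-ci-group
```

eksctl appends the entry to mapRoles for you, which avoids YAML indentation mistakes in a ConfigMap that can lock you out of the cluster if corrupted.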



&lt;p&gt;Finally, we need to configure &lt;strong&gt;RBAC&lt;/strong&gt; to grant the role access &lt;strong&gt;exclusively&lt;/strong&gt; to the namespace where the GitHub-triggered job responsible for database restoration will be deployed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: restore-db
  name: postgres-db-restore-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "delete"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: postgres-db-restore-rolebinding
  namespace: restore-db
subjects:
  - kind: User
    name: github:db-restore
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: postgres-db-restore-role
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure a ConfigMap with the SQL script that grants privileges to &lt;strong&gt;postgres_owner_role&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-db-restore-dev
  namespace: restore-db
data:
  privileges.sql: |
    DO $$
    BEGIN
      -- Grant privileges on all tables in all schemas
      EXECUTE (
        SELECT string_agg(
          'GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA ' || quote_ident(schemaname) || ' TO postgres_owner_role;',
          ' '
        )
        FROM pg_catalog.pg_tables
        WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
      );
    END
    $$;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure a Kubernetes Secret, or in this case an ExternalSecret, which contains the database password that will be used in the &lt;strong&gt;PGPASSWORD&lt;/strong&gt; variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-refresh-dev
  namespace: restore-db
  annotations:
    argocd.argoproj.io/sync-wave: "-12"
spec:
  dataFrom:
  - extract:
      key: db-refresh-dev/db-refresh-dev-secret
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secrets-manager
  target:
    name: db-refresh-dev
    template:
      engineVersion: v2
      type: Opaque
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this stage, we should be ready to configure the GitHub Actions workflow to execute the database restoration process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Github workflow configuration
&lt;/h2&gt;

&lt;p&gt;Let’s begin by configuring a GitHub Actions workflow in the repository from which the process will be triggered. This workflow should allow the selection of a lower-tier environment (e.g. dev) that needs to be refreshed with the latest data from the production database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: db restore dev [gerx_db_prod]


on:
  workflow_dispatch:
    inputs:
      database:
        description: "gerx_db_dev_x"
        required: true
        type: string
      environment:
        description: "environment"
        default: int
        type: string
      date:
        description: "Backup date format e.g 20250512 yyyymmdd"
        required: true
        type: string

  ## This can be also used as workflow_call ##
  workflow_call:
    inputs:
      database:
        description: "gerx_db_dev_x"
        required: true
        type: string
      environment:
        description: "environment"
        default: dev
        type: string
      date:
        description: "Backup date format e.g. 20250515 yyyymmdd"
        required: true
        type: string

jobs:
  db-restore-int:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - name: 🔑 Get AWS Creds
        id: aws-creds
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-1
          role-to-assume: arn:aws:iam::xxxxxx:role/postgres-db-dev-eks-refresh

      - name: Update kubeconfig for EKS
        run: |
          aws eks update-kubeconfig --name ${{ inputs.environment }}-usva-gerx24-cluster --region us-east-1

      - name: Deploy Job
        run: |
          export DB_NAME=${{ inputs.database }}
          export ENV=${{ inputs.environment }}
          export DATE=${{ inputs.date }}
          export PGPASSWORD=${{ secrets.PGPASSWORD }}

          cat &amp;lt;&amp;lt;EOF | envsubst | kubectl apply -f -
          apiVersion: batch/v1
          kind: Job
          metadata:
            name: db-restore-job-$DB_NAME
            namespace: restore-db
            labels:
              app: db-restore-job
          spec:
            ttlSecondsAfterFinished: 300
            template:
              metadata:
                name: db-restore-job
                labels:
                  app: db-restore-job
              spec:
                initContainers:
                - name: copying-pgdump
                  image: amazon/aws-cli
                  command:
                    - /bin/sh
                    - -c
                    - |
                      echo "Copying files from my-bucket"
                      aws s3 cp s3://pgdump_my_bucket_name/ /pg-dump --recursive
                  volumeMounts:
                    - name: pg-dump
                      mountPath: /pg-dump
                containers:
                - name: db-restore
                  image: gerx24/centos-tools:3.0.0
                  env:
                    - name: PGPASSWORD
                      valueFrom:
                        secretKeyRef:
                          name: db-refresh-dev
                          key: PGPASSWORD
                    - name: DB_NAME
                      value: "$DB_NAME"
                    - name: ENV
                      value: "$ENV"
                    - name: DATE
                      value: "$DATE"
                  command: ["/bin/bash", "-c"]
                  args:
                    - |
                      echo "Dropping old database..."
                      psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "ALTER DATABASE $DB_NAME OWNER TO root;"

                      psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '$DB_NAME' AND pid &amp;lt;&amp;gt; pg_backend_pid(); DROP DATABASE $DB_NAME;"

                      echo "Creating new database..."
                      psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "CREATE DATABASE $DB_NAME;"

                      echo "Changing ownership..."
                      psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "ALTER DATABASE $DB_NAME OWNER TO postgres_owner_role;"
                      psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "GRANT postgres_owner_role TO postgres_owner_green;"
                      psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "ALTER USER postgres_owner_green SET ROLE postgres_owner_role;"
                      psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "GRANT ALL PRIVILEGES ON DATABASE $DB_NAME TO postgres_owner_role;"
                      echo "Restoring database..."
                      psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d $DB_NAME -f /pg-dump/postgres_dump_$DATE.sql

                      echo "Altering schema ownership..."
                      psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d $DB_NAME -c "ALTER SCHEMA public OWNER TO postgres_owner_role; ALTER SCHEMA client_side OWNER TO postgres_owner_role; ALTER SCHEMA settings OWNER TO postgres_owner_role;"

                      echo "Running script"
                      psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d $DB_NAME -f /script/privileges.sql
                  volumeMounts:
                    - name: pg-dump
                      mountPath: /pg-dump
                    - name: privileges-script
                      mountPath: /script
                volumes:
                  - name: pg-dump
                    emptyDir: {}
                  - name: privileges-script
                    configMap:
                      name: postgres-db-restore-dev
                restartPolicy: OnFailure
                serviceAccountName: restore-db-sa
          EOF

      - name: Wait for Job to Succeed [5 minutes check]
        run: |
          echo "Checking status of job db-restore-job-${{ inputs.database }}"
          for i in {1..30}; do
            STATUS=$(kubectl get job db-restore-job-${{ inputs.database }} -n restore-db -o jsonpath="{.status.conditions[?(@.type=='Complete')].status}")
            if [[ "$STATUS" == "True" ]]; then
              echo "✅ Job db-restore-job-${{ inputs.database }} completed successfully."
              exit 0
            fi

            FAILED=$(kubectl get job db-restore-job-${{ inputs.database }} -n restore-db -o jsonpath="{.status.failed}")
            if [[ "$FAILED" -ge 1 ]]; then
              echo "❌ Job db-restore-job-${{ inputs.database }} failed."
              exit 1
            fi

            echo "⏳ Job db-restore-job-${{ inputs.database }} not complete yet... waiting 10 seconds"
            sleep 10
          done

          echo "⏰ Timed out waiting for job to complete."
          exit 1


      - name: Delete Job
        run: |
          kubectl delete job db-restore-job-${{ inputs.database }} -n restore-db
          echo "Job db-restore-job-${{ inputs.database }} completed"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The job above would do the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A.&lt;/strong&gt; You are expected to provide the input parameters for the GitHub Actions run: the database name, the environment, and the date in yyyymmdd format, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;          export DB_NAME=gerx_db_dev_1
          export ENV=dev
          export DATE=20250515 (let's assume you are using a dump created today)
          export PGPASSWORD=${{ secrets.PGPASSWORD }} (a repository secret containing the password used to connect to the database)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;secrets.PGPASSWORD&lt;/strong&gt; -&amp;gt; The database password, saved in the repository as a GitHub Actions secret.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjokl0ynhilm5y9eplp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjokl0ynhilm5y9eplp3.png" alt="Image description" width="383" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;B.&lt;/strong&gt; The GitHub Actions run will initiate a Kubernetes job that begins with an initContainer. This container downloads the latest &lt;strong&gt;postgres_dump_$DATE.sql&lt;/strong&gt; backup from &lt;strong&gt;s3://pgdump_my_bucket_name/&lt;/strong&gt; (at this point we assume it already exists in your bucket) and places it in a shared volume. The main container then uses this shared volume to execute a series of psql commands that do the following:&lt;/p&gt;

&lt;p&gt;1- Drop the existing database.&lt;br&gt;
2- Recreate the database.&lt;br&gt;
3- Change ownership.&lt;br&gt;
4- Restore the database (gerx_db_dev_1) from (postgres_dump_$DATE.sql).&lt;br&gt;
5- Add schema permissions.&lt;br&gt;
6- Finally, run the script to grant privileges on all tables in all schemas.&lt;/p&gt;
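The workflow assumes &lt;strong&gt;postgres_dump_$DATE.sql&lt;/strong&gt; already exists in the bucket. As a hedged sketch (the prod host name is inferred from the dev host pattern in the job, and the password variable is a placeholder), the production dump could be produced and uploaded like this:

```shell
# Hypothetical dump/upload step, run from a host that can reach the
# production database and has s3:PutObject on the bucket.
DATE=$(date +%Y%m%d)
export PGPASSWORD="$PROD_DB_PASSWORD"  # supplied out of band, e.g. from a secrets manager

pg_dump -h rds.gerx24.usva.prod.gersonplace.com -p 5432 -U root \
  gerx_db_prod > postgres_dump_${DATE}.sql

aws s3 cp postgres_dump_${DATE}.sql s3://pgdump_my_bucket_name/
```

Scheduling this as a nightly cron or a separate workflow keeps a fresh dump available for the on-demand restore.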

&lt;p&gt;&lt;strong&gt;C.&lt;/strong&gt; Finally, the GitHub Action performs a verification check using kubectl within a &lt;strong&gt;5-minute&lt;/strong&gt; window to confirm the status of the Kubernetes job. Once the job is confirmed to have succeeded, the action cleans up by deleting the job from the Kubernetes cluster.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>github</category>
      <category>devops</category>
    </item>
    <item>
      <title>Automated RDS PostgreSQL 🐘Restoration Using GitHub Actions Workflow_Dispatch</title>
      <dc:creator>Gerson Morales</dc:creator>
      <pubDate>Fri, 16 May 2025 17:04:24 +0000</pubDate>
      <link>https://dev.to/gerson_morales_3e89188d50/automated-rds-postgresql-restoration-using-github-actions-workflowdispatch-1lfp</link>
      <guid>https://dev.to/gerson_morales_3e89188d50/automated-rds-postgresql-restoration-using-github-actions-workflowdispatch-1lfp</guid>
      <description>&lt;p&gt;This document provides a step-by-step guide for integrating GitHub Actions to automate the restoration of a production AWS RDS PostgreSQL database for use in development environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;EKS cluster.&lt;/li&gt;
&lt;li&gt;Postgres database (I am using AWS RDS in this example.)&lt;/li&gt;
&lt;li&gt;Github Repository.&lt;/li&gt;
&lt;li&gt;AWS IAM service.&lt;/li&gt;
&lt;li&gt;AWS S3 bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;We have an Amazon RDS cluster running a PostgreSQL database, referred to as gerx_db_prod. Additionally, there are several lower-tier environments, such as &lt;strong&gt;gerx_db_dev_1&lt;/strong&gt;, &lt;strong&gt;gerx_db_dev_2&lt;/strong&gt;, and &lt;strong&gt;gerx_db_dev_3&lt;/strong&gt;, that need to be kept up to date with the latest data from the production database. The goal is to enable on-demand synchronization of these environments by leveraging a GitHub Actions workflow.&lt;/p&gt;

&lt;p&gt;To create a backup of your RDS PostgreSQL database using pg_dump, run a command similar to the following from a container that has network access to the RDS cluster: &lt;strong&gt;pg_dump -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d gerx_db_prod -f postgres_dump_$(date +%Y%m%d).sql&lt;/strong&gt;&lt;br&gt;
This command generates a SQL dump file named with the current date, for example &lt;strong&gt;postgres_dump_20250515.sql&lt;/strong&gt;, which you then upload to the bucket the job will restore the database from (the filename must match what the job downloads). Additionally, you can include in your bucket a SQL script, &lt;strong&gt;privileges.sql&lt;/strong&gt;, that grants your database role permissions on all tables in all schemas; the job runs it in one of the last steps below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DO $$
BEGIN
   -- Grant privileges on all tables in all schemas
   EXECUTE (
     SELECT string_agg('GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA ' || quote_ident(schemaname) || ' TO postgres_owner_role;', ' ')
     FROM pg_catalog.pg_tables
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
   );
END $$;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
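&lt;p&gt;Both the dump and the privileges.sql script need to land in the bucket before the job runs. A hedged sketch of those steps (host, user, and bucket names are the placeholders used throughout this post; the pg_dump/aws commands are built as strings so the sketch runs anywhere):&lt;/p&gt;

```shell
# Sketch: create the dated dump and upload it with privileges.sql.
# Host, user, and bucket names are this post's placeholders.
DATE=$(date +%Y%m%d)
DUMP_FILE="postgres_dump_${DATE}.sql"

# Commands are echoed rather than executed; run them from an environment
# with network access to the RDS cluster and credentials for the bucket.
DUMP_CMD="pg_dump -h rds.gerx24.usva.dev.gersonplace.com -p 5432 -U root -d gerx_db_prod -f ${DUMP_FILE}"
UPLOAD_DUMP="aws s3 cp ${DUMP_FILE} s3://pgdump_my_bucket_name/${DUMP_FILE}"
UPLOAD_SQL="aws s3 cp privileges.sql s3://pgdump_my_bucket_name/privileges.sql"

echo "${DUMP_CMD}"
echo "${UPLOAD_DUMP}"
echo "${UPLOAD_SQL}"
```

The date suffix is what the workflow's `date` input refers to later, so keeping the yyyymmdd format is important.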



&lt;h2&gt;
  
  
  AWS IAM configuration
&lt;/h2&gt;

&lt;p&gt;We require the use of AWS IAM to create two roles with an established trust relationship. Additionally, we need to define two IAM policies:&lt;/p&gt;

&lt;p&gt;A policy granting permission to retrieve a pgdump file from an S3 bucket, which will be used for database restoration.&lt;/p&gt;

&lt;p&gt;A policy providing access to an Amazon EKS cluster.&lt;/p&gt;

&lt;p&gt;I previously wrote a document on configuring IRSA, &lt;a href="https://dev.to/gerson_morales_3e89188d50/configure-irsa-using-eks-to-access-s3-from-a-pod-513a"&gt;Configure IRSA using EKS to access S3 from a POD.&lt;/a&gt; To make this easier, I will show what the code looks like for each role.&lt;/p&gt;

&lt;p&gt;The role that allows pulling objects from the S3 bucket looks like this.&lt;/p&gt;

&lt;p&gt;Role name: &lt;strong&gt;postgres-db-dev-restore-IRSA&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Trusted entities
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::XXXXXXXXXXXXXXXX:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/SFAFJFJAFKAFKAFKAFLLSFLAFLAFLAFA"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "oidc.eks.us-east-1.amazonaws.com/id/SFAFJFJAFKAFKAFKAFLLSFLAFLAFLAFA:sub": "system:serviceaccount:&amp;lt;EKS-NAMESPACE&amp;gt;:&amp;lt;EKS-SERVICE_ACCOUNT&amp;gt;",
                    "oidc.eks.us-east-1.amazonaws.com/id/SFAFJFJAFKAFKAFKAFLLSFLAFLAFLAFA:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
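&lt;p&gt;The &lt;strong&gt;sub&lt;/strong&gt; condition key matches the identity an EKS pod presents through the OIDC provider, which always has the form system:serviceaccount:NAMESPACE:SERVICE_ACCOUNT. A quick sketch using the namespace and service account names this post uses later:&lt;/p&gt;

```shell
# The OIDC subject claim for a pod identity is
# system:serviceaccount:NAMESPACE:SERVICE_ACCOUNT.
# Names below match the manifests used later in this post.
NAMESPACE=restore-db
SERVICE_ACCOUNT=restore-db-sa
SUB="system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT}"
echo "$SUB"
```

Substituting these values into the trust policy's StringLike condition ties the role to exactly that service account.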



&lt;p&gt;Policy attached to the role &lt;strong&gt;postgres-db-dev-restore-IRSA&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Statement": [
        {
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::pgdump_my_bucket_name",
                "arn:aws:s3:::pgdump_my_bucket_name/*"
            ]
        }
    ],
    "Version": "2012-10-17"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The role that allows access to the EKS cluster from the GitHub repository's &lt;strong&gt;main branch&lt;/strong&gt; looks like this.&lt;/p&gt;

&lt;p&gt;Role name: &lt;strong&gt;postgres-db-dev-eks-restore&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Trusted entities
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::xxxxxxxxxxx:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:ref:refs/heads/main"
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
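&lt;p&gt;For GitHub's OIDC provider, the &lt;strong&gt;sub&lt;/strong&gt; claim for a push to a branch follows the documented pattern repo:OWNER/REPO:ref:refs/heads/BRANCH. A small sketch (the owner and repository names here are assumptions):&lt;/p&gt;

```shell
# GitHub OIDC subject claim for a branch push:
# repo:OWNER/REPO:ref:refs/heads/BRANCH
# Owner and repository names are assumptions for illustration.
OWNER=my-org
REPO=my-repo
BRANCH=main
SUB="repo:${OWNER}/${REPO}:ref:refs/heads/${BRANCH}"
echo "$SUB"
```

If the sub in the trust policy does not match this pattern exactly (including the owner segment), the AssumeRoleWithWebIdentity call from the workflow will be denied.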



&lt;p&gt;Policy attached to the role &lt;strong&gt;postgres-db-dev-eks-restore&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Statement": [
        {
            "Action": [
                "eks:DescribeCluster",
                "eks:ListClusters",
                "eks:AccessKubernetesApi"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:eks:us-east-1:xxxxxxxx:cluster/dev-usva-gerx24-cluster"
        },
        {
            "Action": [
                "sts:AssumeRole"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ],
    "Version": "2012-10-17"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the two new IAM roles created—each configured with its respective trust relationship and attached policies—you will need to retrieve their ARNs. These ARNs are required for use in two specific areas, which I will outline next.&lt;/p&gt;

&lt;p&gt;The role ARN for &lt;strong&gt;postgres-db-dev-restore-IRSA&lt;/strong&gt; (arn:aws:iam::xxxxxx:role/postgres-db-dev-restore-IRSA) should be associated with the Kubernetes service account responsible for retrieving the pgdump file from S3. This file will be used to perform the database restoration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: restore-db-sa
  namespace: restore-db
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxx:role/postgres-db-dev-restore-IRSA
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The role ARN for postgres-db-dev-eks-restore (arn:aws:iam::xxxxxx:role/postgres-db-dev-eks-restore) must be added to the EKS cluster by updating the &lt;strong&gt;aws-auth&lt;/strong&gt; &lt;strong&gt;ConfigMap&lt;/strong&gt; in the &lt;strong&gt;kube-system&lt;/strong&gt; namespace. The modification should be as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - "groups":
      - "github-ci-group"
      "rolearn": "arn:aws:iam::xxxxxx:role/postgres-db-dev-eks-restore"
      "username": "github:db-restore"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we need to configure RBAC to grant the role access exclusively to the namespace where the GitHub-triggered job responsible for database restoration will be deployed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: restore-db
  name: postgres-db-restore-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "delete"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: postgres-db-restore-rolebinding
  namespace: restore-db
subjects:
  - kind: User
    name: github:db-restore
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: postgres-db-restore-role
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
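&lt;p&gt;Once the Role and RoleBinding are applied, the mapping can be sanity-checked with kubectl auth can-i. The commands below are built as strings (running them requires cluster access); the expected answers reflect a binding scoped to the restore-db namespace only:&lt;/p&gt;

```shell
# kubectl auth can-i evaluates RBAC for an impersonated user. The mapped
# user github:db-restore should be able to create jobs only in restore-db.
ALLOWED_CHECK="kubectl auth can-i create jobs -n restore-db --as github:db-restore"
DENIED_CHECK="kubectl auth can-i create jobs -n default --as github:db-restore"
echo "$ALLOWED_CHECK   # expected: yes"
echo "$DENIED_CHECK    # expected: no"
```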



&lt;p&gt;At this stage, we should be ready to configure the GitHub Actions workflow to execute the database restoration process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Github workflow configuration
&lt;/h2&gt;

&lt;p&gt;Let’s begin by configuring a GitHub Actions workflow in the repository from which the process will be triggered. This workflow should allow the selection of a lower-tier environment, e.g. &lt;strong&gt;dev&lt;/strong&gt;, that needs to be refreshed with the latest data from the production database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: db restore dev [gerx_db_prod]


on:
  workflow_dispatch:
    inputs:
      database:
        description: "gerx_db_dev_x"
        required: true
        type: string
      environment:
        description: "environment"
        default: int
        type: string
      date:
        description: "Backup date format e.g 20250512 yyyymmdd"
        required: true
        type: string

jobs:
  db-restore-int:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - name: 🔑 Get AWS Creds
        id: aws-creds
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-1
          role-to-assume: arn:aws:iam::xxxxxx:role/postgres-db-dev-eks-restore

      - name: Update kubeconfig for EKS
        run: |
          aws eks update-kubeconfig --name ${{ inputs.environment }}-usva-gerx24-cluster --region us-east-1

      - name: Deploy Job
        run: |
          export DB_NAME=${{ inputs.database }}
          export ENV=${{ inputs.environment }}
          export DATE=${{ inputs.date }}
          export PGPASSWORD=${{ secrets.PGPASSWORD }}

          cat &amp;lt;&amp;lt;EOF | envsubst | kubectl apply -f -
          apiVersion: batch/v1
          kind: Job
          metadata:
            name: db-restore-job-$DB_NAME
            namespace: restore-db
            labels:
              app: db-restore-job
          spec:
            ttlSecondsAfterFinished: 300
            template:
              metadata:
                name: db-restore-job
                labels:
                  app: db-restore-job
              spec:
                initContainers:
                - name: copying-pgdump
                  image: amazon/aws-cli
                  command:
                    - /bin/sh
                    - -c
                    - |
                      echo "Copying files from pgdump_my_bucket_name"
                      aws s3 cp s3://pgdump_my_bucket_name/ /pg-dump --recursive
                  volumeMounts:
                    - name: pg-dump
                      mountPath: /pg-dump
                containers:
                - name: db-restore
                  image: gerx24/centos-tools:3.0.0
                  env:
                    - name: PGPASSWORD
                      value: "$PGPASSWORD"
                    - name: DB_NAME
                      value: "$DB_NAME"
                    - name: ENV
                      value: "$ENV"
                    - name: DATE
                      value: "$DATE"
                  command: ["/bin/bash", "-c"]
                  args:
                    - |
                      echo "Dropping old database..."
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "ALTER DATABASE $DB_NAME OWNER TO root;"

                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '$DB_NAME' AND pid &amp;lt;&amp;gt; pg_backend_pid(); DROP DATABASE $DB_NAME;"

                      echo "Creating new database..."
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "CREATE DATABASE $DB_NAME;"

                      echo "Changing ownership..."
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d postgres -c "ALTER DATABASE $DB_NAME OWNER TO postgres_owner_green;"

                      echo "Restoring database..."
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d $DB_NAME -f /pg-dump/postgres_dump_$DATE.sql

                      echo "Altering schema ownership..."
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d $DB_NAME -c "ALTER SCHEMA public OWNER TO postgres_owner_role; ALTER SCHEMA client_side OWNER TO postgres_owner_role; ALTER SCHEMA settings OWNER TO postgres_owner_role;"

                      echo "Running script"
                      PGPASSWORD=$PGPASSWORD psql -h rds.gerx24.usva.$ENV.gersonplace.com -p 5432 -U root -d $DB_NAME -f /pg-dump/privileges.sql
                  volumeMounts:
                    - name: pg-dump
                      mountPath: /pg-dump
                volumes:
                - name: pg-dump
                  emptyDir: {}
                restartPolicy: OnFailure
                serviceAccountName: restore-db-sa
          EOF

      - name: Wait for Job to Succeed [5 minutes check]
        run: |
          echo "Checking status of job db-restore-job-${{ inputs.database }}"
          for i in {1..30}; do
            STATUS=$(kubectl get job db-restore-job-${{ inputs.database }} -n restore-db -o jsonpath="{.status.conditions[?(@.type=='Complete')].status}")
            if [[ "$STATUS" == "True" ]]; then
              echo "✅ Job db-restore-job-${{ inputs.database }} completed successfully."
              exit 0
            fi

            FAILED=$(kubectl get job db-restore-job-${{ inputs.database }} -n restore-db -o jsonpath="{.status.failed}")
            if [[ "$FAILED" -ge 1 ]]; then
              echo "❌ Job db-restore-job-${{ inputs.database }} failed."
              exit 1
            fi

            echo "⏳ Job db-restore-job-${{ inputs.database }} not complete yet... waiting 10 seconds"
            sleep 10
          done

          echo "⏰ Timed out waiting for job to complete."
          exit 1


      - name: Delete Job
        run: |
          kubectl delete job db-restore-job-${{ inputs.database }} -n restore-db
          echo "Job db-restore-job-${{ inputs.database }} completed"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In short, the workflow above does the following.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A.&lt;/strong&gt; The GitHub Actions workflow defines inputs with default values under the workflow_dispatch trigger to enable environment selection at runtime, e.g.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; export DB_NAME=gerx_db_dev_1
 export ENV=dev
 export DATE=20250515 (assuming you are using a dump created today)
 export PGPASSWORD=${{ secrets.PGPASSWORD }} (a repository secret holding the password used to connect to the database)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2680vscvzvtu28cuns9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2680vscvzvtu28cuns9u.png" alt="Image description" width="383" height="421"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;B.&lt;/strong&gt; A new Kubernetes Job named &lt;strong&gt;db-restore-job-gerx_db_dev_1&lt;/strong&gt; will be created in the restore-db namespace. This Job will connect to the target database at rds.gerx24.usva.dev.gersonplace.com, execute the necessary psql commands, and restore the database using the latest backup.&lt;/p&gt;

&lt;p&gt;The backup file will be retrieved from the S3 bucket &lt;strong&gt;pgdump_my_bucket_name&lt;/strong&gt; via an initContainer, which will download the file and mount it to a shared volume. The main container will then use this mounted file during the database restoration step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;C.&lt;/strong&gt; Job status check: a step continuously monitors the status of the Kubernetes Job using kubectl, ensuring that it completes successfully within a 5-minute window. Once the Job is confirmed to have succeeded, a final step removes the Job from the cluster by executing kubectl delete job.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Configure IRSA using EKS to access S3 from a POD in terraform</title>
      <dc:creator>Gerson Morales</dc:creator>
      <pubDate>Sun, 05 Jan 2025 04:43:04 +0000</pubDate>
      <link>https://dev.to/gerson_morales_3e89188d50/configure-irsa-using-eks-to-access-s3-from-a-pod-513a</link>
      <guid>https://dev.to/gerson_morales_3e89188d50/configure-irsa-using-eks-to-access-s3-from-a-pod-513a</guid>
      <description>&lt;p&gt;This post will provide a detailed, step-by-step guide for configuring  IRSA using terraform which would allow a POD running in EKS to connect to S3 service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;AWS Account.&lt;/li&gt;
&lt;li&gt;S3 bucket.&lt;/li&gt;
&lt;li&gt;Terraform.&lt;/li&gt;
&lt;li&gt;EKS cluster.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Scope
&lt;/h2&gt;

&lt;p&gt;The final goal is to allow a pod to copy or put files to and from an S3 bucket called krakenmoto.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;Initial steps&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;💻&lt;code&gt;Install terraform&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  OSX
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;brew install hashicorp/tap/terraform&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Windows
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;choco install terraform&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Linux
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get install terraform&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform module configuration
&lt;/h2&gt;

&lt;p&gt;This module creates the resources necessary to get IRSA working:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 Bucket.&lt;/li&gt;
&lt;li&gt;IAM Role with trust-relationship for eks service-account and namespace.&lt;/li&gt;
&lt;li&gt;IAM Policy for S3 access.&lt;/li&gt;
&lt;li&gt;Data resource to get EKS access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The module is saved in the &lt;code&gt;modules/irsa&lt;/code&gt; folder and contains three files: &lt;code&gt;data.tf&lt;/code&gt;, &lt;code&gt;main.tf&lt;/code&gt;, and &lt;code&gt;variables.tf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;data.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_eks_cluster" "eks" {
  count = var.eks_cluster_id == null ? 0 : 1
  name  = var.eks_cluster_id
}

data "aws_partition" "current" {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;main.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  application = "gersonplace-irsa"
  name        = var.project_name

  eks_oidc_issuer                       = var.eks_cluster_id == null ? "" : join("/", slice(split("/", one(data.aws_eks_cluster.eks).identity[0].oidc[0].issuer), 2, 5))
  eks_cluster_oidc_arn                  = "arn:${data.aws_partition.current.partition}:iam::${var.aws_account}:oidc-provider/${local.eks_oidc_issuer}"
  eks_namespace                         = "gersonplace"
  service_account                       = "${local.eks_namespace}-sa"
  common_tags = {
    application = local.application
  }
}

module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "4.3.0"
  bucket  = var.bucket_name

  control_object_ownership = true
  object_ownership         = "BucketOwnerPreferred"

  acl = "private"

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true

  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "AES256"
      }
    }
  }

  tags = local.common_tags
}

resource "aws_iam_role" "irsa" {
  name        = "gersonplace-irsa-role"
  description = "${local.name} EKS IRSA role"

  assume_role_policy = &amp;lt;&amp;lt;-EOT
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
          "StringLike": {
            "${local.eks_oidc_issuer}:sub": "system:serviceaccount:${local.eks_namespace}:${local.service_account}",
            "${local.eks_oidc_issuer}:aud": "sts.amazonaws.com"
          }
        },
        "Principal": {
          "Federated": "${local.eks_cluster_oidc_arn}"
        },
        "Effect": "Allow",
        "Sid": ""
      }
    ]
  }
  EOT

  tags = local.common_tags
}

resource "aws_iam_policy" "irsa" {
  name        = "${local.name}-irsa-policy"
  description = "${local.name}-integration with EKS Pods"
  policy = jsonencode(
    {
      Version = "2012-10-17"
      Statement = [
        {
          Action = [
            "s3:*",
          ]
          Effect = "Allow"
          Resource = [
            "arn:aws:s3:::${var.bucket_name}",
            "arn:aws:s3:::${var.bucket_name}/*"
          ],
        }
      ]
    }
  )
  tags = local.common_tags
}

resource "aws_iam_role_policy_attachment" "irsa" {
  role       = aws_iam_role.irsa.name
  policy_arn = aws_iam_policy.irsa.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "aws_account" {
  description = "AWS account ID"
  type        = string
}

variable "region" {
  description = "AWS Region"
  type        = string
}

variable "bucket_name" {
  type = string
}

variable "project_name" {
  type = string
}

variable "eks_cluster_id" {
  type = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These three files create the infrastructure necessary to access S3 from an EKS pod in the &lt;code&gt;gersonplace&lt;/code&gt; &lt;code&gt;namespace&lt;/code&gt;. The bucket name is configured as an input in the &lt;code&gt;main.tf&lt;/code&gt; file that calls the module to create the whole infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywjvmibw08ntrms1x404.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywjvmibw08ntrms1x404.png" alt="Image description" width="173" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform code that would set variables to call the module
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;main.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "IRSA" {
  source = "./modules/irsa"
  aws_account    = "112223334445"
  region         = "us-east-1"
  bucket_name    = "krakenmoto"
  project_name   = "gersonplace-irsa"
  eks_cluster_id = "gersonplace-eks-project"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we are ready to run Terraform. Place yourself in the folder where &lt;code&gt;main.tf&lt;/code&gt; is located and run the following commands:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform init -reconfigure -upgrade&lt;/code&gt;&lt;br&gt;
&lt;code&gt;terraform validate&lt;/code&gt;&lt;br&gt;
&lt;code&gt;terraform plan&lt;/code&gt;&lt;br&gt;
&lt;code&gt;terraform apply&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;At this point we should have a new S3 bucket named &lt;code&gt;krakenmoto&lt;/code&gt;, along with an &lt;code&gt;IAM role&lt;/code&gt; and &lt;code&gt;IAM policy&lt;/code&gt;; the role's &lt;code&gt;trust-relationship&lt;/code&gt; contains something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::112223334445:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/8DF14F971F8dfdDSJDSJDJSJDJA"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "oidc.eks.us-east-1.amazonaws.com/id/8DF14F971F8dfdDSJDSJDJSJDJA:sub": "system:serviceaccount:gersonplace:gersonplace-sa",
                    "oidc.eks.us-east-1.amazonaws.com/id/8DF14F971F8dfdDSJDSJDJSJDJA:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration enables access for the gersonplace-sa &lt;code&gt;service account&lt;/code&gt; in the gersonplace &lt;code&gt;namespace&lt;/code&gt;. By associating the service account with the role through the appropriate annotation, any resource using this service account will be granted permissions to access the resources specified in the role’s policy. This approach ensures that the Deployment/Pod can securely access the permitted resources in alignment with the role's policy. I'll explain this next with Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes
&lt;/h2&gt;

&lt;p&gt;Now that we have the required infrastructure and permissions in AWS, it is time to deploy the EKS resources and test connectivity to S3 from a pod. For this example I will configure a Deployment running one pod in the gersonplace namespace, with a service account that includes the annotation for the role holding the necessary S3 permissions.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;code&gt;EKS Resources&lt;/code&gt;
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Deployment:&lt;/code&gt; gersonplace-irsa&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Service-Account:&lt;/code&gt; gersonplace-sa&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Namespace:&lt;/code&gt; gersonplace&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;namespace.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Namespace
metadata:
  name: gersonplace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;service-account.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gersonplace-sa
  namespace: gersonplace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::112223334445:role/gersonplace-irsa-role
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Annotation &lt;code&gt;eks.amazonaws.com/role-arn:&lt;/code&gt; with the role arn has been added.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Deployment.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gersonplace-irsa
  namespace: gersonplace
  labels:
    app: gersonplace
spec:
  selector:
    matchLabels:
      app: gersonplace
  replicas: 1
  template:
    metadata:
      labels:
        app: gersonplace
    spec:
      containers:
      - name: gersonplace-irsa
        image: amazon/aws-cli:latest
        imagePullPolicy: Always
        command: ["sleep", "infinity"]
      serviceAccountName: gersonplace-sa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Service-Account has been added to the deployment.&lt;/p&gt;

&lt;p&gt;⚠Important:&lt;br&gt;
&lt;code&gt;serviceAccountName&lt;/code&gt; should be set to the &lt;code&gt;service-account&lt;/code&gt; specified in the trust policy.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;service-account&lt;/code&gt; should have the &lt;code&gt;annotation&lt;/code&gt; &lt;code&gt;eks.amazonaws.com/role-arn:&lt;/code&gt; with the role that contains the policy that allows access to S3.&lt;/p&gt;

&lt;p&gt;At this point you can apply k8s manifest using &lt;code&gt;kubectl&lt;/code&gt; and login into the pod and test access to S3 bucket &lt;code&gt;krakenmoto&lt;/code&gt; and list files inside of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  💻 ✅ Validation
&lt;/h2&gt;

&lt;p&gt;Shell into the deployment pod and run &lt;code&gt;aws s3 ls s3://krakenmoto&lt;/code&gt;; you should be able to list, put, and copy files, as the policy allows.&lt;/p&gt;
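&lt;p&gt;A quick way to confirm the service-account annotation took effect is to check the environment variables the EKS pod identity webhook injects into the pod. The sketch below builds the kubectl command as a string, since it needs cluster access to actually run:&lt;/p&gt;

```shell
# With IRSA wired up, the pod identity webhook injects AWS_ROLE_ARN and
# AWS_WEB_IDENTITY_TOKEN_FILE into the pod's environment; the AWS CLI
# picks them up automatically. The kubectl command is shown as a string.
CHECK_CMD="kubectl exec -n gersonplace deploy/gersonplace-irsa -- env"
echo "$CHECK_CMD   # then look for:"
for v in AWS_ROLE_ARN AWS_WEB_IDENTITY_TOKEN_FILE; do
  echo "  $v"
done
```

If those variables are missing, the usual culprits are a typo in the annotation or a pod created before the service account was annotated.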

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydfcmq36erunj8ppewxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydfcmq36erunj8ppewxh.png" alt="Image description" width="800" height="106"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>eks</category>
      <category>s3</category>
      <category>aws</category>
      <category>irsa</category>
    </item>
    <item>
      <title>Setting up IAM Anywhere using terraform</title>
      <dc:creator>Gerson Morales</dc:creator>
      <pubDate>Sat, 04 Jan 2025 22:43:02 +0000</pubDate>
      <link>https://dev.to/gerson_morales_3e89188d50/setting-up-iam-anywhere-using-terraform-3nf</link>
      <guid>https://dev.to/gerson_morales_3e89188d50/setting-up-iam-anywhere-using-terraform-3nf</guid>
      <description>&lt;p&gt;This post walks you through the steps to configure IAM Roles Anywhere in AWS using Terraform and custom certificates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;AWS Account.&lt;/li&gt;
&lt;li&gt;Terraform.&lt;/li&gt;
&lt;li&gt;Certificate Authority [&lt;code&gt;PrivateCA.pem&lt;/code&gt;, &lt;code&gt;client.key&lt;/code&gt; and &lt;code&gt;client.pem&lt;/code&gt;].&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws_signing_helper&lt;/code&gt; from AWS &lt;a href="https://docs.aws.amazon.com/rolesanywhere/latest/userguide/credential-helper.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/rolesanywhere/latest/userguide/credential-helper.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Scope
&lt;/h2&gt;

&lt;p&gt;We are going to configure IAM Roles Anywhere to allow access to the &lt;code&gt;krakenmoto&lt;/code&gt; bucket in S3 without an &lt;code&gt;access-key&lt;/code&gt; or &lt;code&gt;secret-access-key&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;Initial steps&lt;/code&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Configure the &lt;code&gt;Certificate Authority&lt;/code&gt;.
To use AWS IAM Roles Anywhere we need an X.509 certificate issued by a CA (Certificate Authority). You can use the script below to create the certificate bundle, consisting of the &lt;code&gt;PrivateCA.pem&lt;/code&gt;, &lt;code&gt;client.pem&lt;/code&gt; and &lt;code&gt;client.key&lt;/code&gt; files required for access once the infrastructure is ready.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;certificate.sh&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

SERVER="${SERVER:-client}"

OUTPUT_PATH=${OUTPUT_PATH:-certificates}
mkdir -p $OUTPUT_PATH

CORPORATION=GERSONPLACE
GROUP="Engineering"
CITY="Cartago"
STATE="Paraiso"
COUNTRY=CR

CERT_AUTH_PASS=`openssl rand -base64 32`
echo $CERT_AUTH_PASS &amp;gt; cert_auth_password
CERT_AUTH_PASS=`cat cert_auth_password`

cat -&amp;lt;&amp;lt;EOF &amp;gt; config.cnf
[ req ]
distinguished_name  = req_distinguished_name
attributes      = req_attributes

[ req_distinguished_name ]
countryName         = Country Name (2 letter code)
countryName_min         = 2
countryName_max         = 2
stateOrProvinceName     = State or Province Name (full name)
localityName            = Locality Name (eg, city)
0.organizationName      = Organization Name (eg, company)
organizationalUnitName      = Organizational Unit Name (eg, section)
commonName          = Common Name (eg, fully qualified host name)
commonName_max          = 64
emailAddress            = Email Address
emailAddress_max        = 64

[ req_attributes ]
challengePassword       = A challenge password
challengePassword_min       = 4
challengePassword_max       = 20

[ v3_ca ]
basicConstraints        = critical, CA:TRUE
subjectKeyIdentifier    = hash
authorityKeyIdentifier  = keyid:always, issuer:always
keyUsage                = critical, cRLSign, digitalSignature, keyCertSign

[SAN]
subjectAltName=DNS:$SERVER
EOF

echo "Create the certificate authority"
openssl genrsa -out $OUTPUT_PATH/PrivateCA.key 4096
openssl \
  req \
  -subj "/CN=$SERVER.ca/OU=$GROUP/O=$CORPORATION/L=$CITY/ST=$STATE/C=$COUNTRY" \
  -new \
  -x509 \
  -passout pass:$CERT_AUTH_PASS \
  -key $OUTPUT_PATH/PrivateCA.key \
  -out $OUTPUT_PATH/PrivateCA.pem \
  -config config.cnf \
  -extensions v3_ca \
  -days 36500

echo "Create client private key (used to decrypt the cert we get from the CA)"
openssl genrsa -out $OUTPUT_PATH/$SERVER.key 4096

cat -&amp;lt;&amp;lt;EOF &amp;gt; client.ext
basicConstraints = CA:FALSE
authorityKeyIdentifier = keyid,issuer
keyUsage = nonRepudiation, digitalSignature, keyEncipherment, dataEncipherment
EOF

echo "Create the CSR (Certificate Signing Request)"

openssl req -new -key $OUTPUT_PATH/$SERVER.key -out $SERVER.csr -nodes \
  -subj "/CN=$SERVER/OU=$GROUP/O=$CORPORATION/L=$CITY/ST=$STATE/C=$COUNTRY" \
  -sha256

echo "Sign the certificate with the certificate authority"
openssl x509 -req -in $SERVER.csr -CA $OUTPUT_PATH/PrivateCA.pem -CAkey $OUTPUT_PATH/PrivateCA.key -CAcreateserial -out $OUTPUT_PATH/$SERVER.pem \
  -days 36500 \
  -extfile client.ext \
  -passin pass:$CERT_AUTH_PASS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the script with &lt;code&gt;./certificate.sh&lt;/code&gt;; it will generate the required certificates inside the &lt;code&gt;certificates/&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufsb4ynuyj7xdxsp28x4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufsb4ynuyj7xdxsp28x4.png" alt="Image description" width="181" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have your &lt;code&gt;custom&lt;/code&gt; certificates ready, it is time to set up the Terraform code.&lt;/p&gt;

&lt;h2&gt;
  
  
  💻 Install Terraform
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## OSX ##
brew install hashicorp/tap/terraform

## Windows ##
choco install terraform

## Linux ##
sudo apt-get install terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🛑 &lt;code&gt;Important&lt;/code&gt;: If you are using a Mac with an M1/M2 chip, you will probably run into this error when you run the module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;⚠Error: Incompatible provider version
Provider registry.terraform.io/hashicorp/template v2.2.0 does not have a package available for your current platform, darwin_arm64.
Provider releases are separate from Terraform CLI releases, so not all providers are available for all platforms. Other versions
of this provider may have different platforms supported.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🛠 No worries, let's fix it by running these three commands:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;brew install kreuzwerker/taps/m1-terraform-provider-helper&lt;/code&gt;&lt;br&gt;
&lt;code&gt;m1-terraform-provider-helper activate&lt;/code&gt; -&amp;gt; [In case you have not activated the helper]&lt;br&gt;
&lt;code&gt;m1-terraform-provider-helper install hashicorp/template -v 2.2.0&lt;/code&gt; --&amp;gt; [Install and compile]&lt;/p&gt;
&lt;h2&gt;
  
  
  Terraform module configuration
&lt;/h2&gt;

&lt;p&gt;The code will create the following resources in AWS &lt;code&gt;IAM&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Trust Anchor:&lt;/code&gt; The trust anchor establishes trust between AWS IAM Roles Anywhere and the CA. An application running outside AWS authenticates against the trust anchor with an X.509 client certificate to get temporary AWS credentials.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;IAM Role:&lt;/code&gt; The client assumes this AWS IAM role to be granted the permissions in the attached IAM policy. To use a role, we must trust the IAM Roles Anywhere service principal in the role's trust policy.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Profile:&lt;/code&gt; In the profile, we define an IAM role to be assumed by the client. We can set additional permissions boundaries on active sessions with AWS managed policies and condition blocks.&lt;/p&gt;

&lt;p&gt;In this case we are going to use a &lt;code&gt;custom module&lt;/code&gt; that creates all these resources at once. Start by creating a &lt;code&gt;modules/custom&lt;/code&gt; folder containing the .tf files that define the resources Terraform will create in AWS: &lt;code&gt;anchor.tf&lt;/code&gt;, &lt;code&gt;iam.tf&lt;/code&gt;, &lt;code&gt;outputs.tf&lt;/code&gt;, &lt;code&gt;tls-crt.tf&lt;/code&gt; and &lt;code&gt;variables.tf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;anchor.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Trust anchors
resource "aws_rolesanywhere_trust_anchor" "trust_anchor" {
  name    = "${local.project_name}-trust_anchor"
  enabled = true
  source {
    source_data {
      x509_certificate_data = file("${path.module}/certificates/PrivateCA.pem")
    }
    source_type = "CERTIFICATE_BUNDLE"
  }
}

# Profile
resource "aws_rolesanywhere_profile" "profile" {
  enabled             = true
  name                = "${local.project_name}-profile"
  role_arns           = [aws_iam_role.roles.arn]
  managed_policy_arns = [aws_iam_policy.profile_managed_policies.arn]
}


# Profile policies
# Managed policies limit the permissions granted by the role's permissions policy and are assigned to the role session when the role is assumed.
resource "aws_iam_policy" "profile_managed_policies" {
  name        = "${local.project_name}-user-profile-policies"
  path        = "/"
  description = "Allows access to S3"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = [
        "s3:*",
      ]
      Resource = [
            "arn:aws:s3:::${var.bucket_name}",
            "arn:aws:s3:::${var.bucket_name}/*"
      ]
      Effect = "Allow"
    }]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;iam.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  project_name = var.project_name

}
resource "aws_iam_role" "roles" {
  name = "${local.project_name}-iamanywhere-trust-role"
  path = "/"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "rolesanywhere.amazonaws.com",
        },
        Action = [
          "sts:AssumeRole",
          "sts:TagSession",
          "sts:SetSourceIdentity"
        ],
        Condition = {
          ArnEquals = {
            "aws:SourceArn" = "arn:aws:rolesanywhere:${var.region}:${var.aws_account}:trust-anchor/${aws_rolesanywhere_trust_anchor.trust_anchor.id}"

          }
        }
      }
    ]
  })
}

# Permission policies in the role of iamanywhere-trust-role
resource "aws_iam_policy" "s3_full_access" {
  name        = "${local.project_name}-iamanywhere-trust-role-policies"
  path        = "/"
  description = "Allows access to S3"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = [
        "s3:*",
      ]
      Resource = [
        "arn:aws:s3:::${var.bucket_name}",
        "arn:aws:s3:::${var.bucket_name}/*"
      ]
      Effect = "Allow"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "roles_s3_access" {
  role       = aws_iam_role.roles.name
  policy_arn = aws_iam_policy.s3_full_access.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;outputs.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "anchor" {
  value = aws_rolesanywhere_trust_anchor.trust_anchor.arn
}

output "profile" {
  value = aws_rolesanywhere_profile.profile.arn
}

output "awsiam" {
  value = aws_iam_role.roles.arn
}

data "template_file" "aws_export_profile" {
  template = &amp;lt;&amp;lt;-EOT
[profile iam_anywhere]
region=us-east-1
credential_process = aws_signing_helper credential-process --trust-anchor-arn ${aws_rolesanywhere_trust_anchor.trust_anchor.arn} --profile-arn ${aws_rolesanywhere_profile.profile.arn} --role-arn ${aws_iam_role.roles.arn} --certificate /path/client.pem --private-key /path/client.key
EOT
  vars = {
    trust_anchor_arn = aws_rolesanywhere_trust_anchor.trust_anchor.arn
    profile_arn      = aws_rolesanywhere_profile.profile.arn
    role_arn         = aws_iam_role.roles.arn
  }
}

resource "local_file" "aws_export_profile" {
  content  = data.template_file.aws_export_profile.rendered
  filename = "./aws-config.txt"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;tls-crt.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "tls_private_key" "roles" {
  algorithm = "RSA"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "aws_account" {
  description = "AWS account ID"
  type        = string
}

variable "region" {
  description = "AWS Region"
  type        = string
}

variable "bucket_name" {
  type = string
}

variable "project_name" {
  type = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Those 5 files configure the Terraform resources needed to create the AWS infrastructure for accessing S3 via IAM Roles Anywhere. You can modify &lt;code&gt;iam.tf&lt;/code&gt; to set additional policies if needed, but in this example access is limited to the &lt;code&gt;krakenmoto&lt;/code&gt; bucket, which is passed to the module in &lt;code&gt;main.tf&lt;/code&gt; as an input.&lt;/p&gt;

&lt;p&gt;Below is how the Terraform file structure should look. Notice that the &lt;code&gt;certificates/&lt;/code&gt; folder produced by running &lt;code&gt;./certificate.sh&lt;/code&gt; is inside the module, since the trust anchor resource uses it to configure the certificate bundle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcd76erfgu3qqx24d5e7t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcd76erfgu3qqx24d5e7t.png" alt="Image description" width="205" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform code that sets the variables and calls the module
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;main.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "Iamanywhere" {
  source = "./modules/custom"
  aws_account  = "112223334445"
  region       = "us-east-1"
  bucket_name  = "krakenmoto"
  project_name = "gersonplace"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we are ready to run Terraform, so place yourself in the folder where &lt;code&gt;main.tf&lt;/code&gt; is located and run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform init -reconfigure -upgrade&lt;/code&gt;&lt;br&gt;
&lt;code&gt;terraform validate&lt;/code&gt;&lt;br&gt;
&lt;code&gt;terraform plan&lt;/code&gt;&lt;br&gt;
&lt;code&gt;terraform apply&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once you have applied the changes to AWS, you will see a new file generated by Terraform named &lt;code&gt;aws-config.txt&lt;/code&gt;. Use its contents to create a profile that connects to your AWS account and accesses the &lt;code&gt;krakenmoto&lt;/code&gt; bucket in S3.&lt;/p&gt;

&lt;p&gt;Save this profile into &lt;code&gt;.aws/credentials&lt;/code&gt; and download &lt;code&gt;aws_signing_helper&lt;/code&gt; from &lt;a href="https://docs.aws.amazon.com/rolesanywhere/latest/userguide/credential-helper.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/rolesanywhere/latest/userguide/credential-helper.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have downloaded &lt;code&gt;aws_signing_helper&lt;/code&gt;, run&lt;br&gt;
&lt;code&gt;cp aws_signing_helper /usr/local/bin&lt;/code&gt; and&lt;br&gt;
&lt;code&gt;chmod +x /usr/local/bin/aws_signing_helper&lt;/code&gt; to set permissions.&lt;/p&gt;

&lt;p&gt;At this point you have two options to authenticate:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Manual&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./aws_signing_helper credential-process --trust-anchor-arn arn:aws:rolesanywhere:us-east-1:112223334445:trust-anchor/957dd152-a4e2-4ac8-ab79-ff70ae66cf07 --profile-arn arn:aws:rolesanywhere:us-east-1:286514997612:profile/c25f019c-0234-4905-99cc-6dbeacc65b69 --role-arn arn:aws:iam::112223334445:role/gersonplace-iamanywhere-trust-role --certificate /path/client.pem --private-key /path/client.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Automatic&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_PROFILE=iam_anywhere
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This uses the &lt;code&gt;AWS_PROFILE&lt;/code&gt; configured in &lt;code&gt;~/.aws/credentials&lt;/code&gt;, e.g.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[iam_anywhere]
region=us-east-1
credential_process = aws_signing_helper credential-process --trust-anchor-arn arn:aws:rolesanywhere:us-east-1:112223334445:trust-anchor/957dd152-a4e2-4ac8-ab79-ff70ae66cf07 --profile-arn arn:aws:rolesanywhere:us-east-1:286514997612:profile/c25f019c-0234-4905-99cc-6dbeacc65b69 --role-arn arn:aws:iam::112223334445:role/gersonplace-iamanywhere-trust-role --certificate /path/client.pem --private-key /path/client.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🛑 Important: You must use &lt;code&gt;client.pem&lt;/code&gt; and &lt;code&gt;client.key&lt;/code&gt; in order to use the &lt;code&gt;anchor&lt;/code&gt; to authenticate with AWS.&lt;/p&gt;

&lt;p&gt;At this point, after setting the AWS profile to &lt;code&gt;iam_anywhere&lt;/code&gt; in your terminal, you can run &lt;code&gt;aws s3 ls s3://krakenmoto&lt;/code&gt; and list the files inside the bucket.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅💻 Verification
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;aws sts get-caller-identity&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "UserId": "AROAUFNM75FWHT6NA3BGE:3ebe8913d3f56b97407de5e686207ae4f8d99057",
    "Account": "112223334445",
    "Arn": "arn:aws:sts::112223334445:assumed-role/gersonplace-iamanywhere-trust-role/3ebe8913d3f56b97407de5e686207ae4f8d99057"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;aws s3 ls s3://krakenmoto&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf8s2q6ch92ce7ghvm92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf8s2q6ch92ce7ghvm92.png" alt="Image description" width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iamanywhere</category>
      <category>iam</category>
      <category>s3</category>
    </item>
    <item>
      <title>Migrate 🪳CockroachDB into Postgres🐘</title>
      <dc:creator>Gerson Morales</dc:creator>
      <pubDate>Tue, 17 Dec 2024 18:27:31 +0000</pubDate>
      <link>https://dev.to/gerson_morales_3e89188d50/migrate-coackroach-db-to-postgres-43f4</link>
      <guid>https://dev.to/gerson_morales_3e89188d50/migrate-coackroach-db-to-postgres-43f4</guid>
      <description>&lt;p&gt;Migrating from CockroachDB to PostgreSQL involves several steps: the two databases share some similarities, but they also have important differences, particularly in terms of distributed systems, consistency models, and features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;CockroachDB cluster running in AWS [EC2] across 3 AZs.&lt;/li&gt;
&lt;li&gt;Amazon S3 (Simple Storage Service).&lt;/li&gt;
&lt;li&gt;Amazon EKS (Elastic Kubernetes Service).&lt;/li&gt;
&lt;li&gt;CockroachDB running locally on your computer.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Important Details
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Database name 'lynx_core_prod'&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  CRDB Backup Preparation
&lt;/h2&gt;

&lt;p&gt;1- Log in to CockroachDB via Session Manager.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo -i
su - cockroach
cockroach cert create-client root --certs-dir=certs/ --ca-key=ca.key
cockroach sql --certs-dir=certs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2- Create Cockroach Backup.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;BACKUP DATABASE lynx_core_prod INTO 'userfile://defaultdb.public.userfiles_$user/lynx_core_prod-backup' AS OF SYSTEM TIME '-10s';&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;3- Save the new backup [lynx_core_prod-backup] to a local directory and push it to AWS S3.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cockroach userfile get --certs-dir=certs/ 'userfile://defaultdb.public.userfiles_$user/lynx_core_prod-backup'&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;



&lt;p&gt;&lt;code&gt;aws s3 cp lynx_core_prod-backup s3://gersonsplace-bucket/crdb-backup --recursive&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;At this point we already have a backup in S3 of the database we need to migrate to Postgres. Now it is time to export the DDL and make it compatible with Postgres. [This can be tricky, and you will probably need to do some manual work depending on how big your DB is.]&lt;/p&gt;

&lt;h2&gt;
  
  
  Restore CRDB backup into a local CRDB database on your Computer.
&lt;/h2&gt;

&lt;p&gt;1- Copy your database from S3 to your local computer.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws s3 cp s3://gersonsplace-bucket/crdb-backup  /crdb_to_postgres --recursive&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;2- Install CockroachDB locally.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;brew install cockroach&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;3- Start a single-node CockroachDB and access the SQL shell.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cockroach start-single-node --insecure --listen-addr=localhost:26258 --http-addr=localhost:8081&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;



&lt;p&gt;&lt;code&gt;cockroach sql --insecure --host=localhost:26258&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;4- Now create a new folder so you can mount the backup on your local CRDB and then restore it.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd /Users/gersonsplace/cockroach-data/&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;



&lt;p&gt;&lt;code&gt;mkdir extern/backup&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Note: Copy the backup to cockroach-data/extern/backup folder.&lt;/p&gt;

&lt;p&gt;5- Validate the backup on your local CRDB database using cockroach sql.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;show backups in 'nodelocal://1/backup';&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;6- Inspect the files inside the backup using cockroach sql (optional).&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;show backup from LATEST in 'nodelocal://1/backup';&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;7- Restore backup into your local CRDB database with &lt;code&gt;skip_localities_check&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RESTORE DATABASE lynx_core_prod  FROM LATEST IN 'nodelocal://1/backup' with skip_localities_check;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Congratulations at this point you have your Backup restored on your local CRDB database!!! 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Prepare CRDB to be migrated into Postgres
&lt;/h2&gt;

&lt;p&gt;1- Export the DDL [Data Definition Language]: create the following script and run it with &lt;code&gt;./ddl.sh&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

echo "Exporting DDL for all tables to ddl.sql..."

# Dump the CREATE statements for all tables
psql -U root -h localhost -p 26258 -d lynx_core_prod -Ato ddl.sql -c "show create all tables"

echo "DDL export complete. Check ddl.sql for the results."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script above creates a &lt;code&gt;ddl.sql&lt;/code&gt; file containing all the tables from your database in CockroachDB style. Here comes the tricky part, and probably some manual work; if someone knows a better approach, feel free to share it, but this process worked for me.&lt;/p&gt;

&lt;p&gt;Example of how 2 tables look in &lt;code&gt;ddl.sql&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfx65fslobwgi7h2iga7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfx65fslobwgi7h2iga7.png" alt=" " width="571" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example of the same 2 tables after making them Postgres-compatible:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0mlbk05756ho7jior64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0mlbk05756ho7jior64.png" alt=" " width="529" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;As I mentioned, this can be tricky but not impossible, depending on the size of your CRDB and the number of tables.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once you have an idea of how your tables will look in the Postgres-compatible format, you can move on to the next step.&lt;/p&gt;
&lt;h2&gt;
  
  
  Run Postgres Database local on your computer
&lt;/h2&gt;

&lt;p&gt;1- Install Postgres.&lt;br&gt;
&lt;code&gt;brew install postgresql@14&lt;/code&gt;&lt;br&gt;
&lt;code&gt;pg_ctl -D /Users/user/postgressql -l logfile start&lt;/code&gt;&lt;br&gt;
&lt;code&gt;psql postgres&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Connect local CRDB and Postgres together
&lt;/h2&gt;

&lt;p&gt;1- Create a &lt;code&gt;foreign server&lt;/code&gt; in the Postgres database and allow it to connect to CRDB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EXTENSION postgres_fdw;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE SERVER crdb FOREIGN DATA WRAPPER postgres_fdw OPTIONS (
    host 'localhost',
    port '26258',
    dbname 'lynx_core_prod',
    sslmode 'disable'
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE USER MAPPING FOR root SERVER crdb OPTIONS (
 user 'root', password ''
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2- Now you need to create a &lt;code&gt;foreign table&lt;/code&gt; in Postgres for each table in CRDB. Example using the 2 tables I mentioned above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3h8cpl52nxij722t5a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3h8cpl52nxij722t5a3.png" alt=" " width="425" height="350"&gt;&lt;/a&gt;&lt;br&gt;
Note: Notice that I use the &lt;code&gt;foreign server&lt;/code&gt; to access the table on CRDB from Postgres.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`SERVER crdb OPTIONS (
    table_name 'table-name'`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3- Now you need to create a local table to insert the data from each foreign table. I used the prefix &lt;code&gt;local_&lt;/code&gt; when creating each local table so I could &lt;code&gt;ALTER&lt;/code&gt; it once the import finished. Example using the 2 tables I mentioned above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fcnzsviq5cu5eh8oesp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fcnzsviq5cu5eh8oesp.png" alt=" " width="618" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- At this point you can drop &lt;code&gt;foreign server&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DROP EXTENSION postgres_fdw;
DROP SERVER crdb CASCADE;
DROP USER MAPPING FOR root SERVER crdb;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5- After &lt;code&gt;importing the data&lt;/code&gt; into the local tables and dropping the &lt;code&gt;foreign server&lt;/code&gt;, it is time to use &lt;code&gt;ALTER&lt;/code&gt; to rename each table to its correct name. Example using the 2 tables I mentioned above.&lt;br&gt;
&lt;code&gt;ALTER TABLE public.local_schema_migrations RENAME TO schema_migrations;&lt;/code&gt;&lt;br&gt;
&lt;code&gt;ALTER TABLE client_side.local_mounts RENAME TO mounts;&lt;/code&gt;&lt;/p&gt;
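&lt;p&gt;If you have many tables, you can generate the rename statements instead of typing each one. A small hypothetical helper (the two hard-coded names are just the example tables from above; in practice you could feed the list from &lt;code&gt;pg_tables&lt;/code&gt; via &lt;code&gt;psql -Atc&lt;/code&gt;):&lt;/p&gt;

```shell
# Hypothetical helper: emit one ALTER TABLE ... RENAME per "local_"-prefixed
# table. The two names are hard-coded for illustration.
printf '%s\n' \
  'public.local_schema_migrations' \
  'client_side.local_mounts' |
while IFS=. read -r schema table; do
  # ${table#local_} strips the "local_" prefix to recover the real name
  echo "ALTER TABLE ${schema}.${table} RENAME TO ${table#local_};"
done | tee /tmp/rename_tables.sql
```

&lt;p&gt;You can then run the resulting file with &lt;code&gt;psql -f /tmp/rename_tables.sql&lt;/code&gt;.&lt;/p&gt;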

&lt;p&gt;6- Finally, check your local Postgres DB; it should match the CRDB database, including all the data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd8hua86uqknjowithk6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd8hua86uqknjowithk6.png" alt=" " width="114" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's make things more interesting and do the same process, but now in Kubernetes
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Minikube or EKS with IRSA to access S3 and be able to download backup using initcontainer and &lt;code&gt;emptyDir&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CRDB Backup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;3 files: &lt;code&gt;create_local_tables.sql&lt;/code&gt;, &lt;code&gt;create_foreign_tables.sql&lt;/code&gt; and &lt;code&gt;rename_tables.sql&lt;/code&gt;. These are the same files we explained before to make the CRDB data work in Postgres, but now we will use them to automate the process in Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this example I will use &lt;code&gt;Minikube&lt;/code&gt;, assuming we already have our backup file and the 3 .sql files saved in the following path: &lt;code&gt;cockroach-data/extern/backup&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start Minikube with &lt;code&gt;--mount-string&lt;/code&gt; flag pointing to where you have the CRDB backup &lt;code&gt;/Users/user/cockroach-data/extern/backup&lt;/code&gt;
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start --mount-string="/Users/user/cockroach-data/extern/backup:/cockroach-backup" --mount
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
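
&lt;p&gt;Before creating any manifests, you can verify the mount is visible inside the Minikube VM:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube ssh "ls /cockroach-backup"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;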



&lt;h2&gt;
  
  
  Let's create k8s manifests
&lt;/h2&gt;

&lt;h2&gt;
  
  
  RBAC
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: crdb-to-postgres-migration
  namespace: crdb-to-postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crdb-to-postgres-migration-role
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crdb-to-postgres-migration-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: crdb-to-postgres-migration-role
subjects:
  - kind: ServiceAccount
    name: crdb-to-postgres-migration
    namespace: crdb-to-postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deployment
&lt;/h2&gt;

&lt;p&gt;The Deployment contains three containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;cockroach container&lt;/code&gt; (Restore CRDB Backup)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;postgres container&lt;/code&gt; (Creates the foreign server, creates foreign tables, creates local tables, imports data from CRDB, drops the foreign server, and renames local tables to their actual names).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;export-pg-dump&lt;/code&gt; (Holds the pg_dump so you can restore it into another Postgres DB)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: migration-demo
  namespace: crdb-to-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: cockroach
        image: cockroachdb/cockroach:v24.2.0
        command: ["/bin/sh", "-c"]
        args:
          - |
            cockroach start-single-node --insecure --http-port=8080 --port=26257 --store=/cockroach-data &amp;amp;

            echo "#### Waiting for CockroachDB to start... ####" &amp;amp;&amp;amp;
            while ! curl -s http://localhost:8080/_status/cluster; do
              sleep 1;
            done &amp;amp;&amp;amp;

            echo "### CockroachDB is up and running! ###" &amp;amp;&amp;amp;
            cockroach sql --insecure --execute "RESTORE DATABASE lynx_core_prod FROM LATEST IN 'nodelocal://1/backup' WITH skip_localities_check;" &amp;amp;&amp;amp;

            echo "### Restore operation completed. ####" &amp;amp;&amp;amp;
            tail -f /dev/null # Keep the container running
        volumeMounts:
        - name: cockroach-backup
          mountPath: /cockroach-data/extern/backup
        ports:
        - containerPort: 8080
        - containerPort: 26257
        resources:
          requests:
            cpu: 50m
            memory: 100Mi
          limits:
            cpu: 1000m
            memory: 2000Mi
      - name: postgres
        image: postgres:latest
        command: ["sh", "-c"]
        args:
          - |
            docker-entrypoint.sh postgres &amp;amp;

            echo " #### Waiting for PostgreSQL to start... #### " &amp;amp;&amp;amp;

            until pg_isready -h localhost; do
              sleep 15;
            done &amp;amp;&amp;amp;

            echo " #### PostgreSQL is up and running! #### " &amp;amp;&amp;amp;

            psql -U root -c "CREATE DATABASE lynx;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "CREATE EXTENSION IF NOT EXISTS postgres_fdw;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "CREATE SERVER crdb FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'localhost', port '26257', dbname 'lynx_core_prod', sslmode 'disable');" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "CREATE USER MAPPING FOR root SERVER crdb OPTIONS (user 'root', password '');" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "CREATE SCHEMA client_side;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "CREATE SCHEMA settings;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "IMPORT FOREIGN SCHEMA public FROM SERVER crdb INTO public;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "IMPORT FOREIGN SCHEMA client_side FROM SERVER crdb INTO client_side;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "IMPORT FOREIGN SCHEMA settings FROM SERVER crdb INTO settings;" &amp;amp;&amp;amp;

            echo "#### Creating foreign_tables... ####" &amp;amp;&amp;amp;
            sleep 5 &amp;amp;&amp;amp;
            psql -U root -d lynx -f /scripts/create_foreign_tables.sql &amp;amp;&amp;amp;

            echo "#### Creating local_tables... ####" &amp;amp;&amp;amp;
            sleep 5 &amp;amp;&amp;amp;
            psql -U root -d lynx -f /scripts/create_local_tables.sql &amp;amp;&amp;amp;

            echo "#### Dropping  CRDB_Server... ####" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "DROP SERVER crdb CASCADE;" &amp;amp;&amp;amp;
            sleep 2 &amp;amp;&amp;amp;

            echo "#### Renaming_tables... ####" &amp;amp;&amp;amp;
            psql -U root -d lynx -f /scripts/rename_tables.sql &amp;amp;&amp;amp;

            echo "#### All Database commands executed.... ####" &amp;amp;&amp;amp;
            sleep 10 &amp;amp;&amp;amp;

            echo "#### lynx Database dump created at /pg_dump/lynx_dump_$(date +%Y%m%d).sql ####" &amp;amp;&amp;amp;
            pg_dump -U root -d lynx -f /pg_dump/lynx_dump_$(date +%Y%m%d).sql &amp;amp;&amp;amp;

            tail -f /dev/null
        env:
        - name: POSTGRES_DB
          value: root
        - name: POSTGRES_USER
          value: root
        - name: POSTGRES_PASSWORD
          value: root
        ports:
        - containerPort: 5432
        resources:
          requests:
            cpu: 50m
            memory: 100Mi
          limits:
            cpu: 1000m
            memory: 2000Mi
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: cockroach-backup
          mountPath: /scripts
        - name: pg-dump
          mountPath: /pg_dump
      - name: export-pg-dump
        image: bitnami/kubectl:latest
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
        command:
        - /bin/sh
        - -c
        - |
          sleep 60
          echo "#### Dump ready at /pg_dump/lynx_dump_$(date +%Y%m%d).sql - copy it out from your machine with kubectl cp ####"
          tail -f /dev/null # Keep the container running
        volumeMounts:
        - name: pg-dump
          mountPath: /pg_dump
        resources:
          requests:
            cpu: 50m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 200Mi
      volumes:
      - name: postgres-data
        emptyDir: {}
      - name: cockroach-backup
        hostPath:
          path: /cockroach-backup 
          type: Directory
      - name: pg-dump
        emptyDir: {}
      serviceAccountName: crdb-to-postgres-migration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
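
&lt;p&gt;With the manifests saved (the file names below are just examples), create the namespace, apply everything, and follow the migration logs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace crdb-to-postgres
kubectl apply -f rbac.yaml -f deployment.yaml
kubectl -n crdb-to-postgres logs -f deployment/migration-demo -c postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;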



&lt;h2&gt;
  
  
  After the whole process finishes, you can export the pg_dump and restore it into another Postgres database, which is useful for development
&lt;/h2&gt;

&lt;p&gt;1- Copy pg_dump to local PC&lt;br&gt;
&lt;code&gt;kubectl cp pod-name:/pg_dump/lynx_dump_$(date +%Y%m%d).sql /Users/user/lynx_dump_$(date +%Y%m%d).sql -c export-pg-dump -n crdb-to-postgres&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;2- Create Database in postgres&lt;br&gt;
&lt;code&gt;createdb lynx_core_prod&lt;/code&gt;&lt;br&gt;
&lt;code&gt;psql -U root  -d lynx_core_prod &amp;lt;  lynx_dump_20241217.sql&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Congratulations, you migrated your CRDB🪳 database into a Postgres🐘 DB :) Pura Vida!
&lt;/h2&gt;

</description>
      <category>kubernetes</category>
      <category>coackroachdb</category>
      <category>postgressql</category>
      <category>devops</category>
    </item>
    <item>
      <title>Migrate Coackroach DB to Postgres</title>
      <dc:creator>Gerson Morales</dc:creator>
      <pubDate>Tue, 17 Dec 2024 18:27:31 +0000</pubDate>
      <link>https://dev.to/gerson_morales_3e89188d50/migrate-coackroach-db-to-postgres-mbn</link>
      <guid>https://dev.to/gerson_morales_3e89188d50/migrate-coackroach-db-to-postgres-mbn</guid>
      <description>&lt;p&gt;Migrating from CockroachDB to PostgreSQL involves several steps because CockroachDB and PostgreSQL share some similarities, but they also have important differences, particularly in terms of distributed systems, consistency models, and features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Cockroach DB cluster running on AWS [EC2] across 3 AZs.&lt;/li&gt;
&lt;li&gt;Amazon S3 (Simple Storage Service).&lt;/li&gt;
&lt;li&gt;Amazon EKS (Elastic Kubernetes Service).&lt;/li&gt;
&lt;li&gt;Cockroach DB running locally on your computer.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Important Details
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Database name 'lynx_core_prod'&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  CRDB Backup Preparation
&lt;/h2&gt;

&lt;p&gt;1- Log in to CockroachDB via Session Manager.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo -i
su - cockroach
cockroach sql --certs-dir=certs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2- Create Cockroach Backup.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;BACKUP DATABASE lynx_core_prod INTO 'userfile://defaultdb.public.userfiles_$user/lynx_core_prod-backup' AS OF SYSTEM TIME '-10s';&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;3- Save new backup [lynx_core_prod-backup] to local directory and push it to AWS S3.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cockroach userfile get --certs-dir=certs/ 'userfile://defaultdb.public.userfiles_$user/lynx_core_prod-backup'&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;



&lt;p&gt;&lt;code&gt;aws s3 cp lynx_core_prod-backup s3://gersonsplace-bucket/crdb-backup --recursive&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;At this point we already have a backup in S3 of the database we need to migrate to Postgres. Now it is time to export the DDL and make it compatible with Postgres. [This can be tricky, and you will probably need some manual work depending on how big your DB is.]&lt;/p&gt;
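
&lt;p&gt;You can confirm the upload before moving on:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 ls s3://gersonsplace-bucket/crdb-backup --recursive --human-readable --summarize
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;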

&lt;h2&gt;
  
  
  Restore CRDB backup into a local CRDB database on your Computer.
&lt;/h2&gt;

&lt;p&gt;1- Copy your database from S3 to your local computer&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws s3 cp s3://gersonsplace-bucket/crdb-backup  /crdb_to_postgres --recursive&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;2- Run a Cockroach database locally&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;brew install cockroach&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;3- Start a single-node Cockroach DB and access SQL Shell&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cockroach start-single-node --insecure --listen-addr=localhost:26258 --http-addr=localhost:8081&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;



&lt;p&gt;&lt;code&gt;cockroach sql --insecure --host=localhost:26258&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;4- Now create a new folder so you can mount the backup on your local CRDB, then restore the backup.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd /Users/gersonsplace/cockroach-data/&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;



&lt;p&gt;&lt;code&gt;mkdir -p extern/backup&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Note: Copy the backup to cockroach-data/extern/backup folder.&lt;/p&gt;
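
&lt;p&gt;For example, a sketch assuming you downloaded the backup to &lt;code&gt;/crdb_to_postgres&lt;/code&gt; as in step 1:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p /Users/gersonsplace/cockroach-data/extern/backup
cp -R /crdb_to_postgres/. /Users/gersonsplace/cockroach-data/extern/backup/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;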

&lt;p&gt;5- Validate backup on your local CRDB Database using cockroach sql.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;show backups in 'nodelocal://1/backup';&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;6- List the files inside the backup using cockroach sql (optional)&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;show backup from LATEST in 'nodelocal://1/backup';&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;7- Restore backup into your local CRDB database with &lt;code&gt;skip_localities_check&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RESTORE DATABASE lynx_core_prod  FROM LATEST IN 'nodelocal://1/backup' with skip_localities_check;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Congratulations at this point you have your Backup restored on your local CRDB database!!! 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Prepare CRDB to be migrated into Postgres
&lt;/h2&gt;

&lt;p&gt;1- Export the DDL [Data Definition Language]: create the following script and run it with &lt;code&gt;./ddl.sh&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

echo "Exporting DDL for all tables to ddl.sql..."

# Export the CREATE statements for all tables
psql -U root -h localhost -p 26258 -d lynx_core_prod -Ato ddl.sql -c "show create all tables"

echo "DDL export complete. Check ddl.sql for the results."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script above creates a &lt;code&gt;ddl.sql&lt;/code&gt; file containing the CREATE statements for every table in your database, in CockroachDB style. Here comes the tricky part, and probably some manual work; if someone knows a better approach feel free to share it, but this process worked for me.&lt;/p&gt;

&lt;p&gt;Example of how two tables look in the &lt;code&gt;ddl.sql&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfx65fslobwgi7h2iga7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfx65fslobwgi7h2iga7.png" alt="Image description" width="571" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example of the same two tables after making them Postgres compatible:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0mlbk05756ho7jior64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0mlbk05756ho7jior64.png" alt="Image description" width="529" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;As I mentioned, this can be tricky but not impossible, depending on the size of your CRDB and the number of tables.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once you have an idea of how your tables will look in the Postgres-compatible format, you can move to the next step.&lt;/p&gt;
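
&lt;p&gt;A first mechanical pass can be scripted; this is only a sketch that assumes your DDL uses common CRDB types such as &lt;code&gt;STRING&lt;/code&gt; and &lt;code&gt;INT8&lt;/code&gt; (GNU sed syntax). CRDB-specific clauses like &lt;code&gt;FAMILY&lt;/code&gt; or &lt;code&gt;STORING&lt;/code&gt; still need manual editing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: rewrite CRDB-specific types into Postgres equivalents,
# then review ddl_pg.sql by hand.
sed -E -e 's/\bSTRING\b/TEXT/g' \
       -e 's/\bINT8\b/BIGINT/g' \
       ddl.sql &gt; ddl_pg.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
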
&lt;h2&gt;
  
  
  Run Postgres Database local on your computer
&lt;/h2&gt;

&lt;p&gt;1- Install Postgres&lt;br&gt;
&lt;code&gt;brew install postgresql@14&lt;/code&gt;&lt;br&gt;
&lt;code&gt;pg_ctl -D /Users/user/postgressql -l logfile start&lt;/code&gt;&lt;br&gt;
&lt;code&gt;psql postgres&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Connect local CRDB and Postgres together
&lt;/h2&gt;

&lt;p&gt;1- Create a &lt;code&gt;foreign server&lt;/code&gt; in the Postgres database and allow it to connect to CRDB&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EXTENSION postgres_fdw;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE SERVER crdb FOREIGN DATA WRAPPER postgres_fdw OPTIONS (
    host 'localhost',
    port '26258',
    dbname 'lynx_core_prod',
    sslmode 'disable'
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE USER MAPPING FOR root SERVER crdb OPTIONS (
 user 'root', password ''
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2- Now you need to create a &lt;code&gt;foreign table&lt;/code&gt; in Postgres for each table in CRDB. Example using the two tables I mentioned above:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3h8cpl52nxij722t5a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3h8cpl52nxij722t5a3.png" alt="Image description" width="425" height="350"&gt;&lt;/a&gt;&lt;br&gt;
Note: Notice that I use the &lt;code&gt;foreign server&lt;/code&gt; to access the tables on CRDB from Postgres.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SERVER crdb OPTIONS (
    table_name 'table-name'
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
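
&lt;p&gt;For instance, a foreign table for &lt;code&gt;mounts&lt;/code&gt; could look like this (the column list is illustrative; use the columns from your own DDL):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE FOREIGN TABLE mounts (
    id UUID NOT NULL,
    name TEXT,
    created_at TIMESTAMPTZ
)
SERVER crdb
OPTIONS (schema_name 'public', table_name 'mounts');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;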



&lt;p&gt;3- Now create a local table so you can insert the data from the foreign tables. I used the prefix &lt;code&gt;local_&lt;/code&gt; when creating each local table so I can &lt;code&gt;ALTER&lt;/code&gt; it back once the import finishes. Example using the two tables I mentioned above:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fcnzsviq5cu5eh8oesp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fcnzsviq5cu5eh8oesp.png" alt="Image description" width="618" height="526"&gt;&lt;/a&gt;&lt;/p&gt;
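
&lt;p&gt;Continuing the &lt;code&gt;mounts&lt;/code&gt; example (again with illustrative columns), the local table and the data import look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE local_mounts (
    id UUID NOT NULL,
    name TEXT,
    created_at TIMESTAMPTZ
);

-- Pull the data across the foreign server into the local table
INSERT INTO local_mounts SELECT * FROM mounts;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;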

&lt;p&gt;4- At this point you can drop the &lt;code&gt;foreign server&lt;/code&gt; objects&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DROP USER MAPPING FOR root SERVER crdb;
DROP SERVER crdb CASCADE;
DROP EXTENSION postgres_fdw;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5- After &lt;code&gt;importing data&lt;/code&gt; into the local tables and dropping the &lt;code&gt;foreign server&lt;/code&gt;, it is time to use &lt;code&gt;ALTER&lt;/code&gt; to rename each table to its correct name. Example using the two tables I mentioned above: &lt;br&gt;
&lt;code&gt;ALTER TABLE public.local_schema_migrations RENAME TO schema_migrations;&lt;/code&gt;&lt;br&gt;
&lt;code&gt;ALTER TABLE public.local_mounts RENAME TO mounts;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;6- Finally, check your local Postgres DB: it should now match the CRDB database, including all data.&lt;/p&gt;
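
&lt;p&gt;A quick way to check is to compare table lists and row counts on both sides (using &lt;code&gt;mounts&lt;/code&gt; as an example table):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- In Postgres (psql)
\dt
SELECT count(*) FROM mounts;

-- In CockroachDB (cockroach sql)
SELECT count(*) FROM mounts;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;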

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd8hua86uqknjowithk6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd8hua86uqknjowithk6.png" alt="Image description" width="114" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's make things more interesting and run the same process in Kubernetes
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Minikube, or EKS with IRSA to access S3 so you can download the backup using an initContainer and &lt;code&gt;emptyDir&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CRDB Backup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Three files: &lt;code&gt;create_local_tables.sql&lt;/code&gt;, &lt;code&gt;create_foreign_tables.sql&lt;/code&gt; and &lt;code&gt;rename_tables.sql&lt;/code&gt;. These are the same files explained earlier for migrating CRDB data into Postgres, but now we will use them to automate the process in Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this example I will use &lt;code&gt;Minikube&lt;/code&gt;, assuming we already have our backup file and the three .sql files saved in the following path: &lt;code&gt;cockroach-data/extern/backup&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start Minikube with &lt;code&gt;--mount-string&lt;/code&gt; flag pointing to where you have the CRDB backup &lt;code&gt;/Users/user/cockroach-data/extern/backup&lt;/code&gt;
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start --mount-string="/Users/user/cockroach-data/extern/backup:/cockroach-backup" --mount
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
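
&lt;p&gt;Before creating any manifests, you can verify the mount is visible inside the Minikube VM:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube ssh "ls /cockroach-backup"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;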



&lt;h2&gt;
  
  
  Let's create k8s manifests
&lt;/h2&gt;

&lt;h2&gt;
  
  
  RBAC
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: crdb-to-postgres-migration
  namespace: crdb-to-postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crdb-to-postgres-migration-role
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crdb-to-postgres-migration-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: crdb-to-postgres-migration-role
subjects:
  - kind: ServiceAccount
    name: crdb-to-postgres-migration
    namespace: crdb-to-postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deployment
&lt;/h2&gt;

&lt;p&gt;The Deployment contains three containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;cockroach container&lt;/code&gt; (Restore CRDB Backup)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;postgres container&lt;/code&gt; (Creates the foreign server, creates foreign tables, creates local tables, imports data from CRDB, drops the foreign server, and renames local tables to their actual names).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;export-pg-dump&lt;/code&gt; (Holds the pg_dump so you can restore it into another Postgres DB)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: migration-demo
  namespace: crdb-to-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: cockroach
        image: cockroachdb/cockroach:v24.2.0
        command: ["/bin/sh", "-c"]
        args:
          - |
            cockroach start-single-node --insecure --http-port=8080 --port=26257 --store=/cockroach-data &amp;amp;

            echo "#### Waiting for CockroachDB to start... ####" &amp;amp;&amp;amp;
            while ! curl -s http://localhost:8080/_status/cluster; do
              sleep 1;
            done &amp;amp;&amp;amp;

            echo "### CockroachDB is up and running! ###" &amp;amp;&amp;amp;
            cockroach sql --insecure --execute "RESTORE DATABASE lynx_core_prod FROM LATEST IN 'nodelocal://1/backup' WITH skip_localities_check;" &amp;amp;&amp;amp;

            echo "### Restore operation completed. ####" &amp;amp;&amp;amp;
            tail -f /dev/null # Keep the container running
        volumeMounts:
        - name: cockroach-backup
          mountPath: /cockroach-data/extern/backup
        ports:
        - containerPort: 8080
        - containerPort: 26257
        resources:
          requests:
            cpu: 50m
            memory: 100Mi
          limits:
            cpu: 1000m
            memory: 2000Mi
      - name: postgres
        image: postgres:latest
        command: ["sh", "-c"]
        args:
          - |
            docker-entrypoint.sh postgres &amp;amp;

            echo " #### Waiting for PostgreSQL to start... #### " &amp;amp;&amp;amp;

            until pg_isready -h localhost; do
              sleep 15;
            done &amp;amp;&amp;amp;

            echo " #### PostgreSQL is up and running! #### " &amp;amp;&amp;amp;

            psql -U root -c "CREATE DATABASE lynx;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "CREATE EXTENSION IF NOT EXISTS postgres_fdw;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "CREATE SERVER crdb FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'localhost', port '26257', dbname 'lynx_core_prod', sslmode 'disable');" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "CREATE USER MAPPING FOR root SERVER crdb OPTIONS (user 'root', password '');" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "CREATE SCHEMA client_side;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "CREATE SCHEMA settings;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "IMPORT FOREIGN SCHEMA public FROM SERVER crdb INTO public;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "IMPORT FOREIGN SCHEMA client_side FROM SERVER crdb INTO client_side;" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "IMPORT FOREIGN SCHEMA settings FROM SERVER crdb INTO settings;" &amp;amp;&amp;amp;

            echo "#### Creating foreign_tables... ####" &amp;amp;&amp;amp;
            sleep 5 &amp;amp;&amp;amp;
            psql -U root -d lynx -f /scripts/create_foreign_tables.sql &amp;amp;&amp;amp;

            echo "#### Creating local_tables... ####" &amp;amp;&amp;amp;
            sleep 5 &amp;amp;&amp;amp;
            psql -U root -d lynx -f /scripts/create_local_tables.sql &amp;amp;&amp;amp;

            echo "#### Dropping  CRDB_Server... ####" &amp;amp;&amp;amp;
            psql -U root -d lynx -c "DROP SERVER crdb CASCADE;" &amp;amp;&amp;amp;
            sleep 2 &amp;amp;&amp;amp;

            echo "#### Renaming_tables... ####" &amp;amp;&amp;amp;
            psql -U root -d lynx -f /scripts/rename_tables.sql &amp;amp;&amp;amp;

            echo "#### All Database commands executed.... ####" &amp;amp;&amp;amp;
            sleep 10 &amp;amp;&amp;amp;

            echo "#### lynx Database dump created at /pg_dump/lynx_dump_$(date +%Y%m%d).sql ####" &amp;amp;&amp;amp;
            pg_dump -U root -d lynx -f /pg_dump/lynx_dump_$(date +%Y%m%d).sql &amp;amp;&amp;amp;

            tail -f /dev/null
        env:
        - name: POSTGRES_DB
          value: root
        - name: POSTGRES_USER
          value: root
        - name: POSTGRES_PASSWORD
          value: root
        ports:
        - containerPort: 5432
        resources:
          requests:
            cpu: 50m
            memory: 100Mi
          limits:
            cpu: 1000m
            memory: 2000Mi
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: cockroach-backup
          mountPath: /scripts
        - name: pg-dump
          mountPath: /pg_dump
      - name: export-pg-dump
        image: bitnami/kubectl:latest
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
        command:
        - /bin/sh
        - -c
        - |
          sleep 60
          echo "#### Dump ready at /pg_dump/lynx_dump_$(date +%Y%m%d).sql - copy it out from your machine with kubectl cp ####"
          tail -f /dev/null # Keep the container running
        volumeMounts:
        - name: pg-dump
          mountPath: /pg_dump
        resources:
          requests:
            cpu: 50m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 200Mi
      volumes:
      - name: postgres-data
        emptyDir: {}
      - name: cockroach-backup
        hostPath:
          path: /cockroach-backup 
          type: Directory
      - name: pg-dump
        emptyDir: {}
      serviceAccountName: crdb-to-postgres-migration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
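
&lt;p&gt;With the manifests saved (the file names below are just examples), create the namespace, apply everything, and follow the migration logs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace crdb-to-postgres
kubectl apply -f rbac.yaml -f deployment.yaml
kubectl -n crdb-to-postgres logs -f deployment/migration-demo -c postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;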



&lt;h2&gt;
  
  
  After the whole process finishes, you can export the pg_dump and restore it into another Postgres database, which is useful for development
&lt;/h2&gt;

&lt;p&gt;1- Copy pg_dump to local PC&lt;br&gt;
&lt;code&gt;kubectl cp pod-name:/pg_dump/lynx_dump_$(date +%Y%m%d).sql /Users/user/lynx_dump_$(date +%Y%m%d).sql -c export-pg-dump -n crdb-to-postgres&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;2- Create Database in postgres&lt;br&gt;
&lt;code&gt;createdb lynx_core_prod&lt;/code&gt;&lt;br&gt;
&lt;code&gt;psql -U root  -d lynx_core_prod &amp;lt;  lynx_dump_20241217.sql&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Congratulations, you migrated a CRDB database to Postgres! Pura Vida :)
&lt;/h2&gt;

</description>
    </item>
  </channel>
</rss>
