exanubes

Originally published at exanubes.com

Connecting to RDS via Parameter Store config

In this article we will go over creating and connecting to a database from an application deployed to ECS Fargate containers. First we will create an RDS instance and store the database credentials in SSM Parameter Store, then we will update the CI/CD pipeline to run database migrations whenever a new version of the application is deployed. Last but not least, we will use the AWS SDK to communicate with Systems Manager, retrieve the database credentials and connect to the database.

Here's a diagram of what we're building:

[Diagram: ECS application with RDS instance and CI/CD pipeline]

We're going to pick up where we left off in ECS with CI/CD Pipeline, with a Fargate cluster behind an ALB and a CI/CD pipeline for automatic deployments. In this article we will focus on implementing the second private subnet with an RDS instance whose security group allows inbound traffic on port 5432 from within the VPC. This way, CodeBuild will be able to run migration queries and the Fargate container - our app - will be able to communicate with the database. In order for the application to gain access to the database, we will grab credentials from the SSM Parameter Store.

Go to GitHub if you're looking for the finished code. To follow along, clone the starter repo:

git clone -b start git@github.com:exanubes/connecting-to-rds-via-ssm-parameter-store-config.git

RDS Instance

Creating an RDS instance is quite straightforward. The main thing to keep in mind while doing it is to place the instance within our own VPC.

// stacks/rds.stack.ts
interface Props extends StackProps {
  vpc: IVpc;
  dbConfig: Pick<DatabaseConfig, 'database' | 'username' | 'password'>;
  securityGroup: SecurityGroup;
}

export class RdsStack extends Stack {
  public readonly db: DatabaseInstance;

  constructor(scope: Construct, id: string, props: Props) {
    super(scope, id, props);

    this.db = new DatabaseInstance(this, 'exanubes-database', {
      engine: DatabaseInstanceEngine.POSTGRES,
      vpc: props.vpc,
      credentials: Credentials.fromPassword(
        props.dbConfig.username,
        SecretValue.plainText(props.dbConfig.password)
      ),
      databaseName: props.dbConfig.database,
      storageType: StorageType.STANDARD,
      instanceType: InstanceType.of(
        InstanceClass.BURSTABLE3,
        InstanceSize.SMALL
      ),
      securityGroups: [props.securityGroup],
      parameterGroup: ParameterGroup.fromParameterGroupName(
        this,
        'postgres-instance-group',
        'postgresql13'
      ),
    });
  }
}

Here we're creating a new database instance and assigning it to a property on the RdsStack. This is also where we define our VPC as the location of our database. AWS will put it in a private subnet by default; however, if you wish to have more control over placement, you can use the subnetGroup and vpcSubnets properties.
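For instance, if you wanted to pin the database to isolated subnets explicitly, a minimal sketch could look like this - assuming SubnetType is imported from aws-cdk-lib/aws-ec2:

// stacks/rds.stack.ts - optional explicit subnet placement
vpcSubnets: {
  subnetType: SubnetType.PRIVATE_ISOLATED,
},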

Moving on, we define the credentials and name for the database, as well as the type and size of the instance. We're using the cheapest options with the standard storage type. Last but not least, we're pointing at the Postgres version inside RDS parameter groups that we want to use for our database.

Should you get an error about an invalid or non-existent parameterGroup, you will have to go to RDS > Parameter groups in the AWS console and create it yourself.
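If you prefer the CLI, creating the parameter group could look roughly like this - assuming the postgres13 family matches your engine version:

aws rds create-db-parameter-group \
  --db-parameter-group-name postgresql13 \
  --db-parameter-group-family postgres13 \
  --description "Parameter group for Postgres 13"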

CI/CD Pipeline

With the RDS instance created, we still need a way to synchronize the database - create tables, add columns, etc. - and obviously we want to automate it. To accomplish that, we will add an additional step to the existing CodePipeline.

// stacks/pipeline.stack.ts

private getMigrationSpec() {
    return BuildSpec.fromObject({
      version: "0.2",
      env: {
        shell: "bash",
      },
      phases: {
        install: {
          commands: ["(cd ./backend && npm install)"],
        },
        build: {
          commands: [
            "./backend/node_modules/.bin/sequelize db:migrate --debug --migrations-path ./backend/db/migrations --url postgresql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:5432/${DB_NAME}",
          ],
        },
      },
    });
 }

First off, we need a buildspec that installs our dependencies. Then we use the Sequelize CLI to run a migration command against our database, with the connection string assembled from environment variables.
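For reference, a migration picked up by this command could look something like the sketch below - a hypothetical users table, not necessarily what's in the repo, shown in TypeScript for consistency even though sequelize-cli executes plain JavaScript migrations unless configured otherwise:

// backend/db/migrations/20220101000000-create-users.ts - hypothetical example
import { QueryInterface, DataTypes } from 'sequelize';

export default {
  // create the users table when migrating up
  async up(queryInterface: QueryInterface) {
    await queryInterface.createTable('users', {
      id: { type: DataTypes.INTEGER, autoIncrement: true, primaryKey: true },
      email: { type: DataTypes.STRING, allowNull: false },
      createdAt: { type: DataTypes.DATE, allowNull: false },
      updatedAt: { type: DataTypes.DATE, allowNull: false },
    });
  },
  // drop it again when rolling back
  async down(queryInterface: QueryInterface) {
    await queryInterface.dropTable('users');
  },
};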

// stacks/pipeline.stack.ts
const migrationProject = new Project(this, 'migration-project', {
  projectName: 'migration-project',
  securityGroups: [props.securityGroup],
  vpc: props.vpc,
  buildSpec: this.getMigrationSpec(),
  source,
  environment: {
    buildImage: LinuxBuildImage.AMAZON_LINUX_2_ARM_2,
    privileged: true,
  },
  environmentVariables: {
    DB_USER: {
      value: props.dbConfig.username,
    },
    DB_PASSWORD: {
      value: props.dbConfig.password,
    },
    DB_HOST: {
      value: props.dbConfig.hostname,
    },
    DB_PORT: {
      value: props.dbConfig.port,
    },
    DB_NAME: {
      value: props.dbConfig.database,
    }
  },
});

Now we define a separate Project for the migration step. Most of the configuration is the same as in ECS with CI/CD Pipeline; what changed is that we now need to place the project inside our own VPC so it can reach the RDS instance. We're also attaching a security group so it can communicate with the database - more on that later. This is also where we pass in all the relevant environment variables, i.e. the database credentials.

// stacks/pipeline.stack.ts
  const pipelineActions = {
    //...
    migrate: new CodeBuildAction({
      actionName: 'dbMigrate',
      project: migrationProject,
      input: artifacts.source,
    }),
  };

  const pipeline = new Pipeline(this, 'DeployPipeline', {
    pipelineName: `exanubes-pipeline`,
    stages: [
      { stageName: 'Source', actions: [pipelineActions.source] },
      { stageName: 'Build', actions: [pipelineActions.build] },
      { stageName: 'Migrate', actions: [pipelineActions.migrate] },
      {
        stageName: 'Deploy',
        actions: [pipelineActions.deploy],
      },
    ],
  });
}

To finish it off, we define a CodeBuildAction using the new project and the source artifact, and finally add the Migrate stage to the pipeline.

Update build project

Now, because the app's Dockerfile has changed slightly, we have to update the build project and its buildspec.

First, we add the AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY environment variables to the buildProject.

// stacks/pipeline.stack.ts
  AWS_ACCESS_KEY: {
    value: AWS_ACCESS_KEY,
  },
  AWS_SECRET_ACCESS_KEY: {
    value: AWS_SECRET_ACCESS_KEY,
  }

And now we have to update the build property of the buildSpec to account for the new environment variables.

// stacks/pipeline.stack.ts
 build: {
    commands: [
      "echo Build started on `date`",
      "echo Build Docker image",
      "docker build -f ${CODEBUILD_SRC_DIR}/backend/Dockerfile --build-arg region=${AWS_STACK_REGION} --build-arg clientId=${AWS_ACCESS_KEY} --build-arg clientSecret=${AWS_SECRET_ACCESS_KEY} -t ${REPOSITORY_URI}:latest ./backend",
      'echo Running "docker tag ${REPOSITORY_URI}:latest ${REPOSITORY_URI}:${IMAGE_TAG}"',
      "docker tag ${REPOSITORY_URI}:latest ${REPOSITORY_URI}:${IMAGE_TAG}",
    ],
  }

The only real difference here is that we pass --build-arg flags to the docker build command to set the relevant environment variables required to establish a connection with AWS.
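For context, the part of the Dockerfile consuming those args could look something like this hypothetical fragment - the actual file in the repo may differ:

# backend/Dockerfile (hypothetical fragment)
ARG region
ARG clientId
ARG clientSecret
# re-export the build args as environment variables so the app can read them at runtime
ENV region=$region
ENV clientId=$clientId
ENV clientSecret=$clientSecret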

SSM Parameter Store

In order to connect to the database, we need to know its location and credentials. I have opted for Parameter Store as I find it a very convenient way to organize environment variables.

// stacks/parameter-store.stack.ts
interface Props extends StackProps {
  dbConfig: DatabaseConfig;
}

export class ParameterStoreStack extends Stack {
  constructor(scope: Construct, id: string, props: Props) {
    super(scope, id, props);
    new SecureStringParameter(this, 'database-password', {
      parameterName: '/production/database/password',
      stringValue: props.dbConfig.password,
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
    new StringParameter(this, 'database-user', {
      parameterName: '/production/database/username',
      stringValue: props.dbConfig.username,
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
    new StringParameter(this, 'database-hostname', {
      parameterName: '/production/database/hostname',
      stringValue: props.dbConfig.hostname,
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
    new StringParameter(this, 'database-port', {
      parameterName: '/production/database/port',
      stringValue: String(props.dbConfig.port),
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
    new StringParameter(this, 'database-socket-address', {
      parameterName: '/production/database/socketAddress',
      stringValue: props.dbConfig.socketAddress,
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
    new StringParameter(this, 'database-database', {
      parameterName: '/production/database/name',
      stringValue: props.dbConfig.database,
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
  }
}

All we do here is create string parameters for the database credentials and location. The password is created as a secure string, meaning it is encrypted using AWS KMS.

SecureStringParameter is imported from @exanubes/aws-cdk-ssm-secure-string-parameter because AWS CloudFormation does not support creating SecureString parameters. This construct uses a Lambda and the AWS SDK to create the secure string parameter instead.

Access and firewall management

All the resources are in place: we have an RDS instance, a migration stage in the CI/CD pipeline and parameters for the application. However, we still need to handle access permissions to the database for both the pipeline stage and the Fargate service. We also need to configure the firewall to allow traffic from those origins. This can be managed with security groups.

// stacks/security-group.stack.ts

interface Props extends StackProps {
  vpc: IVpc;
}

export class SecurityGroupStack extends Stack {
  databaseSg: SecurityGroup;
  databaseAccessSg: SecurityGroup;

  constructor(scope: Construct, id: string, props: Props) {
    super(scope, id, props);
    this.databaseAccessSg = new SecurityGroup(this, 'database-access-sg', {
      vpc: props.vpc,
      description:
        'Security group for resources that need access to rds database instance',
    });

    this.databaseSg = new SecurityGroup(this, 'rds-allow-postgres-traffic', {
      vpc: props.vpc,
      description: 'Security group for rds database instance',
    });
    this.databaseSg.addIngressRule(
      this.databaseAccessSg,
      Port.tcp(5432),
      `Allow inbound connection on port 5432 for resources with security group: "${this.databaseAccessSg.securityGroupId}"`
    );
  }
}

Here we're creating two security groups in our VPC. The databaseSg opens up port 5432 on the database instance, using databaseAccessSg as the source. This way, every resource that has databaseAccessSg assigned will have access to the database, and if I want to revoke access, I can just remove the security group from that service.

This is not all though. We still need to grant connection permissions to the instances.

// stacks/pipeline.stack.ts
props.rds.grantConnect(migrationProject.grantPrincipal);
// stacks/elastic-container.stack.ts
const taskRole = new Role(this, 'exanubes-fargate-application-role', {
  assumedBy: new ServicePrincipal('ecs-tasks.amazonaws.com'),
});
props.rds.grantConnect(taskRole);
const taskDefinition = new FargateTaskDefinition(
  this,
  'fargate-task-definition',
  {
    runtimePlatform: {
      cpuArchitecture: CpuArchitecture.ARM64,
      operatingSystemFamily: OperatingSystemFamily.LINUX,
    },
    taskRole,
  }
);
// stacks/elastic-container.stack.ts
this.service.connections.allowToDefaultPort(props.rds);

First we go into the pipeline stack and grant connect permissions to our migration project's principal. Then we have to do the same inside the elastic container stack. To do this, we create a task role for the ecs-tasks service principal, which defines which entity - user, app, organization, service, etc. - can assume this role. Then we grant connect permissions to this role and use it in the Fargate task definition construct. Lastly, we also have to allow the Fargate service to establish a connection with the RDS instance.

You can find a list of service principals in this gist

Due to a circular dependency error when using a security group, we have to allow the RDS connection manually via the .allowToDefaultPort() method.

Connecting to RDS

Now that everything is set up, we can use the AWS SDK to load the database credentials.

// backend/src/config/config.provider.ts

async () => {
  const isProd = process.env.NODE_ENV === 'production';
  if (!isProd) {
    return config;
  }
  const client = new SSMClient({
    region: String(process.env.region),
    credentials: {
      accessKeyId: String(process.env.clientId),
      secretAccessKey: String(process.env.clientSecret),
    },
  });
  const command = new GetParametersByPathCommand({
    Path: '/production',
    Recursive: true,
    WithDecryption: true,
  });
  const result = await client.send(command);
  return transformParametersIntoConfig(result.Parameters || []);
};

Here, we set up the client with the credentials passed to the Dockerfile in the Build stage of our CI/CD pipeline. Then we load all the parameters prefixed with /production and transform them into a simpler-to-use data structure. I used the WithDecryption option in order to get the decrypted value of the database password parameter.
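The transformParametersIntoConfig helper isn't shown above; a minimal sketch of what it might do - the actual implementation may differ - is to key each value by the last segment of its parameter path:

// backend/src/config/config.provider.ts - hypothetical helper
import { Parameter } from '@aws-sdk/client-ssm';

function transformParametersIntoConfig(parameters: Parameter[]) {
  return parameters.reduce<Record<string, string>>((config, parameter) => {
    // '/production/database/password' -> 'password'
    const key = parameter.Name?.split('/').pop();
    if (key && parameter.Value !== undefined) {
      config[key] = parameter.Value;
    }
    return config;
  }, {});
}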

Deployment

While deployment is in progress, remember to push the initial image to ECR, otherwise the deployment will hang on ElasticContainerStack. Once all the stacks have been deployed, we need to trigger the pipeline, which will run the migration, after which we should be able to see data on the /users endpoint. That tells us we're connected!

npm run build && npm run cdk:deploy -- --all
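As mentioned, the initial image has to be pushed to ECR manually. Roughly, that could look like this - substitute your own account id, region and repository name, plus any --build-arg flags your Dockerfile expects:

aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.eu-central-1.amazonaws.com
docker build -t <account-id>.dkr.ecr.eu-central-1.amazonaws.com/<repository>:latest ./backend
docker push <account-id>.dkr.ecr.eu-central-1.amazonaws.com/<repository>:latest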

Before deploying, make sure that all the secrets and ARNs are your own. Double-check the src/config.ts and .env files.

Don't forget to tear down the infrastructure to avoid unnecessary costs:

npm run cdk:destroy -- --all

Summary

In this article we have gone through setting up a database instance, configuring its user, name, size and engine. Then, we have used it in the CI/CD pipeline in order to run a migration script as part of the automated deployment strategy. To be able to connect to the database from our application, we have saved all the relevant database information in SSM Parameter Store and used the AWS SDK in the app to load the config. Lastly, we have opened up port 5432 on the RDS instance to our migration stage and ECS service and granted them connection permissions, following the principle of least privilege.
