
Sagar Jauhari

3 ways to handle secrets in AWS ECS tasks

No matter how much time and effort we spend on application security, it never feels like enough. But simple workflows for things like secret management, key rotation and password expiration go a long way in making our applications and infrastructure resilient against the obvious attack vectors.

If you have used AWS Elastic Container Service (ECS) / Fargate, you’ve definitely stumbled upon the issue of passing secrets to the running container. This post describes 3 methods to pass secrets as environment variables to applications running as AWS ECS Tasks.

Method 1 — Pass secrets as environment variables in the ECS Task Definition

  • Good: Minimum infrastructure needed
  • Bad: ECS Task Definition tightly coupled with secrets. A new deployment is needed if you change the value of the secret
  • Bad: Secrets visible to anyone with access to the ECS Task Definition
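
For reference, a minimal sketch of what this looks like in Terraform. The resource name, image and secret value here are made up for illustration, and the container definition is trimmed down to the relevant clause:

resource "aws_ecs_task_definition" "blog_plain_env" {
  family = "blog"

  # The secret is written directly into the task definition as a plain
  # environment variable (names, image and value below are made up)
  container_definitions = <<DEFINITION
[
  {
    "name": "blog",
    "image": "blog:latest",
    "environment": [
      {
        "name": "DATABASE_PASSWORD",
        "value": "super-secret-value"
      }
    ]
  }
]
DEFINITION
}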

Method 2 — Fetch latest secrets using an entry-point script

Using a Docker entry-point script, you can fetch secrets from wherever you want before the application boots.

  • Good: Secrets are visible to the application only at runtime, and they are not part of the ECS Task Definition
  • Bad: The logic to fetch secrets is embedded repeatedly in different applications. So, while the secrets are (hopefully) maintained in your infrastructure configuration, each application needs to know how and where to fetch them from. This violates both the DRY and Single Responsibility principles.

A lot of repositories on GitHub use this approach. An example of fetching secrets in this way from AWS S3 is also described here.

Method 3 — Use the ‘valueFrom’ attribute in the ECS Task Definition

This method puts the burden of fetching secrets on AWS, and the application container can expect the environment variables to exist by the time the application runs.

  • Good: Application doesn’t need to know how to fetch secrets — it expects them to be available as environment variables during run-time.
  • Good: Supports both AWS SSM Parameter Store and AWS Secrets Manager (currently)

My Recommendation

The method I would recommend is to use the ‘valueFrom’ attribute to inject KMS-encrypted AWS SSM parameters directly into the running task. This way the secrets are neither visible to someone viewing the ECS Task Definition in the AWS Console nor saved in plain text anywhere.

Standardizing the path you use for parameters in the AWS SSM Parameter Store is a great idea because it helps you limit the scope of access to these secrets. The path formats available to you will depend on how your infrastructure is designed. The exact format matters less than standardizing on some format, rather than letting each service specify its own. So, overall, here are the 3 steps I recommend:

Step 1: Decide on a path format for your AWS SSM Parameter Store parameters. Examples:

/{environment}/{service}/DATABASE_PASSWORD
/{service}/{environment}.DATABASE_PASSWORD

Step 2: Add your key-value pairs to the AWS SSM Parameter Store and use a KMS key to encrypt them
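
A minimal sketch of Step 2 in Terraform, assuming you already have a KMS key. The parameter path follows the first example format above; the key alias and the variable supplying the value are placeholders:

resource "aws_ssm_parameter" "blog_database_password" {
  name  = "/staging/blog/DATABASE_PASSWORD"
  type  = "SecureString"

  # var.database_password is a placeholder for however you supply the value
  value = "${var.database_password}"

  # Encrypt with your own KMS key instead of the default AWS-managed SSM key
  key_id = "alias/blog-secrets"
}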

Step 3: Use the ‘secrets/valueFrom’ attribute to inject AWS SSM parameters directly into the running task

And here’s the Terraform configuration to do this. These files only contain the clauses needed for secrets — you still need to fill in the rest of the details to actually create an ECS task.
Terraform file (partial)

data "template_file" "blog" {
  template = "${file("template.json")}"

  vars {
    app_name    = "blog"
    environment = "staging"
    region      = "us-east-1"
    account     = "12341234"
  }
}

resource "aws_ecs_task_definition" "blog" {
  container_definitions = "${data.template_file.blog.rendered}"

  family             = "blog"
  task_role_arn      = "${var.iam_role_arn}"
  execution_role_arn = "${var.iam_role_arn}"
}

Template file:

[
    {
        "secrets": [
            {
                "name": "DATABASE_PASSWORD",
                "valueFrom": "arn:aws:ssm:${region}:${account}:parameter/${environment}/${appName}/DATABASE_PASSWORD"
            }
        ]
    }
]

Note: if you’re using ‘secrets/valueFrom’ in your ECS Task Definition, you also need to give it an ‘executionRoleArn’ which has permission to read the SSM parameters (and decrypt them with your KMS key) needed for that scope.
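
For example, here is a sketch of such a policy in Terraform, attached to the execution role used above. The policy name, the role and KMS key variables are placeholders; the region and account match the example vars earlier:

resource "aws_iam_role_policy" "blog_read_secrets" {
  name = "blog-read-ssm-params"
  role = "${var.iam_role_id}" # the execution role's name/id, not its ARN

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:GetParameters",
      "Resource": "arn:aws:ssm:us-east-1:12341234:parameter/staging/blog/*"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "${var.kms_key_arn}"
    }
  ]
}
POLICY
}

Scoping the Resource to your standardized parameter path is what lets you limit each service to its own secrets, which is the point of Step 1.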


This post was originally published on Medium

Top comments (6)

Calmlee

You should be using data to fetch the full ARN, not relying on parsing.

data "aws_ssm_parameter" "app_database_password" {
  name = "blog/DATABASE_PASSWORD"
}

The reason you do this is that you cannot create an expandable JSON template file in Terraform. The way you reference the variable is:

    "secrets": [
      {
        "name": "NAME_YOUR_ENV_VAR",
        "valueFrom": "${data.aws_ssm_parameter.app_database_password.arn}"
      }
    ]

This is much cleaner, and you don't need to pass around account, region, etc. You declare this much higher up.

Farrukh Naeem

@sagarjauhari, this is a very good article. Just one question: how are these variables referenced by the application itself? Like, how can I pass the values of these variables to my config file? With your example, it seems like the secrets are exposed to the container, but how are we going to reference them in the application code itself?

Sagar Jauhari

Good question. If you follow one of these approaches, your Docker application would be able to access these variables in the environment. If it is a Python app, you can do os.getenv(), or for Golang, value, exists := os.LookupEnv(key).

Bharathkumar

@sagarjauhari worth a read!!!

MrfksIv

What happens if a value changes in the secrets manager?
Should a redeployment be forced manually on the ECS task to pick up the new value?