Have you ever faced this situation: your code works fine in local development, but there are errors on staging or production?
I always try to run the currently deployed version via Docker on my local machine to see whether it works fine, and I investigate the running Docker container via SSH.
It would be easy to run your Docker image locally, attach a shell to it, and see what is happening there with just docker run --rm -it -p 8080:8080 ....
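For example, a minimal sketch (my-app:latest is a placeholder image name, and your image's shell may be bash instead of sh):
docker run --rm -it -p 8080:8080 --entrypoint sh my-app:latest
Or, to attach a shell to a container that is already running:
docker exec -it my-running-container sh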
But it gets more complex if you use AWS EC2 and AWS SSM.
Let me add some salt to it: suppose you (or your company) use custom deployment tools with temporary access to AWS.
As you know, AWS Secrets Manager helps you protect the secrets needed to access your applications, services, and IT resources. It is more secure, and it lets you reuse the same secrets across different apps easily.
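For example, an application (or you, from a shell) can fetch a secret at runtime like this; my-app/db-password is a hypothetical secret name:
aws secretsmanager get-secret-value --secret-id my-app/db-password --query SecretString --output text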
The issue arises when you want to run your Docker image locally with the secrets you stored in AWS: the container needs valid AWS credentials to fetch them.
Solution:
The AWS CLI stores its credentials in the file $HOME/.aws/credentials (see the AWS documentation for details).
Even if you use custom tools to log in to or deploy to AWS, this file on your machine holds your credentials.
This file normally looks like this:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[CUSTOM_TEST]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
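You can check which profiles the AWS CLI sees with (AWS CLI v2):
aws configure list-profiles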
You can choose which profile the AWS CLI uses by setting the environment variable AWS_PROFILE.
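For example (CUSTOM_TEST is the profile from the file above, and aws sts get-caller-identity is a quick way to verify that the credentials work):
export AWS_PROFILE=CUSTOM_TEST
aws sts get-caller-identity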
With this feature you can run your Docker image easily: mount your AWS credentials file into the container and set AWS_PROFILE:
docker run --rm -it -p 8080:8080/tcp \
-v $HOME/.aws/credentials:/root/.aws/credentials:ro \
--env "NODE_ENV=production" \
--env "AWS_PROFILE=CUSTOM_TEST" \
--env "AWS_DEFAULT_REGION=eu-west-1" \
--dns 8.8.8.8 \
my-docker-image:latest
Note: I have assumed you are using an Amazon Linux image where the app runs as root; you may need to change -v $HOME/.aws/credentials:/root/.aws/credentials:ro
based on the home directory of your image's user.
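For example, if your image runs as the node user (a hypothetical case; adjust the path to your image's home directory), the same command would look like this:
docker run --rm -it -p 8080:8080/tcp \
-v $HOME/.aws/credentials:/home/node/.aws/credentials:ro \
--env "NODE_ENV=production" \
--env "AWS_PROFILE=CUSTOM_TEST" \
--env "AWS_DEFAULT_REGION=eu-west-1" \
my-docker-image:latest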