This article shares my experience securing database credentials with AWS Secrets Manager. Where and how to store database configuration has always been a challenge.
While setting up a CI/CD pipeline for a Golang-based project, we wanted to store the RDS credentials in a secure location. Thanks to AWS Secrets Manager, we were able to overcome that challenge: Secrets Manager stores the RDS credentials, and the application simply calls Secrets Manager from the code to retrieve them. Let's look at the architecture below to see how the services are connected.
The simple setup above demonstrates how all the services connect. We do not store any database credentials in the code repository. Suppose that while the code is building I need to run some update queries against the database; then I need the database credentials to connect. For situations like that, you can configure an environment variable in the CodeBuild configuration as below.
In this example, however, I don't have any pre-build queries to run. Let's go step by step through configuring Secrets Manager for the RDS credentials and retrieving them from your application on an ECS cluster. In summary, my task needs to connect to the RDS database, but we are not storing any database credentials inside the code.
First, go to Secrets Manager and enter your RDS information as below, then select the correct RDS instance; in my example I have only one RDS instance.
Once you have entered the information, click Next, and it will take you to the screen below.
Here you have to give your credentials a secret name; in my example I used “dev/aws-secret-manager-test/postgres”. Then click Next.
This screen is really interesting: if your organization has security policies that require password rotation, this is a great option to use.
Finally, on the review page you can see the secret name highlighted, and AWS provides sample code blocks showing how to retrieve your secret in different programming languages. For my example I am using Golang.
Below are the two functions where I access the RDS secret and test the connection to the database. If you are running on a local machine or server, use the Docker build.
func main() {
	var err error
	databaseAuth := getDatabaseAuth()
	psql := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
		databaseAuth.Host, databaseAuth.Port, databaseAuth.UserName, databaseAuth.Password, os.Getenv("DB_NAME"))
	// sql.Open only validates its arguments; no connection is made yet.
	DB, err = sql.Open("postgres", psql)
	if err != nil {
		Logger.AddLogger(Logger.ERROR, "Database driver error")
		panic(err)
	}
	// Ping forces a real round trip, verifying the retrieved credentials.
	if err = DB.Ping(); err != nil {
		Logger.AddLogger(Logger.ERROR, "Database parameters error")
		panic(err)
	}
	Logger.AddLogger(Logger.INFO, "Connected to Database")
}
func getDatabaseAuth() Models.DatabaseAuth {
	secretName := os.Getenv("AWS_SECRET_NAME")
	region := os.Getenv("AWS_REGION")
	svc := secretsmanager.New(session.New(&aws.Config{
		Region: &region,
	}))
	input := &secretsmanager.GetSecretValueInput{
		SecretId:     aws.String(secretName),
		VersionStage: aws.String("AWSCURRENT"),
	}
	result, err := svc.GetSecretValue(input)
	var databaseAuth = Models.DatabaseAuth{}
	if err == nil {
		if result.SecretString != nil {
			// SecretString is a *string, so dereference it before unmarshalling.
			json.Unmarshal([]byte(*result.SecretString), &databaseAuth)
		} else {
			// Binary secrets arrive base64-encoded and must be decoded first.
			decodedBinarySecretBytes := make([]byte, base64.StdEncoding.DecodedLen(len(result.SecretBinary)))
			n, err := base64.StdEncoding.Decode(decodedBinarySecretBytes, result.SecretBinary)
			if err != nil {
				fmt.Println("Base64 Decode Error:", err)
			}
			json.Unmarshal(decodedBinarySecretBytes[:n], &databaseAuth)
		}
	}
	return databaseAuth
}
Now let's look at security, which is really pivotal. If the source resources are outside the AWS cloud, you can use an IAM user for access. Since I am using the ECS service, I created an IAM role and attached it to the task. Let's look at the task now.
My role name is : dev-EcsTaskExecutionRole
Attach these policies to the role (AmazonECSTaskExecutionRolePolicy, CloudWatchEventsFullAccess), and create a custom policy granting access to Secrets Manager. Replace the Resource with your own region and resource IDs:
"Resource": "arn:aws:secretsmanager:ap-southeast-1:123456789123:secret:dev/aws-secret-manager-test/postgres"
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetResourcePolicy",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds"
            ],
            "Resource": "arn:aws:secretsmanager:ap-southeast-1:123456789123:secret:dev/aws-secret-manager-test/postgres"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetRandomPassword",
                "secretsmanager:ListSecrets"
            ],
            "Resource": "*"
        }
    ]
}
Task Definition
Make sure you select the correct role; in my setup it's “dev-EcsTaskExecutionRole”.
Set the command to execute your file as “go,run,main.go”. Once you have created the task definition, run the task. Make sure your security groups and subnets are correct.
The task should execute successfully, and you should be able to see the “Connected to Database” message in Container Insights, as below.
Summary
If you are concerned about storing your database connection strings in a secure location and accessing them safely, AWS Secrets Manager is a good option. You can try other services like Parameter Store as well. Secrets Manager gives you the ability to store multiple key/value pairs in a single secret, which Parameter Store can also do, but not nearly as nicely. This is useful for many applications. Please comment with your thoughts, and if you know a better way, please share it.
Project Repo URL for testing
A Dockerfile is also included. If you have any feedback or questions, please feel free to comment.