Config for cloud native apps
In the original 12-Factor App manifesto, Config is listed as the third factor: https://12factor.net/config.
To state the obvious: don't hardcode any configuration settings in your application. Always externalize them, either in configuration files or in environment variables.
Reduce configuration settings to a bare minimum. Favor convention over configuration and store only the settings that are absolutely necessary.
Be aware that configuration settings can be:
- static - settings that don't really change; a classic example is a DB endpoint: changing the DB endpoint requires restarting the application to pick up the new value;
- dynamic - settings that can be turned on and off while the application is running.
Based on the configuration type different configuration management strategies apply.
Let's review them all.
Configuration files vs. env variables
Injecting env variables is usually much simpler than injecting configuration files. With env variables you can do fine-grained changes. With files this is more difficult as you would have to store and inject the whole (sometimes large) file. On the other hand, if your application requires 30+ configuration settings, then having to manage 30+ env variables is definitely going to be a challenge. Use configuration files in this case.
There is also a middle ground: treat the configuration file as a template and inject the key configuration settings at runtime. Look at a sample lukaszbudnik/migrator configuration file:
dataSource: "user=${DB_USER} password=${DB_PASSWORD} dbname=${DB_NAME} host=${DB_HOST} port=${DB_PORT}"
webHookHeaders:
  - "X-Security-Token: ${SECURITY_TOKEN}"
If your tool/framework supports environment variable substitution, then commit the config file to the repo, package it along with the application, and inject the actual values at runtime using env variables. This way you get the best of both worlds.
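If your tool does not support substitution natively, you can render the template yourself at container start. A minimal sketch with plain sed (file names and values are made up for illustration; envsubst from gettext does the same job):

```shell
# template committed to the repo (quoted heredoc keeps ${...} literal)
cat > config.yaml.tpl <<'EOF'
dataSource: "user=${DB_USER} dbname=${DB_NAME} host=${DB_HOST}"
EOF

# values injected at runtime, e.g. from the container environment
export DB_USER=app DB_NAME=invoices DB_HOST=db.internal

# render the final config by replacing each placeholder
sed -e "s|\${DB_USER}|${DB_USER}|g" \
    -e "s|\${DB_NAME}|${DB_NAME}|g" \
    -e "s|\${DB_HOST}|${DB_HOST}|g" \
    config.yaml.tpl > config.yaml

cat config.yaml
```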
Metadata services
If you're deploying your app to the cloud and using services like virtual machines, you can leverage metadata endpoints. Query those endpoints at runtime to get a lot of useful information about the machine and its cloud environment. All the big players support this: AWS EC2, Azure Virtual Machines, GCP Compute Engine.
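For example, on an EC2 instance you can query the instance metadata service (IMDSv2). Note this only works from inside the VM itself:

```shell
# IMDSv2: request a short-lived session token first, then query metadata paths
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/availability-zone
```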
Internal DNS service
When you deploy your cloud native apps to AWS, Azure, GCP, or Kubernetes, you can leverage internal DNS services.
Internal DNS names are the same across all environments: staging, pentest, preproduction, production. Wherever your app is deployed, the invoices service will always be reachable under the "invoices" DNS name.
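In Kubernetes, for instance, the stable name comes from the Service object. A sketch (the selector label and port are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: invoices
spec:
  selector:
    app.kubernetes.io/name: invoices
  ports:
    - port: 80
```

Any pod in the same namespace can reach it at http://invoices; from other namespaces it resolves as invoices.<namespace>.svc.cluster.local. The name stays the same in every cluster you deploy to.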
API & Secret Keys
When running on AWS, Azure, or GCP, do not use API keys and secret keys. IAM roles are first-class citizens, even in container services. Your servers, containers, and functions can assume roles, meaning you don't have to inject API and secret keys into them. This greatly simplifies configuration management and the configuration lifecycle (one obvious benefit: no keys to rotate on a regular basis).
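On AWS, a quick way to confirm your workload picked up an assumed role rather than static keys:

```shell
# the Arn field shows the effective identity,
# e.g. arn:aws:sts::<account>:assumed-role/<role-name>/<session>
aws sts get-caller-identity
```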
Configuration as a Service
Where to store configuration settings? In a dedicated Configuration as a Service solution: a secure, highly available, durable, encrypted store for your configuration and secrets. Every major cloud provider offers such a service. Depending on your cloud provider, see: AWS Systems Manager Parameter Store, Azure App Configuration, Azure Key Vault, or GCP Secret Manager.
Another benefit worth mentioning is that the above services come with versioning out of the box. Thanks to this you have a complete history of all changes for auditing, governance and compliance, and/or troubleshooting purposes.
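With AWS Systems Manager Parameter Store, for example, every overwrite creates a new version you can inspect later (the parameter name here is made up):

```shell
# first write creates version 1; --overwrite creates version 2
aws ssm put-parameter --name /hotel/db/password --value 'old-password' --type SecureString
aws ssm put-parameter --name /hotel/db/password --value 'new-password' --type SecureString --overwrite
# full change history for auditing/troubleshooting
aws ssm get-parameter-history --name /hotel/db/password --with-decryption
```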
If you host your Kubernetes cluster in a cloud, I highly recommend using the external-secrets/kubernetes-external-secrets project to sync secrets from external secrets management systems into Kubernetes Secrets (completely transparently). kubernetes-external-secrets supports the following backends: AWS Systems Manager Parameter Store, HashiCorp Vault, Azure Key Vault, Google Secret Manager, and Alibaba Cloud KMS Secret Manager.
Feature toggles
Now that we have covered static configuration, let's talk about dynamic configuration, or, as it is commonly called, feature toggles.
These are settings that can be turned on and off dynamically at runtime.
They are very useful when you release new functionality in Alpha, Beta, and GA stages: first to a small group of customers or to your design partners, then to a wider group, and finally to all customers.
You can also use feature toggles when you want to release new functionality via canary releases.
Finally, feature toggles are used when you give your customers the option to opt out of certain features.
The feature toggles framework that we built stores all the information in the DB (in contrast to Configuration as a Service, which we use for all other configuration settings). Since we store feature toggles in the DB we support the following three levels:
- DB - we use feature toggles in stored procedures;
- back-end - we load feature toggles from the DB and wrap them in a Java service; this can be whatever language you use: Java, Go, JavaScript, Ruby, Python;
- front-end - we have a REST service which exposes feature toggles to the JavaScript app; the app uses them to implement different behavior and/or render different components in the UI.
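A minimal sketch of how a client could consume the front-end level, assuming the REST service returns a flat JSON map of toggle names to booleans (the response shape and toggle names are hypothetical); a local file stands in for the HTTP response:

```shell
# in real life this JSON would come from the REST service, e.g. via curl
cat > toggles.json <<'EOF'
{"newCheckout": true, "betaReports": false}
EOF

# crude string match; a real client would use a proper JSON parser
if grep -q '"newCheckout": true' toggles.json; then
  echo "rendering new checkout UI"
else
  echo "rendering old checkout UI"
fi
```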
Tenant config vs. global system config
Assume all configs and feature toggles can be changed on a per-customer/tenant basis.
Implement a hierarchy of configs: check if a tenant-specific config exists; if yes, return it; if not, fall back to the global system one.
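A sketch of that lookup in shell, with plain files standing in for the config store (the paths, tenant, and key names are invented; in practice the keys would live in your Configuration as a Service backend):

```shell
# simulated config store: config/tenants/<tenant>/<key> and config/global/<key>
mkdir -p config/tenants/acme config/global
echo "25"  > config/global/max-connections
echo "100" > config/tenants/acme/max-connections

get_config() {
  tenant="$1"; key="$2"
  if [ -f "config/tenants/$tenant/$key" ]; then
    cat "config/tenants/$tenant/$key"   # tenant-specific override wins
  else
    cat "config/global/$key"            # fall back to the global default
  fi
}

get_config acme   max-connections   # tenant override: 100
get_config globex max-connections   # no override, global value: 25
```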
Kubernetes
Since we are talking about cloud-native apps, let me finish with some Kubernetes examples.
Kubernetes ConfigMap
A Kubernetes ConfigMap is used to store configuration in the form of key-value pairs and/or files. ConfigMaps are stored in clear text and should not be used to store any sensitive information. For more, see the official documentation: Kubernetes ConfigMap.
We can create a configmap from a file like this:
kubectl create configmap haproxy-auth-gateway-cfg --from-file=config/haproxy.cfg
Later, we can inject that configmap as a volume into the pod and then mount it into the container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
  labels:
    app.kubernetes.io/name: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: gateway
  template:
    metadata:
      labels:
        app.kubernetes.io/name: gateway
    spec:
      containers:
        - name: gateway
          image: lukasz/yosoy
          ports:
            - containerPort: 80
          volumeMounts:
            - name: haproxy-cfg
              mountPath: /usr/local/etc/haproxy
      volumes:
        - name: haproxy-cfg
          configMap:
            name: haproxy-auth-gateway-cfg
Kubernetes Secrets
A Kubernetes Secret is used to store sensitive information. Just like a ConfigMap, it can be created from key-value pairs and/or files. There are built-in secret types too. For more, see the official documentation: Kubernetes Secrets.
We can create a secret using a YAML file or from literals; I will use literals here:
kubectl create secret generic -n hotel db-credentials \
  --from-literal=username=someuser \
  --from-literal=password='PasswordWithSpecialCharsLikeThis:S!\*$='
Later, we can reference this secret when defining an env variable in a container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bookmaker
  labels:
    app.kubernetes.io/name: bookmaker
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: bookmaker
  template:
    metadata:
      labels:
        app.kubernetes.io/name: bookmaker
    spec:
      containers:
        - name: bookmaker
          image: lukasz/yosoy
          env:
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
          ports:
            - containerPort: 80
Using external-secrets/kubernetes-external-secrets
The gist below contains a step-by-step example showing how to sync AWS Secrets Manager secrets into Kubernetes Secrets using https://github.com/external-secrets/kubernetes-external-secrets.
I post it at the bottom as it's a detailed (long) example.
AWS_REGION=us-east-2
CLUSTER_NAME=lukaszbudniktest1
eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION --version 1.16 --fargate
eksctl utils associate-iam-oidc-provider --region $AWS_REGION --cluster $CLUSTER_NAME --approve

# below lines for setting up policy, role, and trust relationship are based on: https://github.com/godaddy/kubernetes-external-secrets/issues/383
EKS_CLUSTER=$CLUSTER_NAME
IAM_ROLE_NAME=eksctl-$EKS_CLUSTER-iamserviceaccount-role
EXTERNAL_SECRETS_POLICY="kube-external-secrets"

cat <<EOF > policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:*",
        "ssm:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
aws iam create-policy --policy-name $EXTERNAL_SECRETS_POLICY --policy-document file://policy.json || true
EXTERNAL_POLICY_ARN=$(aws iam list-policies | jq -r '.Policies[] | select(.PolicyName|match('\"$EXTERNAL_SECRETS_POLICY\"')) | .Arn')
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER --region $AWS_REGION --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")

cat <<EOF > trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:*"
        }
      }
    }
  ]
}
EOF
aws iam create-role --role-name $IAM_ROLE_NAME --assume-role-policy-document file://trust.json --description "iam service account role for k8s"
aws iam attach-role-policy --role-name $IAM_ROLE_NAME --policy-arn=$EXTERNAL_POLICY_ARN
IAM_ROLE_ARN=$(aws iam list-roles | jq -r '.Roles[] | select(.RoleName|match('\"$IAM_ROLE_NAME\"')) | .Arn')

# deploy external-secrets/kubernetes-external-secrets
helm install external-secrets external-secrets/kubernetes-external-secrets \
  --set image.repository='lukasz/kubernetes-external-secrets' \
  --set image.tag='latest' \
  --set env.AWS_REGION=us-east-2 \
  --set securityContext."fsGroup"=65534 \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=$IAM_ROLE_ARN

# wait for pod to be Running
kubectl --namespace default get pods -l "app.kubernetes.io/name=kubernetes-external-secrets,app.kubernetes.io/instance=external-secrets"
# get pod name
POD_NAME=$(kubectl --namespace default get pods -l "app.kubernetes.io/name=kubernetes-external-secrets,app.kubernetes.io/instance=external-secrets" -o=custom-columns='DATA:metadata.name' --no-headers=true)
# describe to check events and confirm used image
kubectl describe pod $POD_NAME
...
Events:
  Type    Reason     Age                   From               Message
  ----    ------     ----                  ----               -------
  Normal  Scheduled  <unknown>             fargate-scheduler  Successfully assigned default/external-secrets-kubernetes-external-secrets-8c8bbf6cc-m25wm to fargate-ip-192-168-109-39.us-east-2.compute.internal
  Normal  Pulling    4m19s                 kubelet, fargate-ip-192-168-109-39.us-east-2.compute.internal  Pulling image "lukasz/kubernetes-external-secrets:latest"
  Normal  Pulled     4m12s                 kubelet, fargate-ip-192-168-109-39.us-east-2.compute.internal  Successfully pulled image "lukasz/kubernetes-external-secrets:latest"
  Normal  Created    117s (x2 over 4m10s)  kubelet, fargate-ip-192-168-109-39.us-east-2.compute.internal  Created container kubernetes-external-secrets
  Normal  Pulled     117s                  kubelet, fargate-ip-192-168-109-39.us-east-2.compute.internal  Container image "lukasz/kubernetes-external-secrets:latest" already present on machine
  Normal  Started    116s (x2 over 4m10s)  kubelet, fargate-ip-192-168-109-39.us-east-2.compute.internal  Started container kubernetes-external-secrets

# create secret in AWS Secrets Manager
aws secretsmanager create-secret --region $AWS_REGION --name hello-service/password --secret-string "this is a test password 1234"

# create ExternalSecret
cat <<EOF > hello-service-external-secret.yml
apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: secretsManager
  data:
    - key: hello-service/password
      name: password
EOF
kubectl apply -f hello-service-external-secret.yml

# wait until sync says OK
kubectl get externalsecret
NAME            LAST SYNC   STATUS    AGE
hello-service   6s          SUCCESS   7s

# get the secret and base64 decode it
kubectl get secret hello-service -o=custom-columns="DATA:data.password" --no-headers=true | base64 -d

# check pod logs
kubectl logs $POD_NAME
...
{"level":30,"time":1593089496560,"pid":17,"hostname":"external-secrets-kubernetes-external-secrets-8c8bbf6cc-m25wm","msg":"fetching secret property hello-service/password with role: pods role","v":1}
{"level":30,"time":1593089496703,"pid":17,"hostname":"external-secrets-kubernetes-external-secrets-8c8bbf6cc-m25wm","msg":"upserting secret default/hello-service","v":1}
{"level":30,"time":1593089496740,"pid":17,"hostname":"external-secrets-kubernetes-external-secrets-8c8bbf6cc-m25wm","msg":"stopping poller for default/hello-service","v":1}
{"level":30,"time":1593089496741,"pid":17,"hostname":"external-secrets-kubernetes-external-secrets-8c8bbf6cc-m25wm","msg":"starting poller for default/hello-service","v":1}

# update secret
aws secretsmanager update-secret --region $AWS_REGION --secret-id hello-service/password --secret-string "1q2w3e4r this is a new password abcdef"
# check sync
kubectl get externalsecret
# get the secret and base64 decode it
kubectl get secret hello-service -o=custom-columns="DATA:data.password" --no-headers=true | base64 -d

# delete the cluster when you're done
eksctl delete cluster --name $CLUSTER_NAME --region $AWS_REGION