Never. That’s the end of the story…
Okay, not really: there is one great use case for AWS credentials, and that's on developer machines. Unlike AWS's own developers, we poor peons on the outside don't have a magic tool to refresh SSO-generated tokens in our credentials files, so ours expire after 12 hours. It's really unfortunate, and a long time ago I created a package to auto-rotate all the credentials on your machine via cron, Mirri.js (named for the original MTG Cat Warrior), but that's another story.
If you aren't using SSO on your dev machine, the fallback is to create an IAM user and store that user's credentials in your ~/.aws/credentials file.
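For reference, a stored profile in ~/.aws/credentials is just an INI file (the key values below are placeholders):

```ini
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

These are long-lived static credentials, which is exactly the problem: they work until someone remembers to rotate them.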
But you really should be using aws configure sso and aws sso login instead, to ensure that you are always getting fresh credentials and that stolen creds cannot be used to create new infra in your account. (Probably for mining bitcoin, because for some reason that is still worth more than eth, although the Berlin fork is coming, so the time of mining will be over.)
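The SSO flow stores only short-lived tokens on disk. A minimal SSO profile in ~/.aws/config looks something like this (the start URL, account ID, and role name are placeholders):

```ini
[profile my-dev]
sso_start_url  = https://my-org.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 111122223333
sso_role_name  = DeveloperAccess
region         = us-east-1
```

After that, `aws sso login --profile my-dev` opens a browser to re-authenticate whenever the cached token expires, and no long-lived secret ever touches your disk.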
The credentials stored in your credentials file power everything on your machine; you never need environment variables.
- Environment variables: Bad
- Credentials file: Good
Environment variables override credentials set anywhere else, and the AWS SDKs automatically load credentials from wherever they can be found. If you hit permissions issues, first check your library's documentation for how credentials are loaded. For example, the AWS SDK for Node.js uses this load order:
- Credentials that are explicitly set through the service-client constructor
- Environment variables
- The shared credentials file
- Credentials loaded from the ECS credentials provider (if applicable)
- Credentials that are obtained by using a credential process specified in the shared AWS config file or the shared credentials file. For more information, see Loading Credentials in Node.js using a Configured Credential Process.
- Credentials loaded from AWS IAM using the credentials provider of the Amazon EC2 instance (if configured in the instance metadata)
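That precedence is just a first-match-wins chain. A toy sketch of the idea in Node (illustrative only, not the SDK's real implementation):

```javascript
// Toy credential provider chain: each provider returns credentials or null,
// and the first non-null result wins -- mirroring the SDK's load order.
function resolveCredentials(providers) {
  for (const provider of providers) {
    const creds = provider();
    if (creds) return creds;
  }
  throw new Error('No credentials found');
}

// Simulate a machine with no env credentials set.
delete process.env.AWS_ACCESS_KEY_ID;

// Simulated providers, checked in the documented order.
const fromConstructor = () => null; // nothing passed explicitly
const fromEnv = () =>
  process.env.AWS_ACCESS_KEY_ID ? { source: 'env' } : null;
const fromSharedFile = () => ({ source: 'credentials-file' });

const creds = resolveCredentials([fromConstructor, fromEnv, fromSharedFile]);
console.log(creds.source); // prints "credentials-file"
```

This is also why a stray `AWS_ACCESS_KEY_ID` in your shell silently wins over the profile you thought you were using.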
If you ever see a library that asks you to pass credentials into its constructor, or tells you to set environment variables for access, run far away, very fast. First of all, that won't work when running inside AWS; second, credentials handling is already provided by the AWS SDKs and CLI. Libraries don't need to implement anything, and they definitely shouldn't care how you authenticate to AWS resources.
Credentials in AWS services
When running applications in EC2, ECS, Lambda, CodePipeline, and the like, your application will need access to other resources, for example a call out to CloudFormation (CFN) or DynamoDB (DDB). Where do those credentials come from?
It used to be the case for EC2 that a magic metadata endpoint handed out an access key and secret. You can now lock that down (by requiring the token-based IMDSv2, or disabling the endpoint entirely), and you should, because historically it has been a source of credential-leak vulnerabilities that give attackers access to all your AWS resources. They don't even need to run code on the box: services that make API calls to user-configurable external endpoints can be tricked into calling the metadata endpoint and returning the credentials. This is a reminder to never blindly fetch user-entered endpoints and return the data found there; that data should always be sanitized. And don't bother checking whether the URL is an IP address matching the metadata endpoint, because it is free and trivially easy for me to give you a domain name that resolves there when fetched from inside EC2.
Instead, you assign what is known as an IAM Instance Profile to your EC2 instances; Lambda and most other services take an IAM Role directly. The instance profile is a sub-resource of an IAM Role that enables this link for EC2. This is the correct and secure way to grant permissions and access to resources from inside AWS.
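In CloudFormation, the role/instance-profile pair looks roughly like this (resource names and the policy are illustrative):

```yaml
AppRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal: { Service: ec2.amazonaws.com }
          Action: sts:AssumeRole
    Policies:
      - PolicyName: AppAccess
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: dynamodb:GetItem
              Resource: '*'

# The instance profile is just the wrapper that attaches the role to EC2.
AppInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles: [!Ref AppRole]
```

The SDK picks up the resulting temporary credentials automatically; your application code contains no secrets at all.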
If you are inside AWS, you NEVER need an access key and secret.
Third Party Access to resources
Some third parties such as log aggregators, RPM, and other tracking tools will tell you "We can automatically fetch your data from AWS, all we need is an access key." This is a dirty trick: can you really trust this third party? Maybe, but you can't restrict their usage, they could end up costing you a lot in data-transfer fees, and you'll want to rotate these keys anyway, so it will be a source of huge headaches. The only thing worse than giving a third party your keys is running a third party's stack in your AWS account to set up resources. At worst they should give you a container to run.
The solution here is to send the data to the third-party platform using an aggregator you've already deployed. One great way to do this is AWS EventBridge: if the vendor has a partner integration, events can cross the account boundary easily. In the Authress-EventBridge integration, for instance, audit logs and authorization requests are passed from the Authress account to the delegated account, no credentials necessary for logging. The reverse direction is even easier.
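Cross-account delivery needs no keys at all: the receiving event bus simply grants `events:PutEvents` to the sending account in its resource policy (account IDs and region below are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowPartnerAccountPutEvents",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
    "Action": "events:PutEvents",
    "Resource": "arn:aws:events:us-east-1:999999999999:event-bus/default"
  }]
}
```

Authorization happens entirely through IAM on both sides, so there is nothing to leak and nothing to rotate.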
Running Code from CI/CD platforms
The last place you may need to access AWS resources from is automation machines: GitLab, GitHub, BitBucket, CircleCI, etc. (No, CodeCommit, Jenkins, and Travis are explicitly not in this list.)
You don't need an access key and secret here, even though every guide in existence says you do. Some even say you need other, ridiculously complicated tech to set this up. For GitHub this isn't quite so simple, but there is a more complicated example deployment you can use to automate the solution.
There is an even easier way however...
The secret lies in the fact that AWS allows login via OpenID-compliant JWTs. You can create IAM configuration that uses an OpenID provider to log in and assume the role you want. Services like GitLab already provide a JWT in every build job, so with a little configuration you can just pass that token to AWS, and then you are good to go.
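A sketch of the IAM side, assuming GitLab as the OpenID provider: a role trust policy that accepts GitLab-issued JWTs via `sts:AssumeRoleWithWebIdentity`, scoped to one project (the account ID and project path below are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::111122223333:oidc-provider/gitlab.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringLike": {
        "gitlab.com:sub": "project_path:my-group/my-project:*"
      }
    }
  }]
}
```

The build job then exchanges its JWT for temporary credentials with `aws sts assume-role-with-web-identity`, passing the job's token as the `--web-identity-token`. Short-lived, scoped to one repo, and nothing to store or rotate.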
No More Access Keys 🎉🎉🎉