I work for a group that manages a number of AWS accounts. To keep things somewhat simpler from a user-management perspective, we use IAM roles (tied to a central directory-service and mapped via the directory's group memberships) to access the various accounts, rather than trying to manage per-account IAM users. This means we can manage users via Active Directory: as new team members are added, existing team members' responsibilities change, or team members leave, all we have to do is update their AD user-objects, and the breadth and depth of their access-privileges change accordingly.
However, the use of IAM Roles isn't without its limitations. Role-based users' accesses are granted by way of ephemeral tokens that last, at most, 3600 seconds. Absent some browser extensions, this is kind of annoying for GUI users: across an 8+ hour work session, you'll be prompted to refresh your tokens at least seven times (on top of the initial login). If you're working mostly via the CLI, you won't necessarily notice, since each command you run silently refreshes your token.
I use the caveat "won't necessarily notice" because there are places where using an IAM Role can bite you in the ass. Worse, sometimes it will bite you in a way that leaves you scratching your head, wondering "WTF isn't this working as I expected".
The most recent adventure in "WTF isn't this working as I expected" came while trying to use S3 pre-signed URLs. If you haven't used them before, pre-signed URLs allow you to provide temporary access to S3-hosted objects that you otherwise don't want to make anonymously available. In the grand scheme of things, one can do one of the following to provide automated tools access to S3-hosted data:
- Public-read object hosted in a public S3 bucket: this is a great way to end up in news stories about accidental data leaks
- Public-read object hosted in a private S3 bucket: you're still providing unfettered access to a specific S3-hosted object or object-set, but some rando generally can't simply explore the hosting S3 bucket for interesting data. You can still end up in the newspapers, but the scope of the damage is likely to be much smaller
- Private-read object hosted in a private S3 bucket: the least likely to get you in the newspapers, but requires some additional "magic" to allow your automation access to S3-hosted files:
- IAM User Credentials stored on the EC2 instance requiring access to the S3-hosted files: a good method, right until someone compromises that EC2 instance and steals the credentials. Then, the attacker has access to everything those credentials had access to (until such time as you discover the breach and deactivate or change the credentials)
- IAM Instance-role: a good way of providing broad-scope access to S3 buckets to an instance or group of instances sharing a role. Note that, absent some additional configuration trickery, every process running on an enabled instance has access to everything that the Instance-role provides access to. Thus, probably not a good choice for systems that allow interactive logins or that run more than one attackable service.
- Pre-signed URLs: a way to provide fine-grained, temporary access to S3-hosted objects. The primary downfall is that there's significant overhead in setting up access to a large collection of files or in providing continuing access to said files. Thus, likely suitable for providing basic CloudFormation EC2 access to configuration files, but not if the EC2s are going to need ongoing access to said files (as they would in, say, an AutoScaling Group type of use-case)
There are likely other access methods one can play with - each with its own security, granularity and overhead tradeoffs. The above are simply the ones I've used most.
The automation I write tends to include a degree of user-customizable content. Which is to say, I write my automation to take care of 90% or so of a given service's configuration, then hand the automation off to others to use as they see fit. To help prevent the need for significant code-divergence in these specific use-cases, my automation generally allows a user to specify the ingestion of configuration-specification files and/or secondary configuration scripts. While what I write is generally hosted in public GitHub repositories, these customizer files or scripts often contain data that one wouldn't want to put in a public GitHub repository. Thus, I generally recommend to the automation's users: "put that data someplace safe - here are methods for doing it relatively safely via S3".
Circling back so that the title of this post makes sense... Typically, I recommend the use of pre-signed URLs for automation-users that want to provide secure access to these once-in-an-instance-lifecycle files. A pre-signed URL's validity-period can be as short as a couple of seconds or as long as seven days. If you don't specify a desired duration, the granted access expires after one hour (3600 seconds).
However, that degree of flexibility depends greatly on what type of IAM object is creating the ephemeral access-grant. A standard IAM user can grant access across the full range of lifetimes mentioned above. An IAM role, however, is constrained to however long its currently-active token remains valid. Thus, if you're executing from the CLI using an IAM role, the grantable lifetime for a pre-signed URL is 0-3600 seconds.
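Put another way: a URL signed with a role's temporary credentials stops working when those credentials expire, no matter how long an expiry you asked for. The effective lifetime is the smaller of the two. The helper below (a hypothetical name, not an AWS API) just makes that arithmetic explicit:

```python
from datetime import datetime, timedelta, timezone

def effective_presign_lifetime(requested_seconds, token_expiry, now=None):
    """Usable lifetime (seconds) of a URL signed with temporary credentials.

    The URL dies when the signing session-token dies, so the answer is the
    lesser of the requested duration and the token's remaining validity.
    """
    now = now or datetime.now(timezone.utc)
    remaining = (token_expiry - now).total_seconds()
    return max(0.0, min(float(requested_seconds), remaining))

# A role session with one hour left caps a 7-day (604800s) request at 3600s:
now = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(effective_presign_lifetime(604800, now + timedelta(hours=1), now=now))  # 3600.0
```

This is why a role-generated URL requested with a multi-day expiry can still go dead within the hour - the request succeeds, but the underlying token does not survive that long.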
If you're not sure whether the pre-signed URL you created or were given came from an IAM user or an IAM role, look at the URL's query string: a URL signed with a role's temporary credentials includes an X-Amz-Security-Token element, and the access-key ID in its X-Amz-Credential element begins with "ASIA" rather than "AKIA". If you see those markers, the URL was generated by an IAM Role and will only last up to 3600 seconds.
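That check is easy enough to script. Below is a sketch using only the standard library; the function name is my own invention, and it assumes a SigV4-style URL like the ones in this post:

```python
from urllib.parse import urlparse, parse_qs

def signed_with_temporary_credentials(presigned_url):
    """True if the URL appears signed by a role's temporary (STS) credentials."""
    params = parse_qs(urlparse(presigned_url).query)
    # Temporary credentials embed a session token in the signed URL...
    if "X-Amz-Security-Token" in params:
        return True
    # ...and their access-key IDs begin with "ASIA" (long-term keys: "AKIA").
    credential = params.get("X-Amz-Credential", [""])[0]
    return credential.startswith("ASIA")

# A role-generated URL (values abbreviated for illustration):
role_url = ("https://my-bucket.s3.amazonaws.com/key"
            "?X-Amz-Credential=ASIAEXAMPLE%2F20240101%2Fus-east-1%2Fs3%2Faws4_request"
            "&X-Amz-Security-Token=abc123")
print(signed_with_temporary_credentials(role_url))  # True
```

If this returns True for a URL you were counting on for a long-lived grant, plan on it going dead when the issuing role's token expires.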