A few weeks ago I sat down to review the IAM state of a multi-account AWS Organization. It wasn't a formal audit with weeks of planning. It was the simple question every Cloud Security Engineer should be able to answer at any time:
> Who has access, with what credentials, and since when?
The answer surprised me. Not because it was hard to find — but because of how easy it was to automate, and what came to light when I did.
## The Context
Organizations that have been running on AWS for years accumulate security debt without realizing it. They start with one account, then two, then a team requests its own environment, another project comes along, and suddenly you have an AWS Organization with dozens of accounts, each with its own IAM history.
The problem isn't scale — it's visibility. Or rather, the lack of it.
In multi-account environments, nobody has a consolidated view of who has what access. Infrastructure teams know what they deployed. Development teams know what they needed at the time. But the Access Keys created three years ago for an integration process that no longer exists, that were never rotated, that are still active — those don't show up in any dashboard. They don't generate alerts. They don't bother anyone. They just wait.
That's exactly what I found. An active AWS Organization, with multiple production accounts, and credentials that had gone years without being centrally audited. Not because the team was careless — but because nobody had built the mechanism to see them all at once.
I decided to automate that search.
## The Risk
IAM Access Keys are long-lived credentials. Unlike IAM roles — which generate temporary credentials that expire automatically — an Access Key has no expiration date. If you create one today and never rotate it, it's still valid in 2030.
That makes them one of the most common attack vectors in AWS environments. Not because they're insecure by design, but because time works against them. A key that's been active for years silently accumulates risk:
- It may have been exposed in a code repository without anyone noticing
- It may be in the hands of an employee who no longer works at the organization
- It may have permissions granted for a one-time project that were never reviewed
- It may have been used by an attacker for months — and without rotation, nobody knows
The AWS Security Maturity Model v2 is precise about this. In its Phase 1 — Quick Wins, within the Identity and Access Management domain, one of the key controls is Multi-Factor Authentication — ensuring all users with console access have MFA enabled. In Phase 2 — Foundational, the model goes further with Use Temporary Credentials — the recommendation to migrate toward IAM roles and short-lived credentials, moving away from long-term Access Keys.
The script audits both controls in a single run: which users have active Access Keys, how long they've been active, and whether they have MFA configured. An active key from 2018 combined with a user without MFA isn't just technical debt — it's an attack surface that's been open for years.
## The Solution

### Design decisions before code
Before writing a single line, I made three decisions that define how the script works.
**First decision: cross-account via STS, not IAM users.**
The obvious temptation is to create an IAM user with audit permissions in each account. But that solves a visibility problem by creating exactly the problem we're trying to eliminate — more long-lived credentials. The right solution is sts:AssumeRole: the script assumes an existing role in each account, gets temporary credentials, audits, and the credentials expire on their own.
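The main flow later calls an `assume_role` helper built on exactly that idea. Here is a minimal sketch of how it might look; the `build_role_arn` helper and the `DurationSeconds` value are illustrative assumptions, not necessarily what the script uses:

```python
def build_role_arn(account_id: str, role_name: str) -> str:
    # The audit role has the same name in every member account
    return f"arn:aws:iam::{account_id}:role/{role_name}"


def assume_role(session, account_id, role_name, session_name):
    # Exchange the management-account session for temporary credentials
    # in the member account; they expire on their own, no cleanup needed
    sts = session.client('sts')
    response = sts.assume_role(
        RoleArn=build_role_arn(account_id, role_name),
        RoleSessionName=session_name,
        DurationSeconds=3600,  # assumption: one hour is enough for the audit
    )
    return response['Credentials']
```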
**Second decision: start from AWS Organizations.**
Instead of maintaining a manual list of accounts, the script queries the Organization directly and gets all active accounts in real time. If a new account is created tomorrow, the next run includes it automatically. No lists to maintain, no accounts slipping through the cracks.
**Third decision: always paginate.**
IAM APIs paginate. If you don't use get_paginator(), in an account with many users you'll only see the first results and never know it. It's the kind of silent bug that destroys the reliability of an audit.
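The second and third decisions come together in the `get_accounts` helper the main flow relies on. A sketch, assuming only the account `id` and `name` are needed downstream:

```python
def get_accounts(org_client):
    # Paginate through the Organization and keep only ACTIVE accounts,
    # so suspended or closed accounts never reach the audit loop
    accounts = []
    paginator = org_client.get_paginator('list_accounts')
    for page in paginator.paginate():
        for account in page['Accounts']:
            if account['Status'] == 'ACTIVE':
                accounts.append({'id': account['Id'], 'name': account['Name']})
    return accounts
```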
With those three decisions clear, the code practically writes itself.
### The main flow
```python
import boto3


def main():
    args = parse_args()
    session = boto3.Session(profile_name=args.profile)
    org_client = session.client('organizations')

    # Get all active accounts from the Organization
    accounts = get_accounts(org_client)

    all_findings = []
    for account in accounts:
        print(f"Auditing account: {account['name']} ({account['id']})")
        try:
            credentials = assume_role(
                session,
                account['id'],
                args.role,
                'SecurityAudit'
            )
            iam_client = boto3.client(
                'iam',
                aws_access_key_id=credentials['AccessKeyId'],
                aws_secret_access_key=credentials['SecretAccessKey'],
                aws_session_token=credentials['SessionToken']
            )
            findings = get_iam_users_with_keys(
                iam_client, account['id'], account['name']
            )
            all_findings.extend(findings)
        except Exception as e:
            print(f"Error in account {account['name']}: {e}")

    return all_findings
```
One account fails — the script continues. That's intentional: in a real Organization you'll find accounts where the role isn't deployed, or where permissions differ. The try/except ensures an error in one account doesn't stop the entire audit.
### The heart of the script: what we audit per user
```python
def get_iam_users_with_keys(iam_client, account_id, account_name):
    findings = []
    paginator = iam_client.get_paginator('list_users')

    for page in paginator.paginate():
        for user in page['Users']:
            # User's Access Keys
            keys_response = iam_client.list_access_keys(
                UserName=user['UserName']
            )

            # MFA status
            mfa_response = iam_client.list_mfa_devices(
                UserName=user['UserName']
            )
            mfa_devices = mfa_response['MFADevices']

            # Console access
            try:
                iam_client.get_login_profile(UserName=user['UserName'])
                password_status = 'Configured'
            except iam_client.exceptions.NoSuchEntityException:
                password_status = 'Not configured'

            for key in keys_response['AccessKeyMetadata']:
                last_used_response = iam_client.get_access_key_last_used(
                    AccessKeyId=key['AccessKeyId']
                )
                last_used = last_used_response['AccessKeyLastUsed']

                findings.append({
                    'account_id': account_id,
                    'account_name': account_name,
                    'username': user['UserName'],
                    'password_status': password_status,
                    'access_key_id': key['AccessKeyId'],
                    'status': key['Status'],
                    'created_date': str(key['CreateDate']),
                    'last_used_date': str(
                        last_used.get('LastUsedDate', 'Never used')
                    ),
                    'service_name': last_used.get('ServiceName', 'N/A'),
                    'mfa_status': (
                        'Virtual' if any('virtual' in d['SerialNumber'].lower()
                                         for d in mfa_devices)
                        else 'Hardware' if mfa_devices
                        else 'None'
                    )
                })

    return findings
```
For each user the script captures: all their Access Keys with status and creation date, when each key was last used and for what service, whether they have MFA active and of what type, and whether they have console access configured. All of that in a single pass per account.
## How to run it
```bash
python iam_audit.py --profile your-mgmt-profile --role OrganizationAccountAccessRole
```
The script needs two things: an AWS profile with access to the Organization's management account, and a role deployed in each member account that allows assumption from the management account.
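The main flow calls a `parse_args` helper to read those two options. A minimal sketch with argparse; the default role name is an assumption based on the example command above:

```python
import argparse


def parse_args():
    # Two options: the management-account profile (required)
    # and the audit role name to assume in each member account
    parser = argparse.ArgumentParser(
        description='Audit IAM Access Keys and MFA across an AWS Organization'
    )
    parser.add_argument('--profile', required=True,
                        help='AWS profile with access to the management account')
    parser.add_argument('--role', default='OrganizationAccountAccessRole',
                        help='Role to assume in each member account')
    return parser.parse_args()
```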
For the management account, the IAM policy is minimal by design — only the strictly necessary permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "organizations:ListAccounts",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::*:role/ROLE-NAME-IN-CHILD-ACCOUNTS"
    }
  ]
}
```
Nothing more. The role in the management account doesn't touch IAM directly — all auditing happens through the role it assumes in each child account. If you use AWS Control Tower, the AWSControlTowerExecution role already exists in all accounts and you can use it as a starting point, though in production I recommend creating a dedicated audit role with read-only permissions over IAM.
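For that dedicated role, the trust policy in each member account only needs to allow assumption from the management account. A sketch (`MANAGEMENT-ACCOUNT-ID` is a placeholder); for permissions, attaching the AWS managed `SecurityAudit` or `IAMReadOnlyAccess` policy keeps the role read-only:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::MANAGEMENT-ACCOUNT-ID:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```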
This isn't a minor detail — it's the least privilege principle applied to the tool that audits least privilege. The script's own attack surface is reduced to the minimum.
The script doesn't just audit the current state; it also reconstructs the timeline. Once you have the findings and start remediating, you need to answer a second question: how are we progressing?
For that, the script queries CloudTrail in each account and extracts key IAM activity events: DeleteAccessKey, CreateAccessKey, DeleteUser. This lets you see over time whether the team is actually remediating — or whether the findings stayed in a report nobody acted on.
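A sketch of how that CloudTrail query might look, assuming a 90-day lookback window (the `days` parameter and function name are mine, for illustration). CloudTrail's `lookup_events` accepts a single lookup attribute per call, so the sketch loops over the event names:

```python
from datetime import datetime, timedelta, timezone

REMEDIATION_EVENTS = ('CreateAccessKey', 'DeleteAccessKey', 'DeleteUser')


def get_remediation_events(cloudtrail_client, days=90):
    # One paginated lookup_events call per event name; CloudTrail
    # only supports a single LookupAttribute per request
    start_time = datetime.now(timezone.utc) - timedelta(days=days)
    events = []
    for event_name in REMEDIATION_EVENTS:
        paginator = cloudtrail_client.get_paginator('lookup_events')
        for page in paginator.paginate(
            LookupAttributes=[{'AttributeKey': 'EventName',
                               'AttributeValue': event_name}],
            StartTime=start_time,
        ):
            for event in page['Events']:
                events.append({
                    'event_name': event['EventName'],
                    'event_time': str(event['EventTime']),
                    'username': event.get('Username', 'N/A'),
                })
    return events
```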
The final output is two CSV files: one with all IAM findings, and another with those CloudTrail events that build the remediation trend.
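Writing those CSVs needs nothing beyond the standard library. A sketch (the function name and the column handling are assumptions; the header is derived from the keys of the first finding):

```python
import csv


def write_findings_csv(findings, path):
    # Skip empty result sets; otherwise every dict produced by the
    # audit shares the same keys, so the first row defines the header
    if not findings:
        return
    with open(path, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=list(findings[0].keys()))
        writer.writeheader()
        writer.writerows(findings)
```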
## The Findings
I ran the script against an active AWS Organization in the region. Here's what I found:
| Metric | Result |
|---|---|
| Accounts audited | More than 20 active accounts |
| Access Keys found | Dozens |
| Oldest key | Created in 2018 — active in production |
| Users without MFA + console access | Dozens |
| Total execution time | Minutes |
Two accounts weren't audited — the audit role wasn't deployed in them. That in itself is a finding: if you can't audit an account, you can't know what's inside.
The most impactful data point isn't the volume — it's the age. An Access Key created in 2018 and still active in production means it survived everything. Not because someone protected it — but because nobody saw it.
The script saw it in minutes.
## What Do You Do With This Now?
If you have an AWS Organization, you can run this script today. You don't need a SIEM, you don't need a dedicated security team, you don't need a special budget. You need Python, boto3, and an audit role with least privilege in your accounts.
What you do need is the willingness to see what's there.
The most uncomfortable result of an audit isn't the technical finding — it's realizing that the information was always there, waiting for someone to systematically look for it. Old Access Keys don't show up in any dashboard by default. They don't generate alerts. They don't bother anyone. They just wait.
Automating visibility is the first step of any serious security program. Not the most glamorous, but the most honest.
The script generates two CSVs. The natural next step is converting that data into a visual dashboard — risk widgets per account, remediation trend over time, consolidated MFA status. That's coming in the next post.
The repository with the complete code is on GitHub — link below. If you use it, find something interesting, or have feedback, reach out. The CloudSec community in LATAM is built by sharing what works in real environments.
## About the Author
Gerardo Castro is an AWS Security Hero and Cloud Security Engineer focused on LATAM. He believes the best way to learn cloud security is by building real things — not memorizing frameworks. He writes about what he builds, what he finds, and what he learns along the way.
🔗 GitHub: gerardokaztro
🔗 LinkedIn: gerardokaztro
🔗 Original post (Spanish): roadtocloudsec.hashnode.dev
