Ten stories. Each one shows an attack, the wall that stops it, why the wall works, and the configuration that makes the attacker abort.
IMDSv2 WITH HOP LIMIT 1
The attacker has root on the EC2 instance. Fifteen minutes of work. A vulnerable Struts app. They run the command they've run a hundred times before:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
Nothing.
They've seen this before. IMDSv2. Fine. They get the token:
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/
Credentials spill out. AccessKeyId. SecretAccessKey. SessionToken. They export to their laptop.
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...
aws s3 ls from laptop. Nothing. Timeout.
aws sts get-caller-identity from laptop. AccessDenied.
From the instance, the same command works. From laptop, dead.
Why this happened: IMDSv2 with a hop limit of 1 sets the IP TTL of every metadata response to 1, so the response cannot survive even a single network hop beyond the instance. The token can be read from the box itself, but it can never be relayed through a container, a proxy, or an SSRF chain. The exported credentials fail for a second reason: the role's policies carry a condition on aws:EC2InstanceSourceVPC, which denies any request arriving from outside the instance's VPC. The attacker has root on the box but cannot use the credentials anywhere else.
They give up after four hours of copying log files manually over the slow reverse shell.
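The wall is a one-line instance setting. A minimal sketch of enforcing it on an existing instance (the instance ID is a placeholder):

```shell
# Require IMDSv2 session tokens and cap metadata responses at one network hop
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-put-response-hop-limit 1 \
  --http-endpoint enabled
```

New instances can be launched with the same options, so the wall is in place before the first request ever hits the metadata service.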
S3 OBJECT LOCK IN COMPLIANCE MODE
The attacker has admin. First move: delete the evidence.
aws cloudtrail delete-trail --name prod-trail
Success. Trail gone.
Now delete the actual log files:
aws s3 rm s3://cloudtrail-logs/ --recursive
Error: Object is locked.
They try to overwrite:
aws s3 cp /dev/null s3://cloudtrail-logs/log.json.gz
Same error.
Why this happened: S3 Object Lock in compliance mode makes objects immutable for a fixed retention period. No identity can delete or overwrite them until the period expires: not root, not the account owner, not AWS Support. The attacker deleted the trail but cannot delete the log files themselves. Every API call they made, from the initial reconnaissance through the privilege escalation to the failed deletion attempts, is permanently recorded. They cannot destroy the evidence.
They abort. This account is a trap. The logs will show everything.
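A sketch of how such a bucket is set up; the bucket name and retention period are illustrative. Object Lock can only be enabled at bucket creation:

```shell
# Object Lock must be turned on when the bucket is created
aws s3api create-bucket \
  --bucket cloudtrail-logs-example \
  --object-lock-enabled-for-bucket

# Default retention: compliance mode, no deletions or overwrites for a year
aws s3api put-object-lock-configuration \
  --bucket cloudtrail-logs-example \
  --object-lock-configuration '{"ObjectLockEnabled": "Enabled",
    "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}}}'
```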
MFA ON ALL ROLE ASSUMPTIONS
The attacker finds an IAM access key in a public GitHub repository. They test it:
aws sts get-caller-identity
Active. AdministratorAccess attached. Jackpot.
They try to assume the AdminRole to get fresh credentials:
aws sts assume-role --role-arn arn:aws:iam::xxx:role/AdminRole --role-session-name pwn
Error: MFA required.
They try direct actions with the key:
aws s3 ls s3://prod-data/
AccessDenied.
Why this happened: The trust policy on the role requires aws:MultiFactorAuthPresent to be true. The stolen access key belongs to a human with a physical MFA device. Without that six-digit code rotating every 30 seconds, the key cannot assume any role or perform any sensitive action. The key alone is useless. The attacker has a string, nothing more.
An access key without its MFA device is just a string.
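A sketch of the trust policy behind this wall; the account ID is a placeholder:

```shell
# Only principals from this account, and only with MFA present, may assume the role
aws iam update-assume-role-policy --role-name AdminRole --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": "sts:AssumeRole",
    "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}}
  }]
}'
```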
ORGANIZATION TRAIL TO SEPARATE LOG ACCOUNT
The attacker compromises the production account. They become root. They disable CloudTrail.
aws cloudtrail delete-trail --name prod-trail
AccessDenied.
They check who owns the trail:
aws cloudtrail describe-trails --trail-name-list prod-trail
The trail is in a different account. Account 444455556666. The log archive account.
They cannot access that account. They don't even know the email address for it. It has no IAM users. No access keys. No console login. It exists only to store logs.
They try to stop logging at the organization level:
aws organizations disable-aws-service-access --service-principal cloudtrail.amazonaws.com
AccessDenied. That requires permissions in the management account.
Why this happened: The trail is an organization trail, created from the management account and delivering to a bucket in a dedicated log-archive account. The compromised member account has no permission to modify or delete either one. The log account has no attack surface: no users, no keys, no console. The attacker cannot reach it. Every API call they make in the production account is delivered to the log account's S3 bucket before they can even try to stop it. They cannot touch the evidence.
The security team will see everything tomorrow morning. The attacker knows this. They have hours, not weeks.
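A sketch of the setup, run from the management account; the names are illustrative. The bucket lives in the log-archive account, whose bucket policy grants cloudtrail.amazonaws.com write access:

```shell
# Organization trail: every member account logs to a bucket it cannot touch
aws cloudtrail create-trail \
  --name org-trail \
  --s3-bucket-name org-cloudtrail-logs-444455556666 \
  --is-organization-trail \
  --is-multi-region-trail
aws cloudtrail start-logging --name org-trail
```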
KMS KEY POLICY WITH ENCRYPTION CONTEXT CONSTRAINT
The attacker finds an IAM role with kms:Decrypt permission. They assume the role.
aws sts assume-role --role-arn arn:aws:iam::xxx:role/BackupRole
Success. They list KMS keys. One key: alias/prod-database-key. They steal an encrypted RDS snapshot. They try to decrypt:
aws kms decrypt --ciphertext-blob fileb://db-snapshot-key.bin
AccessDenied.
Why this happened: The KMS key policy allows Decrypt only when the encryption context matches specific key-value pairs: service=s3 and bucket=backup-bucket. The RDS snapshot was encrypted with service=rds. The context doesn't match. The policy denies the request. The attacker has decrypt permissions but cannot decrypt anything valuable. The key policy locks them to one specific service and one specific bucket.
They have a role with decrypt permissions but cannot decrypt anything that matters.
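A sketch of the key-policy statement doing the blocking; the principal ARN, context keys, and values are illustrative, not taken from a real key policy:

```shell
# Decrypt is allowed only when the ciphertext's encryption context matches exactly
statement='{
  "Effect": "Allow",
  "Principal": {"AWS": "arn:aws:iam::111122223333:role/BackupRole"},
  "Action": "kms:Decrypt",
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "kms:EncryptionContext:service": "s3",
      "kms:EncryptionContext:bucket": "backup-bucket"
    }
  }
}'
echo "$statement"
```

Encryption context is bound to the ciphertext at encrypt time. RDS supplies its own context when it encrypts a snapshot, so nothing RDS produced can ever satisfy this statement.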
GUARDDUTY WITH AUTOMATED REMEDIATION
The attacker compromises an EC2 instance. The instance role has no S3 permissions. They try aws s3 ls anyway, hoping for a misconfiguration.
AccessDenied.
Two seconds later, they try another command:
aws sts get-caller-identity
Error: Not authorized.
They try again. Same error. Their entire role is now denied.
Why this happened: GuardDuty detected the unauthorized s3:ListBucket attempt and generated a finding: UnauthorizedAccess:EC2/S3BlockedAccess. EventBridge triggered a Lambda. The Lambda attached a deny-all policy to the instance role. The attacker still has shell on the instance, but the IAM role now has no permissions. No API calls work. Cannot enumerate. Cannot pivot. Cannot exfiltrate.
They are sitting in a Linux shell with no way to interact with AWS. The instance is a dead end.
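The plumbing is small. A hedged sketch of the two moving parts: an EventBridge rule that fires on GuardDuty findings, and the single call the remediation Lambda makes (the role name is a placeholder):

```shell
# Route GuardDuty findings toward the remediation Lambda
aws events put-rule --name guardduty-findings \
  --event-pattern '{"source": ["aws.guardduty"], "detail-type": ["GuardDuty Finding"]}'

# The Lambda's core action: clamp the role with the AWS-managed deny-all policy
aws iam attach-role-policy \
  --role-name compromised-instance-role \
  --policy-arn arn:aws:iam::aws:policy/AWSDenyAll
```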
SECRETS MANAGER WITH AUTOMATIC ROTATION
The attacker finds a database password in a Lambda environment variable:
DATABASE_PASSWORD=5up3rS3cr3t
They connect to the RDS database from their laptop:
mysql -h prod-db.xxx.rds.amazonaws.com -u admin -p5up3rS3cr3t
Connected. They create a backdoor user:
CREATE USER attacker@'%' IDENTIFIED BY 'persistent';
GRANT ALL PRIVILEGES ON *.* TO attacker@'%';
They test the backdoor. Works. They disconnect.
The next day, they try to connect with the original password:
mysql -h prod-db.xxx.rds.amazonaws.com -u admin -p5up3rS3cr3t
Access denied. Password rotated. They try the backdoor user:
mysql -h prod-db.xxx.rds.amazonaws.com -u attacker -ppersistent
Access denied. The backdoor is gone.
They try to find the new secret. The Lambda environment variable is gone. The new secret is in Secrets Manager. They try to fetch it:
aws secretsmanager get-secret-value --secret-id prod/db/password
AccessDenied.
Why this happened: The secret rotates every 24 hours. The rotation Lambda updates the master password in RDS and also rotates all user passwords, deleting any that aren't expected. The attacker's backdoor user is removed. The new secret is stored in Secrets Manager with a resource policy that only allows access from the application role which the attacker never had. The window was 24 hours. They missed it.
The attack expired.
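A sketch of turning the rotation on; the Lambda ARN is a placeholder, and the rotation function itself must implement the create/set/test/finish steps, including pruning unexpected database users:

```shell
# Rotate the secret every day; one day is the shortest rotation interval
aws secretsmanager rotate-secret \
  --secret-id prod/db/password \
  --rotation-lambda-arn arn:aws:lambda:us-east-1:111122223333:function:rotate-db-secret \
  --rotation-rules '{"AutomaticallyAfterDays": 1}'
```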
VPC ENDPOINT WITH SOURCE VPC CONDITION
The attacker compromises an EC2 instance in VPC-67890. They want to exfiltrate data from a sensitive S3 bucket.
They try to access the bucket:
aws s3 cp s3://sensitive-bucket/data.csv ./
Connection timeout. The subnet has no internet gateway. No NAT. Only VPC endpoints.
They find the S3 endpoint and check its policy:
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::sensitive-bucket/*",
  "Condition": {
    "StringNotEquals": {
      "aws:SourceVpc": "vpc-12345"
    }
  }
}
The endpoint only allows requests from VPC-12345. Their instance is in VPC-67890.
They try to create their own endpoint in VPC-67890:
aws ec2 create-vpc-endpoint --vpc-id vpc-67890 --service-name com.amazonaws.us-east-1.s3
AccessDenied.
Why this happened: The S3 bucket is not public. It is only accessible through a specific VPC endpoint. The endpoint policy denies any request where aws:SourceVpc is not vpc-12345. The compromised instance is in a different VPC. The attacker cannot create a new endpoint because their IAM role lacks ec2:CreateVpcEndpoint permission. They are inside AWS but cannot reach the data.
Network segmentation works. The bucket might as well be on another planet.
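The endpoint policy is only half the wall. The other half is a bucket policy that refuses every request that did not arrive through the approved endpoint; the endpoint ID here is a placeholder:

```shell
# Deny all access to the bucket unless it came through the approved VPC endpoint
aws s3api put-bucket-policy --bucket sensitive-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyOutsideApprovedEndpoint",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::sensitive-bucket",
      "arn:aws:s3:::sensitive-bucket/*"
    ],
    "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc123def456"}}
  }]
}'
```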
SERVICE CONTROL POLICY AT ORGANIZATION LEVEL
The attacker compromises Account A. They escalate to full administrator. They run the standard cleanup:
aws cloudtrail delete-trail --name account-a-trail
Access denied by SCP.
aws guardduty delete-detector --detector-id xxx
Access denied by SCP.
aws iam create-user --user-name attacker-backdoor
Access denied by SCP.
They try to leave the organization:
aws organizations leave-organization
Access denied.
Why this happened: The organization has a Service Control Policy (SCP) attached at the root. It denies specific security-disabling actions across every member account. Even with full administrator permissions in Account A, the attacker cannot override an SCP. Cannot delete CloudTrail. Cannot disable GuardDuty. Cannot create IAM users. Cannot leave the organization. Every action is still logged to a trail they cannot delete.
They have maximum permissions but cannot do anything that matters. Too much noise. Too little gain. They abandon the account.
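A sketch of such a policy, created and attached from the management account. The action list is illustrative; real deployments usually carve out a break-glass role:

```shell
# Deny security-control tampering across every member account
aws organizations create-policy \
  --name deny-security-tampering \
  --type SERVICE_CONTROL_POLICY \
  --description "Member accounts cannot disable logging or detection" \
  --content '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Deny",
      "Action": [
        "cloudtrail:DeleteTrail",
        "cloudtrail:StopLogging",
        "guardduty:DeleteDetector",
        "iam:CreateUser",
        "organizations:LeaveOrganization"
      ],
      "Resource": "*"
    }]
  }'
# Then attach it: aws organizations attach-policy --policy-id <p-id> --target-id <root-id>
```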
IAM ROLES ANYWHERE WITH TRUST ANCHOR
The attacker compromises an on-premise server. They look for AWS credentials:
cat ~/.aws/credentials
Empty. No files. They search the filesystem:
find / -name "*access*" -o -name "*secret*" 2>/dev/null
Nothing.
They check running processes:
ps aux | grep aws
A process is running: aws_signing_helper, the IAM Roles Anywhere credential helper. It holds a certificate and private key.
They dump the certificate and key from memory. Now they try to assume an IAM role:
aws_signing_helper credential-process \
--certificate stolen-cert.pem \
--private-key stolen-key.pem \
--trust-anchor-arn arn:aws:rolesanywhere::xxx:trust-anchor/ta-123 \
--profile-arn arn:aws:rolesanywhere::xxx:profile/prod-role \
--role-arn arn:aws:iam::xxx:role/prod-role
Success. They get temporary credentials valid for 1 hour.
They exfiltrate data for 55 minutes. The session expires.
They try to create a new session with the same certificate and key. Works. Another hour.
12 hours later, they try again. Denied.
Why this happened: IAM Roles Anywhere exchanges X.509 certificates for temporary credentials, so no long-term access keys ever exist on the server. The certificates the CA issues are valid for only 12 hours, and Roles Anywhere rejects an expired certificate. The attacker can steal the current certificate and key, but they cannot mint new ones without compromising the certificate authority, which is air-gapped. After 12 hours, the stolen certificate is useless. No persistence possible.
The window closes. The attack ends.
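A sketch of the one-time setup that creates this wall: registering the on-premise CA as a trust anchor. The name and certificate data are placeholders; the 12-hour window comes from the CA issuing certificates with a 12-hour validity:

```shell
# Register the on-prem CA; certificates it issues can be exchanged for
# temporary credentials, but only while the certificate itself is still valid
aws rolesanywhere create-trust-anchor --cli-input-json '{
  "name": "onprem-ca",
  "enabled": true,
  "source": {
    "sourceType": "CERTIFICATE_BUNDLE",
    "sourceData": {"x509CertificateData": "-----BEGIN CERTIFICATE-----..."}
  }
}'
```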
WHAT THESE STORIES TEACH
Each of these controls stops a specific attack path. None of them is perfect. Each has gaps.
IMDSv2 with hop limit 1 stops credential exfiltration but not local access.
S3 Object Lock stops evidence tampering but not data theft.
MFA on role assumptions stops stolen keys but not session hijacking.
Separate log accounts stop log deletion but not the attack itself.
KMS encryption context stops lateral movement but not direct access.
GuardDuty remediation stops API abuse but not shell access.
Secrets Manager rotation stops long-term persistence but not short-term exfiltration.
VPC endpoint conditions stop cross-VPC access but not compromise within the VPC.
SCPs stop security control disablement but not resource exploitation.
IAM Roles Anywhere stops long-term key theft but not certificate theft.
The defender's job is not to build an impenetrable wall. It is to force the attacker to fail somewhere. To make them hit one of these walls. To ensure that when they do, the evidence remains and the attack dies.
That is what gives attackers nightmares. Not any single control. The knowledge that somewhere in your account, one of these walls is waiting for them.