Every AWS audit I run turns up the same thing:
15-40% of EBS volumes orphaned. Detached. Still provisioned. Still billing.
The team's reaction is always identical: "Oh, we'll run a cleanup script this weekend."
They do. Two months later, the orphans are back.
Because cleanup is a downstream fix. It doesn't stop the upstream leak.
The actual causes:
→ Auto-scaling groups that terminate instances but leave behind data volumes that never had delete-on-termination set
→ Terraform runs that recreate resources but leave old volumes dangling
→ Dev scripts that spin up one-off EBS volumes for testing that nobody ever deletes
→ CloudFormation stacks partially destroyed
In one audit last month, the company was paying ₹80K/month for 47 orphaned gp2 volumes. Three of them were 2 TB volumes still tagged to an engineer who left in 2024.
Fix the policy, not the mess:
→ Every volume must have an owner tag at creation
→ IaC: set delete_on_termination = true on every instance block device
→ Service Control Policy: block untagged volume creation entirely (sketch below)
→ Weekly report flagging volumes detached 7+ days; snapshot and delete if the owner doesn't respond within 14 days (sketch below)
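
A minimal sketch of that SCP, assuming your tagging standard uses an "owner" tag key and that you attach the policy at the OU level. The policy name and tag key here are placeholders, not anything from the audit:

```python
import json

import boto3

# Deny ec2:CreateVolume whenever the request does not carry an "owner" tag.
# "owner" is an assumed tag key - swap in whatever your tagging standard uses.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUntaggedEbsVolumes",
        "Effect": "Deny",
        "Action": "ec2:CreateVolume",
        "Resource": "*",
        # aws:RequestTag/owner is null when the tag is missing at creation time
        "Condition": {"Null": {"aws:RequestTag/owner": "true"}},
    }],
}

org = boto3.client("organizations")
org.create_policy(
    Name="deny-untagged-ebs-volumes",  # placeholder name
    Description="Block EBS volume creation without an owner tag",
    Content=json.dumps(scp_document),
    Type="SERVICE_CONTROL_POLICY",
)
```

Attach it to the OUs where workloads run and untagged creation fails loudly at request time instead of showing up in next quarter's audit.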
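And a rough version of the weekly report, assuming "unused" is approximated as "in the available (detached) state and older than 7 days". A real implementation would also check CloudWatch I/O metrics or CloudTrail for the last detach event:

```python
from datetime import datetime, timedelta, timezone

import boto3

UNUSED_DAYS = 7  # flag volumes that look idle for at least this long

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=UNUSED_DAYS)

orphans = []
paginator = ec2.get_paginator("describe_volumes")
# "available" = created but not attached to any instance
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    for vol in page["Volumes"]:
        if vol["CreateTime"] < cutoff:  # crude proxy for "unused for a while"
            tags = {t["Key"]: t["Value"] for t in vol.get("Tags", [])}
            orphans.append((vol["VolumeId"], vol["Size"], tags.get("owner", "UNTAGGED")))

for vol_id, size_gib, owner in orphans:
    print(f"{vol_id}  {size_gib} GiB  owner={owner}")
```

Pipe that output into Slack or email per owner. After 14 days with no response: snapshot, then delete.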
The scripts we write to clean up are proof our policies are broken.
If this reminds you of a dashboard you've been putting off, repost. There's a VPE or a CTO in your network burning ₹5L/year on this exact pattern.
