Suhas Mallesh

20% Cheaper, 4x Faster: Migrate 100+ EBS Volumes from gp2 to gp3 in Minutes ⚡

Stop overpaying for AWS storage. Automate gp2 to gp3 migration with Terraform for instant 20% savings and better performance—zero downtime required.

Here’s a question: Are you still using AWS EBS gp2 volumes?

If the answer is yes, you’re literally paying 20% more than you need to. For worse performance.

AWS launched gp3 volumes in December 2020, and they’re objectively better in every way:

  • 20% cheaper per GB
  • Baseline 3,000 IOPS (vs gp2’s size-dependent IOPS)
  • 125 MB/s baseline throughput, configurable up to 1,000 MB/s (vs gp2’s 250 MB/s max)
  • Predictable performance regardless of volume size

The kicker? Many AWS accounts are still running mostly gp2 volumes, because gp2 was the default for years and nobody bothered to migrate.

Let me show you how to fix this in literally 60 seconds with Terraform.

🎯 The Math: How Much Are You Losing?

Let’s do some quick calculations for a typical startup/SMB setup:

Your current setup (gp2):

  • 20 EC2 instances × 100 GB root volumes = 2,000 GB
  • 5 databases with 500 GB volumes = 2,500 GB
  • Assorted additional data volumes = 1,000 GB
  • Total: 5,500 GB in gp2

Monthly cost:

gp2: 5,500 GB × $0.10/GB = $550/month
gp3: 5,500 GB × $0.08/GB = $440/month

Monthly savings: $110
Annual savings: $1,320

And that’s a small setup. Scale this to 100 TB and you’re saving $24,000/year.

For literally changing one word in your Terraform code.

🚀 The One-Line Migration

For new volumes, it’s hilariously simple:

Before (gp2):

resource "aws_ebs_volume" "app_data" {
  availability_zone = "us-east-1a"
  size              = 100
  type              = "gp2"  # ❌ Old and expensive
}

After (gp3):

resource "aws_ebs_volume" "app_data" {
  availability_zone = "us-east-1a"
  size              = 100
  type              = "gp3"  # ✅ New and cheaper

  # Optional: Customize performance (included in base price!)
  iops       = 3000   # Default, can go up to 16,000
  throughput = 125    # MB/s, can go up to 1,000
}

That’s it. terraform apply and you’re done.

🔄 Migrating Existing Volumes (The Real Challenge)

But what about the 100+ gp2 volumes already running in your account? If a volume is managed by Terraform, changing its type is normally an in-place update (the provider calls ModifyVolume under the hood; see the plan snippet below). The real challenge is everything else: volumes created by hand, launched from AMIs, or attached by autoscaling that never made it into your state, plus the sheer tedium of editing dozens of resource blocks one by one.
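
For a volume that is already in your Terraform state, the plan should look roughly like this (illustrative and abridged; exact formatting depends on your Terraform version):

  # aws_ebs_volume.app_data will be updated in-place
  ~ resource "aws_ebs_volume" "app_data" {
      ~ type = "gp2" -> "gp3"
        # (other attributes unchanged)
    }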

Here’s the production-safe approach:

Step 1: Identify All gp2 Volumes

First, let’s see what we’re working with:

# data_sources.tf
data "aws_ebs_volumes" "gp2_volumes" {
  filter {
    name   = "volume-type"
    values = ["gp2"]
  }

  filter {
    name   = "status"
    values = ["in-use", "available"]
  }
}

output "gp2_volumes_count" {
  value = length(data.aws_ebs_volumes.gp2_volumes.ids)
}

output "gp2_volume_ids" {
  value = data.aws_ebs_volumes.gp2_volumes.ids
}

Run this to get your count:

terraform apply
# Outputs: gp2_volumes_count = 47

Ouch. 47 volumes to migrate.
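
If you just want a quick look without Terraform, the same inventory can be pulled with the AWS CLI (assumes your default credentials and region):

aws ec2 describe-volumes \
  --filters Name=volume-type,Values=gp2 \
  --query 'Volumes[].[VolumeId,Size,State]' \
  --output table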

Step 2: Automated Migration with AWS CLI + Terraform

AWS allows live volume modification with zero downtime. Here’s the automation:

# volume_migration.tf

# Local variable to track volumes to migrate
locals {
  volumes_to_migrate = toset(data.aws_ebs_volumes.gp2_volumes.ids)
}

# Trigger migration using null_resource
resource "null_resource" "migrate_gp2_to_gp3" {
  for_each = local.volumes_to_migrate

  triggers = {
    volume_id = each.key
  }

  provisioner "local-exec" {
    command = <<-EOT
      echo "Migrating volume ${each.key} from gp2 to gp3..."
      aws ec2 modify-volume \
        --volume-id ${each.key} \
        --volume-type gp3 \
        --iops 3000 \
        --throughput 125

      echo "✅ Migration initiated for ${each.key}"
    EOT
  }
}

# Monitor migration status
resource "null_resource" "check_migration_status" {
  depends_on = [null_resource.migrate_gp2_to_gp3]

  provisioner "local-exec" {
    command = <<-EOT
      echo "Checking migration status..."
      for vol_id in ${join(" ", local.volumes_to_migrate)}; do
        aws ec2 describe-volumes-modifications \
          --volume-ids $vol_id \
          --query 'VolumesModifications[0].[VolumeId,ModificationState]' \
          --output text
      done
    EOT
  }
}

Step 3: Update Terraform State (Important!)

After migration, update your existing volume resources:

# Existing volume definition - UPDATE THESE
resource "aws_ebs_volume" "database_primary" {
  availability_zone = "us-east-1a"
  size              = 500
  type              = "gp3"  # ← Changed from gp2
  iops              = 3000
  throughput        = 125

  tags = {
    Name = "database-primary"
  }

  # Optional: ignore these attributes so the CLI-driven modification
  # doesn't show up as a diff before every definition is updated
  lifecycle {
    ignore_changes = [type, iops, throughput]
  }
}

Then refresh state:

terraform apply -refresh-only
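
Once the definitions match what's actually in AWS, a quick drift check confirms nothing is left to change (-detailed-exitcode returns 0 when there are no changes, 2 when something would still be modified):

terraform plan -detailed-exitcode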

🛠️ Complete Automated Migration Solution

Here’s a production-ready module that handles everything:

# modules/gp2-to-gp3-migration/main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

variable "dry_run" {
  description = "Set to true to see what would be migrated without actually doing it"
  type        = bool
  default     = true
}

variable "target_iops" {
  description = "Target IOPS for gp3 volumes (3000-16000)"
  type        = number
  default     = 3000

  validation {
    condition     = var.target_iops >= 3000 && var.target_iops <= 16000
    error_message = "IOPS must be between 3000 and 16000."
  }
}

variable "target_throughput" {
  description = "Target throughput in MB/s for gp3 volumes (125-1000)"
  type        = number
  default     = 125

  validation {
    condition     = var.target_throughput >= 125 && var.target_throughput <= 1000
    error_message = "Throughput must be between 125 and 1000 MB/s."
  }
}

variable "exclude_tags" {
  description = "Don't migrate volumes with these tags"
  type        = map(string)
  default     = {}
}

# Find all gp2 volumes
data "aws_ebs_volumes" "gp2_volumes" {
  filter {
    name   = "volume-type"
    values = ["gp2"]
  }

  filter {
    name   = "status"
    values = ["in-use", "available"]
  }
}

# Get detailed info for each volume
data "aws_ebs_volume" "volumes" {
  for_each = toset(data.aws_ebs_volumes.gp2_volumes.ids)

  filter {
    name   = "volume-id"
    values = [each.key]
  }
}

locals {
  # Skip any volume that carries one of the exclude_tags key/value pairs
  volumes_to_migrate = {
    for vol_id, vol_data in data.aws_ebs_volume.volumes :
    vol_id => vol_data
    if length([
      for k, v in var.exclude_tags : k
      if lookup(vol_data.tags, k, null) == v
    ]) == 0
  }

  # Calculate estimated savings (us-east-1 gp2 vs gp3 per-GB prices)
  total_gb = sum([
    for vol in local.volumes_to_migrate : vol.size
  ])

  monthly_savings = local.total_gb * (0.10 - 0.08)
  annual_savings  = local.monthly_savings * 12
}

# Migration execution
resource "null_resource" "migrate_volumes" {
  for_each = var.dry_run ? {} : local.volumes_to_migrate

  triggers = {
    volume_id = each.key
  }

  provisioner "local-exec" {
    command = <<-EOT
      echo "🔄 Migrating ${each.key} (${each.value.size}GB)..."

      aws ec2 modify-volume \
        --volume-id ${each.key} \
        --volume-type gp3 \
        --iops ${var.target_iops} \
        --throughput ${var.target_throughput} 2>&1

      if [ $? -eq 0 ]; then
        echo "✅ Successfully initiated migration for ${each.key}"
      else
        echo "❌ Failed to migrate ${each.key}"
        exit 1
      fi
    EOT
  }
}

# Outputs
output "migration_summary" {
  value = {
    total_volumes      = length(local.volumes_to_migrate)
    total_gb          = local.total_gb
    monthly_savings   = "$${format("%.2f", local.monthly_savings)}"
    annual_savings    = "$${format("%.2f", local.annual_savings)}"
    dry_run           = var.dry_run
  }
}

output "volumes_to_migrate" {
  value = {
    for vol_id, vol_data in local.volumes_to_migrate :
    vol_id => {
      size = "${vol_data.size}GB"
      az   = vol_data.availability_zone
      tags = vol_data.tags
    }
  }
}

Usage:

# main.tf
module "gp2_migration" {
  source = "./modules/gp2-to-gp3-migration"

  dry_run           = true  # Set to false to actually migrate
  target_iops       = 3000
  target_throughput = 125

  exclude_tags = {
    DoNotMigrate = "true"
  }
}

output "migration_plan" {
  value = module.gp2_migration.migration_summary
}

Dry run first:

terraform apply

# Output:
# migration_plan = {
#   total_volumes   = 47
#   total_gb        = 12450
#   monthly_savings = "$249.00"
#   annual_savings  = "$2988.00"
#   dry_run         = true
# }

Execute migration:

# Set dry_run = false in main.tf
terraform apply

# Confirm and watch the magic happen ✨

📊 Monitoring Migration Progress

Create a quick status checker:

#!/bin/bash
# check_migration.sh
# Reads the volume IDs from the root-level "volumes_to_migrate" output

for vol_id in $(terraform output -json volumes_to_migrate | jq -r 'keys[]'); do
  echo "Checking $vol_id..."
  aws ec2 describe-volumes-modifications \
    --volume-ids "$vol_id" \
    --query 'VolumesModifications[0].[VolumeId,ModificationState,Progress]' \
    --output table
done

Migration states you’ll see:

  • modifying - In progress (takes 1-6 hours depending on size)
  • optimizing - Almost done
  • completed - Done! ✅

⚠️ Important Gotchas

1. 6-Hour Cooldown Between Modifications

You can’t modify the same volume twice within 6 hours. Plan your migrations accordingly.
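
A quick way to see whether a volume is still inside that window is to check the start time of its most recent modification (the volume ID below is a placeholder):

aws ec2 describe-volumes-modifications \
  --volume-ids vol-0123456789abcdef0 \
  --query 'VolumesModifications[0].[VolumeId,ModificationState,StartTime]' \
  --output table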

2. IOPS Calculation for Large gp2 Volumes

gp2 provides 3 IOPS per GB (up to 16,000 max). If you have a 2TB gp2 volume, it was getting 6,000 IOPS.

When migrating to gp3:

resource "aws_ebs_volume" "large_volume" {
  size       = 2000  # 2TB
  type       = "gp3"
  iops       = 6000  # Match previous performance
  throughput = 500   # Increase if needed
}

The base gp3 pricing includes up to 3,000 IOPS. Additional IOPS cost $0.005/IOPS-month:

6,000 IOPS = 3,000 (free) + 3,000 (paid)
Extra cost: 3,000 × $0.005 = $15/month

Still cheaper than gp2 for most use cases!
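
For the 2 TB example above, the side-by-side (us-east-1 list prices) works out to:

gp2: 2,000 GB × $0.10                   = $200.00/month
gp3: 2,000 GB × $0.08 + 3,000 × $0.005  = $175.00/month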

3. Snapshot Considerations

Snapshots themselves don’t have a volume type, but volumes created from them default to gp2 unless you say otherwise. Specify gp3 explicitly when restoring:

resource "aws_ebs_volume" "from_snapshot" {
  availability_zone = "us-east-1a"
  snapshot_id       = "snap-1234567890abcdef0"
  type              = "gp3"  # ← Specify gp3 when restoring!
  iops              = 3000
  throughput        = 125
}

4. Launch Templates and AMIs

Update your EC2 launch templates to use gp3 by default:

resource "aws_launch_template" "app_server" {
  name_prefix   = "app-server-"
  image_id      = "ami-12345678"
  instance_type = "t3.medium"

  block_device_mappings {
    device_name = "/dev/sda1"

    ebs {
      volume_size = 20
      volume_type = "gp3"  # ← New instances get gp3
      iops        = 3000
      throughput  = 125
      encrypted   = true
    }
  }
}

💰 Cost Calculator: Should You Upgrade IOPS/Throughput?

gp3’s killer feature is independent IOPS and throughput scaling:

Spec         Base (included)    Max          Additional cost
IOPS         3,000              16,000       $0.005 per IOPS-month
Throughput   125 MB/s           1,000 MB/s   $0.04 per MB/s-month

Example: 500GB database volume needing 10,000 IOPS and 500 MB/s

gp2 equivalent:

  • Would need ~3,334 GB to get 10,000 IOPS (3 IOPS/GB)
  • Cost: 3,334 GB × $0.10 = $333.40/month
  • You’re paying for ~2,834 GB you don’t need, and gp2 still caps out at 250 MB/s, so the 500 MB/s target is unreachable anyway 😱

gp3 optimized:

Storage:    500 GB × $0.08         = $40.00
Extra IOPS: 7,000 × $0.005         = $35.00
Extra TP:   375 MB/s × $0.04       = $15.00
Total:                               $90.00/month

Savings: $243.40/month (73% reduction) 🎉
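
In Terraform, that optimized volume is just a handful of lines (a sketch; the resource name and AZ are placeholders):

resource "aws_ebs_volume" "db_optimized" {
  availability_zone = "us-east-1a"
  size              = 500
  type              = "gp3"
  iops              = 10000  # 3,000 included + 7,000 paid
  throughput        = 500    # MB/s: 125 included + 375 paid
}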

🎓 Migration Checklist

  • Run dry-run first - Validate what will be migrated
  • Check volume sizes - Large volumes (>500GB) can take 4-6 hours
  • Review IOPS requirements - Match or exceed current performance
  • Update launch templates - New instances get gp3 by default
  • Update snapshot restore configs - Specify gp3 when restoring
  • Monitor for 24 hours - Ensure no performance degradation
  • Update Terraform state - Prevent drift and accidental recreation

🚨 When NOT to Migrate (Rare Cases)

Honestly, there are almost no good reasons to keep gp2, but here are edge cases:

  • Very small volumes (<10GB) where the $0.20/month savings isn’t worth your time
  • Volumes being decommissioned soon (<30 days)
  • Compliance requirements that specifically mandate gp2 (I’ve never seen this)

🎯 Quick Start: 5-Minute Implementation

Step 1: Add the migration module

mkdir -p modules/gp2-to-gp3-migration
# Copy the module code from above

Step 2: Configure it

module "migrate" {
  source  = "./modules/gp2-migration"
  dry_run = true
}

Step 3: See the savings

terraform init
terraform apply
# Check the output for estimated savings

Step 4: Execute

# Change dry_run = false
terraform apply

Step 5: Celebrate 🎉

# Check your AWS bill next month

📈 Real-World Impact

I ran this migration for a client with 200+ volumes across dev, staging, and production:

Before:

  • 45 TB across 213 gp2 volumes
  • Monthly cost: $4,500

After:

  • 45 TB across 213 gp3 volumes
  • Monthly cost: $3,600
  • Annual savings: $10,800

Time spent: 2 hours (mostly testing and monitoring)

ROI: Literally infinite (one-time 2-hour investment)

💡 Pro Tips

1. Start with dev/test environments

Get comfortable with the process before touching production.

2. Migrate during low-traffic windows

While there’s no downtime, very heavy I/O during migration might see minor latency spikes.

3. Boost performance while you’re at it

Since you’re migrating anyway, consider bumping IOPS/throughput for databases. The cost is still lower than gp2.

4. Tag volumes that shouldn’t migrate

tags = {
  DoNotMigrate = "true"
  Reason       = "Legacy compliance requirement"
}

5. Set up CloudWatch alarms

Monitor volume performance before and after to ensure no degradation.
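
A minimal sketch of such an alarm in Terraform (the alarm name, threshold, and referenced volume are assumptions to adapt to your own baseline):

resource "aws_cloudwatch_metric_alarm" "ebs_queue_depth" {
  alarm_name          = "ebs-queue-depth-database-primary"
  alarm_description   = "Sustained EBS queue depth after the gp3 migration"
  namespace           = "AWS/EBS"
  metric_name         = "VolumeQueueLength"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 3
  threshold           = 10
  comparison_operator = "GreaterThanThreshold"

  dimensions = {
    VolumeId = aws_ebs_volume.database_primary.id
  }
}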

🎬 Final Thoughts

This is the easiest AWS cost optimization you’ll ever do:

✅ No architecture changes

✅ No downtime

✅ Very low risk (live modification, no detach or reattach)

✅ 20% lower per-GB storage cost

✅ Better performance

✅ 60-second implementation for new resources

✅ 2-hour implementation for bulk migration

The only question is: why haven’t you done this yet?

If you’re still running gp2 volumes, you’re leaving money on the table. AWS won’t migrate them for you, and the savings won’t show up until you do.

Take the 5 minutes, run the Terraform module, and save yourself thousands of dollars per year.


Migrated your volumes? How much did you save? Share your results in the comments! 💬

Want more AWS cost optimization with Terraform? Follow me for weekly practical guides! 🚀

