DEV Community

Darian Vance

Posted on • Originally published at wp.me

Solved: Migrating many Route53 hosted zones and records to Terraform – best approach?

🚀 Executive Summary

TL;DR: Migrating hundreds of existing AWS Route53 hosted zones and their records into Terraform presents a significant challenge due to state drift, where Terraform doesn’t recognize manually created resources. The most effective and recommended solution for large-scale migrations is to utilize specialized tooling like Terraformer, which automates the generation of Terraform HCL and state files for existing infrastructure.

🎯 Key Takeaways

  • The core problem in migrating existing Route53 zones to Terraform is ‘state drift,’ where Terraform’s state file is unaware of manually created resources.
  • The process of bringing existing AWS resources under Terraform management is called ‘importing,’ preventing Terraform from attempting to recreate them.
  • For non-trivial migrations (more than a handful of zones), dedicated tools like Terraformer or Former2 are highly recommended over manual scripting due to their ability to handle zones, records, and dependencies automatically.
  • When using tools like Terraformer, always run the import in a separate, temporary directory first to inspect and clean up the generated HCL code and state file before merging into your main codebase.
  • Manually editing the terraform.tfstate file is an extremely dangerous ‘nuclear’ option that bypasses safety checks and can easily corrupt your state, making it suitable only as a last resort for single, simple resources.

Wrangling hundreds of existing Route53 zones into Terraform can feel daunting. This guide breaks down the best real-world approaches, from quick scripting to robust tooling, to get your DNS under control without causing an outage.

So, You’ve Inherited 500 Route53 Zones. Now What? A Guide to Terraform Migration.

I still remember the night. It was 2 AM, the on-call pager was screaming, and our main customer portal was down. The cause? A “simple” manual DNS change in the AWS console. Someone had updated a CNAME for portal.our-awesome-app.com but fat-fingered the destination. It was a five-minute fix once we found it, but it took an hour of frantic searching. That night, I swore: never again. All DNS changes would go through code. No exceptions. If you’re reading this, you’ve probably reached a similar conclusion and are now staring at a mountain of existing Route53 zones wondering, “How the heck do I get all of this into Terraform without breaking everything?”

I’ve seen this exact question pop up on Reddit, and it hits home. You’ve got dozens, maybe hundreds, of zones created over years by different teams, and now you’re tasked with taming the beast. Let’s walk through it.

First, Why Is This So Hard? The State of Your State

Before we dive into the “how,” let’s quickly cover the “why.” The core of this problem is state drift. Terraform maintains a state file (terraform.tfstate) that acts as its source of truth. It compares your code (the desired state) to this file (the last known state) to figure out what changes to make in AWS (the real state).

When your Route53 zones were created manually, Terraform has no idea they exist. If you just write code for them and run terraform apply, Terraform will see a mismatch: “My state file is empty, but the code says these zones should exist. Therefore, I must create them!” It then tries to create resources that already exist, leading to a spectacular explosion of errors. The key is to tell Terraform about the existing resources and bring them under its management. This process is called importing.
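To make the import idea concrete, here's a minimal sketch for a single zone. The zone name `example.com` and the zone ID `Z1234567890ABC` are placeholders, not real values:

```shell
# Write a minimal resource block for a zone that already exists in AWS.
# The zone name and ID below are placeholders for illustration.
cat > example_zone.tf << 'EOF'
resource "aws_route53_zone" "example_com" {
  name = "example.com"
}
EOF

# Then tell Terraform the zone already exists, so the next apply
# adopts it instead of trying to create it:
#   terraform import aws_route53_zone.example_com Z1234567890ABC
```

After a successful import, `terraform plan` should report no changes for that zone. Doing this by hand for one zone is fine; doing it for hundreds is what the rest of this post is about.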

Here are three battle-tested ways to do it, ranging from a quick script to a full-blown strategic approach.

Solution 1: The Quick & Dirty CLI Hammer

Let’s be honest, sometimes you just need to get it done. For a one-time migration of a manageable number of zones (say, under 50), a well-crafted shell script can be your best friend. It’s not elegant, but it’s effective.

The strategy is simple: list all your hosted zones, loop through them to generate the basic Terraform HCL, and then loop through them again to generate the terraform import commands.

Step 1: Generate the HCL Files

First, you need to create the resource blocks in your .tf files. A simple shell script using the AWS CLI and jq can get you 90% of the way there. This script lists your zones and prints an aws_route53_zone resource block for each one.

#!/bin/bash
# Filename: generate_zones_hcl.sh

# Get a clean list of zone IDs and names
aws route53 list-hosted-zones | jq -r '.HostedZones[] | "\(.Id | split("/") | .[-1]) \(.Name)"' | \
while read -r zone_id zone_name; do
    # Sanitize the zone name to create a valid Terraform resource name
    # example.com. -> example_com
    resource_name=$(echo "$zone_name" | sed 's/\.$//' | tr '.' '_')

    # Output the Terraform resource block to a file
    cat << EOF >> route53_zones.tf
resource "aws_route53_zone" "$resource_name" {
  name = "$zone_name"
}

EOF
done

echo "Done. Check route53_zones.tf"
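If you want to sanity-check the sanitization step in isolation, it's easy to try on a sample name:

```shell
# The same transform the script uses: strip the trailing dot that
# Route53 appends to zone names, then replace dots with underscores.
zone_name="my.example.com."
resource_name=$(echo "$zone_name" | sed 's/\.$//' | tr '.' '_')
echo "$resource_name"   # -> my_example_com
```

The trailing-dot strip matters: without it, `tr` would leave you with a dangling underscore like `my_example_com_`, and your import commands would reference resource names that don't match the HCL.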

Step 2: Generate the Import Commands

Next, you do the same thing, but this time you generate the terraform import commands. Save this to a separate script you can run after initializing Terraform.

#!/bin/bash
# Filename: generate_import_commands.sh

# Get a clean list of zone IDs and names
aws route53 list-hosted-zones | jq -r '.HostedZones[] | "\(.Id | split("/") | .[-1]) \(.Name)"' | \
while read -r zone_id zone_name; do
    # Sanitize the zone name to match the resource name from the HCL
    resource_name=$(echo "$zone_name" | sed 's/\.$//' | tr '.' '_')

    # Output the import command
    echo "terraform import aws_route53_zone.$resource_name $zone_id"
done > import_script.sh

chmod +x import_script.sh
echo "Done. Run ./import_script.sh"

The Catch: This only handles the zones themselves, not the thousands of records inside them. You’d need to expand these scripts to list and import every single record, which can get complicated fast. This approach is best for getting the zones under management quickly before tackling the records separately.
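If you do decide to extend the scripts to records, the AWS provider expects record import IDs in the form `ZONEID_RECORDNAME_TYPE`. Here's a sketch of building them; `build_import_id` is a hypothetical helper, and you should verify the exact ID format against your provider version's import docs:

```shell
#!/bin/bash
# Sketch: build the import ID for an aws_route53_record.
# build_import_id is a hypothetical helper; the ID format shown
# (ZONEID_NAME_TYPE) should be checked against your AWS provider docs.
build_import_id() {
    local zone_id="$1" record_name="$2" record_type="$3"
    # Strip the trailing dot Route53 returns on record names.
    record_name="${record_name%.}"
    echo "${zone_id}_${record_name}_${record_type}"
}

# In a full script you would feed this from:
#   aws route53 list-resource-record-sets --hosted-zone-id "$zone_id" \
#     | jq -r '.ResourceRecordSets[] | "\(.Name) \(.Type)"'
build_import_id "Z1234567890ABC" "portal.example.com." "CNAME"
# -> Z1234567890ABC_portal.example.com_CNAME
```

Even with a helper like this you still have to handle alias records, weighted/latency routing policies (which add a set identifier to the import ID), and records jq will return with escaped characters, which is exactly why this approach stops scaling.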

Solution 2: The ‘Right’ Way – The Strategic Tooling Approach

When you have hundreds of zones and thousands of records, the scripting approach becomes fragile. It’s time to bring in the heavy machinery. Tools like Terraformer or Former2 were built for this exact scenario.

These tools connect to your AWS account, inspect the existing resources, and automatically generate both the HCL code and the state file or import commands for you. My team’s go-to is Terraformer.

Using Terraformer: A Mini-Guide

Here’s the basic workflow I use:

  1. Install Terraformer: Follow the installation instructions for your OS from their GitHub page.
  2. Configure your AWS Credentials: Make sure your environment is configured with the correct AWS access keys or IAM role.
  3. Run the Import: The command is straightforward. You specify the provider (aws), the service (route53), and can even filter for specific resources.
# This command will scan your Route53 and generate TF files for ALL zones and records.
# The output will be in a 'generated/aws/route53' directory.

terraformer import aws --resources=route53 --profile=my-aws-profile --regions=us-east-1

This command will create a beautifully organized directory structure containing .tf files for your zones and records, as well as a terraform.tfstate file. You can then copy these files into your existing Terraform project, run terraform plan, and you should see the glorious message: “No changes. Your infrastructure matches the configuration.”

Pro Tip: Always run these tools in a separate, temporary directory first. Inspect the generated code carefully. Sometimes the tools make assumptions or use older syntax. Clean it up, test it, and then merge it into your main codebase. Don’t just blindly trust the output.

| Pros of Tooling | Cons of Tooling |
| --- | --- |
| Handles records, zones, and dependencies automatically. | Can be another tool to learn and install. |
| Massively reduces manual effort and human error. | Generated code might not match your style guide and may need refactoring. |
| Excellent for large, complex environments. | Can sometimes miss new AWS features or have bugs. |

Solution 3: The ‘Nuclear’ Option (Don’t Do This… Unless You Must)

There is a third way, spoken of only in hushed tones around the virtual water cooler: manually editing the state file. I’m including this for completeness, but with a huge warning.

WARNING: Manually editing your terraform.tfstate file is like performing open-heart surgery with a butter knife. It’s extremely dangerous, bypasses all of Terraform’s safety checks, and can easily corrupt your state, leading to catastrophic resource destruction. Only consider this as a last resort for a single, simple resource that is stubbornly refusing to import.

The process involves:

  1. Writing the resource block in your HCL as if it already existed.
  2. Running terraform state pull to get a local copy of the state JSON.
  3. Manually finding the right place in the JSON and pasting in the resource definition, filling in the attributes (like zone ID) from the AWS console.
  4. Running terraform state push to upload your modified state.

Again, this is a path fraught with peril. A single misplaced comma in that JSON can render your state file useless. In 99.9% of cases, using an import tool or script is the far superior choice.
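If you're ever forced down this road, at least don't hand-edit the JSON in a text editor. Piping it through jq keeps the JSON syntactically valid. Here's a sketch against a tiny stand-in for a pulled state file (a real one comes from `terraform state pull`, and the serial bump matters because `terraform state push` rejects a stale serial):

```shell
# Stand-in for a pulled state file; a real one comes from:
#   terraform state pull > state.json
cat > state.json << 'EOF'
{"version": 4, "serial": 7, "resources": []}
EOF

# Bump the serial with jq so the JSON stays well-formed; push rejects
# a stale serial. You would also splice your resource into .resources
# at this point.
jq '.serial += 1' state.json > state.new.json
jq '.serial' state.new.json   # -> 8

# Upload only after diffing state.new.json against the original:
#   terraform state push state.new.json
```

Even done this carefully, you're still bypassing every safety check Terraform has, so treat this as documentation of what the nuclear option looks like, not an endorsement.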

My Recommendation

So, what’s the verdict? For any non-trivial migration (more than a handful of zones), go with Solution 2 and use a tool like Terraformer. The initial time investment of learning the tool will pay for itself tenfold by preventing errors and saving you hours of manual scripting. It scales, it’s repeatable, and it’s the professional way to solve this problem.

Getting your DNS into code is a huge step towards a more reliable, auditable, and stress-free infrastructure. It turns a scary, manual process into a boring, predictable pull request. And trust me, “boring” is exactly what you want when it comes to your DNS.

