You've probably been there. You update an A record in your DNS dashboard, then refresh your browser three times in a row. Nothing. Still showing the old server.
Then someone in a different timezone messages you saying they can see the new version. But you can't. You check again. Still nothing.
Someone on your team says, "Don't worry, DNS takes 24-48 hours to propagate."
I used to accept this waiting period as normal. It's not. DNS changes don't take 24 hours because the internet is slow. They take that long because someone left a default or high TTL value on the A record and never thought about changing it.
"Propagation" isn't some mysterious internet ritual. In reality, it's just a cache that expires. And here's the thing: you control the expiry timer. Set your A record’s TTL to 300 seconds (name.com's standard minimum) before making changes, and you're looking at minutes, not days.
In this post, I'll explain how DNS caching actually works, how to use TTL values strategically, and how to make infrastructure changes without any downtime.
What "Propagation" Really Means
When you update a DNS record at name.com, the change takes effect immediately on name.com's authoritative nameservers. There's no delay on their end.
The lag you experience isn't happening on name.com's infrastructure. It's everywhere else.
The internet doesn't work by constantly checking name.com's DNS servers for your latest DNS records. That would create millions of unnecessary queries every second. Instead, DNS uses a layered caching system.
When someone requests your domain, their query goes through a recursive resolver first. This resolver checks its local cache before reaching out to authoritative servers. If the record is cached, the resolver just returns that cached value without ever contacting name.com.
That's your bottleneck. "DNS propagation" is really just waiting for cached records to expire across thousands of recursive resolvers around the world.
The Two Types of DNS Servers
Authoritative nameservers are where your actual DNS records live. When you manage DNS records in the name.com dashboard, you're modifying data on name.com's authoritative servers (ns1.name.com through ns4.name.com). Changes here happen instantly. Update an A record and it's live on the authoritative server in seconds.
Recursive resolvers are intermediary servers run by ISPs, network administrators, or public DNS providers like Google (8.8.8.8) and Cloudflare (1.1.1.1). When your browser needs to resolve a domain name, it asks a recursive resolver. That resolver either returns a cached answer or queries the authoritative server on your behalf. Then it caches the result for future requests.
Your users aren't querying name.com's authoritative servers directly. They're hitting recursive resolvers that might have stale cache. The "propagation period" is just the time it takes for those cached entries to reach their expiration.
TTL: Your Control Mechanism
TTL (Time To Live) is measured in seconds and gets attached to every DNS record. It tells recursive resolvers how long they can trust the cached data before asking again.
An A record with a TTL of 3600 means "you can cache this IP address for one hour."
When you change that A record's IP, resolvers that previously cached it won't see the change until their cached copy expires. If your TTL was 86400 seconds (24 hours), some resolvers might not pick up the new IP for a full day.
That's where the "DNS takes a day to propagate" myth comes from. It's not a limitation of DNS. It's just a consequence of how you've configured it.
Lower the TTL to 300 seconds (5 minutes), and the maximum cache duration drops to 5 minutes. Most resolvers respect TTL values. They'll query the authoritative server again once the timer runs out. This is how you compress propagation time from hours to minutes.
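You can watch this timer directly. When you query through a recursive resolver, the number dig prints next to the record is the *remaining* cache time, not the configured TTL (yourdomain.com and 8.8.8.8 stand in for your own domain and any public resolver):

```shell
# Ask a public resolver and show only the answer line. The second
# field is the remaining TTL in that resolver's cache; run it twice
# a few seconds apart and you'll see it count down.
dig +noall +answer @8.8.8.8 yourdomain.com A

# Extract just the remaining TTL from the answer line:
dig +noall +answer @8.8.8.8 yourdomain.com A | awk '{print $2}'
```

When that number hits zero, the resolver's next query goes back to the authoritative server and picks up your change.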
name.com enforces a 300-second minimum TTL on most record types to keep the global DNS infrastructure stable. You can't go lower than that, but 5 minutes is fast enough for most situations.
The trade-off is query volume. Shorter TTLs mean more frequent queries to authoritative servers, which slightly increases load. For most sites, this difference doesn't really matter.
The Problem Nobody Talks About: Negative Caching
There's another type of caching that catches people off-guard during new site launches.
If someone visits your domain before you've configured DNS records, their resolver gets an NXDOMAIN response (domain doesn't exist). That negative response gets cached too.
The cache duration for negative responses is controlled by the SOA record's negative TTL field, which is often set to several hours. According to RFC 2308, the negative TTL is determined by the minimum of the SOA MINIMUM field and the SOA TTL. This can be much longer than your A record's TTL.
Here's the scenario: You register a domain. Someone checks if it's live before you finish setting it up. They get NXDOMAIN. Their resolver caches that negative answer for hours.
When you finally configure the DNS records, that person still can't access your site. Their resolver isn't checking again because it has a valid (though incorrect) cached negative response.
I've seen this mess up coordinated launches. You announce a new site, people check immediately, get NXDOMAIN, and then report "the site doesn't work" even after you've configured everything correctly.
The fix is simple: configure all DNS records before announcing anything publicly. Don't give resolvers a chance to cache a negative response in the first place.
How to Choose the Right TTL
You shouldn't always use the lowest possible TTL. High TTLs reduce query load on authoritative servers, improve response speed (cached lookups are faster than fresh queries), and give you some resilience if your DNS provider has a temporary outage.
Low TTLs give you agility but generate more traffic to authoritative servers.
The right TTL depends on how stable your infrastructure is and how often you make changes:
Use 86400 seconds (24 hours) or higher when:
- Your infrastructure is stable and changes are rare
- You want maximum efficiency on DNS query costs
- You're running a static site that won't move servers
- You want resilience if your nameservers come under attack (resolvers keep serving cached records even when authoritative servers are unreachable)
Use 300-600 seconds (5-10 minutes) when:
- You're planning a migration or deployment soon
- You need rapid failover for high-availability setups
- You're in a development environment with frequent changes
- You want the ability to quickly roll back if something breaks
Use 3600 seconds (1 hour) for most production sites. It's the sweet spot. Changes take effect within an hour, which is fine for most situations, and query volume stays reasonable.
The pattern I use for controlled migrations: run with a high TTL during normal operations, drop to 300 seconds at least 48 hours before planned changes, do the migration, verify everything works, then raise the TTL back up.
This keeps the query load low most of the time while giving you precise control when it matters.
How to Migrate With Zero Downtime
A true zero-downtime migration requires both the old and new servers to serve traffic simultaneously during the transition. You can't control which resolver has cached which IP, so both IPs need to return valid responses.
Phase 1: Preparation (48 Hours Before)
Log into the name.com DNS management interface and find the DNS record you're going to change. Note the current TTL value.
If it's higher than 300 seconds, change it to 300 seconds now.
Then wait. The wait time needs to be at least as long as the previous TTL value. If your A record had a TTL of 86400 seconds, you need to wait 24 hours. I usually wait 48 hours to be safe.
This ensures that all resolvers that cached your record at the old, higher TTL will have their cache expire before you make the switch.
You really can't skip this step. If you do, you're back to unpredictable propagation times measured in hours or days.
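One way to confirm the lowered TTL is actually being handed out is to query name.com's authoritative server directly. Unlike a resolver, it always returns the configured TTL, never a countdown (ns1.name.com matches the nameservers mentioned earlier):

```shell
# Ask the authoritative server directly -- the TTL shown here is the
# configured value, so it should read 300 after your change.
dig +noall +answer @ns1.name.com yourdomain.com A

# Compare with what a public resolver is still caching:
dig +noall +answer @8.8.8.8 yourdomain.com A
```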
Phase 2: Set Up Both Servers
While you're waiting for old TTL caches to expire, get your new server ready. Deploy your application, configure your web server (nginx, Apache, Caddy, whatever you use), and install SSL certificates.
Test the new server by hitting it directly via IP address or by temporarily modifying your local /etc/hosts file to point your domain at the new IP.
Here's the critical part: don't shut down the old server. It needs to stay running.
During the transition, users will hit either the old or new server depending on which IP their resolver has cached. Both need to work.
For apps with databases, make sure both servers can access the same database or that you've synced the database state between them. For stateless applications, this is easier. Just make sure both servers have the same code and configuration.
Phase 3: Make the DNS Change
After 48 hours have passed and your new server is tested and ready, update the DNS record.
In the name.com dashboard, change the A record IP address from the old server to the new server. Double-check that the TTL is still 300 seconds. Save it.
The change is live on name.com's authoritative servers immediately. Now you're in the transition window.
For the next 5-10 minutes, some users will hit the old server (cached IP) and some will hit the new server (fresh DNS query). Both work, so nobody sees downtime.
Phase 4: Verify Everything
Don't trust your browser. Don't trust a single ping command. You need to check propagation across multiple public DNS resolvers to get a real picture.
Use the dig command (or nslookup on Windows) to query specific resolvers:
# Query Google's public DNS
dig @8.8.8.8 yourdomain.com A
# Query Cloudflare's public DNS
dig @1.1.1.1 yourdomain.com A
# Query Quad9
dig @9.9.9.9 yourdomain.com A
Look at the "ANSWER SECTION" in each response. If all resolvers return the new IP address, you're done. If they're mixed (some returning the old IP, some returning the new), you're still in the transition window. Wait a few minutes and check again.
You can also use web tools like whatsmydns.net to check DNS resolution from multiple locations around the world at once. This gives you a visual confirmation that the change has spread globally.
If something goes wrong and you need to roll back, just update the A record back to the old IP. With a 300-second TTL, the rollback will take effect within 5 minutes. That's your safety net.
Phase 5: Clean Up
Once you've confirmed that propagation is complete and traffic has fully moved to the new server (check your old server's logs to make sure it's not getting any more requests), you can raise the TTL back to a normal value. This reduces ongoing query load.
I usually wait 24 hours after migration to make sure everything is stable, then edit the DNS record and change the TTL from 300 seconds to 3600 seconds (1 hour) or 86400 seconds (24 hours).
Only after the new server is stable and the TTL has been raised should you shut down the old server.
Tools for Verification
The dig command is the standard tool for DNS debugging. It comes installed by default on most Unix-like systems (Linux, macOS) and you can get it on Windows through BIND utilities.
Basic syntax:
dig yourdomain.com
This queries your system's default resolver. To query a specific resolver, use the @ syntax:
dig @8.8.8.8 yourdomain.com
To see the full path including all the intermediate nameservers, use the trace option:
dig +trace yourdomain.com
The trace option shows each step: root nameservers, TLD nameservers, authoritative nameservers. This helps you figure out where in the DNS chain something is breaking.
For quick checks during a migration, I like to make a simple shell script that queries multiple resolvers and compares the results:
#!/bin/bash
# Usage: ./check_dns_propagation.sh yourdomain.com
DOMAIN="$1"
if [ -z "$DOMAIN" ]; then
  echo "Usage: $0 <domain>" >&2
  exit 1
fi
RESOLVERS="8.8.8.8 8.8.4.4 1.1.1.1 1.0.0.1 9.9.9.9"
echo "Checking DNS propagation for $DOMAIN"
echo "========================================"
for resolver in $RESOLVERS; do
  result=$(dig +short "@$resolver" "$DOMAIN" A)
  echo "$resolver: ${result:-<no answer>}"
done
Save this file as check_dns_propagation.sh, make it executable (chmod +x check_dns_propagation.sh), and run it like this:
./check_dns_propagation.sh yourdomain.com
Run this every few minutes after making the DNS change. When all resolvers return the same IP, you're done.
On Windows, you can do something similar with the nslookup command. Just add the resolver as a second parameter:
nslookup yourdomain.com 8.8.8.8
The output looks a bit different from dig's, but you can still just look for the new IP address you're expecting to see in your A record.
More Advanced Scenarios
Using CNAME Records
CNAME records point one domain name to another. They're useful when you want multiple domain names to resolve to the same place, and you want to update them all by changing a single A record.
But CNAME records have limits. Under RFC 1034's rules, a CNAME can't coexist with other records at the same name, and the zone apex (example.com) must carry SOA and NS records, so CNAMEs are only usable on subdomains (www.example.com, blog.example.com, etc.). For apex domains, you need A records, or ANAME/ALIAS records if your provider supports them.
When you use CNAMEs, pay attention to TTL values on both the CNAME record and the target A record. A resolver caches both. Even if your CNAME has a 300-second TTL, if the target A record has an 86400-second TTL, you won't see changes for 24 hours.
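You can see both TTLs in one query, because resolvers return the whole chain. The answer shown in the comment is illustrative, not real output:

```shell
# Querying a CNAME'd name returns the full chain; each line carries
# its own TTL.
dig +noall +answer www.yourdomain.com A

# Illustrative answer (not real output):
#   www.yourdomain.com.  300    IN  CNAME  yourdomain.com.
#   yourdomain.com.      86400  IN  A      203.0.113.10
# The A record's 86400 is the value that gates propagation here.
```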
MX Records for Email
MX records tell email servers where to deliver mail for your domain. When you migrate email services, the same TTL rules apply. Lower the MX record TTL to 300 seconds before migration, make the change, wait for propagation, then raise it back up.
Email has a built-in retry mechanism that helps here. If email delivery fails, the sending server will try again later. This gives you some forgiveness during email migrations, but you still want to keep the window of inconsistent MX records as short as possible.
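Checking MX propagation works exactly like checking A records. This loop compares a few of the same public resolvers used earlier, sorting by priority so the output lines up across resolvers:

```shell
# Compare MX answers across public resolvers. sort -n orders records
# by priority (lower number = tried first).
for r in 8.8.8.8 1.1.1.1 9.9.9.9; do
  echo "$r: $(dig +short "@$r" yourdomain.com MX | sort -n | tr '\n' ' ')"
done
```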
Using the API for Automation
For infrastructure-as-code workflows or automated failover systems, name.com has an API for DNS management. You can create, update, and delete records programmatically.
This lets you do things like:
- Run automated health checks that update A records when a server goes down
- Write blue-green deployment scripts that switch DNS between environments
- Set up dynamic DNS for services running on infrastructure with changing IPs
The name.com API documentation covers authentication, rate limits, and has example requests. Most DNS operations complete within seconds, which makes API-driven updates practical for production automation.
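As a sketch of what an automated update looks like: the curl calls below target the v4 API as I remember it from name.com's docs (basic auth with username and API token, records nested under the domain), so treat the endpoint paths, auth scheme, and JSON field names as assumptions to verify against the official documentation. USERNAME, API_TOKEN, and RECORD_ID are placeholders.

```shell
# SKETCH ONLY: endpoint paths, auth, and JSON fields are from memory
# of the name.com v4 API docs -- verify against the docs before use.
# USERNAME, API_TOKEN, and RECORD_ID are placeholders.

# List existing records for the domain:
curl -u "USERNAME:API_TOKEN" \
  "https://api.name.com/v4/domains/yourdomain.com/records"

# Point an A record at a new IP with a 300-second TTL:
curl -u "USERNAME:API_TOKEN" -X PUT \
  -H "Content-Type: application/json" \
  -d '{"host":"","type":"A","answer":"203.0.113.10","ttl":300}' \
  "https://api.name.com/v4/domains/yourdomain.com/records/RECORD_ID"
```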
Final Thoughts
DNS propagation is really just cache expiring using a timer you control.
The "24-48 hour wait" is a leftover from default TTL settings that nobody bothered to change. Lower your TTL to 300 seconds before making changes, do the switch while keeping both old and new infrastructure running, verify propagation across multiple resolvers, then raise the TTL back up once things are stable.
name.com gives you what you need: authoritative nameservers with 300-second TTL minimums, a straightforward interface for managing records, and API access for automation. The rest comes down to how you execute.
Plan your changes, adjust TTLs ahead of time, keep dual-serving capability during transitions, verify completion, then clean up. This works for any DNS record type, any size infrastructure, and any complexity of deployment.
The underlying mechanism is always the same: control the cache timer, and you control the migration timeline.
I'd love to hear about your DNS migration experiences. Have you run into the "24-48 hour propagation" problem? What's the longest you've waited for DNS changes to take effect? Let me know in the comments.