It happens every week without fail. The phone rings, it's a client in a panic — a shop in Tuam, a solicitor's office in Clifden, a small hotel out near Connemara — and the first thing they say is "the internet's gone." Nine times out of ten, it's not the internet. It's DNS.
DNS troubleshooting is one of those things that looks like black magic until you have a repeatable process. Over the past decade doing network and infrastructure work across the West of Ireland, I've built a DNS troubleshooting checklist that I run through on every single call, in roughly the same order, every time. It gets the job done. Here it is.
Step 1: Confirm It's Actually a DNS Issue
Before you touch anything, verify the problem. A DNS failure means names aren't resolving — but other network issues can look identical to the uninitiated.
Quick test:
ping 8.8.8.8
If that works but a website doesn't load, you've got a DNS issue. If even the ping fails, you're dealing with a broader connectivity problem and DNS is a red herring. Don't go chasing DNS gremlins when the router's acting the maggot entirely.
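The step-1 triage can be sketched as a tiny shell function. The classify logic is split out from the actual test commands so you can feed it whatever checks you like; the ping/nslookup usage shown at the bottom is one way, not the only way.

```shell
# Step-1 triage: classify a failure from two facts - did a raw-IP ping
# work, and did name resolution work? 0 means success, non-zero failure.
classify() {
  if [ "$1" -ne 0 ]; then
    echo "connectivity problem (not DNS)"
  elif [ "$2" -ne 0 ]; then
    echo "DNS problem"
  else
    echo "network and DNS both fine"
  fi
}

# Live usage (assumes ping and nslookup are on the PATH):
#   ping -c 1 -W 2 8.8.8.8 >/dev/null 2>&1; ping_rc=$?
#   nslookup example.com   >/dev/null 2>&1; dns_rc=$?
#   classify "$ping_rc" "$dns_rc"
```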
Step 2: Check What DNS Servers the Device Is Using
This is where most people skip straight past the obvious. Find out what DNS resolver the affected machine is actually pointing at.
Windows:
ipconfig /all
Linux/Mac:
cat /etc/resolv.conf
# or
nmcli dev show | grep DNS
Look at the DNS Servers line. Is it pointing at your router (e.g. 192.168.1.1)? A corporate DNS server? A public resolver like 1.1.1.1 or 8.8.8.8? Knowing this shapes every step that follows.
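If you want just the resolver IPs without the surrounding noise, a one-line awk over a resolv.conf-style file does it (Linux/Mac; on Windows stick with ipconfig /all):

```shell
# List only the resolver IPs from a resolv.conf-style file.
get_resolvers() {
  awk '/^nameserver/ {print $2}' "$1"
}
# Usage: get_resolvers /etc/resolv.conf
```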
Step 3: Try a Direct DNS Query with nslookup
This is your first real diagnostic step. nslookup lets you query DNS directly, bypassing the browser cache and the OS resolver cache.
nslookup example.com
Then try querying a known public resolver directly:
nslookup example.com 8.8.8.8
nslookup example.com 1.1.1.1
If the domain resolves against 8.8.8.8 but not against your local DNS server, your problem is upstream of the client device — it's the resolver or the network path to it.
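That side-by-side comparison is easy to wrap in a small loop. A sketch assuming dig is installed; the resolver list is whatever you want to test, with 192.168.1.1 standing in for the local router.

```shell
# Ask several resolvers for the same A record and summarise one per line.
# Assumes dig is installed; adjust the resolver list to taste.
compare_resolvers() {
  name=$1; shift
  for r in "$@"; do
    ans=$(dig +short "@$r" "$name" A 2>/dev/null | head -1)
    printf '%-15s -> %s\n' "$r" "${ans:-NO ANSWER}"
  done
}
# Usage: compare_resolvers example.com 8.8.8.8 1.1.1.1 192.168.1.1
```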
When I'm on a machine without command-line access, I use publicdns.info's online dig tool to run the same queries from a browser. It's saved me more than a few times when I'm remote-assisting someone who hasn't got a terminal window to hand.
Step 4: Flush the Local DNS Cache
Cached records can keep names failing (or resolving to stale answers) long after the actual problem is fixed. Always flush before assuming something is still broken.
Windows:
ipconfig /flushdns
Mac:
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
Linux (systemd-resolved):
sudo resolvectl flush-caches
# older releases: sudo systemd-resolve --flush-caches
I had a client in Galway city — a small accountancy firm — swearing blind their new website still wasn't showing up a full day after their DNS change. Flushed the cache, restarted the browser. Sorted in 30 seconds.
Step 5: Check TTL and Propagation Status
If you're dealing with a recent DNS change — new hosting, migrated domain, updated records — propagation time is a factor. TTL (Time to Live) determines how long resolvers cache a record before fetching a fresh copy.
Check the current TTL on the record:
dig example.com A
Look at the answer section — the number before IN A is the TTL in seconds. If it's 86400, that's 24 hours. If someone lowered the TTL after making the change, it's too late — the old TTL was already cached.
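Pulling the TTL out programmatically saves squinting at dig output. A small awk filter, assuming dig's standard answer layout (name, TTL, class, type, data):

```shell
# Print the TTL from dig's answer section. Field positions follow dig's
# standard layout: name, TTL, class, type, data.
ttl_of() {
  awk '$4 == "A" {print $2; exit}'
}
# Usage: dig +noall +answer example.com A | ttl_of
```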
I use the DNS propagation checker at publicdns.info to see how a record looks from different resolvers globally. It's handy when a client is asking "is it live yet?" and you need a quick answer without spinning up a VPN.
Step 6: Query Authoritative Nameservers Directly
Don't trust what your local resolver tells you. Go straight to the source.
# Find the authoritative nameservers
dig example.com NS
# Query one of them directly
dig @ns1.example-nameserver.com example.com A
If the authoritative nameserver returns the correct record but end users are seeing the wrong one, the issue is caching at an intermediate resolver — not the DNS configuration itself. This distinction matters when you're telling a client what's actually wrong.
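The local-versus-authoritative comparison can be scripted too. A sketch assuming dig is available; pass in whichever nameserver host the NS lookup actually returned.

```shell
# Compare the local resolver's answer with the authoritative one.
# Sketch; pass the nameserver host that `dig example.com NS` reported.
check_stale() {
  name=$1 auth_ns=$2
  local_ans=$(dig +short "$name" A | sort)
  auth_ans=$(dig +short "@$auth_ns" "$name" A | sort)
  if [ "$local_ans" = "$auth_ans" ]; then
    echo "in sync"
  else
    echo "stale: resolver [$local_ans] vs authoritative [$auth_ans]"
  fi
}
# Usage: check_stale example.com ns1.example-nameserver.com
```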
Step 7: Check for DNS Hijacking or Interception
This one's more common than people think, especially on compromised routers or in environments with overly aggressive filtering. If hijacking is the cause, you need to spot it before you can fix anything.
nslookup google.com
If the IP address returned is completely unexpected — especially if it's a private IP like 192.168.x.x or 10.x.x.x when querying a public domain — something is intercepting your DNS queries. Check the router firmware, look for rogue DHCP entries, and inspect any firewall or proxy rules.
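That private-range check can be scripted as a quick heuristic. This covers the RFC 1918 ranges plus loopback; a hit is a red flag worth investigating, not proof of hijacking on its own.

```shell
# Heuristic hijack check: a private or loopback answer for a public
# domain is a red flag. Covers RFC 1918 ranges plus 127.0.0.0/8.
is_suspect() {
  case $1 in
    10.*|192.168.*|127.*)                    return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)  return 0 ;;
    *)                                       return 1 ;;
  esac
}
# Usage: ip=$(dig +short google.com A | head -1)
#        is_suspect "$ip" && echo "possible DNS interception"
```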
I've seen this twice in small businesses on the Wild Atlantic Way corridor — routers compromised and quietly redirecting DNS traffic. Not fun to unravel, but it's always worth checking.
Step 8: Test with Alternative DNS Resolvers
Swap the DNS server temporarily to rule out resolver-specific issues. This is one of the fastest DNS diagnostic steps you can run.
On Windows (temporary; substitute your adapter name, e.g. "Ethernet" or "Wi-Fi" — older versions used "Local Area Connection"):
netsh interface ip set dns "Ethernet" static 1.1.1.1
Test the resolution again. If it works with Cloudflare's 1.1.1.1 but not your ISP's resolver, the ISP resolver is the problem — either it's down, or it's returning stale/incorrect data. Cloudflare's DNS documentation has good guidance on setting this permanently across different OS types.
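You don't actually have to change system settings to compare resolvers: dig takes the server per query, and its "Query time" line gives you rough latency for free. A sketch, assuming dig is installed:

```shell
# Time one query per resolver using dig's own "Query time" report.
# A resolver that fails or times out shows as "failed".
resolver_latency() {
  name=$1; shift
  for r in "$@"; do
    t=$(dig "@$r" "$name" A 2>/dev/null | awk -F': ' '/Query time/ {print $2}')
    printf '%-15s %s\n' "$r" "${t:-failed}"
  done
}
# Usage: resolver_latency example.com 1.1.1.1 8.8.8.8 192.168.1.1
```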
Step 9: Check DNSSEC Validation
If DNSSEC is enabled on a domain, a misconfiguration will cause resolvers to return SERVFAIL even though the zone itself is fine. This one catches people out badly.
dig example.com +dnssec
Look for the AD flag in the response — that means DNSSEC validated successfully. A SERVFAIL with no useful error usually points at a broken DS record or an expired RRSIG. The IETF's DNSSEC operational guidance (RFC 6781) is dry reading, but it's the definitive reference if you need to go deep on this.
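Whether validation happened shows up in dig's flags line, so it can be checked with a small filter. A sketch; note the AD flag only appears when the resolver you queried is itself validating.

```shell
# Report whether a dig response carries the AD (authenticated data) flag.
# AD only appears when the resolver you queried is validating.
dnssec_status() {
  if grep -q '^;; flags:.* ad[; ]'; then
    echo "validated (AD flag set)"
  else
    echo "not validated (no AD flag)"
  fi
}
# Usage: dig example.com +dnssec | dnssec_status
```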
Step 10: Document and Report
This is the one people always skip, and it's the one that pays off six months later when the exact same issue reappears.
Write down:
- What the symptoms were
- What resolver was in use
- Which step identified the root cause
- What the fix was
- Any TTL or propagation times involved
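I keep the log greppable: one pipe-delimited line per incident, fields matching the checklist above. This exact layout is just my own habit, not any standard.

```shell
# Append one pipe-delimited incident line to a per-client log file.
# Field order mirrors the checklist; the layout is a personal convention.
log_incident() {
  # args: client, symptom, resolver in use, root cause, fix
  printf '%s|%s|%s|%s|%s|%s\n' "$(date +%F)" "$1" "$2" "$3" "$4" "$5" \
    >> "dns-log-$1.txt"
}
# Usage: log_incident acme "names not resolving" 192.168.1.1 \
#          "stale ISP cache" "pointed router at 1.1.1.1"
```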
I keep a simple log per client. When someone rings me with "the DNS is gone again," I can pull up the notes from the last time in under a minute. Half the time, it's the same problem — same misconfigured router, same ISP resolver dropping the ball on a Friday afternoon.
Final Word
DNS is boring until it breaks, and when it breaks everything stops. Having a repeatable DNS troubleshooting checklist means you're not guessing — you're working through a proven set of diagnostic steps in the right order.
The tools are mostly free. nslookup, dig, ipconfig — they're on every machine. When I'm working remotely and the client doesn't have a terminal, publicdns.info's dig tool does the job straight from the browser.
Run the list, trust the process, and you'll have most DNS issues sorted before the kettle's even boiled.