Finding a decent domain these days feels more like a treasure hunt than a technical task. After two weeks of filtering through sketchy TLDs and SEO traps, I landed on something I didn’t regret owning 24 hours later, a name I could confidently live with.
Bonus: it wasn’t just a typo away from someone’s company name.
For me, that’s huge. It unlocked the path to a potential launch and literally saved ~$500–1,000 in branding specialist fees and an overpriced domain.
Bought the domain. Typed it in. Got the “newly registered” placeholder page. That's expected, and it's where the fun part starts: figuring out how to actually use it.
➡ GitHub Pages?
➡ Cloudflare + IaaS?
➡ Pinning to a Compute Instance with Let’s Encrypt?
➡ Hyperscalers?
It took me several iterations of evaluation to commit to a solution.
If you're curious how I eventually made sense of all this mess — or want to shortcut some trial-and-error and find out how I picked what to pay for and what to DIY — keep reading.
GitHub Pages
GitHub Pages is the duct tape of static hosting — fast, free, and good enough for most quick projects. All you need is a repo, a branch, and an index.html — and boom: public-facing page with zero ops work.
At first, it gives you a GitHub subdomain like yourname.github.io, but you can easily pin your own custom domain to it.
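For reference, pinning a custom domain boils down to a couple of DNS records plus a CNAME file in the repo. A sketch (double-check the current IPs against GitHub's own docs before copying):
example.com       IN A      185.199.108.153
example.com       IN A      185.199.109.153
example.com       IN A      185.199.110.153
example.com       IN A      185.199.111.153
www.example.com   IN CNAME  yourname.github.io.
Plus a file named CNAME in the repository root containing your domain (or set it under Settings → Pages).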
Scanning the docs, I quickly saw why just starting here wasn't a good idea for me:
Tip
We recommend verifying your custom domain prior to adding it to your repository, in order to improve security and avoid takeover attacks. For more information, see Verifying your custom domain for GitHub Pages.
Warning
We strongly recommend that you do not use wildcard DNS records, such as *.example.com. These records put you at an immediate risk of domain takeovers, even if you verify the domain. For example, if you verify example.com this prevents someone from using a.example.com but they could still take over b.a.example.com (which is covered by the wildcard DNS record).
For my stage, this meant GitHub Pages wasn't the full solution. It’s great for hosting a static page — and if that’s what you need, go for it. Follow their documentation (or drop a comment if you want my notes).
But I needed something that could route traffic to internal dev and demo environments via subdomains. GitHub Pages just doesn't play well with that kind of flexibility.
So back to the drawing board.
Wildcard DNS records
Considering all the hiccups I ran into (as you'll see later), I found it's a good idea to zoom out and think about what I'm even trying to achieve.
Me: What is a wildcard DNS record and why would you want one?
ChatGPT:
A wildcard DNS record is a type of DNS record that matches requests for non-existent subdomains under a given domain. It's represented by an asterisk (*) in place of a subdomain. For example:
*.example.com IN A 192.0.2.1
This means that any subdomain of example.com that doesn't have a specific DNS record—like foo.example.com, bar.example.com, or xyz123.example.com — will resolve to 192.0.2.1.
Why Would You Want a Wildcard DNS Record?
Here are some common reasons:
Catch-All for Subdomains
If you expect dynamic or unpredictable subdomain generation (e.g., user-generated subdomains like username.example.com), a wildcard record lets you handle them without needing to preconfigure each one.
Simplifying DNS Management
Instead of creating hundreds of records manually, a single wildcard record handles all unspecified subdomains.
Multi-Tenant Applications
For SaaS platforms that offer custom subdomains to users (user1.example.com, user2.example.com), a wildcard DNS makes this scalable and manageable.
SSL/TLS with Wildcard Certificates
If you're using a wildcard TLS certificate (e.g., *.example.com), having a wildcard DNS record complements it by ensuring DNS resolution matches the certificate’s coverage.
Fallback Handling
If you want to redirect all undefined subdomains to a default server or error page, a wildcard record helps you route traffic accordingly.
Caveats and Considerations
Wildcard records don’t match existing subdomain records. If blog.example.com has its own record, it takes precedence.
Be careful with MX records—wildcard MX records can lead to unexpected email routing.
DNS caching and propagation can make troubleshooting harder with wildcards.
Security concerns: Misconfigured wildcards can make it easier for attackers to exploit subdomains or perform phishing.
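To make that first caveat concrete (my own quick sketch, not part of ChatGPT's answer, with placeholder IPs): an explicit record always wins, and the wildcard only catches what isn't defined.
blog.example.com   IN A 203.0.113.5   ; explicit record, takes precedence
*.example.com      IN A 192.0.2.1     ; everything else falls through to the wildcard
You can sanity-check the behavior with dig:
dig +short blog.example.com A    # 203.0.113.5
dig +short random.example.com A  # 192.0.2.1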
Okay. This is exactly what I'm looking for. Let's continue.
Cloudflare
This is the most recommended option. And I always wanted to try it. It's also the one ChatGPT strongly recommends once you tell it that its earlier GitHub Pages suggestion isn't a good fit.
Me: “Okay, GitHub Pages is out—what about Cloudflare?”
ChatGPT:
Cloudflare is an excellent choice for managing DNS, free SSL, and even proxying traffic.
Sign up for a free Cloudflare account and add your domain.
Point your nameservers at Cloudflare.
Under SSL/TLS → Edge Certificates, enable ‘Always Use HTTPS’ and ‘Automatic HTTPS Rewrites’.
Create a DNS record:
Type: A
Name: @
Content: 192.0.2.123 # your server IP
Proxy status: Proxied
TTL: Auto
(Optional) If you really want wildcard subdomains, you’ll need a paid plan to enable Wildcard SSL on the free Universal cert.
Then you can use Page Rules to route dev.example.com → your dev server and demo.example.com → your demo box.
What does this even mean?
Wildcard SSL on Cloudflare Free ≠ Free?
This may be a situation where it's time to read the actual documentation.
Universal SSL certificates present some limitations.
Hostname coverage
Full setup
Universal SSL certificates only support SSL for the root or first-level subdomains such as example.com and www.example.com. To enable SSL support on second, third, and fourth-level subdomains such as dev.www.example.com or app3.dev.www.example.com, you can:
Purchase Advanced Certificate Manager to order advanced certificates.
Upgrade to a Business or Enterprise plan to upload custom certificates.
In summary: You get free, automatic SSL for your domain and first-level subdomains like www.example.com, app.example.com, or dev.example.com. No Let’s Encrypt scripts, no certbot, no cronjobs.
Just point your domain, toggle some settings, and it works.
Need deeper subdomain nesting or more control? That’s where paid plans and custom certs come in.
But for me, the free tier checks all the boxes — wildcard SSL for first-level subdomains like *.example.com, simple setup, no maintenance headaches.
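Concretely, for my dev/demo routing needs that boils down to a handful of proxied records, all covered by the free Universal certificate (the IPs here are placeholders for whatever origin you point at):
Type: A   Name: @      Content: 192.0.2.123   Proxy status: Proxied
Type: A   Name: dev    Content: 192.0.2.124   Proxy status: Proxied
Type: A   Name: demo   Content: 192.0.2.125   Proxy status: Proxied
A wildcard record (Name: *) can work too, though it's worth verifying in the dashboard how your plan handles proxying it.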
That said, as I kept poking through the docs (because of course I did), I ran into one of those quietly important footnotes — the kind that makes you pause and go, "Wait… what exactly am I trusting here?"
Warning
Note that the certificate Cloudflare provides for you to set up Authenticated Origin Pulls is not exclusive to your account, only guaranteeing that a request is coming from the Cloudflare network.
For more strict security, you should set up Authenticated Origin Pulls with your own certificate and consider other security measures for your origin.
That’s logical and expected. And it's a fun challenge that I’d like to tackle too: running your own SSL termination or reverse proxy to secure the connection between Cloudflare and a server’s IP.
Cloudflare's SSL/TLS Encryption Modes: Understanding "Full (strict)"
When Cloudflare acts as a proxy, it establishes two separate SSL connections. The first, Client (Browser) to Cloudflare, is automatically secured by Cloudflare's free Universal SSL certificate for your domain and first-level subdomains. You don't manage this.
The second, Cloudflare to your Origin Server (your VM/backend), depends on your chosen SSL/TLS encryption mode:
- Flexible/Full: Cloudflare connects over insecure HTTP or accepts any certificate from your origin. This means traffic between Cloudflare and your server is not fully secured.
- Full (strict): This is the most secure mode, where Cloudflare connects over HTTPS and validates your origin's SSL certificate. To use this, you must install and manage a valid, publicly trusted SSL certificate (like from Let's Encrypt) on your origin server. This reintroduces operational tasks like Certbot setup, renewal automation, and monitoring.
Sidenote: Cloudflare also provides free Origin CA certificates, which are valid for Cloudflare-to-origin connections (up to 15 years), simplifying management if you don't need publicly trusted origin certs.
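For a feel of what that looks like on the origin, here's a rough NGINX sketch of Full (strict) plus Authenticated Origin Pulls. The file paths and upstream port are placeholders, and the client-certificate CA is the one Cloudflare publishes in its docs:
server {
    listen 443 ssl;
    server_name example.com;

    # Certificate presented to Cloudflare (Origin CA or Let's Encrypt)
    ssl_certificate     /etc/ssl/cloudflare/origin.pem;
    ssl_certificate_key /etc/ssl/cloudflare/origin.key;

    # Authenticated Origin Pulls: only accept connections that present
    # Cloudflare's client certificate
    ssl_client_certificate /etc/ssl/cloudflare/origin-pull-ca.pem;
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}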
That feels like a lot of moving parts to monitor in the long run.
And that’s not the only caveat.
We could absolutely use Cloudflare to point to a GitHub Page or even a basic backend server. For many projects, the free tier really does check all the boxes.
But knowing that what I'm building relies heavily on frequent, automated, machine-to-machine interactions — development tools, preview environments, API consumers — I’d end up constantly battling DoS-protection rate limiting and tweaking firewall rules to allow my own systems through.
I might be overthinking it. This could still work…
But I couldn’t shake the feeling: why layer complexity on top of complexity, just to avoid complexity?
What if I cut straight to the core — skip the proxy, skip the platform — and just host everything myself?
Let’s explore that idea.
DIY + Compute Instance + NGINX
Since managing certificates was still on the menu even with Cloudflare, I turned to the next item on my list:
Brew everything yourself. Spin up a cheap compute instance, pin your domain, and route traffic through NGINX.
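The core of that setup is small: a wildcard certificate via the DNS-01 challenge and a catch-all NGINX server block. A sketch (domain, paths, and the upstream port are placeholders):
# Wildcard cert from Let's Encrypt via the DNS-01 challenge (manual TXT record)
sudo certbot certonly --manual --preferred-challenges dns \
  -d example.com -d "*.example.com"

# /etc/nginx/conf.d/wildcard.conf
server {
    listen 443 ssl;
    server_name example.com *.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;  # whatever app you're fronting
    }
}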
Sounds fun at first.
I asked ChatGPT for a quickstart guide, and within seconds I had all the steps and configs needed to get this setup running. And if you’ve done this before, you already see the red flags:
You’re basically pinning your apex domain to a single IP address — making it a single point of failure. Oh, and don’t forget to make that IP static (yes, you'll need to explicitly reserve it as static).
You’re now responsible for securing that server — zero-day patches, firewall rules, etc.
You have to manage it properly — updates, monitoring, logging, the whole shebang.
What if someone accidentally deletes it? How do you even enforce "don’t touch this VM"?
And heaven forbid it shuts down because of a billing hiccup.
That brought up some flashbacks. And the point is — it’s not 2012 anymore. Why am I still worrying about static IPs, accidental deletions, and nginx.conf?
Alright, let’s scratch that idea — what else is on the table?
☁️ Google Cloud
At this point, the natural next thought was: "What about one of the big cloud providers, like Google Cloud?"
They offer robust managed services, global reach, and certainly have the infrastructure to handle traffic. And even better — they support wildcard SSL out of the box.
But it’s the cloud — and that’s expensive. Right?
Is it?
Let’s check the pricing:
1. Google Cloud DNS (google_dns_managed_zone)
~ $0.20 per managed zone per month (for the first 25 zones). Queries cost ~$0.40 per million. Let’s assume something insane like 4.3 million queries for our case.
2. Google Compute Global External HTTPS Load Balancer (google_compute_global_address, google_compute_url_map, google_compute_target_https_proxy, google_compute_forwarding_rule)
- Forwarding Rule: $0.025 per hour for the first 5 rules → $0.025 × 24 × 30 ≈ $18/month per rule, or ~$54/month for three
- Data Processed by Load Balancer: $0.008 per GB
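Putting the single-forwarding-rule baseline together (my rough math, assuming the rates above and a modest ~20 GB of processed traffic):
DNS zone:           $0.20
DNS queries:        4.3M × $0.40 per million ≈ $1.72
Forwarding rule:    $0.025 × 24 × 30         ≈ $18.00
LB data processed:  ~20 GB × $0.008          ≈ $0.16
Total                                        ≈ $20/month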
In total, you're looking at roughly the following (these costs are just for routing):
$20.07/month — using one forwarding rule
~$36.34/month — if you use the global load balancer to proxy traffic to an outside IP or service, plus modest egress (~30GB)
$50.74/month — for a setup routing to a few subdomain-specific Compute Engine apps
$68.74/month — for a wildcard landing page + two routed apps behind HTTPS LB
If you were to use this load balancer as a proxy to route traffic outside of Google Cloud to another provider's public IP, then for the modest traffic volumes considered here (e.g., 30 GB/month) that would add roughly $3.60/month in Google Cloud egress fees, bringing the totals to ~$36.34–$72.34/month depending on the tier.
The additional egress fee may be totally acceptable, but it's a factor to watch. Depending on traffic and payload size, it can become a variable worth comparing against whatever you tried to save by avoiding the Google ecosystem.
Keep in mind: API response egress fees also apply, depending on payload size. For small CRUD responses or status codes, this cost is usually negligible — around $1-5 for 40GB of traffic depending on the region you pick.
Compared to the DIY setup or Cloudflare workarounds, this is the first option that feels like it could scale without me babysitting it.
✨ So Where Did I Land?
So, the shortlist: GitHub Pages (too limited), Cloudflare (good for most cases but fragile for heavy automation), DIY on a Compute Instance + NGINX (ops burden), and GCP (robust but can become costly).
To solve the puzzle, I needed to consider the total budget, including the actual backends and not just the cost of routing to them.
This led to this map:
Tier 1: Budget of under $10/month
Cloudflare (Free) proxying to a small Virtual Private Server that runs your backend application and landing page. Examples: DigitalOcean Basic Droplets, Hetzner, or Vultr Cloud Compute instances.
Tier 2: Under $50/month
Scale out the same setup from Tier 1 by using a more powerful VPS or adding a managed database solution.
Tier 3: Under $100/month
Scale out the same setup from Tier 1 by adding multiple smaller VPS instances for redundancy. At this budget, a managed database is well within reach.
Tier 4: Under $150/month
If your priority is minimal operational burden and seamless scalability, a Hyperscaler like GCP is the best choice.
This budget allows for their truly managed services, which drastically reduce the need for you to handle SSL, load balancing, and database operations. If this is your starting budget, consider just starting here; it spares you the complex migrations and lock-in headaches of a makeshift setup, when the managed route would have been easier to manage and scale from the start.
And the DIY option? Good for learning or niche on-prem deployments, but if you want to deliver something for others to use and want to get some sleep at night, skip it.
Thanks for following along.
There are still Netlify/Vercel and the classic AWS S3 + CloudFront combo you may want to explore for your case.
Overall, picking a hosting stack is still a matter of trade-offs, personal tolerance for ops, and your budget.
If your stack feels duct-taped together or temporary — congrats. You’re doing it right.
Let’s keep learning together. If you want a guide on how to get any of those Tiers wired and live, let me know in the comments.
To end this, here is the promised checklist.
Developer’s Go-Live Infra Checklist
Before hitting “launch,” make sure you’ve considered:
🔤 Domain
- Chosen a domain you can live with (and not regret in 24h)
- Avoided legal traps (e.g. company name typos, trademark issues)
- Verified DNS propagation (use tools like dig, nslookup, or online checkers)
🌐 DNS Setup
- Configured A/AAAA or CNAME records correctly
- Decided between wildcard DNS (*.example.com) vs. explicit records
- Chosen a DNS provider (Cloudflare, Google Cloud DNS, registrar defaults)
🔒 SSL / HTTPS
- Are you using Let's Encrypt, Cloudflare Universal SSL, or managed certs?
- Do you need wildcard SSL?
- Is your certificate auto-renewing?
- Checked SSL score (via SSL Labs)
🛠️ Hosting / Infra
- Hosting static or dynamic? (GitHub Pages, GCP, Vercel, DIY?)
- Do you need subdomain routing for dev/demo environments?
- Are you reverse proxying traffic or going direct?
🚦 Load Balancing / Rate Limiting
- Need to handle traffic spikes? (GCP LB, Cloudflare Proxy)
- Are rate limits affecting internal tooling or automation?
🧰 Ops Burden
- Who patches and updates the server?
- Who handles monitoring, alerts, and logs?
- Are backups or snapshots in place?
💸 Cost Awareness
- What are your fixed vs. variable costs? (DNS, SSL, egress)
- Any surprise bandwidth or certificate fees?
- Is your current setup sustainable at scale?
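And a few commands I leaned on while ticking the DNS and SSL boxes (a sketch; swap in your own domain):
# Has the record propagated, and who answers for the zone?
dig +short example.com A
dig +short NS example.com
dig +short dev.example.com A

# Which certificate is actually being served, and when does it expire?
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates

# If you run certbot yourself: will renewal actually work?
sudo certbot renew --dry-run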