While ngrok already uses ngrok to dogfood ngrok.com, I can’t forget about its close cousin: When your engineers use your product to power their homelabs.
Honestly, I’m not even sure what to call it. Doglabbing? Homefooding?
Either way, I’ve heard plenty of stories about how ngrokkers use endpoints and Traffic Policy to access their homelabs from anywhere or securely share services with friends and family, so I thought it was about time for a round-robin look at the shapes of their networks, the policies that rule them, and the gateways that wire it all up.
So, let’s hear from each engineer, in their own words, about exactly how they’re using ngrok for their personal homelabs and self-hosted side projects.
Meta note: I'll be publishing one setup a day throughout the week to make up a short series—be sure to check back each day to see another engineer and their doglabbing setup!
Christian (Staff Data Infrastructure Engineer): Self-hosted analytics with umami, secured with OAuth
While I also use ngrok + Traffic Policy for my home lab, my most recent use case was actually for my website and blog on a VPS, where I wanted to host umami, a privacy-focused alternative to Google Analytics, via Docker.
To avoid running it on bare metal or setting up a reverse proxy or similar plumbing (the server already runs nginx and not much else), I used ngrok to expose the service and Traffic Policy to secure it: the admin interface should only be accessible to me, while the client-side script needs to be publicly accessible.
I secure all services with OAuth and IP Intelligence rules that filter on `conn.client_ip.is_on_blocklist` (and geolocation, occasionally).
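As a sketch of what that occasional geolocation filter could look like (the country codes here are hypothetical, and this assumes Traffic Policy's `conn.geo.country_code` variable), a rule like this could sit alongside the blocklist check:

```yaml
on_http_request:
  # Hypothetical example: reject requests from countries I never
  # expect traffic from. The country list is made up for illustration.
  - name: block unexpected geos
    expressions:
      - "conn.geo.country_code in ['XX', 'YY']"
    actions:
      - type: custom-response
        config:
          content: Unauthorized request
          status_code: 403
```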
However, since umami needs to serve its script client-side, I had to add exclusions to the OAuth rule to ensure `/script.js` can be served (and that it works!) without OAuth. I've had to use this trick for other services that expose external endpoints but don't keep them all under one path.
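In its general form, the trick is just a negated path match in the OAuth rule's expression, so the action only fires on non-public routes. A minimal sketch, with a hypothetical `/public/` prefix standing in for whatever paths the service exposes:

```yaml
on_http_request:
  # Hypothetical example: require OAuth everywhere except a public prefix.
  - name: oauth
    expressions:
      - "!req.url.path.startsWith('/public/')"
    actions:
      - type: oauth
        config:
          provider: google
```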
Agent configuration:
```yaml
version: 3
endpoints:
  - name: umami
    url: https://example.org
    upstream:
      url: "http://host.docker.internal:3000"
    traffic_policy:
      on_http_request:
        - name: block spam
          expressions:
            - "conn.client_ip.is_on_blocklist == true"
          actions:
            - type: custom-response
              config:
                content: Unauthorized request
                status_code: 403
        - name: Add `robots.txt` to deny all bots and crawlers
          expressions:
            - req.url.contains('/robots.txt')
          actions:
            - type: custom-response
              config:
                status_code: 200
                content: "User-agent: *\r\nDisallow: /"
                headers:
                  content-type: text/plain
        - name: oauth
          expressions:
            - "!(req.url.path.contains('/_next') || req.url.path.contains('/script.js') || req.url.path.contains('/api/send'))"
          actions:
            - type: oauth
              config:
                auth_id: oauth
                provider: google
        - name: bad email
          expressions:
            - "!(actions.ngrok.oauth.identity.email in ['myemail@example.org']) && !(req.url.path.contains('/_next') || req.url.path.contains('/script.js') || req.url.path.contains('/api/send'))"
          actions:
            - type: custom-response
              config:
                content: Unauthorized
                status_code: 400
      on_http_response: []
```
What's happening here? On every HTTP request, this policy:

- Sends a `403` error code back to any request coming from an IP address on a blocklist.
- Uses a custom response to send a `robots.txt` file with `Disallow` for all user agents.
- Requires OAuth authentication for all requests except those to three specific routes, which aren't used by people: `/_next`, `/script.js`, and `/api/send`.
- Sends a `400` response with the body `Unauthorized` to OAuth logins from any email except `myemail@example.org`.
- Allows all non-blocklisted, non-bot, OAuth-authenticated traffic through to the endpoint.
---
Check back tomorrow for part two, which covers standardized AuthN and routing for everything on Kubernetes with a handful of helpful CRDs. For now, some helpful docs reading: