PostHog Cloud EU bill crossed 180 EUR/month at ~3M events, so I migrated to a self-hosted Hetzner box for 42 EUR/month.
Cost crossover sits near 2M events/month for solo studios. Below that, Cloud EU is cheaper once you count your time.
Four things bit me on the first try: the Hobby tier's 10k events/day cap, unbounded ClickHouse memory, a reverse proxy that has to be persistent, and one abandoned community plugin.
After 30 days of self-hosted: 42 EUR infra, 0 EUR PostHog, ~3 hours/month maintenance. Worth it past 2M events.
PostHog Cloud EU sent me a 180 EUR invoice last month. I was already paying 42 EUR/month for a Hetzner box that sat 80% idle. The math wrote itself, so I migrated. Then four small mistakes turned a "weekend project" into a 6-day stretch where my analytics were partially broken. Here is what actually happened, the cost crossover I found, and the gotchas nobody warns you about.
When self-hosting PostHog actually saves money
PostHog Cloud EU pricing is generous up to 1M events/month, then it ramps. My event volume sat around 2.4M/month across raxxo.shop and three side projects. Cloud EU billed me 180 EUR. Self-hosted on a Hetzner CCX23 (4 vCPU, 16 GB RAM, 80 GB NVMe) costs me 42 EUR/month including a 5 EUR backup volume.
The crossover point is not what PostHog's docs imply. Their calculator suggests self-hosting wins past 1M events. In practice, you need to count three things they leave out:
Your time. The first migration weekend cost me 14 hours. After that, maintenance has been 3 hours/month.
The persistent reverse proxy you need (more on this below). That is another small box or a Caddy/Cloudflare Tunnel setup.
Backups. The default ClickHouse backup story is "you figure it out." I run a nightly snapshot to a Hetzner Storage Box at 4 EUR/month (a sketch of the job is at the end of this section).
Add those up. For me at ~2.4M events: Cloud EU 180 EUR vs self-hosted 51 EUR all-in. Past 2M events/month, self-hosting wins clearly. Below 1M events, Cloud EU is cheaper once you value your weekends honestly. Between 1M and 2M is a wash for a solo studio.
The crossover also depends on session replay. If you record sessions, your storage and compute jump fast. I kept session replay on Cloud EU for the first month after migration to test self-hosted performance under load before flipping that switch.
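If you want a starting point for that nightly backup job, here is a minimal sketch. It assumes Altinity's clickhouse-backup tool with its remote storage already configured to reach the Storage Box over SFTP; the tool choice is illustrative, the three-step shape is the point.

```sh
#!/bin/sh
# Nightly ClickHouse snapshot, run from cron.
# Sketch only: assumes Altinity's clickhouse-backup is installed and its
# remote storage is configured (SFTP to a Hetzner Storage Box works).
set -eu
NAME="posthog-$(date +%F)"
clickhouse-backup create "$NAME"          # local snapshot (hardlinks, cheap)
clickhouse-backup upload "$NAME"          # ship it to remote storage
clickhouse-backup delete local "$NAME"    # keep local disk usage flat
```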
Gotcha 1: the Hobby tier silently caps at 10k events/day
PostHog ships two self-host install paths: the Hobby image (single Docker Compose file) and the production Helm chart. The Hobby docs say "great for personal projects." What they bury is the 10k events/day soft cap. Past that, ingestion still works, but background jobs start lagging and the dashboards go stale by hours.
I hit this on day 3. The dashboards looked fine for the first two days because traffic was low while DNS propagated. Then a Hashnode syndication picked up a post and pushed 18k events through in an afternoon. Replay clips stopped appearing. Insights froze. PostHog logged nothing useful because the cap is enforced at the worker queue level, not at ingest.
The fix is to skip Hobby entirely for anything past hobby traffic. Move to the Helm chart on a single-node K3s cluster, or pull the Hobby docker-compose.yml and bump the worker replicas + ClickHouse memory yourself. I did the second option because I did not want a Kubernetes layer for one app. Two extra workers and a memory bump fixed the lag inside an hour.
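For reference, the compose-side change amounts to an override file. A sketch, assuming the service names from the Hobby docker-compose.yml as I found it (a worker service and a clickhouse service); check yours before copying, because the file changes between releases.

```yaml
# docker-compose.override.yml -- sketch, not a drop-in
services:
  worker:
    deploy:
      replicas: 3          # was 1; the two extra workers cleared the queue lag
  clickhouse:
    deploy:
      resources:
        limits:
          memory: 8g       # hard ceiling; pairs with the config in Gotcha 2
```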
If you are reading this before migrating: pretend the Hobby tier does not exist. Start on Helm or hand-tuned compose from day one.
Gotcha 2: ClickHouse will eat all your RAM if you let it
ClickHouse is the engine PostHog uses for events. By default, the Hobby image gives ClickHouse no memory limit. On a 16 GB box running ClickHouse plus Postgres plus Kafka plus Redis plus PostHog itself, ClickHouse will happily consume 12 GB during a query and OOM-kill Kafka. When Kafka dies, ingestion silently drops events for the 4-7 minutes it takes to restart and replay.
I lost a partial day of analytics figuring this out. The symptom is "some events show up, some do not, and there is no error anywhere." The cause is OOM-killed Kafka silently restarting and not catching up.
The fix is two lines in the ClickHouse config:
```xml
<max_server_memory_usage_to_ram_ratio>0.5</max_server_memory_usage_to_ram_ratio>
<max_memory_usage>4000000000</max_memory_usage>
```
Cap ClickHouse at 50% of host RAM and limit per-query memory to 4 GB. On a 16 GB box this leaves headroom for everything else. Query performance dropped maybe 10% on heavy funnels. I have not noticed it once in normal use.
If you run a 32 GB box you can be more generous. On 16 GB, do not skip this step.
One more detail: ClickHouse keeps merge operations running in the background. Those are memory-hungry too. Set background_pool_size to 8 (down from the default 16) on small boxes. My CPU graph flattened out the day I changed that.
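If you are wondering where those settings live: ClickHouse splits server-level settings from per-query profile settings. A sketch of the two drop-in files, using the stock paths inside the ClickHouse container (mount them via your compose file):

```xml
<!-- /etc/clickhouse-server/config.d/limits.xml : server-level settings -->
<clickhouse>
    <max_server_memory_usage_to_ram_ratio>0.5</max_server_memory_usage_to_ram_ratio>
    <background_pool_size>8</background_pool_size>
</clickhouse>

<!-- /etc/clickhouse-server/users.d/limits.xml : per-query profile settings -->
<clickhouse>
    <profiles>
        <default>
            <max_memory_usage>4000000000</max_memory_usage>
        </default>
    </profiles>
</clickhouse>
```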
Gotcha 3: the reverse proxy actually has to be persistent
PostHog's docs talk about a reverse proxy "to avoid ad blockers." What they do not emphasize is that this proxy needs to be on a domain you own, on infrastructure separate from your app, and persistent across deploys. If you stuff the proxy into your Vercel-deployed Next.js app the way the quickstart shows, two things break:
Vercel cold starts add 200-800 ms to ingestion calls. PostHog's SDK retries on timeout, so you get duplicate events.
Your app deploys cycle the proxy URL. Cached SDK configs in user browsers point to the previous build's edge for a few minutes after each deploy.
I fixed this by running Caddy on the same Hetzner box that hosts PostHog. The PostHog SDK in my apps points to events.raxxo.shop, which Caddy reverse-proxies to the local PostHog container. Zero cold starts, zero deploy ripples, one TLS cert renewed automatically.
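The entire Caddy config fits in a few lines. A sketch, assuming the PostHog web container is published on localhost:8000 and the DNS record for events.raxxo.shop already points at the box; Caddy provisions and renews the TLS cert on its own.

```
# Caddyfile -- adjust the upstream port to match your compose file
events.raxxo.shop {
    reverse_proxy localhost:8000
}
```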
If you do not want a second domain, Cloudflare Workers also works as the proxy layer for free. The point is: do not put the proxy on Vercel or Netlify if your app is also there. Same-origin proxies on serverless platforms create more problems than they solve.
A second wrinkle: PostHog's SDK config bakes the proxy URL into your build. If you change the proxy host later, you need a coordinated redeploy of every app pointing at it. Pick the domain once, write it down, and treat it as permanent. I keep mine in a single env var (NEXT_PUBLIC_POSTHOG_HOST) shared across all four apps.
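Concretely, the init code in each app reads that env var. A sketch of the shape, with NEXT_PUBLIC_POSTHOG_KEY standing in for however you store the project API key:

```typescript
// lib/posthog.ts -- shared client-side analytics init
import posthog from "posthog-js";

export function initAnalytics(): void {
  posthog.init(process.env.NEXT_PUBLIC_POSTHOG_KEY!, {
    // Baked in at build time -- changing the host means redeploying
    // every app that points at the proxy.
    api_host: process.env.NEXT_PUBLIC_POSTHOG_HOST,
  });
}
```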
For background on the cost-cutting infra logic, see Neon Database Branching Saved Me 200 EUR Every Month, which covers a similar move from managed to right-sized.
Gotcha 4: one PostHog plugin had not been updated in 14 months
PostHog's plugin system is solid for the official plugins. The community plugins are a mixed bag. I was using a GeoIP enrichment plugin from the marketplace that worked fine on Cloud EU. On self-hosted PostHog 1.43, that plugin threw a silent error on every event, dropped the geoip field, and logged nothing visible in the dashboard.
I only caught it because a funnel I built specifically segments German vs rest-of-EU traffic and the German segment went to zero overnight. The plugin's GitHub repo had not seen a commit in 14 months. Replacing it with the official MaxMind GeoIP plugin took 10 minutes once I knew where to look.
Audit your plugins before migrating. Open every community plugin's repo, check the last commit date, and check the issue tracker for "1.40+" or "self-hosted" complaints. If a plugin has not been touched in 12 months, plan to replace it. Cloud EU silently swaps in working versions for you. Self-hosted does not.
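The commit-date check is easy to script if you have the GitHub CLI handy; the repo slugs below are placeholders for your own plugin list:

```sh
# Print the last-push date for each community plugin repo.
# Assumes gh is installed and authenticated; slugs are placeholders.
for repo in some-org/geoip-plugin some-org/other-plugin; do
  printf '%s last pushed: ' "$repo"
  gh repo view "$repo" --json pushedAt --jq '.pushedAt'
done
```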
I covered the broader observability stack switch in PostHog Error Tracking Killed My Sentry Bill, which lays out why I am consolidating on PostHog instead of running 3 separate tools.
Bottom line
Past 2M events/month, self-hosted PostHog on a Hetzner box saves real money. Below 1M events, Cloud EU is cheaper once you honestly count your time. The crossover sits around 1.5M-2M events for a solo studio.
The four gotchas (Hobby tier cap, ClickHouse memory, persistent reverse proxy, plugin compatibility) cost me 6 days the first time. They are all 10-30 minute fixes if you know to look for them. After 30 days running self-hosted, my analytics setup is faster, costs 51 EUR/month all-in, and I have not lost a single event since the ClickHouse memory fix.
If you want the broader picture of how I run a one-person studio's developer infrastructure, the Lab Overview collects every infra and tooling article I have written, organized by topic. Start there if you are figuring out where to cut your own bill.