If you're searching for a posthog self hosted review, you're probably balancing three things that rarely coexist: powerful product analytics, sane pricing, and real control over your data. I’ve run PostHog in production and compared it with mainstream analytics stacks, and the self-hosted route is both liberating and occasionally annoying—in the practical, ops-heavy way that most “open source” pitches gloss over.
What you actually get with PostHog self-hosted
PostHog’s self-hosted offering isn’t a watered-down demo. You get the core product analytics pieces you’d expect from modern tools:
- Event capture + properties (the Mixpanel/Amplitude mental model)
- Funnels, trends, retention, cohorts
- Feature flags & experiments (a big differentiator if you want analytics + rollout control)
- Session replay (competes with Hotjar and FullStory in many common use cases)
- User paths and dashboards
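To make the analytics-plus-flags combo concrete, here's a sketch of what a percentage-based rollout check does under the hood. In real code you'd just call `posthog.isFeatureEnabled(...)`; the hash-bucketing below is an illustration of the mechanism, and the flag name and rollout percentage are hypothetical.

```javascript
// Illustration of percentage-based rollout bucketing, the mechanism
// behind feature flags. With posthog-js you'd call
// posthog.isFeatureEnabled('new-onboarding') instead; this sketch just
// shows the idea with a simple deterministic hash.
function hashToBucket(flagKey, userId) {
  const input = `${flagKey}.${userId}`
  let hash = 0
  for (let i = 0; i < input.length; i++) {
    hash = (hash * 31 + input.charCodeAt(i)) >>> 0 // keep unsigned 32-bit
  }
  return hash % 100 // bucket in 0..99
}

// A user is in the rollout if their bucket falls below the percentage.
function isEnabled(flagKey, userId, rolloutPercent) {
  return hashToBucket(flagKey, userId) < rolloutPercent
}

// Same user + flag always lands in the same bucket, so rollouts are
// sticky: a user who sees the feature keeps seeing it.
const on = isEnabled('new-onboarding', 'user_123', 30)
```

The point is that flag evaluation and event capture share the same user identity, which is what makes "ship behind a flag, measure in the same tool" workflows painless.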
The self-hosted value is less about features and more about governance:
- You keep raw event data in your environment.
- You can control retention, encryption, access, and region.
- You can inspect/patch the deployment when something breaks.
Opinionated take: if you never plan to look at infrastructure metrics, logs, or backups, don’t self-host. PostHog self-hosted shines when you’re willing to treat analytics like any other production dependency.
Architecture, ops reality, and ongoing costs
Self-hosting PostHog is not “just run a container.” In real deployments you’ll be operating a small analytics pipeline. Most setups include:
- PostgreSQL (core app state)
- ClickHouse (event analytics engine)
- Redis (queues/cache)
- A handful of PostHog services (web, worker, plugin server)
That’s totally manageable—but it’s not free in time.
What tends to bite teams
- ClickHouse sizing: underprovision and queries get slow; overprovision and your cloud bill quietly grows.
- Disk and retention: event data balloons. Decide retention early.
- Upgrades: PostHog moves fast. Staying current means regular updates and occasional migration surprises.
- Plugin surface area: powerful, but introduces more moving parts.
Cost reality
People self-host to save on per-event pricing, but you swap it for:
- compute + storage
- engineering time
- monitoring + backups
If your team is small and your traffic is high, self-hosting can be a win. If your team is small and your traffic is low, hosted options may be cheaper when you price your time honestly.
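To price your time honestly, a back-of-envelope comparison helps. Every number below is a hypothetical placeholder (SaaS per-event rate, infra cost, ops hours, hourly rate); plug in your own quotes.

```javascript
// Back-of-envelope monthly cost comparison: hosted per-event pricing
// vs. self-hosting (infra + engineering time). All figures here are
// hypothetical placeholders -- substitute your own numbers.
function hostedCost(eventsPerMonth, pricePerMillionEvents) {
  return (eventsPerMonth / 1_000_000) * pricePerMillionEvents
}

function selfHostedCost(infraPerMonth, opsHoursPerMonth, hourlyRate) {
  return infraPerMonth + opsHoursPerMonth * hourlyRate
}

// Example: 100M events/mo at a hypothetical $20 per million events,
// vs. $400/mo infra plus ~10 ops hours at $80/hr.
const hosted = hostedCost(100_000_000, 20)       // 2000
const selfHosted = selfHostedCost(400, 10, 80)   // 1200
// At this volume self-hosting wins, and the gap widens as events grow
// while ops time stays roughly flat. At low volume, the fixed ops cost
// dominates and hosted is usually cheaper.
```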
How it compares to Mixpanel, Amplitude, Hotjar, and FullStory
Here’s the blunt comparison based on typical product analytics workflows.
PostHog vs Mixpanel / Amplitude
- Mixpanel and Amplitude are polished, analytics-first platforms. Their UX for exploration and “asking the next question” is hard to beat.
- PostHog is competitive on core analytics, but its real edge is the combo of analytics + feature flags + experimentation under one roof.
- Self-hosting tilts the decision: PostHog gives you data control that you won’t get in the same way with standard SaaS contracts.
If your org lives and dies by high-touch behavioral analysis and you want the smoothest UI, Mixpanel/Amplitude often win. If you want a strong “product OS” you can run on your own infra, PostHog is compelling.
PostHog vs Hotjar / FullStory
- Hotjar is great for lightweight qualitative insights: heatmaps, simple recordings, quick surveys.
- FullStory is a powerhouse for deep session replay + debugging, with excellent search and developer-centric tooling.
- PostHog’s session replay is good enough for many teams, but if replay is the product, FullStory is usually ahead.
A practical pattern: teams use PostHog for events + funnels, and keep a specialist tool for replay—until PostHog replay meets their needs and they consolidate.
Actionable example: self-hosted capture + privacy-safe defaults
Self-hosting doesn’t automatically make you compliant or privacy-safe—you still need to configure what you collect.
A simple, practical starting point: capture events but avoid sending raw PII (emails, full names) as properties. Use stable internal IDs and redact where possible.
Here’s a minimal JavaScript snippet using posthog-js that captures a signup event while keeping properties clean:
```javascript
import posthog from 'posthog-js'

posthog.init('YOUR_PROJECT_API_KEY', {
  api_host: 'https://posthog.yourdomain.com',
  capture_pageview: true,
  autocapture: false, // start conservative
})

// Use an internal user id, not email
posthog.identify(user.id)

posthog.capture('signed_up', {
  plan: user.plan,
  signup_method: 'google',
  // Avoid: email, full_name, address
})
```
Opinionated guidance: start with autocapture off unless you have a clear plan for filtering. You can always add more events later; deleting sensitive data at scale is painful.
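One way to enforce the "no raw PII" rule mechanically is to strip risky keys before events ever leave the browser. posthog-js exposes a `sanitize_properties` hook at init time (check that your version supports it); the denylist below is a hypothetical starting point, not an exhaustive one.

```javascript
// Strip known-sensitive keys from event properties before they are
// sent. The denylist is a hypothetical starting point -- extend it to
// match your own data model.
const DENYLIST = ['email', 'full_name', 'address', 'phone']

function sanitizeProperties(properties) {
  const clean = { ...properties }
  for (const key of DENYLIST) {
    delete clean[key]
  }
  return clean
}

// With posthog-js this can be wired in at init time (assuming your
// version supports the sanitize_properties hook):
//
//   posthog.init('YOUR_PROJECT_API_KEY', {
//     api_host: 'https://posthog.yourdomain.com',
//     sanitize_properties: (props, _eventName) => sanitizeProperties(props),
//   })
```

A central hook like this beats remembering to omit fields at every capture call site, which is exactly the kind of discipline that erodes as the team grows.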
Who should (and shouldn’t) self-host PostHog
Self-hosted PostHog is a strong fit when:
- You need data residency or stricter governance.
- You have enough event volume that SaaS pricing becomes awkward.
- You want analytics + feature flags together.
- You’re comfortable owning a ClickHouse-based system.
It’s a weak fit when:
- You want zero ops and guaranteed SLAs.
- You need best-in-class replay search and dev tooling (often a FullStory advantage).
- Your team primarily wants a slick, guided analytics UX (Mixpanel/Amplitude strengths).
In the final analysis, I’d treat PostHog self-hosted as the “builder’s choice”: not because it’s trendy, but because it’s one of the few analytics stacks that can realistically replace multiple tools if you’re willing to run it. If you’re already paying for a mix of event analytics and qualitative tooling, piloting PostHog alongside what you have can be a low-risk way to test consolidation—especially if you keep the rollout soft and measure whether teams actually use the dashboards they ask for.