About six months ago I was managing infrastructure across several environments and ran into a consistent limitation: there wasn’t a clean way to provide per-environment observability with real isolation without duplicating the entire monitoring stack. Dashboard variables solved for presentation, not security, and any admin could still access everything. Spinning up separate Prometheus instances fixed isolation, but at the cost of operational overhead and fragmentation. Neither approach scaled cleanly.
The stack
The core is standard: Prometheus for metrics, Loki for logs, Grafana for visualization, Alertmanager for routing, Blackbox Exporter for probing website endpoints, and Grafana Alloy as the agent on client hosts. Everything runs in Docker Compose on two Lenovo ThinkCentre M75s: one primary server and one warm standby. MinIO provides S3-compatible object storage for Loki chunks, while PostgreSQL backs the portal and streams replication to the replica. Nginx and Cloudflare tunnels handle ingress.
Nothing exotic. The interesting decisions are in how the pieces fit together, not which pieces were chosen.
The architecture decision that defined everything
Early on I had to choose how to handle high availability at the data layer. The obvious approach is server-side replication: run Prometheus remote_write from the primary to the replica so the replica stays current. I tried it. Then I removed it.
The problem with server-side replication is that it creates a dependency between the two servers. If the primary is the bottleneck, the replica suffers. If the remote_write endpoint is misconfigured, you get silent data loss with no indication anything went wrong. And when you eventually need to promote the replica, you're never quite sure how much data it really has.
The approach I landed on is client-side dual-push. Each client's Alloy agent pushes metrics and logs to both of our servers simultaneously, through two separate Cloudflare tunnels, without meaningful overhead on the client's hosts. The primary and replica servers have no knowledge of each other at the metrics layer. Each Prometheus instance receives the same data independently. Each Loki instance receives the same logs independently and stores them in its own MinIO instance.
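In Alloy terms, the client config amounts to one scrape pipeline fanning out to two remote_write components, with logs following the same pattern. A minimal sketch (the component labels, exporter choice, and URLs are placeholders, not the production config):

```
// Sketch of the dual-push wiring; URLs are placeholders.
prometheus.exporter.unix "host" { }

// One scrape pipeline fans out to both servers independently.
prometheus.scrape "host" {
  targets = prometheus.exporter.unix.host.targets
  forward_to = [
    prometheus.remote_write.primary.receiver,
    prometheus.remote_write.replica.receiver,
  ]
}

prometheus.remote_write "primary" {
  endpoint {
    url = "https://metrics-primary.example.com/api/v1/write"
  }
}

prometheus.remote_write "replica" {
  endpoint {
    url = "https://metrics-replica.example.com/api/v1/write"
  }
}

// Logs take the same shape: two loki.write components,
// each receiving the same stream.
loki.write "primary" {
  endpoint {
    url = "https://logs-primary.example.com/loki/api/v1/push"
  }
}

loki.write "replica" {
  endpoint {
    url = "https://logs-replica.example.com/loki/api/v1/push"
  }
}
```

If either endpoint is unreachable, Alloy's write-ahead log buffers and retries for that endpoint alone, so a replica outage never affects delivery to the primary.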
The practical result is that the warm standby isn't warm, it's live. If the primary goes down, the replica has current data up to the moment of failure. Failover is a Cloudflare tunnel redirect and a PostgreSQL promotion. No data replay, no gap in metrics, no complicated reconciliation.
The tradeoff is double the egress from every client host and double the ingestion load on our internal network. At current scale that's not meaningful. At a few hundred tenants it becomes a real consideration, and we're planning for it now.
Three-layer tenant isolation
The isolation model runs at three independent layers, and the independence is intentional. Any single layer failing shouldn't compromise the others.
The first layer is Prometheus labels. Every metric series that arrives at the ingestion endpoint carries a tenant label injected by Alloy before the push. Prometheus doesn't trust the client to label correctly, so Alloy handles it, and the label is set in the config file generated server-side at registration time. A client cannot mislabel their own series, even if they try.
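In Alloy terms, that injection amounts to an external_labels entry baked into the generated remote_write block. A sketch, with a placeholder tenant slug and URL:

```
// Sketch: rendered server-side at registration; "acme" is a placeholder.
// external_labels are applied to every series sent over the wire,
// so the client's workloads never get a say in the tenant label.
prometheus.remote_write "primary" {
  external_labels = {
    tenant = "acme",
  }
  endpoint {
    url = "https://metrics-primary.example.com/api/v1/write"
  }
}
```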
The second layer is separate Grafana organizations. Each tenant gets their own org. Users in that org can only see dashboards scoped to their org. The data sources in each org have a preset label filter applied, so even if someone found a way to query directly, they'd only see their own tenant's data.
The third layer is per-tenant Cloudflare Access service tokens. Each tenant authenticates their Alloy push through a unique token. Revoke the token and that tenant's agents stop pushing immediately. There’s no Prometheus config change, no restart, no waiting for a scrape interval. It's the fastest lever in the decommissioning flow.
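On the wire, that authentication is just Cloudflare's standard service-token headers attached to every push. A sketch with placeholder credentials:

```
// Sketch: per-tenant Cloudflare Access service token on every push.
// The header names are Cloudflare's standard service-token headers;
// the values here are placeholders.
prometheus.remote_write "primary" {
  endpoint {
    url = "https://metrics-primary.example.com/api/v1/write"
    headers = {
      "CF-Access-Client-Id"     = "TENANT_CLIENT_ID.access",
      "CF-Access-Client-Secret" = "TENANT_CLIENT_SECRET",
    }
  }
}
```

Revoke the token in Cloudflare and the edge starts rejecting those pushes before they ever reach Prometheus.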
A compromised token exposes only that tenant's data, not any other tenant's. The next improvement on the roadmap is moving from per-tenant tokens to per-server tokens, so a compromised token would expose one machine rather than one organization. That's a Phase 2 item.
Design evolution
The first iteration of this project ran node_exporter and promtail on each server, which worked great on a local network but fell short as a production model. Asking a client to expose multiple ports and poke holes in their firewalls felt like an unnecessary security risk. One of our core beliefs is that we should require as little as possible from clients and be as unobtrusive as possible in their infrastructure: clients should not have to worry about anything we install on their systems, and we should not ask them to change anything to accommodate us. With that in mind, we rebuilt the entire stack from scratch around Grafana Alloy as the remote agent, connecting to our servers over an encrypted Cloudflare tunnel.
That innocent early design flaw changed how I weighed every decision that followed. The focus shifted from simply getting things built to making each decision as production-ready as feasible, without going down the rabbit hole of continuous reinvention at the expense of actually shipping. It also crystallized the idea that we should evaluate the available software options in depth and choose the ones that best serve the end users.
What I got wrong
Three things worth being honest about.
The first was documentation drift. After briefly experimenting with server-side replication, I documented a decision to remove client-side dual-push in the architecture log. The dual-push was never actually removed from the client configs, and I discovered this weeks later while reviewing the Alloy config on a client host. The lesson: verify the running system, not the documentation.
The second was backup coverage. The entire stack is backed up in triplicate, but when I first set up the PBS backup script it captured compose files, configs, and scripts, not the actual data volumes where Prometheus, Loki, Grafana, and PostgreSQL store their data. The entire data layer was unprotected. I found this during a backup verification exercise and fixed it immediately, but it's the kind of gap that only shows up when you look carefully.
The third was a legacy mTLS issue in the Grafana datasource configuration. After a Grafana admin account recovery, the datasources still carried stale TLS settings from an old PKI that no longer existed. Grafana reported them as healthy, but queries were silently misconfigured. The fix was straightforward once found; the problem was that nothing surfaced it automatically. I now run a datasource health check after any Grafana restart.
Where it stands
The platform is running, the architecture is validated, and I'm looking for a small number of beta testers willing to run it on real infrastructure and tell me honestly what's missing. For beta testers I'm flexible on the free tier's limits. The bootstrap script installs Alloy, registers the server against the API, and exits; after that there's no ongoing shell access, no cron jobs, and no modifications outside the Alloy install path.
If you're running infrastructure without good visibility into it, or if you've looked at pricing from bigger companies and decided it doesn't fit, I'd like to hear about it. The free tier covers three servers, no credit card required. Full script at https://monitor.irinobservability.com/bootstrap.sh if you want to read it before running anything.
https://irinobservability.com/signup
