I come from a DevOps and blockchain background. I've spent years managing infrastructure, wrangling containers, and thinking about how systems should be architected. So when I started shipping web apps and saw what developers were paying for deployment platforms, something felt off.
Vercel's DX is incredible — I won't pretend otherwise. git push and your app is live. But then you look at the bill: $20/seat/month. Bandwidth overages. And that's just hosting. You still need Sentry for error tracking ($26/mo), something like PostHog for session replay and analytics, an uptime monitoring tool, maybe a transactional email service. Suddenly you're juggling six SaaS subscriptions for what is fundamentally one job: running your app and knowing what's happening inside it.
As someone who's managed infrastructure professionally, I kept thinking: all of this can run on a single $20 VPS. The data is just HTTP requests, error payloads, and time-series metrics. There's no technical reason this needs to be six separate services.
So I built Temps.
What is Temps, in one sentence
An open-source, self-hosted deployment platform with built-in analytics, error tracking, session replay, uptime monitoring, and transactional email. Runs on any VPS. Dual-licensed under MIT and Apache 2.0.
```bash
curl -fsSL https://temps.sh/deploy.sh | sh
```
That's the entire install. One command, on any Linux server. From bare server to first deployment in under 3 minutes.
Why Rust
I didn't start with Rust. The first prototype was Node.js. It worked, but the resource footprint was brutal — the deployment server itself was eating 800MB of RAM just idling. When the thing that deploys your apps needs more resources than the apps it's deploying, something is wrong.
Rust brought that down dramatically. But the real win was Cloudflare Pingora — their open-source proxy engine. Pingora handles reverse proxying, TLS termination (with dynamic SNI-based certificate loading), HTTP/2, and connection management. Building on top of it meant I got battle-tested networking code from a company that handles a significant chunk of internet traffic, instead of writing my own proxy from scratch.
The stack ended up being:
- Rust — 51 workspace crates covering the entire platform
- Axum — HTTP framework for the API
- Sea-ORM — database access layer
- Pingora — Cloudflare's proxy engine for reverse proxying and TLS
- Bollard — Docker API client for container management
- PostgreSQL + TimescaleDB — app data + time-series analytics/metrics
Everything runs as a single binary. No Kubernetes. No microservices. One process that handles deployments, proxying, analytics ingestion, error collection, monitoring, email, and more. My DevOps background made me appreciate this kind of simplicity — fewer moving parts means fewer things to debug at 3am.
The hard problems nobody warns you about
Building a deployment platform sounds straightforward until you actually try it. Here's what surprised me.
Zero-downtime deployments
The naive approach — stop old container, start new one — creates a gap. Even a 2-second gap means dropped requests and angry users.
Temps uses a blue-green deployment pattern:
1. Build the new container image
2. Deploy the new container alongside the old one and health-check it (HTTP health checks with a configurable timeout, up to 300 seconds)
3. Shift traffic to the new container once health checks pass
4. Tear down the old container only after the new one is confirmed healthy
If the new container fails health checks or crashes, the old container stays running and the deployment is marked as failed. No downtime.
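The health-check gate in step 2 can be sketched like this (a minimal TypeScript sketch with hypothetical names and a simple polling strategy; Temps' actual implementation drives Docker through Bollard and routes traffic via Pingora):

```typescript
// Minimal sketch of the health-check gate in a blue-green cutover.
// waitUntilHealthy is an illustrative stand-in, not Temps' real code.
async function waitUntilHealthy(
  url: string,
  timeoutMs: number,
  intervalMs = 2_000,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(url);
      if (res.ok) return true; // 2xx response: the new container is serving
    } catch {
      // connection refused: container still starting, keep polling
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false; // deadline exceeded: keep the old container, fail the deploy
}
```

Only when this resolves `true` does traffic shift; a `false` leaves the old container untouched, which is what makes the pattern safe.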
Framework auto-detection is a rabbit hole
"Just detect the framework from the project files" sounds simple. In practice:
- A project with both `next.config.js` and a `Dockerfile` — which one wins?
- A Python project with `requirements.txt`, `Pipfile`, AND `pyproject.toml` — which dependency manager?
- A Node.js project — is it Next.js, Vite, Nuxt, Remix, Astro, NestJS, or plain Express?
- A monorepo with 4 different frameworks in subdirectories
I built a detection system that reads package.json dependencies, checks for framework-specific config files, and detects package managers from lock files (npm, yarn, pnpm, bun). It handles Next.js, Vite, Astro, Nuxt, Remix, NestJS, Vue, Express, Docusaurus, CRA, Rsbuild, Python, Go, Rust, Java, .NET, and anything with a Dockerfile. The Dockerfile always wins if present.
Each detected preset generates a Dockerfile automatically. The result: most projects deploy with zero configuration.
```bash
bunx @temps-sdk/cli deploy
```
No temps.json. No temps.yaml. No build configuration file. It just figures it out.
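In spirit, the priority logic looks like this (an illustrative TypeScript subset with stand-in preset names, not the actual detection code, which covers far more frameworks):

```typescript
// Sketch of detection priority: an explicit Dockerfile always wins,
// then framework-specific config files, then package.json dependencies,
// then a generic Node.js fallback.
type Preset = "dockerfile" | "nextjs" | "nuxt" | "astro" | "vite" | "node" | "unknown";

function detectPreset(files: Set<string>, deps: Record<string, string>): Preset {
  if (files.has("Dockerfile")) return "dockerfile"; // explicit user intent wins
  if (files.has("next.config.js") || deps["next"]) return "nextjs";
  if (files.has("nuxt.config.ts") || deps["nuxt"]) return "nuxt";
  if (files.has("astro.config.mjs") || deps["astro"]) return "astro";
  if (deps["vite"]) return "vite";
  if (files.has("package.json")) return "node"; // generic Node.js fallback
  return "unknown";
}
```

The ordering is the whole trick: each rule only fires when every more specific rule above it has failed.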
Sentry-compatible error tracking
I didn't want to build a toy error tracker. I wanted something you could actually use in production instead of Sentry.
The key decision: make it Sentry-compatible at the protocol level. Temps implements the Sentry envelope format — it parses events, transactions, sessions, and spans using relay-event-schema (Sentry's own Rust types). If you're already using @sentry/nextjs or sentry-sdk for Python, you change one line — the DSN endpoint — and your errors flow into Temps instead.
```javascript
// Before
Sentry.init({ dsn: "https://abc@sentry.io/123" });

// After
Sentry.init({ dsn: "https://abc@your-server.com/123" });
```
Same SDK. Same error grouping. Same stack traces. Source map support included. Zero per-event fees.
I'd rather be compatible with the ecosystem than force people to learn a new tool.
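For context, a Sentry envelope is newline-delimited JSON: one envelope header line, then alternating item-header and payload lines. A simplified parser shows the shape of the wire format (this sketch ignores the optional `length` field and assumes single-line payloads; Temps parses the full format in Rust via relay-event-schema):

```typescript
// Simplified Sentry envelope parser: line 0 is the envelope header,
// followed by repeated pairs of (item header, payload).
interface EnvelopeItem {
  header: Record<string, unknown>;
  payload: string;
}

function parseEnvelope(raw: string): {
  header: Record<string, unknown>;
  items: EnvelopeItem[];
} {
  const lines = raw.split("\n");
  const header = JSON.parse(lines[0]);
  const items: EnvelopeItem[] = [];
  for (let i = 1; i < lines.length; i += 2) {
    if (!lines[i]?.trim()) break; // trailing newline ends the envelope
    items.push({ header: JSON.parse(lines[i]), payload: lines[i + 1] ?? "" });
  }
  return { header, items };
}
```

Because the format is this simple, any server that speaks it can receive traffic from unmodified Sentry SDKs.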
Session replay with rrweb
The session replay feature uses rrweb — the same recording library used by PostHog, LogRocket, and others. The React SDK (@temps-sdk/react-analytics) records DOM mutations and user interactions on the client, compresses them with zlib, and sends them to Temps where they're stored alongside the rest of your analytics data.
You can watch real user sessions directly in the Temps dashboard, correlated with errors, page views, and performance data. No separate session replay subscription needed.
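The batching-and-compression step can be sketched like this (a hypothetical class using Node's zlib as a stand-in for the browser-side compression; the real SDK feeds it from rrweb's `record({ emit })` callback and POSTs batches to the Temps ingest endpoint):

```typescript
import { deflateSync } from "node:zlib";

// Sketch of client-side batching: buffer recorded events, compress the
// batch with zlib, and hand the compressed bytes to a transport callback.
class ReplayBuffer {
  private events: unknown[] = [];

  constructor(
    private send: (compressed: Buffer) => void,
    private maxEvents = 50, // flush threshold before an explicit flush()
  ) {}

  push(event: unknown): void {
    this.events.push(event);
    if (this.events.length >= this.maxEvents) this.flush();
  }

  flush(): void {
    if (this.events.length === 0) return;
    const payload = deflateSync(JSON.stringify(this.events)); // zlib batch
    this.events = [];
    this.send(payload);
  }
}
```

Batching plus compression is what keeps DOM-mutation recording cheap enough to run on every session.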
What I got wrong
I'll save you the hero narrative. I made plenty of mistakes building this solo.
I underestimated managed databases. The first version required you to set up your own PostgreSQL and Redis. Nobody wanted to do that. Temps now provisions Postgres, Redis, S3 (via RustFS), and MongoDB alongside your apps — handles creation, backups, and teardown.
The first CLI was overengineered. It had too many flags and options. I rewrote it to have sensible defaults for everything. Now the most common flow is two commands:
```bash
bunx @temps-sdk/cli init
bunx @temps-sdk/cli deploy
```
I tried to build a dashboard before the CLI was solid. The dashboard is nice for monitoring — it has a web terminal (xterm.js), a code editor (Monaco), charts (Recharts), and the rrweb session replay player. But engineers live in the terminal. Getting the CLI experience right first was the correct order of operations — I just didn't do it in that order.
What surprised me: the MCP server
One thing I didn't plan from the start but ended up building: a Model Context Protocol server (@temps-sdk/mcp). MCP is the standard that lets AI assistants interact with external tools.
With the Temps MCP server, an AI agent like Claude can deploy your apps, check deployment status, and manage your infrastructure through natural language. You add it to your Claude Desktop config:
```json
{
  "mcpServers": {
    "temps": {
      "command": "npx",
      "args": ["@temps-sdk/mcp"]
    }
  }
}
```
And now your AI assistant can talk to your deployment platform directly. It's a small thing, but it fits a pattern I believe in: meet developers where they already work. If that's the terminal, build a great CLI. If that's an AI assistant, build an MCP server.
The economics
Here's the math that motivated this whole project — what a typical developer or small team pays to run production apps:
| What you get with Temps | Instead of paying for |
|---|---|
| Git deployments + preview URLs | Vercel / Netlify ($20+/mo) |
| Web analytics + funnels | PostHog / Plausible ($0-450/mo) |
| Session replay | PostHog / FullStory ($0-2000/mo) |
| Error tracking (Sentry-compatible) | Sentry ($26+/mo) |
| Uptime monitoring + status pages | Better Uptime / Pingdom ($20+/mo) |
| Managed Postgres/Redis/S3/MongoDB | AWS RDS / ElastiCache ($50+/mo) |
| Transactional email + DKIM | Resend / SendGrid ($20-100/mo) |
| Request logs + proxy | Cloudflare ($0-200/mo) |
| KV store + blob storage | Vercel KV / S3 ($0-50/mo) |
| Total with Temps | $0 in SaaS fees (just your VPS, roughly $20/mo) |
As an indie, that difference is real money. It's the difference between burning runway and not.
What Temps supports today
To be concrete about where the project is — this is a 51-crate Rust workspace, not a weekend project:
Frameworks: Next.js, Vite, Astro, Nuxt, Remix, NestJS, Vue, Express, Docusaurus, Python, Go, Rust, Java, .NET, and anything with a Dockerfile.
Deployment:
- Git push deployments (GitHub and GitLab)
- Preview deployments per branch/PR
- Zero-downtime blue-green deployments
- Automatic SSL via Let's Encrypt (HTTP-01 and DNS-01)
- Custom domains with automatic TLS
- Environment variables and secrets (AES-256 encrypted at rest)
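As an illustration of what "AES-256 encrypted at rest" means for secrets, here is a round trip in TypeScript (the GCM mode and the `iv | tag | ciphertext` blob layout are my assumptions for the sketch; the post only specifies AES-256):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Sketch of encrypting a secret at rest with AES-256-GCM. The stored
// blob carries its own nonce and auth tag: iv(12) | tag(16) | ciphertext.
function encryptSecret(key: Buffer, plaintext: string): Buffer {
  const iv = randomBytes(12); // fresh nonce per secret
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

function decryptSecret(key: Buffer, blob: Buffer): string {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ct = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // a tampered blob throws during final()
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

GCM is a common choice here because it authenticates as well as encrypts: a modified database row fails to decrypt instead of silently yielding garbage.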
Built-in observability:
- Web analytics with funnels and visitor tracking
- Session replay (rrweb-based)
- Error tracking (Sentry-compatible — same SDK, change one line)
- Uptime monitoring with alerts (email, Slack, webhooks)
- Request-level logging (method, path, status, response time)
- Performance tracking (Web Vitals)
Infrastructure:
- Managed PostgreSQL, Redis, S3 (RustFS), and MongoDB
- KV store (`@temps-sdk/kv` — Redis-like API)
- Blob storage (`@temps-sdk/blob` — S3-compatible)
- Transactional email with DKIM verification
- Vulnerability scanning (Trivy-based)
- Status pages with incident management
Developer tools:
- MCP server for AI agents (`@temps-sdk/mcp`)
- React analytics SDK (`@temps-sdk/react-analytics`)
- Node.js SDK with Sentry-compatible error tracking (`@temps-sdk/node-sdk`)
- TypeScript CLI (`bunx @temps-sdk/cli`)
- Web dashboard with terminal, code editor, and session replay player
Infrastructure: Runs on any Linux VPS — AWS, GCP, Azure, DigitalOcean, Hetzner, your own hardware.
Who should NOT use Temps
I believe in being honest about trade-offs:
- If you're on Vercel's free tier — Vercel's free tier is genuinely great. Temps doesn't make sense until you're paying.
- If you need edge computing — Temps runs on your servers, not a global edge network. If sub-50ms latency from every continent matters, Vercel or Cloudflare is better.
- If you want zero ops — Temps is self-hosted. It's dramatically simpler than raw Docker or Kubernetes, but it's not zero-ops. You're still responsible for a server.
- If cost isn't a concern — If you have the budget, Vercel's ecosystem and managed infrastructure are hard to beat.
What I'd tell someone building an open-source tool solo
A few things I've learned that I wish someone had told me:
Compatibility beats originality. Making the error tracking Sentry-compatible (using their actual relay-event-schema types) instead of inventing a new protocol was the single best technical decision. Users can try it with a one-line change and zero risk.
The install experience IS the product. If your open-source tool takes more than 5 minutes to set up, most people will never try it. The one-liner install took an unreasonable amount of engineering effort — auto-detecting OS, architecture, setting up services, configuring PostgreSQL with TimescaleDB, initializing encryption keys — but it's worth it.
Don't build for everyone. Temps is for developers and small teams who are paying for hosting and want to own their infrastructure without the DevOps overhead. That's a specific group, and that's fine.
Build for the workflow, not just the feature. Adding the MCP server wasn't on my roadmap. But developers are increasingly working through AI assistants, and if Temps can be part of that workflow natively, it removes friction. Same logic applies to the CLI, the SDKs, the Sentry compatibility. Meet people where they are.
Try it
If any of this resonates:
```bash
curl -fsSL https://temps.sh/deploy.sh | sh
```
The CLI is free forever. The source code is on GitHub — 51 Rust crates, dual-licensed MIT/Apache 2.0.
If you run into issues or want to chat, the Discord is active and I'm usually around.
Temps isn't perfect. But I think the idea that your deployment platform should include observability, email, storage, and AI integration by default, at no extra cost, on infrastructure you control, is the right direction. And I'd rather build that in the open than behind a paywall.
If you've dealt with SaaS cost sprawl or have opinions on self-hosted vs managed, I'd love to hear your take in the comments.