A pull request sits open for four days because the backend change can't be reviewed without running it. Someone spins up a staging server manually. Someone else opens an ngrok tunnel from their laptop and forgets to close it. The client asks for a link and gets a screenshot instead.
We've been that engineer. We built PreviewDrop to stop being that engineer.
The core promise is simple: push a feature branch, get a live HTTPS URL posted to the pull request as a commit status check. The container runs, serves traffic, self-destructs when your TTL expires. No staging server to maintain. No ngrok tunnel to keep alive.
But the "simple" part took some doing. Here's the architecture that makes it work.
Why Most Preview Tools Don't Work for Backend Stacks
Vercel preview deployments are excellent — for Next.js. Render's PR previews work for web services that fit their build pipeline. But backend stacks are different. Django, Rails, Laravel, FastAPI, Spring Boot — these are long-lived processes in containers. They need a database connection, environment variables injected at runtime, background workers that might need to start, and WebSocket support that doesn't degrade.
A preview environment for a backend stack isn't a static deploy. It's a full container that needs to boot, bind a port, accept traffic, and stay healthy until the review is done. That requires infrastructure decisions most preview tools don't make — and it's where Docker becomes the foundation.
PreviewDrop has one rule: if it runs in Docker, it previews. No framework adapter. No serverless wrapper. No YAML manifest describing how to shape your app into a preview-shaped box. A working Dockerfile in the project root is the only prerequisite.
The Architecture: Docker, WebSockets, and a Next.js Control Plane
Three layers do the work.
Layer 1 — The container runtime. Every preview is a Docker container running on a managed worker node. When GitHub Actions triggers a build (via the workflow file that npx previewdrop setup writes), the image is pushed to a private registry. PreviewDrop pulls it, injects environment variables from the project's dashboard settings, and starts the container with the port your app exposes. The container gets a health check, a TTL, and an HTTPS reverse proxy route. It lives until the TTL expires or the PR merges — whichever comes first.
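For a concrete picture, here is a minimal sketch of that start step written against dockerode. The PreviewSpec shape, the label name, and the ephemeral port binding are illustrative assumptions, not PreviewDrop's actual worker code.

```typescript
import Docker from "dockerode";

// Sketch only: the spec shape and label name below are assumptions for illustration.
const docker = new Docker({ socketPath: "/var/run/docker.sock" });

interface PreviewSpec {
  image: string;                // registry image tagged with branch + commit SHA
  env: Record<string, string>;  // injected from the project's dashboard settings
  port: number;                 // the port the app exposes
  ttlSeconds: number;
}

async function startPreview(spec: PreviewSpec): Promise<string> {
  const container = await docker.createContainer({
    Image: spec.image,
    Env: Object.entries(spec.env).map(([key, value]) => `${key}=${value}`),
    ExposedPorts: { [`${spec.port}/tcp`]: {} },
    HostConfig: {
      // An empty HostPort asks the daemon for an ephemeral port; the reverse
      // proxy later maps the share URL to it.
      PortBindings: { [`${spec.port}/tcp`]: [{ HostPort: "" }] },
    },
    Labels: {
      // Store the expiry as a label so a cleanup job can find stale containers.
      "previewdrop.expires-at": String(Date.now() + spec.ttlSeconds * 1000),
    },
  });
  await container.start();
  return container.id;
}
```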
Layer 2 — WebSocket coordination. This is where it gets interesting. Building a container image and starting it takes time — anywhere from 30 seconds to a few minutes depending on the stack. During that window, the user is staring at a PR status check that says "pending." The dashboard needs to show real-time progress: image pull started, container starting, health check passing, URL assigned.
We use WebSockets to stream that pipeline of state transitions. The control plane pushes an event stream: build_registered → image_pulled → container_allocated → starting → healthy → url_assigned. The dashboard subscribes to the relevant project channel and renders each stage as it happens. The PR commit status updates once the container is healthy and the URL is assigned.
No polling. No spamming the API every 3 seconds waiting for a status field to flip. The WebSocket connection stays open for the life of the preview, pushing state changes and container logs. If the container exits unexpectedly, the event stream surfaces the exit code and the last log lines to the dashboard before the status goes red.
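On the client side, the subscription is just a WebSocket and a message handler. The endpoint URL, channel naming, and event payloads below are hypothetical; they sketch the shape of the stream, not the actual PreviewDrop protocol.

```typescript
// Hypothetical event payloads and endpoint; the real protocol may differ.
type Stage =
  | "build_registered" | "image_pulled" | "container_allocated"
  | "starting" | "healthy" | "url_assigned";

type DeployEvent =
  | { type: Stage; previewId: string; url?: string }
  | { type: "log"; previewId: string; line: string }
  | { type: "exited"; previewId: string; exitCode: number; lastLogs: string[] };

function subscribeToProject(projectId: string, onEvent: (e: DeployEvent) => void): WebSocket {
  const ws = new WebSocket(`wss://api.previewdrop.dev/projects/${projectId}/events`);
  ws.onmessage = (msg) => onEvent(JSON.parse(String(msg.data)) as DeployEvent);
  // The feed stays open for the life of the preview; reconnect if it drops.
  ws.onclose = () => setTimeout(() => subscribeToProject(projectId, onEvent), 2_000);
  return ws;
}

subscribeToProject("proj_123", (event) => {
  if (event.type === "url_assigned" && event.url) console.log(`Preview live at ${event.url}`);
  if (event.type === "exited") console.error(`Container exited (${event.exitCode})`, event.lastLogs);
});
```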
Layer 3 — The Next.js dashboard. This is the surface area users actually touch. The dashboard is a Next.js app that handles authentication (GitHub OAuth), project management, environment variable configuration, deployment logs, and the live preview list. It consumes the WebSocket feed from layer 2 and renders the deployment pipeline in real time.
Next.js was the right choice here because the app is mostly server-rendered pages with some client-side reactivity for the deployment pipeline and log viewer. The API routes in /api/v1/ expose a programmatic interface for the CLI and CI workflows. The dashboard isn't a separate service — it's the same Next.js app, which keeps the architecture simpler than splitting into a separate SPA.
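As a rough illustration of what one of those /api/v1/ routes might look like with the App Router, here is a hypothetical deployment-trigger handler. The file path, payload fields, secret name, and enqueueDeployment helper are assumptions, not the real implementation.

```typescript
// Hypothetical file: app/api/v1/deployments/route.ts (App Router route handler).
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  // CI authenticates with the project secret that `npx previewdrop setup` prints.
  const auth = req.headers.get("authorization");
  if (auth !== `Bearer ${process.env.PREVIEWDROP_PROJECT_SECRET}`) {
    return NextResponse.json({ error: "unauthorized" }, { status: 401 });
  }

  const { project, branch, commit, image } = await req.json();

  // Hand the build off to the control plane; progress is reported over the
  // WebSocket feed rather than by polling this endpoint.
  const previewId = await enqueueDeployment({ project, branch, commit, image });
  return NextResponse.json({ previewId }, { status: 202 });
}

// Placeholder for the real scheduling logic on the control plane.
async function enqueueDeployment(spec: {
  project: string;
  branch: string;
  commit: string;
  image: string;
}): Promise<string> {
  return `prev_${spec.commit.slice(0, 7)}`;
}
```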
How the Share Link Actually Works
When a preview reaches the healthy state, the control plane generates a shareable URL with a unique token. This is the link posted to the PR commit status. It's also the link you can send to a client, a PM, or a stakeholder who doesn't have a GitHub account.
The share link is a reverse proxy route through the control plane. A request hits https://{token}.previewdrop.dev (or a subpath on the main domain, depending on configuration), the proxy resolves the token to the correct container on the correct worker node, and the request is forwarded. TLS terminates at the edge. The container never sees the proxy — it just gets HTTP traffic on the port it bound.
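A stripped-down version of that routing layer fits in a few dozen lines of Node. The route table, host parsing, and target addresses below are assumptions; the real proxy also handles TLS at the edge and a shared route store, but this sketches the idea, including WebSocket pass-through.

```typescript
import http from "node:http";
import httpProxy from "http-proxy";

// Hypothetical in-memory route table; in practice this would live in a shared store.
const routes = new Map<string, string>([
  ["a1b2c3", "http://10.0.4.17:49162"], // token -> worker node + container's host port
]);

const proxy = httpProxy.createProxyServer({ ws: true });

const server = http.createServer((req, res) => {
  // {token}.previewdrop.dev: the token is the first host label.
  const token = (req.headers.host ?? "").split(".")[0];
  const target = routes.get(token);
  if (!target) {
    res.writeHead(404);
    res.end("Preview not found or expired");
    return;
  }
  proxy.web(req, res, { target });
});

// Forward WebSocket upgrade requests to the container as well.
server.on("upgrade", (req, socket, head) => {
  const token = (req.headers.host ?? "").split(".")[0];
  const target = routes.get(token);
  if (target) proxy.ws(req, socket, head, { target });
  else socket.destroy();
});

server.listen(8080);
```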
The link is password-protectable (enabled in the dashboard per preview). The TTL is configurable per project — 1 hour minimum, up to 168 hours (one week) on the Team plan. When the TTL expires, the container is stopped, the route is removed, and the token is invalidated. No zombie containers. No URLs that still work six months later and confuse someone looking at a stale Google index entry.
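The expiry sweep can be as simple as a periodic job that checks a TTL label, in the spirit of the worker sketch above. The label name and interval are assumptions, and the real system also removes the route and invalidates the token at the same point.

```typescript
import Docker from "dockerode";

const docker = new Docker();

// Periodic sweep: stop and remove any preview container whose TTL has passed.
// The "previewdrop.expires-at" label matches the earlier sketch and is an assumption.
async function reapExpiredPreviews(): Promise<void> {
  const containers = await docker.listContainers({ all: true });
  const now = Date.now();
  for (const info of containers) {
    const expiresAt = Number(info.Labels["previewdrop.expires-at"] ?? Infinity);
    if (now < expiresAt) continue; // not a preview, or not expired yet
    const container = docker.getContainer(info.Id);
    if (info.State === "running") await container.stop();
    await container.remove();
    // The route removal and token invalidation would happen here as well.
  }
}

setInterval(() => reapExpiredPreviews().catch(console.error), 60_000);
```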
This is the feature that replaces the "staging server shared across PRs" workflow. Instead of one staging URL that everyone steps on, every PR gets its own isolated container and its own URL. Review one branch at a time, share a link that can't accidentally show the wrong branch's code, and stop coordinating deploys to staging.
What Happens When a Container Dies
Containers fail. Out-of-memory kills. Unhandled exceptions. Missing environment variables that cause a crash-loop. The architecture needs to handle this gracefully.
The worker node monitors container health via a periodic HTTP check to the app's exposed port. If the check fails three consecutive times, the container is marked unhealthy. The WebSocket connection pushes an unhealthy event with the exit code (if the container exited) or the last error response (if it's still running but broken). The dashboard shows the status change and surfaces the most recent log lines.
If the container is crash-looping — starts, fails, restarts, fails — the worker node detects the pattern and stops restarting after the third attempt. The dashboard shows a crashed state with the failure reason extracted from the container exit log. No infinite restart loops burning CPU on the worker node.
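Sketched in code, the two rules above (three failed checks marks a preview unhealthy, three restart attempts and then give up) might look like this. The event names and the emit/restart helpers are illustrative only.

```typescript
// Worker-side health loop sketch; thresholds come from the prose above.
interface PreviewState {
  url: string;          // e.g. http://127.0.0.1:49162 on the worker node (assumed)
  failedChecks: number;
  restarts: number;
}

async function healthCheck(
  preview: PreviewState,
  emit: (event: string, detail?: unknown) => void
): Promise<void> {
  try {
    const res = await fetch(preview.url, { signal: AbortSignal.timeout(5_000) });
    if (!res.ok) throw new Error(`status ${res.status}`);
    preview.failedChecks = 0; // healthy response resets the counter
  } catch (err) {
    preview.failedChecks += 1;
    if (preview.failedChecks >= 3) emit("unhealthy", { reason: String(err) });
  }
}

function onContainerExit(
  preview: PreviewState,
  exitCode: number,
  emit: (event: string, detail?: unknown) => void,
  restart: () => void
): void {
  preview.restarts += 1;
  if (preview.restarts > 3) {
    // Crash loop: surface the failure instead of burning CPU on endless retries.
    emit("crashed", { exitCode });
    return;
  }
  restart();
}
```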
Environment variable misconfiguration is the most common cause. The dashboard's "Scan .env.example" feature detects which variables your app expects and flags any that aren't set in the preview environment. If a preview is returning a 500, a missing variable is usually the culprit.
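A simplified version of that scan is just a diff between the names in .env.example and the variables configured for the preview. The parsing below is deliberately naive and only meant to show the idea.

```typescript
import { readFileSync } from "node:fs";

// Parse variable names out of .env.example and report any that are not set in
// the preview's configured environment. Comment and quoting rules are simplified.
function findMissingEnvVars(envExamplePath: string, configured: Record<string, string>): string[] {
  const expected = readFileSync(envExamplePath, "utf8")
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0 && !line.startsWith("#"))
    .map((line) => line.split("=")[0].trim());

  return expected.filter((name) => !(name in configured));
}

// Example: flags DATABASE_URL if the dashboard only has SECRET_KEY configured.
const missing = findMissingEnvVars(".env.example", { SECRET_KEY: "dev" });
if (missing.length > 0) {
  console.warn(`Preview may 500 at boot; missing env vars: ${missing.join(", ")}`);
}
```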
The Build Pipeline in Practice
Here's what a push looks like end to end.
A developer pushes a feature branch to GitHub. GitHub Actions picks up the previewdrop.yml workflow, builds the Docker image, tags it with the branch name and commit SHA, and pushes it to the PreviewDrop registry. The workflow triggers the PreviewDrop API: "new image available for project X, branch Y, commit Z."
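The notification itself is a small authenticated POST. This sketch pairs with the hypothetical route handler shown earlier; the endpoint, secret name, and payload fields are assumptions, while GITHUB_REF_NAME and GITHUB_SHA are standard GitHub Actions variables.

```typescript
// Sketch of the "new image available" call the CI workflow might make after the
// image push. Endpoint and env var names other than GitHub's are assumptions.
async function notifyPreviewDrop(): Promise<void> {
  const res = await fetch("https://api.previewdrop.dev/api/v1/deployments", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${process.env.PREVIEWDROP_SECRET}`,
    },
    body: JSON.stringify({
      project: process.env.PREVIEWDROP_PROJECT,   // project X
      branch: process.env.GITHUB_REF_NAME,        // branch Y (set by GitHub Actions)
      commit: process.env.GITHUB_SHA,             // commit Z (set by GitHub Actions)
      image: `${process.env.REGISTRY_IMAGE}:${process.env.GITHUB_SHA}`,
    }),
  });
  if (!res.ok) throw new Error(`PreviewDrop API returned ${res.status}`);
}

notifyPreviewDrop().catch((err) => {
  console.error(err);
  process.exit(1);
});
```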
The control plane assigns a worker node, pulls the image, starts the container with the project's environment variables, and begins the health check loop. The WebSocket feed pushes state transitions to the dashboard. When the container passes its health check, the URL is generated, the route is activated, and the PR commit status updates to "success" with the preview URL attached.
The whole pipeline from push to live URL typically finishes in 45–90 seconds for most Django, Rails, and Laravel apps with a reasonably sized Docker image. Cold starts (the first deploy on a new worker node) add another 20–40 seconds for the image pull. Subsequent builds on the same worker node are faster because the base layers are cached.
The PR stays open for review. Feedback comes in. The developer pushes a new commit. The pipeline runs again — new image, new container, new URL, same commit status update. The old container self-destructs on the next TTL cycle.
The Pricing Model That Made Architecture Decisions Possible
Usage-based billing creates design pressure. If you bill per container-second, every inefficiency becomes revenue: longer build times, longer health checks, and skipped auto-cleanup all add billable seconds. You don't want containers to self-destruct quickly, because every second they run is money.
Flat pricing removes that incentive. PreviewDrop Starter is $19/month per workspace. That covers 5 concurrent previews, up to 3 team members, and a 4-hour TTL per container. The same $19 whether your team has a quiet week with two PRs or a crunch sprint with ten. The same $19 whether previews run for one hour or the full four.
When billing doesn't punish usage, teams use the product more — more reviews, more client shares, fewer coordination bottlenecks. That's the behavior you want. It's also the behavior per-second pricing accidentally discourages.
The free tier is two concurrent previews and three projects. No credit card required. It's enough to evaluate whether the workflow fits your team.
What's Launching May 20th
PreviewDrop is launching on Product Hunt on May 20, 2026. The free tier ships that day — two concurrent previews, three projects, no credit card. The setup is the same one-command flow: npx previewdrop setup writes the workflow file and prints the secret.
If your backend runs in Docker and your team has more than one open PR right now, try the free tier when we launch. You'll have a live preview URL on your next push.
Follow the launch at previewdrop.dev. See you May 20th.