Deploy a production-ready TanStack Start application with PostgreSQL to your own server in minutes. No Kubernetes, no vendor lock-in, just Docker and simple configuration.
What is Haloy?
Haloy is an open-source, MIT-licensed deployment tool that makes it simple to deploy Docker-based applications to your own servers. It handles SSL certificates, routing, and container orchestration without the complexity of Kubernetes or cloud platforms.
Complete source code: github.com/haloydev/examples/tanstack-start-postgres
What We're Building
A full-stack todo application using:
- TanStack Start - React meta-framework with server functions
- PostgreSQL - Production database
- Drizzle ORM - Type-safe database queries
- Haloy - Simple deployment to your own server
Prerequisites
Before starting, you'll need:
- Node.js 20+
- A Linux server (VPS or dedicated)
- A domain or subdomain
- Basic React and TypeScript knowledge
This guide uses pnpm, but npm works too. Just replace pnpm add with npm install, and script commands like pnpm dev with npm run dev.
Project Setup
Initialize the Project
mkdir my-tanstack-app
cd my-tanstack-app
pnpm init
Configure TypeScript
Create tsconfig.json:
{
  "compilerOptions": {
    "jsx": "react-jsx",
    "moduleResolution": "Bundler",
    "module": "ESNext",
    "target": "ES2022",
    "skipLibCheck": true,
    "strictNullChecks": true
  }
}
Install Dependencies
# TanStack Start and React
pnpm add @tanstack/react-start @tanstack/react-router react react-dom nitro
# Dev dependencies
pnpm add -D vite @vitejs/plugin-react typescript @types/react @types/react-dom @types/node vite-tsconfig-paths
# Database
pnpm add drizzle-orm pg dotenv
pnpm add -D drizzle-kit @types/pg
Update package.json
Add these fields to your package.json:
{
  "type": "module",
  "scripts": {
    "dev": "vite dev",
    "build": "vite build",
    "start": "node .output/server/index.mjs",
    "db:push": "drizzle-kit push",
    "db:studio": "drizzle-kit studio"
  }
}
Important: The "type": "module" field is crucial. Without it, Node.js treats files as CommonJS, causing errors. TanStack Start requires ES modules.
Create Vite Configuration
Create vite.config.ts:
import { defineConfig } from "vite";
import { nitro } from "nitro/vite";
import tsConfigPaths from "vite-tsconfig-paths";
import { tanstackStart } from "@tanstack/react-start/plugin/vite";
import viteReact from "@vitejs/plugin-react";
export default defineConfig({
  server: {
    port: 3000,
  },
  plugins: [
    tsConfigPaths(),
    tanstackStart(),
    nitro(),
    viteReact(), // Must come after tanstackStart
  ],
  nitro: {},
});
TanStack Start uses Nitro as its server engine. The default Node.js preset works perfectly with Haloy - no extra configuration needed.
Database Setup
Configure Drizzle
Create drizzle.config.ts:
import { config } from "dotenv";
import { defineConfig } from "drizzle-kit";
import { getDatabaseUrl } from "./src/db/database-url";
config();
const databaseUrl = getDatabaseUrl();
export default defineConfig({
  out: "./drizzle",
  schema: "./src/db/schema.ts",
  dialect: "postgresql",
  dbCredentials: {
    url: databaseUrl,
  },
});
Create Database Client
Create src/db/index.ts:
import "dotenv/config";
import { drizzle } from "drizzle-orm/node-postgres";
import { getDatabaseUrl } from "./database-url";
const databaseUrl = getDatabaseUrl();
const db = drizzle(databaseUrl);
export { db };
Define Schema
Create src/db/schema.ts:
import { integer, pgTable, timestamp, varchar } from "drizzle-orm/pg-core";
export const todos = pgTable("todos", {
  id: integer().primaryKey().generatedAlwaysAsIdentity(),
  title: varchar({ length: 255 }).notNull(),
  createdAt: timestamp({ mode: "date" }).defaultNow(),
});
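For reference, this schema corresponds to roughly the following DDL — a sketch of what drizzle-kit push will generate (exact output may differ by Drizzle version):

```sql
CREATE TABLE "todos" (
  "id" integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
  "title" varchar(255) NOT NULL,
  "createdAt" timestamp DEFAULT now()
);
```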
Database Connection Helper
Create src/db/database-url.ts:
export function getDatabaseUrl() {
  const postgresUser = process.env.POSTGRES_USER;
  if (!postgresUser) {
    throw new Error("POSTGRES_USER environment variable not found");
  }
  const postgresPassword = process.env.POSTGRES_PASSWORD;
  if (!postgresPassword) {
    throw new Error("POSTGRES_PASSWORD environment variable not found");
  }
  const postgresDb = process.env.POSTGRES_DB;
  if (!postgresDb) {
    throw new Error("POSTGRES_DB environment variable not found");
  }
  // Use 'postgres' hostname in production, localhost in development
  const host = process.env.NODE_ENV === "production" ? "postgres" : "localhost";
  return `postgres://${postgresUser}:${postgresPassword}@${host}:5432/${postgresDb}`;
}
This helper automatically switches between localhost (development) and postgres (production hostname) based on NODE_ENV.
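The switching behavior is easy to check in isolation. A standalone sketch that duplicates the helper's logic so it runs without the project files (buildDatabaseUrl is a hypothetical name used here so the env can be passed in explicitly):

```typescript
// Standalone sketch of getDatabaseUrl()'s hostname switching.
// buildDatabaseUrl is illustrative; the logic mirrors the helper above.
function buildDatabaseUrl(env: Record<string, string | undefined>): string {
  for (const key of ["POSTGRES_USER", "POSTGRES_PASSWORD", "POSTGRES_DB"]) {
    if (!env[key]) {
      throw new Error(`${key} environment variable not found`);
    }
  }
  const host = env.NODE_ENV === "production" ? "postgres" : "localhost";
  return `postgres://${env.POSTGRES_USER}:${env.POSTGRES_PASSWORD}@${host}:5432/${env.POSTGRES_DB}`;
}

const base = {
  POSTGRES_USER: "postgres",
  POSTGRES_PASSWORD: "postgres",
  POSTGRES_DB: "todo_app",
};

console.log(buildDatabaseUrl(base));
// → postgres://postgres:postgres@localhost:5432/todo_app
console.log(buildDatabaseUrl({ ...base, NODE_ENV: "production" }));
// → postgres://postgres:postgres@postgres:5432/todo_app
```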
Environment Variables
Create .env for local development:
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=todo_app
Local Database (Optional)
Run PostgreSQL locally using Docker:
docker run --name postgres-dev \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=todo_app \
  -p 5432:5432 \
  -d postgres:18
To stop later:
docker stop postgres-dev
docker rm postgres-dev
Application Code
Create Router
Create src/router.tsx:
import { createRouter } from "@tanstack/react-router";
import { routeTree } from "./routeTree.gen";
export function getRouter() {
  const router = createRouter({
    routeTree,
    scrollRestoration: true,
    defaultNotFoundComponent: () => <div>404 - not found</div>,
  });
  return router;
}
Note: The ./routeTree.gen import will show a TypeScript error until you run the dev server. TanStack Start generates this file automatically.
Root Route
Create src/routes/__root.tsx:
/// <reference types="vite/client" />
import {
createRootRoute,
HeadContent,
Outlet,
Scripts,
} from "@tanstack/react-router";
import type { ReactNode } from "react";
export const Route = createRootRoute({
  head: () => ({
    meta: [
      { charSet: "utf-8" },
      { name: "viewport", content: "width=device-width, initial-scale=1" },
      { title: "TanStack Start Starter" },
    ],
  }),
  component: RootComponent,
});

function RootComponent() {
  return (
    <RootDocument>
      <Outlet />
    </RootDocument>
  );
}

function RootDocument({ children }: Readonly<{ children: ReactNode }>) {
  return (
    <html lang="en">
      <head>
        <HeadContent />
      </head>
      <body>
        {children}
        <Scripts />
      </body>
    </html>
  );
}
Index Route with Server Functions
Create src/routes/index.tsx:
import { createFileRoute, useRouter } from "@tanstack/react-router";
import { createServerFn } from "@tanstack/react-start";
import { eq } from "drizzle-orm";
import { db } from "../db";
import { todos } from "../db/schema";
const getTodos = createServerFn({
  method: "GET",
}).handler(async () => await db.select().from(todos));

const addTodo = createServerFn({ method: "POST" })
  .inputValidator((data: FormData) => {
    if (!(data instanceof FormData)) {
      throw new Error("Expected FormData");
    }
    return {
      title: data.get("title")?.toString() || "",
    };
  })
  .handler(async ({ data }) => {
    await db.insert(todos).values({ title: data.title });
  });

const deleteTodo = createServerFn({ method: "POST" })
  .inputValidator((data: number) => data)
  .handler(async ({ data }) => {
    await db.delete(todos).where(eq(todos.id, data));
  });

export const Route = createFileRoute("/")({
  component: RouteComponent,
  loader: async () => await getTodos(),
});

function RouteComponent() {
  const router = useRouter();
  const todos = Route.useLoaderData();
  return (
    <div>
      <ul>
        {todos.map((todo) => (
          <li key={todo.id}>
            {todo.title}
            <button
              type="button"
              onClick={async () => {
                await deleteTodo({ data: todo.id });
                router.invalidate();
              }}
            >
              X
            </button>
          </li>
        ))}
      </ul>
      <h2>Add todo</h2>
      <form
        onSubmit={async (e) => {
          e.preventDefault();
          const form = e.currentTarget;
          const data = new FormData(form);
          await addTodo({ data });
          router.invalidate();
          form.reset();
        }}
      >
        <input name="title" placeholder="Enter a new todo..." />
        <button type="submit">Add</button>
      </form>
    </div>
  );
}
Health Check Route
Create src/routes/health.tsx:
import { createFileRoute } from "@tanstack/react-router";
export const Route = createFileRoute("/health")({
  server: {
    handlers: {
      GET: async () => {
        return Response.json({ status: "ok" });
      },
    },
  },
});
This endpoint responds without querying the database, ensuring fast health checks.
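A health probe is just a GET request that expects a 200. Here's a self-contained sketch (Node stdlib only; the stub server stands in for the real app) of the same status-code check the Dockerfile HEALTHCHECK in this guide performs:

```typescript
import http from "node:http";
import type { AddressInfo } from "node:net";

// Stub of the /health route (stand-in for the real app).
const server = http.createServer((req, res) => {
  if (req.url === "/health") {
    res.setHeader("content-type", "application/json");
    res.end(JSON.stringify({ status: "ok" }));
  } else {
    res.statusCode = 404;
    res.end();
  }
});

// Probe it the way the Docker HEALTHCHECK does: status 200 means healthy.
function probe(port: number): Promise<number> {
  return new Promise((resolve, reject) => {
    http
      .get(`http://localhost:${port}/health`, (res) => {
        res.resume();
        resolve(res.statusCode ?? 0);
      })
      .on("error", reject);
  });
}

server.listen(0, async () => {
  const { port } = server.address() as AddressInfo;
  const status = await probe(port);
  console.log("health status:", status); // 200 when healthy
  server.close();
});
```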
Docker Configuration
Dockerfile
Create Dockerfile:
FROM node:24-slim AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
COPY . /app
WORKDIR /app
FROM base AS prod-deps
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --prod --frozen-lockfile
FROM base AS build
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --frozen-lockfile
RUN pnpm run build
FROM base
COPY --from=prod-deps /app/node_modules /app/node_modules
COPY --from=build /app/.output /app/.output
HEALTHCHECK --interval=10s --timeout=3s --start-period=10s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
CMD ["pnpm", "start"]
Key features:
- Multi-stage builds for smaller images
- Built-in health check using the /health endpoint
- Production-optimized dependencies
.dockerignore
Create .dockerignore:
node_modules
.git
.gitignore
*.md
dist
.DS_Store
Haloy Configuration
Create haloy.yml:
# Global configuration
server: your-server.haloy.dev

env:
  - name: POSTGRES_USER
    value: postgres
  - name: POSTGRES_PASSWORD
    value: "postgres"
  - name: POSTGRES_DB
    value: "todo_app"

targets:
  # Database Service
  postgres:
    preset: database
    image:
      repository: postgres:18
    port: 5432
    volumes:
      - postgres-data:/var/lib/postgresql

  # Application Service
  tanstack-start-postgres:
    domains:
      - domain: my-app.example.com
    port: 3000
    env:
      - name: NODE_ENV
        value: production
Replace these values:
- your-server.haloy.dev - Your actual server domain
- my-app.example.com - Your application domain
- POSTGRES_PASSWORD - A strong password for production
Configuration Explained
We define two targets:
- postgres:
  - Official PostgreSQL 18 image
  - Persistent storage with a named volume
  - Accessible to other containers via the hostname postgres
- tanstack-start-postgres:
  - Your application code
  - Connects to the database using environment variables
  - NODE_ENV=production ensures the correct database hostname
The named volume ensures data persists across deployments and restarts.
Deployment
Install Haloy
First, install Haloy on your local machine and set up your server:
# Install Haloy CLI
curl -fsSL https://haloy.dev/install.sh | sh
# Set up your server
haloy server setup
Follow the prompts to configure your server. See the Haloy quickstart for detailed instructions.
Test Locally
Before deploying, verify everything works:
# Push schema to local database
pnpm db:push
# Start development server
pnpm dev
Visit http://localhost:3000 and test the todo functionality.
Deploy Database
Deploy PostgreSQL first:
haloy deploy -t postgres
Wait for the deployment to complete.
Note: If you're running a local PostgreSQL container for testing, stop it first:
docker stop postgres-dev
Push Schema to Production
Use Haloy's tunnel feature to connect to the production database:
# Terminal 1: Open tunnel
haloy tunnel 5432 -t postgres
In another terminal, push your schema:
# Terminal 2: Push schema
pnpm db:push
Drizzle connects to localhost:5432 (which tunnels to production) and applies your schema.
Deploy Application
Now deploy your application:
haloy deploy -t tanstack-start-postgres
Verify Deployment
# Check status
haloy status --all
# View logs
haloy logs -t tanstack-start-postgres
Your application is now live with automatic HTTPS!
Working with Production Database
The tunnel feature is useful for ongoing database management.
Inspect Data with Drizzle Studio
# Terminal 1: Open tunnel
haloy tunnel 5432 -t postgres
# Terminal 2: Start Drizzle Studio
pnpm db:studio
Open https://local.drizzle.studio to browse your production data visually.
Update Schema
When you modify src/db/schema.ts, push changes to production:
# Terminal 1: Open tunnel (if not already open)
haloy tunnel 5432 -t postgres
# Terminal 2: Push changes
pnpm db:push
Drizzle shows a diff and prompts for confirmation before applying changes.
Alternative: Migration-Based Workflow
The drizzle-kit push approach is ideal for solo developers who want to move fast. For teams or projects requiring controlled change management, consider using Drizzle Migrations.
Migrations capture schema changes as versioned SQL files that can be reviewed in pull requests and applied consistently across environments.
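As a sketch, the push scripts above could be replaced with generate/migrate scripts (the script names are illustrative; drizzle-kit generate and drizzle-kit migrate are the standard commands):

```json
{
  "scripts": {
    "db:generate": "drizzle-kit generate",
    "db:migrate": "drizzle-kit migrate"
  }
}
```

drizzle-kit generate writes a versioned SQL file under ./drizzle from your schema changes; drizzle-kit migrate applies any unapplied files, so the same command works against the production tunnel.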
Why Haloy?
Haloy is designed for developers who want:
- ✅ Simple deployments without Kubernetes complexity
- ✅ Self-hosted, Docker-based infrastructure
- ✅ Clean dev/staging/production separation
- ✅ Fast "build → ship → run" workflow
- ✅ Zero vendor lock-in
It's perfect for:
- Indie developers
- Small teams
- Self-hosted SaaS
- API services
- Internal tools
Next Steps
Questions or feedback? Drop a comment below or open an issue on GitHub!