DEV Community

ANKUSH CHOUDHARY JOHAL

Posted on • Originally published at johal.in

Build a Production-Ready Deno 2.0 REST API with Oak 12.0 and PostgreSQL 16.0

Introduction

Deno 2.0 brings stability, backwards compatibility, and production-grade features, while Oak 12.0 remains the go-to middleware framework for Deno HTTP servers. Paired with PostgreSQL 16.0’s performance improvements and JSON enhancements, this stack is ideal for scalable REST APIs. This tutorial walks through building a CRUD API for a task management app, with production-ready additions like connection pooling, validation, error handling, and Docker deployment.

Prerequisites

  • Deno 2.0+ installed (verify with deno --version)
  • PostgreSQL 16.0+ running locally or via a cloud provider
  • Basic knowledge of REST APIs, SQL, and TypeScript
  • Postman or curl for testing endpoints

Project Initialization

Create a new directory for your project and initialize a Deno project:

mkdir deno-oak-postgres-api && cd deno-oak-postgres-api
deno init

Next, wire up Oak 12.0 and the PostgreSQL driver (deno-postgres 0.17+ is compatible with Deno 2 and PostgreSQL 16). Both are distributed via deno.land/x rather than JSR, and Deno 2's deno add only handles jsr: and npm: specifiers, so register the bare specifiers in the imports map of deno.json instead:

{
  "imports": {
    "oak": "https://deno.land/x/oak@v12.6.1/mod.ts",
    "@denodrivers/postgres": "https://deno.land/x/postgres@v0.17.0/mod.ts"
  }
}

Create a .env file for environment variables. To load them we'll use the standard library's dotenv module, mapped in deno.json as "dotenv": "https://deno.land/std@0.224.0/dotenv/mod.ts":

DENO_ENV=development
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your_password
POSTGRES_DB=tasks_db
PORT=8000

PostgreSQL 16 Setup

Connect to your PostgreSQL instance and create the database and tasks table:

CREATE DATABASE tasks_db;
\c tasks_db;

CREATE TABLE tasks (
  id SERIAL PRIMARY KEY,
  title VARCHAR(255) NOT NULL,
  description TEXT,
  completed BOOLEAN DEFAULT false,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Store per-task metadata as JSONB (PostgreSQL 16 ships faster JSON processing
-- and additional SQL/JSON functions) and index the common completed filter
ALTER TABLE tasks ADD COLUMN metadata JSONB DEFAULT '{}'::jsonb;
CREATE INDEX idx_tasks_completed ON tasks(completed);

We’ll use connection pooling for production: the @denodrivers/postgres pool manages multiple connections efficiently.
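To make the checkout/release idea concrete before wiring up the real driver, here is a toy in-memory pool in plain TypeScript. This is illustrative only, not the deno-postgres API: it just shows why a pool keeps the number of physical connections bounded while serving many requests.

```typescript
// Toy illustration of connection pooling (NOT the deno-postgres API):
// connections are created lazily up to a fixed size, then reused via an
// explicit checkout/release cycle instead of being opened per request.
class ToyPool<T> {
  #idle: T[] = [];
  #created = 0;

  constructor(private factory: () => T, private maxSize: number) {}

  connect(): T {
    const idle = this.#idle.pop();
    if (idle !== undefined) return idle; // reuse an existing connection
    if (this.#created >= this.maxSize) throw new Error("pool exhausted");
    this.#created++;
    return this.factory();
  }

  release(conn: T): void {
    this.#idle.push(conn); // hand the connection back for reuse
  }

  get openConnections(): number {
    return this.#created;
  }
}

let opened = 0;
const pool = new ToyPool(() => ({ id: ++opened }), 2);

const first = pool.connect(); // opens connection 1
pool.release(first);          // returns it to the idle set
const second = pool.connect(); // reuses connection 1, opens nothing new

console.log(second.id, pool.openConnections); // 1 1
```

The real driver follows the same shape, except connect() is asynchronous and the "connection" is a live PostgreSQL session.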

Configure Database Connection

Create a db.ts file to set up the PostgreSQL pool:

import { Pool } from "@denodrivers/postgres";
import { load } from "dotenv";

await load({ export: true });

const isProduction = Deno.env.get("DENO_ENV") === "production";

// deno-postgres takes the pool size as the second constructor argument,
// and TLS is configured through the tls option (there is no ssl field).
// caCertificates expects PEM text, so we read the file POSTGRES_SSL_CA points at.
const pool = new Pool({
  hostname: Deno.env.get("POSTGRES_HOST")!,
  port: Number(Deno.env.get("POSTGRES_PORT")),
  user: Deno.env.get("POSTGRES_USER")!,
  password: Deno.env.get("POSTGRES_PASSWORD")!,
  database: Deno.env.get("POSTGRES_DB")!,
  tls: isProduction
    ? {
        enabled: true,
        enforce: true,
        caCertificates: [Deno.readTextFileSync(Deno.env.get("POSTGRES_SSL_CA")!)],
      }
    : { enabled: false },
}, isProduction ? 20 : 5);

export default pool;

Set Up Oak 12 Middleware

Create a main.ts file to initialize the Oak application with production-grade middleware:

import { Application } from "oak";
import { oakCors } from "cors";
import { rateLimiter } from "oak-rate-limiter";
import router from "./routes.ts";
import { errorHandler } from "./middleware/error_handler.ts";

const app = new Application();
const port = Number(Deno.env.get("PORT")) || 8000;

// Production middleware stack: register the error handler first so it
// wraps everything downstream, including CORS and the rate limiter
app.use(errorHandler);
app.use(oakCors({ origin: Deno.env.get("ALLOWED_ORIGINS")?.split(",") || "*" }));
app.use(rateLimiter({
  windowMs: 15 * 60 * 1000,
  max: 100,
  message: "Too many requests, please try again later.",
}));
app.use(router.routes());
app.use(router.allowedMethods());

// Log once the server is actually listening; app.listen resolves only
// when the server shuts down, so await it last
app.addEventListener("listen", ({ port }) => {
  console.log(`API running on http://localhost:${port}`);
});
await app.listen({ port });

These middleware modules also live on deno.land/x, so add import-map entries for them as well, e.g. "cors": "https://deno.land/x/cors@v1.2.2/mod.ts" (which exports oakCors). For oak-rate-limiter, point the specifier at the rate-limiting middleware you've chosen; the options shown above follow the common windowMs/max convention.

Create API Routes (CRUD for Tasks)

Create a routes.ts file with CRUD endpoints:

import { Router } from "oak";
import { getTasks, getTask, createTask, updateTask, deleteTask } from "./handlers/task_handlers.ts";

const router = new Router();

router.get("/api/tasks", getTasks)
  .get("/api/tasks/:id", getTask)
  .post("/api/tasks", createTask)
  .put("/api/tasks/:id", updateTask)
  .delete("/api/tasks/:id", deleteTask);

export default router;

Implement Task Handlers

Create handlers/task_handlers.ts with database logic and validation:

import { RouterContext } from "oak";
import { PoolClient } from "@denodrivers/postgres";
import pool from "../db.ts";
import { validateTask } from "../validators/task_validator.ts";

// deno-postgres pools hand out clients explicitly: check one out, run the
// query, and always release it back to the pool, even on error.
async function withClient<T>(fn: (client: PoolClient) => Promise<T>): Promise<T> {
  const client = await pool.connect();
  try {
    return await fn(client);
  } finally {
    client.release();
  }
}

export const getTasks = async (ctx: RouterContext<"/api/tasks">) => {
  const { rows } = await withClient((c) =>
    c.queryObject("SELECT * FROM tasks ORDER BY created_at DESC")
  );
  ctx.response.body = rows;
};

export const getTask = async (ctx: RouterContext<"/api/tasks/:id">) => {
  const { id } = ctx.params; // RouterContext (not plain Context) carries params
  const { rows } = await withClient((c) =>
    c.queryObject("SELECT * FROM tasks WHERE id = $1", [id])
  );
  if (rows.length === 0) {
    ctx.response.status = 404;
    ctx.response.body = { error: "Task not found" };
    return;
  }
  ctx.response.body = rows[0];
};

export const createTask = async (ctx: RouterContext<"/api/tasks">) => {
  const body = await ctx.request.body({ type: "json" }).value;
  const validationError = validateTask(body);
  if (validationError) {
    ctx.response.status = 400;
    ctx.response.body = { error: validationError };
    return;
  }
  const { title, description, metadata } = body;
  const { rows } = await withClient((c) =>
    c.queryObject(
      "INSERT INTO tasks (title, description, metadata) VALUES ($1, $2, $3) RETURNING *",
      [title, description ?? null, JSON.stringify(metadata ?? {})],
    )
  );
  ctx.response.status = 201;
  ctx.response.body = rows[0];
};

export const updateTask = async (ctx: RouterContext<"/api/tasks/:id">) => {
  const { id } = ctx.params;
  const body = await ctx.request.body({ type: "json" }).value;
  const validationError = validateTask(body, true);
  if (validationError) {
    ctx.response.status = 400;
    ctx.response.body = { error: validationError };
    return;
  }
  const { title, description, completed, metadata } = body;
  const { rows } = await withClient((c) =>
    c.queryObject(
      `UPDATE tasks
       SET title = COALESCE($1, title),
           description = COALESCE($2, description),
           completed = COALESCE($3, completed),
           metadata = COALESCE($4, metadata),
           updated_at = CURRENT_TIMESTAMP
       WHERE id = $5 RETURNING *`,
      [title ?? null, description ?? null, completed ?? null, metadata ? JSON.stringify(metadata) : null, id],
    )
  );
  if (rows.length === 0) {
    ctx.response.status = 404;
    ctx.response.body = { error: "Task not found" };
    return;
  }
  ctx.response.body = rows[0];
};

export const deleteTask = async (ctx: RouterContext<"/api/tasks/:id">) => {
  const { id } = ctx.params;
  const { rowCount } = await withClient((c) =>
    c.queryObject("DELETE FROM tasks WHERE id = $1", [id])
  );
  if (rowCount === 0) {
    ctx.response.status = 404;
    ctx.response.body = { error: "Task not found" };
    return;
  }
  ctx.response.status = 204;
};

Add Validation and Error Handling

Create validators/task_validator.ts for input validation:

export const validateTask = (data: Record<string, unknown>, isUpdate = false): string | null => {
  if (!isUpdate && !data.title) return "Title is required";
  if (data.title !== undefined && typeof data.title !== "string") return "Title must be a string";
  if (typeof data.title === "string" && data.title.length > 255) return "Title must be under 255 characters";
  if (data.description !== undefined && typeof data.description !== "string") return "Description must be a string";
  // Compare against undefined, not truthiness, so completed: false still validates
  if (data.completed !== undefined && typeof data.completed !== "boolean") return "Completed must be a boolean";
  if (
    data.metadata !== undefined &&
    (typeof data.metadata !== "object" || data.metadata === null || Array.isArray(data.metadata))
  ) return "Metadata must be a JSON object";
  return null;
};

Create middleware/error_handler.ts for consistent error responses:

import { Context, isHttpError, Next } from "oak";

export const errorHandler = async (ctx: Context, next: Next) => {
  try {
    await next();
  } catch (err) {
    // In strict TypeScript the caught value is unknown, so narrow it first
    const status = isHttpError(err) ? err.status : 500;
    const message = err instanceof Error ? err.message : "Internal Server Error";
    ctx.response.status = status;
    ctx.response.body = { error: message };
    // Always log full details server-side; the response body stays generic
    console.error(`Error: ${message}`, err instanceof Error ? err.stack : err);
  }
};

Testing the API

Use curl or Postman to test endpoints:

# Create a task
curl -X POST http://localhost:8000/api/tasks \
  -H "Content-Type: application/json" \
  -d '{"title": "Learn Deno 2.0", "description": "Build a REST API with Oak and Postgres"}'

# Get all tasks
curl http://localhost:8000/api/tasks

# Update a task
curl -X PUT http://localhost:8000/api/tasks/1 \
  -H "Content-Type: application/json" \
  -d '{"completed": true}'

# Delete a task
curl -X DELETE http://localhost:8000/api/tasks/1

Production Deployment

Containerize the API with Docker for consistent deployments. Create a Dockerfile:

FROM denoland/deno:2.0.0

WORKDIR /app

COPY . .

RUN deno cache main.ts

EXPOSE 8000

CMD ["deno", "run", "--allow-net", "--allow-env", "--allow-read", "main.ts"]

Build and run the Docker image:

docker build -t deno-oak-postgres-api .
docker run -p 8000:8000 --env-file .env deno-oak-postgres-api

For production, use a managed PostgreSQL service (like AWS RDS, Neon, or Supabase) and set DENO_ENV=production with restricted CORS origins and SSL enabled.
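For reference, a production-leaning .env might look like the sketch below. Every value is a placeholder; ALLOWED_ORIGINS and POSTGRES_SSL_CA are the variable names this tutorial's main.ts and db.ts read, and secrets should come from a secret manager rather than a committed file.

```
DENO_ENV=production
POSTGRES_HOST=db.internal.example.com
POSTGRES_PORT=5432
POSTGRES_USER=api_user
POSTGRES_PASSWORD=load_from_a_secret_manager
POSTGRES_DB=tasks_db
POSTGRES_SSL_CA=/etc/ssl/certs/db-ca.pem
ALLOWED_ORIGINS=https://app.example.com
PORT=8000
```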

Production Best Practices

  • Use connection pooling for PostgreSQL to avoid connection exhaustion
  • Enable SSL for database connections in production
  • Add rate limiting and CORS restrictions
  • Use structured logging and error tracking (Sentry, Datadog)
  • Add health check endpoints (/health) for load balancers
  • Implement authentication (JWT, OAuth2) for protected routes
  • Use database migrations (e.g., Nessie) to manage schema changes
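As a sketch of the health-check idea from the list above, here is a framework-agnostic helper: pingDb is a hypothetical stand-in for running SELECT 1 through the pool, and the /health route wiring would follow the same pattern as routes.ts.

```typescript
// Framework-agnostic health probe: report "ok" only if the database
// answers within a timeout. pingDb is a hypothetical stand-in for
// running SELECT 1 through the real connection pool.
type Health = { status: "ok" | "degraded"; db: boolean };

async function checkHealth(
  pingDb: () => Promise<void>,
  timeoutMs = 1_000,
): Promise<Health> {
  const timeout = new Promise<never>((_, reject) => {
    setTimeout(() => reject(new Error("db ping timed out")), timeoutMs);
  });
  try {
    await Promise.race([pingDb(), timeout]);
    return { status: "ok", db: true };
  } catch {
    // A slow or failing database degrades the service without crashing it
    return { status: "degraded", db: false };
  }
}

// Simulate a healthy database and an unreachable one
const healthy = await checkHealth(() => Promise.resolve());
const broken = await checkHealth(() => Promise.reject(new Error("connection refused")));

console.log(healthy.status, broken.status); // ok degraded
```

Returning "degraded" with a 200/503 split (rather than throwing) lets a load balancer distinguish "remove from rotation" from "process is dead".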

Conclusion

This tutorial covered building a production-ready Deno 2.0 REST API with Oak 12.0 and PostgreSQL 16.0. You can extend this by adding authentication, caching with Redis, or GraphQL support. Deno 2.0’s stability and Oak’s flexibility make this stack a strong choice for modern backend applications.
