myougaTheAxo
Edge Functions in Practice: Cloudflare Workers and Vercel Edge Design Patterns

When to Choose Edge Functions

Edge Functions run your code at the location closest to each user. Unlike traditional serverless, cold starts are near zero and low latency is achievable globally.

However, the Edge Runtime has constraints. Most Node.js APIs are unavailable: fs, child_process, and so on. Raw connections to traditional databases also generally don't work without an HTTP-based driver or proxy.

Edge is best for: Auth checks/redirects, A/B testing, cache layers, lightweight APIs, edge SSR optimization

Regular servers are better for: Direct DB access, file processing, heavy computation, Node.js-specific libraries
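The "thin layer" idea can be made concrete with a minimal handler. This sketch shows an edge-suitable auth check that redirects anonymous visitors using only Web-standard Request/Response APIs; the handler name and cookie name are illustrative, not from any specific platform:

```typescript
// A thin edge-side auth gate: redirect visitors without a session cookie.
// Uses only Web-standard APIs, so it runs in any Edge Runtime.
function handleRequest(req: Request): Response {
  const cookies = req.headers.get("Cookie") ?? "";
  const isAuthenticated = cookies.includes("session=");
  if (!isAuthenticated) {
    // 302 to the login page; the heavy work stays on the origin server
    return Response.redirect(new URL("/login", req.url).toString(), 302);
  }
  return new Response("ok");
}
```

Everything past this check (rendering, DB writes, file handling) is delegated to the origin.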

Cloudflare Workers + Hono: Type-Safe Routing

import { Hono } from "hono";
import { cors } from "hono/cors";
import { jwt } from "hono/jwt";

interface Env {
  KV: KVNamespace;
  DB: D1Database;
  API_SECRET: string;
}

const app = new Hono<{ Bindings: Env }>();
app.use("*", cors({ origin: ["https://myapp.com"] }));
// jwt() takes the secret as a string, so wrap it to read from env per request
app.use("/api/*", (c, next) => jwt({ secret: c.env.API_SECRET })(c, next));

app.get("/api/config/:key", async (c) => {
  const key = c.req.param("key");
  const cached = await c.env.KV.get(`config:${key}`, "json");
  if (cached) return c.json({ data: cached, source: "cache" });

  const result = await c.env.DB.prepare("SELECT value FROM configs WHERE key = ?")
    .bind(key).first<{ value: string }>();

  if (!result) return c.json({ error: "Not found" }, 404);

  const data = JSON.parse(result.value);
  await c.env.KV.put(`config:${key}`, JSON.stringify(data), { expirationTtl: 3600 });
  return c.json({ data, source: "db" });
});

export default app;
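The KV-then-D1 lookup in the handler is a cache-aside pattern, and it can be factored into a reusable helper. A sketch with a minimal KV-shaped interface so it is testable outside Workers (the interface and helper names are illustrative, not part of the Workers API):

```typescript
// Minimal KV-shaped interface, modeled on the subset of KVNamespace used here.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

// Cache-aside: return the cached value if present, otherwise call the
// loader (the source of truth) and populate the cache with a TTL.
async function cacheAside<T>(
  kv: KVLike,
  key: string,
  loader: () => Promise<T | null>,
  ttlSeconds = 3600
): Promise<{ data: T | null; source: "cache" | "db" }> {
  const cached = await kv.get(key);
  if (cached !== null) return { data: JSON.parse(cached) as T, source: "cache" };

  const data = await loader();
  if (data !== null) {
    await kv.put(key, JSON.stringify(data), { expirationTtl: ttlSeconds });
  }
  return { data, source: "db" };
}
```

The route handler then shrinks to a single `cacheAside(c.env.KV, ...)` call with the D1 query as the loader.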

Vercel Edge Middleware: Auth and Redirects

import { NextRequest, NextResponse } from "next/server";
import { jwtVerify } from "jose";

const PUBLIC_PATHS = ["/", "/login", "/signup", "/api/auth"];

export async function middleware(req: NextRequest) {
  const { pathname } = req.nextUrl;
  // Exact match or prefix-with-slash; a bare startsWith would make "/" match everything
  if (PUBLIC_PATHS.some((p) => pathname === p || (p !== "/" && pathname.startsWith(p + "/")))) {
    return NextResponse.next();
  }

  const token = req.cookies.get("auth-token")?.value;
  if (!token) {
    const loginUrl = new URL("/login", req.url);
    loginUrl.searchParams.set("redirect", pathname);
    return NextResponse.redirect(loginUrl);
  }

  try {
    const secret = new TextEncoder().encode(process.env.JWT_SECRET!);
    const { payload } = await jwtVerify(token, secret);
    if (pathname.startsWith("/admin") && payload.role !== "admin") {
      return NextResponse.redirect(new URL("/403", req.url));
    }
    const response = NextResponse.next();
    response.headers.set("x-user-id", String(payload.sub));
    return response;
  } catch {
    const response = NextResponse.redirect(new URL("/login", req.url));
    response.cookies.delete("auth-token");
    return response;
  }
}
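Next.js also lets you scope middleware at the routing layer with an exported config, so static assets and framework internals never hit the auth check at all. A typical matcher (the exclusion list is illustrative):

```typescript
// Exclude Next.js internals and static files from the middleware entirely.
export const config = {
  matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};
```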

A/B Testing at the Edge

function getVariant(testId: string, userId: string): string {
  const variants = ["control", "variant-a", "variant-b"];
  const hash = Array.from(`${testId}:${userId}`).reduce(
    (acc, char) => (acc * 31 + char.charCodeAt(0)) & 0x7fffffff, 0
  );
  return variants[hash % variants.length];
}
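Because the hash is pure, the same user always lands in the same bucket with no storage round-trip; the assignment can then be pinned in a cookie so client-side code and analytics see the same bucket. A usage sketch (`getVariant` is repeated so the snippet runs standalone; the cookie format is illustrative):

```typescript
// Deterministic bucketing: same function as above, repeated for a
// self-contained snippet.
function getVariant(testId: string, userId: string): string {
  const variants = ["control", "variant-a", "variant-b"];
  const hash = Array.from(`${testId}:${userId}`).reduce(
    (acc, char) => (acc * 31 + char.charCodeAt(0)) & 0x7fffffff, 0
  );
  return variants[hash % variants.length];
}

// Typical edge usage: pin the assignment in a cookie so repeat visits
// and client-side analytics agree on the bucket.
function variantCookie(testId: string, userId: string): string {
  const v = getVariant(testId, userId);
  return `ab-${testId}=${v}; Path=/; Max-Age=2592000; SameSite=Lax`;
}
```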

Edge LLM Streaming

import Anthropic from "@anthropic-ai/sdk";

export const runtime = "edge";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const client = new Anthropic();

  const stream = await client.messages.create({
    model: "claude-haiku-4-5",
    max_tokens: 1024,
    messages,
    stream: true,
  });

  const encoder = new TextEncoder();
  const readableStream = new ReadableStream({
    async start(controller) {
      try {
        for await (const event of stream) {
          if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
            controller.enqueue(encoder.encode(event.delta.text));
          }
        }
        controller.close();
      } catch (err) {
        // Surface upstream failures instead of leaving the stream hanging
        controller.error(err);
      }
    },
  });

  return new Response(readableStream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
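On the client, the response body is consumed incrementally with a stream reader. A self-contained sketch of the decode loop, driven here by a locally constructed stream since the endpoint above isn't reachable from a snippet:

```typescript
// Read a streaming text body chunk by chunk, as a browser client would
// after fetching the endpoint above.
async function readTextStream(body: ReadableStream<Uint8Array>): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true correctly handles multi-byte characters split across chunks
    text += decoder.decode(value, { stream: true });
  }
  return text + decoder.decode();
}
```

In a real client this would be `await readTextStream(res.body!)` after `fetch("/api/chat", ...)`, appending each chunk to the UI as it arrives.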

The core of Edge Functions design is to minimize processing at the edge and delegate anything that needs Node.js to origin servers. Keeping the edge to thin layers of auth, caching, and A/B testing gets the most out of its low-latency characteristics.


This article is from the Claude Code Complete Guide (7 chapters) on note.com.
myouga (@myougatheaxo) - VTuber axolotl. Sharing practical AI development tips.
