Rick Cogley

Posted on • Originally published at cogley.jp

Cloudflare Pages vs Workers in 2026: Migration Guide

Cloudflare is folding Pages into Workers. Pages isn't getting killed tomorrow, but all the new stuff lands on Workers only. I migrated all my Pages projects earlier this year and wrote down what tripped me up along the way.

Is Cloudflare sunsetting Pages?

Not exactly. On Reddit and Hacker News, people keep saying "Pages is deprecated" but that's not quite right. Kenton Varda (Workers tech lead) said: "We are taking all the Pages-specific features and turning them into general Workers features." The product isn't being killed — it's being absorbed.

But the signal is clear. New features go to Workers first (or only). The Secrets Store? Workers only. Workflows? Workers only. Containers? Workers only. Pages gets maintenance updates at best. The official compatibility matrix shows Workers can do everything Pages does, plus a growing list of things Pages can't.

I migrated all my Pages projects between January and March 2026. Here's what the feature gap looks like now:

| Feature | Pages | Workers |
|---|---|---|
| Static Assets | ✓ | ✓ |
| Server-Side Rendering | ✓ | ✓ |
| Durable Objects | Requires separate Worker | Native support |
| Cron Triggers | ✗ | ✓ |
| Queue Consumers | ✗ | ✓ |
| Email Workers (inbound) | ✗ | ✓ |
| Image Resizing Binding | ✗ | ✓ |
| Rate Limiting | ✗ | ✓ |
| Workers Logs | Basic | Full observability |
| Tail Workers | ✗ | ✓ |
| Source Maps | ✗ | ✓ |
| Gradual Deployments | ✗ | ✓ |
| Remote Development | ✗ | ✓ (--remote) |
| Smart Placement | ✗ | ✓ |
| Secrets Store | ✗ | ✓ (shared across Workers) |

If you need Durable Objects, scheduled tasks, or production observability, migration is worth doing now rather than waiting for the forced migration.

What These Features Mean in Practice

The table lists feature names, but what do they let you build? Here's the short version for each:

Durable Objects: Stateful Edge Computing

What it is: Durable Objects provide strongly consistent, stateful storage at the edge. Think of them as tiny single-threaded servers that maintain state between requests, like a WebSocket connection manager, a collaborative document editor, or a rate limiter that actually works globally.

Practical example: Building a real-time collaborative feature? Without Durable Objects, you'd need a centralized WebSocket server. With them, each document gets its own Durable Object that handles all connected users, coordinates edits, and persists state, all at the edge, close to your users.

// Each chat room gets its own Durable Object instance
export class ChatRoom {
  private connections: WebSocket[] = [];

  async fetch(request: Request) {
    const [client, server] = Object.values(new WebSocketPair());
    server.accept();
    this.connections.push(server);

    server.addEventListener('message', (event) => {
      // Broadcast to all connected clients in this room
      this.connections.forEach((ws) => ws.send(event.data));
    });

    server.addEventListener('close', () => {
      // Drop the socket so we don't broadcast to dead connections
      this.connections = this.connections.filter((ws) => ws !== server);
    });

    return new Response(null, { status: 101, webSocket: client });
  }
}

Durable Objects Documentation

Cron Triggers: Scheduled Tasks Without Infrastructure

What it is: Run your Worker on a schedule, every minute, hourly, daily, or with custom cron expressions. No external scheduler needed, no always-on server, and it scales automatically.

Practical example: Daily digest emails, cache warming, data synchronization, cleanup jobs, or generating static reports from dynamic data.

{
  "triggers": {
    "crons": [
      "0 0 * * *", // Daily at midnight UTC
      "*/15 * * * *", // Every 15 minutes
      "0 9 * * 1", // Every Monday at 9am UTC
    ],
  },
}
export default {
  async scheduled(event: ScheduledEvent, env: Env) {
    // This runs on your schedule, not in response to HTTP requests
    await env.DB.prepare('DELETE FROM sessions WHERE expires_at < ?').bind(Date.now()).run();
  },
};

Cron Triggers Documentation

Queue Consumers: Reliable Background Processing

What it is: Cloudflare Queues let you decouple request handling from heavy processing. Your Worker responds immediately while background work happens asynchronously with automatic retries and dead-letter handling.

Practical example: User uploads an image → respond immediately → queue processes the image (resize, analyze, store) → no timeout concerns, no user waiting.

// Producer: Queue the work
export default {
  async fetch(request: Request, env: Env) {
    const image = await request.arrayBuffer();
    await env.IMAGE_QUEUE.send({ image, userId: 'abc123' });
    return new Response('Processing started', { status: 202 });
  },
};

// Consumer: Process in background
export default {
  async queue(batch: MessageBatch<ImageJob>, env: Env) {
    for (const message of batch.messages) {
      await processImage(message.body);
      message.ack();
    }
  },
};

Queues Documentation

Email Workers: Programmable Inbound Email

What it is: Receive, parse, and route inbound emails in your Worker. This is not for sending transactional email — for that you'd call an external service (Resend, Maileroo, SES) via fetch. Email Workers handle the receiving side: Cloudflare routes incoming mail to your Worker, and you decide what to do with it.

Practical example: Inbound email parsing for support tickets, email-to-task automation, custom forwarding rules, or spam filtering.

export default {
  async email(message: EmailMessage, env: Env) {
    // Parse incoming email and create a support ticket
    const ticket = {
      from: message.from,
      subject: message.headers.get('subject'),
      body: await new Response(message.raw).text(),
    };
    // .bind() takes positional values matching ? placeholders in the SQL
    await env.DB.prepare('INSERT INTO tickets ...').bind(ticket.from, ticket.subject, ticket.body).run();

    // Forward to team if urgent
    if (ticket.subject?.includes('[URGENT]')) {
      await message.forward('team@example.com');
    }
  },
};

Email Workers Documentation

Image Resizing Binding: On-Demand Image Transformation

What it is: Transform images on-the-fly without pre-generating variants. Resize, crop, convert formats, and optimize, all at the edge with caching.

Practical example: Serve responsive images from a single source. Request /image.jpg?w=400 and get a 400px-wide WebP automatically.

export default {
  async fetch(request: Request, env: Env) {
    const url = new URL(request.url);
    // Fall back to a default width if ?w= is missing
    const width = parseInt(url.searchParams.get('w') ?? '800', 10);

    // Fetch the requested image and transform it in one operation
    return fetch(url.origin + url.pathname, {
      cf: {
        image: {
          width,
          format: 'webp',
          quality: 85,
        },
      },
    });
  },
};

Image Resizing Documentation

Rate Limiting: Protect Your APIs

What it is: Built-in rate limiting without external services. Define limits per IP, API key, or custom identifiers with sliding windows.

Practical example: Protect authentication endpoints, API quotas, or prevent abuse of expensive operations.

export default {
  async fetch(request: Request, env: Env) {
    // getClientIP could read request.headers.get('CF-Connecting-IP')
    const { success } = await env.RATE_LIMITER.limit({ key: getClientIP(request) });

    if (!success) {
      return new Response('Rate limit exceeded', { status: 429 });
    }

    return handleRequest(request);
  },
};

Rate Limiting Documentation

Full Observability: Logs, Traces, and Metrics

What it is: Production-grade debugging with persistent logs, distributed traces across Service Bindings, real-time log streaming, and detailed analytics.

Why it matters: Pages gives you basic request logs. Workers provides:

  • Persistent logs: Query historical logs in the dashboard, not just real-time
  • Distributed traces: Follow a request across multiple Workers connected via Service Bindings
  • Invocation logs: See every console.log, uncaught exception, and subrequest
  • Custom metrics: Track business metrics alongside system metrics
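For custom metrics, one common pattern is emitting structured JSON from console.log, since Workers Logs captures console output and makes JSON fields queryable. A minimal sketch — the event name and fields here are made up for illustration:

```typescript
// Hypothetical handler that logs one structured event per request
const worker = {
  async fetch(request: Request): Promise<Response> {
    const start = Date.now();
    const response = new Response('ok');
    console.log(JSON.stringify({
      event: 'request_served',              // illustrative metric name
      path: new URL(request.url).pathname,
      durationMs: Date.now() - start,
      status: response.status,
    }));
    return response;
  },
};

export default worker;
```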

Workers Observability Documentation

Tail Workers: Real-Time Log Processing

What it is: Stream logs from your Workers to another Worker for real-time processing. Build custom alerting, log aggregation, or analytics pipelines.

Practical example: Send errors to Slack, aggregate logs to your SIEM, or build custom dashboards.

// Tail Worker receives logs from other Workers
export default {
  async tail(events: TraceItem[]) {
    const errors = events.filter((e) => e.outcome === 'exception');
    if (errors.length > 0) {
      await fetch('https://hooks.slack.com/...', {
        method: 'POST',
        body: JSON.stringify({ text: `${errors.length} errors detected!` }),
      });
    }
  },
};

Tail Workers Documentation

Source Maps: Debug Production Errors

What it is: Upload source maps with your deployment and see original file names and line numbers in error stack traces, not minified gibberish.

Why it matters: When production breaks at 3am, you want to see src/auth/validate.ts:47 not index.js:1:28456.
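In wrangler.jsonc this is a one-line opt-in; upload_source_maps is the Wrangler option name (verify your Wrangler version supports it):

{
  "upload_source_maps": true,
}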

Source Maps Documentation

Gradual Deployments: Safe Rollouts

What it is: Roll out new versions incrementally, send 1% of traffic to the new version, monitor for errors, then gradually increase. Automatic rollback if things go wrong.

Practical example: Deploy a risky change to 5% of users, watch error rates, then promote to 100% with confidence.

# Split traffic: 10% to the new version, 90% stays on the current one
npx wrangler versions deploy <new-version-id>@10% <current-version-id>@90%

# If metrics look good, increase
npx wrangler versions deploy <new-version-id>@50% <current-version-id>@50%

# Full rollout
npx wrangler versions deploy <new-version-id>@100%

Gradual Deployments Documentation

Remote Development: Test Against Production Bindings

What it is: Run wrangler dev --remote to execute your Worker on Cloudflare's infrastructure while developing locally. Your code runs against real D1 databases, KV namespaces, and Durable Objects, not local emulators.

Why it matters: Local emulation is good, but sometimes you need to test against actual production data or debug issues that only appear on real infrastructure.

Remote Development Documentation

Smart Placement: Automatic Latency Optimization

What it is: Cloudflare automatically runs your Worker closer to your backend services (databases, APIs) rather than close to users. Best for Workers that spend most of their time talking to a centralized backend.

When to use it: If your Worker calls a database in us-east-1 for every request, Smart Placement runs the Worker near that database instead of near users, reducing round-trip latency for database calls.

{
  "placement": {
    "mode": "smart",
  },
}

Smart Placement Documentation

Secrets Store: Centralized Secret Management

What it is: Share secrets across multiple Workers without duplicating them. Update a secret once, all Workers using it get the new value.

Practical example: API keys, database credentials, or signing secrets that are used by multiple Workers in your architecture.
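In code, a Secrets Store binding exposes an async get(), so the value is fetched at runtime rather than bundled with the Worker. A sketch — the binding name API_KEY and the helper are illustrative, and the type alias stands in for Cloudflare's binding type:

```typescript
// Minimal structural type standing in for the Secrets Store binding
type SecretBinding = { get(): Promise<string> };

// Build auth headers for an upstream call using the shared secret
async function authHeaders(env: { API_KEY: SecretBinding }): Promise<Record<string, string>> {
  const apiKey = await env.API_KEY.get(); // resolved at request time, never bundled
  return { Authorization: `Bearer ${apiKey}` };
}
```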

Secrets Store Documentation


Architecture Overview

The main structural change:

flowchart TB
    subgraph before["Pages Architecture"]

        B1[Git Push] --> B2[Pages Build]
        B2 --> B3[Pages Deployment]
        B3 --> B4[pages.dev URL]
        B3 --> B5[Custom Domain]
        B3 -.-> B6[Separate Worker<br/>for Durable Objects]
    end

    subgraph after["Workers Architecture"]

        A1[Git Push] --> A2[Build Step]
        A2 --> A3[Workers Deployment]
        A3 --> A4[workers.dev URL]
        A3 --> A5[Custom Domain]
        A3 --> A6[Durable Objects<br/>Cron Triggers<br/>Queues<br/>Email Workers]
        A3 --> A7[Smart Placement]
        A3 --> A8[Full Observability]
    end

    before -.->|migrate| after

    style B6 stroke-dasharray: 5 5
    style A6 fill:#e1f5fe
    style A7 fill:#e1f5fe
    style A8 fill:#e1f5fe

Workers consolidates everything into one deployment. No more maintaining a separate Worker just to use Durable Objects.

Pre-Migration Assessment

Before diving in, audit your project:

Bundle Size Check

Workers have a 10MB compressed bundle limit on the paid plan (3MB on the free plan). Analyze your bundle:

# For Vite-based projects
npx vite-bundle-visualizer

# For any project with source maps
npx source-map-explorer dist/**/*.js

If you're over the limit, you'll need to optimize before migrating.

Node.js API Compatibility

Workers run on workerd, Cloudflare's open-source C++ runtime. It embeds V8 (the same JS engine as Chrome) but without the Node.js layer on top — no filesystem, no persistent processes, no raw sockets. Each request gets its own isolate that spins up in under a millisecond and dies when the response is sent. That's what makes Workers fast, but it's also why some Node APIs are impossible: there's no disk to read from and no long-lived process to spawn children in.

The good news: adding "nodejs_compat" (or "nodejs_compat_v2" on recent compatibility dates) to your compatibility_flags enables polyfills for most common Node.js APIs — Buffer, crypto, stream, path, and others. Many npm packages just work with this flag.

{
  "compatibility_date": "2026-03-01",
  "compatibility_flags": ["nodejs_compat"]  // Enables Node.js API polyfills
}

What still doesn't work, even with the compat flag:

| Node.js | Workers Alternative |
|---|---|
| fs module | Fetch from R2/KV, or use your framework's server utilities |
| process.env | env parameter in the fetch handler, or framework bindings |
| child_process | No equivalent; use Service Bindings or Queues |
| net / dgram | No raw sockets; use fetch or WebSockets |
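As an example of the first row: code that would have read a template with fs.readFile can pull it from an R2 bucket binding instead. A sketch, assuming an R2 binding named TEMPLATES configured in wrangler.jsonc (the type alias is a stand-in for Cloudflare's R2Bucket type):

```typescript
// Structural stand-in for the R2 binding's get() surface
type R2BucketLike = { get(key: string): Promise<{ body: BodyInit } | null> };

const worker = {
  async fetch(request: Request, env: { TEMPLATES: R2BucketLike }): Promise<Response> {
    // No filesystem in workerd — fetch the file from object storage instead
    const obj = await env.TEMPLATES.get('email-template.html');
    if (!obj) return new Response('Not found', { status: 404 });
    return new Response(obj.body, { headers: { 'Content-Type': 'text/html' } });
  },
};

export default worker;
```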

DNS Requirements

Workers custom domains require Cloudflare-managed nameservers. Unlike Pages, you cannot use external DNS providers. Verify your domain's nameservers are with Cloudflare before proceeding.

Migration Process Overview

flowchart TB
    subgraph prep["A: Prepare"]
        direction TB
        P1[Audit bundle size]
        P2[Check Node.js APIs]
        P3[Inventory env vars]
    end

    subgraph config["B: Configure"]
        direction TB
        C1[Update wrangler.jsonc]
        C2[Update framework adapter]
        C3[Create .assetsignore]
    end

    subgraph test["C: Test"]
        direction TB
        T1[wrangler dev locally]
        T2[Deploy to workers.dev]
        T3[Verify functionality]
    end

    subgraph switch["D: Switch Domain"]
        direction TB
        S1[Remove from Pages API]
        S2[Add Workers routes API]
        S3[Verify HTTPS]
    end

    subgraph cleanup["E: Cleanup"]
        direction TB
        CL1[Delete old deployments]
        CL2[Remove Pages project]
        CL3[Update CI/CD]
    end

    prep --> config
    config --> test
    test --> switch
    switch --> cleanup

Environment Variables Inventory

Pages separates "production" and "preview" environments. Document all your variables and secrets; you'll need to recreate them in Workers.
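Wrangler can help with the inventory — secret names (not values) are listable, and each one has to be re-entered on the Workers side. Something along these lines, with placeholder project and secret names:

```shell
# Secret values are write-only, so only the names can be listed
npx wrangler pages secret list --project-name my-pages-project

# Recreate each secret on the Workers project (prompts for the value)
npx wrangler secret put API_KEY
```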

Core Migration Steps

Configuration Transformation

The biggest change is your wrangler.jsonc (or wrangler.toml). The mapping:

flowchart LR
    subgraph pages["Pages Config"]
        P1["pages_build_output_dir"]
        P2["Implicit 404 handling"]
        P3["Implicit asset serving"]
    end

    subgraph workers["Workers Config"]
        W1["assets.directory"]
        W2["assets.not_found_handling"]
        W3["assets.binding + main"]
        W4["compatibility_date"]
    end

    P1 -->|becomes| W1
    P2 -->|becomes| W2
    P3 -->|becomes| W3
    pages -->|add| W4

    style W4 fill:#c8e6c9

Pages configuration:

{
  "name": "my-project",
  "pages_build_output_dir": "./dist/client/",
}

Workers configuration:

{
  "name": "my-project",
  "compatibility_date": "2026-03-01",
  "compatibility_flags": ["nodejs_compat"],
  "main": "./dist/server/index.js",
  "assets": {
    "directory": "./dist/client/",
    "binding": "ASSETS",
    "not_found_handling": "single-page-application",
  },
}

Differences:

  • pages_build_output_dir becomes assets.directory
  • Add main pointing to your server entry point
  • Add compatibility_date (required for Workers, use a recent date within 6 months for latest features)
  • Add compatibility_flags: ["nodejs_compat"] for npm packages that expect Node.js APIs
  • Explicitly configure 404 handling (single-page-application or 404-page)

Static Site Configuration

For purely static sites without server-side logic:

{
  "name": "my-static-site",
  "compatibility_date": "2026-03-01",
  "assets": {
    "directory": "./dist/",
    "not_found_handling": "404-page",
  },
}

SPA Configuration

For single-page applications with client-side routing:

{
  "name": "my-spa",
  "compatibility_date": "2026-03-01",
  "assets": {
    "directory": "./build/",
    "not_found_handling": "single-page-application",
  },
}

Assets Ignore Patterns

Pages automatically excluded node_modules, .git, and .DS_Store. Workers doesn't. Create .assetsignore:

node_modules
.git
.DS_Store
.env*
*.map

Note: For SvelteKit projects using @sveltejs/adapter-cloudflare, the adapter handles asset filtering automatically. You typically don't need .assetsignore unless you have custom static files outside the adapter's output directory.

Local Development

Wrangler commands change slightly:

# Pages
wrangler pages dev ./dist --port 8788

# Workers
wrangler dev --port 8787

Note the default port change: 8788 → 8787. If your team has scripts expecting the old port, configure it in wrangler.jsonc:

{
  "dev": {
    "port": 8788, // Keep Pages-era port for consistency
  },
}

Framework-Specific Guidance

SvelteKit

If you're also upgrading to Vite 8 (Rolldown), do that at the same time — the build pipeline changed and it's easier to sort both out in one pass. I migrated 8 SvelteKit sites on Workers in a day; see Migrating 8 SvelteKit Sites to Vite 8 for the details on that side.

Update your adapter:

npm install -D @sveltejs/adapter-cloudflare

svelte.config.js:

import adapter from '@sveltejs/adapter-cloudflare';

export default {
  kit: {
    adapter: adapter({
      routes: {
        include: ['/*'],
        exclude: [''],
      },
      platformProxy: {
        configPath: './wrangler.jsonc',
        persist: { path: '.wrangler/state/v3' },
      },
    }),
  },
};

wrangler.jsonc:

{
  "name": "my-sveltekit-app",
  "compatibility_date": "2026-03-01",
  "compatibility_flags": ["nodejs_compat"],

  // IMPORTANT: SvelteKit adapter outputs to .svelte-kit/cloudflare/, NOT dist/
  "main": ".svelte-kit/cloudflare/_worker.js",
  "assets": {
    "directory": ".svelte-kit/cloudflare",
    "binding": "ASSETS",
  },

  // Enable preview URLs for PR deployments
  "workers_dev": true,
  "preview_urls": true,
}

Common Mistake: Many guides show ./dist/server/ and ./dist/client/ paths, but @sveltejs/adapter-cloudflare outputs to .svelte-kit/cloudflare/. Using the wrong paths causes "Worker not found" errors.

Access bindings via platform.env:

// +page.server.ts
export async function load({ platform }) {
  const db = platform?.env?.DB;
  const result = await db?.prepare('SELECT * FROM posts').all();
  return { posts: result?.results ?? [] };
}

Lume, 11ty, etc. (Static Sites)

For Lume, 11ty and similar static site generators, you don't need server-side logic. Create an assets-only Worker:

wrangler.jsonc:

{
  "name": "my-11ty-site",
  "compatibility_date": "2026-03-01",
  "assets": {
    "directory": "./_site/",
    "not_found_handling": "404-page",
  },
}

For local development with live reload, run both tools in parallel:

# Terminal 1: 11ty watch
npx @11ty/eleventy --watch

# Terminal 2: Wrangler with live reload
npx wrangler dev --live-reload

Or create a convenience script in package.json:

{
  "scripts": {
    "dev": "concurrently \"npx @11ty/eleventy --watch\" \"npx wrangler dev --live-reload\""
  }
}

Gatsby and React SPAs

For Gatsby or Create React App builds that use client-side routing:

wrangler.jsonc:

{
  "name": "my-gatsby-site",
  "compatibility_date": "2026-03-01",
  "assets": {
    "directory": "./public/",
    "not_found_handling": "single-page-application",
  },
}

If you need to add security headers or custom logic, create a Worker:

src/index.ts:

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Let assets binding handle static files
    const response = await env.ASSETS.fetch(request);

    // Add security headers
    const headers = new Headers(response.headers);
    headers.set('X-Content-Type-Options', 'nosniff');
    headers.set('X-Frame-Options', 'DENY');
    headers.set('Referrer-Policy', 'strict-origin-when-cross-origin');

    return new Response(response.body, {
      status: response.status,
      headers,
    });
  },
};

Then configure the Worker to run first:

{
  "name": "my-gatsby-site",
  "compatibility_date": "2026-03-01",
  "main": "./src/index.ts",
  "assets": {
    "directory": "./public/",
    "binding": "ASSETS",
    "not_found_handling": "single-page-application",
    "run_worker_first": true,
  },
}

Pages Functions Migration

If you used the functions/ directory pattern:

  1. Compile your functions:

     npx wrangler pages functions build --outdir ./dist/functions

  2. Point main to the compiled output:

     {
       "main": "./dist/functions/index.js",
       "assets": {
         "directory": "./dist/client/",
       },
     }

If you used _routes.json for routing, replace it with run_worker_first:

{
  "assets": {
    "run_worker_first": ["/api/*", "/auth/*"],
  },
}

Domain Migration: The Tricky Part

This is where most guides fall short. Custom domain migration isn't straightforward because:

  1. You cannot manually edit the DNS CNAME while the domain is attached to Pages
  2. You cannot add the domain to Workers while it's attached to Pages
  3. Deleting from Pages first causes downtime

The solution: atomic API switchover.

flowchart TD
    subgraph problem["❌ The Problem"]

        PR1[Domain attached to Pages]
        PR2[Cannot add to Workers<br/>'already in use' error]
        PR3[Delete from Pages first?]
        PR4[⚠️ Downtime!]
        PR1 --> PR2
        PR2 --> PR3
        PR3 --> PR4
    end

    subgraph solution["✅ The Solution: Atomic API Switchover"]

        S1[Deploy Worker to workers.dev first]
        S2[Test thoroughly on workers.dev URL]
        S3[Run switchover script]

        subgraph atomic["~2-5 seconds"]
            A1[DELETE domain from Pages API]
            A2[POST Workers route for root]
            A3[POST Workers route for www]
            A1 --> A2 --> A3
        end

        S4[Verify site loads]
        S5[Enable 'Always Use HTTPS']

        S1 --> S2 --> S3 --> atomic --> S4 --> S5
    end

    problem -.->|instead use| solution

    style PR4 fill:#ffcdd2
    style atomic fill:#c8e6c9

Finding Your Pages Project

First, identify your existing Pages project via API:

#!/bin/bash
# find-pages-project.sh

ACCOUNT_ID="your-account-id"
DOMAIN="yourdomain.com"
API_TOKEN="your-api-token"

# List all Pages projects
curl -s "https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/pages/projects" \
  -H "Authorization: Bearer ${API_TOKEN}" | \
  jq -r '.result[] | select(.domains[] | contains("'"${DOMAIN}"'")) | .name'

Atomic Domain Switchover

This script removes the domain from Pages and adds it to Workers in rapid succession (2-5 seconds of downtime):

#!/bin/bash
# switchover.sh

set -e

ACCOUNT_ID="your-account-id"
ZONE_ID="your-zone-id"
API_TOKEN="your-api-token"
PAGES_PROJECT="old-pages-project"
WORKERS_SCRIPT="new-workers-project"
DOMAIN="yourdomain.com"

echo "Removing domain from Pages..."
curl -s -X DELETE \
  "https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/pages/projects/${PAGES_PROJECT}/domains/${DOMAIN}" \
  -H "Authorization: Bearer ${API_TOKEN}"

echo "Adding Workers route for root domain..."
curl -s -X POST \
  "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/workers/routes" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data '{
    "pattern": "'"${DOMAIN}"'/*",
    "script": "'"${WORKERS_SCRIPT}"'"
  }'

echo "Adding Workers route for www subdomain..."
curl -s -X POST \
  "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/workers/routes" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data '{
    "pattern": "www.'"${DOMAIN}"'/*",
    "script": "'"${WORKERS_SCRIPT}"'"
  }'

echo "Done! Domain switched to Workers."

Post-Switchover Verification

After running the switchover:

  1. Test immediately: Load your site and verify it works
  2. Check both domains: Test both yourdomain.com and www.yourdomain.com
  3. Enable HTTPS: In Cloudflare dashboard, ensure "Always Use HTTPS" is enabled
  4. Monitor for errors: Check Workers analytics for any 5xx errors
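The first two checks can be scripted; the domain is a placeholder:

```shell
# Spot-check both hostnames return a healthy status code
for host in yourdomain.com www.yourdomain.com; do
  curl -s -o /dev/null -w "%{http_code}  https://$host\n" "https://$host"
done
```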

Common Domain Issues

522 Timeout Errors: You're pointing to the wrong target. Workers routes attach directly to your zone; don't try to CNAME to workers.dev.

www Subdomain Not Working: You need separate routes for root and www. The switchover script above handles both.

API Permission Errors: Your token needs "Edit Workers Routes" permission. Also ensure the start date is today or earlier (a future start date blocks the token).

Advanced Features

Once you're on Workers, the full platform opens up.

Service Bindings

Connect multiple Workers without network overhead. Traffic stays within Cloudflare's network, bypassing the public internet entirely:

{
  // Service Bindings provide network-level security
  // Traffic never leaves Cloudflare's infrastructure
  "services": [
    {
      "binding": "AUTH_SERVICE",
      "service": "auth-worker",
      "entrypoint": "AuthHandler", // Optional: only needed for named exports
    },
  ],
}
// In your main Worker
const user = await env.AUTH_SERVICE.validateToken(token);

Security Note: Service Bindings provide network-level isolation, but you should still implement application-level authentication (like HMAC signatures) for defense-in-depth. If one Worker is compromised, HMAC prevents it from impersonating other legitimate callers. Think of Service Bindings as the private network, and HMAC as the identity verification.

Service Bindings Documentation

Secrets Store

Share secrets across multiple Workers:

{
  "secrets_store_secrets": [
    {
      "binding": "API_KEY",
      "secret_name": "shared-api-key",
    },
  ],
}

Secrets Store Documentation

Smart Placement

Let Cloudflare automatically place your Worker close to your data:

{
  "placement": {
    "mode": "smart",
  },
}

Verify it's working by checking response headers:

# Check which colo your Worker is running from
curl -sI https://your-worker.example.com/ | grep -iE "cf-ray|cf-placement"

# cf-ray: abc123-NRT    ← NRT = Tokyo (user's nearest colo, normal)
# cf-ray: abc123-IAD    ← IAD = Virginia (Smart Placement moved it near your DB)

The cf-ray header suffix shows the colo code. Without Smart Placement, a request from Tokyo always runs at NRT. With it enabled, if your D1 or external database is in us-east-1, the Worker might run from IAD instead — fewer round trips to the database, faster overall response even though the Worker is farther from the user.

Smart Placement Documentation

D1 Read Replication

For global apps with D1, enable read replicas to serve reads from the nearest edge location:

{
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "my-database",
      "database_id": "xxx",
      // Note: Enable read replication in Cloudflare dashboard, not wrangler.jsonc
    },
  ],
}

Critical: Use D1 Sessions for Read-After-Write Consistency

When read replication is enabled, you must wrap database connections with sessions to ensure read-after-write consistency. Without this, a write followed immediately by a read may return stale data from a replica:

// WRONG: May read stale data after a write
const db = env.DB;
await db.prepare('INSERT INTO posts (title) VALUES (?)').bind('New Post').run();
const posts = await db.prepare('SELECT * FROM posts').all(); // Might miss the insert!

// CORRECT: Use sessions for consistency
const db = env.DB.withSession();
await db.prepare('INSERT INTO posts (title) VALUES (?)').bind('New Post').run();
const posts = await db.prepare('SELECT * FROM posts').all(); // Guaranteed to include insert

For multiple databases, create a helper:

interface D1SessionEnv {
  DB_MAIN: D1Database;
  DB_CLIENT?: D1Database;
}

function wrapWithSessions(env: D1SessionEnv) {
  return {
    DB_MAIN: env.DB_MAIN.withSession(),
    DB_CLIENT: env.DB_CLIENT?.withSession(),
  };
}

// In your request handler
const dbs = wrapWithSessions(env);

D1 Read Replication Documentation

D1 Sessions Documentation

Observability

Enable comprehensive logging and tracing for debugging and monitoring:

{
  "observability": {
    "enabled": true,
    "head_sampling_rate": 1, // 1 = 100% sampling, reduce for high-traffic
    "logs": {
      "enabled": true,
      "head_sampling_rate": 1,
      "persist": true, // Store logs for later analysis
      "invocation_logs": true, // Log each request
    },
    "traces": {
      "enabled": true,
      "persist": true,
      "head_sampling_rate": 1, // Capture all traces for debugging
    },
  },
}

This configuration provides:

  • Invocation logs: See every request with timing and status
  • Persistent logs: Query historical logs in the Cloudflare dashboard
  • Traces: Distributed tracing for debugging complex flows across Service Bindings

For production with high traffic, reduce head_sampling_rate (e.g., 0.1 for 10%) to manage costs.

Workers Observability Documentation

Workers Logs Documentation

Workers Traces Documentation

CI/CD Setup

A lesson we learned the hard way

When we migrated from Pages, the obvious move was to replicate the whole build-and-deploy pipeline in GitHub Actions. So we did — every push built the project, ran tests, and deployed via wrangler deploy. It worked great for about two weeks, until we burned through our entire GitHub Actions minutes allocation!

The problem: Pages handled builds on Cloudflare's infrastructure for free. GitHub Actions bills per minute, and running npm ci && npm run build && wrangler deploy across multiple projects on every push adds up fast.

Our fix: use Cloudflare's Workers Builds (the built-in CI connected to your GitHub repo) for the actual build-and-deploy step, and reserve GitHub Actions for the lighter-weight jobs only, like linting, security scanning, type checking. The build itself happens on Cloudflare's side, where it's included in your plan.
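The split looks like this in practice: a checks-only workflow on the Actions side, while Workers Builds owns build and deploy. A sketch, assuming typical npm script names:

# .github/workflows/checks.yml
# Lint and typecheck only; build + deploy happen in Cloudflare Workers Builds
name: Checks
on: [pull_request]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm run lint
      - run: npx tsc --noEmit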

If you still want GitHub Actions for deployment

It works, just watch your minutes. Here's the pipeline:

flowchart LR
    subgraph trigger["Triggers"]
        PR[Pull Request]
        Push[Push to main]
    end

    subgraph build["Build"]
        Install[npm ci]
        Build[npm run build]
        Test[npm test]
    end

    subgraph deploy["Deploy"]
        Preview[wrangler versions upload]
        Prod[wrangler deploy]
    end

    subgraph notify["Notify"]
        Comment[PR Comment with URL]
        Slack[Slack notification]
    end

    PR --> Install --> Build --> Test --> Preview --> Comment
    Push --> Install --> Build --> Test --> Prod --> Slack

Here's a complete GitHub Actions workflow:

# .github/workflows/deploy.yml
name: Deploy to Workers

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - run: npm ci
      - run: npm run build

      - name: Deploy to Cloudflare Workers
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CF_API_TOKEN }}
          accountId: ${{ secrets.CF_ACCOUNT_ID }}

Required API Token Permissions

Your token needs:

  • Workers Scripts: Edit - Deploy Workers
  • Workers Routes: Edit - Manage custom domains
  • Account Settings: Read - Access account resources

If using Secrets Store:

  • Workers Secrets Store: Edit (not just Read!)

Preview Environments

One of Pages' best features was automatic preview deployments for every branch. Workers can replicate this with three approaches:

| Approach | How it works | Isolation | Complexity | Best for |
| --- | --- | --- | --- | --- |
| Built-in Preview URLs | wrangler versions upload → unique URL per deploy | Per-deployment | Low | Most projects |
| Environment-based | wrangler deploy --env preview → staging subdomain | Single preview env | Medium | Staging workflows |
| Branch Workers | Separate Worker per branch, delete after merge | Per-branch | High | Large teams |

For most projects, start with Option 1.

Option 1: Built-in Preview URLs (Simplest)

Enable Cloudflare's native preview URL feature:

wrangler.jsonc:

{
  "name": "my-app",
  "compatibility_date": "2026-03-01",
  "preview_urls": true,
  "main": "./dist/server/index.js",
  "assets": {
    "directory": "./dist/client/",
  },
}

Then use versioned deployments:

# Upload a new version (doesn't affect production)
npx wrangler versions upload

# The upload output includes a preview URL for that version, e.g.:
#   https://abc123.my-app.workers.dev

In CI, deploy preview versions for pull requests:

# .github/workflows/preview.yml
name: Preview Deployment

on:
  pull_request:
    branches: [main]

jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - run: npm ci
      - run: npm run build

      - name: Deploy Preview Version
        id: deploy
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CF_API_TOKEN }}
          accountId: ${{ secrets.CF_ACCOUNT_ID }}
          command: versions upload

      - name: Comment Preview URL
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '🚀 Preview deployed! Check it out: ${{ steps.deploy.outputs.deployment-url }}'
            })

Option 2: Environment-based Previews

Create a dedicated preview environment with its own subdomain:

wrangler.jsonc:

{
  "name": "my-app",
  "compatibility_date": "2026-03-01",
  "compatibility_flags": ["nodejs_compat"],
  "main": "./dist/server/index.js",
  "assets": {
    "directory": "./dist/client/",
  },

  // Required for preview URLs to work
  "workers_dev": true,
  "preview_urls": true,

  "env": {
    "preview": {
      "name": "my-app-preview",
      "vars": {
        "ENVIRONMENT": "preview",
      },
      // Preview uses workers.dev URL automatically
    },
    "production": {
      "routes": [
        { "pattern": "yourdomain.com", "zone_name": "yourdomain.com" },
        { "pattern": "www.yourdomain.com", "zone_name": "yourdomain.com" },
      ],
      "vars": {
        "ENVIRONMENT": "production",
      },
    },
  },
}

Deploy to different environments:

# Deploy to preview
npx wrangler deploy --env preview

# Deploy to production
npx wrangler deploy --env production

CI workflow:

- name: Deploy
  uses: cloudflare/wrangler-action@v3
  with:
    apiToken: ${{ secrets.CF_API_TOKEN }}
    command: deploy --env ${{ github.ref == 'refs/heads/main' && 'production' || 'preview' }}

Option 3: Dynamic Branch Workers

For teams that need isolated environments per feature branch:

# .github/workflows/branch-preview.yml
name: Branch Preview

on:
  push:
    branches-ignore: [main]
  delete:

jobs:
  deploy:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - run: npm ci && npm run build

      # GitHub Actions expressions can't transform strings, so sanitize
      # the branch name (slashes -> dashes) in a shell step first
      - name: Sanitize branch name
        id: branch
        run: echo "name=${GITHUB_REF_NAME//\//-}" >> "$GITHUB_OUTPUT"

      - name: Deploy Branch Worker
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CF_API_TOKEN }}
          command: deploy --name my-app-${{ steps.branch.outputs.name }}

  cleanup:
    if: github.event_name == 'delete'
    runs-on: ubuntu-latest
    steps:
      - name: Sanitize branch name
        id: branch
        run: echo "name=${EVENT_REF//\//-}" >> "$GITHUB_OUTPUT"
        env:
          EVENT_REF: ${{ github.event.ref }}

      - name: Delete Branch Worker
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CF_API_TOKEN }}
          command: delete --name my-app-${{ steps.branch.outputs.name }}

Troubleshooting

Deployment failures:

flowchart TD
    DF{Deploy fails — error type?}
    DF -->|Size limit| SL[Bundle too large]
    DF -->|Error 10021| AE1[Secrets Store needs Edit permission]
    DF -->|Error 10000| AE2[Token needs Workers Routes Edit]
    DF -->|DNS conflict| DNS[Delete existing DNS record first]
    DF -->|wrangler not found| WNF[Use npx wrangler deploy]

    style SL fill:#fff3e0
    style AE1 fill:#ffcdd2
    style AE2 fill:#ffcdd2
    style DNS fill:#fff3e0

Runtime errors:

flowchart TD
    RF{Runtime error — what happens?}
    RF -->|fs/path/process error| NODE[Missing nodejs_compat flag]
    RF -->|522 timeout| T522[Wrong CNAME — use Workers routes]
    RF -->|404 on routes| R404[Check not_found_handling config]
    RF -->|Env var undefined| ENV[Static vs dynamic env access]
    RF -->|Auth failed| AUTH[Re-set secrets on new Worker]
    RF -->|SQLITE_CONSTRAINT| FK[Foreign key — parent record missing]

    style NODE fill:#e3f2fd
    style T522 fill:#ffcdd2
    style AUTH fill:#ffcdd2
    style FK fill:#fff3e0

Bundle Size Exceeds 10MB

Symptoms: Deployment fails with size limit error.

First thing to try: Upgrade to Vite 8. The Rolldown bundler produces significantly smaller output than Vite 7's Rollup — we saw 10-30% bundle size reductions across our projects with zero code changes. See Migrating 8 SvelteKit Sites to Vite 8 for details.

If you're still over the limit:

  1. Tree-shake aggressively: Remove unused imports
  2. Dynamic imports: Split code that isn't needed on every request
  3. Move to client: Large libraries that don't need server-side rendering
  4. External services: Offload to KV, R2, or external APIs
// Before: static import, always bundled and loaded with the Worker
import { heavyLibrary } from 'heavy-library';

// After: dynamic import, loaded only when this code path runs
// (note the destructuring: import() resolves to a module namespace)
const { heavyLibrary } = await import('heavy-library');

Node.js API Errors

Symptoms: Runtime errors about missing fs, path, or process.

Solutions:

For fs operations:

// Instead of fs.readFileSync, fetch the file through the assets binding.
// The request needs an absolute URL, so derive one from the incoming request.
const response = await env.ASSETS.fetch(new Request(new URL('/file.json', request.url)));
const data = await response.json();

For process.env:

// SvelteKit
import { env } from '$env/dynamic/private';
const apiKey = env.API_KEY;

// Raw Workers: env is the second argument to the fetch handler
export default {
  fetch(request, env) {
    const apiKey = env.API_KEY;
    return new Response('ok');
  },
};

Authorization Errors

Error 10021 (Secrets Store): Your token has "Read" but needs "Edit" permission for Secrets Store.

Error 10000 (Workers Routes): Your token needs "Edit Workers Routes" permission.

Token not working at all: Check that the token's start date isn't set to a future date.
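If you're not sure which of these applies, Cloudflare's token verify endpoint reports whether a token is active at all. Here's a minimal check you can run locally or in CI; the endpoint is real, but the summarize helper is invented here for readable output:

```javascript
// Sanity-check a CI token against Cloudflare's token verify endpoint.
function summarize(body) {
  if (!body.success) {
    return `invalid: ${body.errors.map((e) => e.message).join('; ')}`;
  }
  return `status: ${body.result.status}`; // e.g. "active", "disabled", "expired"
}

async function verifyToken(token) {
  const res = await fetch('https://api.cloudflare.com/client/v4/user/tokens/verify', {
    headers: { Authorization: `Bearer ${token}` },
  });
  return summarize(await res.json());
}

// Only hits the network when a token is actually configured
if (process.env.CF_API_TOKEN) {
  verifyToken(process.env.CF_API_TOKEN).then(console.log);
}
```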

Secrets Synchronization After Migration

Symptoms: Authentication failures when calling other Workers or external services after migration.

Cause: Secrets set via wrangler secret put are Worker-specific. Migrating from Pages to Workers creates a new Worker, so your secrets don't transfer automatically.

Solution:

  1. Re-set all secrets on the new Worker:
   wrangler secret put MY_SECRET
  2. If using HMAC authentication with another service, verify the secret format:

    • Some services use the raw secret as the signing key
    • Others use a hash of the secret (e.g., SHA256) as the signing key
    • Check the receiving service's documentation or code
  3. For Service Bindings authentication, ensure both the calling and receiving Workers have matching secrets configured.
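To see why the key-derivation detail matters, here's a small sketch using node:crypto (available in Workers with nodejs_compat): signing the same payload with the raw secret versus a SHA-256 hash of it produces different signatures, so both sides must agree on the scheme.

```javascript
// Two HMAC schemes for the same payload and secret. The signatures differ,
// which is exactly the mismatch that causes post-migration auth failures.
import { createHmac, createHash } from 'node:crypto';

function signRaw(secret, payload) {
  // Raw secret used directly as the HMAC key
  return createHmac('sha256', secret).update(payload).digest('hex');
}

function signHashed(secret, payload) {
  // Some services hash the secret first, then use the digest as the key
  const key = createHash('sha256').update(secret).digest();
  return createHmac('sha256', key).update(payload).digest('hex');
}

const payload = JSON.stringify({ action: 'sync' });
console.log(signRaw('my-secret', payload) === signHashed('my-secret', payload)); // false
```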

Static Redirects Limit

Pages allowed 2000 static redirects in _redirects. Workers doesn't have this file.

Solutions:

  1. Pattern-based redirects in Worker:
   const redirects = new Map([
     ['/old-path', '/new-path'],
     ['/another-old', '/another-new'],
   ]);

   export default {
     fetch(request, env) {
       const url = new URL(request.url);
       const redirect = redirects.get(url.pathname);
       if (redirect) {
         return Response.redirect(new URL(redirect, url.origin), 301);
       }
       return env.ASSETS.fetch(request);
     },
   };
  2. Cloudflare Bulk Redirects: For large redirect lists, use Bulk Redirects in the Cloudflare dashboard.
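If whole sections moved (say everything under /blog/ now lives under /posts/), prefix rules alongside the exact-match Map keep the list short. A sketch, with hypothetical paths:

```javascript
// Exact-match redirects plus prefix rules for whole sections that moved
const exact = new Map([
  ['/old-path', '/new-path'],
]);
const prefixes = [
  ['/blog/', '/posts/'],
];

function resolveRedirect(pathname) {
  const hit = exact.get(pathname);
  if (hit) return hit;
  for (const [from, to] of prefixes) {
    if (pathname.startsWith(from)) return to + pathname.slice(from.length);
  }
  return null; // no match: fall through to env.ASSETS.fetch(request)
}

console.log(resolveRedirect('/blog/hello-world')); // "/posts/hello-world"
```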

DNS Conflicts

Symptoms: Deployment fails with "DNS record already exists" error.

Solution: Manually delete the conflicting DNS record in Cloudflare dashboard before deploying.

D1 Foreign Key Constraint Errors

Symptoms: FOREIGN KEY constraint failed: SQLITE_CONSTRAINT errors at runtime.

Cause: D1 enforces foreign key constraints by default (unlike some SQLite configurations). This catches referential integrity issues that might have been silently ignored before.

Solutions:

  1. Ensure parent records exist first:
   // WRONG: Child before parent
   await db.prepare('INSERT INTO posts (user_id, title) VALUES (?, ?)').bind(userId, title).run();

   // CORRECT: Verify parent exists or create in correct order
   const user = await db.prepare('SELECT id FROM users WHERE id = ?').bind(userId).first();
   if (!user) throw new Error('User not found');
   await db.prepare('INSERT INTO posts (user_id, title) VALUES (?, ?)').bind(userId, title).run();
  2. Use NULL for optional foreign keys:
   // If the FK column is nullable and you don't have a valid reference
   await db
     .prepare('INSERT INTO posts (user_id, title) VALUES (?, ?)')
     .bind(null, title) // Pass null, not undefined or invalid ID
     .run();
  3. Order batch operations correctly:
   // In a batch, parent inserts must come before child inserts
   await db.batch([
     db.prepare('INSERT INTO users (id, name) VALUES (?, ?)').bind(userId, name),
     db.prepare('INSERT INTO posts (user_id, title) VALUES (?, ?)').bind(userId, title),
   ]);

wrangler Command Not Found in CI

Symptoms: CI fails with "wrangler: command not found".

Solution: Use your package manager's exec command:

# npm
npx wrangler deploy

# pnpm
pnpm exec wrangler deploy

# yarn
yarn wrangler deploy

Deleting Pages Projects with Many Deployments

The Cloudflare dashboard can't delete Pages projects with 500+ deployments.

Solution: Use the API to delete deployments in batches:

#!/bin/bash
# delete-deployments.sh

ACCOUNT_ID="your-account-id"
PROJECT_NAME="your-pages-project"
API_TOKEN="your-api-token"
LIMIT=100  # Deployments to delete per run

deployments=$(curl -s \
  "https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/pages/projects/${PROJECT_NAME}/deployments?per_page=${LIMIT}" \
  -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.result[].id')

for id in $deployments; do
  echo "Deleting deployment: $id"
  curl -s -X DELETE \
    "https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/pages/projects/${PROJECT_NAME}/deployments/${id}" \
    -H "Authorization: Bearer ${API_TOKEN}"
  sleep 0.3  # Rate limit protection
done

echo "Deleted up to ${LIMIT} deployments. Run again if more remain."

Run this script repeatedly until the deployment count is low enough to delete via dashboard.

Post-Migration Checklist

After migrating, verify everything works:

  • [ ] Site loads correctly on all custom domains
  • [ ] Both root and www subdomains work
  • [ ] HTTPS is enforced ("Always Use HTTPS" enabled)
  • [ ] All environment variables are configured
  • [ ] Secrets are accessible (re-set via wrangler secret put)
  • [ ] Database bindings (D1, KV, R2) work
  • [ ] D1 Sessions enabled if using read replication
  • [ ] API routes function correctly
  • [ ] Service Bindings authenticate successfully (if applicable)
  • [ ] Preview deployments work for non-production branches
  • [ ] CI/CD pipeline deploys successfully
  • [ ] Observability/logging is enabled (logs + traces)
  • [ ] Scheduled triggers (crons) are firing correctly
  • [ ] Smart Placement is active (if desired)
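A few of these checks are easy to script. Here's a sketch of the classification half; pair it with a fetch loop over your own domains (the thresholds are this example's assumptions, not anything Cloudflare-defined):

```javascript
// Classify the final response of a followed fetch into a checklist verdict.
function classify({ status, url }) {
  if (!url.startsWith('https://')) return 'HTTPS not enforced';
  if (status === 404) return 'route missing';
  if (status >= 500) return 'server error';
  return status < 400 ? 'OK' : `unexpected ${status}`;
}

// Usage against a real domain:
// const res = await fetch('https://yourdomain.com/', { redirect: 'follow' });
// console.log(classify({ status: res.status, url: res.url }));
console.log(classify({ status: 200, url: 'https://example.com/' })); // "OK"
```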

Where Things Stand (March 2026)

As of March 2026, Workers has full feature parity with Pages for static assets, SSR, and custom domains. The Secrets Store, Workflows, Containers, and Durable Objects remain Workers-only. Cloudflare hasn't announced a forced migration deadline, but the gap keeps widening — every major platform feature ships for Workers first.

The domain switchover is the trickiest part. The atomic API approach described above works, but expect 2-5 seconds of downtime. Everything else is configuration changes.

If you're starting a new project, skip Pages entirely — deploy to Workers from day one. If you have existing Pages projects, migrate on your own schedule while you can control the process.


References:

Migration & Getting Started

Framework Adapters

Workers Features

Observability & Debugging

Deployment & CI/CD

- GitHub Actions Integration


Rick Cogley is CEO of eSolia Inc., providing bilingual IT outsourcing and infrastructure services in Tokyo, Japan.
