Rick Cogley

Originally published at cogley.jp

Cloudflare Dynamic Workers: Sandboxed Code Execution at the Edge

I needed to run user-defined JavaScript templates from a database — code that formats RSS feed items into social media posts. The templates could be anything: hand-written, AI-generated, pasted from a blog. Running arbitrary code strings inside my production Worker, with access to D1 databases and R2 buckets, wasn't an option.

Cloudflare's Dynamic Workers, released in March 2026, solved this. They let a parent Worker spawn new Workers at runtime from code strings, each in its own V8 isolate with explicitly controlled access. I wired them into my publishing stack in an afternoon.

The Problem Before Dynamic Workers

If you had a JavaScript function stored in a database and wanted to execute it inside a Cloudflare Worker, you had three options — all bad:

  1. new Function() / eval() — runs the code inside your Worker process, with full access to every binding, every secret, and the open internet. One malicious template and your D1 database, R2 buckets, and API keys are exposed.
  2. String interpolation only — treat templates as dumb mustache-style patterns ({{title}} - {{link}}). Safe, but you can't express logic: no "skip items without titles," no conditional formatting, no length-based truncation.
  3. External execution service — ship the code to a separate sandboxed API. Adds latency, another deployment to maintain, and a network hop that defeats the edge-computing point of Workers.

Dynamic Workers give you option 1's full JavaScript expressiveness with option 3's isolation, at option 2's simplicity. The code runs in a separate V8 isolate on the same machine, with zero access to anything you don't explicitly hand it.

How They Work

The parent Worker creates a new Worker from a code string. Each spawned worker gets its own V8 sandbox, starts in milliseconds, and the parent controls exactly what it can touch.

The API is minimal:

const worker = env.LOADER.load({
  compatibilityDate: '2026-03-21',
  mainModule: 'transform.js',
  modules: { 'transform.js': codeString },
  globalOutbound: null, // block all network access
});

const result = await worker.getEntrypoint().myMethod(data);

Two things matter here:

  1. globalOutbound: null — the spawned worker can't make HTTP requests. No phoning home, no data exfiltration, no SSRF.
  2. Selective binding exposure — the parent decides which bindings (D1, R2, KV) to pass. Pass nothing and the sandbox is a pure function: data in, data out.

How I'm Using Them: Auto-Post RSS Feed Templates

My publishing system (cogley.jp) is a monorepo with a Hono API on Cloudflare Workers, backed by D1, R2, KV, Vectorize, and several Workflows. It handles micro posts, long-form articles, feeds, and syndication to Bluesky, Nostr, and status.lol.

I already had an RSS feed reader with a feeds table that included unused auto_post and auto_post_template columns. The idea was always "when a feed gets a new item, create a post from it."

Before Dynamic Workers, I would have had to hardcode the formatting logic for each feed directly in my Worker — a function like formatEsoliaBlogPost() baked into the deployed code, redeployed every time I wanted to change the output format or add a new feed with different formatting. Now, I store a JavaScript function in a database column, and the cron executes it in a sandbox. Adding a new feed with completely different formatting is an API call, not a deployment.

The Flow

flowchart LR
    A[Cron 15min] --> B[Fetch RSS]
    B --> C{New items?}
    C -- No --> Z[Done]
    C -- Yes --> D[Dynamic Worker sandbox]
    D --> E{Valid output?}
    E -- null --> F[Mark skipped]
    E -- Yes --> G[Create post]
    G --> H[Generate OG image]
    H --> I[Syndicate to Bluesky / Nostr / status.lol]
  1. A cron runs every 15 minutes, refreshing subscribed RSS feeds
  2. When new items appear in a feed with auto_post: true, each item goes to a Dynamic Worker
  3. The Dynamic Worker executes the template — a JavaScript function stored in D1
  4. The template returns structured data (content, stream, mood, visibility)
  5. The parent validates the output, creates a post, pre-generates an OG image, and triggers syndication to Bluesky/Nostr/status.lol
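The per-item flow in steps 2 through 5 can be sketched as a function with injected dependencies. Everything named here is hypothetical: runSandboxed, createPost, and syndicate are stand-ins for the real Dynamic Worker call, the D1 insert, and the syndication trigger, not names from my codebase.

```javascript
// Hypothetical sketch of steps 2-5 of the flow above.
async function processFeedItem(item, template, deps) {
  let output;
  try {
    output = await deps.runSandboxed(template, item); // Dynamic Worker executes the template
  } catch (err) {
    return { status: 'skipped', reason: `template error: ${err.message}` };
  }
  if (output === null) {
    return { status: 'skipped', reason: 'template returned null' };
  }
  if (typeof output?.content !== 'string' || !output.content.trim()) {
    return { status: 'skipped', reason: 'invalid template output' };
  }
  const post = await deps.createPost(output);            // write the post to D1
  await deps.syndicate(post, output.syndicate_to ?? []); // Bluesky / Nostr / status.lol
  return { status: 'posted', postId: post.id };
}
```

Injecting the dependencies keeps the orchestration testable without touching D1 or the sandbox at all.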

The Template

Templates are arrow functions stored in the auto_post_template column. This is the one I'm running for our company's bilingual tech blog:

(item) => {
  if (!item.title) return null; // skip items without titles
  const title = item.title;
  const link = item.link || "";
  const summary = item.summary || "";
  const teaser = summary.length > 150
    ? summary.substring(0, 147) + "..."
    : summary;

  let content = `**${title}**`;
  if (teaser) content += `\n\n${teaser} ☕`;
  if (link) content += `\n\n${link}`;
  content += `\n\n(Japanese version also available on the blog.)`;

  return {
    content,
    stream: "tech",
    visibility: "public",
    mood: "binoculars",
    syndicate_to: ["bluesky", "nostr", "statuslog"]
  };
}

The function receives a plain data object — title, link, summary, content, author, and source feed metadata. It returns what the post should look like, or null to skip the item.

This is the kind of logic that string interpolation can't express. The template makes decisions: skip items without titles, truncate summaries longer than 150 characters and append an ellipsis, conditionally include the teaser line, choose a mood and syndication targets. With a mustache-style {{title}} - {{link}} template, I'd need a separate config field for every one of those behaviors — and I'd still be hardcoding the logic in my Worker code. With Dynamic Workers, the template is the logic, and changing it means updating one database row. No redeploy, no code change.
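A nice side effect of templates being plain functions: you can exercise one locally before saving the row. Here's a minimal harness, assuming the column stores an arrow-function expression. Using `new Function()` is acceptable here only because it's your own trusted dev environment, not the production path — in production the string goes to the sandbox.

```javascript
// Minimal local harness for a stored template (trusted dev environment only).
function loadTemplate(src) {
  // The column stores an arrow-function expression, so wrap it in parens.
  return new Function(`return (${src})`)();
}

// A cut-down template, as it might sit in the auto_post_template column.
const templateSrc = `(item) => {
  if (!item.title) return null;
  const teaser = (item.summary || "").slice(0, 147);
  return { content: "**" + item.title + "**\\n\\n" + teaser, stream: "tech" };
}`;

const tpl = loadTemplate(templateSrc);
console.log(tpl({ title: "Hello", summary: "World" }));
console.log(tpl({ summary: "no title" })); // null
```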

The Security Model

flowchart TB
    subgraph Parent Worker
        A[Template code from D1] --> B[env.LOADER.load]
        F[Validate & sanitize output] --> G[Create post in D1]
    end
    subgraph Dynamic Worker sandbox
        C[Execute template function]
        D[No bindings, no network, no secrets]
    end
    B --> C
    C --> F
    style D fill:#fee,stroke:#c33

The template code could be anything. The sandbox ensures it can't cause harm:

  • No bindings: zero Cloudflare bindings. No D1, no R2, no KV, no secrets.
  • No network: globalOutbound: null blocks all outbound HTTP.
  • Data in, data out: the template receives a plain object and returns a plain object.
  • Output validation: the parent validates the return value against a strict schema, sanitizes strings, strips control characters, and enforces allowed values for fields like stream and visibility.

If the template throws or returns something invalid, the feed item is marked skipped with a reason. The system moves on to the next item.
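The validation step might look something like the sketch below. The field names match the template contract above, but the allowed values and the control-character regex are my assumptions for illustration, not the real schema.

```javascript
// Sketch of the parent-side output validation. Allowed values and the regex
// are illustrative assumptions.
const ALLOWED_STREAMS = new Set(["tech", "personal"]);
const ALLOWED_VISIBILITY = new Set(["public", "unlisted", "private"]);

function validateTemplateOutput(out) {
  if (out === null) return { ok: false, reason: "template returned null" };
  if (typeof out !== "object" || Array.isArray(out)) {
    return { ok: false, reason: "not an object" };
  }
  if (typeof out.content !== "string" || !out.content.trim()) {
    return { ok: false, reason: "missing content" };
  }
  if (!ALLOWED_STREAMS.has(out.stream)) return { ok: false, reason: "unknown stream" };
  if (!ALLOWED_VISIBILITY.has(out.visibility)) return { ok: false, reason: "unknown visibility" };
  // Strip control characters (keeping newlines) before the content reaches D1.
  const content = out.content.replace(/[\u0000-\u0009\u000b-\u001f\u007f]/g, "");
  return { ok: true, post: { ...out, content } };
}
```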

Cost

$0.002 per unique worker loaded per day (waived during beta), plus standard CPU time. For a cron that runs a handful of templates a few times a day, this rounds to zero against the AI inference costs I'm already paying for image analysis and content summarization.
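To make that concrete, the back-of-envelope math, assuming a made-up ten unique templates:

```javascript
// The charge scales with unique workers loaded per day, not with how many
// times the cron fires. Ten templates is a hypothetical number.
const uniqueTemplates = 10;
const perWorkerPerDay = 0.002; // USD
const monthly = uniqueTemplates * perWorkerPerDay * 30;
console.log(monthly.toFixed(2)); // "0.60"
```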

Another Use Case: Live Code Migration

I maintain svelte.cogley.jp, an interactive reference for migrating from React, Vue, and Angular to Svelte 5. The site itself is static SSR — all mapping data baked into TypeScript files. But the people using it are actively converting code from one framework to another, and they'd benefit from a "paste your component, get a Svelte 5 version" feature.

Dynamic Workers are how I'd build it:

  1. User pastes a React/Vue/Angular component
  2. The server sends it to Workers AI with a conversion prompt
  3. The AI-generated Svelte code runs in a Dynamic Worker sandbox to verify it parses — no phantom imports, no syntax errors
  4. The validated result goes back to the user

Two layers of untrusted input — user-submitted code and AI-generated output — both handled by the same sandbox. globalOutbound: null blocks the code from reaching the network, and the V8 isolate prevents it from touching anything else on the platform. This pattern applies anywhere you need a "paste your code, get a conversion" tool.
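A rough stand-in for the parse check in step 3 is below, syntax-checking plain JavaScript with `new Function()`, which compiles the source but never executes it. Real Svelte output would need the Svelte compiler, and in production the check would run inside the Dynamic Worker rather than the parent.

```javascript
// Stand-in for the "does it even parse?" check. Compiles, never executes.
function parsesAsJavaScript(src) {
  try {
    new Function(src);
    return { ok: true };
  } catch (err) {
    return { ok: false, error: err.message };
  }
}

console.log(parsesAsJavaScript("const x = 1;").ok); // true
console.log(parsesAsJavaScript("const x = ;").ok);  // false
```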

Where They Don't Fit

I evaluated Dynamic Workers across my entire stack. Three places where they're the wrong tool:

  • Typed application routes — my Hono routes are stable, typed with Drizzle ORM, and benefit from static deployment. Making them dynamic would add indirection for no gain.
  • Durable Workflows — image processing, syndication delivery, and transcription use Cloudflare Workflows because they're multi-step and retriable. Dynamic Workers are ephemeral — wrong primitive.
  • "Make everything dynamic" — Dynamic Workers solve a specific problem: runtime execution of variable code. Applying them broadly adds complexity without purpose.

The right use cases are user-configurable transforms and sandboxed execution of untrusted code — places where the logic changes without redeploying, or where you need a hard security boundary between arbitrary code and your platform.

What's Next

I'm building a feed management UI in my authoring app — list subscriptions, toggle auto-posting, edit templates in a code editor. The API endpoints already exist; it's the frontend that needs work.

Beyond that, the same Dynamic Worker pattern applies to syndication formatting (per-platform rules stored in D1, editable without redeploying) and AI content transforms (LLM generates the code, Dynamic Worker executes it, parent validates the output).



Rick Cogley is CEO of eSolia Inc., providing bilingual IT outsourcing and infrastructure services in Tokyo, Japan.
