Not every request needs to hit your backend. Some operations—metadata extraction, URL validation, lightweight transformations—are better served at the edge, closer to your users. Cloudflare Workers provide a compelling platform for these use cases, but knowing when to reach for them (and when not to) is the real skill.
In this article, I'll share a decision framework for edge computing based on my experience building services at Allscreenshots, where we use Workers for link preview metadata extraction while keeping heavier operations in our main backend.
What Are Cloudflare Workers?
Cloudflare Workers are serverless functions that run on Cloudflare's global edge network—over 300 data centers worldwide. Unlike traditional serverless (AWS Lambda, Google Cloud Functions), Workers execute at the edge, meaning your code runs in the data center closest to each user.
Key characteristics:
- No cold starts: V8 isolates spin up in under 5ms
- Global by default: Deployed to all edge locations automatically
- Lightweight runtime: JavaScript/TypeScript with Web APIs (fetch, crypto, streams)
- Request-based pricing: Pay per invocation, not per server
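To make the model concrete, here is a minimal Worker sketched as a pure request handler. The route and response shape are my own placeholders, not anything from a real codebase:

```typescript
// A minimal Worker: one stateless fetch handler, deployed to every edge location.
// Illustrative only; the path and payload are placeholder examples.
function handleRequest(request: Request): Response {
  const url = new URL(request.url);
  return new Response(JSON.stringify({ path: url.pathname }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' },
  });
}

// In an actual Worker this handler is exported as the module entry point:
//   export default { fetch: handleRequest };
```

Because the handler is just a function from `Request` to `Response`, it is trivial to unit-test locally before deploying.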
The Decision Framework
Before moving functionality to the edge, ask these questions:
Move to the Edge When:
1. The operation is stateless
Edge functions excel at request-in, response-out operations. If you need to read from or write to a database, you're adding latency that negates edge benefits.
Good fit: Parse HTML and extract meta tags
Poor fit: Look up user preferences from PostgreSQL
2. Latency matters more than compute power
Edge locations have limited CPU time (roughly 10ms per request on Cloudflare's free tier, more on paid plans). If your operation is lightweight but latency-sensitive, edge wins.
Good fit: Validate and normalize a URL
Poor fit: Generate a PDF report
3. You need geographic distribution without managing infrastructure
Deploying to 300+ locations manually is impractical. Edge platforms handle this automatically.
Good fit: Serve different content based on user location
Poor fit: Run a long-lived WebSocket server
4. The workload is read-heavy and cacheable
Edge computing pairs well with CDN caching. Compute once, cache at the edge, serve many.
Good fit: Transform and cache API responses
Poor fit: Process unique, non-cacheable data per request
Keep in Your Backend When:
- You need persistent connections (databases, WebSockets)
- Operations are CPU-intensive (image processing, ML inference)
- You require more than a few seconds of execution time
- The operation needs access to your internal services
- State management is complex
Common Edge Use Cases
1. Metadata Extraction
Fetching a URL and parsing its HTML for Open Graph tags, Twitter Cards, or structured data. The operation is stateless, involves network I/O to external sites, and benefits from being geographically close to both the user and target server.
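As a rough sketch of the parsing half, the function below extracts Open Graph tags from an HTML string with a regex. This is illustrative only: it assumes `content` follows `property` in each tag, and a production Worker would more likely stream-parse the fetched HTML (for example with Cloudflare's HTMLRewriter):

```typescript
// Naive Open Graph extractor, for illustration. Assumes attributes appear in
// the order property="og:..." content="...". A real Worker would fetch() the
// target URL and parse the response body.
function extractOgTags(html: string): Record<string, string> {
  const tags: Record<string, string> = {};
  const re = /<meta\s+property="og:([^"]+)"\s+content="([^"]*)"/g;
  let match: RegExpExecArray | null;
  while ((match = re.exec(html)) !== null) {
    tags[match[1]] = match[2];
  }
  return tags;
}
```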
2. Request Validation and Transformation
Validating API requests, normalizing URLs, or transforming payloads before they reach your backend. Reject bad requests at the edge before they consume backend resources.
```typescript
// Example: URL normalization at the edge
function normalizeUrl(input: string): string {
  let url = input.trim();
  if (!url.startsWith('http://') && !url.startsWith('https://')) {
    url = 'https://' + url;
  }
  // URL parsing lowercases the hostname for us. Don't lowercase the whole
  // string: paths and query strings can be case-sensitive.
  const parsed = new URL(url);
  // Strip trailing slashes from the path; query and fragment are dropped by design.
  return parsed.origin + parsed.pathname.replace(/\/+$/, '');
}
```
3. API Response Shaping
Your backend returns verbose JSON, but mobile clients need a slim payload. Transform at the edge:
```typescript
// Backend returns the full user object;
// the edge returns only what the client needs.
function shapeResponse(fullResponse: FullUser): SlimUser {
  return {
    id: fullResponse.id,
    name: fullResponse.display_name,
    avatar: fullResponse.profile.avatar_url,
  };
}
```
4. A/B Testing and Feature Flags
Route requests to different backends or modify responses based on cookies, headers, or random assignment—all without touching your application code.
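A sketch of sticky bucket assignment based on a cookie (the cookie name and default split are placeholders of mine):

```typescript
// Assign a request to an A/B bucket, honoring a previously set cookie so the
// assignment stays sticky across requests. Names here are illustrative.
function assignBucket(cookieHeader: string | null, splitPercent = 50): 'A' | 'B' {
  const existing = cookieHeader?.match(/(?:^|;\s*)ab_bucket=([AB])/);
  if (existing) {
    return existing[1] as 'A' | 'B';
  }
  // New visitor: random assignment at the given split.
  return Math.random() * 100 < splitPercent ? 'A' : 'B';
}
```

The Worker would then set the `ab_bucket` cookie on the response and route to the matching backend, keeping the experiment entirely out of application code.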
5. Authentication Token Validation
Validate JWTs at the edge before requests reach your backend. Invalid tokens never consume backend resources.
```typescript
// Pseudo-code for JWT validation at the edge
async function validateToken(request: Request): Promise<Response> {
  const token = request.headers.get('Authorization')?.replace('Bearer ', '');
  if (!token || !isValidJwt(token)) {
    return new Response('Unauthorized', { status: 401 });
  }
  // Token valid, forward to backend
  return fetch(request);
}
```
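For illustration only, the `isValidJwt` check in the pseudo-code could start with a structural and expiry check like the sketch below. A real implementation must also verify the signature (for example with the WebCrypto API), which this deliberately skips:

```typescript
// Decode a JWT payload and check its expiry. Illustrative only: this does NOT
// verify the signature, so it must never be the sole gate in production.
function isStructurallyValidJwt(token: string, now = Date.now()): boolean {
  const parts = token.split('.');
  if (parts.length !== 3) return false;
  try {
    // JWT payloads are base64url-encoded JSON.
    const base64 = parts[1].replace(/-/g, '+').replace(/_/g, '/');
    const payload = JSON.parse(atob(base64));
    // exp is expressed in seconds since the epoch.
    return typeof payload.exp === 'number' && payload.exp * 1000 > now;
  } catch {
    return false;
  }
}
```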
Architecture Pattern: Edge + Backend
The pattern that works well: edge for preprocessing, backend for heavy lifting.
```text
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Client    │────▶│    Edge     │────▶│   Backend   │
│             │      │   Worker    │      │             │
└─────────────┘      └─────────────┘      └─────────────┘
                           │
                     ┌─────┴─────┐
                     │           │
                 Stateless   Lightweight
                 operations  transforms
```
At the edge:
- Validate requests
- Extract metadata from external URLs
- Transform/filter responses
- Handle CORS
- Cache static computations
In the backend:
- Database operations
- File storage
- Job queues and async processing
- Browser automation
- Business logic requiring state
Trade-offs to Consider
Bundle size matters: Workers have size limits. Heavy libraries may not fit or may blow your CPU budget. Prefer lightweight, purpose-built code.
No traditional debugging: You can't attach a debugger. Rely on logging, Cloudflare's dashboard, and local development with Wrangler.
Limited execution time: Workers cap CPU time (about 10ms per request on the free tier, more on paid plans). Long-running operations don't fit.
Statelessness is strict: No filesystem, no persistent memory between requests. Need state? Use Cloudflare KV, Durable Objects, or call your backend.
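When a Worker does need state, Workers KV is the simplest option: an eventually consistent key-value store with `get`/`put` bindings. Below is a sketch of a read-through cache against a minimal interface matching the parts of the KV binding API used here; the TTL and key are placeholder assumptions:

```typescript
// Minimal interface matching the subset of a Workers KV binding we use.
interface KvLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

// Read-through cache: return the stored value if present; otherwise compute,
// store it with a TTL, and return it.
async function getOrCompute(
  kv: KvLike,
  key: string,
  compute: () => Promise<string>,
  ttlSeconds = 300,
): Promise<string> {
  const cached = await kv.get(key);
  if (cached !== null) return cached;
  const fresh = await compute();
  await kv.put(key, fresh, { expirationTtl: ttlSeconds });
  return fresh;
}
```

Coding against the narrow interface rather than the binding itself also makes the function testable outside the Workers runtime.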
Vendor lock-in: Workers use Web APIs but have Cloudflare-specific features. Porting to another platform requires some rewriting.
When NOT to Use Workers
I've seen teams move everything to the edge and regret it. Avoid Workers for:
Database-heavy operations: Every DB call adds latency. If you're making 5+ queries, keep it in your backend near the database.
CPU-intensive processing: Image resizing, video transcoding, ML inference—these need more compute than edge provides.
Complex orchestration: Multi-step workflows with error handling and retries belong in your backend.
Operations requiring secrets rotation: While Workers support secrets, complex credential management is easier in traditional backends.
Performance Expectations
Typical characteristics for well-designed Workers:
| Metric | Typical Value |
|---|---|
| Worker execution time | 1-10ms (excluding external fetches) |
| Cold start | <5ms |
| Total latency | Dominated by external API calls |
| Bundle size target | <1MB (smaller is better) |
The worker itself is rarely the bottleneck. External network calls dominate response time.
Getting Started
Cloudflare's Wrangler CLI makes deployment straightforward:
```bash
# Install
npm install -g wrangler

# Create project
wrangler init my-worker

# Develop locally
wrangler dev

# Deploy globally
wrangler deploy
```
Your code deploys to 300+ locations in seconds. No regions to configure, no scaling to manage.
Conclusion
Cloudflare Workers shine for stateless, latency-sensitive operations that benefit from geographic distribution:
- Metadata extraction
- Request validation and transformation
- API response shaping
- A/B testing and feature flags
- Token validation
Before moving functionality to the edge, verify it's truly stateless and lightweight. The edge isn't a replacement for your backend—it's a complement that handles the quick stuff so your backend can focus on what it does best.
Start with one simple use case, measure the impact, and expand from there.