Alex Spinov
Wrangler Is a Free CLI That Deploys Code to Cloudflare in Seconds

Wrangler is Cloudflare's CLI for Workers, Pages, R2, D1, KV, and Queues. Deploy globally in seconds from your terminal.

Quick Start

# Create a new project
npm create cloudflare@latest my-worker

# Dev mode with hot reload
wrangler dev

# Deploy globally
wrangler deploy
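The scaffold gives you a Worker entry point in `src/index.ts`; a module like the one below is what `wrangler deploy` actually ships. This is an illustrative sketch (the echo route is mine, not the scaffold's exact template):

```typescript
// src/index.ts — a minimal Worker module. `wrangler dev` serves it locally,
// `wrangler deploy` pushes it to Cloudflare's edge.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // Echo the requested path back as JSON.
    return new Response(JSON.stringify({ path: url.pathname }), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```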

wrangler.toml: Configuration

name = "my-scraper-api"
main = "src/index.ts"
compatibility_date = "2026-03-01"

# Environment variables
[vars]
API_VERSION = "v2"
MAX_RESULTS = "100"

# Secrets (set via CLI)
# wrangler secret put API_KEY

# D1 Database
[[d1_databases]]
binding = "DB"
database_name = "scraping-db"
database_id = "xxxx-xxxx-xxxx"

# R2 Storage
[[r2_buckets]]
binding = "BUCKET"
bucket_name = "scraped-data"

# KV Namespace
[[kv_namespaces]]
binding = "CACHE"
id = "xxxx"

# Cron Triggers
[triggers]
crons = ["*/30 * * * *"]

# Routes
routes = [
  { pattern = "api.example.com/*", zone_name = "example.com" }
]
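Every binding declared in this file surfaces as a property on the `env` object your handlers receive, and the cron trigger invokes a `scheduled` handler. A sketch of how that looks, with trimmed-down stand-ins for the real types from `@cloudflare/workers-types`:

```typescript
// Minimal stand-in interfaces; the real types come from @cloudflare/workers-types.
interface KVNamespace { get(key: string): Promise<string | null> }
interface R2Bucket { get(key: string): Promise<unknown> }
interface D1Database { prepare(sql: string): unknown }

export interface Env {
  API_VERSION: string;  // from [vars]
  MAX_RESULTS: string;  // vars are always strings
  API_KEY?: string;     // secret, set via `wrangler secret put API_KEY`
  DB: D1Database;       // [[d1_databases]] binding
  BUCKET: R2Bucket;     // [[r2_buckets]] binding
  CACHE: KVNamespace;   // [[kv_namespaces]] binding
}

const worker = {
  async fetch(_req: Request, env: Env): Promise<Response> {
    const limit = Number(env.MAX_RESULTS); // parse numeric vars yourself
    return new Response(`${env.API_VERSION}: up to ${limit} results`);
  },
  // Invoked every 30 minutes by the cron trigger declared above.
  async scheduled(_event: unknown, env: Env): Promise<void> {
    await env.CACHE.get("config");
  },
};

export default worker;
```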

D1 Commands

# Create database
wrangler d1 create scraping-db

# Run SQL
wrangler d1 execute scraping-db --command "CREATE TABLE products (id INTEGER PRIMARY KEY, title TEXT, price REAL)"

# Run migrations
wrangler d1 migrations apply scraping-db

# Query
wrangler d1 execute scraping-db --command "SELECT * FROM products LIMIT 5"

# Export
wrangler d1 export scraping-db --output backup.sql
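Inside a Worker, the same database is reachable through the `DB` binding. The `prepare`/`bind`/`all` chain below is the real D1 query API; the interfaces are minimal stand-ins for `@cloudflare/workers-types`, and `cheapestProducts` is a hypothetical helper:

```typescript
// Trimmed stand-ins for the D1 binding types.
interface D1Result<T> { results: T[] }
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  all<T>(): Promise<D1Result<T>>;
}
interface D1Database { prepare(sql: string): D1PreparedStatement }

interface Product { id: number; title: string; price: number }

// Parameterized query against the products table created above.
async function cheapestProducts(db: D1Database, limit: number): Promise<Product[]> {
  const { results } = await db
    .prepare("SELECT id, title, price FROM products ORDER BY price ASC LIMIT ?")
    .bind(limit)
    .all<Product>();
  return results;
}
```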

R2 Commands

# Create bucket
wrangler r2 bucket create scraped-data

# Upload file
wrangler r2 object put scraped-data/reports/2026-03.csv --file ./report.csv

# Download
wrangler r2 object get scraped-data/reports/2026-03.csv --file ./downloaded.csv

# List objects
wrangler r2 object list scraped-data --prefix reports/
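The same bucket is reachable from a Worker through the `BUCKET` binding; `put` and `get` below mirror the real R2 object API (`get` resolves to `null` for a missing key). The interfaces are trimmed stand-ins and the helper names are hypothetical:

```typescript
// Trimmed stand-ins for the R2 binding types.
interface R2ObjectBody { text(): Promise<string> }
interface R2Bucket {
  put(key: string, value: string): Promise<unknown>;
  get(key: string): Promise<R2ObjectBody | null>;
}

// Write a monthly CSV report under the reports/ prefix.
async function saveReport(bucket: R2Bucket, month: string, csv: string): Promise<void> {
  await bucket.put(`reports/${month}.csv`, csv);
}

// Read it back; null means the object does not exist.
async function loadReport(bucket: R2Bucket, month: string): Promise<string | null> {
  const obj = await bucket.get(`reports/${month}.csv`);
  return obj ? await obj.text() : null;
}
```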

KV Commands

# Create namespace
wrangler kv namespace create CACHE

# Set value
wrangler kv key put --namespace-id=xxx "config" '{"maxRetries":3}'

# Get value
wrangler kv key get --namespace-id=xxx "config"

# Bulk upload
wrangler kv bulk put --namespace-id=xxx data.json
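A common pattern with the `CACHE` binding is cache-aside with a TTL; `expirationTtl` is a real KV put option (minimum 60 seconds). The interface is a trimmed stand-in for the binding type and `cachedJson` is a hypothetical helper:

```typescript
// Trimmed stand-in for the KV binding type.
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

// Cache-aside: return the cached value if present, otherwise compute,
// store with a TTL, and return the fresh result.
async function cachedJson<T>(
  kv: KVNamespace,
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>,
): Promise<T> {
  const hit = await kv.get(key);
  if (hit !== null) return JSON.parse(hit) as T; // cache hit
  const fresh = await compute();                 // miss: recompute
  await kv.put(key, JSON.stringify(fresh), { expirationTtl: ttlSeconds });
  return fresh;
}
```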

Environments

# wrangler.toml
[env.staging]
name = "my-worker-staging"
vars = { API_VERSION = "v2-beta" }

[env.production]
name = "my-worker-prod"
routes = [{ pattern = "api.example.com/*" }]

wrangler deploy --env staging
wrangler deploy --env production

Tail: Live Logs

# Stream live logs from production
wrangler tail

# Filter by status
wrangler tail --status error

# Filter by search
wrangler tail --search "scrape"

Deploying scraping APIs globally? My Apify tools plus Wrangler add up to edge-deployed data APIs.

Custom deployment solution? Email spinov001@gmail.com
