A Playwright script that works on your laptop can still fail once it becomes a scheduled scraping worker in production.
In this post, I break down a practical setup using Playwright, Bright Data Browser API, and Kubernetes Jobs/CronJobs to run browser-based scraping more reliably for JavaScript-heavy targets.
It covers:
- remote browser execution over CDP
- lightweight worker containers
- Kubernetes Job/CronJob scheduling
- secret handling with Kubernetes Secrets
- avoiding overlapping runs in production
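To give a flavor of the remote-browser and secret-handling pieces, here is a minimal worker sketch. The `BRD_*` variable names and the `wss://…@brd.superproxy.io:9222` endpoint shape are illustrative assumptions, not Bright Data's documented values; in the cluster the variables would be injected from a Kubernetes Secret.

```python
import os

def browser_ws_endpoint() -> str:
    """Build the Browser API CDP endpoint from env vars.

    The variable names and endpoint format below are assumptions
    for illustration; check your provider's dashboard for the
    exact connection string.
    """
    customer = os.environ["BRD_CUSTOMER"]
    zone = os.environ["BRD_ZONE"]
    password = os.environ["BRD_PASSWORD"]
    return (
        f"wss://brd-customer-{customer}-zone-{zone}:{password}"
        "@brd.superproxy.io:9222"
    )

if __name__ == "__main__":
    # Connect to the remote browser over CDP instead of launching
    # Chromium locally, which keeps the worker image lightweight.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(browser_ws_endpoint())
        page = browser.new_page()
        page.goto("https://example.com")
        print(page.title())
        browser.close()
```

Because the credentials arrive as environment variables, the same image runs unchanged on a laptop and in the cluster; only the Secret differs.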
If you are building real scraping pipelines rather than one-off demos, this walkthrough may be useful.
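For the scheduling side, a CronJob with `concurrencyPolicy: Forbid` is the standard way to prevent overlapping runs. The names, image, and schedule below are placeholders, not values from the post:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scrape-worker               # placeholder name
spec:
  schedule: "0 * * * *"             # hourly
  concurrencyPolicy: Forbid         # skip a run if the previous one is still going
  jobTemplate:
    spec:
      backoffLimit: 2               # retry a failed scrape twice
      activeDeadlineSeconds: 1800   # kill runs stuck past 30 minutes
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: worker
              image: registry.example.com/scrape-worker:latest  # placeholder image
              envFrom:
                - secretRef:
                    name: brightdata-credentials  # Secret holding the Browser API creds
```

`Forbid` drops a scheduled run while the previous one is still active; `Replace` would instead cancel the running Job, which is rarely what you want for scraping.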