## Why Playwright Test?

Playwright Test gives you auto-waiting (no flaky sleep calls), parallel execution, a built-in trace viewer, and tests that run across Chromium, Firefox, and WebKit from a single codebase.
## Quick Start

```bash
npm init playwright@latest
```
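The init command scaffolds a config, a `tests/` folder, and example specs. From there, the standard Playwright CLI runs them (the file path below is the default created by the scaffold):

```shell
# Run the full suite headlessly across all configured browsers
npx playwright test

# Run a single file in headed mode for debugging
npx playwright test tests/example.spec.ts --headed
```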
## Your First Test

```typescript
import { test, expect } from "@playwright/test";

test("homepage has title", async ({ page }) => {
  await page.goto("https://your-app.com");
  await expect(page).toHaveTitle(/My App/);
});

test("login flow", async ({ page }) => {
  await page.goto("/login"); // relative URLs resolve against baseURL (see Configuration)
  await page.fill("#email", "user@example.com");
  await page.fill("#password", "password123");
  await page.click("button[type=submit]");
  await expect(page).toHaveURL("/dashboard");
});
```
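The same flow can also be written with Playwright's recommended user-facing locators, which target accessible names instead of CSS selectors. A sketch — the label texts and button name below are assumptions about the page under test:

```typescript
import { test, expect } from "@playwright/test";

test("login flow with user-facing locators", async ({ page }) => {
  await page.goto("/login");
  // getByLabel/getByRole match what users see, so the test
  // survives markup refactors that would break #id selectors
  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("password123");
  await page.getByRole("button", { name: "Log in" }).click();
  await expect(page).toHaveURL("/dashboard");
});
```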
## Auto-Waiting

```typescript
// No sleep calls, no manual waitFor needed
await page.click("button.submit"); // waits until visible and enabled
await page.fill("#name", "John"); // waits until editable
await expect(page.locator(".toast")).toBeVisible(); // retries for up to 5s by default
```
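The 5-second ceiling is `expect`'s default timeout and can be raised per assertion when one element is known to be slow. A minimal sketch, assuming a `.toast` element that takes longer than usual to appear:

```typescript
import { test, expect } from "@playwright/test";

test("slow toast eventually appears", async ({ page }) => {
  await page.goto("/"); // assumes baseURL is configured
  // Override the default 5s expect timeout for this one assertion
  await expect(page.locator(".toast")).toBeVisible({ timeout: 15_000 });
});
```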
## Page Object Model

```typescript
import { Page } from "@playwright/test";

export class LoginPage {
  constructor(private page: Page) {}

  async login(email: string, password: string) {
    await this.page.locator("#email").fill(email);
    await this.page.locator("#password").fill(password);
    await this.page.locator("button[type=submit]").click();
  }
}
```
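A test then consumes the page object through the built-in `page` fixture. The import path is a hypothetical file name for the class above; the `/login` route and dashboard URL are carried over from the earlier example:

```typescript
import { test, expect } from "@playwright/test";
import { LoginPage } from "./login-page"; // assumed location of the class above

test("login via page object", async ({ page }) => {
  await page.goto("/login");
  // The page object hides selector details; the test reads as intent
  const loginPage = new LoginPage(page);
  await loginPage.login("user@example.com", "password123");
  await expect(page).toHaveURL("/dashboard");
});
```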
## API Testing

```typescript
import { test, expect } from "@playwright/test";

test("API: create user", async ({ request }) => {
  const res = await request.post("/api/users", {
    data: { name: "John", email: "john@test.com" },
  });
  expect(res.ok()).toBeTruthy();
});
```
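The `request` fixture can also parse the response body, which lets a test assert on the created resource rather than just the status. The 201 status and response shape below are assumptions about this API:

```typescript
import { test, expect } from "@playwright/test";

test("API: created user is echoed back", async ({ request }) => {
  const res = await request.post("/api/users", {
    data: { name: "John", email: "john@test.com" },
  });
  expect(res.status()).toBe(201); // assumes the API returns 201 Created
  // Assert on the parsed JSON body, ignoring extra fields like id
  const body = await res.json();
  expect(body).toMatchObject({ name: "John", email: "john@test.com" });
});
```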
## Trace Viewer

```bash
npx playwright test --trace on
npx playwright show-trace trace.zip
```

The trace viewer shows every action with screenshots, plus network requests and console logs.
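With Playwright's default HTML reporter, recorded traces can also be opened from the report UI instead of by file path, which avoids hunting for the right `trace.zip` under `test-results/`:

```shell
npx playwright test --trace on
npx playwright show-report   # each test entry links to its recorded trace
```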
## Configuration

```typescript
import { defineConfig } from "@playwright/test";

export default defineConfig({
  testDir: "./tests",
  fullyParallel: true,
  use: {
    baseURL: "http://localhost:3000",
    trace: "on-first-retry",
  },
  projects: [
    { name: "chromium", use: { browserName: "chromium" } },
    { name: "firefox", use: { browserName: "firefox" } },
    { name: "webkit", use: { browserName: "webkit" } },
  ],
});
```
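Since the tests point at `http://localhost:3000`, the config can also start the app automatically with the `webServer` option, so `npx playwright test` works without a separately running dev server. A sketch — the `npm run dev` command is an assumption about this project's scripts:

```typescript
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // ...options from above...
  webServer: {
    command: "npm run dev", // assumed dev-server script
    url: "http://localhost:3000", // Playwright waits for this URL before running tests
    reuseExistingServer: !process.env.CI, // reuse a local server, but start fresh on CI
  },
});
```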