End-to-end testing for a desktop app is different from web testing. You're not just checking that buttons click — you're verifying the integration between your frontend, your Rust backend, and (in my case) the Windows operating system.
This final post covers how I set up E2E testing for WSL-UI, including some features that proved unexpectedly useful: automated screenshot generation for Store listings and demo video recording.
## The Testing Stack
For E2E testing, I'm using:
- **WebdriverIO** — The test runner and assertion library
- **Tauri Driver** — A WebDriver server that speaks to Tauri apps
- **Mocha** — Test framework (WebdriverIO's default)
- **wdio-video-reporter** — Records test runs as video
Tauri Driver is essential. It implements the WebDriver protocol, but instead of talking to a browser it drives Tauri's WebView; on Windows it proxies commands to Microsoft Edge Driver, which controls WebView2. From your test's perspective, it looks like you're testing a website, but you're actually driving a desktop app.
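In practice a spec reads exactly like a web test. For example:

```typescript
// An ordinary WebdriverIO call; the "page" is the app's WebView
const startButton = await $('[data-testid="start-Ubuntu"]');
await startButton.click();
```

(That selector matches one used in the demo spec later in this post.)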
## Configuration
The WebdriverIO config (wdio.conf.ts) handles finding and launching the Tauri binary:
```typescript
import fs from 'node:fs';
import path from 'node:path';

function findTauriBinary(): string {
  const debugPath = path.join(__dirname, 'src-tauri/target/debug/wsl-ui.exe');
  const releasePath = path.join(__dirname, 'src-tauri/target/release/wsl-ui.exe');

  // Prefer debug build for E2E testing
  // (includes dev tools, faster build)
  if (fs.existsSync(debugPath)) {
    return debugPath;
  }
  if (fs.existsSync(releasePath)) {
    return releasePath;
  }

  throw new Error('Tauri binary not found. Run: npm run tauri build -- --debug');
}

export const config: WebdriverIO.Config = {
  runner: 'local',
  specs: ['./src/test/e2e/specs/**/*.spec.ts'],
  capabilities: [{
    'tauri:options': {
      application: findTauriBinary(),
    },
  }],
  services: ['tauri'],
  framework: 'mocha',
  reporters: ['spec'],
  mochaOpts: {
    timeout: 60000, // Desktop apps are slower than web
  },
};
```
**Important note:** You need a debug build for E2E testing. The debug build includes proper WebView2 initialization for automation. Release builds may have issues with the Origin header that Tauri Driver sends.
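The `services: ['tauri']` entry assumes a service wrapper that starts and stops the tauri-driver process for you. If you manage it yourself instead, the pattern from Tauri's WebDriver example spawns it in the session hooks. A minimal sketch, assuming tauri-driver was installed with `cargo install tauri-driver`:

```typescript
import os from 'node:os';
import path from 'node:path';
import { spawn, type ChildProcess } from 'node:child_process';

let tauriDriver: ChildProcess | undefined;

// Hooks merged into the config object above
export const hooks: Partial<WebdriverIO.Config> = {
  // Spawn tauri-driver before each session...
  beforeSession: () => {
    tauriDriver = spawn(
      path.resolve(os.homedir(), '.cargo', 'bin', 'tauri-driver'),
      [],
      { stdio: [null, process.stdout, process.stderr] },
    );
  },
  // ...and shut it down afterwards
  afterSession: () => {
    tauriDriver?.kill();
  },
};
```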
## Writing Tests
A basic test looks like this:
```typescript
describe('Distribution List', () => {
  before(async () => {
    // Wait for app to load
    await waitForAppReady();
  });

  beforeEach(async () => {
    // Reset mock state between tests
    await resetMockState();
  });

  it('should display all mock distributions', async () => {
    const cards = await $$('[data-testid="distro-card"]');
    expect(cards).toHaveLength(7); // Mock mode has 7 distros
  });

  it('should show running state for Ubuntu', async () => {
    const ubuntuCard = await $('[data-testid="distro-card-Ubuntu"]');
    const statusBadge = await ubuntuCard.$('[data-testid="status-badge"]');
    const text = await statusBadge.getText();
    expect(text).toBe('Running');
  });
});
```
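Those selectors imply a convention on the frontend side: every card exposes a generic testid for counting plus a name-specific one for targeting a single card. A hypothetical sketch of what that looks like, assuming a React frontend (the component and prop names are invented):

```tsx
// Hypothetical card component matching the selectors above
type Distro = { name: string; state: string };

function DistroCard({ distro }: { distro: Distro }) {
  return (
    <div data-testid="distro-card">
      {/* Name-specific wrapper so one card can be targeted directly */}
      <div data-testid={`distro-card-${distro.name}`}>
        <span data-testid="status-badge">{distro.state}</span>
      </div>
    </div>
  );
}
```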
The `waitForAppReady` and `resetMockState` utilities are crucial:
```typescript
export async function waitForAppReady(): Promise<void> {
  // Wait for main content to render
  const main = await $('main');
  await main.waitForDisplayed({ timeout: 10000 });

  // Give stores time to populate
  await browser.pause(500);
}

export async function resetMockState(): Promise<void> {
  // Call Tauri command to reset backend mock
  await browser.execute(async () => {
    await (window as any).__TAURI__.core.invoke('reset_mock_state_cmd');
  });

  // Reset frontend stores
  await browser.execute(() => {
    (window as any).__distroStore?.getState()?.reset();
    (window as any).__notificationStore?.getState()?.clear();
  });

  // Wait for UI to update
  await browser.pause(300);
}
```
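Note that `resetMockState` assumes the frontend exposes its stores on `window`, which doesn't happen by default. A minimal sketch of wiring that up at startup, assuming Zustand stores (the `getState()` calls above suggest as much); the store paths are hypothetical, and the guard should match however your builds are wired:

```typescript
// In the frontend entry point. Gated (here on Vite's dev flag) so
// production bundles don't leak internals onto window.
import { useDistroStore } from './stores/distroStore';
import { useNotificationStore } from './stores/notificationStore';

if (import.meta.env.DEV) {
  (window as any).__distroStore = useDistroStore;
  (window as any).__notificationStore = useNotificationStore;
}
```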
## Screenshot Generation
Here's where it gets interesting. I needed screenshots for:
- **Documentation** — README, user guide
- **Microsoft Store** — The listing requires at least 1366x768
- **GitHub Releases** — Show what's new in each version
Instead of manually capturing these, I wrote a test spec that generates them automatically:
```typescript
// screenshots.spec.ts
import fs from 'node:fs';
import path from 'node:path';

const SCREENSHOT_DIR = path.join(process.cwd(), 'docs', 'screenshots');

function ensureScreenshotDir(): void {
  if (!fs.existsSync(SCREENSHOT_DIR)) {
    fs.mkdirSync(SCREENSHOT_DIR, { recursive: true });
  }
}

async function saveScreenshot(name: string): Promise<void> {
  ensureScreenshotDir();
  const filepath = path.join(SCREENSHOT_DIR, `${name}.png`);
  await browser.saveScreenshot(filepath);
  console.log(`Screenshot saved: ${filepath}`);
}

describe('Screenshots', () => {
  before(async () => {
    await waitForAppReady();
  });

  it('captures main distribution list', async () => {
    await saveScreenshot('01-distribution-list');
  });

  it('captures distribution details', async () => {
    // Click on a distribution to show details
    const ubuntuCard = await $('[data-testid="distro-card-Ubuntu"]');
    await ubuntuCard.click();
    await browser.pause(500);
    await saveScreenshot('02-distribution-details');
  });

  it('captures settings page', async () => {
    const settingsButton = await $('[data-testid="settings-button"]');
    await settingsButton.click();
    await browser.pause(500);
    await saveScreenshot('03-settings');
  });

  it('captures create dialog', async () => {
    const createButton = await $('[data-testid="create-button"]');
    await createButton.click();
    await browser.pause(500);
    await saveScreenshot('04-create-dialog');
  });
});
```
For Microsoft Store screenshots at specific resolutions:
```bash
# Standard resolution for docs
npm run screenshots

# Store-required resolution (1920x1080)
SCREENSHOT_WIDTH=1920 SCREENSHOT_HEIGHT=1080 npm run screenshots:store
```
The config respects these environment variables:
```typescript
// In wdio.conf.ts
capabilities: [{
  'tauri:options': {
    application: findTauriBinary(),
  },
  // Window size from environment or defaults
  'wdio:windowSize': {
    width: parseInt(process.env.SCREENSHOT_WIDTH || '1280'),
    height: parseInt(process.env.SCREENSHOT_HEIGHT || '800'),
  },
}],
```
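If a driver ignores that capability, resizing from a hook achieves the same thing with WebdriverIO's standard `browser.setWindowSize`. A sketch:

```typescript
// In wdio.conf.ts: fallback resize once the session exists
before: async () => {
  await browser.setWindowSize(
    parseInt(process.env.SCREENSHOT_WIDTH || '1280', 10),
    parseInt(process.env.SCREENSHOT_HEIGHT || '800', 10),
  );
},
```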
## Video Recording
For demo videos, I added wdio-video-reporter:
```typescript
import video from 'wdio-video-reporter';

export const config: WebdriverIO.Config = {
  // ... other config
  reporters: [
    'spec',
    [video, {
      saveAllVideos: true,
      videoSlowdownMultiplier: parseInt(process.env.VIDEO_SPEED || '1'),
      outputDir: './docs/videos',
      videoScale: '-1:-1', // Preserve original resolution
      videoFormat: 'mp4',
    }],
  ],

  // Longer timeout when recording
  mochaOpts: {
    timeout: process.env.RECORD_VIDEO === '1' ? 300000 : 60000,
  },
};
```
The demo spec walks through the app's features:
```typescript
// demo.spec.ts
describe('Demo Recording', () => {
  before(async () => {
    await waitForAppReady();
  });

  it('demonstrates WSL-UI features', async () => {
    // Show the main list
    await browser.pause(2000);

    // Start a distribution
    const startButton = await $('[data-testid="start-Ubuntu"]');
    await startButton.click();
    await browser.pause(1500);

    // Open quick actions menu
    const menuButton = await $('[data-testid="menu-Ubuntu"]');
    await menuButton.click();
    await browser.pause(1000);

    // Navigate to terminal
    const terminalOption = await $('=Open Terminal');
    await terminalOption.click();
    await browser.pause(2000);

    // Show settings
    const settingsButton = await $('[data-testid="settings-button"]');
    await settingsButton.click();
    await browser.pause(2000);

    // Toggle dark mode
    const themeToggle = await $('[data-testid="theme-toggle"]');
    await themeToggle.click();
    await browser.pause(1500);

    // Return to main view
    const backButton = await $('[data-testid="back-button"]');
    await backButton.click();
    await browser.pause(2000);
  });
});
```
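One refinement worth considering: the hard-coded `browser.pause` calls encode the demo's pacing by hand. A small hypothetical helper would let you retime the whole walkthrough from a single env var:

```typescript
// Hypothetical pacing helper: scale every demo pause from one place
const PACE = parseFloat(process.env.DEMO_PACE || '1');

async function pace(ms: number): Promise<void> {
  await browser.pause(ms * PACE);
}

// e.g. replace `await browser.pause(1500)` with `await pace(1500)`
```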
Run it with:
```bash
# Standard speed, 1280x720
npm run demo

# Full HD resolution
DEMO_WIDTH=1920 DEMO_HEIGHT=1080 npm run demo:hd

# Half-speed playback (useful for tutorials)
VIDEO_SPEED=2 npm run demo:slow
```
Here's an example of the generated demo video:
[WSL UI demo video](https://cdn.jsdelivr.net/gh/octasoft-ltd/wsl-ui@v0.14.0/docs/videos/wsl-ui-demo.mp4) (poster image: [main distro list screenshot](https://cdn.jsdelivr.net/gh/octasoft-ltd/wsl-ui@v0.14.0/docs/screenshots/main-distro-list.png))
## CI Integration
The E2E tests run in GitHub Actions, but with some considerations:
```yaml
e2e-tests:
  runs-on: windows-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: '20'

    - name: Install Rust
      uses: dtolnay/rust-toolchain@stable

    - name: Install dependencies
      run: npm ci

    - name: Build Tauri (debug)
      run: npm run tauri build -- --debug --no-bundle

    - name: Start Tauri Driver
      run: |
        npx tauri-driver &
      shell: bash

    - name: Run E2E tests
      run: npm run test:e2e
      env:
        WSL_UI_MOCK_MODE: 'true' # Always use mock in CI

    - name: Upload test artifacts
      if: always()
      uses: actions/upload-artifact@v4
      with:
        name: e2e-results
        path: |
          test-results/
          docs/screenshots/
          docs/videos/
```
Important notes:

- **Mock mode is mandatory** — CI runners don't have WSL installed (a guard sketch follows this list)
- **Debug build required** — Release builds have WebDriver issues
- **Artifact upload on failure** — Videos are invaluable for debugging CI failures
- **Windows runner** — WebView2 isn't available on Linux runners
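Since a run against real WSL would mutate actual distributions, it's worth failing fast if mock mode is off. A sketch, assuming a hypothetical `get_mock_mode_cmd` alongside the reset command shown earlier:

```typescript
// Hypothetical guard: abort the suite unless the backend is mocked
before(async () => {
  const mockMode = await browser.execute(() =>
    (window as any).__TAURI__.core.invoke('get_mock_mode_cmd'),
  );
  if (!mockMode) {
    throw new Error('E2E tests must run with WSL_UI_MOCK_MODE=true');
  }
});
```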
## Test Organization
The test suite grew to 33 spec files covering:
| Category | Tests | Coverage |
|---|---|---|
| Distribution lifecycle | 8 specs | Start, stop, terminate, delete |
| Import/Export | 5 specs | Tar files, cloning |
| Container imports | 3 specs | OCI, Podman integration |
| Renaming | 2 specs | Name validation, registry updates |
| Settings | 4 specs | Global WSL options, themes |
| Error handling | 6 specs | Timeouts, command failures |
| Accessibility | 3 specs | Keyboard navigation |
| Screenshots/Demo | 2 specs | Asset generation |
The screenshot and demo specs are excluded from normal test runs:
```typescript
// In wdio.conf.ts
exclude: process.env.INCLUDE_ALL_SPECS === '1' ? [] : [
  './src/test/e2e/specs/screenshots.spec.ts',
  './src/test/e2e/specs/demo.spec.ts',
],
```
They're only run when explicitly requested for asset generation.
## Lessons Learned
- **Mock mode is essential** — E2E tests need reproducible state
- **Debug builds for automation** — Release builds can have WebDriver issues
- **Automate screenshots** — Manual capture is tedious and inconsistent
- **Video for debugging** — When a CI test fails, the recording shows exactly what happened
- **Reasonable timeouts** — Desktop apps are slower than web; 60 seconds per test is reasonable
## What's Next
The technical infrastructure is solid — mock mode for testing, automated screenshots and videos, CI that catches regressions. But there's a side of building WSL-UI I haven't talked about yet: the sheer amount of time spent on polish.
As someone who spent years as a backend developer, UI work was an eye-opener. Next up: the endless polish phase, edge cases everywhere, and adding privacy-first analytics with Aptabase.
## Try It Yourself
WSL-UI is open source and available on:
- **Microsoft Store**: Search for "WSL UI" or visit the store listing
- **Direct Download**: Official Website
- **GitHub**: github.com/octasoft-ltd/wsl-ui
Originally published at https://wsl-ui.octasoft.co.uk/blog/building-wsl-ui-e2e-testing

