Bhawana
How to Build a Device Farm: A Practical Guide

Mobile test coverage fails when you are guessing which devices your users own. A proper device farm, whether physical or cloud-based, solves that problem systematically. This guide walks through the architecture decisions, tooling choices, and trade-offs you need to make before you spin up a single device.

What You Are Actually Building

A device farm is an orchestrated pool of real devices and emulators connected to a test runner that can execute your test suite across multiple device and OS combinations in parallel.

The components you need:

  • Device layer: Physical devices, emulators, or simulators (usually a mix)
  • Connectivity layer: USB hubs, ADB servers, or cloud device APIs
  • Orchestration layer: A system that allocates devices to test jobs and collects results
  • Test framework: Appium, Espresso, XCUITest, or similar
  • CI integration: Jenkins, GitHub Actions, GitLab CI, or your pipeline of choice

Each layer introduces its own failure modes. Understanding them upfront saves you weeks of debugging later.
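The orchestration layer's core job is matching queued test jobs to idle devices. A minimal sketch of that allocation loop (hypothetical class and method names, not any real scheduler's API):

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Device:
    serial: str
    os_version: str
    busy: bool = False

@dataclass
class Orchestrator:
    devices: list
    queue: deque = field(default_factory=deque)

    def submit(self, job_id, os_version):
        """Queue a test job that requires a given OS version."""
        self.queue.append((job_id, os_version))

    def dispatch(self):
        """Assign each queued job to the first idle device matching its OS requirement."""
        assigned = []
        for job_id, os_version in list(self.queue):
            device = next(
                (d for d in self.devices if not d.busy and d.os_version == os_version),
                None,
            )
            if device:
                device.busy = True
                assigned.append((job_id, device.serial))
                self.queue.remove((job_id, os_version))
        return assigned
```

A real orchestrator adds timeouts, device health checks, and result collection on top of this loop, but the allocate-and-mark-busy core stays the same.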

Option 1: Self-Hosted Physical Device Farm

Hardware Selection

Start with your actual user analytics. Pull your top device models by session count, then add OS version coverage on top of that. A practical starting matrix for a mid-size app:

  • 10 to 15 Android devices across 3 to 4 OS versions (Android 11 through 14 at minimum)
  • 5 to 8 iOS devices covering iPhone and iPad form factors
  • Representation of low-end, mid-range, and flagship hardware

Do not try to cover everything upfront. Start with your top 80% of user devices and expand from there.
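Picking that top-80% set from analytics is a simple greedy computation. A sketch, assuming you can export session counts per device model:

```python
def coverage_matrix(session_counts, target=0.80):
    """Pick the smallest set of device models covering `target` share of sessions.

    session_counts: dict mapping device model -> session count from analytics.
    """
    total = sum(session_counts.values())
    selected, covered = [], 0
    # Greedily take the most popular models until the coverage target is hit
    for model, count in sorted(session_counts.items(), key=lambda kv: -kv[1]):
        if covered / total >= target:
            break
        selected.append(model)
        covered += count
    return selected
```

Run this against each quarter's analytics export; models that drop out of the result are candidates for retirement from the rack.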

ADB Server Setup for Android

Run a dedicated ADB server host per rack to avoid USB enumeration conflicts:

# Kill any existing ADB server
adb kill-server

# Start fresh, listening on all network interfaces with an explicit port
adb -a -P 5037 start-server

# Verify connected devices
adb devices -l

For parallel test execution, each device needs a unique serial. Assign static USB port mappings in udev rules to prevent device serial collisions after reboots:

# /etc/udev/rules.d/99-android.rules
SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", SYMLINK+="android_%k", MODE="0666"
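It pays to verify enumeration after every rack reboot. A small sketch that parses the raw output of `adb devices -l` and flags duplicate or non-ready serials (the sample output in the test is illustrative):

```python
def check_enumeration(adb_output):
    """Parse `adb devices -l` output; return (duplicate_serials, offline_serials)."""
    seen, duplicates, offline = set(), [], []
    # First line is the "List of devices attached" header
    for line in adb_output.strip().splitlines()[1:]:
        parts = line.split()
        if len(parts) < 2:
            continue
        serial, state = parts[0], parts[1]
        if serial in seen:
            duplicates.append(serial)
        seen.add(serial)
        if state != "device":  # e.g. "offline" or "unauthorized"
            offline.append(serial)
    return duplicates, offline
```

Wire this into a cron job or pre-suite hook and fail fast instead of letting a duplicated serial silently route tests to the wrong device.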

Appium Grid Setup

Use Appium with Selenium Grid or Appium's native parallel capabilities. A basic Node config for a two-device setup:

{
  "server": {
    "port": 4723,
    "basePath": "/wd/hub",
    "nodeconfig": {
      "capabilities": [
        {
          "browserName": "Android",
          "version": "13",
          "platform": "ANDROID",
          "maxInstances": 1,
          "deviceName": "Pixel_6_001"
        }
      ],
      "maxSession": 1
    }
  }
}

Scale this horizontally by adding nodes, but expect USB stability to become your primary operational challenge above 20 devices.

The Maintenance Problem

Self-hosted farms break in predictable ways:

  • ADB over USB drops connections silently during long test runs
  • OS updates push to devices overnight and break test sessions
  • Batteries degrade and cause devices to shut down mid-suite
  • App caches accumulate and produce false failures

Budget at least 20% of an engineer's time for ongoing maintenance if you run more than 20 devices. That number climbs fast.
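The battery problem in particular is cheap to catch before a suite starts. A sketch that parses the output of `adb shell dumpsys battery` and pulls weak devices from the pool (threshold and helper names are assumptions):

```python
def battery_level(dumpsys_output):
    """Extract the battery percentage from `adb shell dumpsys battery` output."""
    for line in dumpsys_output.splitlines():
        line = line.strip()
        if line.startswith("level:"):
            return int(line.split(":")[1])
    return None

def flag_weak_devices(levels, threshold=30):
    """Return serials whose battery is below the threshold.

    levels: dict mapping serial -> battery percent (or None if unreadable).
    """
    return [s for s, lvl in levels.items() if lvl is not None and lvl < threshold]
```

Run the check in your pre-suite hook and reallocate flagged devices to a charging slot rather than letting them die mid-run.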

Option 2: Cloud Device Farm

Cloud device farm platforms handle the hardware layer for you. You connect via the WebDriver protocol, the same way you would with a local Appium setup, and the platform allocates a physical device to your session.

Appium Capabilities for Cloud Execution

from appium import webdriver

desired_caps = {
    "platformName": "Android",
    "platformVersion": "13",
    "deviceName": "Samsung Galaxy S23",
    "app": "lt://APP_ID",
    "build": "v2.4.1",
    "name": "Login flow test",
    "isRealMobile": True,
    "network": True,
    "visual": True,
    "console": True,
    "devicelog": True
}

# Note: Appium Python Client v3+ replaces desired_capabilities with an
# options object, e.g. UiAutomator2Options().load_capabilities(desired_caps)
driver = webdriver.Remote(
    command_executor="https://mobile-hub.testmuai.com/wd/hub",
    desired_capabilities=desired_caps
)

The key difference from local Appium: device allocation, teardown, and log collection are all managed by the platform. Your test code stays identical.

Adding Virtual Devices for Broader Coverage

For OS combinations that are impractical to maintain physically, virtual devices fill the gap. Emulators and simulators spin up on demand and cover edge-case OS versions without any hardware dependency.

A practical split that works for most teams:

  • Real devices: Critical user flows, payment paths, biometric features, hardware-dependent behavior
  • Virtual devices: Broad OS version coverage, UI regression suites, accessibility checks
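That split can live as a small routing table in your test config, so suite code never hard-codes a device tier. A sketch with hypothetical category names:

```python
REAL, VIRTUAL = "real", "virtual"

# Hypothetical routing table mapping test categories to device tiers
ROUTING = {
    "payment": REAL,
    "biometric": REAL,
    "critical_flow": REAL,
    "ui_regression": VIRTUAL,
    "accessibility": VIRTUAL,
    "os_coverage": VIRTUAL,
}

def device_tier(category):
    """Route a test category to real or virtual devices; default to real when unsure."""
    return ROUTING.get(category, REAL)
```

Defaulting unknown categories to real devices is the conservative choice: a false failure on an emulator is cheaper to tolerate than a real-device bug slipping through on a simulator.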

Parallel Execution

Running tests serially across 50 devices defeats the purpose. Structure your test suite to run in parallel from the start.

With pytest and Appium:

# pytest.ini
[pytest]
addopts = -n auto --dist=loadscope

# conftest.py
import pytest
from appium import webdriver

# base_caps and hub_url are assumed to be defined in your shared test config
# (platform, app URL, credentials)

@pytest.fixture(scope="function", params=["Samsung Galaxy S23", "Pixel 7", "OnePlus 11"])
def driver(request):
    caps = base_caps.copy()
    caps["deviceName"] = request.param
    driver = webdriver.Remote(hub_url, caps)
    yield driver
    driver.quit()

This spins up one session per device in the params list and runs your test against all three concurrently. Parallel testing at the framework level is what turns a device farm into a fast feedback loop rather than a slow queue.

CI Pipeline Integration

Wire your device farm into CI so tests run automatically on every pull request. GitHub Actions example:

name: Mobile Device Farm Tests

on:
  pull_request:
    branches: [main, develop]

jobs:
  device-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"

      - name: Install dependencies
        run: pip install appium-python-client pytest pytest-xdist

      - name: Run device suite
        env:
          LT_USERNAME: ${{ secrets.LT_USERNAME }}
          LT_ACCESS_KEY: ${{ secrets.LT_ACCESS_KEY }}
        run: pytest tests/mobile/ -n 4 --tb=short

Store credentials as secrets, never in the repository. Use separate test plans for PR checks versus full regression runs to keep CI feedback under 10 minutes.
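One way to keep those plans in one place is a small mapping from CI trigger to a pytest marker expression (the marker names and plan mapping here are assumptions, not part of any standard):

```python
# Hypothetical mapping of CI event to a pytest -m marker expression,
# so PR checks stay fast while scheduled runs cover the full matrix
PLANS = {
    "pull_request": "smoke",            # critical paths only, ~10 min budget
    "schedule": "smoke or regression",  # full nightly sweep
}

def pytest_args(event, default="smoke"):
    """Build the pytest invocation for a given CI event."""
    return ["pytest", "tests/mobile/", "-m", PLANS.get(event, default), "-n", "4"]
```

Your CI step then calls this with the event name it already exposes (for GitHub Actions, `github.event_name`), so the plan logic is versioned with the tests rather than scattered across workflow files.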

Choosing Between Self-Hosted and Cloud

Factor               | Self-Hosted             | Cloud
---------------------|-------------------------|---------------------
Upfront cost         | High                    | Low
Ongoing maintenance  | High (20%+ eng time)    | Minimal
Device coverage      | Limited by budget       | Hundreds of devices
Custom hardware      | Possible                | Rare
Scalability          | Manual                  | On-demand
Data sovereignty     | Full control            | Depends on provider

For automated device testing at scale, cloud wins on total cost of ownership for most teams. Self-hosted makes sense when you need custom hardware or have hard data locality requirements.

What to Test on Your Device Farm

Structure your device matrix tests around these categories:

  1. Critical paths: Login, checkout, core user flows. Run on real devices always.
  2. UI regression: Layout, font scaling, dark mode. Virtual devices are fine here.
  3. Performance: App launch time, scroll FPS, memory usage. Real devices give accurate numbers.
  4. Network conditions: Test with throttled connections using network simulation. Cloud mobile testing platforms typically expose this as a capability flag.
  5. OS-specific behavior: Camera, permissions, notifications. Real devices required.

Start Small, Scale Deliberately

Do not try to build the full farm on day one. Start with five to ten devices or a cloud trial, get your Appium setup stable, then expand coverage as your test suite matures.

The infrastructure is straightforward once it is working. The hard part is maintaining discipline around what you test, on which devices, and how you interpret the results. A device farm with 100 devices and no clear test strategy is slower and noisier than one with 15 devices and a focused suite.

TestMu AI gives you the device layer and orchestration out of the box so you can focus on writing tests rather than managing hardware. Worth evaluating before you commit to the self-hosted route.
