Here’s a detailed overview of an automation testing framework that combines UI testing of desktop applications with hardware instrumentation testing. By leveraging Appium, MSTest, Selenium WebDriver, and a CI/CD pipeline on self-hosted agents, tests can validate both software workflows and hardware interactions.
UI & Hardware Testing Overview
- Frameworks & Tools: MSTest, Appium C# client, Selenium WebDriver
- UI Recording: Windows Application Driver (WinAppDriver) Recorder captures user interactions for test script generation
- Hardware Testing Integration: Connected hardware can be controlled and monitored during automation tests via UI workflows
Setting Up UI Automation Infrastructure
1. Install Windows Application Driver (WinAppDriver)
- Download: GitHub Releases
- Install the `.msi` package
- Enable Developer Mode: Settings > Update & Security > For Developers
- Reboot if prompted
2. Installing Appium Server
(1) Install Node.js LTS from nodejs.org (includes npm)
(2) Install Appium globally:
`npm install -g appium`
(3) Install the Windows driver plugin:
`appium driver install --source=npm appium-windows-driver`
(4) Verify installed drivers:
`appium driver list`
(5) Start the Appium server:
`appium`
Server URL: http://127.0.0.1:4723
Running UI + Hardware Tests Locally in Visual Studio
- Ensure dependencies: `Appium.WebDriver` 5.0.0, `Selenium.WebDriver` 4.21.0
- Confirm application under test, Appium server, drivers (WinAppDriver), and connected hardware are running
- Open Test Explorer in Visual Studio
- Build solution to resolve dependencies
- Discover and run test methods; debug using breakpoints if needed
- Hardware interactions are triggered through UI automation flow, allowing validation of hardware responses, data acquisition, and telemetry checks
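The two package versions above can be pinned in the test project file. A minimal sketch of the relevant `.csproj` fragment (the MSTest package names and versions are assumptions added for completeness, not from the original setup):

```xml
<ItemGroup>
  <!-- UI automation clients, versions as listed above -->
  <PackageReference Include="Appium.WebDriver" Version="5.0.0" />
  <PackageReference Include="Selenium.WebDriver" Version="4.21.0" />
  <!-- MSTest runner packages; pick versions matching your SDK -->
  <PackageReference Include="MSTest.TestFramework" Version="3.2.0" />
  <PackageReference Include="MSTest.TestAdapter" Version="3.2.0" />
</ItemGroup>
```

With these in place, a plain `dotnet build` restores everything Test Explorer needs to discover the tests.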
Extending to Instrumentation & Hardware Testing
- Incorporate custom waits, event subscriptions, and telemetry assertions
- Synchronize with hardware responses using `WebDriverWait` or custom polling
- Collect logs, events, and performance metrics to validate software and hardware behavior
- Enables end-to-end validation: from UI actions to hardware output
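The synchronization point above can be sketched as a small helper that blocks the test until the device reports ready. `DeviceStatusLabel` and the `"Ready"` text are hypothetical names standing in for whatever UI element surfaces the connected device's state:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Appium;
using OpenQA.Selenium.Support.UI;

public static class HardwareSync
{
    // Poll the UI until the hardware-status element reports "Ready",
    // so UI steps never run ahead of the device.
    public static void WaitForDeviceReady(IWebDriver session, TimeSpan timeout)
    {
        var wait = new WebDriverWait(session, timeout)
        {
            PollingInterval = TimeSpan.FromMilliseconds(500)
        };
        // Ignore transient lookup failures while the device initializes.
        wait.IgnoreExceptionTypes(typeof(NoSuchElementException));
        wait.Until(d =>
            d.FindElement(MobileBy.AccessibilityId("DeviceStatusLabel"))
             .Text == "Ready");
    }
}
```

Calling `HardwareSync.WaitForDeviceReady(_session, TimeSpan.FromSeconds(30))` after a UI action that triggers the device keeps telemetry assertions from firing too early.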
Automation Best Practices
- Assign unique Automation IDs to all UI elements
- Prefer `AutomationId` or `Name` over XPath
- Verify IDs at runtime using Inspect.exe:
`C:\Program Files (x86)\Windows Kits\10\bin\<version>\x64\Inspect.exe`
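As a sketch of that lookup order (`SaveButton` and `Cancel` are hypothetical element names, not from the app under test):

```csharp
// Preferred: AutomationId maps to AccessibilityId and survives layout changes.
var save = _session.FindElement(MobileBy.AccessibilityId("SaveButton"));

// Acceptable: Name-based lookup when no AutomationId was assigned.
var cancel = _session.FindElement(By.Name("Cancel"));

// Avoid: XPath tied to the visual tree; it breaks when the layout shifts.
// var save = _session.FindElement(By.XPath("//Button[@Name='Save']"));
```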
Sample Test Code (C#)
```csharp
using OpenQA.Selenium.Appium;
using OpenQA.Selenium.Appium.Windows;
using OpenQA.Selenium.Support.UI;

// WinAppDriverUrl is the server URL above (http://127.0.0.1:4723);
// AppPath points at the application under test.
private WindowsDriver _session;
private WebDriverWait _wait;

[TestInitialize]
public void Setup()
{
    var options = new AppiumOptions();
    options.PlatformName = "Windows";
    options.App = AppPath;
    options.DeviceName = "WindowsPC";
    options.AutomationName = "Windows";
    options.AddAdditionalAppiumOption("newCommandTimeout", 300);

    // Appium.WebDriver 5.x: WindowsDriver is no longer generic
    // (WindowsDriver<WindowsElement> was the 4.x API).
    _session = new WindowsDriver(new Uri(WinAppDriverUrl), options);
    _session.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(5);
    _wait = new WebDriverWait(_session, TimeSpan.FromSeconds(60));
}

[TestMethod]
public void ClickAddFromLibraryButton()
{
    var element = _wait.Until(d =>
        d.FindElement(MobileBy.AccessibilityId("SizeSelection_AddFromLibrary")));
    element.Click();
    // Hardware interaction can be triggered via this UI flow
}
```
- Use UI Recorder to generate XPath and action scripts
- For hardware testing, add waits, log captures, and telemetry validations after UI actions
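A matching cleanup keeps sessions from leaking between tests. A minimal sketch that pairs with the `Setup` above:

```csharp
[TestCleanup]
public void TearDown()
{
    // Quit closes the app and releases the WinAppDriver session,
    // so the next test (and the self-hosted agent) starts clean.
    _session?.Quit();
    _session = null;
}
```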
CI/CD Integration (Azure DevOps Example)
- Nightly Build: The application is built every night.
- UI Test Binaries Published: Post-build, UI test binaries are published as artifacts.
- Scheduled Deployment: The application is deployed to the self-hosted machine according to the schedule.
- Automation Pipeline Execution: Once deployment completes, the automation pipeline runs:
- Tests execute the UI workflows
- If any workflow involves hardware interaction, connected devices on the self-hosted machine are automatically tested
- Environment & Artifact Separation: Test binaries run independently of the main build, ensuring reproducibility and stable CI/CD practices
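The flow above might look roughly like this in Azure DevOps YAML. The pool, artifact, and assembly names are placeholders, not from the original pipeline:

```yaml
# Nightly schedule; 'always: true' runs even without new commits.
schedules:
  - cron: "0 2 * * *"
    displayName: Nightly UI + hardware test run
    branches:
      include: [main]
    always: true

pool:
  name: SelfHostedHardwareAgents   # agent machine with devices attached

steps:
  - download: current
    artifact: ui-test-binaries     # published by the nightly build

  - task: VSTest@2
    inputs:
      testSelector: testAssemblies
      testAssemblyVer2: |
        **\*UiTests*.dll
        !**\*TestAdapter.dll
      searchFolder: $(Pipeline.Workspace)/ui-test-binaries
```

Running the tests from the downloaded artifact, rather than rebuilding on the agent, is what gives the environment and artifact separation described above.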
Key Takeaways:
- End-to-end automation covers UI workflows and connected hardware
- Proper waits and telemetry assertions enable instrumentation and hardware testing
- CI/CD pipelines with scheduled builds and deployments ensure reliable, automated, repeatable testing
- Unique Automation IDs and robust waits make tests maintainable and scalable
#UIAutomation #HardwareTesting #InstrumentationTesting #Appium #WinAppDriver #MSTest #Selenium #WindowsApps #CI_CD #QualityEngineering #AutomationBestPractices
Top comments (2)
Mixing UI automation with hardware testing can get tricky real quick. One of the biggest headaches is timing: UI actions usually run faster than hardware can respond, so getting everything to sync up nicely can be a bit of a juggling act. Then there's the environment setup… it's no walk in the park. You've got to line up a bunch of device-specific dependencies and nail those configurations, or things just won't play nice. On top of that, hardware tends to throw in a little randomness: test results can be all over the place, which makes test stability a whole other challenge. You've really got to keep your test data tight and well-managed. Now, if you're using Appium with WinAppDriver for desktop testing, props to you, but be ready for some bumps. These tools weren't exactly made to be besties, so getting them to work together smoothly takes some effort.
@onlineproxy
Totally agree — timing can get messy fast when the UI runs way ahead of the hardware. In our pipeline, we handle that by building in device-specific waits and event checks so the tests only move forward once the hardware is actually ready. The hardware sits on a dedicated self-hosted Azure DevOps agent with a fixed setup and all dependencies pre-loaded, which keeps things consistent. Test data is kept squeaky clean to avoid those ‘random’ results.
For us, Appium + WinAppDriver worked pretty smoothly out of the box — just running Appium on Node.js which automatically starts WinAppDriver was enough to get reliable desktop automation.
The payoff is huge — full end-to-end confidence that both the app and the hardware are working exactly as they should before a release goes out.