Discussion on: Automated Accessibility Testing

Dylan Lacey

Excellent article! I wonder if you could use automation tools as a guard to make sure you're attempting to catch some of the human-level problems?

I'm imagining something like an analysis of the ARIA information, making sure the labels correspond to IDs or surrounding text in some way (and asking a human to agree at least once that they do)...

Or perhaps story-telling? Video screenshots in tab order, or scripts of how a page flows from top to bottom in terms of accessibility labels.

Nothing that "solves" the problem, just using automation to make sure someone is actively thinking about accessibility all the time. (This is one reason I like testing frameworks that use accessibility selectors... They're not a guarantee but at least, if they suck, the developers suffer as well)

Tyler Hawkins

Thank you! This is exactly the kind of problem that has been keeping me up at night lately haha. Is it possible to build better automated tools for accessibility that can incorporate these human ideas? And if so, how?

> I'm imagining something like an analysis of the ARIA information, making sure the labels correspond to IDs or surrounding text in some way (and asking a human to agree at least once that they do)...

That could be interesting. A drawback is that this wouldn't be able to catch missing information, like a button that opens a dropdown menu and should have something like aria-haspopup="true" on it. So that comes back to the same pitfall of static analysis checkers: they can't identify where technically "valid" HTML is actually missing important attributes, or where a more appropriate semantic HTML element should have been used.
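That said, the ID-correspondence half of the idea seems automatable today. Here's a rough sketch (the function name is mine, not any real library) that flags aria-labelledby or aria-describedby values that don't resolve to any element on the page. Whether the referenced text actually makes sense as a label would still need the human sign-off you described:

```typescript
// Minimal sketch: verify that every aria-labelledby / aria-describedby
// reference on the page resolves to an element that actually exists.
// Assumes a browser (or jsdom) environment where `document` is available.
function findDanglingAriaReferences(root: Document = document): string[] {
  const problems: string[] = [];
  root.querySelectorAll("[aria-labelledby], [aria-describedby]").forEach((el) => {
    for (const attr of ["aria-labelledby", "aria-describedby"]) {
      const value = el.getAttribute(attr);
      if (!value) continue;
      // Both attributes accept a space-separated list of IDs.
      for (const id of value.split(/\s+/)) {
        if (!id) continue;
        if (!root.getElementById(id)) {
          problems.push(
            `${attr}="${id}" on <${el.tagName.toLowerCase()}> points to a missing element`
          );
        }
      }
    }
  });
  return problems;
}
```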

> Or perhaps story-telling? Video screenshots in tab order, or scripts of how a page flows from top to bottom in terms of accessibility labels.

This could be interesting as well! At one of my past jobs we were looking at incorporating visual snapshot tests from Chromatic, which can tell you if the UI has changed at all. This basically catches those hard-to-test-for CSS changes and other things that don't really work as unit tests. It was unrelated to accessibility, but the idea is the same: it generates a snapshot, and a human can then verify whether the UI still looks good. So what you're suggesting could be a nice quick sanity check, while still requiring human verification.
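A rough sketch of what that could look like with Playwright (the URL and the tab-stop count are placeholders for a real app): press Tab repeatedly, log which element holds focus, and save a screenshot at each stop, so a human can flip through the focus order like a storyboard.

```typescript
import { test } from "@playwright/test";

// Sketch of the "story-telling" idea: one screenshot per tab stop,
// plus a log line describing the focused element at each step.
test("capture the page's tab order as screenshots", async ({ page }) => {
  await page.goto("http://localhost:3000"); // placeholder URL
  for (let i = 0; i < 15; i++) {
    await page.keyboard.press("Tab");
    // Describe which element holds focus so the output reads like a script.
    const focused = await page.evaluate(() => {
      const el = document.activeElement;
      if (!el) return "(none)";
      const label = el.getAttribute("aria-label") ?? el.textContent?.trim() ?? "";
      return `<${el.tagName.toLowerCase()}> ${label}`;
    });
    console.log(`Tab stop ${i + 1}: ${focused}`);
    await page.screenshot({ path: `tab-stop-${String(i + 1).padStart(2, "0")}.png` });
  }
});
```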

> This is one reason I like testing frameworks that use accessibility selectors

Absolutely! That's a huge benefit of using React Testing Library for the frontend unit tests. We can use queries like getByRole to find a button or a listbox or an option, so that bakes into the test the assertion that the element in question does in fact have the correct role.
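A quick sketch of what that looks like in practice (FruitPicker is a hypothetical component that renders a button labeled "Fruit" which opens a listbox; this assumes a Jest-style runner):

```tsx
import { render, screen, within } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { FruitPicker } from "./FruitPicker"; // hypothetical component

test("the picker exposes a button, a listbox, and options", async () => {
  render(<FruitPicker options={["Apple", "Banana"]} />);
  // getByRole throws if no element has the role, so the correct
  // semantics are asserted just by locating the elements.
  await userEvent.click(screen.getByRole("button", { name: /fruit/i }));
  const listbox = screen.getByRole("listbox");
  expect(within(listbox).getAllByRole("option")).toHaveLength(2);
});
```

If the markup loses its roles or accessible names, the queries throw and the test fails before any assertion even runs.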

Thinking about other ways to automate things, I've been toying with the idea of writing a test library that runs basic accessibility tests against various widget types. For example, you would specify in the test setup that you're testing a modal, give it references to selectors for the trigger button and so on, and it would run through the acceptance criteria I've outlined above for modals. So it's still basically unit tests, but it saves you, as the developer, the time of writing all those tests out yourself. It would require a little more configuration on the developer's part, plus knowledge of the correct widget types and expectations, but it has the added benefit of running the appropriate tests for the widget, rather than relying on a set of basic rules and missing the specific rules that apply to the widget's intended use case. This is still a working idea, but maybe it's a step in the right direction.
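As a sketch of what that API could look like for the modal case (everything here is hypothetical: the helper name, the config shape, and the exact criteria it checks):

```typescript
import { screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";

// Hypothetical config: the caller supplies a way to render the page
// and the accessible name of the button that opens the modal.
interface ModalTestConfig {
  renderPage: () => void;
  triggerName: string | RegExp;
}

export function runModalAccessibilityTests({ renderPage, triggerName }: ModalTestConfig) {
  test("the trigger opens a dialog and moves focus into it", async () => {
    renderPage();
    await userEvent.click(screen.getByRole("button", { name: triggerName }));
    const dialog = screen.getByRole("dialog");
    expect(dialog.contains(document.activeElement)).toBe(true);
  });

  test("Escape closes the dialog and restores focus to the trigger", async () => {
    renderPage();
    const trigger = screen.getByRole("button", { name: triggerName });
    await userEvent.click(trigger);
    await userEvent.keyboard("{Escape}");
    expect(screen.queryByRole("dialog")).toBeNull();
    expect(document.activeElement).toBe(trigger);
  });
}
```

A test file would then just call `runModalAccessibilityTests({ renderPage: () => render(<App />), triggerName: /open settings/i })` and get the standard modal checks for free.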