One thing that is often asked when I am talking about accessibility is how you evaluate it on a website or service.
There isn't an easy answer to this question as it is a complex mix of technical implementation, design, and tooling.
The most recent evaluation I did was with an interaction designer. We started by using a Miro board to document the high-level flow of the service.
This helps me understand:
- what the service is doing
- what sort of patterns are included
- any sticking points
This gives us a visual representation to document issues against.
The first thing I always do is get a feel for the page:
- Does it have landmarks?
- Does it have form elements?
- Does it have any interactive elements?
- Does the heading outline of the document follow a standard order?
- Does it respond when you change browser width?
Usually, this will help narrow down what sort of issues might happen on a page.
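The heading-outline check is one of the few on that list you can script. As a sketch (the function name and examples are my own), this flags places where the outline skips a level, assuming you've already pulled the heading levels out of the page — for instance via `document.querySelectorAll('h1,h2,h3,h4,h5,h6')` in the DevTools console:

```javascript
// Flag places where the heading outline skips a level,
// e.g. an h2 followed directly by an h4.
function headingOrderIssues(levels) {
  const issues = [];
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] > levels[i - 1] + 1) {
      issues.push(`position ${i}: h${levels[i - 1]} jumps to h${levels[i]}`);
    }
  }
  return issues;
}

// In the browser you could collect the levels like this:
// const levels = [...document.querySelectorAll('h1,h2,h3,h4,h5,h6')]
//   .map(h => Number(h.tagName[1]));

console.log(headingOrderIssues([1, 2, 2, 4])); // flags the h2 → h4 jump
```

Going back *up* the outline (h3 to h2) is fine, which is why the check only looks at jumps downwards.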
From here I'll run some browser plugins to see if anything obvious is flagged. These are by no means the only automated checks, and we need to appreciate that these sorts of plugins will only flag a limited selection of issues.
"Our research backs this up. While the tools picked up the majority of the accessibility barriers we created – 71% – there was a large minority that would only have been picked up by manual checking." from What we found when we tested tools on the world’s least-accessible webpage.
Often there is a lot of subtlety to accessibility issues: semantically an image may have an alt attribute, but that doesn't mean the text accurately describes the image.
The manual check largely falls into two parts: interaction and understanding.
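That limit is easy to see in code. A heuristic like this sketch (my own, not from any particular plugin) can only flag alt text that is *obviously* not a description — a filename or a placeholder word — while judging whether plausible-looking alt text is accurate still needs a human:

```javascript
// Heuristic only: flags alt text that is obviously not a description.
// It cannot tell you whether good-looking alt text is actually accurate.
function suspiciousAltText(alt) {
  if (alt === undefined || alt === null) return true; // missing attribute
  const trimmed = alt.trim();
  if (trimmed === '') return false; // empty alt is valid for decorative images
  if (/\.(png|jpe?g|gif|svg|webp)$/i.test(trimmed)) return true; // a filename
  return /^(image|photo|picture|graphic)$/i.test(trimmed); // placeholder word
}

console.log(suspiciousAltText('IMG_0042.jpg')); // true
console.log(suspiciousAltText('A queue outside the passport office')); // false
```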
For interaction I'll generally go through this list:
- Check tab order
- Check focus style on interaction elements
- Check the page can be used with only a keyboard (this relates to the two checks above)
- Check interaction with the page
- Check error states
- Check common input issues (e.g. trimming spaces)
- Check using a screen reader
- Check buttons and links are used in the right context, and check the page with Voice Control
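The "common input issues" point is one the service itself can defend against. A minimal sketch (the function name and format are hypothetical, not from any particular service): normalise reference-number input before validating it, so a pasted value with stray spaces doesn't produce a spurious error for the user:

```javascript
// Normalise user input before validation: trim, strip internal
// whitespace, and upper-case, so "  ab 12 34 c " matches "AB1234C".
function normaliseReference(input) {
  return input.trim().replace(/\s+/g, '').toUpperCase();
}

console.log(normaliseReference('  ab 12 34 c ')); // "AB1234C"
```

Fixing this server-side is kinder than telling the user their perfectly readable input is "invalid".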
Checking that the page conveys understanding ends up being more of a content check:
- Check page titles follow best practice
- Check form fields are descriptive
- Check errors are descriptive
- Check link text is descriptive; this may need a screen reader, as sometimes context is visually hidden
- Check any ARIA or visually hidden content is read out as intended
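The descriptive-link-text check can be partly scripted too. This sketch (the phrase list is my own, not from any guideline) flags generic link text of the "click here" variety, since screen reader users often hear links out of context — but only a human pass can confirm the remaining links actually describe their destination:

```javascript
// Link text is often heard out of context by screen reader users;
// generic phrases like "click here" tell them nothing about the destination.
const GENERIC_LINK_TEXT = ['click here', 'here', 'read more', 'more', 'link'];

function genericLinks(linkTexts) {
  return linkTexts.filter(
    (text) => GENERIC_LINK_TEXT.includes(text.trim().toLowerCase())
  );
}

// In the browser:
// genericLinks([...document.querySelectorAll('a')].map(a => a.textContent));
console.log(genericLinks(['Read more', 'Apply for a passport'])); // ["Read more"]
```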
There are also a couple of things we need to check at a service level. The final part of the evaluation is to check that the service is consistent with other GOV.UK services and documented patterns.
Once we have a list of issues, the first step is to validate whether each one is a false positive.
From here we need to decide if it is an issue with our implementation or with something we are consuming.
If, for example, we are using an external package, we will need to work out if the issue is already documented or if it is something new. With GOV.UK Frontend we would raise this on GitHub.
It won't always be a technical fix either; a lot of complicated accessibility issues need to be designed out of a service. A recent example of this is the conversation around the use of an accordion, where we should take a step back and change how we present the information.
The first time I did this I gave people an Excel spreadsheet, but this lacked the context of where each issue was happening in the service.
I've now started to use that Miro board as a visual representation of the issues, grouped into different types with the related Web Content Accessibility Guidelines (WCAG 2.1) criteria attached, which I walk through with the team to set context and talk about the barriers each issue will create.
Miro is really nice for giving context but bad as a document store. The next step is to raise the issues on a project board (like Jira or GitHub Projects), which allows better tracking and tagging of issues and gives a consolidated view.
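When moving findings onto a board, grouping them by WCAG success criterion is what gives that consolidated view. A sketch with made-up issue data (the field names and example issues are illustrative, not from a real audit):

```javascript
// Group a flat list of audit findings by WCAG success criterion,
// so each board label or column maps to one criterion.
function groupByCriterion(issues) {
  return issues.reduce((groups, issue) => {
    (groups[issue.wcag] = groups[issue.wcag] || []).push(issue.summary);
    return groups;
  }, {});
}

const findings = [
  { wcag: '1.1.1', summary: 'Hero image alt text is the filename' },
  { wcag: '2.4.7', summary: 'No visible focus style on the start button' },
  { wcag: '1.1.1', summary: 'Chart has no text alternative' },
];

console.log(groupByCriterion(findings));
// { '1.1.1': [two summaries], '2.4.7': [one summary] }
```

Each group can then become one tagged ticket (or one label across tickets), which keeps the board searchable by criterion.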