Accessibility and meeting ADA standards are crucial in tech, which means that along with engineering the physical assistive technologies that improve inclusivity, application architecture and functionality must follow suit. Since the majority of applications (and the rest of the world, for that matter) are built for individuals who can see, this presents a problem for those who are visually impaired. For people with visual disabilities, the UI must be constructed in a way that conveys these same affordances to assistive technology. Part of the solution involves having an application's actions, along with other details, read aloud, and this is where screen readers come into play.
According to the American Foundation for the Blind:
“Screen readers are software programs that allow blind or visually impaired users to read the text that is displayed on the computer screen with a speech synthesizer or braille display. A screen reader is the interface between the computer's operating system, its applications, and the user.” [1]
So how do screen readers work? By pressing different key combinations on a computer keyboard or a braille display, the user sends commands instructing the speech synthesizer what to say. The synthesizer also speaks automatically when changes occur on the computer screen. Other commands instruct the synthesizer to read or spell words, read one or several lines of text, locate specific strings of text on a screen, indicate the location of the cursor or the item of focus, and so on. More advanced functions allow for a variety of actions, including locating a specific line of highlighted text, identifying active choices in a menu, and more.
Semantics:
In spoken or signed language, understanding semantics is what grounds the structure of communication, and we as humans do this while taking the process for granted. In web development, semantics refers to the meaning of code. Essentially, the non-visual exposure of a UI element's affordances is called its semantics. Semantics covers basic questions like what effect running certain lines of code produces, or what role a specific HTML element plays (rather than what it looks like). The intuitive part of communicating isn't always easy to predict, so three key elements define semantics in web development:
Intent - Ensuring that the content created is received and understood by the intended audience (digital or human)
Content - Whether visual or not, communication in web development requires the written word or computer code. Consumable content for human recipients is often combined with instructional content, in code form, for the digital recipient.
Context - Recognizing that the elements surrounding the meaning play a role in communication. Naming, for instance, is not enough if the intended audience does not recognize the name.
So why are semantics important for screen readers? For starters, screen readers know what to announce audibly to their users through semantic HTML: the use of HTML markup to reinforce the semantics, or meaning, of the information in web pages and web applications rather than merely defining its presentation or appearance [2]. The accessibility tree also plays an integral role here, filtering out non-accessibility-related information for most HTML elements while retaining the elements that are relevant.
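To make this concrete, here is a small sketch contrasting presentational markup with semantic markup (the `signUp` handler name is a hypothetical placeholder):

```html
<!-- Non-semantic: a screen reader sees only a generic, silent container -->
<div class="btn" onclick="signUp()">Sign up</div>

<!-- Semantic: a screen reader can announce "Sign up, button",
     and the element is keyboard-focusable by default -->
<button type="button" onclick="signUp()">Sign up</button>
```

Both render as a clickable box, but only the second one carries meaning that assistive technology can relay.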
Every HTML element in semantic markup will include the following properties:
A role or type - describing an element’s purpose or type; e.g. “button” or “input”
A name - computed label; e.g. “Sign up Button”
A value (optional)
A state (optional)
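The mapping from markup to these properties can be sketched like this (the label text and attribute values are hypothetical examples):

```html
<!-- role: "checkbox" (from the element type)
     name: "Subscribe to newsletter" (computed from the wrapping label)
     state: "checked" (from the checked attribute) -->
<label>
  <input type="checkbox" checked> Subscribe to newsletter
</label>

<!-- role: "slider"
     name: "Volume" (from aria-label)
     value: "50" (from the value attribute) -->
<input type="range" aria-label="Volume" min="0" max="100" value="50">
```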
These properties form the foundation for whether or not something in the HTML is semantically interesting. So when a screen reader provides an alternative UI to the user, it often does so by walking this accessibility tree. For instance, the screen reader will ignore semantically uninteresting nodes in the tree, like div and span tags, especially if all they’re doing is positioning their children with CSS.
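That tree walk can be sketched in a few lines. This is a simplified model, not a real browser API: nodes are plain objects, and div/span wrappers are dropped the way an accessibility tree flattens them out of view.

```javascript
// Tags that carry no semantics of their own and are pruned from the view.
const SKIP = new Set(["div", "span"]);

// Walk a simplified DOM-like tree, collecting only the announceable nodes
// with their role (tag) and computed name.
function accessibleNodes(node, out = []) {
  if (!SKIP.has(node.tag)) {
    out.push({ role: node.tag, name: node.name || "" });
  }
  for (const child of node.children || []) {
    accessibleNodes(child, out);
  }
  return out;
}

// A div/span wrapper layout: only the heading and button survive the walk.
const tree = {
  tag: "div",
  children: [
    { tag: "h1", name: "Sign up" },
    { tag: "span", children: [{ tag: "button", name: "Submit" }] },
  ],
};

console.log(accessibleNodes(tree));
// → [ { role: 'h1', name: 'Sign up' }, { role: 'button', name: 'Submit' } ]
```

The wrapper elements vanish from the output even though they dominate the markup, which is exactly why purely presentational nesting is invisible to a screen reader user.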
Modern technology still has a long way to go in providing accessibility. Screen readers are just one of those frontiers, and I look forward to seeing where that technology will go next (literally no pun intended).