
AgentKit

Originally published at blog.a11yfix.dev

Your Lighthouse Score Is 95. You Could Still Get a Demand Letter Tomorrow.

A 95 in Chrome's Lighthouse accessibility audit is the number small-business owners get shown when they ask their developer whether the site is okay. It is the number agencies put in their delivery report. And it is the number that, three months later, sits on the same desk as a demand letter.

The gap is not Lighthouse being broken. Lighthouse is doing exactly what it claims to do — run an automated scan against a defined subset of the WCAG criteria. The gap is between what an automated scan can prove and what a plaintiffs' firm needs to prove. And that gap is wide enough that the relationship between scanner score and lawsuit risk is much weaker than the score makes it look.

What scanners are actually testing

axe-core, the engine behind Lighthouse and most browser-extension accessibility tools, runs around 90 automated rules. Every one is a binary check that can be answered by reading the rendered HTML and CSS: does this image have an alt attribute, does this input have an associated label, is this color contrast ratio above the threshold, is the page language declared.
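
If you want to see exactly which rules that subset covers, you can run the same engine yourself on any page. A minimal sketch, assuming the jsDelivr CDN build of axe-core (the URL and version pin are illustrative):

```html
<!-- Load axe-core and log the same rule violations Lighthouse would report -->
<script src="https://cdn.jsdelivr.net/npm/axe-core@4/axe.min.js"></script>
<script>
  // axe.run() resolves with { violations, passes, incomplete } for the current document
  axe.run(document).then((results) => {
    for (const v of results.violations) {
      console.log(v.id, v.impact, `${v.nodes.length} element(s)`);
    }
  });
</script>
```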

Deque, the company behind axe-core, publishes a clear figure on this: automated tools catch approximately 30 to 40 percent of WCAG issues. The remaining 60 to 70 percent require a human running the page with a keyboard, a screen reader, and a brain.

The 30 to 40 percent that scanners catch is real, useful work. A site with twenty axe violations is genuinely worse off than a site with two. The problem is that the rest of the iceberg is what plaintiffs' firms cite in demand letters, because their tester is a human, not an automated scan.

Alt text exists, but says nothing

The Lighthouse rule for images is whether the alt attribute is present. It is not whether the alt text is meaningful. Every one of these passes the scan:

  • <img src="hero.jpg" alt="hero">
  • <img src="DSC_4271.jpg" alt="DSC_4271.jpg">
  • <img src="bride.jpg" alt="image">
  • <img src="product.jpg" alt=""> on a product page where the image is the only thing identifying the item

A blind shopper trying to figure out which blue dress on the page is the one her friend wore last weekend hears "image, image, image, image" and bounces. That is the experience cited in dozens of recent demand letters against e-commerce stores. The scanner gave the site a clean report.

What to do: walk through your site reading the alt text out loud. If it would not allow a stranger to picture the image, rewrite it. For decorative images, use alt="" deliberately. For product images, the alt text should at minimum identify the product and its key visual features.
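
A concrete before-and-after, with hypothetical file names and product details:

```html
<!-- Passes the scan, tells the shopper nothing -->
<img src="dress-0412.jpg" alt="image">

<!-- Passes the scan and actually identifies the product -->
<img src="dress-0412.jpg"
     alt="Knee-length navy wrap dress with short sleeves, shown from the front">

<!-- Purely decorative flourish: empty alt deliberately hides it from screen readers -->
<img src="divider-swoosh.svg" alt="">
```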

Headings are nested, but not in reading order

Lighthouse checks that your headings do not skip levels (no jumping from H2 to H4) and that the page has at least one H1. It does not check whether the headings tell a coherent story when read in order, which is how a screen-reader user navigates an unfamiliar page.

The most common failure: the H1 says "About Us", the next H2 says "Our Mission", and then mid-page there is an H2 that says "Get a Free Quote" because the design template put a CTA there. A screen-reader user pulling up the heading list to scan the page sees an "About Us" page that suddenly offers a quote, with no clue what is being quoted.

What to do: install the WAVE browser extension (free) and click the "Structure" tab. It shows the heading outline as a nested list. Read it out loud. If the outline does not summarize the page in a way that would make sense to a stranger, your headings need to be rewritten.
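
For comparison, a heading outline that summarizes a hypothetical "About Us" page on its own; note that the CTA heading names what is being offered instead of floating there without context:

```html
<h1>About Us</h1>
  <h2>Our Mission</h2>
  <h2>The Team</h2>
  <h2>Request a Quote for Your Project</h2>  <!-- CTA heading carries its own context -->
```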

Form labels exist, but do not describe the field

The Lighthouse rule for form inputs is whether each input has an associated <label>, an aria-label, or an aria-labelledby. It does not check whether the label text actually describes the field. The following all pass the scan and fail any human test:

  • A field labeled "Field" with placeholder "Enter your email"
  • A series of fields labeled "Item 1", "Item 2", "Item 3"
  • A required birth-date field labeled "Date" with no format hint
  • A faceted-search interface with six different search boxes all labeled "Search"

The screen-reader user hears "edit text, search, edit text, search" and cannot tell which one is the keyword search and which are the date-range filters. They abandon the task.

What to do: read your form labels in isolation, without looking at the rest of the form. If a label would not tell a stranger what to enter, rewrite it. Mark required fields in the label text ("Email (required)") rather than relying on a red asterisk that might not be announced.
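
A sketch of labels that survive being read in isolation; field names and formats are illustrative:

```html
<label for="email">Email (required)</label>
<input id="email" name="email" type="email" autocomplete="email" required>

<label for="dob">Date of birth (DD/MM/YYYY)</label>
<input id="dob" name="dob" inputmode="numeric">

<label for="city-search">Search rentals by city</label>
<input id="city-search" name="city" type="search">
```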

Keyboard navigation appears to work, until you actually try it

Lighthouse cannot run your page with a keyboard. It detects a few keyboard hazards (missing skip link, positive tabindex), but it cannot tell whether your custom dropdown opens with Enter, whether your modal traps focus correctly, whether the focus indicator is visible against your background, or whether the user can escape your cookie banner without a mouse.

The recurring failure modes we find in audits:

  • A <div>-based dropdown that opens on click but ignores Enter, Space, and arrow keys
  • A modal that opens but leaves focus on the button behind it, so a screen-reader user does not know it is open
  • A modal that opens but does not trap focus, so Tab takes the user into the hidden page underneath
  • A focus indicator deliberately removed by a developer because "it looked ugly"
  • Hamburger-menu navigation on mobile that opens but cannot be closed without a mouse

What to do: click into the URL bar of any page on your site and press Tab repeatedly until you have visited every interactive element. Ask: can I see where focus is? When I press Enter on a button, does something happen? When a modal opens, can I get back out without my mouse? You do not need a screen reader. A keyboard is enough. Our keyboard navigation testing guide walks through it step by step.
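
For developers reading along, here is a minimal sketch of the modal behaviors no scan can verify: focus moves into the dialog when it opens, Tab stays inside it, and Escape closes it and puts focus back on the trigger. Element IDs are illustrative, and a production dialog would more likely use the native <dialog> element or a tested library:

```html
<button id="open-booking">Book now</button>

<div id="booking-dialog" role="dialog" aria-modal="true" aria-labelledby="booking-title"
     tabindex="-1" hidden>
  <h2 id="booking-title">Book an appointment</h2>
  <label for="booking-date">Preferred date (DD/MM/YYYY)</label>
  <input id="booking-date" type="text">
  <button id="close-booking">Close</button>
</div>

<script>
  const opener = document.getElementById("open-booking");
  const dialog = document.getElementById("booking-dialog");

  opener.addEventListener("click", () => {
    dialog.hidden = false;
    dialog.focus();                         // announce the dialog; don't leave focus on the trigger
  });

  document.getElementById("close-booking").addEventListener("click", () => {
    dialog.hidden = true;
    opener.focus();                         // restore focus so the user is not dropped at the top
  });

  dialog.addEventListener("keydown", (e) => {
    if (e.key === "Escape") {               // a keyboard way back out
      dialog.hidden = true;
      opener.focus();
    }
    if (e.key === "Tab") {                  // crude focus trap: wrap from last to first and back
      const items = dialog.querySelectorAll("button, [href], input, select, textarea");
      const first = items[0];
      const last = items[items.length - 1];
      if (e.shiftKey && document.activeElement === first) {
        e.preventDefault();
        last.focus();
      } else if (!e.shiftKey && document.activeElement === last) {
        e.preventDefault();
        first.focus();
      }
    }
  });
</script>
```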

Form errors flash on screen but never reach a screen reader

When a user submits a form with an invalid email, the standard pattern is for a red error message to appear next to the field. Lighthouse does not test whether that error is actually announced to a screen-reader user. It checks the labels and the contrast, but the bridge between the two — whether assistive technology is told the error happened — is not in scope for any automated scan.

The result: a blind user submits the form, hears nothing, and has no idea why nothing happened. They might submit again. They might leave.

What to do: every error message needs to be announced. The simplest pattern is to render the error inside an element with role="alert" or inside a container marked aria-live="polite" so the screen reader picks up the change. If you are not in a position to write the markup yourself, this is a specific request to send your developer or platform support team.
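
A minimal sketch of that pattern, using an illustrative signup form; the error text lands in a role="alert" element, so it is announced the moment it appears:

```html
<form id="signup" novalidate>
  <label for="email">Email (required)</label>
  <input id="email" type="email" required aria-describedby="email-error">
  <p id="email-error" role="alert"></p>  <!-- empty until there is something to announce -->
  <button type="submit">Sign up</button>
</form>

<script>
  document.getElementById("signup").addEventListener("submit", (e) => {
    const email = document.getElementById("email");
    const error = document.getElementById("email-error");
    if (!email.checkValidity()) {
      e.preventDefault();
      error.textContent = "Please enter a valid email address, like name@example.com.";
      email.setAttribute("aria-invalid", "true");
      email.focus();
    } else {
      error.textContent = "";
      email.removeAttribute("aria-invalid");
    }
  });
</script>
```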

Color contrast is fine on solid backgrounds, broken on photos

Lighthouse measures color contrast by reading the CSS foreground and background colors. It cannot measure the actual contrast of white text rendered on top of a hero photograph, because the background color in CSS is "transparent" and the photograph is not part of the contrast calculation. Same problem with gradient backgrounds, video backgrounds, and CSS background-image patterns.

We see this in nearly every photographer, restaurant, and hospitality audit: the hero "Book Now" button reads as 1.5:1 against a section of the background image — a hard failure under WCAG 1.4.3. Lighthouse reports zero contrast violations.

What to do: manually test every page that puts text over an image. The WAVE browser extension and the free WhoCanUse contrast tool both let you check custom color pairs; use the lightest part of the photo behind the text as the background color. The standard fixes are: add a darkening overlay between image and text, add a text shadow, swap to a solid-color band behind the text, or move the text out of the image area.
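
The overlay fix, as a sketch; class names and the image path are illustrative, and the 0.55 opacity is a starting point to tune until the text clears 4.5:1 against the lightest part of the photo:

```html
<style>
  .hero {
    position: relative;
    background-image: url("hero.jpg");
    background-size: cover;
    color: #fff;
  }
  .hero::before {                       /* darkening layer between the photo and the text */
    content: "";
    position: absolute;
    inset: 0;
    background: rgba(0, 0, 0, 0.55);
  }
  .hero > * {
    position: relative;                 /* keep the text above the overlay */
  }
</style>

<section class="hero">
  <h1>Autumn mini sessions</h1>
  <a href="/book">Book Now</a>
</section>
```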

Custom widgets pass automation, fail screen readers

Calendars, date pickers, accordions, tabs, autocomplete dropdowns, modal dialogs, drag-and-drop uploaders, and rich-text editors are where automated and human results diverge most sharply. axe-core detects a few specific patterns, but it cannot test whether the widget actually works for an assistive-technology user. The widget can be fully scan-clean and fully unusable.
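
A small illustration of that divergence, using a hypothetical opening-hours accordion: the markup below satisfies the scanner either way, but only the scripted behavior makes it usable, and only a human with a keyboard or screen reader can confirm the behavior is actually there:

```html
<h3>
  <button id="hours-toggle" aria-expanded="false" aria-controls="hours-panel">
    Opening hours
  </button>
</h3>
<div id="hours-panel" hidden>
  <p>Tuesday to Saturday, 10am to 6pm.</p>
</div>

<script>
  const toggle = document.getElementById("hours-toggle");
  const panel = document.getElementById("hours-panel");
  // A real <button> already responds to Enter and Space; the script keeps the
  // announced state (aria-expanded) in sync with what is visually shown.
  toggle.addEventListener("click", () => {
    const open = toggle.getAttribute("aria-expanded") === "true";
    toggle.setAttribute("aria-expanded", String(!open));
    panel.hidden = open;
  });
</script>
```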

The most consequential case is the booking-and-checkout flow. We have audited dozens of small-business booking systems where the date picker passes Lighthouse, the time picker passes Lighthouse, the contact-info form passes Lighthouse, and the whole flow is unusable with a screen reader because focus order is broken between steps and the success confirmation is a <div> with no live region.

What to do: identify your one or two most important user journeys (booking, checkout, sign-up, contact) and test each one end to end with the keyboard alone. If you can complete the journey without touching the mouse, that is a strong baseline. The next step up is the same journey with a screen reader — on Mac, VoiceOver is built in (Cmd+F5). An hour to learn the basics is worth more than another hundred Lighthouse runs.

What scanners do not look at all

Lighthouse audits the rendered HTML of the page it is told to scan. It does not audit:

  • PDFs linked from the page (your menu, your bulletin, your white paper, your brochure)
  • Embedded videos and their captions
  • Third-party widgets loaded after the audit (chat widgets, scheduling embeds, payment forms, social-media feeds)
  • Pages behind a login (member portals, dashboards, account settings)
  • Mobile-specific layouts when the audit is run in desktop mode (the default)
  • Email templates sent to the same users who visit the site

Each of these is its own audit surface and each is regularly cited in demand letters. A small business with a perfect Lighthouse score on its homepage and a 47-page scanned-PDF menu linked from that homepage has a serious accessibility problem and a clean scanner report.

So what is the right number?

The right number is not a single Lighthouse score. It is a small set of complementary signals. The combination we recommend for non-developers:

  • A Lighthouse or WAVE automated scan to catch the obvious 30 to 40 percent.
  • A keyboard-only walkthrough of your top three user journeys.
  • A read-out-loud test of your alt text and form labels.
  • A check of every PDF and embedded video for captions and tagging.

And, critically, an accessibility statement on your site documenting your conformance commitment, your contact channel for accommodations, and the date of your last review — which is the single most consistent thing demand-letter responses cite as evidence of good faith.
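
If you do not have a statement yet, it does not need to be elaborate. A bare-bones skeleton, with every name, address, and date a placeholder to replace with your own:

```html
<main>
  <h1>Accessibility statement</h1>
  <p>We want everyone to be able to use this site and aim to conform to WCAG 2.1 Level AA.
     We review the site against it regularly.</p>
  <p>If you hit a barrier, email <a href="mailto:access@example.com">access@example.com</a>
     or call (555) 010-0100 and we will get you the content or service another way.</p>
  <p>Last reviewed: <time datetime="2024-01-15">15 January 2024</time>.</p>
</main>
```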

If you want to go further, our five-minute accessibility audit walks through the keyboard test step by step. Our why accessibility overlays don't work post covers a related "false sense of security" pattern. And our internal post-mortem on finding 27 issues a scanner missed on our own site is the case study we point to when a client asks why 95 is not the answer.

We're building a simple accessibility checker for non-developers -- no DevTools, no jargon. Join our waitlist to get early access.
