
Testing with one screen reader and one browser is not a test. It is a sample.

When accessibility testing involves a screen reader, there is a detail that often gets overlooked: the result you see depends not just on your code, but on the specific combination of screen reader and browser you are using.

These are not interchangeable tools. They do not produce the same output.

[Image: light editorial graphic with the headline "Testing with one screen reader and one browser is not a test. It's a sample." A highlighted block reads: "When you test with only one combination, you're testing one user's experience and assuming everyone else gets the same."]

Why combinations matter

Screen readers interact with browsers through accessibility APIs — interfaces that expose the structure and state of a page to assistive technology. Windows alone has several (MSAA, IAccessible2, UI Automation), and macOS, Linux, and Android each have their own. The problem is that browsers do not implement these APIs consistently across operating systems, and screen readers handle that inconsistency differently.

NVDA, JAWS, VoiceOver, Narrator, and TalkBack each have their own logic for how they query these APIs, how they handle missing or ambiguous information, and how they translate that into audio output. Browsers also differ in how completely they implement the APIs in the first place.
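
To make that concrete, here is a minimal, hypothetical sketch (the button, its handler, and the toggleMute name are illustrative, not taken from any particular codebase): a toggle whose role, name, and state all travel through the accessibility API before any screen reader voices them.

```html
<!-- A hypothetical mute toggle. The browser maps its role (button),
     name ("Mute"), and state (aria-pressed) into a platform
     accessibility API; the screen reader then decides how to voice
     that mapping. Each step can vary by pairing. -->
<button aria-pressed="false" onclick="toggleMute(this)">Mute</button>

<script>
  function toggleMute(btn) {
    // Flip the pressed state; how (and whether) the state change is
    // announced depends on the screen reader and browser in use.
    const pressed = btn.getAttribute("aria-pressed") === "true";
    btn.setAttribute("aria-pressed", String(!pressed));
  }
</script>
```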

The result is that the same HTML, the same ARIA attributes, and the same JavaScript can produce entirely different experiences depending on which screen reader and browser the user has.

A modal that announces its role correctly in NVDA with Chrome may not announce it at all in JAWS with Internet Explorer — or may announce it at the wrong moment in VoiceOver with Firefox. A live region that works as expected in one pairing may fire multiple times, fire too early, or never fire in another.
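
Here is a minimal sketch of the live-region case, with hypothetical markup: whether and when this announcement fires is exactly the kind of behaviour that varies between pairings.

```html
<!-- A hypothetical status region. It should exist in the DOM before
     its content changes, or many pairings will never announce it. -->
<div id="status" role="status" aria-live="polite"></div>

<script>
  // Updating the text is what triggers the announcement, but one
  // pairing may read it once, another twice, another not at all.
  document.getElementById("status").textContent = "3 results found";
</script>
```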

What this means in practice

It means that testing accessibility with a single combination gives you a result that is true for that pairing — and potentially false for every other one.

This is not a reason to avoid manual testing. It is a reason to broaden it.

The WebAIM Screen Reader User Survey, published regularly, shows that real users distribute across multiple screen readers and browsers — and that the most common pairing varies by region, platform, and user need. No single combination represents your entire audience.

The gap automated tools cannot fill

Tools like axe, Lighthouse, and browser extensions test the structure of the accessibility tree. They can tell you whether an element has an accessible name, whether a role is present, whether contrast ratios meet the threshold.
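
For contrast, here is a minimal sketch of such a structural check using axe-core. The CDN URL is illustrative; axe.run and the results.violations array are part of axe-core's public API.

```html
<script src="https://cdn.jsdelivr.net/npm/axe-core@4/axe.min.js"></script>
<script>
  // axe.run analyses the current document's accessibility tree and
  // reports rule violations: missing names, invalid roles, low contrast.
  axe.run(document).then((results) => {
    for (const violation of results.violations) {
      console.log(violation.id, violation.help);
    }
  });
</script>
```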

What they cannot tell you is whether a screen reader will actually announce that information in a way that makes sense to a user. That requires listening — with the actual tool, in the actual environment.

Manual testing across combinations is the only way to close that gap. It does not need to be exhaustive to be valuable. Even two or three well-chosen pairings will surface issues that no automated tool would ever catch.

Which combinations to prioritise — and why — is worth a dedicated conversation. That comes next.