Reminder – automatic accessibility testing can detect inaccessibility but can’t detect accessibility


We need to be aware of the limitations of the tools to be able to use them properly and to prevent bias.

Tools are essential, but as accessibility finally gets more attention, we need to remind ourselves that tools are still only tools and we need knowledge to use them.

I’m helping different organizations with accessibility, mainly training and auditing, and I’ve seen a pattern repeat itself that made me write this post. Sometimes it is too easy to put the tool at the center of accessibility activities and neglect manual testing. It’s not strange – tools are quick and efficient from day one, while manual testing takes time to learn, to practice, to improve and to produce results. But if we only concentrate on the tools and leave out manual testing, we are doing ourselves a giant disservice!

Automatic tools are very good at finding bad patterns in code

I love automatic accessibility testing tools, but after years of experience, and after really digging into their source code, I also understand their limitations and never rely solely on them.

A lot of work goes into making and improving them, but we need to really understand that they are mostly based on detecting known code-based issues. So – basically – when a pattern in the code is known to cause accessibility issues, we can write code that detects that pattern and maps it to the Web Content Accessibility Guidelines (WCAG) or other standards (for example EN 301 549).

This actually means that we make dozens or hundreds of rules, based on the code that is known to cause issues, and then run them against a webpage (or mobile app).
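To make the idea concrete, here is a minimal, hypothetical sketch of such a rule engine in Python. The rule names, messages and WCAG mappings are illustrative only, not taken from any real tool: each rule is a function that inspects a parsed tag and reports a finding.

```python
# Hypothetical sketch of a rule-based accessibility checker.
# Each "rule" detects one known bad code pattern and maps it to WCAG.
from html.parser import HTMLParser

def img_missing_alt(tag, attrs):
    if tag == "img" and "alt" not in attrs:
        return "WCAG 1.1.1: <img> without alt attribute"

def invalid_aria_role(tag, attrs):
    # Tiny illustrative allow-list; real tools use the full ARIA spec.
    known_roles = {"button", "navigation", "presentation", "dialog"}
    role = attrs.get("role")
    if role is not None and role not in known_roles:
        return f"WCAG 4.1.2: unknown ARIA role '{role}'"

RULES = [img_missing_alt, invalid_aria_role]

class Checker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        for rule in RULES:
            finding = rule(tag, attrs)
            if finding:
                self.findings.append(finding)

checker = Checker()
checker.feed('<img src="logo.png"><div role="banana">Hi</div>')
print(checker.findings)
# Flags the missing alt attribute and the unknown ARIA role.
```

Real tools are of course far more sophisticated, but the principle is the same: a fixed set of rules, each looking for a specific pattern in the code.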

Some rules are extremely valuable and undeniably “right” – for example missing alt attributes on images, wrong Accessible Rich Internet Applications (ARIA) attributes and values, obvious color contrast issues and so on. But we need to be aware that these rules target specific code patterns. So – if a developer uses a pattern that is not detectable by the existing rules – we may have a situation where the automatic tool says everything is great, but in reality the page or app has accessibility issues, or even totally prevents people with disabilities from using it.
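As a hypothetical illustration of such a false negative, consider a naive rule that only targets `<button>` elements: a clickable `<div>` – no keyboard focus, no role, no accessible name – sails right through it. The rule and markup below are made up for illustration.

```python
# Hypothetical false negative: the rule only fires for real <button>
# tags, so a click-handling <div> is invisible to it.
from html.parser import HTMLParser

def button_missing_name(tag, attrs):
    # Oversimplified on purpose; real tools also derive accessible
    # names from text content, aria-labelledby, etc.
    if tag == "button" and not attrs.get("aria-label"):
        return "button without accessible name"

class Checker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        finding = button_missing_name(tag, dict(attrs))
        if finding:
            self.findings.append(finding)

# A "button" built from a <div>: works with a mouse, unusable by
# keyboard and unknown to screen readers -- yet zero issues reported.
checker = Checker()
checker.feed('<div class="btn" onclick="buy()">Buy now</div>')
print(checker.findings)  # [] -- "all clear", but the page is broken
```

The tool is not lying – it honestly found nothing matching its rules. The problem is that the rules never anticipated this pattern.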

So please – understand that tools use rules that target specific patterns. This is their strength, and it is also their weakness – developers have a lot of possibilities when it comes to implementing the same visual experience (especially on websites).

Heck, the web platform is soooo robust that we can write totally poor code and it will still work for most users (visually and with a mouse). Browsers (user agents) are so permissive that we can really make a mess for people with disabilities while most users (and stakeholders, designers, testers) will not even notice.

Jens Oliver Meiert wrote that only about 0.5% of the top 200 websites use valid HTML. And obviously nobody cares… Because “it works” for them…

Sure, most invalid HTML patterns don’t impact accessibility at all, but invalid HTML is still a risk that assistive technologies will not get correct information – so still an indication. With the recent “deprecation” of WCAG 4.1.1 (Parsing), our automatic tools lost a very stable and reliable testing capability, as it was the only WCAG success criterion that was perfect for automatic testing tools.

If people involved in the production of websites and apps are not aware of accessibility, if they are not trained in manually testing for accessibility issues, and if they rely solely on automatic tools, it is very easy to still produce inaccessible websites.

Please don’t get me wrong – using automatic accessibility testing tools regularly will mean that the end product is better, no doubt about that. I am just trying to make sure we all understand that they are not enough, and that using them without manual testing is actually a very dangerous thing to do.

Some may believe that their websites and apps are accessible, when the reality is that they just didn’t look closely enough.

And please be aware that automatic tools are also software and have their own bugs. The rules we make can be biased. They can also report problems where there really are no problems (so-called false positives – the tool finds an “error” that is not really an error).

A lot of time can be spent working through a large backlog of accessibility issues generated by automatic tools, and having to deal with false positives is even more time-consuming (first reporting them, then checking them, then trying to fix them). Sometimes false positives can even cause real accessibility issues, when somebody tries to satisfy the tool and makes a fix that actually introduces a new accessibility issue.
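Here is a hypothetical sketch of that dynamic: a naive rule demands a non-empty alt on every image, so it flags a correctly marked decorative image (alt="" is the right markup there), and the “fix” that silences the tool adds screen-reader noise – a new accessibility issue. The rule and markup are invented for illustration.

```python
# Hypothetical false positive: a too-strict rule treats the valid
# decorative-image pattern alt="" as an error.
from html.parser import HTMLParser

def naive_alt_rule(tag, attrs):
    if tag == "img" and not attrs.get("alt"):
        return "img with empty or missing alt"

class Checker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        finding = naive_alt_rule(tag, dict(attrs))
        if finding:
            self.findings.append(finding)

# Correct markup for a purely decorative image -- yet flagged.
correct = '<img src="divider.png" alt="" role="presentation">'
checker = Checker()
checker.feed(correct)
print(checker.findings)  # ['img with empty or missing alt']

# A "fix" that silences the rule but makes things worse: screen
# readers will now announce this meaningless decorative divider.
worse = '<img src="divider.png" alt="decorative divider image">'
```

Satisfying the tool and satisfying users are not the same thing – which is exactly why the person interpreting the report needs accessibility knowledge.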

Automatic tools cannot establish accessibility

As written above – tools are perfect for running rules that check the code and discover known problematic patterns. Such grunt tasks should be trusted to tools (unless we find a bug in the tool or in the rules used by the tool).

But now it should make total sense, I hope, when we say that a tool cannot confirm accessibility. Even if a tool does not find any issues, we can still have a lot of issues in reality. It just means that our code didn’t match the patterns that the rules are made for.

So – it’s not so strange then to say that not a single tool can claim we are accessible. It’s just not possible, as there are basically unlimited possibilities when it comes to code, while rules are always limited.

While writing about accessibility testing tools, we should also consider accessibility overlays, as they work in much the same way. Most of their vendors have already understood that they can’t promise automatic conformance anymore, but there are still cases where they do.

And if we just think for a moment – if detection is so limited, how can anybody claim that their tool can automatically fix accessibility? Well, it just can’t. Not automatically. And I would not trust a vendor that claims to “fix” my website with an “all-powerful” script.

We need to use the tools but know the rules and always test manually

Tools are just tools. Relying solely on tools can actually be a bad thing. We need to understand their limitations, their weaknesses and their strengths before just embracing them.

Manual testing will not go away. I will not predict the future, but even with context-aware “artificial intelligence” we will still have to do a lot of things manually. It’s obvious that AI will help us become more effective, but we will still have to have the knowledge and understand the rules.

I hope that AI will make the tools better, with better bad pattern recognition and with better remediation suggestions, but our understanding of accessibility will still be essential. Using tools combined with knowledge is the only guarantee for improving accessibility.

We will have to wait and see what veteran accessibility organizations have planned, but I really hope that their decades of data, combined with supervised AI learning, will move us far in the right direction.

Author: Bogdan Cerovac

I am an IAAP-certified Web Accessibility Specialist (since 2020) and was a Google-certified Mobile Web Specialist.

I work as a digital agency co-owner, web developer and accessibility lead.

Sole entrepreneur behind IDEA-lab Cerovac (Inclusion, Diversity, Equity and Accessibility lab) after work. Check out my Accessibility Services if you want me to help you with digital accessibility.

Also head of the expert council at Institute for Digital Accessibility A11Y.si (in Slovenian).

Living and working in Norway (🇳🇴), originally from Slovenia (🇸🇮), loves exploring the globe (🌐).

Nurturing the web since 1999, this blog since 2019.

More about me and how to contact me: