Testing and monitoring accessibility with automatic tools will get less precise


Automatic testing, although limited, is useful for quick, bulk tests of webpages. Given current progress I would expect it to become more efficient, but such tests can easily be bypassed, leaving us with bad data.

Accessibility overlays seem more and more viable to some business owners, even though the majority of accessibility specialists and many people with disabilities doubt they help (opens in new window).

I still believe they can fix some marginal, basic things, but they often break others, so accessibility overlays frequently do more damage than good. And when accessibility veterans merge with or join accessibility overlay companies, it may mean that our automatic accessibility testing tools will become less precise and effective as a result. This may sound strange, especially considering the potential of new technologies, but I really think things can get worse even when they should get better.

First and most important – the Accessibility Conformance Testing (ACT) Rules group (opens in new window) is doing amazing work adding more and more rules that can be automated. But because the rules are available to anybody, they can also be used for bad purposes. It's very easy to make a script that targets all known accessibility tests and makes the webpage look like there are no errors – anyone who wants to can use the same rules to trick automatic testing tools into reporting a clean page. Adrian Roselli proved WebAIM's WAVE got spoofed (opens in new window), and I absolutely see that as a potential future practice in the overlay world.
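To illustrate how shallow such spoofing can be, here is a deliberately naive sketch (not any real overlay's code) of a script that blanket-"fixes" exactly the conditions automated rules check for, without improving real accessibility. Plain objects stand in for DOM nodes so the logic is easy to follow; in a browser this would run over `document.querySelectorAll`:

```javascript
// Naive spoofing sketch: patch pages so presence-based automated checks
// pass, while actual accessibility is unchanged (or made worse).
function spoofKnownChecks(elements) {
  return elements.map((el) => {
    const fixed = { ...el };
    // Known rule: images must have an alt attribute -> inject a filler value.
    if (fixed.tag === 'img' && fixed.alt === undefined) {
      fixed.alt = 'image'; // meaningless, but passes a presence check
    }
    // Known rule: controls need an accessible name -> slap on an aria-label.
    if (fixed.tag === 'button' && !fixed.text && !fixed.ariaLabel) {
      fixed.ariaLabel = 'button'; // equally meaningless
    }
    return fixed;
  });
}
```

A tool that only verifies attribute presence would now report zero errors, even though no user gained anything.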

So we can expect that with time (and increased use of accessibility overlays) we might get much less realistic results from our automatic accessibility testing and monitoring. Take for example WebAIM's Million project – what if lots of pages on their list decide to use an accessibility overlay, and what if this overlay tries to trick accessibility tests? It would appear that accessibility had drastically improved, even though that would be far from reality. The same can be said about other, similar monitoring projects, such as the European Union eGovernment pilot and the many other monitoring efforts driven by the Web Accessibility Directive and the European Accessibility Act.

Automatic accessibility testing and monitoring can be fooled, and if we don't know better we might believe statistics that report large-scale improvements in accessibility.

My reflections on the situation.

It is certainly very simple to trick automatic accessibility testing tools when you know exactly what they test for and how. So instead of reality we might be served twisted data in the near future – or, even worse, we may already have a false sense of accessibility progress based on automatic tooling, if sites use overlays that spoof the situation as better than it really is.

Not to mention all those automatically generated alternative texts (and "craptions" – captions that are no good). Automatic tools can't tell us whether those are any good either. They can only tell us: "passed – we found that the image has an alternative text and that it isn't image.jpg or similar".
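That shallow check can be sketched roughly like this (an illustrative approximation, not any specific tool's actual rule):

```javascript
// Roughly the depth of an automated alt-text check: presence, and "doesn't
// look like a filename". It cannot judge whether the text describes the image.
function altTextLooksOk(alt) {
  if (typeof alt !== 'string' || alt.trim() === '') return false;
  // Reject values that look like filenames, e.g. "image.jpg" or "IMG_0042.png".
  if (/\.(jpe?g|png|gif|webp|svg)$/i.test(alt.trim())) return false;
  return true;
}
```

Note that `altTextLooksOk('asdf qwerty')` happily passes – gibberish alt text sails straight through a check like this, which is exactly the limitation described above.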

What can be done to prevent this?

Well, monitoring should actually check whether a site has an overlay installed and either block it or skip the test, to prevent false results from polluting the statistics.

Some months ago I made a simple tool that checks for multiple overlays – it's technically quite doable and not that difficult. It works well, discovering over 20 accessibility overlays and widgets, and I run regular tests to make sure the logic keeps working. I am not open-sourcing it for obvious reasons: an open detection methodology could be used to break the detection.
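Since the author's actual methodology is deliberately unpublished, here is only a generic sketch of the overall approach: match loaded script URLs and injected globals against a list of known signatures. The vendor names, URL patterns and global names below are hypothetical placeholders, not real overlay products:

```javascript
// Generic overlay-detection sketch (NOT the author's tool). All signatures
// here are made-up placeholders for illustration.
const OVERLAY_SIGNATURES = [
  { name: 'ExampleOverlayA', scriptPattern: /overlay-a\.example\.com/i, globalName: 'ExampleOverlayA' },
  { name: 'ExampleOverlayB', scriptPattern: /widget-b\.example\.net/i, globalName: '__exampleOverlayB' },
];

// scriptUrls: src attributes of loaded <script> tags; globals: the page's
// global object (window in a browser).
function detectOverlays(scriptUrls, globals) {
  return OVERLAY_SIGNATURES
    .filter((sig) =>
      scriptUrls.some((url) => sig.scriptPattern.test(url)) ||
      Object.prototype.hasOwnProperty.call(globals, sig.globalName))
    .map((sig) => sig.name);
}
```

A real detector is much messier – as the comment thread below notes, vendors change URLs, white-label their widgets and ship first-party plugins, so a static list like this decays quickly.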

The downside of this tool is that it takes a lot of time to detect whether an overlay is actually running, as overlays often load asynchronously and the tool needs to wait for them to finish loading before it can be certain.
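That waiting could look something like the following polling sketch (an assumed shape, not the author's implementation; in a real browser a `MutationObserver` could supplement the polling). `probe` is any detection function returning an array of matches:

```javascript
// Poll a detection function until it finds something or a timeout expires.
// Overlays that load asynchronously may only become detectable seconds in,
// which is why detection is slow.
function waitForOverlay(probe, { intervalMs = 250, timeoutMs = 10000 } = {}) {
  return new Promise((resolve) => {
    const started = Date.now();
    const timer = setInterval(() => {
      const hits = probe();
      if (hits.length > 0 || Date.now() - started >= timeoutMs) {
        clearInterval(timer);
        resolve(hits); // [] on timeout means "no overlay detected in time"
      }
    }, intervalMs);
  });
}
```

Note the trade-off: a long timeout makes detection reliable but slow, a short one fast but prone to missing late-loading overlays.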

Nonetheless, it will be even more important to know how to test accessibility manually. Unfortunately, creating more automatable accessibility testing rules will potentially also mean that more failures can be hidden from automatic accessibility testing tools, if somebody decides to do so.

Once again – we need even more accessibility awareness, knowledge and culture. When more people are able to find (and prevent) accessibility issues, we will have more accessible digital products – and we won't need shady "automagic" tools that often make things worse in the end.

Author: Bogdan Cerovac

I am an IAAP certified Web Accessibility Specialist (since 2020) and was a Google certified Mobile Web Specialist.

I work as a digital agency co-owner, web developer and accessibility lead.

Sole entrepreneur behind IDEA-lab Cerovac (Inclusion, Diversity, Equity and Accessibility lab) after work. Check out my Accessibility Services if you want me to help you with digital accessibility.

Also head of the expert council at Institute for Digital Accessibility A11Y.si (in Slovenian).

I live and work in Norway (🇳🇴), am originally from Slovenia (🇸🇮), and love exploring the globe (🌐).

Nurturing the web since 1999, this blog since 2019.

More about me and how to contact me:

2 thoughts on “Testing and monitoring accessibility with automatic tools will get less precise”

  1. The WebAIM Million analysis uses the WAVE API which blocks overlays which are known to manipulate WAVE results in ways that are different from the default user experience. If we’re aware that an overlay changes WAVE results simply so that pages appear more accessible than they really are, such as by injecting gibberish alt text, adding aria-label to everything, or turning on and then off high contrast mode, then WAVE blocks that overlay before analysis so the results reflect the actual page content.

    In the WAVE extensions and online service, when such overlays are detected the interface presents a notification that the page content has been manipulated.

    1. Hi Jared, and thank you very much for your comment here. Let me start with my sincere appreciation for your efforts for accessibility – I always learn a lot from WebAIM. The Million project is the top reference for large-scale automatic accessibility testing, and I hope it will remain so in the future.

      As for your comment – I am aware of the WAVE feature you mentioned and appreciate it, but in my experience, when I developed my own overlay detection tool, I quickly found out that it is quite difficult to maintain and update, as overlay vendors often change implementations, behaviors and more.

      That is also the main reason I don’t have any plans to open source the overlay detection tool – it would just make it even more difficult or impossible to maintain and update it for obvious reasons.

      Overlay detection is, in my opinion, a perfect example of a moving target, on multiple levels:

      • some are using different anti-detection techniques,
      • some of them offer white labeling (styling, behavior, even on DNS level, so more difficult to block by URL),
      • some provide “first party” plugins that can be installed in the back-end and are quite integrated (difficult to detect and/or block) and so on.

      I just tested some random sites that use overlays – with my own tools and with WAVE (both the online and extension versions) – and I didn't need much time to find sites where both my script and WAVE failed to detect overlay manipulation
      (0 errors reported by WAVE when the overlay loaded (not activated), and a different number of errors present when I blocked it manually).

      I can send you the URLs via private channel if needed. Will also make a video capture, to document it…

      This moving-target situation was also one of the main reasons for this post – and now, with the latest obvious plans to bet even more on overlays, I am really concerned that our monitoring and testing efforts will be less precise in the (near?) future.

Comments are closed.