Using automatic tools to discover potential accessibility issues


Automatic tests can help to a degree. The WCAG evaluation methodology provides a good starting point for focusing tests, and if we add page popularity scoring and simple page complexity scoring, we can really concentrate our manual testing efforts on the potentially difficult pages.

I’ve been thinking about how to detect potential accessibility problems more effectively. Automatic tools seem to be the natural first step, especially if we are not familiar with the page’s technical architecture, or if the page has lots of content and interactive elements.

Running accessibility audits in bulk

The first and most obvious step is to run automatic accessibility tools against all of the site’s URLs. We will almost certainly discover some problems that way. Personally, I played with the idea of running multiple different automatic tools against the same URL, to get the most out of them. That’s the whole idea behind aXeSiA – it allows us to run multiple accessibility tests on a list of URLs. I haven’t done extensive testing yet, but it makes sense to have a report that shows the pages with the most errors at the top of the list. And yes, I am fully aware of the limited coverage of those tests, of possible false results, and of tests that sometimes simply fail.
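
The idea can be sketched in a few lines. This is not how aXeSiA is actually implemented – the runner functions below are invented stand-ins for real engines such as axe-core or Pa11y, returning only an error count:

```javascript
// Sketch: run several tool adapters over a URL list and rank pages by
// total error count. Runner functions are hypothetical stand-ins that
// return the number of errors a real tool would report for a URL.
function rankPagesByErrors(urls, runners) {
  return urls
    .map((url) => ({
      url,
      errors: runners.reduce((sum, run) => sum + run(url), 0),
    }))
    .sort((a, b) => b.errors - a.errors); // worst pages first
}

// Demo with fake runners that return fixed error counts.
const fakeAxe = (url) => (url.endsWith("/checkout") ? 12 : 2);
const fakePa11y = (url) => (url.endsWith("/checkout") ? 8 : 1);

const ranking = rankPagesByErrors(
  ["https://example.com/", "https://example.com/checkout"],
  [fakeAxe, fakePa11y]
);
```

Summing across tools is deliberately naive – different engines report overlapping rules – but even this crude ranking is enough to surface candidates for manual review.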

But in reality such tests can give us a slightly better overview of the situation, at least statistically. We can then add the highest-ranking pages to the list of URLs we check manually. I like to use the WCAG-EM methodology to select representative pages, so the pages with the most automatically detectable accessibility errors belong in the sample alongside the other obvious candidates – for example task-oriented pages, common templates, and a random sample as well.
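
Assembling such a sample could look roughly like this – a sketch only, with made-up URL lists, not a prescribed WCAG-EM procedure:

```javascript
// Sketch: build a review sample from the top of the automated error
// ranking, the obviously relevant pages (templates, task flows), and a
// random slice of the remaining URLs. All inputs here are hypothetical.
function buildSample(rankedUrls, structuredUrls, allUrls, randomCount) {
  const sample = new Set([...rankedUrls, ...structuredUrls]);
  const rest = allUrls.filter((u) => !sample.has(u));
  const target = rankedUrls.length + structuredUrls.length + randomCount;
  // Naive random pick; a real WCAG-EM audit would document the selection.
  while (sample.size < target && rest.length) {
    const i = Math.floor(Math.random() * rest.length);
    sample.add(rest.splice(i, 1)[0]);
  }
  return [...sample];
}

const sample = buildSample(
  ["/a", "/b"],          // worst pages from the automated ranking
  ["/b", "/c"],          // templates and task-oriented pages
  ["/a", "/b", "/c", "/d", "/e"], // all known URLs
  1                      // random extras
);
```

The Set takes care of overlap between the ranked pages and the structured sample, so a page never appears twice in the review list.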

Additional factors that may be detected with automatic tools

If we think about it – a page can have a lot of accessibility errors but little reach, and the other way around. So we should take other factors into our evaluations as well. And we must also think beyond reach – for example, how complex the page is.

Page reach – for example pages with most visits

It makes sense, right – pages with no errors and no reach are probably not the first thing we should evaluate manually. So we should include the pages that reach the most people in our selection. We can, for example, export that data from our favorite website analytics tool, or even use its API if it has one.
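
One simple way to combine the two signals – purely a sketch, with an invented formula and invented numbers, where the visit counts would come from an analytics export:

```javascript
// Sketch: weight automated error counts by page reach. Log-scaling the
// visits keeps one extremely popular page from drowning out everything
// else. The formula is an illustrative assumption, not a standard.
function priorityScore(errors, visits) {
  return errors * Math.log10(visits + 1);
}

const pages = [
  { url: "/rarely-visited", errors: 30, visits: 9 },
  { url: "/landing", errors: 5, visits: 99999 },
];

pages.sort(
  (a, b) => priorityScore(b.errors, b.visits) - priorityScore(a.errors, a.visits)
);
```

With these numbers the rarely visited but error-ridden page still wins (30 × log10(10) = 30 versus 5 × log10(100000) = 25), which is the kind of trade-off the weighting has to make tunable.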

Page complexity – compare pages based on their elements and events

I’ve been thinking about methods that could define how complex a page is and what kind of data defines that complexity. I don’t mean that the page has lots of text and images, but how complex the page is at the element level.
My conclusion is that we must at least consider the following:

  • number of elements,
  • semantics of elements,
  • events attached to elements

I think we can define page complexity based on these three metrics. If a page has a lot of elements with events attached, then it is most probably very interactive. If it is interactive, then developers most likely had to respect accessibility patterns to make it accessible. So we should definitely check those pages manually.

On the other hand, some semantic elements reflect more complexity, while other elements have no semantic value at all, so we need to think about those two groups differently. At the same time, there are also elements that define nothing about the content and carry only meta information.

That’s why we have to treat their values differently, and here we can use different weights for them. A form element weighs much more than a span element does. And every page must have a title element, so it is not as important as, for example, a button.
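
A minimal sketch of such a weighted score – the weights here are invented for illustration, and as noted later, finding “fair” weights is the genuinely hard part:

```javascript
// Sketch: weighted page complexity from element counts and event count.
// WEIGHTS and EVENT_WEIGHT are illustrative assumptions, not calibrated.
const WEIGHTS = { form: 10, button: 5, a: 3, title: 1, span: 1, div: 1 };
const EVENT_WEIGHT = 4; // every attached event adds to complexity

function complexityScore(elementCounts, eventCount) {
  const elementScore = Object.entries(elementCounts).reduce(
    (sum, [tag, count]) => sum + (WEIGHTS[tag] ?? 1) * count,
    0
  );
  return elementScore + EVENT_WEIGHT * eventCount;
}

// A form-heavy page with many wired-up events scores far higher than a
// text-heavy page with a similar number of elements.
const interactive = complexityScore({ form: 2, button: 6, span: 20 }, 15);
const textHeavy = complexityScore({ div: 20, span: 20, a: 3 }, 0);
```

Unknown tags fall back to weight 1, so the score degrades gracefully when the crawler meets elements the table does not list.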

When counting events – especially in this JavaScript-oriented world – we must be aware that adding events to elements really does add to complexity. And measuring events alone – not just the interactive elements that may have events – really does indicate that a page can be far more complex than its HTML alone suggests. This is especially true when developers attach events to elements that have no semantic value. That is most definitely one of the root causes of accessibility issues out there, and it can easily be caught with simple static code analysis.
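
Even a naive check catches the worst offenders. The sketch below only looks at inline handler attributes in an HTML string – a real tool would inspect the live DOM, since listeners added via addEventListener leave no trace in the markup:

```javascript
// Sketch: flag inline click handlers on elements without interactive
// semantics. A regex over markup, not a real parser -- illustration only.
const NON_SEMANTIC = ["div", "span"];

function findSuspiciousHandlers(html) {
  const pattern = /<(\w+)[^>]*\bonclick\s*=/gi;
  const hits = [];
  let match;
  while ((match = pattern.exec(html)) !== null) {
    const tag = match[1].toLowerCase();
    if (NON_SEMANTIC.includes(tag)) hits.push(tag);
  }
  return hits;
}

const markup = `
  <button onclick="save()">Save</button>
  <div onclick="openMenu()">Open menu</div>
  <span onClick="toggle()">Toggle</span>`;

const suspicious = findSuspiciousHandlers(markup);
```

The button passes, while the div and the span – clickable elements a keyboard or screen reader user cannot reliably operate – get flagged for manual review.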

Last but not least – content complexity

We can argue that a page with many pictures, lots of semantic elements wrapping the text, or even video can hide potential accessibility issues. One of the most prevalent, according to WebAIM’s annual accessibility analysis of a million pages and also based on my own experience, is missing alternative text. Having a lot of images with empty alt attributes does not count as an error for automatic tools – even when the images are not decorative. That’s why we need to check them manually. The same goes for images that actually have alternative text: is the text really adding to the content, or is it just some random string? Maybe somebody reused the same alternative text on an image that appears in different contexts. Again – there is currently no other way than manual review.
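
What a tool can do is queue those images up for a human. A naive sketch – again a regex over markup, not a parser:

```javascript
// Sketch: collect images with an empty alt attribute for manual review.
// Automatic tools accept alt="" as valid (decorative), so a person has
// to judge whether each one really is decorative.
function imagesWithEmptyAlt(html) {
  const pattern = /<img\b[^>]*\balt\s*=\s*(""|'')[^>]*>/gi;
  return html.match(pattern) ?? [];
}

const snippet = `
  <img src="logo.svg" alt="Company logo">
  <img src="chart.png" alt="">
  <img src="divider.png" alt=''>`;

const reviewQueue = imagesWithEmptyAlt(snippet);
```

The chart image in the demo is exactly the dangerous case: an empty alt that every checker accepts, hiding real content from assistive technology.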

So if a page’s content has a lot of text paragraphs, links, images, and videos – it may also have a lot of accessibility issues.

There is also the matter of text complexity – how understandable is the text on the page? If content is not understandable, it can cause problems for some users, and we should not neglect that. This goes especially for pages with a big impact and a wide audience, such as official web pages – medical, tax, financial and so on.

WCAG, in its Understandable principle, gives us some hints about that. We would need quite advanced natural language processing (NLP) to detect all the level AAA success criteria that fall under guideline 3.1. Unusual words, abbreviations, reading level, and pronunciation are currently not covered by any of the automatic tools I have used so far.
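
A crude first signal can still be computed without NLP. The sketch below uses average words per sentence as a readability proxy – an assumption of mine, nowhere near a real check for the 3.1 success criteria:

```javascript
// Sketch: average words per sentence as a rough readability proxy.
// Real reading-level checks (unusual words, abbreviations,
// pronunciation) need proper NLP; this only catches very dense prose.
function avgWordsPerSentence(text) {
  const sentences = text
    .split(/[.!?]+/)
    .map((s) => s.trim())
    .filter(Boolean);
  const words = sentences.reduce((n, s) => n + s.split(/\s+/).length, 0);
  return sentences.length ? words / sentences.length : 0;
}

const simple = "We help you. Call us today.";
const dense =
  "Pursuant to the aforementioned statutory provisions, applicants must " +
  "hereby furnish corroborating documentation.";
```

Pages whose average climbs well above typical plain-language guidance could then be bumped up the manual review queue.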

An ideal automatic tool should take page complexity into account

I’m quite familiar with the automatic tooling out there and was not able to find a tool that takes page complexity into account. In theory it is a simple task, unless we want to cover WCAG guideline 3.1 at level AAA. Static code analysis of the rendered DOM – the code generated by JavaScript, not just the raw HTML we get from the server – is quite simple. The hard part, for me, is the statistical weighting – an algorithm that is “fair” and really provides meaningful scoring.

My page complexity experiment with aXeSiA

I just added my simple page complexity calculations to my pet project for automatic accessibility evaluations – aXeSiA – and I still have to verify and test them to be confident about their real value, but I have already seen some rewarding results. Measuring events on a page helps a lot in particular, as developers still abuse non-semantic elements for interactivity, and I am afraid they will keep doing so in the future.

Author: Bogdan Cerovac

I am an IAAP certified Web Accessibility Specialist (since 2020) and was a Google certified Mobile Web Specialist.

I work as an agency co-owner, web developer, and accessibility lead.

Sole entrepreneur behind IDEA-lab Cerovac (Inclusion, Diversity, Equity and Accessibility lab) after work.

Living and working in Norway (🇳🇴), originally from Slovenia (🇸🇮), and I love exploring the globe (🌐).

Nurturing the web since 1999, and this blog since 2019.

More about me and how to contact me: