Potential problems with accessibility audits and what to do about them

Don’t assume that an accessibility audit alone will make things accessible. It can even mean an ineffective use of resources, as there are several things you need to consider before just auditing.

Auditing digital products for accessibility takes time, at least when done properly. Depending on our conformance target, we need to check more than 50 success criteria per unit (webpage, native application screen, or other similar interface unit). Admittedly, some success criteria only apply to specific elements, for example video or audio, and those are usually not present on every unit of the user interface, so we don’t need to spend much time checking them. Other parts are repeated across multiple units and need to be checked only once. And there are parts that differ from page to page or from screen to screen and need to be checked independently. So we can conclude that the time needed for an audit depends a lot on the structure and uniformity of the user interface.
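As a rough illustration of how shared components shrink audit effort, here is a minimal back-of-the-envelope sketch. All numbers (minutes per criterion, shared fraction) are hypothetical assumptions, not audit-industry figures:

```python
# Rough audit-effort sketch with hypothetical numbers: per-unit time depends
# on how many success criteria apply and how much of the UI is shared.

def estimate_audit_hours(units, criteria_per_unit, minutes_per_criterion,
                         shared_fraction):
    """Estimate audit effort in hours.

    units: number of pages/screens in scope
    criteria_per_unit: success criteria to check per unit (e.g. ~50 for AA)
    minutes_per_criterion: average checking time per criterion
    shared_fraction: portion of each unit made of repeated components,
                     which only need one full check (on the first unit)
    """
    full_unit = criteria_per_unit * minutes_per_criterion
    # The first unit gets a full check; later units skip the shared parts.
    later_unit = full_unit * (1 - shared_fraction)
    total_minutes = full_unit + later_unit * (units - 1)
    return total_minutes / 60

# Example: 20 screens, 50 criteria, 3 min each, 60% shared components.
print(round(estimate_audit_hours(20, 50, 3, 0.6), 1))  # → 21.5
```

A fully bespoke 20-screen interface (shared_fraction of 0) would come out at 50 hours in this model, which is the point: uniformity of the interface dominates the estimate.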

Keeping this in mind, we can agree that auditing takes time, and time is money. So before we proceed with auditing, we need to plan so that our invested time provides the most value. If we don’t plan, we may find potential problems hiding behind the good intentions of accessibility auditing.

Scope and timing

Making an audit means measuring the accessibility of a scoped set of parts at a particular point in time. This obviously means we need to be careful when deciding what to audit and when to audit it. And this is a potentially big problem with auditing: the scope may miss important parts, or we may run the audit just before a big release of new functionality. Project managers and stakeholders need to be really sure about when we should do the audit. Otherwise we can end up auditing something that is not really important, or that may even be removed.

We need to understand a lot to be able to scope well. And if we are external accessibility consultants, we need to be even more curious. We need quality data, we need to understand the user journeys we should focus on, we need to understand the development life cycles and what is being delivered, and we need to be efficient already in the scoping phase. Understanding the templates, components, and design system, along with the most popular, most visited, and most valuable parts, is crucial.

The scope can miss a lot if not defined correctly. Sometimes defining the scope is easy, but with huge websites, apps, and other digital products the risk of auditing the wrong parts grows. Timing is vital as well. Auditing takes time, and a total freeze of development is often not possible, as it could harm the business. So planning the timing is as important as planning the scope. Digital products are often living things that evolve fast, and we need to consider that when auditing, or we may just use resources inefficiently.

Sometimes we also need to really know the audience to scope correctly. I don’t mean average users, because there is no average user. I mean an audience that may need more than just conformance to WCAG at levels A and AA. Sometimes we need to reach above AA and check what we can do at AAA. That is also important to know before we define the scope and the conformance target.

Sometimes auditors don’t get access to end-user feedback, and that is also a big problem when defining scope. If feedback isn’t communicated properly, we can miss important parts of the user journey in our scope, and the barriers go unrecognized.

Over-relying on automatic testing

Automatic testing helps a lot, especially when dealing with large websites. But over-relying on it can make the situation feel better than it really is. We need to communicate the limits of such testing to all stakeholders, so that they are aware that automatic testing only covers roughly 30% of accessibility problems, and only the obvious ones at that. Automatic testing is software as well and can have false positives, false negatives, or even plain software bugs.

Getting a score of 100/100 from any automatic testing tool is only a start. Claiming that our page is 99/100 accessible while relying only on automatic accessibility testing means that we don’t understand accessibility at all. A score of 99/100 only reflects the automatically detectable issues; we may still fail the roughly 70% of WCAG that such tools cannot check.
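To make this concrete, here is a deliberately naive sketch of the kind of check an automated tool performs. It is a toy stand-in, not how any real tool (such as axe-core) is implemented: it flags a missing alt attribute, but happily passes a meaningless one that only a human reviewer would catch:

```python
import re

def naive_alt_check(html):
    """Toy automated check: True if every <img> tag has an alt attribute
    (with ANY value) - it cannot judge whether the text is meaningful."""
    imgs = re.findall(r"<img\b[^>]*>", html)
    return all(re.search(r"\balt\s*=", img) for img in imgs)

detectable = '<img src="chart.png">'                  # no alt at all
undetectable = '<img src="chart.png" alt="IMG_4821">' # alt present, useless

print(naive_alt_check(detectable))    # → False: the tool catches this
print(naive_alt_check(undetectable))  # → True: "100/100", still a barrier
```

The second page would score perfectly in this check while remaining inaccessible to screen reader users, which is exactly why a high automated score is a starting point, not a conformance claim.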

Audits that don’t provide proper remediation suggestions

Getting information about what is failing is one thing. But a lack of good remediation suggestions, especially on mobile, can really make the audit a waste of time. If people can’t act on the audit, they will throw it away.

Maybe they will write an accessibility statement based on the audit and just move on, continuing to build accessibility barriers.

An audit needs to provide remediation suggestions that are understandable and practical. It also needs to state how serious each accessibility issue is for end users and how simple it is to fix. That way we can get together and set priorities, plan the development lifecycle with built-in accessibility remediation, and estimate accessibility fixes more efficiently.
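One simple way to turn those two dimensions into priorities can be sketched as follows. The field names, scores, and the tie-breaking rule are assumptions for illustration, not a standard scoring scheme:

```python
# Hypothetical prioritization sketch: each finding carries a user-impact
# score and a fix-effort score (here 1 = low, 3 = high), and we sort so
# that high-impact issues come first, with cheaper fixes breaking ties.

def prioritize(findings):
    """Sort findings: highest user impact first, lowest fix effort on ties."""
    return sorted(findings, key=lambda f: (-f["impact"], f["effort"]))

findings = [
    {"issue": "Low contrast on buttons",     "impact": 2, "effort": 1},
    {"issue": "Missing form labels",         "impact": 3, "effort": 1},
    {"issue": "No captions on intro video",  "impact": 3, "effort": 3},
    {"issue": "Decorative image not hidden", "impact": 1, "effort": 1},
]

for f in prioritize(findings):
    print(f["issue"])
# → Missing form labels
# → No captions on intro video
# → Low contrast on buttons
# → Decorative image not hidden
```

Even a rough ordering like this gives developers a defensible place to start, which is the real value of an audit that reports severity and effort rather than a flat list of failures.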

Lack of code access on mobile is extremely problematic. Auditors that only have access to binaries are limited in how thoroughly they can audit the app. Ideally, auditors need access to the design, the code, and the app itself. Otherwise it’s not possible to really know what is wrong and how to fix it.

Re-test can miss a lot

I just had a discussion with a client who only wanted me to re-test the parts that were supposedly fixed after the previous audit. But when I started, the mobile application had changed so much that I needed to get back to them and suggest that new parts needed checking as well, especially where there were obvious errors.

In my opinion, this is a problem that can be solved with proper project management. Auditing sooner and more often would make this less of a problem. Doing an audit after large parts have changed most certainly just produces another backlog with an overwhelming number of issues that need to be documented, prioritized, and fixed.

It also means that shift-left was not a part of the project. Audits cost, absolutely, but waiting with them until the end costs even more.

When stakeholders understand that accessibility is not a project but a process, it should also be clear that shifting it left and integrating it would be easier for all parties.

Audit at the end and fail

As a conclusion, even though I could probably write a short book instead of this post, I can only reflect on the fact that auditing at the end means we have failed to deliver an accessible product. If accessibility wasn’t a part of the development lifecycle at all times, we are doomed to a huge backlog of issues that need testing, documenting, prioritizing, fixing, and re-testing.

It should be obvious that checking accessibility only at “the end” will just push issues down the backlog and probably lead to an inaccessible product, even if we spent quite a lot of time and money on the audit.

Therefore, make sure to audit earlier and integrate accessibility from the start; then the audit at the end should not find much that needs fixing, because we actively prevented problems from occurring in the first place.

Author: Bogdan Cerovac

I am an IAAP-certified Web Accessibility Specialist (since 2020) and was a Google-certified Mobile Web Specialist.

I work as a digital agency co-owner, web developer, and accessibility lead.

After work, I am the sole entrepreneur behind IDEA-lab Cerovac (Inclusion, Diversity, Equity and Accessibility lab). Check out my Accessibility Services if you want me to help you with digital accessibility.

I am also head of the expert council at the Institute for Digital Accessibility A11Y.si (in Slovenian).

I live and work in Norway (🇳🇴), am originally from Slovenia (🇸🇮), and love exploring the globe (🌐).

Nurturing the web since 1999, and this blog since 2019.

More about me and how to contact me: