The Web Accessibility Directive (WAD) requires public sector websites to publish accessibility statements. WAD applies not only to EU countries but also to Norway. The Norwegian authority for accessibility (The Authority for Universal Design of ICT) decided to use a centralized database for all accessibility statements, and as I have written before, I think centralizing accessibility statements is brilliant for multiple reasons. Centralization makes reporting very efficient, and the state of accessibility on a large scale becomes very simple to summarize, as I will show in this post.
As always – data from accessibility statements isn’t always true
We all make mistakes. Accessibility is still very new to a lot of people, and we must be realistic: accessibility statements are sometimes not entirely true. Some people are not comfortable being transparent about accessibility flaws they are aware of, and some simply don't have the knowledge needed to evaluate websites against the Web Content Accessibility Guidelines.
Some public sector accessibility statements are also written by third-party software providers that only evaluate their own parts of the websites and don't evaluate the content itself, as that is often not their responsibility. And, I guess, some providers also can't, or don't want to, reveal all of the accessibility failures they have made.
So I think the relevance of self-evaluations needs to be weighed against the overall data quality. This will probably improve with time and knowledge, but we need to be aware that reality is probably less accessible than reported. It is also in the nature of the WCAG evaluation methodology that we don't test absolutely all pages (it would take too much time), so we need to keep that in mind as well.
Self-evaluation shows 20% of websites conforming, automatic testing on similar websites shows only 0 to 1.5% without errors
5249 accessibility statements were published in the database. 79% of them declare partial conformance to WCAG 2.1 AA, only 1% declare non-conformance, and a full 20% state that they conform completely.
20% is way more than I would expect. It is probably also not true when we compare that number to the results of automatic WCAG testing done by me and others.
For example, I ran my own automatic WCAG testing on 356 websites of Norwegian municipalities, and only 5 of them had no automatically detected errors. That is approximately 1.5%.
My tests were limited and didn't scan absolutely all pages, but Accessibility Cloud did test almost all pages, and they actually found WCAG errors on at least one page of every municipality – so by that measure 0% of municipal websites conform to WCAG.
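For illustration, here is a minimal sketch of how such an automatic scan could be set up with Playwright and axe-core. This is not necessarily the exact tooling behind the numbers above, and the URLs are placeholders, not the actual test set:

```typescript
// Minimal sketch: scan homepages for automatically detectable
// WCAG 2.1 A/AA violations using axe-core driven by Playwright.
// Assumes `playwright` and `@axe-core/playwright` are installed.
import { chromium } from 'playwright';
import AxeBuilder from '@axe-core/playwright';

// Placeholder URLs – the real test covered 356 municipality websites.
const urls = [
  'https://example-municipality-1.no',
  'https://example-municipality-2.no',
];

async function scan(): Promise<void> {
  const browser = await chromium.launch();
  let clean = 0;

  for (const url of urls) {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: 'networkidle' });

    // Restrict axe-core to WCAG 2.x level A and AA rules.
    const results = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
      .analyze();

    console.log(`${url}: ${results.violations.length} violation types`);
    if (results.violations.length === 0) clean++;
    await page.close();
  }

  await browser.close();
  console.log(`${clean} of ${urls.length} sites had no detectable errors`);
}

scan();
```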
A recent comparison of the accessibility of e-government websites in Europe places Norway in first place among all European countries: 8 of the 50 WCAG success criteria were tested on homepages, and 84% of Norwegian eGovernment websites met all of them. That is quite impressive at first sight, but knowing that only 8 success criteria were tested, only on homepages, and that not even all of those 8 can be fully tested automatically, we can conclude that this result is again not a realistic measure of full conformance.
Table of failing WCAG success criteria
Although the data is not optimal, as mentioned earlier in the post, we can still use it to spot some general trends and indications of accessibility failures. As we know, automatic testing only covers parts of WCAG, so it is always interesting to check data that potentially goes beyond what automated tools can detect.
WCAG Success Criterion | Percent of statements reporting failure |
---|---|
1.1.1 Non-text Content | 41 |
1.4.11 Non-text Contrast | 34 |
1.3.1 Info and Relationships | 34 |
4.1.1 Parsing | 31 |
4.1.2 Name, Role, Value | 27 |
2.1.1 Keyboard | 26 |
2.4.7 Focus Visible | 25 |
1.4.3 Contrast (Minimum) | 25 |
2.4.4 Link Purpose (In Context) | 23 |
3.1.2 Language of Parts | 22 |
1.4.4 Resize text | 21 |
1.2.2 Captions (Prerecorded) | 21 |
2.4.1 Bypass Blocks | 21 |
2.5.3 Label in Name | 20 |
1.4.2 Audio Control | 20 |
1.4.13 Content on Hover or Focus | 18 |
2.4.6 Headings and Labels | 17 |
3.1.1 Language of Page | 16 |
4.1.3 Status Messages | 16 |
3.3.2 Labels or Instructions | 16 |
1.2.1 Audio-only and Video-only (Prerecorded) | 16 |
2.4.5 Multiple Ways | 14 |
2.4.3 Focus Order | 14 |
1.4.12 Text Spacing | 14 |
2.1.2 No Keyboard Trap | 14 |
1.4.1 Use of Color | 14 |
3.3.1 Error Identification | 14 |
1.3.5 Identify Input Purpose | 14 |
3.3.3 Error Suggestion | 12 |
1.4.5 Images of Text | 12 |
2.4.2 Page Titled | 11 |
1.3.4 Orientation | 9 |
1.3.2 Meaningful Sequence | 9 |
1.3.3 Sensory Characteristics | 8 |
3.2.1 On Focus | 7 |
3.2.4 Consistent Identification | 7 |
3.3.4 Error Prevention (Legal, Financial, Data) | 6 |
2.2.1 Timing Adjustable | 5 |
2.5.2 Pointer Cancellation | 5 |
2.2.2 Pause, Stop, Hide | 5 |
3.2.3 Consistent Navigation | 4 |
3.2.2 On Input | 3 |
2.5.4 Motion Actuation | 3 |
2.1.4 Character Key Shortcuts | 3 |
2.5.1 Pointer Gestures | 3 |
2.3.1 Three Flashes or Below Threshold | 1 |
1.4.10 Reflow | 1 |
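As a side note on how centralization makes such summaries easy to produce: a short script over an export of the statements is basically all it takes. Here is a minimal sketch; the JSON shape is my assumption, as I don't know the actual export format of the database:

```typescript
// Minimal sketch: tally how often each WCAG success criterion is
// reported as failing across all statements in a (hypothetical)
// JSON export of the centralized database.
import { readFileSync } from 'node:fs';

interface Statement {
  url: string;
  failingCriteria: string[]; // e.g. ["1.1.1", "1.4.11"] – assumed shape
}

const statements: Statement[] = JSON.parse(
  readFileSync('statements.json', 'utf8'),
);

// Count statements per failing success criterion.
const counts = new Map<string, number>();
for (const statement of statements) {
  for (const sc of statement.failingCriteria) {
    counts.set(sc, (counts.get(sc) ?? 0) + 1);
  }
}

// Print criteria sorted by failure rate, like the table above.
[...counts.entries()]
  .sort(([, a], [, b]) => b - a)
  .forEach(([sc, n]) => {
    const pct = ((n / statements.length) * 100).toFixed(0);
    console.log(`${sc}\t${pct}%`);
  });
```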
I don't have any information about the share of automatic testing used to help generate the accessibility statements, but I suspect that some statements base their findings on automated tools as well. Just another thing to keep in mind.
Conclusion – centralization helps, but only if data is correct
As mentioned several times in several posts of mine, centralization helps a lot when processing the data. It helps produce consistent accessibility statements. It helps authorities check whether a public sector body has an accessibility statement and when it was published or updated. And it helps with reporting, which in turn makes the efforts more transparent.
But – and it's a big one – the people responsible for filling out the accessibility statements need to be knowledgeable about WCAG and about how people with disabilities struggle with digital barriers.
It is also not clear whether the authorities run large-scale automatic testing on the URLs they have in the database and compare the results against the accessibility statements to spot potential problems proactively. I suspect they may do so in the near future if they don't already – because that would be my next logical step. Just a thought I had when considering the situation.
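If they wanted to, the core of such a proactive cross-check could look something like this sketch – reusing the Playwright and axe-core setup from earlier, with an assumed record shape for the statements (not the real database schema):

```typescript
// Minimal sketch: flag sites whose statement claims full WCAG
// conformance but that still show automatically detectable errors.
// The record shape and status values are assumptions.
import { chromium } from 'playwright';
import AxeBuilder from '@axe-core/playwright';

interface StatementRecord {
  url: string;
  status: 'conforms' | 'partially-conforms' | 'does-not-conform';
}

async function crossCheck(records: StatementRecord[]): Promise<void> {
  const browser = await chromium.launch();

  // Only claimed-conformant sites need this sanity check.
  for (const record of records.filter((r) => r.status === 'conforms')) {
    const page = await browser.newPage();
    await page.goto(record.url, { waitUntil: 'networkidle' });
    const { violations } = await new AxeBuilder({ page }).analyze();

    if (violations.length > 0) {
      // A mismatch is a candidate for manual follow-up.
      console.log(
        `${record.url} claims conformance but has ` +
          `${violations.length} violation types`,
      );
    }
    await page.close();
  }

  await browser.close();
}
```

Any mismatch found this way would not prove non-conformance on its own, but it would be a cheap, automatic way to prioritize manual reviews.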