Better accessibility context for your artificial intelligence agents – help them help you


Model Context Protocol (MCP) is not perfect, but it is a good start – it can provide reality for Artificial Intelligence (AI). Fewer hallucinations help with inclusion.

I am trying to get the best out of this situation, this new reality, where (Gen)AI (I will just use AI here to keep it short, but I mostly mean the latest generative artificial intelligence) is integrated into our tools and also our lives. When we look at the state of digital accessibility, we still see a lot of issues, after all these years. AI learned from them, so it often reflects this built-in bias, and at the same time we cannot prevent hallucinations.

But there are now ways to help the AI by providing it with reality, for example by providing accessibility semantics on request – vital context for the so-called AI agents that can help us vibe code or at least offer better autocomplete. Leaving AI to its own interpretations is obviously dangerous: most of the time it leads to less accessibility. So adding semantics to the feedback loop is, in my opinion, an improvement.

An example of what can help

Let’s think about the following process:

A developer wants a feature and prompts for it; it needs to be accessible, which is a predefined requirement shared in the team. The AI provides a suggestion, but before it does, it can actually check whether the semantics of the suggestion match the acceptance criteria. If it is not sure, it can run the suggested code and inspect the actual accessibility tree for the suggestion. This needs to happen quite fast and automatically, behind the scenes. A process like this adds an additional layer of self-checking that can help with the accessibility of the final suggestion – the AI will not just hallucinate about semantics, it will actually check them.
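To make this more concrete, here is a minimal sketch of such a self-check, assuming Puppeteer; the acceptance criterion (a real button named "Save" must exist in the accessibility tree) is a made-up example, not from any actual tool.

```typescript
// A minimal sketch of a semantic self-check, assuming Puppeteer.
// The acceptance criterion ("a button named Save must exist") is a made-up example.
import puppeteer, { SerializedAXNode } from "puppeteer";

// Recursively search the accessibility tree for a node with a given role and name.
function hasNode(node: SerializedAXNode | null, role: string, name: string): boolean {
  if (!node) return false;
  if (node.role === role && node.name === name) return true;
  return (node.children ?? []).some((child) => hasNode(child, role, name));
}

async function checkSuggestion(suggestedHtml: string): Promise<boolean> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setContent(suggestedHtml);
  // interestingOnly: false returns the full accessibility tree, not a pruned one.
  const tree = await page.accessibility.snapshot({ interestingOnly: false });
  await browser.close();
  return hasNode(tree, "button", "Save");
}

// A clickable <div> would fail this check; a real <button> passes.
checkSuggestion("<button>Save</button>").then((ok) =>
  console.log(ok ? "Semantics match the acceptance criteria" : "Semantics missing"),
);
```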

Sure, the developer still needs to understand what they really want and how to verify it. We often check the accessibility tree in our browser's developer tools for the semantics. And if this same accessibility tree is already part of the context, we improve the chances that the AI provides us with a better suggestion.

The next step is surely an AI agent that can also check the output with assistive technologies, for example screen readers. As we know, the accessibility tree is not perfect either, and assistive technologies also use heuristics of their own, so we need to check with them as well.

We have different opinions and feelings about AI – that is human – and I am not telling anybody they must use AI. I just see that a technology like this already does a lot of good, and that it is up to us humans to use it for more good.

I am very biased as well – a white male in a Western country with high technology adoption rates – and my experience is that AI is a tool that is becoming part of our daily work. I hope we will get more sustainable AI, running on our own devices, for free, with less built-in bias and the possibility to remove unwanted biases entirely. And I am glad that people are at least trying to set some ethical guardrails with acts like the EU AI Act.

Model Context Protocol (MCP) can be used to provide real accessibility information so that AI does not need to imagine

And this is crucial for better, more effective and less hallucination-prone usage of AI. MCP provides a way for AI models/agents (like, for example, the famous ChatGPT) to connect with our tools and data in the way we want them to. It is like plugging the model into our world and providing it with our specific context, instead of just letting it “imagine”, guess and “hallucinate”.
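To illustrate the plumbing, here is a minimal sketch of an MCP server exposing a single tool, assuming the official TypeScript SDK (@modelcontextprotocol/sdk); the page_title tool is a made-up example of handing the model a piece of our reality instead of letting it guess.

```typescript
// A minimal sketch, assuming the official TypeScript MCP SDK.
// The "page_title" tool is a made-up illustration, not a real server.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "example-context", version: "1.0.0" });

// A tool the AI agent can call to get real data instead of guessing it.
server.tool("page_title", { url: z.string().url() }, async ({ url }) => {
  const html = await (await fetch(url)).text();
  const title = /<title>(.*?)<\/title>/i.exec(html)?.[1] ?? "(no title)";
  return { content: [{ type: "text", text: title }] };
});

// Talk to the client (for example, a code editor) over stdio.
await server.connect(new StdioServerTransport());
```

The agent decides when to call such a tool, and the result becomes part of its context – no imagining needed.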

In terms of accessibility context – mostly for developers but also others – we can now use MCP as a feedback provider when we develop websites (and I guess mobile apps will follow soon).

We like to promote – practically it is our mantra – shifting accessibility to the left, which means planning for accessibility as early as possible, and also testing for it as early as possible. That leaves developers needing to understand the results – the impact of code on accessibility. It can be overwhelming with all the semantics of HTML, the influence of CSS and JavaScript on accessibility, and even more so when the Accessible Rich Internet Applications (ARIA) standard enters the picture. MCPs now allow us to provide AI with semantics, just like we are used to when we test and develop in our browser developer tools, where we can see the actual accessibility tree and observe all the semantics – roles, states, values and so on. AI can now get this same exact context automatically.

This means that we can provide initial semantic context and then integrate it into the feedback loop, hopefully making AI more efficient at understanding the situation.

I use the accessibility tree view in developer tools a lot. My code editor runs an AI agent that can make me faster and more efficient, but I often need to provide a lot of semantic context to get usable results. With MCP-in-the-loop – an MCP (or even better, multiple MCPs) that provides the real accessibility tree, basically the same information that is presented to assistive technology (still with some exceptions) – I can already see that I am more effective and that the AI works better and delivers better code, as it has proper, up-to-date context.

Chrome DevTools MCP server can help you with accessibility context

A general warning is in order – MCPs are the hot thing now, and have been for some time. With good reason, as I tried to explain above (and beyond accessibility alone). But not all of them are there yet – so please do your own due diligence, and try to provide feedback to their creators to help us all.

So – please be very careful when testing them, as they can execute remote code and may present a security threat, since they can access your systems and data. Not to forget that you may be sending (directly or indirectly) personally identifiable information (PII) to a third party when using remote services (remember GDPR?)!

Back to Chrome DevTools MCP – it was announced in September 2025 (opens in new window) and it is highly focused on performance and debugging; you will not see a lot of accessibility documentation on the official pages. That caused me to dig in and check what the possibilities are. And I only found a single relevant tool in their MCP's offering – take_snapshot (opens in new window).

Take_snapshot basically provides page elements along with some basic semantics and their unique identifiers (so that the AI can, for example, click on them). When I tested it on a complex website I knew well, it was missing a lot of semantics, so I dug into the source code.

There I found that the MCP is actually calling Puppeteer's Accessibility.snapshot function, which has an option called interestingOnly (opens in new window) that is set to true by default – setting it to false returns the whole accessibility tree. So, yes, I finally read the documentation of the MCP and there it was – I could run take_snapshot with verbose:true on https://localhost and would then get the whole semantics back.
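As a sketch, this is roughly what the underlying Puppeteer call looks like with and without that option (the URL is just the placeholder from above):

```typescript
// Sketch of the Puppeteer call that sits behind the snapshot, as I understand it.
import puppeteer from "puppeteer";

const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto("https://localhost"); // placeholder URL

// Default behavior: a pruned tree of "interesting" nodes only.
const pruned = await page.accessibility.snapshot();

// interestingOnly: false returns the whole accessibility tree –
// the full semantic context we want to hand to the AI.
const full = await page.accessibility.snapshot({ interestingOnly: false });

console.log("default (pruned):", JSON.stringify(pruned, null, 2));
console.log("full tree:", JSON.stringify(full, null, 2));
await browser.close();
```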

Please note that there may be some bugs (for example, I reported to the team that table semantics were not appropriate in the snapshot (opens in new window)), but this snapshot of the whole accessibility tree really is golden for providing AI with real context about the output, from the real browser, instead of just letting it hallucinate.

I also wondered why they did not expose Lighthouse accessibility issues (opens in new window), as I believe that providing them could help with accessibility, at least for people just starting out, by offering them possibilities close to the tools they already know. But we will see if the team recognizes this.

We still need human knowledge and understanding, and we will continue to need them in the future

Indeed – AI is still a tool, and we need to know how to use it and what input to provide to it. Not only that – we really need to understand the outputs as well, and how assistive technologies will interpret them. And yes – it is still essential to understand the users, including people with disabilities. All this understanding remains essential, as we need to do our best to prevent barriers.

With an MCP feedback loop we just provide the tool with better context, but we still need to be responsible and knowledgeable. I still check the accessibility tree to double-check, and I still test manually with different assistive technologies. I have seen that checking the accessibility tree in this phase is still a must, and so is testing with assistive technology.

This makes the shift to the left a bit better, and I still advocate for testing with people with disabilities. I hope we will get more organizations on board, both for improving accessibility efforts early on and for including diverse testers, including people with disabilities.

Author: Bogdan Cerovac

I am an IAAP-certified Web Accessibility Specialist (since 2020) and was a Google-certified Mobile Web Specialist.

I work as a digital agency co-owner, web developer and accessibility lead.

Sole entrepreneur behind IDEA-lab Cerovac (Inclusion, Diversity, Equity and Accessibility lab) after work. Check out my Accessibility Services if you want me to help you with digital accessibility.

Also head of the expert council at Institute for Digital Accessibility A11Y.si (in Slovenian).

Living and working in Norway (🇳🇴), originally from Slovenia (🇸🇮), loves exploring the globe (🌐).

Nurturing the web since 1999, and this blog since 2019.

More about me and how to contact me: