I’ve been looking into automatic solutions for generating the alternative text that every piece of graphical content must have. What I’ve seen is that although these tools are getting better and better at recognizing the objects inside pictures, they provide little value when it comes to writing contextual alternative text.
So, my thesis is that authors are still the best source of good alternative text, and they should not let computer vision alone decide what their graphic is trying to say in the context of their content.
Authors of the future will still need to think about alternative text; artificial intelligence will not be enough
We can easily imagine how it would be if our authoring tool simply ran the article and its images through a machine-learning-powered computer vision tool or API, and all of our graphical elements got the best possible alternative text auto-magically.
Such automatic tools can help authors when it comes to looking for synonyms or phrases with equal meaning, but the tool itself should not write the whole alternative text for the author.
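To make the limitation concrete, here is a minimal sketch of the imagined pipeline. Everything in it is hypothetical: `detect_objects` stands in for whatever computer vision API the authoring tool might call, and is stubbed with canned labels rather than a real service.

```python
def detect_objects(image_path):
    # Stub for a hypothetical vision API call; a real tool would send
    # the image to a service and get back labels with confidence scores.
    return [("person", 0.97), ("laptop", 0.91), ("coffee cup", 0.83)]

def auto_alt_text(image_path, min_confidence=0.8):
    # Join the recognized objects into a generic description --
    # accurate about *what* is pictured, silent about *why* it is there.
    labels = [label for label, score in detect_objects(image_path)
              if score >= min_confidence]
    return "Image containing: " + ", ".join(labels)

print(auto_alt_text("hero.jpg"))
# The generated text lists objects, but only the author, knowing the
# article, can say what the image means in context.
```

The gap the sketch illustrates: the machine can only describe the pixels, while good alternative text describes the image's role in the surrounding content.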
Alt text could perhaps be automated if our interfaces ever gain direct access to our thoughts
If we try to make an educated guess about future interfaces, there is none more direct than a connection between the human brain and the computer.
I will not go into the ethics and the potential for abuse, but if we take only the good from its possibilities, we can agree that a computer able to read our minds could also automate the alternative text on our graphical elements, if we let it.