John Lennon’s “Imagine” makes one think of what a world would actually look like if everyone lived as one: an inclusive world that takes every individual into account, especially, in this day and age, in the digital realm.
Adding to this, Artificial Intelligence and Machine Learning are seeping into every sphere and turning the tables for digital accessibility.
This immersion of AI and ML holds the potential for a new age of accessibility on two fronts: the way AI is transforming the lives of differently-abled people (and how such technology is tested for accessibility), and the leveraging of AI and ML within accessibility engineering itself. Let’s look at these two notions in detail.
AI in Transforming the Lives of Differently-abled Users
Needless to say, AI has enhanced user experience over the last few years, and it has given digital accessibility a completely new trajectory as well, augmenting the experience of differently-abled users of AI-enabled applications. The core areas where AI has made an extraordinary impact within the domain are:
- Image Recognition: AI can generate alt text for images through vision APIs that recognize the contextual premise behind a picture. For visually-impaired users, missing alt text is a pain point that leads to a dismal user experience. AI enhances this experience by capturing, classifying, and recognizing the objects within an image that needs alt text, and creating that text automatically through robust neural networks. When conducting accessibility testing for platforms that do this, such as Facebook, alt texts that are missing or illogical can be identified and tested against for better, more realistic results.
- Speech Recognition: For motor-impaired users, speech recognition is an important facet of using the digital space efficiently. AI steps in by letting users issue voice commands through smart assistants such as Alexa or Google Assistant and accomplish their goals using speech alone. In accessibility testing, identifying the pain points motor-impaired users face when navigating such smart assistants, with the help of real users, delivers realistic results. Applications are also tested against established norms and guidelines for digital accessibility, for example through WCAG compliance testing; WCAG 2.1 introduced new success criteria covering speech input. Our domain experts at QA InfoTech use speech recognition tools such as Dragon, Voice Access, and Voice Control to test applications against WCAG 2.5.3 (Label in Name), which ensures that active UI components are accessible to motor-impaired users via voice commands.
- Facial Recognition: AI has done wonders for the accessibility of differently-abled individuals, here again motor-impaired users. With facial recognition, usability becomes all-inclusive: motor-impaired users can unlock a device with their face far more easily than by physically entering data. This AI technology can also become a leading alternative to CAPTCHA for users with cognitive disabilities. Accessibility testing here identifies how well the technology captures users’ facial features and authenticates users.
- Real-time Captioning: This AI technology helps hearing-impaired individuals by providing live captions for audio content. Leading players leverage the way AI comprehends speech and mouth patterns to create captions in real time. Although its accuracy is still a work in progress, it has helped users with partial or complete hearing loss consume audio content. Accessibility testing here identifies gaps where captions are incongruous or lag behind the audio.
- Abstract Summarization: For users with cognitive disabilities, AI has presented itself as a saviour by providing crisp, succinct summaries of long articles, emails, and other content that might otherwise be incomprehensible. By understanding the context of elaborate content, ML algorithms summarize it into short, understandable abstracts while retaining the useful information. Accessibility testing for such technology focuses on the contextual accuracy of the shortened abstract; a comprehensive test approach makes these pain points easy to mitigate.
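The alt-text gap described under Image Recognition above can be checked mechanically even before any AI is involved. Below is a minimal sketch in Python, using only the standard library, that flags images whose alt text is missing or is a meaningless placeholder; the placeholder word list is our own illustrative assumption, not any standard.

```python
from html.parser import HTMLParser

# Illustrative (assumed) list of alt texts that convey no real information.
PLACEHOLDER_ALTS = {"image", "img", "photo", "picture", "graphic"}


class AltTextAuditor(HTMLParser):
    """Collects <img> tags whose alt text is missing or a meaningless placeholder."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attr_map = dict(attrs)
        src = attr_map.get("src", "<unknown>")
        alt = attr_map.get("alt")
        if alt is None:
            self.issues.append((src, "missing alt attribute"))
        elif alt.strip().lower() in PLACEHOLDER_ALTS:
            # An empty alt ("") is treated as intentionally decorative, so it
            # is not flagged; only meaningless filler words are.
            self.issues.append((src, f"placeholder alt text: {alt!r}"))


def audit_alt_text(html: str):
    """Return a list of (src, problem) pairs for images that need attention."""
    auditor = AltTextAuditor()
    auditor.feed(html)
    return auditor.issues
```

A tool like this only finds presence/absence and obvious filler; judging whether existing alt text actually matches the image is where the AI stages discussed later come in.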
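The WCAG 2.5.3 (Label in Name) check mentioned under Speech Recognition can be reduced to a simple heuristic: the text visually shown on a control should appear within its accessible name, so a voice-command user can activate the control by saying what they see. The sketch below is our own simplification; the actual success criterion allows more nuance around punctuation, symbols, and word order.

```python
import re


def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so the two names compare cleanly."""
    return re.sub(r"\s+", " ", text.strip().lower())


def satisfies_label_in_name(visible_label: str, accessible_name: str) -> bool:
    """Heuristic for WCAG 2.5.3: the visible label text should appear within
    the programmatic (accessible) name of the control."""
    return normalize(visible_label) in normalize(accessible_name)
```

For example, a button showing "Search" with the accessible name "Search this site" passes, while a button showing "Send" whose accessible name is "Submit message" fails: a user saying "click Send" would get no response from the assistant.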
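To make the Abstract Summarization idea concrete, here is a deliberately naive extractive summarizer: it scores each sentence by the average document frequency of its non-stopword terms and keeps the top scorers in their original order. Production ML summarizers are far more sophisticated; this sketch only illustrates the "retain the useful information" principle, and the stopword list is an illustrative assumption.

```python
import re
from collections import Counter

# Illustrative (assumed) stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "is", "are", "was", "of", "to", "and",
             "in", "for", "on", "that", "this", "it", "with", "as", "by"}


def summarize(text: str, max_sentences: int = 2) -> str:
    """Naive extractive summary: keep the sentences whose content words are
    most frequent across the whole document, preserving original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(w for w in re.findall(r"[a-z']+", text.lower())
                   if w not in STOPWORDS)

    def score(sentence: str) -> float:
        terms = [w for w in re.findall(r"[a-z']+", sentence.lower())
                 if w not in STOPWORDS]
        return sum(freq[t] for t in terms) / (len(terms) or 1)

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(sorted(top, key=sentences.index))
```

Because the summary is built from sentences that already exist in the source, contextual accuracy, the property accessibility testing checks for, is easier to verify here than with abstractive (generative) summaries.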
AI has thus come a long way in making the digital space easier and more seamless for differently-abled individuals. However, it still has a long way to go on accuracy and reliability, which will never reach 100%: the human-centric approach cannot be replaced.
Accessibility testing for such applications, too, needs to involve real users who can surface discrepancies through human judgment, something AI or ML alone is far from achieving.
Leveraging AI and ML Within Accessibility Engineering
Although accessibility testing of AI applications has become a growing niche, leveraging AI within the broader umbrella of accessibility engineering is also an up-and-coming field. A specific pain point here is verifying alt text using AI and ML, a topic on which our domain experts at QA InfoTech conducted a webinar and for which we built a novel solution that incorporates AI into accessibility testing.
Automated accessibility testing via the various accessibility testing tools can only identify the presence or absence of an alt text. The grave pain point is identifying the relevance or context of that alt text, which plain automation cannot accomplish. Since relevance depends on intelligence, our solution leverages AI seamlessly in two stages:
- Using AI for image recognition: Within accessibility engineering, AI was leveraged to read images through ML vision APIs. This identified the objects within an image and produced keywords describing exactly what the image contained. To put these objects into concrete context, the second stage was brought into the picture.
- Using Natural Language Processing: NLP was leveraged to (a) identify the context of the image objects and (b) provide remediation and recommendations. The text surrounding an image was segmented into paragraphs and summarized, then mapped against the context produced in the first stage by the image AI. Coherent phrases and sentences were then generated from the classified keywords, suggestions or recommendations were provided accordingly, and remediations were supplied to fix incongruous alt text.
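The two stages above can be sketched as a relevance score. Given the keywords produced by stage one (assumed here to come from some image-recognition API; the inputs below are hypothetical), stage two measures how many of them are reflected in the published alt text, flagging low scorers for review and remediation.

```python
import re


def alt_text_relevance(alt_text: str, vision_keywords: list[str]) -> float:
    """Stage-2 heuristic sketch: the fraction of stage-1 vision keywords
    (assumed output of an image-recognition API) that appear in the page's
    alt text. Low scores flag the alt text for human review."""
    alt_terms = set(re.findall(r"[a-z]+", alt_text.lower()))
    if not vision_keywords:
        return 0.0
    hits = sum(kw.lower() in alt_terms for kw in vision_keywords)
    return hits / len(vision_keywords)
```

In the real solution this keyword overlap is only a starting point; the NLP stage also weighs the summarized surrounding text and generates remediation suggestions rather than just a score.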
Our novel solution highlights the added edge in this niche domain: the need to incorporate AI and ML within accessibility engineering.
Embracing AI and ML in accessibility, both in how they transform the lives of differently-abled individuals and in how they can be leveraged within accessibility engineering efforts, has become an extremely promising field. It does, however, have its own challenges: it can never replace the core manual intervention needed to keep the logic intact.
Real users testing the product thus become extremely important for arriving at realistic results and mitigating the actual pain points of differently-abled individuals. Despite these challenges, AI and ML can still cut effort to a minimum and increase the efficiency and scalability of deliverables.
QA InfoTech, a Qualitest Company, is a pioneer in this niche domain, with experts savvy not just in accessibility testing for AI-enabled applications but also in leveraging AI within accessibility engineering: embracing AI and ML in all their glory!