What is the role of artificial intelligence in the development of accessibility features?

In recent years, companies such as Apple and Google have used artificial intelligence to enhance the accessibility features of their devices and help people with disabilities. The most prominent examples are Apple's Live Speech feature, which lets people with speech difficulties type what they want to say and have the device speak it aloud, and the Eye Tracking feature, which allows users to control an iPhone or iPad with their eyes alone. Google, for its part, uses artificial intelligence to power features such as Guided Frame, which helps blind and visually impaired Pixel users take good photos through audio and haptic cues, and Lookout, which can identify objects and generate detailed descriptions of images. OpenAI, meanwhile, has partnered with Be My Eyes to develop Be My AI, an application that provides detailed real-time descriptions of a person's surroundings in a natural-sounding voice. For example, if a visually impaired person orders a taxi, Be My AI can tell them whether the taxi's light is on and where the driver has stopped. Below are more details about the role of artificial intelligence in developing, customizing, and extending accessibility features, for people in general and for people with disabilities in particular.

Artificial intelligence and bridging accessibility gaps:
Screen-reading technologies such as VoiceOver on iPhone and TalkBack on Android have improved the phone experience for many blind and visually impaired people; they read on-screen content aloud and allow navigation with special gestures. In May, Google upgraded TalkBack by integrating its Gemini Nano artificial intelligence model into its smartphones, so TalkBack can now provide more detailed image descriptions, such as describing the design of clothes in detail while shopping online. This shows the potential of artificial intelligence to make a tangible difference in accessibility. A direct readout of what appears on the screen can be useful for a user with visual impairment or vision loss, but it does not always paint the whole picture. This is where supporting devices with artificial intelligence models comes in: they provide detailed descriptions of the objects visible on the screen, which greatly enhances the user experience.

Other innovations in the field of accessibility:
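To make the screen-reader idea above concrete, here is a minimal sketch, not TalkBack's or VoiceOver's actual implementation, of how a screen reader might fold an AI-generated image caption into its spoken announcement. The function name, announcement format, and fields are all hypothetical, for illustration only:

```python
def compose_announcement(element_role, label, ai_caption=None):
    """Build the phrase a screen reader might speak for a UI element.

    element_role -- the element type, e.g. "image" or "button"
    label        -- the developer-supplied accessibility label
    ai_caption   -- an optional model-generated description (hypothetical)
    """
    parts = [label, element_role]
    if ai_caption:
        # Append the richer AI description after the basic announcement,
        # so users still hear the familiar label and role first.
        parts.append("AI description: " + ai_caption)
    return ", ".join(parts)


# Without an AI caption, only the label and role are spoken:
print(compose_announcement("button", "Add to cart"))
# With one, the announcement gains the kind of detail described above:
print(compose_announcement("image", "Product photo",
                           "a blue denim jacket with silver buttons"))
```

The design point is that the AI caption augments, rather than replaces, the developer-supplied label, so the feature degrades gracefully when no model output is available.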
Many recent AI-powered innovations enhance accessibility for people with disabilities. The most prominent include the Eye Tracking, Vocal Shortcuts, and Music Haptics features that Apple has added to the iPhone. Google's Gemini assistant and Apple's Siri have also become more context-aware, allowing them to answer follow-up questions and give consistent, comprehensive answers. In addition, Siri's newest feature, called Listen for Atypical Speech, lets the voice assistant understand a wider range of speech than before, using on-device machine learning to recognize different speech patterns. This update to Siri is the result of the Speech Accessibility Project, a collaboration between the University of Illinois Urbana-Champaign and prominent companies such as Apple, Amazon, Google, Meta, and Microsoft, which aims to improve speech recognition for people with a range of physical disabilities.

Customizing accessibility features with artificial intelligence:
Artificial intelligence can help personalize accessibility features such as Apple's Live Speech, which speaks typed phrases aloud, and Personal Voice, which lets users at risk of losing their speech create a voice that sounds like their own. After the user trains the feature by reading a series of text prompts aloud, an iPhone or iPad uses machine learning to generate a voice almost identical to the user's. Google's Project Relate app, for its part, is designed to help people with non-standard speech communicate more easily with others by transcribing and restating what they say using a synthesized voice, and the app can be custom-trained on a person's unique speech patterns. Project Relate can also be linked to the Google Home app so that it better understands the user's speech and carries out commands such as turning on the lights or adjusting the thermostat.

Technology is for everyone, not just people with disabilities:
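The command flow just described, where an app maps an imperfect transcription of atypical speech onto a known home command, can be sketched with fuzzy string matching. This is an illustrative sketch only, not Project Relate's or Google Home's actual method; the command list and cutoff value are assumptions:

```python
from difflib import get_close_matches

# Hypothetical set of supported smart-home commands.
COMMANDS = [
    "turn on the lights",
    "turn off the lights",
    "set the thermostat to 20 degrees",
]


def match_command(transcript, commands=COMMANDS, cutoff=0.5):
    """Map a possibly imperfect transcript to the closest known command.

    Returns the best-matching command, or None when nothing is similar
    enough (cutoff is a similarity threshold between 0 and 1).
    """
    matches = get_close_matches(transcript.lower(), commands, n=1, cutoff=cutoff)
    return matches[0] if matches else None


# A transcript with a dropped word still resolves to the right command:
print(match_command("turn on lights"))
# Unrelated input is rejected rather than guessed:
print(match_command("xyzzy"))
```

In a real system the matching would sit behind a speech model trained on the user's own voice; the point of the sketch is that the final step, going from "roughly what was said" to "an exact supported command", is a separate, simple matching problem.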
One of the most important benefits of advanced technology is that it makes life easier for everyone, not just people with disabilities. Many people now rely on AI-supported features such as dark mode, which improves readability for some users, and text-to-speech, which has become an essential feature in social networking apps such as TikTok. Even AI features that were not originally designed to enhance accessibility can help. For example, visually impaired people can use Google's AI Overviews feature, which summarizes search results, instead of working through multiple pages with a screen reader, and the same feature benefits everyone by making search faster and easier. With the launch of Google's Gemini Live, all users can interact with the Gemini assistant using only their voice, so they no longer have to rely solely on text input to get what they need; ChatGPT offers a similar feature called Advanced Voice Mode, which allows users to hold conversations with the chatbot that resemble real-life conversations. Self-driving cars, meanwhile, have come to offer people a great deal of autonomy when ordering a taxi, especially as more accessibility features are integrated into these cars.
For example, the Waymo app for hailing self-driving taxis now lets blind and visually impaired users rely on the VoiceOver feature on iPhone or TalkBack on Android to get instant spoken directions that help them locate the car. After entering the car, they can use their phones' GPS to follow the route step by step and get more details about where they are heading.
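The kind of spoken feedback described above, in both the Waymo and Be My AI examples, amounts to turning structured data about the vehicle into a sentence a screen reader can speak. A minimal sketch of that step follows; the function, its fields, and the phrasing are hypothetical and do not reflect either app's actual output:

```python
def ride_status_message(distance_m, car_color, light_on):
    """Turn structured ride data into a sentence for a screen reader.

    distance_m -- approximate distance to the car, in meters
    car_color  -- color of the arriving car
    light_on   -- whether the car's roof light is currently on
    """
    light = "its roof light is on" if light_on else "its roof light is off"
    return "Your {} car is about {} meters away, and {}.".format(
        car_color, distance_m, light
    )


# As the car approaches, updated data yields updated spoken feedback:
print(ride_status_message(40, "white", False))
print(ride_status_message(5, "white", True))
```

Keeping the sentence template separate from the sensor data is what lets such feedback be regenerated continuously as the situation changes, which is exactly what makes it useful to a rider who cannot see the car.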