Technology is making great strides toward integrating the human mind with the machine. After the success of brain-computer interfaces (BCIs) in controlling devices and robots, companies specializing in this field, such as Neuralink and Synchron, have begun reaching milestones that could change the lives of millions of people around the world.

In recent days, Neuralink received the US Food and Drug Administration's (FDA) "breakthrough device" designation for its new device, Blindsight, which aims to restore vision to the blind by stimulating the brain directly. Elon Musk, the company's founder and CEO, explained that the device could help people who have completely lost their eyesight and optic nerve to see again, provided that the brain region responsible for vision (the visual cortex) is intact.

Meanwhile, Synchron announced that its implant allowed a patient with amyotrophic lateral sclerosis (ALS) to control Amazon's Alexa voice assistant with his thoughts. With the implant, the patient can mentally tap icons on an Amazon Fire tablet, giving him access to a wide range of Alexa features, including viewing security cameras, making and answering video calls, and controlling Fire TV by moving the cursor with his brain. Synchron's technology changed the life of this patient, who could use neither his voice nor his limbs.

But what if artificial intelligence, particularly large language models, were combined with brain implants to convert the signals those implants record into speech in real time? Could this help people with traumatic brain injuries or neurological diseases such as ALS regain their voices?

Brain implants and artificial intelligence bring hope to ALS patients:
A team of researchers at the University of California, Davis Health (UC Davis Health) has developed a new brain-computer interface (BCI) capable of translating the thoughts of people with speech difficulties into intelligible speech. Thanks to advanced artificial intelligence models, the interface has achieved an accuracy of up to 97%, a quantum leap in human-machine communication that opens new horizons for treating neurological diseases that impair the ability to speak.

How do brain-computer interfaces work, and how is artificial intelligence helping ALS patients regain their voices?

Brain-computer interfaces are among the most promising medical technologies of our time, seeking to restore vital functions to people who have lost them to injury or neurological disease. The technique relies on arrays of microelectrodes, implanted on the surface of the brain or within its tissue, to record the brain's electrical activity, the stream of electrical signals that carry information between nerve cells; a computer then analyzes and interprets these signals to determine what the patient intends to do.

Early research on brain-computer interfaces focused on restoring movement, especially of the arm and hand. Restoring speech, however, is an even greater challenge, especially for patients with neurological diseases such as ALS. In recent years, researchers have made significant progress on speech brain-computer interfaces, which record the brain signals formed when a person tries to speak, convert them into written text using complex algorithms, and then turn that text into audible speech using text-to-speech technology.
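The record-then-decode loop described above can be sketched in simplified form. The code below is an illustrative toy, not the software of any actual BCI system: the fixed-window averaging, the array sizes, and all names are assumptions made for the example; real systems extract richer features such as spike counts or band power per electrode.

```python
import numpy as np

def extract_features(raw, n_bins):
    """Average each electrode's voltage trace over n_bins equal time
    windows -- a toy stand-in for real neural feature extraction."""
    n_electrodes, n_samples = raw.shape
    width = n_samples // n_bins
    trimmed = raw[:, : n_bins * width]          # drop any leftover samples
    return trimmed.reshape(n_electrodes, n_bins, width).mean(axis=2)

# A hypothetical 256-electrode recording, 1,000 samples long,
# reduced to 50 feature windows that a decoder could consume:
signals = np.random.randn(256, 1000)
features = extract_features(signals, n_bins=50)
print(features.shape)  # (256, 50)
```

The downstream decoder (a neural network plus a language model, in the study's description) would then map each column of this feature matrix toward intended speech.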
However, the development of these systems faced many challenges. The artificial intelligence programs responsible for decoding brain signals need a huge amount of data and training to learn how to translate those signals accurately, and earlier programs had difficulty distinguishing words, which led to errors in understanding the patient's speech and obstructed effective communication. Just as a translator needs a large dictionary and long experience to translate text accurately, these programs need vast amounts of data and training to understand the language of the brain and convert it into intelligible speech.

The UC Davis Health study, published in the New England Journal of Medicine, showed that the researchers have overcome these earlier challenges: they developed a new system, built on a set of artificial intelligence models, capable of decoding the brain's language with high accuracy and converting it into intelligible speech. The idea is to translate the patient's thoughts directly into audible words. Special electrode arrays implanted in the patient's brain capture speech signals, the system turns them into text that appears on a computer screen, and the computer then reads the text aloud in a voice resembling the patient's own voice before his illness.

To test the system, Casey Harrell, a 45-year-old man with ALS, volunteered to participate in the BrainGate clinical trial. At the time, Harrell was quadriplegic, his speech was difficult to understand, and he needed others to interpret it for him.
In July 2023, the patient was implanted with four microelectrode arrays, specifically in the left precentral gyrus, the brain region that coordinates speech. The arrays, totaling 256 electrodes in the cerebral cortex, are designed to record brain activity and capture neural signals with high precision. "We capture precise neural signals that reflect the patient's attempts to move his muscles and produce speech. These signals are recorded from the brain region that drives the speech muscles, and we then translate these complex neural patterns into symbols and from those into intelligible words," explains Dr. Sergey Stavisky, assistant professor in the Department of Neurosurgery and co-lead researcher of the study. Previous brain-computer interface systems that translate thoughts into speech had a major weakness: frequent errors in identifying words, which made communication difficult and unreliable.
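The "neural patterns into symbols and then into words" step that Stavisky describes can be illustrated with a toy decoder. In real systems a neural network emits phoneme probabilities for each time window and a language model assembles them into words; the miniature version below, with a hypothetical five-phoneme inventory and a greedy CTC-style collapse, only shows the shape of that idea and is not the study's actual algorithm.

```python
import numpy as np

PHONEMES = ["SIL", "HH", "AH", "L", "OW"]  # toy inventory; "SIL" = silence

def collapse_to_phonemes(prob_matrix):
    """Take the most likely phoneme in each time window, then merge
    consecutive repeats and drop silence (a CTC-style greedy decode).
    prob_matrix has one row per time window, one column per phoneme."""
    ids = prob_matrix.argmax(axis=1)
    decoded, prev = [], None
    for i in ids:
        p = PHONEMES[i]
        if p != prev and p != "SIL":
            decoded.append(p)
        prev = p
    return decoded

# Seven time windows whose most probable phonemes spell "hello"
# (silence between the two Ls keeps them from merging into one):
windows = np.eye(5)[[1, 1, 2, 3, 0, 3, 4]]
print(collapse_to_phonemes(windows))  # ['HH', 'AH', 'L', 'L', 'OW']
```

A language model would then map the phoneme string to the word "hello" and choose among competing candidates using sentence context.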
David Brandman, a neurosurgeon and co-lead researcher of the study, explained that the project's goal was to develop a more accurate system that would let the user express himself clearly whenever he wishes. Harrell tried the new system in several scenarios, including ordinary conversation and specific requests. In every case the system, drawing on several machine learning models and large language models, decoded his brain signals and converted them into written words instantly. Moreover, the system read those words aloud in the tone of Harrell's own voice before his illness, using a program trained on recordings of his voice made before he got sick.

Impressive results:
In record time, the system achieved speech-recognition accuracy surpassing many commercially available systems. In the first training session on speech data, it reached an impressive 99.6% accuracy on a 50-word vocabulary after just 30 minutes. When the vocabulary was expanded to 125,000 words in the second session, the system needed only an additional 1.4 hours of training to reach 90.2% accuracy. With continued data collection and training, accuracy climbed to 97.5%, a significant advance over existing speech recognition systems.

A promising future:
This achievement marks a turning point in the field of brain-computer interfaces and opens new horizons for treating other neurological conditions, such as stroke. As the technology continues to evolve, we can expect new generations of brain-computer interfaces that are smaller and more efficient, making the technology available to more patients.
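Accuracy figures like those above are conventionally reported as one minus the word error rate: the word-level edit distance between what the decoder produced and what the speaker intended, divided by the length of the intended sentence. A minimal implementation of that metric, using standard Levenshtein dynamic programming (the example sentences are made up, not from the trial):

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution
    return dp[len(ref)][len(hyp)] / max(1, len(ref))

# One wrong word out of five -> 20% WER, i.e. 80% accuracy:
wer = word_error_rate("i would like some water", "i would like some coffee")
print(f"{(1 - wer) * 100:.1f}% accurate")  # 80.0% accurate
```

By this measure, 97.5% accuracy means the system misrecognized roughly one word in forty, even with a 125,000-word vocabulary.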
In conclusion, brain-computer interfaces are among the most significant developments in biomedicine in recent decades. The technology has proven its ability to improve the quality of life of patients with severe disabilities, giving them hope for a more independent and communicative future.