Have you ever felt the urge to text someone in the middle of cooking? Have you ever needed to answer a friend’s eager message while your hands were full of groceries? Brain-machine interfaces such as BrainGate are being used to decode the brain’s electrical signals and translate them directly into text messages. Joseph Makin and fellow researchers at the University of California, San Francisco are developing this technology using an algorithm that translates the brain’s cortical electrical activity into a machine-readable representation and, eventually, into text messages, without the user ever touching a keyboard. This technology could also serve patients who are paralyzed and unable to type texts themselves or use desktop computer interfaces.
Opportunities for improving this device remain, such as increasing the speed of translation. Patients who already had intracranial implants for seizure monitoring trained a computer algorithm to recognize the electrical signals produced when they read sentences. The intracranial device recorded cortical activity while each patient read sentences aloud for about 30 minutes. A first artificial-intelligence component builds a representation of which regions of the brain are activated by specific portions of a sentence. A second set of artificial-intelligence algorithms then translates these patterns of electrocortical activity into a signal that a computer can interpret to generate text. Ideally, a brain signal would be translated directly into text in a single step; because the electrical activity underlying speech is so complex, however, the system currently uses a three-step translation process. Future optimization of this technology aims to make the translation more direct, and therefore faster.
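The three-step structure described above, record a window of cortical activity, compress it into an intermediate representation, then translate that representation into words, can be sketched with a toy nearest-template decoder. Everything below (the simulated signals, channel counts, and four-word vocabulary) is invented purely for illustration; the actual system trained artificial neural networks on real intracranial recordings.

```python
import math
import random

random.seed(0)

# Stand-in for the study's roughly 250-word vocabulary.
VOCAB = ["hello", "thanks", "yes", "no"]

def simulate_recording(word_index, channels=4, samples=20):
    """Stage 1 (stand-in): a multi-channel 'recording' in which each word
    produces a distinct per-channel activity level plus a little noise."""
    return [[random.gauss(word_index + ch, 0.1) for _ in range(samples)]
            for ch in range(channels)]

def encode(signal):
    """Stage 2: compress a raw window into a fixed-length feature vector
    (here, simply the mean activity of each channel)."""
    return [sum(ch) / len(ch) for ch in signal]

def decode(features, templates):
    """Stage 3: map the feature vector to the nearest known word template."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda word: dist(features, templates[word]))

# Toy "training": one encoded template per vocabulary word.
templates = {w: encode(simulate_recording(i)) for i, w in enumerate(VOCAB)}

# Decoding a fresh recording of word index 2 recovers "yes".
print(decode(encode(simulate_recording(2)), templates))  # → yes
```

The point of the sketch is the pipeline shape, not the decoder itself: each stage hands a simpler object to the next, which is why collapsing the stages into one direct translation, as the researchers hope to do, is hard.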
The end product of this study could translate sets of thirty to fifty sentences at a time with an error rate similar to that of professional speech transcription. The researchers also found that the algorithm improved when it was trained sequentially on different patients.
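The error rate being compared here is word error rate: the word-level edit distance (substitutions, insertions, and deletions) between the decoded sentence and the true sentence, divided by the number of words in the true sentence. A minimal, self-contained sketch of the metric (not the authors' code):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance between a reference
    sentence and a decoded hypothesis, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four reference words gives a 25 percent error rate.
print(word_error_rate("those musicians harmonize marvelously",
                      "those musicians harmonize marvellously"))  # → 0.25
```

Professional human transcribers are often cited at around a 5 percent word error rate, which gives a sense of the benchmark the decoded sentences approached.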
In the past, neural interfaces attempted to translate neural signals into speech; however, previous models achieved accuracies of only about 40 percent. Those models were too granular, focusing on individual sounds such as vowels and consonants, which added great complexity. This model is different in that it translates whole words, though it is currently limited to a vocabulary of about 250 words. The scientists want to expand the approach by building larger “libraries” of speech for translation, and they would also like to add other languages to the algorithms.
Defense Advanced Research Projects Agency Neural Engineering System Design
Building on these concepts of devices at the neural interface, it is worth mentioning the Neural Engineering System Design (NESD) program, which seeks to develop high-resolution neurotechnology capable of mitigating injury and disease of the visual and auditory systems. Its central goal is to develop neural interfaces that provide high signal resolution, speed, and volume of data transfer between the electronic device and the brain. Funded by the Department of Defense, the initiative was founded in the hope that better brain-machine interfaces will one day help treat wounded warriors with traumatic brain injuries and the neurological deficits that follow.
They aim to create a device that can read from up to one million (10^6) neurons and write to up to 100,000 (10^5) neurons at a quick pace, a goal that has yet to be achieved. This is a joint effort spanning neuroscience, photonics, medical device manufacturing, neuroengineering, and clinical testing. By providing high-resolution neural recording and stimulation, the program raises the possibility of a prosthetic that feels almost identical to a natural arm or leg. In addition, it would significantly enhance scientists’ understanding of the neural underpinnings of hearing, vision, and speech, informing future development.
Hand Proprioception and Touch Interfaces Program (HAPTIX)
Extending the idea of high-resolution sensation alongside motor ability, the Hand Proprioception and Touch Interfaces (HAPTIX) program seeks to enhance prosthetics by giving patients sensory feedback, including awareness of limb position and movement, also termed proprioception. Without these features, prosthetic limbs still feel numb to users, lowering their quality of life and reducing their willingness to wear the devices.
The Defense Advanced Research Projects Agency (DARPA) has awarded contracts to eight developers to begin the HAPTIX program. The overall goal is for a prosthesis to feel to its wearer like an actual hand. A firm belief behind the project is that if wearers experience feeling and sensation similar to those of a real hand, they will wear the prosthesis much more often. Restored sensation would also help reduce phantom limb pain, which affects about 80 percent of amputees. This could be the first step toward the human cyborg once depicted only in sci-fi films, which may soon become a reality.
Julian Gendreau is a physician and co-author of The Future of Technology in Medicine: From Cyborgs to Curing Paralysis.