Mind-Reading AI Turns Thoughts Into Text, Gives ALS Patient His Voice Back
Say goodbye to typing and say hello to thinking. AI language company Unbabel gave an astonishing live demonstration at the Web Summit conference of Project Halo, a system that aims to enable silent, thought-based communication between humans and machines.
Project Halo combines a non-invasive neural interface with generative AI to transform patterns of bioelectrical signals into language.
“There is a universal language that happens inside of our brains,” Unbabel CEO Vasco Pedro said. “What I mean by this is, when you look at fMRI images of people that speak different languages, but are thinking of the same object, they basically activate the same areas of the brain.”
Pedro showed how Project Halo allows users to receive a message narrated via earbuds, then send a reply completely silently by simply thinking about what they want to say.
Providing a method of responding to messages that doesn’t require speaking or typing has numerous potential use cases, Pedro noted, from mundane scenarios like unobtrusively replying to texts while in a dark movie theater to more life-changing situations: giving people with amyotrophic lateral sclerosis (ALS) the ability to communicate via text or even via audio notes.
By training a text-to-speech model on recordings of their voice, patients can even have their replies spoken aloud.
Pedro showed a heartwarming example of how Project Halo enabled a patient with ALS to communicate his lunch order to his wife silently. The system decoded his desired reply from neural signals, then synthesized the text response in a digital approximation of his original voice, which was recorded before he lost the ability to speak.
Unbabel’s isn’t the first mind-reading AI. As Decrypt previously reported, Meta recently developed a system that can scan brain activity and visualize the images perceived in the human mind. The AI accomplished this feat by capturing magnetoencephalography (MEG) measurements while participants looked at pictures, then reconstructing representations of the original images.
Moreover, Neuralink, another major player in the field, is working on advanced neural interfaces and has been approved to start testing its brain implant in humans.
The key differentiator of Project Halo, however, is its ability to read minds and generate natural language responses. As Pedro noted, this required integrating a language model that could learn about a user’s personal context, relationships, preferences, and more to craft messages that accurately reflect what they wish to communicate.
It’s not uncontrolled thought reading, however: the device only responds when users deliberately intend to send the answer.
“Essentially, what’s happening is, I’m getting the question read to me through my AirPods, and then I’m using a neural interface that is actually in my arm here—it’s an EMG,” Pedro explained. “It’s capturing bio signals, and what’s happening is a large language model that knows a lot about me is trying to create the answer that I would want to give.”
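The pipeline Pedro describes can be summarized as a minimal sketch: an EMG sensor's bio-signals are rectified and smoothed into an activation envelope, a decoder checks for a deliberate activation, and a personalized language model drafts the reply. All function names, thresholds, and the toy "context" below are hypothetical illustrations, not Unbabel's actual implementation.

```python
def rectify_and_smooth(samples, window=4):
    """Crude EMG envelope: full-wave rectify, then moving average."""
    rectified = [abs(s) for s in samples]
    return [
        sum(rectified[max(0, i - window + 1): i + 1])
        / len(rectified[max(0, i - window + 1): i + 1])
        for i in range(len(rectified))
    ]

def decode_intent(envelope, threshold=0.5):
    """Treat muscle activation above a threshold as a deliberate signal."""
    return "affirm" if max(envelope) > threshold else "stay_silent"

def draft_reply(intent, user_context):
    """Stand-in for the personalized language model: combine the decoded
    intent with stored user context to produce a natural-language message."""
    if intent == "stay_silent":
        return None  # no deliberate input -> no reply is sent
    return f"Yes, {user_context['usual_order']} sounds great, thanks!"

signal = [0.1, -0.2, 0.9, -1.1, 0.8, -0.1]  # simulated EMG burst
envelope = rectify_and_smooth(signal)
reply = draft_reply(decode_intent(envelope), {"usual_order": "a sandwich"})
print(reply)
```

The key design point Pedro highlights is the last step: rather than spelling out letters, the model generates the whole message the user would plausibly want to send, which is what makes the interaction fast enough to feel conversational.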
The system also produces around 15 words per minute, roughly double the throughput of legacy assistive communication methods.
Pedro stated that Project Halo is currently in the early stages, but the company expects to launch it commercially in 2024. He envisions Halo enabling seamless communication for all people regardless of physical abilities.
“Our goal is to make AI for good and to enable everyone to communicate in every language, truly eliminating the lack of language as a barrier,” mused Pedro.
Edited by Ryan Ozawa.