Researchers at Columbia University’s Zuckerman Institute have introduced what they describe as the first real-time brain-controlled hearing device that can help listeners isolate a single voice in a noisy environment.
The study, published in Nature Neuroscience, provides direct evidence from human trials that brain-controlled technology can dynamically separate one conversation from competing sounds, offering a new path toward selective hearing assistance.
Senior author Nima Mesgarani, PhD, a principal investigator at Columbia’s Zuckerman Institute and an associate professor of electrical engineering at Columbia’s Fu Foundation School of Engineering and Applied Science, said the system acts as a “neural extension” that leverages the brain’s natural filtering ability.
He explained that this approach could allow future technologies to move beyond traditional hearing aids, which simply amplify all sound, toward devices that replicate the brain’s selective attention mechanisms.
For the study, researchers collaborated with surgeons and epilepsy patients already undergoing brain surgery to locate seizure sources.
Because these patients had electrodes implanted in their brains, the team could measure brain activity as participants listened to overlapping conversations.
The system automatically detected which voice a participant was focusing on and adjusted the volumes in real time, amplifying that conversation while quieting others.
According to the volunteers, the effects were immediate and striking. One participant suspected the researchers were secretly controlling the volume manually, while others said the experience felt like “science fiction.”
The breakthrough builds on years of research led by Mesgarani’s team, which began in 2012 with studies mapping how brainwaves align with specific speech rhythms.
Those findings revealed that the brain’s electrical patterns correspond closely to the voices we choose to focus on, providing a biological signature for attention.
Over the past decade, the team refined their algorithms to automatically separate multiple voices and compare them to brainwave patterns in real time. The new study marks the first successful validation of this process in live participants.
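The general idea of attention decoding described above can be sketched in simplified form: correlate a neurally reconstructed speech envelope with the envelope of each separated voice, then boost whichever voice matches best. This is a hypothetical illustration only; the function names, the correlation method, and the gain values are assumptions for demonstration, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def attended_talker(neural_envelope, talker_envelopes):
    """Return the index of the talker whose envelope best matches the neural signal."""
    scores = [np.corrcoef(neural_envelope, env)[0, 1] for env in talker_envelopes]
    return int(np.argmax(scores))

def remix(talkers, attended_idx, gain_db=9.0):
    """Amplify the attended talker and sum the mixture (illustrative gain value)."""
    gains = np.ones(len(talkers))
    gains[attended_idx] = 10 ** (gain_db / 20)
    return sum(g * t for g, t in zip(gains, talkers))

# Two simulated talker envelopes; the "neural" envelope tracks talker A plus noise.
t = np.linspace(0, 1, 1000)
talker_a = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * t)
talker_b = 0.5 + 0.5 * np.sin(2 * np.pi * 5 * t)
neural = talker_a + 0.1 * rng.standard_normal(t.size)

idx = attended_talker(neural, [talker_a, talker_b])
output = remix([talker_a, talker_b], idx)
print(idx)  # 0: the decoder identifies talker A as the attended voice
```

In a real-time system this decode-and-remix loop would run continuously over short windows of audio and neural data, so the enhancement follows the listener as their attention shifts between talkers.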
Vishal Choudhari, the paper’s first author and a recent PhD graduate from Columbia, said this experiment demonstrates a clear real-time benefit for listeners.
“For the first time, we have shown that such a system that reads brain signals to selectively enhance conversations can provide a clear real-time benefit,” he said.
The researchers partnered with teams at Hofstra Northwell School of Medicine, the Feinstein Institutes for Medical Research, New York University School of Medicine, and the University of California San Francisco’s Department of Neurological Surgery.
Tests revealed that the system swiftly identified which conversation each listener was concentrating on and made the desired voice substantially easier to understand. Listeners reported less effort and consistently preferred the enhanced conversations.
One volunteer recalled a relative who struggled with hearing loss, saying the technology could allow him to “live a much more peaceful life.”
According to the World Health Organization, over 430 million people globally experience disabling hearing loss, which is often linked to social isolation, depression, and elevated dementia risk.
The difficulty of understanding speech in group settings is one of the most frustrating challenges for these individuals.
Mesgarani said the team ultimately aims to translate their laboratory system into a wearable device that works without invasive electrodes. He noted that additional research is needed to adapt the technology for complex, real-world environments like restaurants and classrooms.
“This is an important step toward a new generation of brain-controlled hearing technologies that align with the listener’s intent,” said Choudhari, highlighting the potential to transform communication in noisy, multi-talker settings.
The work was funded by the Marie-Josée and Henry R. Kravis Foundation and the National Institutes of Health’s National Institute on Deafness and Other Communication Disorders.
The study’s authors are Vishal Choudhari, Maximilian Nentwich, Sarah Johnson, Jose L. Herrero, Stephan Bickel, Ashesh D. Mehta, Daniel Friedman, Adeen Flinker, Edward F. Chang, and Nima Mesgarani.