How your brain stays focused on conversations in a noisy room

The brain processes voices differently depending on how loud a speaker is and whether the listener is focusing on them.

We now have a good explanation for how our brain keeps track of a conversation while we are in a loud, crowded room, a discovery that could improve hearing aids.

Mechanisms in the brain help us pick out speech in a crowd (Image: Zuckerman Institute, Columbia University, 2023)


The prevailing idea in speech perception is that the brain processes only the voice of the person you are paying attention to, says Vinay Raghavan at Columbia University in New York. “But my issue with that idea is that when someone shouts in a crowded place, we don’t ignore it because we’re focused on the person we’re talking to; we still pick it up.”


To better understand how we process multiple voices, Raghavan and his colleagues implanted electrodes into the brains of seven people to monitor the organ’s activity while they underwent surgery for epilepsy. The participants, who were awake throughout the surgery, listened to a 30-minute audio clip of two voices.


During the half-hour clip, the participants were repeatedly asked to switch their focus between the two voices, one belonging to a man and the other to a woman. The voices spoke over each other and were mostly at the same volume, but at various points one was louder than the other, mimicking the shifting volume of background conversations in a crowded space.


The team then used this brain activity data to produce a model that predicted how the brain processes the quieter and louder voices and how that might differ depending on which voice the participant was asked to focus on.
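
The article doesn't spell out the team's modelling pipeline, but analyses of this kind commonly fit a stimulus-encoding model: a temporal response function (TRF) regressed from time-lagged copies of each talker's speech envelope onto the neural recordings. The sketch below is a minimal, hypothetical version on simulated data; the sampling rate, lag window, regularisation, and effect sizes are illustrative assumptions, not the study's parameters.

```python
# Minimal sketch of a stimulus-encoding (TRF) analysis on simulated data.
# Everything here is illustrative; it is not the study's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)
fs = 100                 # sampling rate in Hz (assumed)
n = 30 * 60 * fs         # 30 minutes of samples, matching the clip length
lags = np.arange(0, 40)  # 0-390 ms of stimulus-to-brain lags at 100 Hz

def lagged(env, lags):
    """Stack time-lagged copies of a speech envelope into a design matrix."""
    X = np.zeros((len(env), len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = env[:len(env) - lag]
    return X

def trf_predict(env, neural, alpha=1.0):
    """Ridge-regress the neural signal onto the lagged envelope; return the fit."""
    X = lagged(env, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ neural)
    return X @ w

# Simulated talkers, plus a neural trace that encodes the attended one
# strongly and the ignored one weakly (the asymmetry the real study probes).
attended = rng.standard_normal(n)
ignored = rng.standard_normal(n)
neural = np.roll(attended, 10) + 0.3 * np.roll(ignored, 10) \
         + rng.standard_normal(n)

for name, env in (("attended", attended), ("ignored", ignored)):
    r = np.corrcoef(trf_predict(env, neural), neural)[0, 1]
    print(f"{name} talker: prediction r = {r:.2f}")
```

A model fitted this way can be read in both directions: how well each talker's envelope predicts the neural signal is a measure of how strongly that voice is encoded.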


The researchers found that the louder of the two voices was encoded in both the primary auditory cortex, which is thought to be responsible for the conscious perception of sound, and the secondary auditory cortex, which carries out more complex sound processing, even when the participant was told not to focus on the louder voice.


“This is the first study to show using neuroscience that your brain does encode speech that you’re not paying attention to,” says Raghavan. “It opens the door to understanding how your brain processes things you’re not paying attention to.”


The researchers found that the quieter voice was only encoded, again in the primary and secondary auditory cortices, when the participants were asked to focus on it. Even then, the brain took about 95 milliseconds longer to process this voice as speech than when the participants were asked to focus on the louder voice.
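
A latency like that 95-millisecond figure is the kind of quantity you can read off by finding the lag at which a neural response best tracks a talker's envelope. Below is a toy, simulated illustration of the idea; the sampling rate and the two delays are made-up numbers chosen so their difference comes out at roughly 95 ms.

```python
# Toy illustration: estimate a processing delay as the lag that maximises
# correlation between a speech envelope and a neural response.
# All signals and delays here are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                            # 1 kHz sampling (assumed)
env = rng.standard_normal(20 * fs)   # 20 s of speech envelope

def peak_lag_ms(env, resp, fs, max_lag_ms=300):
    """Return the lag (in ms) at which env best correlates with resp."""
    corrs = [np.corrcoef(env[:-lag or None], resp[lag:])[0, 1]
             for lag in range(int(max_lag_ms / 1000 * fs))]
    return int(np.argmax(corrs)) / fs * 1000

# Responses delayed by 150 ms (louder voice) and 245 ms (quieter, attended
# voice), i.e. about 95 ms apart, plus noise.
loud = np.roll(env, int(0.150 * fs)) + 0.5 * rng.standard_normal(len(env))
quiet = np.roll(env, int(0.245 * fs)) + 0.5 * rng.standard_normal(len(env))

for name, resp in (("louder voice", loud), ("quieter voice", quiet)):
    print(f"{name}: peak lag ~ {peak_lag_ms(env, resp, fs):.0f} ms")
```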

“The findings suggest that the brain likely uses different mechanisms for encoding and representing these two different volumes of voices when there is a background conversation ongoing,” says Raghavan.


Hearing aids could be made more effective by targeting the mechanism the brain uses to perceive quieter voices, says Raghavan. “If we could make a hearing aid that can tell who you’re paying attention to, then we could turn up the volume on just that person’s voice.”
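
The signal-processing half of that idea is the easy part; decoding attention from the brain is the open problem. As a hedged sketch, assuming an upstream system has already separated the talkers into individual streams and guessed which one the wearer is attending to, the remixing step might look like the following (the function names and the 9 dB boost are hypothetical):

```python
# Hypothetical remixing step for an attention-steered hearing aid.
# Assumes talker streams are already separated and that an attention
# decoder (not implemented here; it is the hard research problem)
# has picked one of them out.
import numpy as np

def remix(streams, attended_idx, boost_db=9.0):
    """Boost the attended talker's stream, then sum all streams."""
    gains = np.ones(len(streams))
    gains[attended_idx] = 10 ** (boost_db / 20)  # convert dB to linear gain
    return sum(g * s for g, s in zip(gains, streams))

rng = np.random.default_rng(2)
talkers = [rng.standard_normal(16000), rng.standard_normal(16000)]  # stand-ins
attended = 0  # pretend a neural decoder chose talker 0
output = remix(talkers, attended)
```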


The team plans to repeat the experiment using less invasive methods to record audio processing in the brain. “Ideally, we don’t want to implant something in your brain to get sufficient brain recordings to decode your attention,” says Raghavan.


Journal reference:

PLOS Biology, DOI: 10.1371/journal.pbio.3002128
