How Does the Brain Learn Categorization for Sounds? The Same Way It Does for Images

By SRAI News posted 04-27-2018 12:00 AM


Excerpt from "How does the brain learn categorization for sounds? The same way it does for images," posted on NSF News, April 18, 2018.


Categorization, or the recognition that individual objects share similarities and can be grouped together, is fundamental to how we make sense of the world. Previous research has revealed how the brain categorizes images. Now, researchers funded by the National Science Foundation (NSF) have discovered that the brain categorizes sounds in much the same way.

The results are published today in the journal Neuron.

"Categorization involves applying a single label to a wide variety of sensory inputs," said Max Riesenhuber, professor of neuroscience at Georgetown University and lead co-author of the article. "For example, apples come in many colors, shapes and sizes, yet we label each as an apple. Children do this all the time as they learn language, but we actually know very little about how the brain categorizes the world."

The importance of this work was underlined by Uri Hasson, program director for NSF's Cognitive Neuroscience Program, which supported the research.

"These findings reveal what may not only be a general mechanism about how the brain learns, but also about how learning changes the brain and allows the brain to build on that learning," Hasson said. "The work has potential implications for understanding individual differences in language learning and can provide a foundation for understanding and treating people with learning disorders and other disabilities."

Riesenhuber's group at Georgetown had previously studied how the brain categorizes visual objects and found that at least two distinct regions of the brain were involved. One region, in the visual cortex, encoded images, while a region in the prefrontal cortex signaled their category membership. For their more recent research, Riesenhuber and lead author Xiong Jiang were interested in whether the same processes underlie categorization of auditory cues. They joined forces with co-author Josef Rauschecker, also a professor of neuroscience at Georgetown and an expert on the auditory cortex and neural plasticity.

To find out how the brain categorizes auditory input, the researchers used an acoustic blending tool to create new sounds from two types of monkey calls. The blending produced hundreds of novel sounds that differed from the original calls.
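
The article does not describe how the blending tool works, but the general idea of producing a graded continuum between two source sounds can be sketched as a simple weighted mix. The snippet below is a minimal, hypothetical illustration in Python: two synthetic "calls" (frequency sweeps standing in for the monkey vocalizations) are combined at a range of mixing weights to yield intermediate sounds. This is not the researchers' actual morphing method, which presumably operates on richer acoustic features.

```python
# Illustrative sketch only: a weighted mix of two synthetic "calls".
# The study used a dedicated acoustic blending tool; this is not that tool.
import numpy as np

SAMPLE_RATE = 16_000          # samples per second
DURATION = 0.5                # seconds per synthetic call


def synthetic_call(f_start, f_end):
    """Generate a simple frequency sweep as a stand-in for a monkey call."""
    t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
    freq = np.linspace(f_start, f_end, t.size)        # linearly changing pitch
    phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE  # integrate frequency
    return np.sin(phase)


def blend(call_a, call_b, weight):
    """Mix two equal-length calls; weight=0 gives call_a, weight=1 gives call_b."""
    return (1.0 - weight) * call_a + weight * call_b


call_a = synthetic_call(400.0, 800.0)    # stand-in for call type A
call_b = synthetic_call(1200.0, 600.0)   # stand-in for call type B

# Produce a continuum of intermediate sounds between the two call types.
morphs = [blend(call_a, call_b, w) for w in np.linspace(0.0, 1.0, 11)]
print(f"Generated {len(morphs)} blended sounds of {morphs[0].size} samples each")
```

In the actual experiments, such intermediate sounds would let the researchers test where along the continuum listeners switch from perceiving one call category to the other.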

Continue to full article


#NewsintheField
#NSF