Deep neural networks show promise as models of human hearing
Study shows computational models trained to perform auditory tasks display an internal organization similar to that of the human auditory cortex.

Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.

In the largest study yet of deep neural networks trained to perform auditory tasks, the MIT team showed that most of these models generate internal representations that share properties of the representations seen in the human brain when people listen to the same sounds.

The study also offers insight into how best to train this type of model: The researchers found that models trained on auditory input that includes background noise more closely mimic the activation patterns of the human auditory cortex.

"What sets this study apart is that it is the most comprehensive comparison of these kinds of models to the auditory system so far. The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain," says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT's McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.

