‘Alice in Wonderland’ leads researchers into the brain
"Alice in Wonderland" is 150 years old this year, but the ever-young adventurer recently led Cornell researchers to a part of the brain that helps listeners understand her story. Cornell faculty member John Hale's study, "Modeling fMRI time courses with linguistic structure at various grain sizes," published in the Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics, examines how the individual words of Lewis Carroll's famous tale come together to yield an understanding of each sentence.

Hale and his team found a positive relationship between difficulty levels predicted from grammatical structure and neural signals measured in the fMRI scanner. The results highlight a region of the temporal lobe that supports an unconscious "parsing" process. Hale points out the interdisciplinary nature of this neurotechnology research, which encompasses computational linguistics and neuroscience: "Elements from all of these areas that are traditionally distinct come together to say what's going on in the mind during an important cognitive process, language understanding."

Study participants listened to the first chapter of "Alice in Wonderland" while in an fMRI scanner at the Cornell MRI Facility. This type of "naturalistic" study has only recently become possible. "They didn't have to press any buttons or do anything except listen," says Hale, associate professor of linguistics in the College of Arts and Sciences.
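To give a flavor of how grammatical structure can yield word-by-word difficulty predictions, here is a minimal sketch (not the paper's actual code) of one simple metric used in studies of this kind: a bottom-up node count, which tallies how many syntactic constituents are completed at each word of a bracketed parse. In a naturalistic fMRI study, a per-word predictor like this would then be convolved with a hemodynamic response function and regressed against the scanner's time courses; the example parse and function name below are illustrative assumptions.

```python
import re

def node_counts(parse):
    """Per-word bottom-up node counts from a Penn-style bracketed parse.

    A token directly following "(" is a category label (e.g. NP); any other
    non-bracket token is a word. Each ")" closes one constituent, credited
    to the most recent word.
    """
    tokens = re.findall(r"\(|\)|[^\s()]+", parse)
    counts = []
    prev = None
    for tok in tokens:
        if tok == ")":
            if counts:
                counts[-1][1] += 1  # one more constituent completed here
        elif tok != "(" and prev != "(":
            counts.append([tok, 0])  # a terminal word starts a new entry
        prev = tok
    return [(word, n) for word, n in counts]

# Toy example: "the" closes 1 node (DT), "cat" closes 2 (NN, NP),
# "sat" closes 3 (VBD, VP, S).
print(node_counts("(S (NP (DT the) (NN cat)) (VP (VBD sat)))"))
# → [('the', 1), ('cat', 2), ('sat', 3)]
```

The "various grain sizes" in the study's title refers to comparing predictors at different levels of structural detail; a node count like this is one of the coarser grains.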

