Finally, we are starting to post interviews with our fellow graduate students in neuroscience at McGill! The first Q&A session we held was with Emily Coffey, a Ph.D. student in the lab of Dr. Robert Zatorre. Earlier this year, Emily’s first-authored article, based on her magnetoencephalography (MEG) investigation of the auditory frequency-following response (FFR), was published in Nature Communications.

In their study, Emily and her colleagues showed that the FFR, a neurophysiological response to complex periodic sounds previously considered to be of purely subcortical origin, may in fact partially reflect contributions from the human auditory cortex. In other words, the use of MEG (instead of electroencephalography, the technique typically used for studying auditory brainstem responses) allowed the authors to see that the auditory cortex works together with the brainstem during sound periodicity analysis. NeuroBlog editor Anastasia Glushko spoke to Emily about this study, and we hope you’ll enjoy reading what came out of this conversation!


A (Anastasia): To start from the basics, I wanted to ask you about the collaboration between the brainstem and the auditory cortex (AC) during sound periodicity analysis. You mention in the article that the connection between these two brain regions can potentially work in both directions. On the one hand, signals from the brainstem can be transferred to the auditory cortex. But the auditory cortex might also influence the brainstem when fine-grained analysis of sound frequency is performed. What do you think could be the function of these two directions of information flow (also taking into account that the AC seems to be activated later than the subcortical structures during sound frequency processing)?

E (Emily): We know from anatomical work that there are a lot of descending projections in the auditory system (in fact more than ascending projections), but we don’t yet know too much about how lower and higher level areas work together to process sound. This is one of the things we are currently trying to understand about how the auditory system works, and what our new MEG technique will hopefully be useful for.

Interestingly, early signals can be influenced by many things, like what you’re listening to, whether the sound is played forwards or backwards, or whether or not you know how to speak a tonal language like Mandarin. Those observations suggest involvement of higher-level processes like selective attention, but as you point out, these effects often cannot be a response elicited by the sound itself because of the timescales involved – the signal cannot have made it up to the cortex and then back down to the brainstem again in that short an interval of time. This suggests that top-down processes have either acted on the lower parts of the system to change how they process incoming sound in a relatively permanent fashion (as in the case of long-term music or language experience), or they might be acting to filter sound in an online, temporary fashion, like when you direct your attention to one person talking in a crowded restaurant.

The brain signals we’re working with can also be elicited when people are sleeping, or awake but distracted by something else. This is perhaps a good measure of the feed-forward part of the signal, since processes like selective attention are not engaged, but it could of course still have been influenced by long-term training.

Sound is a very complex and rich source of information, and the brain seems to have evolved “built-in” ways of separating and manipulating it, the ability to tune and change those mechanisms over long periods of time, and the capability of voluntarily modifying incoming signals according to task demands. How well the information is preserved as it comes in and is then filtered, enhanced and processed by higher-level systems in turn affects the quality of information that is sent on for purposes like language processing. We have a lot of work to do to figure out how all this takes place!


A: The AC activations in your study were right-lateralized. This is in line with the often-reported differences in the functionality of the left and right ACs: the right AC seems to be relatively specialized for fine-grained sound frequency analysis, while the left AC is “better” at temporal resolution. In the article, you mention the possibility that computations at the brainstem level might contribute to this asymmetry in the AC. Does this mean that certain processes in the brainstem might have led to the left hemisphere being more tuned for language processing (which normally requires better temporal resolution than music perception)?

E: A lot of processing does go on at the level of the brainstem in the auditory system. Given the level of top-down connectivity, it wouldn’t surprise me if signal frequency specialization were already beginning in these lower areas. However, I don’t think we can say much from this technique about how this contributes to the cortical lateralization we see in humans. Although we can differentiate between signals from the left and right cortex and different levels of the brainstem, the paired brainstem nuclei are close together and deep, so they will be difficult to resolve using non-invasive methods.


A: It is exciting that the research on auditory brainstem responses has clear clinical potential. You write in the article that certain FFR parameters are reliable biomarkers of some clinical syndromes. What are the existing and/or potential applications of FFR measures in clinical practice? And how does the finding of the cortical component in the FFR contribute to fields where measures of auditory brainstem responses are used in clinical settings?

E: The frequencies observed using the FFR are important for how we perceive the pitch and timbre of sounds. Many problems that have language-related components show differences in the FFR, suggesting abnormalities or deficiencies in how sound is processed. These include autism, language-related learning disorders, and difficulty deciphering speech in noisy conditions in older people and in children. The FFR could be used to identify sub-classes of populations that might benefit from a certain kind of treatment, and to track improvements in the FFR and behavior over time. There is also evidence to suggest that music practice enhances the FFR and mitigates some of these problems, which would certainly make for an enjoyable treatment solution! The work on FFRs as biomarkers and as measures of treatment is still in development (Prof. Nina Kraus’s lab at Northwestern University is doing some very interesting work on these topics).

The finding of a cortical component to the FFR is, I think, most relevant on the research side. I doubt we’ll be measuring people’s cortical MEG-FFR responses in the clinic, because EEG is far cheaper and more practical! But I hope it will be useful for better understanding what the EEG signal means, where it comes from, and which brain regions are at fault when poor auditory system function is contributing to clinical problems.