Spatial hearing is an important component of auditory perception: it provides awareness of the locations of sound sources situated within and outside the visual range, and it supports the ability to segregate and attend to one voice among many in a crowded room. Dr. Macpherson and his students have investigated a variety of issues related to spatial hearing in normally hearing and hearing-impaired listeners, and have provided expertise and methodological assistance for projects conducted with colleagues within the NCA, across the Western campus, and internationally. Drawing on facilities such as the Anechoic Chamber, Reverberation Chamber, Hearing Research Laboratory, and Hearing Research Clinic, this research has addressed topics including:
Cue weighting in sound localization. Lacking the two-dimensional retinal receptors available in vision, the auditory system must construct its representation of space by combining multiple sources of information present in the left- and right-ear signals. These include interaural differences of time and intensity, which cue left/right location, and spectral cues arising from the directionally dependent acoustic filtering performed by the pinna, head, and shoulders, which provide up/down and front/back information. When different cues carry common or conflicting information, it is important to understand how the auditory system combines or weights those cues and how background noise or assistive listening devices might affect that weighting.
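To make the interaural timing cue concrete, here is a minimal sketch in Python with NumPy (the noise burst, delay value, and cross-correlation estimator are illustrative assumptions, not the laboratory's analysis code) of estimating an interaural time difference from left- and right-ear signals:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation of the two ear signals.
    Positive values mean the sound arrives at the left ear first."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    return lag / fs

fs = 48000
rng = np.random.default_rng(0)
burst = rng.standard_normal(fs // 10)             # 100 ms noise burst
delay = 24                                        # 24 samples = 500 µs at 48 kHz
left = np.concatenate([burst, np.zeros(delay)])   # source toward the left:
right = np.concatenate([np.zeros(delay), burst])  # right-ear signal lags
```

On these toy signals the estimator recovers the 500 µs delay, which is near the upper end of naturally occurring ITDs for a human head.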
Use of dynamic cues in spatial hearing. In addition to the binaural difference and spectral cues for sound location mentioned above, information about the front/back location of a sound source is present in the relationship between changes in listener head orientation and the resulting changes in interaural time and intensity differences. Such dynamic cues generated by listener head rotation are particularly important when spectral cues are unavailable, whether because of stimulus characteristics, listener hearing impairment, or the bandwidth, microphone positioning, or limited spectral resolution of a hearing aid or cochlear implant. With the support of funding from the National Science Foundation, we were the first to systematically quantify listener sensitivity to dynamic cues as a function of stimulus spectrum and head-turn parameters, and we have demonstrated that the salience of those cues depends primarily on the duration of the head rotation rather than on its amplitude. Because correct interpretation of changing interaural difference cues for front/back localization requires the auditory system to have accurate knowledge of the direction of head motion, we have extended this line of research (funded by an NSERC Discovery Grant) to the role of inputs from the vestibular, proprioceptive, and visual systems in dynamic localization. An M.Sc. thesis completed in our laboratory demonstrated that vestibular input is both necessary and sufficient for accurate dynamic localization, whereas proprioceptive information is neither necessary nor sufficient.
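The front/back information carried by head rotation can be illustrated with a toy calculation (a simple sine-law ITD approximation; the head radius and the particular angles are illustrative assumptions). A front source and a back source on the same cone of confusion produce identical static ITDs, but a head turn drives their ITDs in opposite directions:

```python
import numpy as np

HEAD_RADIUS = 0.09   # m, nominal head radius
C = 343.0            # m/s, speed of sound

def itd_sine_model(azimuth_deg):
    """Sine-law approximation of ITD (seconds) for a source at the given
    azimuth (0 deg = straight ahead, positive = toward the listener's right)."""
    return (HEAD_RADIUS / C) * np.sin(np.radians(azimuth_deg))

front, back = 30.0, 150.0   # same lateral angle, but in front vs. behind
turn = 10.0                 # rightward head rotation, degrees

# Static ITDs are identical -- the front/back ambiguity.
itd_front_0 = itd_sine_model(front)
itd_back_0 = itd_sine_model(back)

# After a rightward head turn, both head-relative azimuths decrease,
# but the ITD decreases for the front source and increases for the back one.
itd_front_1 = itd_sine_model(front - turn)
itd_back_1 = itd_sine_model(back - turn)
```

The sign of the ITD change relative to the direction of head motion is exactly the information that makes vestibular knowledge of the rotation necessary for correct front/back interpretation.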
Spatial hearing in cochlear implant listening. Studies of cue weighting and dynamic localization have repeatedly demonstrated the importance of low-frequency interaural time-difference cues in spatial hearing. Since typical cochlear implant (CI) signal processing explicitly discards the waveform fine-structure information that carries those cues, it is important to assess the impact of CI listening on spatial hearing. The SPHear Lab has spearheaded (with faculty and staff colleagues at the NCA and the London Health Sciences Centre Cochlear Implant Team) an ambitious project examining, within listeners, the effects of bimodal (CI with contralateral hearing aid) versus bilateral (two CIs) listening and of fine-structure versus envelope-only CI signal encoding on sound localization, speech-in-noise perception, and sound quality in bilateral CI candidates. When the last of our eighteen subjects has completed the final test session (March 2015), two years after their first implant surgery, we believe this will be the largest bimodal-to-bilateral cross-over study ever conducted.
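The envelope/fine-structure distinction at the heart of this project can be sketched numerically (a crude rectify-and-smooth envelope extractor, not an actual CI processing strategy; the signal parameters are illustrative). The slow amplitude envelope survives, while the fast carrier fine structure is discarded:

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs                            # 1 s of signal
modulator = 0.5 * (1 + np.sin(2 * np.pi * 4 * t)) # 4 Hz envelope
carrier = np.sin(2 * np.pi * 1000 * t)            # 1 kHz fine structure
x = modulator * carrier

def envelope(sig, fs, win_ms=10):
    """Half-wave rectify, then smooth with a moving average (crude low-pass).
    The output tracks slow amplitude fluctuations but not the carrier."""
    rect = np.maximum(sig, 0.0)
    win = int(fs * win_ms / 1000)
    return np.convolve(rect, np.ones(win) / win, mode="same")

env = envelope(x, fs)   # follows the 4 Hz modulator; 1 kHz content is gone
```

Because the interaural-timing cue lives in the discarded carrier, two such envelope channels carry almost no usable ITD information, which is why low-frequency ITD sensitivity is degraded in CI listening.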
Virtual auditory space methods in spatial hearing research. Many of our studies have used virtual auditory space techniques, in which sounds presented over headphones are synthesized to reproduce naturally occurring cues (or manipulated versions of them) for a desired spatial arrangement of external sound sources. Such methods are required when independent control over the left- and right-ear signals is essential to the experiment, but they can also be expedient when the experimental setting is not conducive to free-field presentation. We are also applying virtual auditory space methods to the study of speech perception and hearing aid function in the noisy environment of car interiors. Because headphone-based presentation is inappropriate for simulations involving hearing-aid listening, we have developed in the NCA’s anechoic chamber a high channel-count sound presentation system that will permit free-field reconstruction of sound fields captured with a spherical microphone array.
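The core virtual auditory space operation, filtering a mono source through left- and right-ear impulse responses, can be sketched as follows. The toy "HRIRs" here encode only an interaural delay and a gain, labeled assumptions for illustration; measured HRIRs would be full filters capturing pinna, head, and shoulder acoustics:

```python
import numpy as np

fs = 48000
itd_samples = 30    # interaural delay, 30 samples = 625 µs at 48 kHz
ild_gain = 0.6      # interaural level difference as a simple gain

# Toy head-related impulse responses for a source toward the listener's left:
# the right-ear response is delayed and attenuated relative to the left.
hrir_left = np.zeros(64)
hrir_left[0] = 1.0
hrir_right = np.zeros(64)
hrir_right[itd_samples] = ild_gain

def render_vas(mono, hrir_l, hrir_r):
    """Synthesize a binaural headphone signal by convolving a mono source
    with left- and right-ear impulse responses."""
    return np.convolve(mono, hrir_l), np.convolve(mono, hrir_r)

rng = np.random.default_rng(1)
src = rng.standard_normal(4800)                       # 100 ms noise burst
left, right = render_vas(src, hrir_left, hrir_right)  # headphone channels
```

Replacing the toy impulse responses with measured or manipulated HRIRs is what gives the experimenter independent control over each ear's signal.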
Ewan Macpherson, Ph.D. is the Director of the Spatial and Prosthetic Hearing (SPHear) Laboratory and an Associate Professor and Faculty Researcher with the National Centre for Audiology. Dr. Macpherson approaches audiological and psychoacoustic research from the perspective of his training in engineering, physics, and auditory experimental psychology and neurophysiology. His research interests include spatial and binaural hearing by hearing-impaired and normally hearing listeners, the effects of the processing strategies found in cochlear implants and other assistive listening devices on spatial hearing, sensorimotor integration in sound localization via listener head movements, and the application of virtual auditory space techniques to audiological research.
t. 519-661-2111 x88072