Spoken Language Communication research involves speech perception and disorders, treatment of spoken language disorders, child aural habilitation, and technology-supported auditory-verbal therapy.
Approximately 10% of children have difficulty producing sounds in their native language. Such children are commonly identified by Speech-Language Pathologists as having a "phonological disorder". Treatment of such children consumes substantial health care resources.
In addition to their production difficulties, many such children cannot perceive the sounds of their native language correctly (Rvachew & Jamieson, 1989). Training with an appropriate technique can improve perception quite quickly for most such children. Moreover, this improvement transfers quickly to speech production for an encouraging proportion of children. Together with colleagues at the Alberta Children's Hospital, we are exploring a range of assessment and treatment protocols for this group of children, and refining these procedures to improve their effectiveness and efficiency (Jamieson & Rvachew, 1992, 1994; Rvachew, 1994; Rvachew & Jamieson, 1995).
To make it possible for clinicians to assess the perceptual abilities of "phonologically disordered" children in their own clinics, a new software package has recently been developed. The new system, termed "SAILS" (the Speech Assessment and Interactive Learning System), combines stimuli recorded at the Alberta Children's Hospital with assessment, training, and scoring software developed by Avaaz Innovations. This system makes the method routinely available for use in speech and language clinics and research centres.
Payments for speech intelligibility testing consume more than 30% of all government payments for audiological services in Ontario. However, the "speech discrimination" testing procedures now used in routine clinical practice are crude and relatively inefficient, and many clinicians and researchers consider them to be of questionable value. Fully 95% of audiologists in Ontario report that they regularly measure speech intelligibility in their clinical practice, but none report routinely using any advanced speech testing procedure.
To address these concerns, NCA researchers are exploring alternate testing procedures employing digitized speech and computer-controlled testing and scoring. Some of these approaches are extremely useful in quantifying the benefit received from a hearing aid. We believe that they provide useful alternatives to certain traditional clinical speech testing methods.
One approach provides an improved method of measuring the speech reception threshold (SRT): the level at which a given listener can identify specific speech sounds a specified proportion of the time.
Our procedure uses a closed set with just six spondaic words as targets, so that there is no uncertainty regarding what words might be heard. The listener's task is to identify which of the possible words was spoken. Testing is under computer control, with the speech level adjusted from trial to trial, using an adaptive tracking procedure, to estimate the desired identification probability level quickly and efficiently. Test-retest reliability is extremely high, and results correlate well with other audiometric measures and with conventional clinical SRT measures.
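The adaptive tracking step described above can be sketched as a simple staircase. The specific tracking rule (one-up/one-down), step size, simulated listener, and six-word spondee list below are illustrative assumptions for the sketch, not the published ASRT parameters:

```python
import math
import random

# Hypothetical six-word closed set; the actual ASRT spondee list is not
# given in the text, so these words are placeholders.
SPONDEES = ["baseball", "cowboy", "hotdog", "railroad", "sidewalk", "toothbrush"]

def simulated_listener(level_db, threshold_db=30.0, slope=0.5):
    """Toy psychometric model: the probability of a correct closed-set
    identification rises with presentation level, with a chance floor
    of 1/6 for six response alternatives."""
    p = 1.0 / (1.0 + math.exp(-slope * (level_db - threshold_db)))
    chance = 1.0 / len(SPONDEES)
    return random.random() < chance + (1.0 - chance) * p

def adaptive_srt(start_db=60.0, step_db=4.0, reversals_needed=8):
    """One-up/one-down staircase (an illustrative adaptive rule; the
    actual ASRT tracking rule may differ): lower the presentation level
    after a correct response, raise it after an error, and estimate the
    threshold as the mean of the levels where the track reversed."""
    level = start_db
    going_down = None
    reversals = []
    while len(reversals) < reversals_needed:
        correct = simulated_listener(level)
        if going_down is not None and correct != going_down:
            reversals.append(level)
        going_down = correct
        level += -step_db if correct else step_db
    return sum(reversals) / len(reversals)
```

Because the level moves toward the region where responses flip between correct and incorrect, a handful of reversals suffices for a stable estimate, which is what makes adaptive tracking fast relative to fixed-level word-list testing.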
An initial evaluation of this Adaptive SRT (ASRT) system (Cheesman, 1992) showed that the test is sensitive, time-efficient, and readily accepted by a wide range of persons, and that results are highly reproducible. Subsequent clinical studies further refined our protocol (Jamieson et al., 1994). The ASRT test is now a routine part of our research program. For example, in one study, the ASRT showed that a hearing aid noise reduction system produced an advantage for the listener equivalent to about an 11 dB increase in the signal-to-noise ratio.
A second test provides an alternative to traditional word-list testing. This modified "distinctive features differences" (DFD[m]) test uses a closed-set format with 21 English consonant targets, so that there is no uncertainty regarding what words might be heard. The task is to identify which of the possible words was spoken. All consonant sounds are spoken by four talkers (two men and two women) in a fixed "aCil" environment (e.g., abil, akil, atil, afil).
This test is also computer controlled: response alternatives are presented on a computer monitor, and responses made by computer mouse are recorded and scored automatically. Test-retest reliability is also extremely high with this procedure. Scoring in terms of overall accuracy is complemented by information on the pattern of confusions among the consonant sounds.
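The automatic scoring just described, overall accuracy plus the pattern of confusions, can be sketched as follows. The token names and the trial data layout are hypothetical; only four of the 21 "aCil" targets are shown:

```python
from collections import defaultdict

# Illustrative subset of the 21 consonant targets in the fixed "aCil" frame.
TARGETS = ["abil", "akil", "atil", "afil"]

def score_session(trials):
    """trials: a list of (presented, responded) token pairs from one
    test session. Returns the overall proportion correct and a
    confusion matrix mapping (presented, responded) pairs to counts."""
    confusions = defaultdict(int)
    correct = 0
    for presented, responded in trials:
        confusions[(presented, responded)] += 1
        if presented == responded:
            correct += 1
    accuracy = correct / len(trials)
    return accuracy, dict(confusions)
```

Keeping the full (presented, responded) counts, rather than only the percent correct, is what lets the clinician see, for example, that a listener systematically hears "akil" as "atil", a pattern a single accuracy score would hide.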
Evaluations show that the test is fast, sensitive, reliable and readily accepted by most clients (Cheesman et al., 1992). The test has been used extensively in research studies to evaluate the impact of hearing aids on speech intelligibility and to assess the speech comprehension abilities of learners of English as a Second Language.
Many hearing aid users report that while their hearing aid does help them to understand spoken language, using their hearing aid requires a great deal of effort and causes them to become fatigued quickly. Traditional speech intelligibility measures focus only on how well listeners perform during the specific time of the test, and they ignore factors such as the amount of effort required. A satisfactory measure of the effort required to listen might therefore help predict the longer-term benefits of a particular hearing aid fitting. For example, reduced listener fatigue would be a reason to select one hearing aid over another, even if measurable speech intelligibility were equivalent for the two devices.
One possibility is that response (decision) time increases with listening effort. To evaluate this possibility, we have begun a project to develop a new effort-based speech testing procedure, using speech materials spoken in central Canadian English and test formats that involve answering questions and remembering or classifying statements. Several sets of stimuli have been evaluated. To increase face validity and expand the range of test formats, we have developed, and are evaluating, a verbal response system that measures the elapsed time between stimulus and (vocal) response.
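A verbal response system of this kind must detect the onset of the vocal response in the recorded signal. A minimal "voice key" sketch is given below; the frame length, energy threshold, and the assumption that recording begins at stimulus offset are all illustrative choices, not the parameters of our actual system:

```python
def vocal_onset_ms(samples, sample_rate=44100, frame_len=441,
                   energy_threshold=0.01):
    """Hypothetical voice-key sketch: scan fixed-length frames of a
    microphone signal (assumed to start at stimulus offset) and report
    the time in milliseconds of the first frame whose mean squared
    amplitude reaches the threshold. Returns None if no vocal response
    is detected in the recording."""
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        if energy >= energy_threshold:
            return 1000.0 * start / sample_rate
    return None
```

For example, with 4410 silent samples followed by speech at a 44.1 kHz sampling rate, the detected onset is 100 ms after stimulus offset. A fixed energy threshold is the simplest possible detector; a practical system would also need to reject breath noise and room noise before the latency measure could be trusted.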