Semin Hear 2004; 25(1): 17-24
DOI: 10.1055/s-2004-823044
Copyright © 2004 by Thieme Medical Publishers, Inc., 333 Seventh Avenue, New York, NY 10001, USA.

Neighborhood Activation in Spoken Word Recognition
Donald D. Dirks1
- 1Professor Emeritus, UCLA School of Medicine, Los Angeles, California; and Consultant,
National Center for Rehabilitative Auditory Research Veterans Administration Medical
Center, Portland, Oregon
Remembering Tom Tillman
During and following World War II, there was significant interest among audiologists
and speech scientists in the development and standardization of speech recognition
tests for clinical use. Clinical observations during the 1940s and 1950s had already
indicated that results from pure-tone threshold tests often were not predictive of
receptive auditory communication ability among persons with hearing loss. No doubt,
Tom Tillman, as a graduate student and later as a faculty member at Northwestern University,
was influenced by the growing clinical interest in using speech to measure receptive
communication ability. Several of Tillman's publications1,2,3 during his early professional
life reflect his research interest in speech audiometry, an interest that continued
and expanded4,5,6 throughout his career. He is especially remembered, in collaboration
with Carhart, for the development of the Northwestern University Test No. 6, which is still in use
today. Tillman's strategy for measuring speech recognition with phonemically balanced,
monosyllabic words was characteristic of the basic and clinical speech research emphasis
of that period, in particular, the general view that the perception of words required
the recovery of a sequence of phonetic or phonemic elements. This orientation led
to “bottom-up” explanations of speech perception. Since the 1970s, however, basic
speech research has reflected the growing recognition that any comprehensive theory
of speech perception must account for the processes and representations that subserve
the recognition of spoken words beyond the perception of individual consonants and
vowels. The current article reviews several recent investigations conducted at the
UCLA-VA Human Auditory Laboratory that provide evidence that cognitive and linguistic
capabilities (“top-down processing”) play a role in the rapid selection of a target word
from other potential candidates once an acoustic-phonetic pattern has been activated
in memory. This article is dedicated to Tom Tillman, who served as an example of a
dedicated, meticulous researcher to me during my pre-doctoral studies.