J Am Acad Audiol 2001; 12(10): 514-522
DOI: 10.1055/s-0042-1745642
Original Article

Sentence Recognition Materials Based on Frequency of Word Use and Lexical Confusability

Theodore S. Bell
Department of Communication Disorders, California State University at Los Angeles, Los Angeles, California
Richard H. Wilson
James H. Quillen VA Medical Center, Mountain Home, Tennessee, and Departments of Surgery and Communication Disorders, East Tennessee State University, Johnson City, Tennessee


The sentence stimuli developed in this project combined aspects of several traditional approaches to speech audiometry. Sentences varied with respect to frequency of word use and phonetic confusability. Familiar consonant-vowel-consonant words, nouns and modifiers, were used to form 500 sentences of seven to nine syllables. Based on concepts from the Neighborhood Activation Model of spoken word recognition, each sentence contained three key words that were all characterized as high or low in frequency of use and high or low in lexical confusability. Frequency of use was determined from published indices of word use, and lexical confusability was defined by a metric based on the number of other words similar to a given word under a single-phoneme substitution rule. Thirty-two subjects with normal hearing were randomly assigned to one of seven presentation levels in quiet, and an additional 32 listeners were randomly assigned to a fixed-level noise background at one of six signal-to-noise ratios. The results indicated that in both quiet and noise listening conditions, high-use words were more intelligible than low-use words, and there was an advantage for phonetically unique words; the position of the key word in the sentence was also a significant factor. These data formed the basis for a sequence of experiments that isolated significant nonacoustic sources of variation in spoken word recognition.

Abbreviations: CVC = consonant-vowel-consonant, HD = high frequency of use word from a dense neighborhood, HS = high frequency of use word from a sparse neighborhood, LD = low frequency of use word from a dense neighborhood, LS = low frequency of use word from a sparse neighborhood, NAM = Neighborhood Activation Model, SIN = Speech in Noise, SNR = signal-to-noise ratio
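The confusability metric described above counts how many other words in a lexicon differ from a given word by exactly one phoneme (its "neighborhood density"). A minimal sketch of such a count is given below; the toy lexicon and its phoneme transcriptions are illustrative assumptions, not the study's actual materials.

```python
def neighborhood_density(word, lexicon):
    """Count lexicon entries differing from `word` by exactly one phoneme.

    `word` and each lexicon entry are tuples of phoneme symbols
    (here CVC words, so length 3). Only single-phoneme substitutions
    are considered, matching the rule described in the abstract.
    """
    count = 0
    for other in lexicon:
        if other == word or len(other) != len(word):
            continue
        mismatches = sum(a != b for a, b in zip(word, other))
        if mismatches == 1:
            count += 1
    return count


# Toy CVC lexicon with ARPAbet-style transcriptions (an assumption for
# illustration): cat, bat, cut, cap, dog.
lexicon = [
    ("k", "ae", "t"),   # cat
    ("b", "ae", "t"),   # bat
    ("k", "ah", "t"),   # cut
    ("k", "ae", "p"),   # cap
    ("d", "ao", "g"),   # dog
]

print(neighborhood_density(("k", "ae", "t"), lexicon))  # 3 (bat, cut, cap)
print(neighborhood_density(("d", "ao", "g"), lexicon))  # 0 (phonetically unique)
```

On this scheme, a word like "cat" sits in a dense neighborhood (many one-phoneme competitors), while "dog" is sparse; the study's HD/HS/LD/LS key-word categories cross this density dimension with frequency of use.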


Article published online:
March 7, 2022

© 2001. American Academy of Audiology. This article is published by Thieme.

Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA

