J Am Acad Audiol 2001; 12(05): 233-244
DOI: 10.1055/s-0042-1745602
Original Article

Effects of Lexical Factors on Word Recognition Among Normal-Hearing and Hearing-Impaired Listeners

Donald D. Dirks
National Center for Rehabilitative Auditory Research, Veterans Administration Medical Center, Portland, OR; Veterans Administration Greater Los Angeles Healthcare System; Division of Head and Neck Surgery, UCLA School of Medicine, Los Angeles, California
,
Sumiko Takayanagi
National Center for Rehabilitative Auditory Research, Veterans Administration Medical Center, Portland, OR; Veterans Administration Greater Los Angeles Healthcare System; Division of Head and Neck Surgery, UCLA School of Medicine, Los Angeles, California
,
Anahita Moshfegh
Veterans Administration Greater Los Angeles Healthcare System, Los Angeles, California

Abstract

An investigation was conducted to examine the effects of lexical difficulty on spoken word recognition among young normal-hearing listeners and middle-aged and older listeners with hearing loss. Two word lists, based on the lexical characteristics of word frequency and neighborhood density and frequency (Neighborhood Activation Model [NAM]), were developed: (1) lexically “easy” words with high word frequency and a low number and frequency of words phonemically similar to the target word and (2) lexically “hard” words with low word frequency and a high number and frequency of words phonemically similar to the target word. Simple and transformed up-down adaptive strategies were used to estimate performance levels at several locations on the performance-intensity functions of the words. The results verified predictions of the NAM and showed that easy words produced more favorable performance levels than hard words at equal levels of intelligibility. Although the slopes of the performance-intensity functions for the hearing-impaired listeners were less steep than those of the normal-hearing listeners, the effects of lexical difficulty on performance were similar for both groups.
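Under the NAM, a word's lexical neighborhood is conventionally defined as the set of words differing from it by a single phoneme substitution, deletion, or insertion (Luce and Pisoni, 1998). The sketch below illustrates this one-phoneme rule on a small, entirely hypothetical mini-lexicon; the transcriptions and word set are illustrative assumptions, not the study's materials.

```python
def is_neighbor(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, deletion, or insertion -- the one-phoneme rule
    commonly used to define lexical neighborhoods under the NAM."""
    la, lb = len(a), len(b)
    if abs(la - lb) > 1 or a == b:
        return False
    if la == lb:
        # Same length: neighbor iff exactly one substitution.
        return sum(x != y for x, y in zip(a, b)) == 1
    if la > lb:          # make a the shorter sequence
        a, b = b, a
    # b is one phoneme longer: check that deleting one phoneme
    # from b yields a.
    i = 0
    while i < len(a) and a[i] == b[i]:
        i += 1
    return a[i:] == b[i + 1:]

# Hypothetical mini-lexicon in ARPAbet-style transcription.
lexicon = {
    "cat":  ("K", "AE", "T"),
    "bat":  ("B", "AE", "T"),
    "cap":  ("K", "AE", "P"),
    "cast": ("K", "AE", "S", "T"),
    "dog":  ("D", "AO", "G"),
}

neighbors_of_cat = [w for w, ph in lexicon.items()
                    if w != "cat" and is_neighbor(lexicon["cat"], ph)]
# "bat" (substitution), "cap" (substitution), "cast" (insertion)
```

In a full implementation, neighborhood density would be the count of such neighbors in a large transcribed lexicon (e.g., the Hoosier Mental Lexicon cited below), and neighborhood frequency their mean word frequency.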

Abbreviations: ANOVA = analysis of variance, CVC = consonant-vowel-consonant, NAM = Neighborhood Activation Model, NU-6 = Northwestern University Auditory Test No. 6, SNR = signal-to-noise ratio
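The adaptive strategies named in the abstract follow Levitt's (1971) up-down rules: a simple 1-down/1-up staircase converges on the 50 percent point of the performance-intensity function, while a transformed 2-down/1-up rule converges on the 70.7 percent point. The simulation below is a minimal sketch of the 2-down/1-up case, assuming a hypothetical logistic performance-intensity function; the slope and midpoint values are illustrative, not the study's data.

```python
import math
import random

def run_staircase(p_correct, start_level, step, n_trials, n_down=2, rng=None):
    """Simulate an n-down/1-up adaptive staircase (Levitt, 1971).

    p_correct(level) -> probability of a correct response at that level.
    n_down=1 targets the 50% point; n_down=2 targets the 70.7% point.
    Returns the list of levels at which the track reversed direction.
    """
    rng = rng or random.Random(0)
    level = start_level
    correct_run = 0
    last_direction = 0          # +1 = level went up, -1 = level went down
    reversals = []
    for _ in range(n_trials):
        correct = rng.random() < p_correct(level)
        if correct:
            correct_run += 1
            if correct_run >= n_down:   # rule met: decrease the level
                correct_run = 0
                direction = -1
                level -= step
            else:                       # rule not yet met: level unchanged
                direction = last_direction
        else:                           # any error: increase the level
            correct_run = 0
            direction = +1
            level += step
        if last_direction and direction and direction != last_direction:
            reversals.append(level)     # direction changed: a reversal
        if direction:
            last_direction = direction
    return reversals

# Hypothetical logistic performance-intensity function:
# 50% point at 0 dB, slope parameter 2 dB.
def pc(level_db):
    return 1.0 / (1.0 + math.exp(-level_db / 2.0))

revs = run_staircase(pc, start_level=10.0, step=2.0, n_trials=200,
                     n_down=2, rng=random.Random(42))
threshold = sum(revs[-8:]) / 8   # average the last 8 reversal levels
```

For this function the 70.7 percent point lies near +1.8 dB, and the averaged reversal levels hover around it; in practice the same track, run with different down rules, yields estimates at several locations on the performance-intensity function.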



Publication History

Article published online:
02 March 2022

© 2001. American Academy of Audiology. This article is published by Thieme.

Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA

 
  • REFERENCES

  • Beattie RC. (1989). Word recognition functions for the CID W-22 test in multitalker noise for normally hearing and hearing-impaired subjects. J Speech Hear Disord 54:20–33.
  • Beattie RC, Warren V. (1983). Slope characteristic of CID W-22 word functions in elderly hearing-impaired listeners. J Speech Hear Disord 48:119–127.
  • Bradlow AR, Pisoni DB. (1999). Recognition of spoken words by native and non-native listeners: talker-, listener- and item-related factors. J Acoust Soc Am 106:2074–2086.
  • Department of Veterans Affairs. (1991). Speech recognition and identification materials. (CD 1.1). Long Beach, CA: VA Medical Center.
  • Dirks DD, Takayanagi S, Moshfegh A, Noffsinger PD, Fausti SA. (2001). Examination of neighborhood activation theory in normal and hearing-impaired listeners. Ear Hear (in press).
  • Kirk KI, Pisoni DB, Osberger MJ. (1995). Lexical effects on spoken word recognition by pediatric cochlear implant users. Ear Hear 16:470–481.
  • Kirk KI, Pisoni DB, Miyamoto RC. (1997). Effects of stimulus variability on speech perception in listeners with hearing impairment. J Speech Lang Hear Res 40:1395–1405.
  • Kucera H, Francis W. (1967). Computational Analysis of Present-Day American English. Providence, RI: Brown University Press.
  • Ladefoged P. (1993). A Course in Phonetics. Fort Worth, TX: Harcourt Brace College.
  • Levitt H. (1971). Transformed up-down methods in psychoacoustics. J Acoust Soc Am 49:467–477.
  • Luce PA. (1986). A computational analysis of uniqueness points in auditory word recognition. Percept Psychophys 39:155–158.
  • Luce PA, Pisoni DB. (1998). Recognizing spoken words: the Neighborhood Activation Model. Ear Hear 19:1–36.
  • Meyer TA, Pisoni DB. (1999). Some computational analyses of the PBK test: effects of frequency and lexical density on spoken word recognition. Ear Hear 20:363–371.
  • Nusbaum HC, Pisoni DB, Davis CK. (1984). Sizing up the Hoosier Mental Lexicon: measuring the familiarity of 20,000 words. Research on Speech Perception Progress Report No. 10. Bloomington, IN: Speech Research Laboratory, Psychology Department, Indiana University. 122–134.
  • Sommers MS. (1996). The structural organization of the mental lexicon and its contribution to age-related changes in spoken word recognition. Psychol Aging 11:333–341.
  • Sommers MS. (1998). Spoken word recognition in individuals with dementia of the Alzheimer’s type: changes in talker normalization and lexical discrimination. Psychol Aging 13:631–646.
  • Sommers MS, Kirk KI, Pisoni DB. (1997). Some considerations in evaluating spoken word recognition by normal-hearing, noise masked normal-hearing, and cochlear implant listeners. I: the effects of response format. Ear Hear 18:89–99.
  • Torretta GM. (1995). The easy-hard word multi-talker speech database: an initial report. Research on Spoken Language Processing, Progress Report 20. Bloomington, IN: Indiana University, 321–334.
  • Wilson RH, Coley KE, Haenel JL, Browning KM. (1976). Northwestern University Auditory Test No. 6: normative and comparative intelligibility functions. J Am Audiol Soc 1:221–228.