J Am Acad Audiol 2017; 28(03): 209-221
DOI: 10.3766/jaaa.16025
Articles
Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

The Effect of Signal-to-Noise Ratio on Linguistic Processing in a Semantic Judgment Task: An Aging Study

Nicholas Stanley, Tara Davis, and Julie Estis

University of South Alabama, Mobile, AL

Corresponding author

Nicholas S. Stanley
Department of Speech Pathology and Audiology, University of South Alabama
Mobile, AL 36688-0002

Publication History

Publication Date:
26 June 2020 (online)

 

Abstract

Background:

Aging effects on speech understanding in noise have primarily been assessed through speech recognition tasks. Recognition tasks, which focus on bottom-up, perceptual aspects of speech understanding, intentionally limit linguistic and cognitive factors by asking participants only to repeat what they have heard. Linguistic processing tasks, on the other hand, require both bottom-up and top-down (linguistic, cognitive) processing skills and are, therefore, more reflective of the speech understanding abilities used in everyday communication. The effect of signal-to-noise ratio (SNR) on linguistic processing ability is relatively unknown for either young adults (YAs) or older adults (OAs).

Purpose:

To determine if reduced SNRs would be more deleterious to the linguistic processing of OAs than YAs, as measured by accuracy and reaction time in a semantic judgment task in competing speech.

Research Design:

In the semantic judgment task, participants indicated via button press whether word pairs were a semantic Match or No Match. The task was performed in quiet, as well as at +3, 0, −3, and −6 dB SNR with two-talker speech competition.

Study Sample:

Seventeen YAs (20–30 yr) with normal hearing sensitivity and 17 OAs (60–68 yr) with normal hearing sensitivity or mild-to-moderate sensorineural hearing loss within age-appropriate norms.

Data Collection and Analysis:

Accuracy, reaction time, and false alarm rate were measured and analyzed using a mixed design analysis of variance.

Results:

A decrease in SNR level significantly reduced accuracy and increased reaction time in both YAs and OAs. However, poor SNRs affected accuracy and reaction time of Match and No Match word pairs differently. Accuracy for Match pairs declined at a steeper rate than No Match pairs in both groups as SNR decreased. In addition, reaction time for No Match pairs increased at a greater rate than Match pairs in more difficult SNRs, particularly at −3 and −6 dB SNR. False-alarm rates indicated that participants had a response bias to No Match pairs as the SNR decreased. Age-related differences were limited to No Match pair accuracies at −6 dB SNR.

Conclusions:

The ability to correctly identify semantically matched word pairs was more susceptible to disruption by a poor SNR than the ability to identify semantically unrelated words in both YAs and OAs. The effect of SNR on this semantic judgment task implies that speech competition differentially affected the facilitation of semantically related words and the inhibition of semantically incompatible words, although processing speed, as measured by reaction time, remained faster for semantically matched pairs. Overall, the semantic judgment task in competing speech elucidated the effect of a poor listening environment on the higher-order processing of words.



INTRODUCTION

Speech understanding in noise has primarily been assessed through the use of speech recognition tasks in which an individual responds by repeating what they have heard. Since linguistic-cognitive components are essential to replicate everyday communication situations ([Kalikow et al, 1977]; [Nilsson et al, 1994]; [Spyridakou and Bamiou, 2015]), it is important to develop clinically relevant measures of speech understanding in noise that are ecologically valid. Accuracy on a speech-in-noise test reflects performance in an everyday listening situation ([Jerger, Greenwald, et al, 2000]). Therefore, sentence level materials are typically used in clinical audiological assessment to provide a more realistic listening scenario than isolated words, because sentences incorporate linguistic and cognitive skills to a greater extent ([Nilsson et al, 1994]; [McArdle and Chisolm, 2009]; [Spyridakou and Bamiou, 2015]).

While sentence recognition-in-noise tasks are more realistic than word recognition-in-noise tasks, a response method of simply repeating what was heard has limitations. [Jerger et al (2014)] stated that there were “almost no tests of genuine speech understanding” that assess whether an individual can hear and understand what is presented, as opposed to simply being able to repeat it. Likewise, [Tun et al (2012)] stated that there is a need to go beyond recognition accuracy to evaluate comprehension. Presumably, linguistic processing tasks can improve assessment of speech understanding in noise by including cognitive aspects of everyday communication, specifically response formulation. In a linguistic processing task, listeners must first process the heard information, managing its linguistic and cognitive load, and then make a decision based on that information ([Schneider and Pichora-Fuller, 2000]). An example of this type of task is the semantic judgment task, in which a participant hears a pair (or series) of words and decides whether the words are in the same semantic category, such as food, body parts, or modes of transportation.

Two previous auditory event-related potential (AERP) studies evaluated performance in a semantic judgment task in multitalker babble and one-talker competition ([Romei et al, 2011]; [Davis et al, 2013]), respectively. Use of word pair stimuli in a linguistic processing task eliminates syntactic information and requires less working memory load than tasks that use sentences. The elimination of a syntactic confound is particularly important for AERP studies, because it is challenging to dissociate the overlapping neural sources underlying phonological, semantic, and syntactic processing in auditory sentence processing ([Friederici, 2002]; [Davis et al, 2015]).



COGNITIVE INFLUENCE ON SPEECH UNDERSTANDING

Age-related differences in speech understanding have been extensively evaluated and identified through suprathreshold recognition tasks. Older adults (OAs) typically have greater difficulty in noisy environments because of presbycusis, age-related central-auditory changes, and/or age-related cognitive decline ([CHABA, 1988]). Although cognitive ability is not routinely assessed in speech-in-noise tests, cognitive factors (i.e., attention, working memory, and executive function) are utilized when trying to understand speech in noisy environments ([Humes, 1996]; [Rönnberg et al, 2010]; [Zekveld et al, 2013]; [Helfer and Freyman, 2014]). Individuals must be able to focus attention on the person to whom they are listening. As the person processes spoken information, he/she must hold words in working memory. Executive function, which is the ability to formulate an appropriate response, plays a significant role in complex tasks ([Salthouse et al, 2003]; [Buckner, 2004]). A recent study showed that executive function was significantly correlated with sentence recognition in noise, while working memory was not ([Helfer and Freyman, 2014]). However, other studies that evaluated working memory using speech-in-noise tests found that working memory was correlated with speech recognition in noise ([Rönnberg et al, 2010]; [2013]; [Zekveld et al, 2013]). This finding was especially evident for reading span and sentence-in-noise tasks ([Akeroyd, 2008]). The variability in these findings is potentially due to the methodological differences in the speech and cognitive tasks used in each study. Regardless, cognitive ability is associated with speech understanding in noise. Executive function is of particular importance for the semantic judgment-in-noise task, because listeners are required to semantically analyze and make a decision on heard information. In such tasks, executive function is utilized to a greater extent than in simply repeating what was heard. Based on task and response demands, the semantic judgment task utilizes cognitive processes that may contribute to age-related differences in speech understanding.



TYPES OF SPEECH COMPETITION AND AGING

It is important to consider the parameters of the competing signal used in speech-in-noise tests. Specifically, informational masking and energetic masking play a crucial role in speech-in-noise performance. Speech competition can vary from a single talker ([Tun and Wingfield, 1999]; [Cullington and Zeng, 2008]) to upward of 20-talker babble ([Tun and Wingfield, 1999]). With single-talker competition, informational masking occurs when information in the competing message interferes with listening to the target stimuli. During energetic masking, as occurs with 20-talker babble, the spectral energy of the competition overlaps that of the target stimuli. Informational masking and energetic masking are caused by widely different mechanisms; however, they exist on a continuum. When two to three talkers are used as speech competition, the result is an effective blend of informational and energetic masking ([Brungart et al, 2001]). For this reason, the present study utilized two-talker competition.

It is widely accepted that OAs have greater difficulty understanding speech in noisy environments than younger adults (YAs). [Tun and Wingfield (1999)] investigated age-related differences in a suprathreshold recognition task with various types of speech competition, including one- and two-talker competition. They found that, at −6 dB SNR, OA accuracy decreased from 60–70% to 40% correct with one- and two-talker competition, respectively, whereas YA accuracy decreased from 90% to 60–70% correct ([Tun and Wingfield, 1999]). These researchers noted that there was no significant decrease in accuracy from two-talker competition to 20-talker babble. At −6 dB SNR, YA reaction times significantly increased from one to two talkers; however, there was no significant increase from two-talker competition to multitalker babble. OA reaction times were significantly longer than YA reaction times across all speech competition conditions. Suprathreshold studies have also shown that age-related differences increase as signal-to-noise ratio (SNR) decreases. This study and other suprathreshold studies illustrate that as SNR levels decrease, accuracies decrease and reaction times increase ([Tun and Wingfield, 1999]; [Helfer and Freyman, 2008]). The same response pattern is observed in OAs; however, these changes are greater than in YAs ([Tun and Wingfield, 1999]).



LINGUISTIC PROCESSING TASKS

Relatively few linguistic processing tasks have been employed in research on speech understanding in noise; however, [Romei et al (2011)] and [Davis et al (2013)] utilized variations of the semantic judgment task performed in different competing speech conditions. In the [Romei et al (2011)] study, 12 YAs performed a semantic judgment task in which participants responded “yes” or “no” to whether a third word was semantically related to the first and/or second word. Word triplets were presented in multitalker babble at a +9 dB SNR from multiple loudspeakers at 45°, 135°, 225°, and 315° azimuths, simultaneously. This specific SNR was chosen because it corresponded to 95% correct for a word recognition task using the same stimuli. A high SNR was needed because the primary purpose of this study was to collect AERP data on linguistic processing in multitalker babble. Results indicated that all accuracies were greater than 75% correct; however, a decrease in accuracy was noted when related first and third words were presented in noise. Results were not influenced by competition when the first and third word were unrelated or when the second and third word were related. These researchers concluded that the combination of their word triplet paradigm and multitalker babble produced increased task difficulty in the cognitive domain, that is, working memory and attention.

[Davis et al (2013)] assessed age-related changes in speech understanding by utilizing a semantic judgment task with one-talker quasi-dichotic competing speech presented from either a right or left loudspeaker. As part of an AERP study, 20 YAs (18–24 yr) and 20 middle-aged adults (44–57 yr) were presented with word pairs that were in the same semantic category (Match) or in different semantic categories (No Match). Individual SNR levels were determined based on a word-recognition-in-noise task. SNRs ranged from −6 to −10 dB, with mean SNRs of −9.6 dB in YAs and −8.7 dB in middle-aged adults. Results from this study indicated that both groups were ∼9% more accurate for No Match conditions and had faster reaction times for Match conditions. No behavioral age-related differences in accuracy or reaction time were observed between YAs and middle-aged adults, although the late positive component of the AERP showed group differences in scalp topography, indicating that middle-aged adults recruited additional (i.e., frontal) regions of the brain to successfully complete the task ([Davis and Jerger, 2014]).



RATIONALE

The purpose of this study was to determine the effect of SNR on a semantic judgment-in-noise task in YAs and OAs using two-talker speech competition. To our knowledge, only [Davis et al (2013)] have reported on age-related differences (YAs and middle-aged adults) in a semantic judgment task using competing speech. The current study extends previous research on speech understanding in noise by evaluating whether semantic processing is more impaired by a poor SNR in OAs than in YAs. It was predicted that as SNR decreased, accuracy would decline to a greater extent for Match than for No Match word pairs, that reaction time would increase to a greater extent for No Match word pairs, and that YAs would show overall better accuracy and faster reaction times than OAs.

With increased audiological interest in the neural markers of linguistic processing, we sought to identify appropriate SNRs for future AERP studies that evaluate linguistic processing in noise. Previous studies ([Romei et al, 2011]; [Davis et al, 2013]) have relied on word-recognition-in-noise scores to set SNR levels for semantic judgment tasks. This is a limitation, because there may be substantial differences in accuracy between semantic judgment and word recognition tasks, even with identical words and speech competition ([Davis, 2009]). Because current clinical speech-in-noise tests address cognitive factors only to a limited extent, it is hoped that findings from this and future studies will guide the development of clinically relevant speech-in-noise tasks that evaluate the role of top-down factors in speech understanding in noise.



METHODS

Participants

Participants included 17 YAs (range = 20–30 yr; mean age = 23.35 yr; standard deviation [SD] = 3.39) and 17 OAs (range = 60–68 yr; mean age = 64.18 yr; SD = 2.43) for a total of 34 participants. Both groups consisted of 12 females and 5 males. All participants spoke American English, were right-handed as determined by report and questionnaire ([Annett, 1970]), and had no known history of stroke, diabetes, or neurologic, psychiatric, reading, speech, or language disorder. Mean years of education were 15.94 yr (SD = 1.98) for YAs and 18.70 yr (SD = 3.50) for OAs. Informed consent was obtained in accordance with the guidelines provided by the Institutional Review Board at the University of South Alabama.



Cognitive and Audiometric Tests

Participants were screened for mild cognitive impairment using the Montreal Cognitive Assessment. All participants scored ≥26 of 30 points, which indicates normal cognitive performance ([Nasreddine et al, 2005]). YA and OA mean scores were 28.76 (SD = 1.3) and 28.59 (SD = 1.42), respectively.

Each participant received a comprehensive audiological evaluation, which included pure-tone audiometry, word recognition scores (WRS) in quiet, and the Quick Speech-in-Noise Test (QuickSIN). Audiometric data are shown in [Table 1]. Pure-tone thresholds were measured at octave frequencies from 250 to 8000 Hz. All YAs had normal hearing thresholds (≤20 dB HL). OAs had a greater prevalence of hearing loss than YAs, with 8 of 17 participants exhibiting mild to moderate sensorineural hearing loss consistent with presbycusis ([Figure 1]). OAs were excluded from the study if pure-tone thresholds exceeded the age-appropriate (60–69 yr) normative data presented by [Cruickshanks et al (1998)]. All participants had symmetrical hearing (<10 dB between ears), with the exception of one OA who exhibited a 15-dB difference between ears at 8 kHz only. Type A tympanograms were recorded in all participants.

Table 1

Mean Audiometric Data for PTA, NU-6 in Quiet, and QuickSIN for YAs and OAs

Audiometric Test        Test Ear   YA            OA
PTA (dB HL)             Right      8.35 (3.10)   15.18 (6.05)
                        Left       7.35 (4.05)   15.24 (6.71)
NU-6 in quiet (%)       Right      98.7 (0.03)   99.4 (0.02)
                        Left       97.8 (0.02)   98.4 (0.02)
QuickSIN (dB SNR loss)  Binaural   0.05 (1.17)   0.88 (1.17)

Note: SDs are shown in parentheses.


Figure 1 Mean YA and OA pure-tone thresholds for the right and left ears. Grayed area indicates normative range, based on findings from “The Beaver Dam Study.” Figure adapted with permission from [Davis (2009)].

WRS in quiet were obtained using prerecorded Auditec (St. Louis, MO) NU-6 monosyllabic 25-item word lists presented at 40 dB above the pure-tone average (PTA) ([Department of Veteran Affairs, 1998]). WRS were excellent for both groups (see [Table 1]). Two QuickSIN lists were presented binaurally at 70 dB HL, and the average dB SNR loss was reported ([Killion et al, 2004]). The dB SNR loss for each list was calculated by subtracting the total number of correctly repeated keywords from 25.5. QuickSIN results were within normal limits (≤3.0 dB SNR loss) in all participants in both groups.
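
For readers who wish to verify this arithmetic, a minimal Python sketch is given below; the keyword counts are hypothetical and the helper name is ours, not part of the QuickSIN materials.

```python
def quicksin_snr_loss(keywords_correct: int) -> float:
    """dB SNR loss for one QuickSIN list: 25.5 minus the total number
    of keywords repeated correctly ([Killion et al, 2004])."""
    return 25.5 - keywords_correct

# Average of two lists, as in the present protocol (hypothetical counts).
lists = [24, 25]
avg_loss = sum(quicksin_snr_loss(k) for k in lists) / len(lists)
print(f"Average SNR loss: {avg_loss:.2f} dB")  # 1.00 dB, within normal limits (<=3.0 dB)
```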

Six participants were excluded from this study. Three OAs were excluded because they exceeded the normative hearing loss thresholds, and one YA was excluded due to conductive hearing loss. Two OAs were excluded because of abnormal Montreal Cognitive Assessment scores.



Experimental Word Pairs

Semantic Characteristics

The semantic judgment-in-noise task utilized 208 monosyllabic nouns, which were previously used in other studies ([Martin et al, 2007]; [Davis et al, 2012]; [2013]; [2015]; [Davis and Jerger, 2014]). The stimuli were digitally recorded in a sound-treated booth by an adult male, monolingual English speaker and subsequently sampled at a rate of 22050 Hz with 16-bit amplitude resolution (Cool Edit Pro™ 2.1 software; [Syntrillium Software Corporation, 2003]). Word stimuli were selected using linguistic ratings from the MRC Psycholinguistic Database (http://www.psych.rl.ac.uk/) for concreteness (M = 549.27, SD = 29.53), familiarity (M = 562.51, SD = 40.66), and imagery (M = 592.11, SD = 29.93); ratings were unavailable for 17 words ([Coltheart, 1981]; [Wilson, 1988]).



Acoustic Characteristics

Duration, fundamental frequency, and adjusted average root mean square (rms) amplitude were measured. Mean word duration was 550 msec (SD = 13.67 msec; Min = 479 msec; Max = 600 msec), and mean fundamental frequency was 121.7 Hz (SD = 10.9 Hz; Min = 96.68 Hz; Max = 160 Hz). Word intensities were adjusted using Adobe Audition 1.5 software to ensure that all rms amplitudes were ∼−23 dB FS. Measurements in dB FS are referenced to a 0 dB FS point that indicates the maximum digital level of the recording ([Adobe Systems Inc., 2004]). The mean rms amplitude was −22.99 dB FS (SD = 0.54 dB; Min = −26.92 dB FS; Max = −20.28 dB FS), which ensured similar amplitude across stimulus items.
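
The rms equalization step can be sketched as follows; this is our illustration of the underlying computation (the study itself used Adobe Audition), assuming samples normalized to ±1.0 full scale.

```python
import numpy as np

def rms_dbfs(x: np.ndarray) -> float:
    """rms level in dB FS, where 0 dB FS is digital full scale (|x| = 1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def normalize_rms(x: np.ndarray, target_dbfs: float = -23.0) -> np.ndarray:
    """Scale a recording so its rms amplitude sits at the target dB FS."""
    gain = 10 ** ((target_dbfs - rms_dbfs(x)) / 20)
    return x * gain

# Hypothetical 550-msec word sampled at 22050 Hz, as in the stimulus set.
fs = 22050
word = 0.1 * np.random.randn(int(0.550 * fs))
word = normalize_rms(word)
print(f"{rms_dbfs(word):.2f} dB FS")  # ~ -23.00
```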



Word Pair Generation

A complete list of word pairs used in this study is provided in the Appendix. Word pairs were based on category prototypes suggested by [Van Overschelde et al (2004)], which included, but were not limited to, animals, body parts, clothing, furniture, food, and transportation. The 250 word pairs were divided into 10 blocks of 25 pairs each. Although previous studies ([Praamstra and Stegeman, 1993]; [Perrin and García-Larrea, 2003]) indicated that phonological priming effects are diminished in the presence of semantic priming, the word pairs were inspected to ensure that phonological priming did not occur within a word pair or between sequential word pairs. Semantic priming effects between word pairs were likewise avoided. Finally, word pairs were inspected to ensure that pairs would not be misconstrued as a disyllabic word (e.g., sun/dress).



Experimental Speech Competition

Two-talker speech competition was created from two prerecorded materials: the Arizona Travelogue (Cosmos, Inc., Kelowna, BC) and The Wizard of Oz ([Baum et al, 2000]). The Wizard of Oz recordings have previously been used in other research studies ([Davis et al, 2013]; [Davis and Jerger, 2014]). Using Adobe Audition 1.5, the Arizona Travelogue was mixed with select portions of The Wizard of Oz. The Arizona Travelogue had a fundamental frequency of 140.87 Hz and an rms amplitude of −23.14 dB FS; The Wizard of Oz recording had a fundamental frequency of 118.94 Hz and an rms amplitude of −21.24 dB FS. Fundamental frequencies and rms amplitudes were thus similar between the word pairs and the speech competition material. Pauses and irregularities in the reading of The Wizard of Oz were digitally removed. Six recordings of two-talker competition were created; each was 10 min long, which provided sufficient competition for the word-pair blocks.
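
The digital mixing operation itself is conceptually simple; a minimal sketch (our illustration, not the Adobe Audition workflow actually used) for two rms-matched recordings is:

```python
import numpy as np

def mix_two_talkers(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Sum two rms-matched recordings, truncating to the shorter one and
    rescaling only if the sum would clip (exceed digital full scale)."""
    n = min(len(a), len(b))
    mix = a[:n] + b[:n]
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix
```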



Procedures

Testing was completed in ∼2 h and included case history and hand dominance questionnaires, cognitive screening, an audiological evaluation, the QuickSIN, median plane localization, and the experimental semantic judgment-in-noise task. The audiological evaluation, QuickSIN, median plane localization task, and semantic judgment-in-noise task were all completed in a sound-treated booth. Participants were seated in a salon chair that allowed their height to be adjusted relative to the loudspeakers.

Median Plane Localization Task

The median plane localization task was used to ensure a balanced simulated midline perception between the right (90° azimuth) and left (270° azimuth) loudspeakers used in the semantic judgment-in-noise task. Participants were presented the two-talker speech competition from both loudspeakers in 3-sec intervals. The right and left loudspeaker presentation levels were manipulated through the GSI-61 audiometer channels. The right loudspeaker was fixed at a presentation level of 65 dBA, while the intensity level of the left loudspeaker was varied over a range of −8 to +10 dB in 2-dB steps based on a quasi-random sequence. Participants indicated the perceived location of the competition at the different interaural intensity levels using an 11-point scale (+5 = extreme right, 0 = midline, −5 = extreme left). This protocol was based on procedures described by [Jerger, Moncrieff, et al (2000)]. Presentation levels for the semantic judgment-in-noise task were determined based on the interaural intensity levels that represented midline perception of the speech competition.
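
The paper does not state how the midline-balancing interaural level was derived from the ratings; one plausible reconstruction, assuming a simple linear fit of rating against interaural level difference, is sketched below (the ratings shown are hypothetical).

```python
import numpy as np

# Interaural level differences tested (left level re: the fixed 65 dBA right level).
offsets_db = np.arange(-8, 12, 2)  # -8 to +10 dB in 2-dB steps

# Hypothetical ratings on the 11-point scale
# (+5 = extreme right, 0 = midline, -5 = extreme left).
ratings = np.array([4, 3, 3, 2, 1, 0, -1, -2, -3, -4])

# Solve the linear fit for the offset at which the predicted rating is 0,
# i.e., the left-loudspeaker level yielding a midline percept.
slope, intercept = np.polyfit(offsets_db, ratings, 1)
midline_offset = -intercept / slope
print(f"Midline percept at left level = 65 dBA {midline_offset:+.1f} dB")
# e.g. +1.7 dB with these hypothetical ratings
```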



Experimental Task

[Figure 2] illustrates one trial of the semantic judgment-in-noise task. Participants were presented two words (reference and probe) from a 0° azimuth (front) loudspeaker. Participants were asked to determine whether the probe word was in the same semantic category as the reference word (e.g., chin/nose). Responses were recorded via a response pad with “yes” and “no” buttons, and each response triggered the next word pair. Participants were asked to respond as quickly and accurately as possible and were familiarized with the task through a practice session. Short breaks were provided between block presentations.

Figure 2 Schematic representation of an individual trial, from initial alert tone to participant response. Interstimulus and intertrial interval latencies are noted. FM = frequency modulation.

During the semantic judgment-in-noise task, word pairs and speech competition were presented from ear-level loudspeakers at a height of 1.25 m. As illustrated in [Figure 3], word pairs were presented from a front loudspeaker located at 0° azimuth, while speech competition was presented simultaneously from two loudspeakers located at 90° and 270° azimuth to create a simulated midline perception. The right, left, and front loudspeakers were each 1.1 m from the participant’s head.

Figure 3 Schematic representation of the loudspeaker orientation in relation to the participant.

A computer monitor, placed directly below the front loudspeaker, displayed task instructions and signaled the beginning and end of word-pair blocks. Word pairs were presented with the Neuroscan Stim2 presentation software ([Compumedics Neuroscan 2003]). Speech competition was routed from a Sony (Tokyo, Japan) CDP-CE345 compact disc player through a Grason-Stadler (Eden Prairie, MN) GSI-61 audiometer to the left and right loudspeakers.

Various SNRs (+3, 0, −3, and −6 dB) were created by adjusting the presentation levels of the speech competition with a fixed 65 dBA presentation level for the word pairs. Presentation levels for the speech competition were verified through sound field measures as 62, 65, 68, and 71 dBA. Speech competition was presented throughout the duration of each block of trials, with a fixed SNR in each block.
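
Because the word-pair level was fixed, each nominal SNR maps directly onto a competition level; the arithmetic is shown below as a check against the verified sound field measures.

```python
signal_dba = 65  # fixed word-pair presentation level

# SNR = signal level - competition level, so competition = signal - SNR.
for snr in (+3, 0, -3, -6):
    print(f"{snr:+d} dB SNR -> competition at {signal_dba - snr} dBA")
# +3 dB SNR -> competition at 62 dBA
# +0 dB SNR -> competition at 65 dBA
# -3 dB SNR -> competition at 68 dBA
# -6 dB SNR -> competition at 71 dBA
```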

A total of 250 word pairs were created for the semantic judgment-in-noise task, with 125 semantic Match and 125 semantic No Match pairs. Match and No Match word pairs were quasi-randomly mixed within each block of 25 word pairs (10 blocks total). Speech competition was presented during the first eight blocks at a randomized SNR, with two blocks per SNR (i.e., 50 word pairs each at +3, 0, −3, and −6 dB SNR). The final two blocks, presented in quiet, consisted of previously presented word pairs with the order of the words in each pair reversed. Each block took ∼4 min to complete. To control for task effects, each participant was tested using one of four lists that differed only in word order and SNR presentation order, as sketched below.
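
A minimal sketch of the block-to-SNR assignment described above follows; the randomization shown is our simplification of the four counterbalanced lists actually used.

```python
import random

random.seed(1)  # fixed seed so this illustration is reproducible

snrs = [+3, 0, -3, -6]
noise_blocks = snrs * 2        # two 25-pair blocks per SNR (eight blocks)
random.shuffle(noise_blocks)   # randomized SNR order across blocks
block_order = noise_blocks + ["quiet", "quiet"]  # final two quiet blocks
print(block_order)
```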



Statistical Analyses

Accuracy and Reaction Time

Accuracy and reaction times were recorded for each participant’s response on the semantic judgment-in-noise task. Analyses were conducted using two separate mixed analyses of variance for accuracy and reaction times. Age (YA and OA) was the between-subject factor, with semantic judgment condition (Match and No Match) and SNR level (quiet, +3, 0, −3, and −6 dB SNR) serving as within-subject factors. Both analyses violated the assumption of sphericity as indicated by Mauchly’s Test of Sphericity (p < 0.001); therefore, the Greenhouse–Geisser correction was used for all further statistical analyses. Pairwise comparisons were used to interpret all significant interactions. The statistical significance level was set at alpha = 0.05.
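
A sketch of this kind of analysis in Python is shown below, assuming a recent version of the pingouin package; note that pingouin's mixed_anova handles a single within-subject factor, so the sketch includes only the SNR factor (the full model crossing semantic judgment with SNR would require other software, e.g., R). The file name and column names are hypothetical.

```python
import pandas as pd
import pingouin as pg  # assumed available: pip install pingouin

# Long-format data, one accuracy value per participant x SNR cell.
# Expected columns: 'participant', 'age_group' (YA/OA), 'snr', 'accuracy'.
df = pd.read_csv("semantic_judgment_accuracy.csv")  # hypothetical file

# Mixed ANOVA; correction=True reports Greenhouse-Geisser corrected
# p-values for the within-subject (repeated) factor.
aov = pg.mixed_anova(data=df, dv="accuracy", within="snr",
                     subject="participant", between="age_group",
                     correction=True)
print(aov.round(3))
```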



False-Alarm Rates

False-alarm rates for Match and No Match word pairs were also calculated following signal detection theory, which has been used to evaluate response discrimination ([Stanislaw and Todorov, 1999]). The major components of signal detection theory are hits and false alarms. In the semantic judgment task, hits occurred when an individual correctly identified a Match or No Match word pair; false alarms occurred when an individual incorrectly identified a Match word pair as a No Match pair, and vice versa. Match false-alarm rates were calculated as 1 minus the accuracy for No Match word pairs; likewise, No Match false-alarm rates were calculated as 1 minus the accuracy for Match word pairs. Consistent with the accuracy and reaction time measures, a mixed design analysis of variance with Greenhouse–Geisser correction was used for the statistical analysis of false-alarm rates. Within-subject factors were semantic judgment (Match and No Match) and SNR level (quiet, +3, 0, −3, and −6 dB SNR); the between-subject factor was age (YA and OA).
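
The false-alarm computation reduces to simple complements of the accuracy scores; a minimal sketch (function name ours) is:

```python
def false_alarm_rates(acc_match: float, acc_no_match: float) -> dict:
    """False-alarm rates as defined above: a Match false alarm is an
    incorrect 'Match' response to a No Match pair, and vice versa.
    Values are rounded to suppress floating-point noise in the display."""
    return {
        "match_fa": round(1.0 - acc_no_match, 3),  # No Match pair judged a Match
        "no_match_fa": round(1.0 - acc_match, 3),  # Match pair judged a No Match
    }

# Check against the YA group means at -6 dB SNR (Table 2).
print(false_alarm_rates(acc_match=0.72, acc_no_match=0.92))
# {'match_fa': 0.08, 'no_match_fa': 0.28}
# cf. Table 4 (YA at -6 dB SNR): 8.0% and 27.5%, differing only by rounding
```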



RESULTS

Accuracy

Accuracy on the semantic judgment-in-noise task is shown in [Table 2]. A significant main effect was observed for semantic judgment [F (1,32) = 57.51, p < 0.001, η2 = 0.642], indicating that accuracy was higher for No Match word pairs (M = 95.3%, standard error [SE] = 0.006) than for Match word pairs (M = 86.3%, SE = 0.012). A significant main effect was also observed for SNR [F (2.53,81.03) = 55.98, p < 0.001, η2 = 0.636]: accuracy was highest in the quiet condition (M = 98.9%, SE = 0.003) and lowest in the −6 dB SNR condition (M = 79.6%, SE = 0.0018). There was no significant effect of age on accuracy [F (1,32) = 1.878, p = 0.18, η2 = 0.055], indicating equivalent performance between groups: YAs (M = 91.8%, SE = 0.01) and OAs (M = 89.8%, SE = 0.01).

Table 2

Mean Accuracy for YAs and OAs for Match and No Match Word Pairs in All SNR Conditions

             Quiet        +3 dB SNR   0 dB SNR    −3 dB SNR   −6 dB SNR
YA Match     97% (0.06)   92% (0.10)  92% (0.07)  84% (0.08)  72% (0.15)
YA No Match  100% (0.01)  97% (0.06)  97% (0.04)  95% (0.04)  92% (0.06)
OA Match     99% (0.02)   89% (0.11)  88% (0.12)  81% (0.09)  68% (0.19)
OA No Match  100% (0.00)  97% (0.05)  97% (0.03)  93% (0.07)  86% (0.12)

Note: SDs are shown in parentheses.


There were no significant interactions between semantic judgment, SNR level, and age [F (2.322,74.298) = 0.722, p = 0.509, η2 = 0.022]; semantic judgment and age [F (1,32) = 0.060, p = 0.808, η2 = 0.044]; or SNR level and age [F (2.532,81.029) = 1.457, p = 0.236, η2 = 0.003]. However, a significant interaction [F (2.53,81.03) = 13.073, p < 0.001, η2 = 0.290] of semantic judgment and SNR was observed. Pairwise comparisons (see [Figure 4]) revealed significantly different accuracy scores between Match and No Match conditions, with poorer Match accuracy at all SNRs (p = 0.01 in quiet, p = 0.001 at +3 dB, p < 0.001 at 0 dB, p < 0.001 at −3 dB, and p < 0.001 at −6 dB SNR). However, the difference in accuracy scores between semantic conditions increased as SNR decreased (2% difference in quiet, 6.3% at +3 dB, 7.8% at 0 dB, 10.9% at −3 dB, and 18.3% at −6 dB SNR). In other words, both groups demonstrated significantly poorer accuracy for Match than No Match conditions across the SNRs, but Match accuracy decreased at a greater rate in more difficult SNRs than No Match accuracy.

Figure 4 Mean accuracies for Match and No Match word pairs across SNR levels. All data are shown as % correct.

Accuracy was significantly greater in the quiet condition than the +3 dB SNR condition for both Match (p = 0.001) and No Match (p = 0.004) word pairs. No significant difference was observed between +3 and 0 dB SNR for Match (p = 0.611) or No Match (p = 0.635) word pairs. Match and No Match accuracy significantly decreased as SNR decreased from 0 to −3 dB SNR and −3 to −6 dB SNR with all p values ≤0.001.



Reaction Time

Mean reaction time is presented in [Table 3]. A significant main effect was observed for semantic judgment [F (1,32) = 41.29, p < 0.001, η2 = 0.563], indicating that reaction time was shorter for Match word pairs than for No Match word pairs. A significant main effect was also observed for SNR [F (2.03,65.02) = 38.34, p < 0.001, η2 = 0.545], showing that as SNR decreased, reaction time increased. The shortest reaction times were observed in the quiet condition, and the longest at −6 dB SNR. There was no significant age-related difference in reaction time [F (1,32) = 0.617, p = 0.438, η2 = 0.019].

Table 3

Mean Reaction Time (msec) for YAs and OAs for Match and No Match Word Pairs in All SNR Conditions

             Quiet              +3 dB SNR          0 dB SNR           −3 dB SNR          −6 dB SNR
YA Match     1,261.92 (304.59)  1,344.23 (249.32)  1,453.80 (300.99)  1,565.10 (351.61)  1,691.79 (442.39)
YA No Match  1,496.14 (567.80)  1,629.90 (484.96)  1,662.12 (380.99)  1,912.33 (500.56)  2,038.57 (571.29)
OA Match     1,164.98 (281.84)  1,319.32 (286.29)  1,425.02 (347.21)  1,528.79 (373.38)  1,593.68 (408.10)
OA No Match  1,149.80 (357.31)  1,395.34 (454.40)  1,448.33 (405.84)  1,705.58 (499.05)  1,792.03 (574.76)

Note: SDs are shown in parentheses.


There were no significant interactions for reaction time between semantic judgment, SNR level, and age [F (2.639,84.441) = 0.506, p = 0.656, η2 = 0.016]; semantic judgment and age [F (1,32) = 1.190, p = 0.283, η2 = 0.036]; or SNR level and age [F (2.032,65.016) = 0.786, p = 0.462, η2 = 0.024]. A significant interaction was found between semantic judgment and SNR [F (2.64,84.44) = 4.82, p = 0.005, η2 = 0.131]. Pairwise comparisons (see [Figure 5]) revealed significantly different reaction times between Match and No Match conditions, with faster reaction times in the Match condition at all SNRs (p = 0.022 in quiet, p < 0.001 at +3 dB, p < 0.001 at 0 dB, p < 0.001 at −3 dB, and p < 0.001 at −6 dB SNR). Differences in reaction time between semantic judgment conditions increased as SNR decreased (148.42 msec difference in quiet, 240.77 msec at +3 dB, 172.06 msec at 0 dB, 330.00 msec at −3 dB, and 324.55 msec at −6 dB SNR). In other words, reaction time increased at a greater rate in more difficult SNRs for No Match pairs than for Match pairs (in both age groups).

Figure 5 Mean reaction times for Match and No Match word pairs across SNR levels. All data are reported in milliseconds.

Match reaction times significantly increased in a linear manner as the SNR level decreased; however, No Match reaction times were only significantly different at select SNR levels. No Match reaction time significantly increased (p < 0.001) from the quiet to +3 dB SNR condition; however, no significant difference was noted for No Match reaction time between +3 and 0 dB SNR. No Match reaction times were significantly faster (p < 0.001) in the 0 dB SNR than the −3 dB SNR condition. Reaction times were not significantly different (p = 0.082) for No Match word pairs between −3 and −6 dB SNR conditions.



False-Alarm Rates

Mean false-alarm rates are presented in [Table 4]. A significant main effect was seen for semantic judgment [F (1,32) = 55.975, p < 0.001, η2 = 0.636], with Match false alarms occurring less often than No Match false alarms. A significant main effect for SNR [F (2.507,80.233) = 56.915, p < 0.001, η2 = 0.640] was also seen, indicating that false alarms increased as the SNR decreased. No age-related differences were observed [F (1,32) = 2.05, p = 0.162, η2 = 0.062].

Table 4

Mean False-Alarm Rates for YAs and OAs for Match and No Match Word Pairs across SNR Levels

             Quiet        +3 dB SNR     0 dB SNR      −3 dB SNR     −6 dB SNR
YA Match     0.2% (0.01)  2.8% (0.06)   2.6% (0.04)   5.4% (0.04)   8.0% (0.06)
OA Match     0.0% (0.00)  3.3% (0.05)   2.6% (0.03)   7.1% (0.07)   14.5% (0.12)
YA No Match  3.3% (0.06)  8.2% (0.10)   7.4% (0.07)   15.4% (0.08)  27.5% (0.15)
OA No Match  0.9% (0.00)  10.6% (0.11)  12.5% (0.12)  18.6% (0.09)  31.6% (0.19)

Note: SDs are shown in parentheses.


Similar to the accuracy and reaction time results, no significant interactions for false-alarm rates were observed between semantic judgment, SNR level, and age [F (2.294,73.394) = 0.882, p = 0.431, η2 = 0.027]; semantic judgment and age [F (1,32) = 0.105, p = 0.748, η2 = 0.003]; or SNR level and age [F (2.507,80.233) = 1.494, p = 0.227, η2 = 0.045]. However, a significant interaction was found for semantic judgment and SNR [F (2.294,73.394) = 13.25, p < 0.001, η2 = 0.293], indicating that as SNR decreased, false alarms increased to a greater degree in the No Match condition than in the Match condition (see [Figure 6]). Pairwise comparisons revealed significantly more No Match false alarms than Match false alarms at all SNRs (p = 0.01 in quiet, p = 0.001 at +3 dB, p < 0.001 at 0 dB, p = 0.001 at −3 dB, and p = 0.001 at −6 dB SNR).

Figure 6 Mean false-alarm rates for Match and No Match word pairs across SNR levels. All data are shown as percentages.


DISCUSSION

As predicted, the results of this study revealed an effect of SNR on accuracy and reaction time measures: accuracy decreased and reaction time increased as the SNR decreased. Match word pair accuracy was consistently poorer than No Match accuracy in two-talker competition at all SNR levels; moreover, Match accuracy declined to a greater extent than No Match accuracy at more difficult SNRs. Match and No Match accuracy decreased significantly at each successive SNR level, with the exception of +3 to 0 dB SNR. Also, reaction times were faster for Match word pairs than No Match word pairs across all SNR levels. Match word pair reaction times significantly increased at each successive SNR level, whereas No Match word pair reaction times differed only from quiet to +3 dB SNR and from 0 to −3 dB SNR.

Unexpectedly, no significant age effect was found; YAs and OAs exhibited comparable performance at all SNRs, although a greater divergence in accuracy scores between the groups was apparent at the poorest SNR (−6 dB) than at the other SNRs (see [Table 2]). Both the [Davis et al (2013)] study and the current study suggest that middle-aged adults and OAs perform similarly to YAs on semantic judgment tasks in competing speech. In contrast, suprathreshold recognition-in-noise studies have shown age-related differences at ∼0 to −8 dB SNR ([Tun and Wingfield, 1999]; [Helfer and Freyman, 2008]). The discrepancy between previous studies that report age-related differences in suprathreshold recognition tasks and our findings may relate to task differences. Suprathreshold word recognition tasks are open set and therefore independent of context. The semantic judgment task used in this study, on the other hand, required the participant to make a comparison between words, which allowed context and linguistic experience to influence results. Since context directly influences semantic processing ([Moll et al, 2001]; [Aydelott et al, 2006]), and OAs have greater linguistic experience than YAs ([Wingfield and Tun, 2001]; [Schneider et al, 2002]), an age-related disadvantage due to SNR may have been counterbalanced by an age-related advantage in context-dependent comprehension. This interpretation is supported by a line-item analysis of the results, which indicated that the OAs with presbycusis (N = 8) were as accurate as the OAs with normal hearing sensitivity (N = 9) on the semantic judgment task at all SNRs. For example, at −6 dB SNR, accuracy was 70% (SD = 0.21) in the OA subgroup with presbycusis and 66.7% (SD = 0.18) in the OA subgroup without hearing loss. Equivalent performance therefore appears to have been achieved through reliance on top-down cognitive processes, despite the reduced perceptual salience of the speech stimuli caused by peripheral hearing loss. This interpretation is further supported by the reaction time results, which showed that the OA group exhibited numerically (though not significantly) faster response times than YAs at all SNRs, including quiet. It should be noted that the OA group had a higher overall level of education (M = 18.7 yr) than the YA group (M = 15.9 yr), which may have contributed to the faster response times; however, both groups reported high levels of education, and the majority of YAs had obtained the highest level of education possible for their age.

The most illuminating effect of SNR on linguistic processing in this study was that accuracy in the Match condition was more affected by an impoverished signal (i.e., a negative SNR) than accuracy in the No Match condition. A similar effect of noise or competing speech on semantic processing has been observed in other studies, although those studies did not systematically examine SNR ([Romei et al, 2011]; [Davis et al, 2013]). Previous linguistic processing studies that incorporated a masker or filter to degrade the speech signal have used a semantic priming paradigm ([Brown and Hagoort, 1993]; [Moll et al, 2001]; [Aydelott et al, 2006]). These priming studies primarily investigated reaction times, which represent the duration needed to access a word’s meaning. Previous masked priming studies have shown shorter reaction times for semantically related words or congruent sentences, as compared to their unrelated or incongruent counterparts ([Brown and Hagoort, 1993]; [Moll et al, 2001]). Consistent with those studies, the current study’s reaction time data indicated that both younger and older listeners were faster at identifying semantically related word meanings. This was likely due to facilitation effects for semantically matched pairs, as compared to inhibitory effects for No Match pairs. Words that do not match the expected category require more time and effort to process, because listeners must suppress “incompatible word candidates” ([Aydelott et al, 2006]).

Our finding that Match word pair accuracy was more disrupted by a poor SNR than No Match accuracy highlights the differential effects of speech competition on semantic analysis. [Moll et al (2001)] found that listeners’ ability to use context to facilitate the identification of congruent sentences was reduced when one-talker competition was presented to the same ear as the target sentences; the facilitating effects of context were unaffected by speech competition presented to the opposite ear. This result was interpreted as a masking-induced decrease in the perceptibility of the target sentences, which reduced the facilitation of congruent targets. [Aydelott et al (2006)] collected AERP N400 data in a similar study of degraded sentences. Their results indicated that, when the sentences were low-pass filtered at 1000 Hz, individuals identified congruent sentences less accurately than incongruent sentences. N400 amplitude differences further indicated that, when sentences were filtered, greater semantic analysis occurred for congruent targets and less processing occurred for incongruent targets. These authors concluded that degradation of the sentences reduced the amount of semantic information available for semantic facilitation of related targets. Results of the current study are in concert with these previous studies. We propose that, as SNR decreased, listeners received less semantic information, which decreased facilitation of Match word pairs. They were therefore less able to activate the semantic features necessary to confirm the semantic relationship between word pairs, which resulted in decreased Match word pair accuracy. As described by [Brown and Hagoort (1993)], when a target word is identified as a semantic Match to a prime word, listeners are biased to respond “yes”; when semantic matching fails, they are biased to respond “no.” This could explain why our results showed greater No Match word pair accuracy: if a decreased SNR impairs the facilitation of related targets, participants will be biased toward a “no” response, because semantic matching is less likely to occur. Our false-alarm rates support this semantic matching mechanism of semantic priming proposed by [Neely and Keefe (1989)].

In our study, false-alarm rates were calculated to evaluate whether the high No Match accuracies, particularly at decreased SNRs, were the result of a response bias toward No Match word pairs. Participants could simply have been more accurate for No Match word pairs; however, 82% of participants (YA: 14/17; OA: 14/17) indicated in a postexperimental task interview that they were more likely to press “no” (No Match) when unsure. A response bias toward “no” during uncertainty could produce the greater decrease in accuracy for Match than for No Match word pairs: if a participant is more likely to press “no,” they are more likely to have a higher accuracy for No Match word pairs. False-alarm rates from this study objectively indicated that participants were more likely to identify heard words as a No Match word pair, particularly in the more difficult SNR conditions. This corresponded to a false-alarm rate for No Match word pairs that was significantly higher than the false-alarm rate for Match word pairs. Overall, results from this study revealed that a degraded linguistic signal reduced the facilitation of semantic matches, which in turn produced a response bias that manifested as higher accuracy for No Match word pairs.

Considering the findings from this study, there is significant opportunity to investigate the use of semantic judgment in noise to assess difficulty understanding speech in YAs and OAs. Future studies could incorporate AERPs into the semantic judgment-in-noise task to investigate age-related neural differences that are not evident in behavioral results ([Davis et al, 2013]; [Davis and Jerger, 2014]). As mentioned earlier, both [Romei et al (2011)] and [Davis et al (2013)] used a word-recognition-in-noise task to determine the presentation levels of the competing speech in their AERP semantic judgment tasks. The current study extends these two previous studies by thoroughly assessing the effect of SNR on semantic judgment. By systematically decreasing the SNR for the semantic judgment task, we were able to identify differential SNR effects on Match and No Match word pair accuracies, as well as effects on reaction times. This information provides a better understanding of how SNR influences semantic judgment in both YAs and OAs, which is particularly important for future AERP studies that require high accuracy on semantic processing-in-noise tasks.

In future studies, the participation of OAs with hearing loss, difficulty hearing in noise, or cognitive decline (e.g., dementia) might be used to evaluate whether some OAs are better able to use contextual information in noisy environments than others. For instance, does a reduced SNR disrupt the facilitation of semantically related words, or the inhibition of semantically unrelated words, more profoundly in OAs with hearing difficulties? Because speech recognition studies have already highlighted talker effects, a future study could also determine whether accuracy and/or reaction times on semantic judgment-in-noise tasks are affected by the number of talkers or the type of competing noise. For example, would a nonlinguistic masker produce the same masking effects on the facilitation of semantic matches as two-talker competition?



CONCLUSION

By using the semantic judgment-in-noise task, we were able to investigate a more ecologically valid measure of speech understanding that successfully incorporated linguistic and cognitive factors. In our study, YA and OA performance was similar, with decreased accuracy and increased reaction time as SNR decreased. Our results showed that greater degradation of the acoustic information caused the accuracy for semantically matched word pairs to decrease at a greater rate than for semantically unrelated word pairs, which suggests that semantic facilitation and inhibition were differentially affected. This finding highlights the role of linguistic and cognitive processes related to semantic analysis, which are implicated in speech understanding in noise.



APPENDIX

Match and No Match word pairs, alphabetized by the first word of each pair (two pairs per row).

Match Word Pairs

arm/hand      king/queen
bag/box       lake/pond
bag/net       lamp/rug
bed/crib      leg/foot
boat/ship     leg/arm
bone/skin     light/sun
boot/glove    lock/chain
boot/shoe     milk/juice
boy/man       mop/broom
bread/fruit   mouth/lips
brick/stone   mud/dirt
bridge/road   pants/shorts
brush/comb    park/beach
bush/plant    pen/chalk
cake/pie      pig/goat
car/truck     plate/bowl
chain/rope    plum/peach
chair/seat    pot/cup
cheek/face    ranch/farm
chin/nose     rat/bear
cloud/rain    rat/snake
coat/dress    roof/floor
corn/bean     rope/string
couch/bed     rug/mat
cow/horse     sand/rock
deer/moose    sand/dirt
desk/shelf    scarf/glove
door/hall     school/church
drum/flute    sheep/wolf
duck/fox      sheep/pig
duck/moose    ship/train
eyes/ears     shirt/sock
fire/smoke    skin/hair
fish/dog      snake/toad
foot/thumb    snow/ice
fork/knife    snow/rain
frog/toad     spoon/bowl
fruit/bean    stars/moon
gate/fence    store/bank
glove/sock    stove/sink
glue/paste    sun/moon
goose/bird    tape/glue
grape/peach   teeth/tongue
grass/lawn    train/truck
hat/cap       tree/leaf
head/neck     truck/plane
head/face     wall/door
horn/bell     wolf/skunk
house/tent    wood/tree
juice/drink   yarn/thread

No Match Word Pairs

arm/fork      key/plum
ball/food     key/shirt
barn/note     knife/snow
bear/cloud    lamp/mask
bell/salt     lawn/teeth
book/nurse    lips/rat
boot/door     lock/horse
bread/clown   lunch/cave
bridge/plant  moon/bat
bush/ship     mouse/drum
cage/stone    nail/couch
cake/jet      nut/yarn
cap/leaf      paste/deer
chain/bear    peach/mop
chalk/road    pie/tape
chin/bag      pipe/phone
church/fox    queen/corn
clock/dad     rain/cheek
cloud/stove   rain/dog
cloud/scream  ranch/grape
crib/goose    ring/cat
cup/shirt     rock/hat
desk/bone     roof/drink
dirt/milk     rope/sock
dish/pole     rug/gate
dog/hand      school/thumb
dog/seed      seat/nose
doll/cork     shorts/mouth
doll/mud      sign/pond
ear/flag      sink/man
eyes/bed      skunk/cheese
face/pot      smoke/plum
farm/leg      snake/fruit
fish/kite     spoon/coat
fish/seed     store/bat
friend/nest   store/chair
frog/neck     string/cow
girl/plane    sun/seed
glass/sleeve  thread/gum
glove/chain   toad/car
glove/boat    toast/wall
glue/bank     tongue/scarf
goat/shoe     train/boy
grass/shelf   trash/soap
hair/lunch    tree/sink
hair/map      wolf/pants
head/star     wood/king
home/rice     word/crown
horn/fire     yard/toast
house/voice   yard/shelf



Abbreviations

AERP: auditory event-related potential
OA: older adult
PTA: pure-tone average
QuickSIN: Quick Speech-in-Noise Test
rms: root mean square
SD: standard deviation
SE: standard error
SNR: signal-to-noise ratio
WRS: word recognition scores
YA: younger adult



No conflict of interest has been declared by the author(s).

  • REFERENCES

  • Adobe Systems Inc 2004. Adobe Audition. Version 1.5. San Jose, CA: Adobe Systems Inc; [Computer software]
  • Akeroyd MA. 2008; Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. Int J Audiol 47 (02) (Suppl) S53-S71
  • Annett M. 1970; A classification of hand preference by association analysis. Br J Psychol 61 (03) 303-321
  • Aydelott J, Dick F, Mills DL. 2006; Effects of acoustic distortion and semantic context on event-related potentials to spoken words. Psychophysiology 43 (05) 454-464
  • Baum LF, Hearn MP, Denslow WW. 2000. The Annotated Wizard of Oz. New York, NY: Norton;
  • Brown C, Hagoort P. 1993; The processing nature of the n400: evidence from masked priming. J Cogn Neurosci 5 (01) 34-44
  • Brungart DS, Simpson BD, Ericson MA, Scott KR. 2001; Informational and energetic masking effects in the perception of multiple simultaneous talkers. J Acoust Soc Am 110 (5 Pt 1) 2527-2538
  • Buckner RL. 2004; Memory and executive function in aging and AD: multiple factors that cause decline and reserve factors that compensate. Neuron 44 (01) 195-208
  • Coltheart M. 1981; The MRC psycholinguistic database. Q J Exp Psychol 33: 497-505
  • Committee on Hearing, Bioacoustics and Biomechanics (CHABA) 1988; Speech understanding and aging. J Acoust Soc Am 83 (03) 859-895
  • Compumedics Neuroscan 2003. Stim2 . Charlotte, NC: Compumedics Neuroscan; [Computer software]
  • Cruickshanks KJ, Wiley TL, Tweed TS, Klein BE, Klein R, Mares-Perlman JA, Nondahl DM. 1998; Prevalence of hearing loss in older adults in Beaver Dam, Wisconsin. The Epidemiology of Hearing Loss Study. Am J Epidemiol 148 (09) 879-886
  • Cullington HE, Zeng FG. 2008; Speech recognition with varying numbers and types of competing talkers by normal-hearing, cochlear-implant, and implant simulation subjects. J Acoust Soc Am 123 (01) 450-461
  • Davis T. 2009. The effect of middle age on interaural asymmetry in a competing speech task. PhD dissertation, University of Texas at Dallas, Richardson, TX. Dissertation Abstracts International DAI-B 71-01
  • Davis TM, Jerger J. 2014; The effect of middle age on the late positive component of the auditory event-related potential. J Am Acad Audiol 25 (02) 199-209
  • Davis TM, Jerger J, Martin J. 2013; Electrophysiological evidence of augmented interaural asymmetry in middle-aged listeners. J Am Acad Audiol 24 (03) 159-173
  • Davis T, Martin J, Jerger J, Greenwald R, Mehta J. 2012; Auditory-cognitive interactions underlying interaural asymmetry in an adult listener: a case study. Int J Audiol 51 (02) 124-134
  • Davis T, Stanley N, Foran L. 2015; Age-related effects of dichotic attentional mode on interaural asymmetry: an AERP study with independent component analysis. J Am Acad Audiol 26 (05) 461-477
  • Department of Veteran Affairs 1998. Tonal and Speech Materials for Auditory Perceptual Assessment (DISC 2.0) . Mountain Home, TN: James H. Quillen VA Medical Center;
  • Friederici AD. 2002; Towards a neural basis of auditory sentence processing. Trends Cogn Sci 6 (02) 78-84
  • Helfer KS, Freyman RL. 2008; Aging and speech-on-speech masking. Ear Hear 29 (01) 87-98
  • Helfer KS, Freyman RL. 2014; Stimulus and listener factors affecting age-related changes in competing speech perception. J Acoust Soc Am 136 (02) 748-759
  • Humes LE. 1996; Speech understanding in the elderly. J Am Acad Audiol 7 (03) 161-167
  • Jerger J, Greenwald R, Wambacq I, Seipel A, Moncrieff D. 2000; Toward a more ecologically valid measure of speech understanding in background noise. J Am Acad Audiol 11 (05) 273-282
  • Jerger J, Moncrieff D, Greenwald R, Wambacq I, Seipel A. 2000; Effect of age on interaural asymmetry of event-related potentials in a dichotic listening task. J Am Acad Audiol 11 (07) 383-389
  • Jerger J, Wilson R, Margolis R. 2014; Suggestion for terminological reform in speech audiometry. J Am Acad Audiol 25 (02) 229-230
  • Kalikow DN, Stevens KN, Elliott LL. 1977; Development of a test of speech intelligibility in noise using sentence materials with controlled word predictability. J Acoust Soc Am 61 (05) 1337-1351
  • Killion MC, Niquette PA, Gudmundsen GI, Revit LJ, Banerjee S. 2004; Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. J Acoust Soc Am 116 (4 Pt 1) 2395-2405
  • Martin J, Jerger J, Mehta J. 2007; Divided-attention and directed-attention listening modes in children with dichotic deficits: an event-related potential study. J Am Acad Audiol 18 (01) 34-53
  • McArdle R, Chisolm T. 2009. Speech audiometry. In: Katz J, Medwetsky L, Burkard R, Hood L. Handbook of Clinical Audiology. Baltimore, MD: Lippincott Williams & Wilkins; 64-79
  • Moll K, Cardillo E, Utman J. 2001. Effects of competing speech on sentence-word priming: semantic, perceptual, and attentional factors. In: Moore JD, Stenning K. Proceedings of the Twenty-Third Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.; 651-656
  • Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, Cummings JL, Chertkow H. 2005; The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc 53 (04) 695-699
  • Neely J, Keefe D. 1989. Semantic context effects on visual word processing: a hybrid prospective/retrospective processing theory. In: Bower GH. The Psychology of Learning and Motivation: Advances in Research and Theory. Vol. 24 New York, NY: Academic Press;
  • Nilsson M, Soli SD, Sullivan JA. 1994; Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am 95 (02) 1085-1099
  • Perrin F, García-Larrea L. 2003; Modulation of the N400 potential during auditory phonological/semantic interaction. Brain Res Cogn Brain Res 17 (01) 36-47
  • Praamstra P, Stegeman DF. 1993; Phonological effects on the auditory N400 event-related brain potential. Brain Res Cogn Brain Res 1 (02) 73-86
  • Romei L, Wambacq IJ, Besing J, Koehnke J, Jerger J. 2011; Neural indices of spoken word processing in background multi-talker babble. Int J Audiol 50 (05) 321-333
  • Rönnberg J, Lunner T, Zekveld A, Sörqvist P, Danielsson H, Lyxell B, Dahlström Ö, Signoret C, Stenfelt S, Pichora-Fuller MK, Rudner M. 2013; The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci 7: 31
  • Rönnberg J, Rudner M, Lunner T, Zekveld AA. 2010; When cognition kicks in: working memory and speech understanding in noise. Noise Health 12 (49) 263-269
  • Salthouse TA, Atkinson TM, Berish DE. 2003; Executive functioning as a potential mediator of age-related cognitive decline in normal adults. J Exp Psychol Gen 132 (04) 566-594
  • Schneider BA, Daneman M, Pichora-Fuller MK. 2002; Listening in aging adults: from discourse comprehension to psychoacoustics. Can J Exp Psychol 56 (03) 139-152
  • Schneider B, Pichora-Fuller K. 2000. Implications of perceptual deterioration for cognitive aging research. In: Craik FIM, Salthouse TA. The Handbook of Aging and Cognition. Mahwah, NJ: Lawrence Erlbaum Associates; 155-219
  • Spyridakou C, Bamiou D. 2015; Need of speech-in-noise testing to assess listening difficulties in older adults. Audiol Med 13 (02) 65-76
  • Stanislaw H, Todorov N. 1999; Calculation of signal detection theory measures. Behav Res Methods Instrum Comput 31 (01) 137-149
  • Syntrillium Software Corporation 2003. Cool Edit Pro Version 2.1. Phoenix, AZ: Syntrillium Software Corporation; [Computer software]
  • Tun PA, Williams VA, Small BJ, Hafter ER. 2012; The effects of aging on auditory processing and cognition. Am J Audiol 21 (02) 344-350
  • Tun PA, Wingfield A. 1999; One voice too many: adult age differences in language processing with different types of distracting sounds. J Gerontol B Psychol Sci Soc Sci 54B (05) 317-327
  • Van Overschelde JP, Rawson KA, Dunlosky J. 2004; Category norms: an updated and expanded version of the norms. J Mem Lang 50 (03) 289-335
  • Wilson M. 1988; The MRC psycholinguistic database: machine readable dictionary, version 2. Behav Res Methods 20: 6-10
  • Wingfield A, Tun P. 2001; Spoken language comprehension in older adults: interactions between sensory and cognitive change in normal aging. Semin Hear 22: 287-302
  • Zekveld AA, Rudner M, Johnsrude IS, Rönnberg J. 2013; The effects of working memory capacity and semantic cues on the intelligibility of speech in noise. J Acoust Soc Am 134 (03) 2225-2234

Corresponding author

Nicholas S. Stanley
Department of Speech Pathology and Audiology, University of South Alabama
Mobile, AL 36688-0002

REFERENCES

  • Adobe Systems Inc 2004. Adobe Audition. Version 1.5. San Jose, CA: Adobe Systems Inc; [Computer software]
  • Akeroyd MA. 2008; Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. Int J Audiol 47 (02) (Suppl) S53-S71
  • Annett M. 1970; A classification of hand preference by association analysis. Br J Psychol 61 (03) 303-321
  • Aydelott J, Dick F, Mills DL. 2006; Effects of acoustic distortion and semantic context on event-related potentials to spoken words. Psychophysiology 43 (05) 454-464
  • Baum LF, Hearn MP, Denslow WW. 2000. The Annotated Wizard of Oz. New York, NY: Norton;
  • Brown C, Hagoort P. 1993; The processing nature of the n400: evidence from masked priming. J Cogn Neurosci 5 (01) 34-44
  • Brungart DS, Simpson BD, Ericson MA, Scott KR. 2001; Informational and energetic masking effects in the perception of multiple simultaneous talkers. J Acoust Soc Am 110 (5 Pt 1) 2527-2538
  • Buckner RL. 2004; Memory and executive function in aging and AD: multiple factors that cause decline and reserve factors that compensate. Neuron 44 (01) 195-208
  • Coltheart M. 1981; The MRC psycholinguistic database. Q J Exp Psychol 33: 497-505
  • Committee on Hearing, Bioacoustics and Biomechanics (CHABA) 1988; Speech understanding and aging. J Acoust Soc Am 83 (03) 859-895
  • Compumedics Neuroscan 2003. Stim2. Charlotte, NC: Compumedics Neuroscan; [Computer software]
  • Cruickshanks KJ, Wiley TL, Tweed TS, Klein BE, Klein R, Mares-Perlman JA, Nondahl DM. 1998; Prevalence of hearing loss in older adults in Beaver Dam, Wisconsin. The Epidemiology of Hearing Loss Study. Am J Epidemiol 148 (09) 879-886
  • Cullington HE, Zeng FG. 2008; Speech recognition with varying numbers and types of competing talkers by normal-hearing, cochlear-implant, and implant simulation subjects. J Acoust Soc Am 123 (01) 450-461
  • Davis T. 2009. The effect of middle age on interaural asymmetry in a competing speech task. PhD dissertation. Richardson, TX: University of Texas at Dallas; Dissertation Abstracts International DAI-B 71-01
  • Davis TM, Jerger J. 2014; The effect of middle age on the late positive component of the auditory event-related potential. J Am Acad Audiol 25 (02) 199-209
  • Davis TM, Jerger J, Martin J. 2013; Electrophysiological evidence of augmented interaural asymmetry in middle-aged listeners. J Am Acad Audiol 24 (03) 159-173
  • Davis T, Martin J, Jerger J, Greenwald R, Mehta J. 2012; Auditory-cognitive interactions underlying interaural asymmetry in an adult listener: a case study. Int J Audiol 51 (02) 124-134
  • Davis T, Stanley N, Foran L. 2015; Age-related effects of dichotic attentional mode on interaural asymmetry: an AERP study with independent component analysis. J Am Acad Audiol 26 (05) 461-477
  • Department of Veterans Affairs 1998. Tonal and Speech Materials for Auditory Perceptual Assessment (DISC 2.0). Mountain Home, TN: James H. Quillen VA Medical Center;
  • Friederici AD. 2002; Towards a neural basis of auditory sentence processing. Trends Cogn Sci 6 (02) 78-84
  • Helfer KS, Freyman RL. 2008; Aging and speech-on-speech masking. Ear Hear 29 (01) 87-98
  • Helfer KS, Freyman RL. 2014; Stimulus and listener factors affecting age-related changes in competing speech perception. J Acoust Soc Am 136 (02) 748-759
  • Humes LE. 1996; Speech understanding in the elderly. J Am Acad Audiol 7 (03) 161-167
  • Jerger J, Greenwald R, Wambacq I, Seipel A, Moncrieff D. 2000; Toward a more ecologically valid measure of speech understanding in background noise. J Am Acad Audiol 11 (05) 273-282
  • Jerger J, Moncrieff D, Greenwald R, Wambacq I, Seipel A. 2000; Effect of age on interaural asymmetry of event-related potentials in a dichotic listening task. J Am Acad Audiol 11 (07) 383-389
  • Jerger J, Wilson R, Margolis R. 2014; Suggestion for terminological reform in speech audiometry. J Am Acad Audiol 25 (02) 229-230
  • Kalikow DN, Stevens KN, Elliott LL. 1977; Development of a test of speech intelligibility in noise using sentence materials with controlled word predictability. J Acoust Soc Am 61 (05) 1337-1351
  • Killion MC, Niquette PA, Gudmundsen GI, Revit LJ, Banerjee S. 2004; Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. J Acoust Soc Am 116 (4 Pt 1) 2395-2405
  • Martin J, Jerger J, Mehta J. 2007; Divided-attention and directed-attention listening modes in children with dichotic deficits: an event-related potential study. J Am Acad Audiol 18 (01) 34-53
  • McArdle R, Chisolm T. 2009. Speech audiometry. In: Katz J, Medwetsky L, Burkard R, Hood L. Handbook of Clinical Audiology. Baltimore, MD: Lippincott Williams & Wilkins; 64-79
  • Moll K, Cardillo E, Utman J. 2001. Effects of competing speech on sentence-word priming: semantic, perceptual, and attentional factors. In: Moore JD, Stenning K. Proceedings of the Twenty-Third Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.; 651-656
  • Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, Cummings JL, Chertkow H. 2005; The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc 53 (04) 695-699
  • Neely J, Keefe D. 1989. Semantic context effects on visual word processing: a hybrid prospective/retrospective processing theory. In: Bower GH. The Psychology of Learning and Motivation: Advances in Research and Theory. Vol. 24 New York, NY: Academic Press;
  • Nilsson M, Soli SD, Sullivan JA. 1994; Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am 95 (02) 1085-1099
  • Perrin F, García-Larrea L. 2003; Modulation of the N400 potential during auditory phonological/semantic interaction. Brain Res Cogn Brain Res 17 (01) 36-47
  • Praamstra P, Stegeman DF. 1993; Phonological effects on the auditory N400 event-related brain potential. Brain Res Cogn Brain Res 1 (02) 73-86
  • Romei L, Wambacq IJ, Besing J, Koehnke J, Jerger J. 2011; Neural indices of spoken word processing in background multi-talker babble. Int J Audiol 50 (05) 321-333
  • Rönnberg J, Lunner T, Zekveld A, Sörqvist P, Danielsson H, Lyxell B, Dahlström Ö, Signoret C, Stenfelt S, Pichora-Fuller MK, Rudner M. 2013; The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci 7: 31
  • Rönnberg J, Rudner M, Lunner T, Zekveld AA. 2010; When cognition kicks in: working memory and speech understanding in noise. Noise Health 12 (49) 263-269
  • Salthouse TA, Atkinson TM, Berish DE. 2003; Executive functioning as a potential mediator of age-related cognitive decline in normal adults. J Exp Psychol Gen 132 (04) 566-594
  • Schneider BA, Daneman M, Pichora-Fuller MK. 2002; Listening in aging adults: from discourse comprehension to psychoacoustics. Can J Exp Psychol 56 (03) 139-152
  • Schneider B, Pichora-Fuller K. 2000. Implications of perceptual deterioration for cognitive aging research. In: Craik FIM, Salthouse TA. The Handbook of Aging and Cognition. Mahwah, NJ: Lawrence Erlbaum Associates; 155-219
  • Spyridakou C, Bamiou D. 2015; Need of speech-in-noise testing to assess listening difficulties in older adults. Audiol Med 13 (02) 65-76
  • Stanislaw H, Todorov N. 1999; Calculation of signal detection theory measures. Behav Res Methods Instrum Comput 31 (01) 137-149
  • Syntrillium Software Corporation 2003. Cool Edit Pro Version 2.1. Phoenix, AZ: Syntrillium Software Corporation; [Computer software]
  • Tun PA, Williams VA, Small BJ, Hafter ER. 2012; The effects of aging on auditory processing and cognition. Am J Audiol 21 (02) 344-350
  • Tun PA, Wingfield A. 1999; One voice too many: adult age differences in language processing with different types of distracting sounds. J Gerontol B Psychol Sci Soc Sci 54B (05) 317-327
  • Van Overschelde JP, Rawson KA, Dunlosky J. 2004; Category norms: an updated and expanded version of the Battig and Montague (1969) norms. J Mem Lang 50 (03) 289-335
  • Wilson M. 1988; The MRC psycholinguistic database: machine readable dictionary, version 2. Behav Res Methods 20: 6-10
  • Wingfield A, Tun P. 2001; Spoken language comprehension in older adults: interactions between sensory and cognitive change in normal aging. Semin Hear 22: 287-302
  • Zekveld AA, Rudner M, Johnsrude IS, Rönnberg J. 2013; The effects of working memory capacity and semantic cues on the intelligibility of speech in noise. J Acoust Soc Am 134 (03) 2225-2234

Figure 1 Mean YA and OA pure-tone thresholds for the right and left ears. Grayed area indicates the normative range, based on findings from "The Beaver Dam Study." Figure adapted with permission from [Davis (2009)].

Figure 2 Schematic representation of an individual trial, from initial alert tone to participant response. Interstimulus and intertrial interval latencies are noted. FM = frequency modulation.

Figure 3 Schematic representation of the loudspeaker orientation in relation to the participant.

Figure 4 Mean accuracies for Match and No Match word pairs across SNR levels. All data are shown as percent correct.

Figure 5 Mean reaction times for Match and No Match word pairs across SNR levels. All data are reported in milliseconds.

Figure 6 Mean false-alarm rates for Match and No Match word pairs across SNR levels. All data are shown as percentages.
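Because the false-alarm rates in Figure 6 were interpreted through signal detection theory measures computed per Stanislaw and Todorov (1999), the following minimal Python sketch illustrates the conventional calculation of the sensitivity index d' and the bias index c from hit and false-alarm counts. This is illustrative only, not the authors' analysis code, and the trial counts shown are hypothetical.

    from statistics import NormalDist

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        """Return (hit_rate, fa_rate, d_prime, criterion_c)."""
        # Log-linear correction keeps rates of 0 or 1 from yielding infinite z-scores.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        d_prime = z(hit_rate) - z(fa_rate)             # sensitivity: d' = z(H) - z(F)
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias: c = -(z(H) + z(F)) / 2
        return hit_rate, fa_rate, d_prime, criterion

    # Hypothetical counts for one listener at one SNR (invented for illustration):
    print(sdt_measures(hits=40, misses=10, false_alarms=8, correct_rejections=42))

A positive criterion c here indicates a conservative response tendency (favoring "No Match"), which is the direction of the response bias reported as SNR decreased.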