CC BY-NC-ND 4.0 · Int Arch Otorhinolaryngol 2020; 24(04): e462-e471
DOI: 10.1055/s-0039-3402441
Original Research

Effect of Quiet and Noise on P300 Response in Individuals with Auditory Neuropathy Spectrum Disorder

Kumari Apeksha
1   Department of Speech and Hearing, JSS Institute of Speech & Hearing, Mysuru, India

Ajith U. Kumar
2   Department of Audiology, All India Institute of Speech & Hearing, Mysuru, India

Abstract

Introduction Auditory neuropathy spectrum disorder (ANSD) is a clinical condition in which individuals have normal cochlear responses and abnormal neural responses. There is a lack of evidence in the literature regarding the neural discrimination skills of individuals with ANSD, especially when the signal is presented in noise.

Objectives The present study aimed to investigate auditory discrimination skills, in quiet and in the presence of noise, in individuals with ANSD and to compare the findings with those of normal-hearing individuals.

Methods The participants were 30 individuals with normal hearing sensitivity and 30 individuals with ANSD, aged 15 to 55 years old (mean age of 27.86 years old). The P300 response was recorded from both groups using the syllable pair /ba/-/da/ in an oddball paradigm and the syllable /da/ in a repetitive paradigm, in quiet and at a +10 dB signal-to-noise ratio (SNR).

Results With the addition of noise, both groups showed significant prolongation of P300 latency and reaction time, and reduction of P300 amplitude and sensitivity. The topographic pattern analysis showed activation of the central-parietal-occipital region of the brain in individuals with ANSD, whereas activation of the central-parietal region was observed in individuals with normal hearing. The activation was more diffuse in individuals with ANSD compared with individuals with normal hearing.

Conclusion The individuals with ANSD showed a significantly more adverse effect of noise on neural discrimination skills than their normal-hearing counterparts.



Introduction with Objective

Auditory neuropathy spectrum disorder (ANSD) is a clinical condition in which individuals have an abnormality in the afferent auditory nervous system. Common sites of lesion in individuals with ANSD include the inner hair cells and ribbon synapse (presynaptic disorder), the unmyelinated auditory nerve dendrites, the auditory ganglion cells and their axons (postsynaptic disorder), and the auditory brainstem pathway.[1] Temporal bone studies have shown normal outer and inner hair cells with loss of auditory nerve fibers and/or demyelination of fibers in adults with ANSD.[2] [3] [4] The causes of ANSD can be categorized as genetic or acquired, and the genetic causes can be syndromic or nonsyndromic. Sininger[5] reported that 40% of individuals with ANSD have a genetic basis. The acquired causes of ANSD include hypoxia, prematurity, hyperbilirubinemia, immune response, infections, toxic substances, and nutritional deficiencies.[6] [7] Audiological testing shows hearing sensitivity ranging from normal to severe loss on pure-tone audiometry, presence of otoacoustic emissions, and abnormal auditory brainstem responses and middle ear muscle reflexes.[8] [9] [10] The disorder primarily affects the perception of auditory temporal information.[11] [12] The deficit in temporal encoding can impair the sound localization and speech perception skills of these individuals.[13] Several studies have also shown abnormal encoding of speech at the cortical level.[14] [15] [16] [17]

One of the problems commonly encountered by individuals with ANSD is speech perception in the presence of noise.[10] [18] [19] The performance of individuals with hearing impairment is modulated by both auditory and cognitive capabilities.[20] [21] One of the cognitive components that helps in speech perception is working memory. Working memory, also known as short-term memory, is the interplay between echoic memory and long-term memory. Working memory can be assessed using slow cortical potentials that have prolonged refractory periods. The P300 component of the auditory evoked potential is one of the measures commonly used to assess working memory capacity.

Appropriate attention to the stimuli and adequate memory processing speed are necessary for speech perception in adverse listening conditions.[22] [23] [24] The attention directed toward the stimulus and the fundamental memory processing speed of the individual affect P300 amplitude and latency.[25] P300 amplitude is determined by the interval between two target stimuli rather than by the stimulus probability alone.[26] P300 amplitude also depends on the attention allocated to the task and on the memory load.[27] [28] The amplitude reduces with increasing memory load as the task processing demand increases.[25] Stimuli that receive more attention and are recognized with greater confidence are associated with larger P300 amplitudes. P300 latency indexes classification speed, that is, the time required to detect and respond to the target stimulus.[11] [29] [30] P300 latency is strongly related to the speed of mental function.[31] [32] The better the cognitive function of the individual, the shorter the P300 latency. The P300 potential is maximally recorded from the hippocampus, the superior temporal sulcus, the ventrolateral prefrontal cortex, and the intraparietal sulcus.[33]

Few researchers have investigated speech processing ability in individuals with ANSD using different test measures (P1-N1-P2, MMN, and P300). In these studies, the auditory evoked responses were recorded from a limited number of electrode sites.[14] [15] [16] [34] [35] [36] To our knowledge, only two studies in the literature report multichannel recordings in individuals with ANSD.[37] [38] Apeksha et al[37] recorded the P300 response in individuals with ANSD for the speech contrast /ba/-/da/, whereas Apeksha et al[38] recorded the P300 response for three different speech contrasts, /ba/-/da/, /ba/-/ma/ and /ba/-/pa/. In both studies, the P300 response was recorded only in the quiet listening condition. Since individuals with ANSD find it difficult to perceive speech in the presence of noise, there was a need to explore their speech discrimination ability in noise. Obtaining multichannel information in the presence of noise gives an insight into the cortical representation underlying speech perception in a noisy situation. Using high-density electrodes to study cortical processing reveals modulations in scalp topography, which can, in turn, reflect the sources generating these potentials and the compensation occurring at higher levels of the auditory pathway due to the peripheral abnormality. Therefore, the present study was performed with the aim of investigating the neural discrimination skill in quiet and in the presence of noise in individuals with ANSD and in individuals with normal hearing sensitivity, and of comparing the findings for both groups.



Methods

A total of 60 participants were considered for the study: 30 participants diagnosed with ANSD (16 females and 14 males) in the age range of 15 to 55 years old (mean age of 27.86 years old), and 30 individuals with normal hearing sensitivity (16 females and 14 males) in the same age range. The diagnosis of ANSD was made by certified audiologists following the recommendation of Starr et al[8] and by neurologists based on detailed clinical neurological examination, including computed tomography (CT) and magnetic resonance imaging (MRI). According to the recommendation by Starr et al, individuals with ANSD should have normal otoacoustic emissions, absent/abnormal auditory brainstem response (ABR), and absent acoustic reflexes. All individuals who fulfilled the ANSD criteria on both the neurological evaluation and the test findings suggested by Starr et al were considered for further evaluation. All of the individuals with ANSD reported speech understanding difficulty that was acquired in nature, and the minimum age of onset of symptoms in the ANSD group was 14 years old. All of the participants with ANSD reported difficulty in understanding speech, especially in the presence of noise, and had pure-tone averages ranging from normal hearing sensitivity to moderate hearing loss.

Transient evoked otoacoustic emissions (TEOAEs) were recorded using click stimuli at 80 dB peak SPL; a response with a minimum 6 dB signal-to-noise ratio (SNR) at three consecutive frequencies and response reproducibility > 90% was considered present. The ABR was recorded with click stimuli at 90 dB nHL, a repetition rate of 30.1/s, and rarefaction polarity. An ABR with a minimum of three peaks, with wave I latency between 1 and 2 milliseconds, wave III latency between 3 and 4 milliseconds, wave V latency between 5 and 6 milliseconds, and good waveform replicability was considered normal. Acoustic reflexes were elicited using a 226 Hz probe tone in both ipsilateral and contralateral ears, and a reflex amplitude ≥ 0.3 was considered a normal response. All of the individuals with ANSD had TEOAEs present in both ears and showed absent ABR and absent acoustic reflexes. The ABR waveforms obtained from one individual with normal hearing and one individual with ANSD for the left ear using a double-channel evoked potential system are shown in [Fig. 1]. All of the individuals in the normal hearing sensitivity group had normal TEOAEs, normal ABR, and normal acoustic reflexes. Individuals with ANSD were recruited from the Audiology department of the hospital, and individuals in the normal-hearing group were recruited from the general population. The demographic and audiological details of all the individuals with ANSD are given in [Table 1]. Informed consent was obtained from all the participants following the “Ethical Guidelines for Biobehavioral Research Involving Human Subjects”.[39] Institutional ethical committee approval was obtained prior to the study.

Table 1 Demographic and audiological characteristics of individuals with ANSD

| Participant | Age (years)/Gender | Pure-tone average (dB HL) RE/LE | Speech identification scores (%)* RE/LE | Tympanometry RE/LE | Acoustic reflex RE/LE | ABR RE and LE | OAE RE and LE | Neurological evaluation | ENT evaluation |
|---|---|---|---|---|---|---|---|---|---|
| ANSD1 | 20/F | 32.5/36.2 | 45/40 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD2 | 16/F | 38.75/20 | 65/50 | As/As | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD3 | 26/F | 15/22.5 | 90/90 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD4 | 55/M | 46.25/47.5 | 50/45 | A/As | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD5 | 21/M | 30/6.25 | 50/10 | As/As | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD6 | 36/M | 22.5/18.75 | 30/20 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD7 | 24/M | 43.75/30 | 35/35 | Ad/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD8 | 18/M | 28.75/25 | 30/30 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD9 | 20/M | 18.75/25 | 15/60 | As/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD10 | 21/M | 31.25/35 | 40/45 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD11 | 37/F | 20/16.25 | 40/15 | As/As | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD12 | 35/M | 30/22.5 | 40/25 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD13 | 19/F | 36.25/23.75 | 30/20 | Ad/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD14 | 26/F | 28.75/22 | 45/35 | As/As | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD15 | 54/M | 41.25/36.25 | 40/35 | Ad/Ad | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD16 | 20/M | 31.25/32.5 | 50/45 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD17 | 27/M | 35/30 | 45/40 | A/Ad | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD18 | 18/F | 48.75/52.5 | 60/55 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD19 | 48/M | 31.25/30 | 45/35 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD20 | 36/F | 47.25/37.25 | 68/76 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD21 | 21/F | 10/12.5 | 45/65 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD22 | 30/M | 22.5/20 | 30/25 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD23 | 24/F | 35/45 | 35/45 | As/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD24 | 37/F | 53.75/41.25 | 60/45 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD25 | 17/F | 37.5/28.75 | 75/40 | As/As | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD26 | 17/F | 27.5/33.75 | 25/25 | As/As | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD27 | 41/F | 8.75/7.4 | 30/45 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD28 | 20/F | 17.5/15 | 20/15 | As/As | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD29 | 24/M | 28.75/31.25 | 35/40 | A/A | Reflex absent | Response absent | Response present | ANSD | SNHL |
| ANSD30 | 40/F | 45/43.75 | 50/50 | As/As | Reflex absent | Response absent | Response present | ANSD | SNHL |

Abbreviations: ABR, auditory brainstem response; F, female; LE, left ear; M, male; OAE, otoacoustic emission; RE, right ear.

Note. *Speech identification scores in quiet.


Fig. 1 The auditory brainstem responses obtained from one individual with normal hearing (panel A) and one individual with ANSD (panel B) using a double-channel evoked potential system. Panel A shows the ABR obtained from the individual with normal hearing, with the three prominent peaks (waves I, III and V) for 90 dB nHL click stimuli. Panel B shows the response obtained from the individual with ANSD, with no prominent peaks in either the ipsilateral or the contralateral recording.

Stimuli

The stimulus pair /ba/ and /da/ was used to elicit the P300 response in an active oddball paradigm, and the stimulus /da/ was used in a repetitive paradigm. This stimulus pair was selected because the syllables differ in the phonetic feature of place of articulation, which is reported to be more susceptible to noise.[40] [41] [42] The stimuli /ba/ and /da/ differ in their spectral characteristics and in the steepness and direction of the second and third formant transitions.[43] Adobe Audition version 3.0 (Adobe, San Jose, CA, USA) with a MOTU sound card interface (Microbook II, Massachusetts, USA) was used to record the stimuli at a sampling frequency of 44,100 Hz and 16-bit resolution. The duration of both syllables was 240 milliseconds and was kept equal to minimize discrimination of the syllables based on durational cues. Auxviewer software (Kwon, 2012)[44] was used to mix the syllables with speech noise at +10 dB SNR. The +10 dB SNR was selected based on a pilot study, which showed that the behavioral performance of individuals with ANSD on the discrimination task dropped below chance level at SNRs poorer than +10 dB. The waveform and the spectrogram of the syllable /da/ in quiet and in the presence of noise are shown in [Fig. 2]. The syllable was mixed with speech noise such that the onset of the syllable was 1,000 milliseconds after the onset of the noise and the offset of the syllable was 1,000 milliseconds before the offset of the noise. The 1,000-millisecond pre-syllable noise was also selected based on the pilot study, as it completely separated the response elicited by the noise onset from the response elicited by the speech in noise and was sufficient to avoid the influence of the noise-generated response on the speech-generated response.[45] Continuous background noise was not presented to the participants, as it might have caused neural adaptation in individuals with ANSD.[46]
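The mixing itself was performed with Auxviewer; purely as an illustration, the NumPy sketch below reproduces the same logic (a syllable embedded in speech noise at +10 dB SNR, with 1,000 milliseconds of noise before the syllable onset and after its offset). The file names and the use of the soundfile library are assumptions and not part of the original study; monophonic WAV files with matching sampling rates are assumed.

```python
import numpy as np
import soundfile as sf  # any WAV I/O library would do; soundfile is assumed here


def mix_at_snr(syllable_path, noise_path, snr_db=10.0, pad_s=1.0, out_path="da_noise.wav"):
    """Embed a syllable in speech noise at a given SNR, with `pad_s` seconds of
    noise before the syllable onset and after its offset."""
    syl, fs = sf.read(syllable_path)
    noise, fs_n = sf.read(noise_path)
    assert fs == fs_n, "sampling rates must match"

    pad = int(pad_s * fs)
    total = 2 * pad + len(syl)
    noise_seg = noise[:total]                      # assumes the noise file is long enough

    # Scale the noise so that RMS(syllable) / RMS(noise) equals the target SNR.
    rms_syl = np.sqrt(np.mean(syl ** 2))
    rms_noise = np.sqrt(np.mean(noise_seg ** 2))
    noise_seg = noise_seg * (rms_syl / rms_noise) / (10 ** (snr_db / 20))

    mix = noise_seg.copy()
    mix[pad:pad + len(syl)] += syl                 # syllable starts 1,000 ms after noise onset
    mix /= max(1.0, np.max(np.abs(mix)))           # normalize only if needed, to avoid clipping
    sf.write(out_path, mix, fs)
    return out_path


# e.g., mix_at_snr("da.wav", "speech_noise.wav")   # file names are hypothetical
```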

Fig. 2 The waveform and spectrogram of the stimulus /da/ in quiet and in noise (at +10 dB SNR). In the noise condition, the speech stimulus was presented such that its onset was 1,000 milliseconds after the onset of the noise and its offset was 1,000 milliseconds before the offset of the noise.


Procedure

Neural responses were recorded using a Neuroscan Scan 4.5 system (Compumedics, Charlotte, NC, USA). A QuickCap with 64 sintered electrodes fitted with quick cells was used to record the evoked potentials. The left mastoid served as the reference and the electrode between FPz and Fz as the ground. Extraocular electrodes were placed around the eyes to monitor horizontal and vertical eye movements. A Fastrack 3D digitizer (Polhemus, Colchester, USA) was used to digitize the locations of the electrodes before the electroencephalogram (EEG) recording. The configuration of the electrode placement is shown in [Fig. 3].

Fig. 3 The configuration of 64 electrodes used in the electroencephalogram recording.
Fig. 4 Reaction time and sensitivity values obtained from individuals with normal hearing sensitivity and with ANSD in quiet and in noise. The error bar represents one standard error. The asterisk shows the significant difference (p < 0.05) between the conditions (quiet and noise) for RT and sensitivity measures.
Fig. 5 The grand average waveforms obtained from individuals with normal hearing and with ANSD in response to the /ba/-/da/ stimuli in the oddball paradigm (waveforms in black) and to the /da/ stimulus in the repetitive paradigm (waveform in red), in quiet and in noise, for all 64 channels. The upper panel of each window shows the average response obtained from the 30 individuals with normal hearing and the 30 individuals with ANSD separately, in quiet and in noise. The lower panel of each window shows the global field power (GFP) for the average response. Time in milliseconds is plotted on the x-axis and amplitude in µV is shown on the y-axis.

For recording the P300 response, the frequent (80%) stimulus was /ba/ and the infrequent (20%) stimulus was /da/. A total of 250 stimuli comprising frequent (/ba/) and infrequent (/da/) stimuli was used for the recording, presented such that no two infrequent syllables occurred consecutively. The interstimulus interval (onset of one syllable to the onset of the next) between two consecutive stimuli was 2,240 milliseconds in the quiet condition and 3,240 milliseconds in the noise condition. In the noise condition, the trigger was placed 1,000 milliseconds after the onset of the noise. The 75 dB SPL signal was presented through a loudspeaker kept at a distance of 1 meter and at 0° azimuth. The intensity of the signal reaching the ear was ensured to be loud enough to elicit the response and was at the most comfortable level for both groups of participants. A total of 50 sweeps of the /da/ stimulus was used to elicit the response in the oddball and in the repetitive paradigm, in quiet and at +10 dB SNR. The instruction given to the participants for recording the P300 in the oddball paradigm was: ‘You will hear two stimuli, /ba/ and /da/, in random order; press the response button given to you as early as possible after hearing the stimulus /da/, and do not press the button for the stimulus /ba/’. The approximate duration of recording the P300 response in the oddball paradigm was 10 minutes in the quiet condition and 14 minutes in the noise condition. After each recording, a rest period of 5 minutes was given to the participants. Behavioral measures (sensitivity and reaction time) were estimated from the button-press responses collected while recording the P300 response. Sensitivity (d′) is an estimate of the strength of the signal; it is a statistic that incorporates both the hit rate and the false alarm rate and thus reflects the accuracy with which the task is performed. It is calculated using the formula d′ = z(hit rate) − z(false alarm rate). The value of sensitivity ranges from 0 to 1. Reaction time (RT) is the time taken from the onset of the stimulus to the button-press response. The participants watched a silent video while the response was recorded in the repetitive paradigm.
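A minimal sketch of the sensitivity calculation quoted above, using SciPy's inverse-normal (z) transform. The trial counts in the usage example are hypothetical, and the correction applied to hit/false-alarm rates of exactly 0 or 1 is a common convention added here so that the z-transform stays finite; it is not described in the original study.

```python
from scipy.stats import norm


def sensitivity_d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), the formula quoted in the text.
    Rates of exactly 0 or 1 are nudged by half a trial (a common correction,
    assumed here) so that norm.ppf() stays finite."""
    n_targets = hits + misses
    n_nontargets = false_alarms + correct_rejections
    hit_rate = min(max(hits / n_targets, 0.5 / n_targets), 1 - 0.5 / n_targets)
    fa_rate = min(max(false_alarms / n_nontargets, 0.5 / n_nontargets), 1 - 0.5 / n_nontargets)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)


# e.g., for 50 target (/da/) and 200 non-target (/ba/) trials; the counts are hypothetical:
# print(sensitivity_d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=188))
```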



Analyses

The continuous EEG obtained from both groups, in quiet and at +10 dB SNR, was analyzed using a script written in the SCAN module of the Neuroscan system. The script included steps for DC offset correction, ocular artifact reduction, filtering, epoching, baseline correction, and rereferencing. The response was bandpass filtered from 0.1 to 30 Hz using a FIR filter and was epoched from 200 milliseconds prestimulus to 800 milliseconds poststimulus. Bad electrodes were defined as those with amplitude spikes > 75 µV, and the data from the bad electrodes were interpolated using spline interpolation. The amplitude and latency of the P300 response and the scalp topography were analyzed using the Cartool software (https://sites.google.com/site/cartoolcommunity/home). Point-wise paired randomization analysis and the topographic pattern analysis procedure were used to analyze the responses obtained from individuals with normal hearing and with ANSD across both listening conditions (quiet and noise). In the pointwise paired randomization analysis, the responses obtained in the quiet condition were compared with the responses obtained in the noise condition at each time point from 0 to 800 milliseconds. The regions with statistically significant differences between the two responses are shown as dark shaded regions across the time frame. This analysis also gives information about the global field power (GFP), a reference-independent measure of response strength. Mathematically, GFP is the root mean square of the amplitudes across average-referenced electrodes at a given instant in time.[47] [48] In the topographic pattern analysis procedure, the scalp activation patterns are compared between the conditions (quiet and noise) across time frames, and significantly different activation patterns are shown as template maps. Behavioral measures (sensitivity and RT) were calculated from the button-press responses, and the data obtained were analyzed using the Statistical Package for the Social Sciences Version 17 (SPSS Inc, Illinois, USA). The Shapiro-Wilk test of normality was performed to assess the distribution of the RT and sensitivity data; the data were found to be nonnormally distributed (p < 0.05), thus nonparametric tests were used. The Mann-Whitney U test was performed to compare data between the groups (e.g., the sensitivity of normal-hearing individuals with that of individuals with ANSD), and the Wilcoxon signed rank test was used to compare responses within groups (e.g., the sensitivity of individuals with normal hearing in quiet versus in noise).
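The actual pipeline was implemented in the Neuroscan SCAN module and Cartool. As an illustration only, the sketch below reproduces the main numerical steps described above (0.1–30 Hz FIR filtering, −200 to 800 ms epochs, baseline correction, averaging, and GFP) with MNE-Python and NumPy. The file name, reference channel name, and event code are assumptions, and MNE's amplitude criterion rejects epochs rather than interpolating bad electrodes, so it only approximates the 75 µV criterion described in the text.

```python
import numpy as np
import mne

# Hypothetical Neuroscan recording; the original analysis used SCAN and Cartool.
raw = mne.io.read_raw_cnt("sub01_oddball_quiet.cnt", preload=True)
raw.set_eeg_reference(["M1"])                 # left mastoid reference; channel name assumed
raw.filter(0.1, 30.0, fir_design="firwin")    # 0.1-30 Hz FIR band-pass, as in the text

# Triggers at syllable onset (use mne.events_from_annotations if they are stored as annotations).
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"deviant": 2},   # event code is an assumption
                    tmin=-0.2, tmax=0.8,                     # -200 to 800 ms epoch
                    baseline=(-0.2, 0.0),                    # pre-stimulus baseline correction
                    reject=dict(eeg=75e-6),                  # 75 µV criterion (rejection, not interpolation)
                    preload=True)
evoked = epochs.average()                                    # per-participant average ERP


def global_field_power(data):
    """GFP as defined in the text: the RMS across average-referenced electrodes
    at each time point; `data` is an (n_channels, n_times) array."""
    referenced = data - data.mean(axis=0, keepdims=True)     # re-reference to the average
    return np.sqrt(np.mean(referenced ** 2, axis=0))         # one GFP value per sample


gfp = global_field_power(evoked.data)
```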

Fig. 6 The grand average P300 response obtained from individuals with normal hearing and with ANSD in response to deviant stimuli in oddball paradigm in quiet and in noise. The dark shaded area in the lower panel shows the region of significant difference (p < 0.05) on point-wise paired randomization test. Time in milliseconds is plotted on the x-axis and scalp electrode locations are shown on the y-axis in the bottom panel.
Fig. 7 The result of the topographic pattern analysis. Panel A shows the time regions at which statistically different template maps occurred. The color shaded regions show the global field power (GFP) for the two groups of individuals, with normal hearing and with ANSD, in the quiet and noise conditions. The numbers below the GFPs represent the significantly different template maps/activation patterns seen for both groups; different colors represent different template maps. The y-axis in panel A represents the response in the quiet and noise conditions for both groups. Panel B shows the six significantly different templates, lying in the time region of the P300, superimposed on the head model. These template maps show the differences in scalp activation pattern, with the scale ranging from no activation (dark blue areas) to the area of maximum activation (pink shaded region) in response to the task in the oddball paradigm. All six significantly different templates differ in scalp activation pattern, as shown by the area and location of the activation.


Results

The sensitivity and the RT for the identification of the oddball stimuli are shown in [Fig. 4]. The Mann-Whitney U test showed a statistically significant difference between individuals with normal hearing and individuals with ANSD for RT and sensitivity in both conditions (p < 0.05). When compared within groups and across conditions, the Wilcoxon signed rank test showed significantly shorter RT (z = 3.65, p < 0.05, r = 0.66) and greater sensitivity (z = 3.06, p < 0.01, r = 0.55) in the quiet condition compared with the noise condition, with large effect sizes, for normal-hearing individuals. Similarly, in individuals with ANSD, RT was significantly shorter (z = 4.40, p < 0.05, r = 0.86) and sensitivity was significantly greater (z = 3.64, p < 0.05, r = 0.68) in the quiet condition compared with the noise condition, with large effect sizes.
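The authors ran these tests in SPSS. For illustration, the SciPy sketch below runs the same within-group comparison and derives an effect size r = z/√N; the reaction-time values are simulated placeholders, and recovering z from the two-sided p-value is our assumption about how the reported z and r values were obtained.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu, norm

rng = np.random.default_rng(1)
# Hypothetical per-participant reaction times (s); the real values come from the button presses.
rt_quiet = rng.uniform(0.40, 0.70, 30)
rt_noise = rt_quiet + rng.uniform(0.05, 0.30, 30)


def effect_size_r(p_two_sided, n):
    """Recover z from a two-sided p-value and return (z, r) with r = z / sqrt(N)."""
    z = abs(norm.ppf(p_two_sided / 2))
    return z, z / np.sqrt(n)


# Within-group comparison (quiet vs. noise), as in the text.
stat, p = wilcoxon(rt_quiet, rt_noise)
z, r = effect_size_r(p, len(rt_quiet))
print(f"Wilcoxon signed rank: z = {z:.2f}, p = {p:.4f}, r = {r:.2f}")

# A between-group comparison would use mannwhitneyu(group1_rt, group2_rt) in the same way.
```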

The grand average responses (averages of the responses obtained from the 30 individuals with normal hearing and the 30 individuals with ANSD separately) for the stimulus pair /ba/-/da/ presented in the oddball paradigm and for /da/ in the repetitive paradigm, across the 64 channels, in quiet and in noise, are shown in [Fig. 5]. The upper panel of each window in [Fig. 5] shows the average response obtained from the 30 individuals with normal hearing and the 30 individuals with ANSD. The lower panel of each window shows the average GFP of the responses across the 64 electrodes. It is clear from [Fig. 5] that a prominent P300 response with clear morphology could be elicited from both individuals with normal hearing and individuals with ANSD. Comparing the responses across the quiet and noise conditions showed an overall reduction in the amplitude of the P300 response in both groups, with a greater reduction in amplitude for individuals with ANSD. The pointwise difference in the P300 response between the quiet and noise conditions was calculated using the paired randomization method in Cartool. The result showed a significant difference in the event-related potential (ERP) response between the quiet and noise conditions, as shown in the lower panel of [Fig. 6]. The dark shaded area in the lower panel of [Fig. 6] shows the region of significant difference (p < 0.05) when the responses in quiet are compared with the responses obtained in noise for both groups of individuals, across channels and time. Overall, there was a significant prolongation in latency and reduction in amplitude of the P300 response with the addition of noise in individuals with ANSD.

The topographic pattern analysis was performed to examine the differences in scalp activation pattern between the quiet and noise conditions for individuals with normal hearing and with ANSD. The result showed a total of 10 statistically significant maps accounting for 87% of the variance in the group average data, as shown in [Fig. 7]. In both groups, there was centro-parietal scalp activation, with minor but statistically significant variations in topography during the P300 time window, as shown by the pattern analysis. Individuals with ANSD showed more activation in the central-parietal-occipital region of the brain (pink shaded area in the lower panel of [Fig. 7]), whereas individuals with normal hearing showed activation of the central-parietal region. The scalp distribution was more diffuse in individuals with ANSD compared with those with normal hearing, as shown by the area and location of the activation site. As can be observed in [Fig. 7], there was a band of lower activation in the frontal and occipital regions of the scalp (blue shaded region) for individuals with normal hearing, as shown in the templates of [Figures 7A] and [7B].



Discussion

To our knowledge, there are no published reports on eliciting the P300 potential in the presence of noise in individuals with ANSD. In the majority of studies, researchers have recorded P1-N1-P2 potentials,[14] [15] [36] [49] [50] mismatch negativity[16] [17] and P300[34] [37] [38] in the quiet condition in individuals with ANSD. The previous studies investigating the P300 response in individuals with ANSD[34] [37] showed prolongation of P300 latency, reduction of P300 amplitude, prolonged RT, and poorer sensitivity in the quiet condition for different stimuli, compared with individuals with normal hearing sensitivity. Similar findings were observed in the present study, suggesting that individuals with ANSD might have difficulty in stimulus evaluation and in the speech discrimination process: they require more time to discriminate a particular signal, and the accuracy with which they discriminate it is also compromised. Single-unit cortical data suggest that cortical neurons are more sensitive to temporal cues than to intensity cues for the representation of an auditory signal.[51] In individuals with ANSD, the poor phase-locking property of the auditory neurons leads to poor representation of the temporal cues, and thus to prolonged latency of the P300 response. The P300 response showed prolonged latency in individuals with ANSD with the addition of noise, suggesting slower processing of the stimuli in noise compared with quiet; they require more time to detect and respond to the target stimuli in the presence of noise.[29] [30] P300 amplitude was also reduced in the presence of noise compared with quiet, which could be because of the increase in memory load and a deficit in the attention allocated to the task[25] in the presence of noise. The individuals with ANSD might have poor working memory, as suggested by the delay in P300 latency and the reduction in P300 amplitude. Behavioral working memory tests were not included in the present study, which is a limitation; such tests might give more information about working memory capacity in individuals with ANSD and would supplement the present findings.

The scalp topography of the P300 response showed neural activation in the central-parietal region of the scalp in individuals with normal hearing, and activation of the central-parietal-occipital region in individuals with ANSD. There was a clear band of activation in the central-parietal region, with anterior and posterior negativity, in individuals with normal hearing, whereas individuals with ANSD showed additional activation toward the occipital lobe with a more diffuse activation pattern. The difference in activation pattern across groups suggests a differential distribution of the electrical field across the scalp. This difference in scalp activation pattern might have been caused by differences in the configuration of the underlying brain sources generating these potentials and by differential activation of brain networks.[52] A study investigating the current source density in individuals with ANSD would give a clearer idea of the generators of these potentials.



Conclusion

The individuals with ANSD required more time to discriminate the stimuli and showed less accuracy in identifying the target stimuli compared with individuals with normal hearing sensitivity. Behavioral performance (sensitivity and RT) deteriorated in both groups with the addition of noise, and the variation in behavioral performance was higher for individuals with ANSD than for individuals with normal hearing. The P300 response showed prolonged latency and reduced amplitude in individuals with ANSD compared with normal-hearing individuals. Based on the RT and sensitivity (behavioral measures), and on the latency, amplitude, and scalp topography of the P300 response (neural measures), it is evident that individuals with ANSD deviate from individuals with normal hearing in both behavioral and neural measures, which could be the result of differences in the underlying sources generating the responses.



Conflict of Interests

The authors have no conflict of interests to declare.

  • References

  • 1 Rance G, Starr A. Pathophysiological mechanisms and functional hearing consequences of auditory neuropathy. Brain 2015; 138 (Pt 11): 3141-3158 DOI: 10.1093/brain/awv270.
  • 2 Roche JP, Huang BY, Castillo M, Bassim MK, Adunka OF, Buchman CA. Imaging characteristics of children with auditory neuropathy spectrum disorder. Otol Neurotol 2010; 31 (05) 780-788 . Doi: 10.1097/MAO.0b013e3181d8d528
  • 3 Liu C, Bu X, Wu F, Xing G. Unilateral auditory neuropathy caused by cochlear nerve deficiency. Int J Otolaryngol 2012; 2012: 914986 . Doi: 10.1155/2012/914986
  • 4 Buchman CA, Roush PA, Teagle HFB, Brown CJ, Zdanski CJ, Grose JH. Auditory neuropathy characteristics in children with cochlear nerve deficiency. Ear Hear 2006; 27 (04) 399-408 . Doi: 10.1097/01.aud.0000224100.30525.ab
  • 5 Sininger YS. Identification of auditory neuropathy in infants and children. Semin Hear 2002; 23: 193-200 . Doi: 10.1055/s-2002-34456
  • 6 Teagle HF, Roush PA, Woodard JS. et al. Cochlear implantation in children with auditory neuropathy spectrum disorder. Ear Hear 2010; 31 (03) 325-335 . Doi: 10.1097/AUD.0b013e3181ce693b
  • 7 Hood LJ. Auditory neuropathy/dyssynchrony disorder: diagnosis and management. Otolaryngol Clin North Am 2015; 48 (06) 1027-1040 . Doi: 10.1016/j.otc.2015.06.006
  • 8 Starr A, Sininger YS, Pratt H. The varieties of auditory neuropathy. J Basic Clin Physiol Pharmacol 2000; 11 (03) 215-230 . Doi: 10.1515/jbcpp.2000.11.3.215
  • 9 Berlin CI, Hood LJ, Morlet T. et al. Absent or elevated middle ear muscle reflexes in the presence of normal otoacoustic emissions: a universal finding in 136 cases of auditory neuropathy/dys-synchrony. J Am Acad Audiol 2005; 16 (08) 546-553 . Doi: 10.3766/jaaa.16.8.3
  • 10 Berlin CI, Hood LJ, Morlet T. et al. Multi-site diagnosis and management of 260 patients with auditory neuropathy/dys-synchrony (auditory neuropathy spectrum disorder). Int J Audiol 2010; 49 (01) 30-43 . Doi: 10.3109/14992020903160892
  • 11 Picton T. Hearing in time: evoked potential studies of temporal processing. Ear Hear 2013; 34 (04) 385-401 . Doi: 10.1097/AUD.0b013e31827ada02
  • 12 Starr A, Picton TW, Kim R. Pathophysiology of Auditory Neuropathy. In: Sininger Y, Starr A, eds. Auditory Neuropathy: a new perspective on hearing disorders. Singular, San Diego; 2001: 67-82
  • 13 Moser T, Starr A. Auditory neuropathy--neural and synaptic mechanisms. Nat Rev Neurol 2016; 12 (03) 135-149 . Doi: 10.1038/nrneurol.2016.10
  • 14 Narne VK, Vanaja C. Speech identification and cortical potentials in individuals with auditory neuropathy. Behav Brain Funct 2008; 4: 15 . Doi: 10.1186/1744-9081-4-15
  • 15 Narne VK, Prabhu P, Chandan H, Deepthi M. Audiological profiling of 198 individuals with auditory neuropathy spectrum disorder. Hear Balance Commun 2014; 12: 112-120 . Doi: 10.3109/21695717.2014.938481
  • 16 Kumar AU, Jayaram M. Auditory processing in individuals with auditory neuropathy. Behav Brain Funct 2005; 1: 21 . Doi: 10.1186/1744-9081-1-21
  • 17 Kraus N, Bradlow AR, Cheatham MA. et al. Consequences of neural asynchrony: a case of auditory neuropathy. J Assoc Res Otolaryngol 2000; 1 (01) 33-45 . Doi: 10.1007/s101620010004
  • 18 Narne VK, Chatni S, Kalaiah M. et al. Temporal processing and speech perception in quiet and noise across different degrees of ANSD. Hear Balance Commun 2015; 13: 100-110 . Doi: 10.3109/21695717.2015.1021565
  • 19 Apeksha K, Kumar AU. Speech perception in quiet and in noise condition in individuals with auditory neuropathy spectrum disorder. J Int Adv Otol 2017; 13 (01) 83-87 . Doi: 10.5152/iao.2017.3172
  • 20 Akeroyd MA. Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. Int J Audiol 2008; 47 (Suppl. 02) S53-S71 . Doi: 10.1080/14992020802301142
  • 21 Wingfield A, Tun PA. Cognitive supports and cognitive constraints on comprehension of spoken language. J Am Acad Audiol 2007; 18 (07) 548-558
  • 22 Wingfield A, Tun PA. Spoken Language Comprehension in Older Adults: Interactions between Sensory and Cognitive Changes in Normal Aging. Semin Hear 2001; 22: 287-302
  • 23 Rönnberg J, Lunner T, Zekveld A. et al. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci 2013; 7: 31 . Doi: 10.3389/fnsys.2013.00031
  • 24 McCoy SL, Tun PA, Cox LC, Colangelo M, Stewart RA, Wingfield A. Hearing loss and perceptual effort: downstream effects on older adults' memory for speech. Q J Exp Psychol A 2005; 58 (01) 22-33 . Doi: 10.1080/02724980443000151
  • 25 Donchin E, Coles M. Is the P300 component a manifestation of context updating?. Behav Brain Sci 1988; 11: 355-425 . Doi: 10.1017/S0140525X00058027
  • 26 Gonsalvez CL, Polich J. P300 amplitude is determined by target-to-target interval. Psychophysiology 2002; 39 (03) 388-396 . Doi: 10.1017/S0048577201393137
  • 27 Key A, Dove G, Maguire M. Linking Brainwaves to the Brain: An ERP Primer. J Chem Inf Model 2013; 53: 1689-1699 . Doi: 10.1017/CBO9781107415324.004
  • 28 Rossini PM, Rossi S, Babiloni C, Polich J. Clinical neurophysiology of aging brain: from normal aging to neurodegeneration. Prog Neurobiol 2007; 83 (06) 375-400 . Doi: 10.1016/j.pneurobio.2007.07.010
  • 29 Kutas M, McCarthy G, Donchin E. Augmenting mental chronometry: the P300 as a measure of stimulus evaluation time. Science 1977; 197 (4305): 792-795 DOI: 10.1126/science.887923.
  • 30 Magliero A, Bashore TR, Coles MGH, Donchin E. On the dependence of P300 latency on stimulus evaluation processes. Psychophysiology 1984; 21 (02) 171-186 . Doi: 10.1111/j.1469-8986.1984.tb00201.x
  • 31 Tsolaki A, Kosmidou V, Hadjileontiadis L, Kompatsiaris IY, Tsolaki M. Brain source localization of MMN, P300 and N400: aging and gender differences. Brain Res 2015; 1603: 32-49 . Doi: 10.1016/j.brainres.2014.10.004
  • 32 Cóser MJS, Cóser PL, Pedroso FS, Rigon R, Cioqueta E. P300 auditory evoked potential latency in elderly. Rev Bras Otorrinolaringol (Engl Ed) 2010; 76 (03) 287-293 . Doi: 10.1590/S1808-86942010000300003
  • 33 Halgren E, Marinkovic K, Chauvel P. Generators of the late cognitive potentials in auditory and visual oddball tasks. Electroencephalogr Clin Neurophysiol 1998; 106 (02) 156-164
  • 34 Apeksha K, Kumar AU. P300 in individuals with auditory neuropathy spectrum disorder. J Indian Speech Lang Hear Assoc 2017; 31: 23-28 . Doi: 10.4103/jisha.JISHA
  • 35 Gabr TA. Mismatch negativity in auditory neuropathy/auditory dys-synchrony. Audiol Med 2011; 9: 91-97 . Doi: 10.3109/1651386X.2011.605623
  • 36 Michalewski HJ, Starr A, Zeng FG, Dimitrijevic A. N100 cortical potentials accompanying disrupted auditory nerve activity in auditory neuropathy (AN): effects of signal intensity and continuous noise. Clin Neurophysiol 2009; 120 (07) 1352-1363 . Doi: 10.1016/j.clinph.2009.05.013
  • 37 Apeksha K, Kumar UA. Cortical processing of speech in individuals with auditory neuropathy spectrum disorder. Eur Arch Otorhinolaryngol 2018; 275 (06) 1409-1418 . Doi: 10.1007/s00405-018-4966-8
  • 38 Apeksha K, Kumar UA. Effect of acoustic features on discrimination ability in individuals with auditory neuropathy spectrum disorder: an electrophysiological and behavioral study. Eur Arch Otorhinolaryngol 2019; 276 (06) 1633-1641 . Doi: 10.1007/s00405-019-05405-9
  • 39 Venkatesan S. Ethical Guidelines for Bio-behavioral Research Involving Human Subjects. All India Institute of Speech and Hearing, Mysore 2009
  • 40 Miller G, Nicely P. An analysis of perceptual confusions among some English consonants. J Acoust Soc Am 1955; 27: 338-352 . Doi: 10.1121/1.1907526
  • 41 Boothroyd A. Auditory perception of speech contrasts by subjects with sensorineural hearing loss. J Speech Hear Res 1984; 27 (01) 134-144 . Doi: 10.1044/jshr.2701.134
  • 42 Hornsby BWY, Trine TD, Ohde RN. The effects of high presentation levels on consonant feature transmission. J Acoust Soc Am 2005; 118 (3 Pt 1): 1719-1729 DOI: 10.1121/1.1993128.
  • 43 Sawusch JR, Pisoni DB. On the identification of place and voicing features in synthetic stop consonants. J Phonetics 1974; 2 (03) 181-194
  • 44 Kwon BJ. AUX: a scripting language for auditory signal processing and software packages for psychoacoustic experiments and education. Behav Res Methods 2012; 44 (02) 361-373 . Doi: 10.3758/s13428-011-0161-1
  • 45 Kaplan-Neeman R, Kishon-Rabin L, Henkin Y, Muchnik C. Identification of syllables in noise: electrophysiological and behavioral correlates. J Acoust Soc Am 2006; 120 (02) 926-933 . Doi: 10.1121/1.2217567
  • 46 Wynne DP, Zeng FG, Bhatt S, Michalewski HJ, Dimitrijevic A, Starr A. Loudness adaptation accompanying ribbon synapse and auditory nerve disorders. Brain 2013; 136 (Pt 5): 1626-1638 DOI: 10.1093/brain/awt056.
  • 47 Lehmann D, Skrandies W. Reference-free identification of components of checkerboard-evoked multichannel potential fields. Electroencephalogr Clin Neurophysiol 1980; 48 (06) 609-621 . Doi: 10.1016/0013-4694(80)90419-8
  • 48 Murray MM, Brunet D, Michel CM. Topographic ERP analyses: a step-by-step tutorial review. Brain Topogr 2008; 20 (04) 249-264 . Doi: 10.1007/s10548-008-0054-5
  • 49 Abdeltawwab M. Auditory N1–P2 cortical event related potentials in auditory neuropathy spectrum disorder patients. J Int Adv Otol 2014; 10: 270-274 . Doi: 10.5152/iao.2014.104
  • 50 Narne VK, Vanaja CS. Perception of speech with envelope enhancement in individuals with auditory neuropathy and simulated loss of temporal modulation processing. Int J Audiol 2009; 48 (10) 700-707 . Doi: 10.1080/14992020902931574
  • 51 Phillips DP. Neural representation of sound amplitude in the auditory cortex: effects of noise masking. Behav Brain Res 1990; 37 (03) 197-214 . Doi: 10.1016/0166-4328(90)90132-x
  • 52 Song J, Davey C, Poulsen C. et al. EEG source localization: Sensor density and head surface coverage. J Neurosci Methods 2015; 256: 9-21 . Doi: 10.1016/j.jneumeth.2015.08.015

Address for correspondence

Kumari Apeksha, PhD
Lecturer, Department of Speech and Hearing
JSS Institute of Speech & Hearing, Mysuru, Karnataka
India   

Publication History

Received: 12 July 2019

Accepted: 04 November 2019

Article published online:
11 March 2020

© Thieme Revinter Publicações Ltda, Rio de Janeiro, Brazil

