CC BY-NC-ND 4.0 · Int Arch Otorhinolaryngol 2018; 22(04): 460-468
DOI: 10.1055/s-0037-1605598
Systematic Review
Thieme Revinter Publicações Ltda Rio de Janeiro, Brazil

Parameters for Applying the Brainstem Auditory Evoked Potential with Speech Stimulus: Systematic Review

Luísa Bello Gabriel
1   Phonoaudiology, Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA), Porto Alegre, RS, Brazil
,
Luíza Silva Vernier
2   Speech Therapy, Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA), Porto Alegre, RS, Brazil
,
Maria Inês Dornelles da Costa Ferreira
1   Phonoaudiology, Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA), Porto Alegre, RS, Brazil
,
Adriana Laybauer Silveira
3   Phonoaudiology, Hospital de Clínicas de Porto Alegre, Porto Alegre, RS, Brazil
,
Márcia Salgado Machado
1   Phonoaudiology, Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA), Porto Alegre, RS, Brazil

Address for correspondence

Márcia Salgado Machado
Fonoaudiologia, Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA)
Rua Sarmento Leite, 245, Centro Histórico, Porto Alegre, RS 90050-170
Brazil   

Publication History

Received: 09 February 2017

Accepted: 29 June 2017

Publication Date:
28 August 2017 (online)

 

Abstract

Introduction Studies using the Brainstem Auditory Evoked Potential with speech stimulus are increasing in Brazil, and there are divergences among the methodologies used for testing.

Objectives To analyze the parameters used in the study of the Brainstem Auditory Evoked Potentials with speech stimulus.

Data Synthesis The survey was performed using electronic databases. The search strategy was as follows: “Evoked potentials, auditory” OR “Brain stem” OR “Evoked potentials, auditory, brain stem” AND “Speech.” The survey was performed from June to July of 2016. The criteria for including articles in this study were: being written in Portuguese, English or Spanish; presenting a description of the testing parameters; and presenting a description of the sample. In the selected databases, 2,384 articles were found, and 43 met all of the inclusion criteria. The following parameters predominated in the studies: stimulation with the syllable /da/; monaural presentation, most often to the right ear; intensity of 80 dB SPL; vertical placement of electrodes; use of in-ear headphones; patient seated, awake and distracted; alternating polarity; use of speech-synthesizer software for stimulus generation; presentation rate of 10.9/s; and sampling rate of 20 kHz.

Conclusions The theme addressed in this systematic review is relatively recent. However, the results are significant enough to encourage the use of the procedure in clinical practice and to inform clinicians about the most used options for each parameter.



Introduction

The goal of auditory evoked potentials with speech stimuli is to assess neurophysiological activity. The responses generated by the brainstem to auditory stimulation with complex sounds yield measurements of timing and magnitude that provide reliable information about the neural coding of speech sounds.[1] The test is indicated in neurological and psychiatric conditions and in hearing, language and learning disorders.[1] In addition, it can be used as a record to verify the efficacy of previous auditory training.

The speech signal is considered a complex stimulus when compared with the click stimulus, because decoding it requires the coordinated and simultaneous activation of a large set of neurons, from the peripheral auditory system to the cortex.[2] Furthermore, the perception of a speech signal requires processes such as peripheral analysis and the retrieval of its characteristics by the brainstem nuclei, making it possible to classify phonemes and words.[3] [4]

The type of stimulus usually employed for the study of brainstem auditory evoked potentials (BAEPs) with speech stimuli is the syllable /da/. Mirroring the structure of the syllable itself, the brainstem response to speech can be divided into two portions: the transient portion (onset response), which corresponds to the consonant, and the sustained portion (frequency-following response [FFR]), which corresponds to the vowel.[1] [5] [6]
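
As a rough illustration of this division, the sketch below (Python; the sampling rate and window boundaries are illustrative assumptions, not values prescribed by the studies reviewed here) splits a recorded epoch into the transient and sustained portions:

```python
import numpy as np

fs = 20_000                                  # assumed sampling rate (Hz)
epoch = np.zeros(int(0.060 * fs))            # placeholder 60 ms speech-ABR epoch

def segment(signal: np.ndarray, start_ms: float, stop_ms: float) -> np.ndarray:
    """Extract the samples between two time points (ms after stimulus onset)."""
    return signal[int(start_ms * fs / 1000):int(stop_ms * fs / 1000)]

onset_response = segment(epoch, 0, 10)       # transient portion (consonant onset)
ffr = segment(epoch, 10, 60)                 # sustained portion (vowel, FFR)
```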

There is a region of the brain specialized in responding to speech sounds as opposed to simple sounds;[7] that is, different cerebral mechanisms are responsible for the auditory processing of speech compared with the processing of other sounds. There are other advantages to using the BAEP with speech stimulus.[8] One of them is that speech stimuli occur far more often in natural listening environments than simpler stimuli, such as the click and the tone burst. In addition, because the auditory system is non-linear, how it responds to speech can only be understood by testing it with speech stimuli. It should also be noted that exposure to speech sounds and their use in linguistic contexts drive neural plasticity of the auditory pathway, which does not occur with the click stimulus, for instance.[9]

Studies using brainstem auditory evoked potentials with speech stimulus in Brazil are on the rise;[3] [10] however, there are still divergences among the methodologies used for testing. Thus, it is important to study and analyze the parameters adopted for the BAEP with speech stimulus to help clinicians who use this procedure.

The goal of this paper is to analyze the parameters used to evaluate the BAEPs with speech stimulus.



Review of Literature

This review was conducted based on the following guiding question: What are the parameters most used for the execution of BAEPs with speech stimulus?

This is a systematic literature review conducted through a survey of the electronic databases Scientific Electronic Library Online (SciELO), Latin American and Caribbean Health Sciences Literature (LILACS), National Library of Medicine (MEDLINE/PubMed) and Spanish Health Sciences Bibliographic Index (IBECS). The descriptors used in the searches were pre-established through the trilingual structured vocabulary Health Sciences Descriptors (DeCS), organized by the Virtual Health Library (BIREME).

The search strategy that was used to conduct this survey was as follows: “Evoked potentials, auditory” OR “Brain stem” OR “Evoked potentials, auditory, brain stem” AND “Speech.”

The survey was conducted in June and July of 2016, and the search was performed without restriction on publication date. Articles in Portuguese, English and Spanish were considered, and the following criteria were adopted when selecting the articles:

  • Presenting a description of the following testing parameters:

    • Type of speech stimulus used;

    • Intensity of the speech stimulus presentation;

    • Position of electrodes;

    • Types of headphones;

    • Patient status at the time of examination;

    • Polarity;

    • Presentation rate;

    • Number of stimuli presented;

    • Stimulus source;

  • Description of sample under study

All articles that did not meet the above criteria were excluded, as were articles without clear methodological criteria, including those that did not clearly state the parameters used in the study. The selection took place in three stages: first, two independent researchers retrieved the articles found in each database with the descriptor combination; then the titles were analyzed; and lastly the abstracts were analyzed. Finally, the publications considered suitable under the inclusion criteria were selected. For this analysis, a judge independently reviewed the same studies and chose the articles considered relevant to the survey. Inconsistencies were resolved by consensus among the researchers.

[Fig. 1] shows the results obtained through the search strategy employed in this systematic review.

Fig. 1 Number of articles found, selected and the reasons for exclusion.

In [Table 1], the characteristics used by each study for the testing of BAEPs with speech stimulus can be observed.

Table 1

Presentation of parameters used for the BAEP speech test in the studies selected (part 1: stimulus and recording setup)

| Article | Stimulus | Ear | B/M | Intensity | Electrodes | Headphone |
| --- | --- | --- | --- | --- | --- | --- |
| Hayes et al (2003)[11] | Syllables /da/ and /ga/ | RE | M | 80 dB SPL | Cz to ipsilateral ear lobe; forehead: ground | IE |
| Wible et al (2004)[12] | Syllable /da/ | RE | M | 80 dB SPL | Cz: active; right mastoid: reference; forehead: ground | IE |
| Russo et al (2004)[1] | Syllable /da/ | RE | M | 80 dB SPL | Vertical mounting* | IE |
| Wible et al (2005)[13] | Syllable /da/ | RE | M | 80 dB SPL | Right mastoid, forehead and Cz | IE |
| Kouni et al (2006)[14] | Syllable /ma/ | RE | M | 80 dB SPL | Cz: active; mastoids: reference; forehead: ground | IE |
| Song et al (2006)[15] | Syllable /da/ | RE | M | 80.3 dB SPL | Vertical mounting | IE |
| Johnson et al (2007)[16] | Syllable /da/ | RE | M | 80 dB SPL | Vertical mounting | IE |
| Johnson et al (2008)[17] | Syllables /ga/, /da/ and /ba/ | RE | M | 83 dB SPL | Vertical mounting | IE |
| Song et al (2008)[18] | Syllable /da/ | RE | M | 80 dB SPL | Vertical mounting | IE |
| Russo et al (2008)[19] | Syllable /ya/ | RE | M | 60 dB SPL | Vertical mounting | IE |
| Johnson et al (2008)[20] | Syllable /da/ | RE | M | 80 dB SPL | Vertical mounting | IE |
| Russo et al (2009)[21] | Syllable /da/ with babble noise and white noise in background | RE | M | 80 dB SPL | Cz: scalp; ipsilateral ear lobe: reference; forehead: ground | IE |
| Dhar et al (2009)[22] | Syllable /da/ | RE | M | 80.3 dB SPL | Cz: scalp; ipsilateral ear lobe: reference; forehead: ground | IE |
| Hornickel et al (2009)[23] | Syllable /da/ with initial noise burst | BE (one at a time) | M | 80.3 dB SPL | Vertical mounting | IE |
| Skoe et al (2010)[24] | Syllable /da/ | BE | B and M | 60–80 dB SPL | Vertical mounting | IE |
| Wang et al (2010)[25] | “Danny” | BE | B | 70 dB SPL | Cz; left ear: reference; forehead: ground | IE |
| Krizman et al (2010)[26] | Syllable /da/ with initial noise burst | RE | M | 80.3 dB SPL | Vertical mounting | IE |
| Anderson et al (2011)[27] | Syllable /da/ with babble noise | BE | B | 80 dB SPL | Vertical mounting | IE |
| Vander Werff et al (2011)[28] | Syllable /da/ with initial noise burst | BE | B | 82 dB SPL | Cz: non-inverting; mastoids: inverting; forehead: ground | IE |
| Rana et al (2011)[29] | Syllable /da/ | BE | M | 80 dB SPL | Cz: non-inverting; test-ear mastoid: inverting; non-test-ear mastoid: ground | IE |
| Skoe et al (2011)[30] | Syllables /ba/ and /da/, pseudo-randomized | RE | M | 80 dB SPL | Vertical mounting | IE |
| Tierney et al (2011)[31] | Syllable /da/ with babble noise | BE | B | 80 dB SPL | Vertical mounting | IE |
| Parbery-Clark et al (2011)[32] | Syllable /da/ | BE | B | 80 dB SPL | Vertical mounting | IE |
| Song et al (2011)[33] | Syllable /da/ in silence and two noise conditions | RE | M | 80.3 dB SPL | Vertical mounting | IE |
| Strait et al (2011)[34] | Syllable /da/ | RE | M | 80 dB SPL | Vertical mounting | IE |
| Song et al (2011)[35] | Syllable /da/ 170 ms (silence and background noise) and syllable /da/ 40 ms | RE | M | 80.3 dB SPL | Vertical mounting | IE |
| Anderson et al (2012)[36] | Syllable /da/ | BE | B | 80 dB SPL | Vertical mounting | IE |
| Krizman et al (2012)[37] | Syllable /da/ | RE | M | 80 dB SPL | Vertical mounting | IE |
| Hornickel et al (2012)[38] | Syllable /da/ | RE | M | 80 dB SPL | Vertical mounting | IE |
| Hornickel et al (2012)[39] | Syllable /da/ in silence and in babble noise | RE | M | 80 dB SPL | Vertical mounting | IE |
| Song et al (2012)[40] | Syllable /da/ in silence and in babble noise | RE | M | 80.3 dB SPL | Vertical mounting | IE |
| Gonçalves (2013)[41] | Syllable /da/ | RE | B | 80 dB HL | Fz: active; Fpz: ground; left (M1) and right (M2) mastoids: reference | IE |
| Laroche et al (2013)[42] | Vowel /a/ | RE | M | E1: 77, 76 and 67 dB SPL in silence and 80, 79 and 76 dB SPL in noise; E2: 80 dB SPL in silence | Cz; right ear lobe (ipsilateral): reference; left ear lobe: ground | IE |
| Anderson et al (2013)[43] | Syllable /da/ with and without pink background noise | BE | B | 80.3 dB SPL | Vertical mounting | IE |
| Hornickel et al (2013)[44] | Syllable /da/ in silence and in babble noise | RE | M | 80 dB SPL | Vertical mounting | IE |
| Hornickel et al (2013)[45] | Syllables /ga/ and /ba/ | RE | M | 80 dB SPL | Vertical mounting | IE |
| Fujihira et al (2014)[46] | Syllable /da/ | RE | M | 70 dB SPL | Vertical mounting | IE |
| Ahadi et al (2014)[47] | Syllable /da/ with initial noise burst | RE, LE and BE | M and B | 80 dB SPL | Cz: non-inverting; ear lobes: inverting; forehead (Fpz): ground | IE |
| Ahadi et al (2014)[48] | Syllable /da/ with initial noise burst | BE | B | 80 dB SPL | Cz: non-inverting; ear lobes: inverting; forehead (Fpz): ground | IE |
| Strait et al (2014)[49] | Syllables /ga/ and /ba/ | BE and RE only | Ad: B; C: M | 80 dB SPL | Vertical mounting | IE |
| Jafari et al (2014)[50] | Initial noise burst and consonant-to-vowel formant transition of the syllable /da/ | RE | M | 80 dB SPL | Right mastoid, forehead and Cz | IE |
| Bellier et al (2015)[51] | Syllable /ba/ | BE | B | 80 dB SPL | Nose: reference; AFz: ground | IE and individual sound amplification device with 2 settings |
| Skoe et al (2015)[52] | Syllable /da/ | RE | M | 80 dB SPL | Vertical recording over the head, active electrode at the vertex | IE |

Table 1 (continued: patient state, stimulus delivery and sample)

| Article | Patient State | P | Presentation/Sampling Rate | No. of Stimuli | Stimulus Source | Sample |
| --- | --- | --- | --- | --- | --- | --- |
| Hayes et al (2003)[11] | Condition: WV | A in /da/ | 20 kHz | 3 blocks of 3,000 scans | Klatt-based synthesizer | C (8–12 yo) |
| Wible et al (2004)[12] | Condition: WV | A | 10 kHz | 6,000 scans | Digital speech synthesizer (SenSyn) | C (mean 11.1 yo) |
| Russo et al (2004)[1] | Condition: WV | A | 20 kHz | 3 blocks of 1,000 scans | Klatt-based synthesizer | C (8–12 yo) |
| Wible et al (2005)[13] | Condition: WV | Inverted | 20 kHz | 6,000 scans | Digital speech synthesizer (SenSyn) | C (mean 11.1 yo) |
| Kouni et al (2006)[14] | Position: seated | A | 11.1/s | 3,000 scans | Digital speech synthesizer | Ad (18–23 yo) |
| Song et al (2006)[15] | Condition: WV | A | 11.1 Hz | 3 blocks of 1,000 scans | Klatt-based synthesizer | C (8–12 yo) |
| Johnson et al (2007)[16] | Condition: WV | A | 10 kHz | 3 blocks of 2,000 scans | Klatt-based synthesizer | C (8–12 yo) |
| Johnson et al (2008)[17] | Condition: WV | A | 4.35/s | 4,000–4,100 scans | Klatt-based synthesizer | C (8–12 yo) |
| Song et al (2008)[18] | Condition: WV | A | 20 kHz | 3 blocks of 2,000 scans | Digital speech synthesizer | C (8–12 yo) |
| Russo et al (2008)[19] | Position: seated with parents | A | 20 kHz | 2 blocks of 1,200 scans | Recorded by a female native English speaker and manipulated with Praat | C |
| Johnson et al (2008)[20] | Condition: WV | A | 10.9/s | 3 blocks of 2,000 scans | Klatt-based synthesizer | C (3–5 yo) and (8–12 yo) |
| Russo et al (2009)[21] | Position: seated. Condition: WV | A | 10.9/s | 3 blocks of 2,000 scans | Klatt-based synthesizer | C |
| Dhar et al (2009)[22] | Condition: WV | A | 10.9 Hz | 2 blocks of 6,000 scans | Klatt-based synthesizer | Ad (19–30 yo) |
| Hornickel et al (2009)[23] | Condition: WV | A | 10.9 Hz | 3 blocks of 2,000 scans | Klatt-based synthesizer | Ad (21–30 yo) |
| Skoe et al (2010)[24] | Condition: subject kept relaxed and quiet | A | 6,000–20,000 Hz | 2 or more subaverages of 2,000–3,000 scans | Recording using programs such as Praat or Adobe Audition | Ad or C |
| Wang et al (2010)[25] | Position: seated 3 m from an LCD projector for visual stimulus | A | 1.67/s | 3,240 scans | Female voice | Ad (20–30 yo) |
| Krizman et al (2010)[26] | Condition: WV | A | 15.4 Hz (quick), 10.9 Hz (standard) and 6.9 Hz (slow) | 3,000 scans | Klatt-based synthesizer | Ad (21–33 yo) |
| Anderson et al (2011)[27] | Condition: WV | A | 20 kHz | 6,000 scans | Klatt-based synthesizer | Ad (60–73 yo) |
| Vander Werff et al (2011)[28] | Position: seated. Condition: relaxed | A | 11.1/s | 2 blocks of 1,500 scans | Obtained from the Auditory Neuroscience Laboratory of Nina Kraus and colleagues | Ad (20–26 yo) and El (61–78 yo) |
| Rana et al (2011)[29] | Position: reclined. Condition: encouraged to relax and sleep | A | 9.1/s | 3,000 scans | Klatt-based synthesizer | Ad (18–23 yo) |
| Skoe et al (2011)[30] | Position: seated. Condition: WV | A | 20 kHz | 3,000 scans | Parallel formant synthesizer | C |
| Tierney et al (2011)[31] | Position: seated. Condition: WV | A | 20 kHz | 3,000 scans | Klatt-based synthesizer | Ad (18–32 yo) |
| Parbery-Clark et al (2011)[32] | Condition: WV | A | 3.95/s | 6,000 scans | Recorded in a laboratory | Ad (18–32 yo and 46–65 yo) |
| Song et al (2011)[33] | Condition: WV | A | 20 kHz | 6,300 scans | Klatt-based synthesizer | Ad (20–31 yo) |
| Strait et al (2011)[34] | Position: seated. Condition: WV | A | 20 kHz | 700 scans | Klatt-based synthesizer | C (8–13 yo) |
| Song et al (2011)[35] | Condition: WV | A | E1: 20 kHz; E2: 10 kHz | E1: 6,300 scans; E2: 2 blocks of 3,000 scans | Klatt-based synthesizer | Ad (E1: 19–31 yo; E2: 19–36 yo) |
| Anderson et al (2012)[36] | Position: seated. Condition: WV | A | 3.95 Hz | 6,000 scans | Klatt-based synthesizer | Ad (18–30 yo) and El (60–67 yo) |
| Krizman et al (2012)[37] | Position: seated. Condition: WV | A | 10.9 Hz | 6,000 scans | Klatt-based synthesizer | Ad (22–29 yo) |
| Hornickel et al (2012)[38] | Condition: WV | A | 4.3 Hz | 3 blocks of 2,000 scans | Klatt-based synthesizer | C (8–13 yo) |
| Hornickel et al (2012)[39] | Position: seated. Condition: WV | A | 20 kHz | 2 repetitions of 3,000 scans | Klatt-based synthesizer | C (8–13 yo) |
| Song et al (2012)[40] | Condition: WV | A | 4.35 Hz | 6,300 scans | Klatt-based synthesizer | Ad (19–35 yo) |
| Gonçalves (2013)[41] | Position: seated | A | 11.1/s | 3 blocks of 1,000 scans | Obtained from the Auditory Neuroscience Laboratory of Nina Kraus and colleagues | C (7–11 yo) |
| Laroche et al (2013)[42] | Position: seated. Condition: WV | A | 3/s | 3,000 scans | Klatt-based synthesizer | Ad (18–33 yo) and (25–33 yo) |
| Anderson et al (2013)[43] | Condition: WV | A | 10.9 Hz | 3 blocks of 3,000 scans | Klatt-based synthesizer | Ad (61–68 yo) |
| Hornickel et al (2013)[44] | Position: seated. Condition: WV | A | 20 kHz | 6,000 scans | Klatt-based synthesizer | C (6–14 yo) |
| Hornickel et al (2013)[45] | Position: seated. Condition: WV | A | 4.35/s | 6,000 scans | Klatt-based synthesizer | C (6–13 yo) |
| Fujihira et al (2014)[46] | Position: seated. Condition: relaxed | A | 10 kHz | 2,000 scans | Klatt-based synthesizer | Ad (61–73 yo) |
| Ahadi et al (2014)[47] | Position: seated | A | 10.9/s | 6,000 (2 blocks of 3,000 scans) | Performed using the BioMARK module | Ad (20–28 yo) |
| Ahadi et al (2014)[48] | Position: seated | A | 10.9/s | 6,000 (2 blocks of 3,000 scans) | Performed using the BioMARK module | Ad (20–28 yo) |
| Strait et al (2014)[49] | Condition: WV | A | 20 kHz | 700 scans for Ad and 850 for C | Klatt-based synthesizer | C and Ad |
| Jafari et al (2014)[50] | Position: seated | A | 10.9/s | 2 blocks of 2,000 scans | Obtained from the Auditory Neuroscience Laboratory of Nina Kraus and colleagues | C (8–12 yo) |
| Bellier et al (2015)[51] | Position: seated. Condition: WV | A | 2.78/s | 3,000 scans | Recorded by a female French speaker | Ad (22–25 yo) |
| Skoe et al (2015)[52] | Ad: seated, WV; C: seated on the parents' lap, distracted | A | 10.9/s | 6,000 scans | Klatt-based synthesizer | Ad and C (0.25–72.4 yo) |

Abbreviations: A, alternating; Ad, adults; B, binaural; BE, both ears; C, children; E1, experiment 1; E2, experiment 2; El, elderly; IE, in-ear headphone; LE, left ear; M, monaural; P, polarity; RE, right ear; WV, watching a video; yo, years old.

*Vertical mounting: active electrode at Cz, reference electrode on the ear lobe ipsilateral to the stimulated ear, ground electrode on the forehead.


A few papers were excluded because their methodology was unclear, for example, as to whether the parameters described referred to the speech stimulus or to another type of stimulus, such as the click; hence, only articles that clearly stated the resources and parameters used in their studies were selected.



Discussion

According to the results, the stimuli /da/, /ga/, /ma/, /ba/, /ya/, the word “Danny” and the vowel /a/ were used, the syllable /da/ being the most frequent. This preference arises because /da/ is an acoustically complex syllable that begins with a plosive phoneme and carries considerable phonetic information. In addition, it is susceptible to degradation by background noise, both in typical populations and in clinical populations who undergo the examination as an aid to diagnosis.[1] The sound also comprises a transient segment followed by a sustained segment and is therefore quite similar to a click followed by a tone. Due to these similarities, the initial (transient) response resembles the click-evoked BAEP, and the sustained response to the vowel resembles the tone-evoked FFR.[24]

Most of the selected studies tested only the right ear,[1] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21] [22] [23] [26] [30] [33] [34] [35] [37] [38] [39] [40] [41] [42] [44] [45] [46] [47] [49] [50] [52] two studies presented stimuli to the right and left ears separately,[23] [47] and the remainder used binaural stimulation. Stimulation of the left and right ears elicits similar, but not identical, responses.[53] Responses elicited by right-ear stimulation can show shorter latencies than those obtained in the left ear, since the left hemisphere is specialized for processing linguistic stimuli.[23] Moreover, a sound heard by both ears is known to be perceived as up to 6 dB louder than the same sound presented at the same intensity to only one ear.

Thus, binaural presentation is recommended for adults, not only because it elicits larger and more robust responses, but also because it is closer to everyday listening, since we usually hear with both ears.[24] This recommendation, however, was not consistently followed in the studies reviewed: among the articles with adult samples, most used monaural stimulation.

Monaural stimulation is recommended for children, for people with asymmetric hearing losses, or when the individual must attend to another sound,[24] and indeed it was used in most studies involving children.[1] [11] [12] [13] [15] [16] [17] [18] [19] [20] [21] [30] [34] [38] [39] [44] [45] [49] [50] [52]

Regarding the intensity of speech stimulus presentation, most studies presented the stimulus at ∼ 80 dB SPL,[1] [11] [12] [13] [15] [16] [17] [18] [20] [21] [22] [23] [24] [26] [27] [28] [29] [30] [31] [32] [33] [34] [35] [36] [37] [38] [39] [40] [42] [43] [44] [45] [47] [48] [49] [50] [51] [52] which can be explained by the fact that normal conversation usually ranges from 60 to 80 dB SPL.[24] In addition, the latency of the speech-evoked BAEP increases as intensity decreases, as also occurs with the click-evoked BAEP.[54] Only one study used an intensity of 80 dB HL,[41] and the choice of this intensity was not justified.

Considerable variability was observed in the placement of electrodes, and several montages were reported. One that drew our attention was the so-called vertical mounting,[1] [15] [16] [17] [18] [19] [20] [23] [24] [26] [27] [30] [31] [32] [33] [34] [35] [36] [37] [38] [39] [40] [43] [44] [45] [46] [49] in which the active electrode is positioned at Cz, the reference electrode on the ear lobe ipsilateral to the stimulated ear, and the ground electrode on the forehead. The preference of many authors for placing the reference electrode on the ear lobe instead of the mastoid[1] [11] [15] [16] [17] [18] [19] [20] [21] [22] [23] [24] [26] [27] [30] [31] [32] [33] [34] [35] [36] [37] [38] [39] [40] [42] [43] [44] [45] [46] [47] [48] [49] is believed to reflect the fact that the ear lobe picks up fewer artifacts from bone vibration.[24]

As is well known, supra-aural headphones increase the chance of stimulus artifacts; one way to avoid them is to use in-ear (insert) headphones.[53] All of the articles were found to follow this recommendation.

The state of the patients during the examination was similar across many articles: individuals were encouraged to remain relaxed, or even to sleep, to reduce muscle artifacts. To rule out differences between the potentials recorded in sleep and in wakefulness, several studies[1] [11] [12] [13] [15] [16] [17] [18] [20] [21] [22] [23] [25] [26] [27] [30] [31] [32] [33] [34] [35] [36] [37] [38] [39] [40] [42] [43] [44] [45] [49] [51] [52] encouraged their patients to remain awake during the test,[24] having them watch videos or read books. In some of these studies, the video soundtrack was played at low intensity (around 40 dB SPL), so that it could be heard by the non-test ear without masking the auditory stimulus. In cases of binaural stimulation, many studies provided subtitles for the recordings instead.

Regarding polarity, most studies used alternating polarity, which is also the most common choice in clinical practice, as it is a standard way of avoiding artifactual responses.[53] Positive and negative polarities elicit similar speech-evoked BAEP results, since the ear is insensitive to phase changes; consequently, when the two are averaged, artifactual components cancel out.[53]
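
To make the cancellation explicit, here is a minimal simulation (illustrative Python; the signal models are toy assumptions, not recordings): components that invert with stimulus polarity, such as the stimulus artifact and the cochlear microphonic, cancel in the average of the two polarities, while the phase-insensitive neural response survives.

```python
import numpy as np

fs = 20_000                                     # sampling rate (Hz), as in most studies reviewed
t = np.arange(0, 0.040, 1 / fs)                 # 40 ms analysis window

stimulus = np.sin(2 * np.pi * 700 * t)                  # toy carrier standing in for /da/
envelope_response = 0.5 * np.sin(2 * np.pi * 100 * t)   # phase-insensitive neural component

def record(polarity: int) -> np.ndarray:
    """Simulated recording: the artifact follows stimulus polarity, the response does not."""
    artifact = 0.3 * polarity * stimulus        # flips sign with polarity
    noise = 0.05 * np.random.randn(t.size)
    return envelope_response + artifact + noise

avg = 0.5 * (record(+1) + record(-1))           # alternating-polarity average

# The artifact term cancels; what remains approximates the envelope response.
residual = np.abs(np.corrcoef(avg, stimulus)[0, 1])
print(f"correlation with stimulus after averaging: {residual:.3f}")
```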

The presentation and sampling rates were grouped in a single column of the table because many studies did not report the two values separately. The presentation rate depends on the duration of each stimulus and on the silent interval between the offset of one stimulus and the onset of the next.[24] This was the parameter with the greatest variability among the studies analyzed, being reported either in stimuli per second or in hertz (Hz). Most studies used 10.9 stimuli per second,[20] [21] [41] [48] [50] [52] and the values ranged from 1.67 to 11.1/s. The sampling rate, which determines how many times per second the neural signal is digitized by the recording system,[24] was most often 20 kHz;[1] [11] [13] [18] [19] [27] [30] [31] [33] [34] [35] [39] [44] [49] the reported values ranged from 4.3 Hz to 20 kHz, although the lowest values in hertz most likely refer to presentation rates rather than true sampling rates.
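
As a worked example of the distinction between the two rates (the interstimulus interval below is an assumed value, chosen to reproduce the most commonly reported presentation rate):

```python
stim_duration_ms = 40.0        # 40 ms /da/, the most common stimulus
isi_ms = 51.7                  # assumed silent gap between offset and the next onset

period_ms = stim_duration_ms + isi_ms
presentation_rate = 1000.0 / period_ms                  # stimuli per second
print(f"presentation rate: {presentation_rate:.1f}/s")  # ~10.9/s

sampling_rate_hz = 20_000      # digitization rate of the recording system
window_ms = 60.0               # e.g., a recording epoch spanning the response
samples_per_epoch = int(sampling_rate_hz * window_ms / 1000)
print(f"samples per epoch: {samples_per_epoch}")        # 1,200 points at 20 kHz
```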

As for the number of stimuli, it is well established that at least 1,000 to 2,000 scans are required to record BAEP waves. The higher the number of scans per polarity, the greater the possibility of creating subaverages to assess reproducibility. In addition, subtle responses, or small differences within a group, can only be observed with a larger number of stimuli and would not emerge without the additional scans.[24] In the results, only two articles used fewer than 1,000 scans,[34] [49] and neither justified this choice.
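
The rationale can be demonstrated with a simulated averaging experiment (illustrative Python; the "response" and noise level are arbitrary assumptions): the evoked response adds coherently across sweeps while noise adds incoherently, so each quadrupling of scans buys roughly 6 dB of signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 20_000, 0.040
t = np.arange(0, dur, 1 / fs)
response = np.sin(2 * np.pi * 100 * t)           # toy evoked response

def snr_after_averaging(n_sweeps: int) -> float:
    """SNR (dB) of the grand average of n simulated noisy sweeps."""
    sweeps = response + 3.0 * rng.standard_normal((n_sweeps, t.size))
    avg = sweeps.mean(axis=0)
    noise = avg - response
    return 10 * np.log10(np.mean(response**2) / np.mean(noise**2))

for n in (1000, 2000, 6000):                     # scan counts typical of Table 1
    print(f"{n} sweeps -> SNR ~ {snr_after_averaging(n):.1f} dB")
# Averaged noise power falls in proportion to n, so every 4x in sweeps adds ~6 dB.
```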

As for the source of the stimuli, a natural voice can be recorded for stimulation using dedicated software. Both natural and synthetic stimuli must be created at a high digitization rate (> 20 kHz).[24]

With natural stimuli, however, it is difficult to determine the extent to which specific physical characteristics are actually represented at the subcortical level, and such control is paramount when multiple stimuli are compared along a single dimension. In these cases, researchers use speech-synthesis software, such as the Klatt synthesizer,[55] to create stimuli with precisely specified temporal characteristics and duration.[24] This software, which allows the user to specify stimulus parameters such as formant frequencies and duration,[55] was widely used in the articles found.[1] [11] [15] [16] [17] [20] [21] [22] [23] [26] [27] [29] [31] [33] [34] [35] [36] [37] [38] [39] [40] [42] [43] [44] [45] [46] [49] [52]
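
The sketch below is not the Klatt synthesizer itself, but a minimal source-filter illustration of the same idea: a periodic glottal source shaped by explicitly specified formant resonances, generated at a 20 kHz digitization rate so that every acoustic parameter is known exactly. All values (fundamental frequency, formant frequencies, bandwidths) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lfilter

fs = 20_000                          # digitization rate (> 20 kHz recommended)
dur, f0 = 0.040, 100                 # 40 ms stimulus, 100 Hz fundamental
t = np.arange(0, dur, 1 / fs)

# Impulse-train glottal source at the fundamental frequency.
source = np.zeros(t.size)
source[::fs // f0] = 1.0

def resonator(x: np.ndarray, freq: float, bw: float) -> np.ndarray:
    """Second-order digital resonator approximating one formant."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2 * r * np.cos(theta), r**2]
    return lfilter([1 - r], a, x)    # crude gain normalization

# Cascade of illustrative vowel formants (roughly F1-F3 of an /a/-like sound).
signal = source
for freq, bw in [(720, 90), (1240, 110), (2500, 170)]:
    signal = resonator(signal, freq, bw)
signal /= np.max(np.abs(signal))     # normalize amplitude before presentation
```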

Although most of the studies performed with children preferred monaural stimulation, as mentioned above, the distribution was more balanced for adults, with twelve studies using binaural stimulation. No differences were found between the parameters used for the two types of sample (adults and children).



Final Comments

According to the analysis of the compiled studies and the findings for each parameter individually, the most used parameters were: stimulation with the syllable /da/; monaural presentation, most often to the right ear; intensity of 80 dB SPL; vertical placement of electrodes, with preference for the ipsilateral ear lobe over the mastoid as the reference site; use of in-ear headphones; patient seated, awake and distracted; alternating polarity; the Klatt speech synthesizer for stimulus generation; presentation rate of 10.9 stimuli per second; and sampling rate of 20 kHz.

The topic addressed in this systematic review is relatively recent, and the number of studies, especially in Brazil, is still growing. Hence, new studies are needed to establish the best parameters for each type of sample. Even so, this review is significant in that it encourages the use of the procedure in clinical practice and documents the most frequently used choices for each parameter.



No conflict of interest has been declared by the author(s).

  • References

  • 1 Russo N, Nicol T, Musacchia G, Kraus N. Brainstem responses to speech syllables. Clin Neurophysiol 2004; 115 (09) 2021-2030
  • 2 Filippini R. Eficácia do treinamento auditivo por meio do potencial evocado para sons complexos nos transtornos de audição e linguagem [dissertation]. São Paulo: Faculdade de Medicina da Universidade de São Paulo; 2011
  • 3 Rocha CN, Filippini R, Moreira RR, Neves IF, Schochat E. Brainstem auditory evoked potential with speech stimulus. Pro Fono 2010; 22 (04) 479-484
  • 4 Kraus N, Nicol T. Aggregate neural responses to speech sounds in the central auditory system. Speech Commun 2003; 41 (01) 35-47
  • 5 Johnson KL, Nicol T, Kraus N. Brain stem response to speech: a biological marker of auditory processing. Ear Hear 2005; 26 (05) 424-434
  • 6 Sanfins MD, Borges LR, Ubiali T, Colella-Santos MF. Speech auditory brainstem response (speech ABR) in the differential diagnosis of scholastic difficulties. Rev Bras Otorrinolaringol (Engl Ed) 2017; 83 (01) 112-116
  • 7 Binder JR, Frost JA, Hammeke TA, et al. Human temporal lobe activation by speech and nonspeech sounds. Cereb Cortex 2000; 10 (05) 512-528
  • 8 Abrams D, Kraus N. Auditory pathway representation of speech sounds in humans. In: Katz J, Hood L, Burkard R, Medwetsky L, eds. Handbook of Clinical Audiology. 6th ed. Alphen aan den Rijn, Netherlands: Wolters Kluwer; 2009: 611-626
  • 9 Gonçalves IC. Potencial evocado auditivo de tronco encefálico com estímulo de fala em crianças com distúrbio fonológico [dissertation]. São Paulo: Faculdade de Medicina da Universidade de São Paulo; 2009
  • 10 Filippini R, Schochat E. Potenciais evocados auditivos de tronco encefálico com estímulo de fala no transtorno do processamento auditivo. Rev Bras Otorrinolaringol (Engl Ed) 2009; 75 (03) 449-455
  • 11 Hayes EA, Warrier CM, Nicol TG, Zecker SG, Kraus N. Neural plasticity following auditory training in children with learning problems. Clin Neurophysiol 2003; 114 (04) 673-684
  • 12 Wible B, Nicol T, Kraus N. Atypical brainstem representation of onset and formant structure of speech sounds in children with language-based learning problems. Biol Psychol 2004; 67 (03) 299-317
  • 13 Wible B, Nicol T, Kraus N. Correlation between brainstem and cortical auditory processes in normal and language-impaired children. Brain 2005; 128 (Pt 2): 417-423
  • 14 Kouni SN, Papadeas ES, Varakis IN, Kouvelas HD, Koutsojannis CM. Auditory brainstem responses in dyslexia: comparison between acoustic click and verbal stimulus events. J Otolaryngol 2006; 35 (05) 305-309
  • 15 Song JH, Banai K, Russo NM, Kraus N. On the relationship between speech- and nonspeech-evoked auditory brainstem responses. Audiol Neurootol 2006; 11 (04) 233-241
  • 16 Johnson KL, Nicol TG, Zecker SG, Kraus N. Auditory brainstem correlates of perceptual timing deficits. J Cogn Neurosci 2007; 19 (03) 376-385
  • 17 Johnson KL, Nicol T, Zecker SG, Bradlow AR, Skoe E, Kraus N. Brainstem encoding of voiced consonant–vowel stop syllables. Clin Neurophysiol 2008; 119 (11) 2623-2635
  • 18 Song JH, Banai K, Kraus N. Brainstem timing deficits in children with learning impairment may result from corticofugal origins. Audiol Neurootol 2008; 13 (05) 335-344
  • 19 Russo NM, Skoe E, Trommer B, et al. Deficient brainstem encoding of pitch in children with Autism Spectrum Disorders. Clin Neurophysiol 2008; 119 (08) 1720-1731
  • 20 Johnson KL, Nicol T, Zecker SG, Kraus N. Developmental plasticity in the human auditory brainstem. J Neurosci 2008; 28 (15) 4000-4007
  • 21 Russo N, Nicol T, Trommer B, Zecker S, Kraus N. Brainstem transcription of speech is disrupted in children with autism spectrum disorders. Dev Sci 2009; 12 (04) 557-567
  • 22 Dhar S, Abel R, Hornickel J, et al. Exploring the relationship between physiological measures of cochlear and brainstem function. Clin Neurophysiol 2009; 120 (05) 959-966
  • 23 Hornickel J, Skoe E, Kraus N. Subcortical laterality of speech encoding. Audiol Neurootol 2009; 14 (03) 198-207
  • 24 Skoe E, Kraus N. Auditory brain stem response to complex sounds: a tutorial. Ear Hear 2010; 31 (03) 302-324
  • 25 Wang JQ, Nicol T, Skoe E, Sams M, Kraus N. Emotion and the auditory brainstem response to speech. Neurosci Lett 2010; 469 (03) 319-323
  • 26 Krizman JL, Skoe E, Kraus N. Stimulus rate and subcortical auditory processing of speech. Audiol Neurootol 2010; 15 (05) 332-342
  • 27 Anderson S, Parbery-Clark A, Yi H-G, Kraus N. A neural basis of speech-in-noise perception in older adults. Ear Hear 2011; 32 (06) 750-757
  • 28 Vander Werff KR, Burns KS. Brain stem responses to speech in younger and older adults. Ear Hear 2011; 32 (02) 168-180
  • 29 Rana B, Barman A. Correlation between speech-evoked auditory brainstem responses and transient evoked otoacoustic emissions. J Laryngol Otol 2011; 125 (09) 911-916
  • 30 Skoe E, Nicol T, Kraus N. Cross-phaseogram: objective neural index of speech sound differentiation. J Neurosci Methods 2011; 196 (02) 308-317
  • 31 Tierney A, Parbery-Clark A, Skoe E, Kraus N. Frequency-dependent effects of background noise on subcortical response timing. Hear Res 2011; 282 (1-2): 145-150
  • 32 Parbery-Clark A, Anderson S, Hittner E, Kraus N. Musical experience offsets age-related delays in neural timing. Neurobiol Aging 2012; 33 (07) 1483.e1-1483.e4
  • 33 Song JH, Skoe E, Banai K, Kraus N. Perception of speech in noise: neural correlates. J Cogn Neurosci 2011; 23 (09) 2268-2279
  • 34 Strait DL, Hornickel J, Kraus N. Subcortical processing of speech regularities underlies reading and music aptitude in children. Behav Brain Funct 2011; 7: 44
  • 35 Song JH, Nicol T, Kraus N. Test-retest reliability of the speech-evoked auditory brainstem response. Clin Neurophysiol 2011; 122 (02) 346-355
  • 36 Anderson S, Parbery-Clark A, White-Schwoch T, Kraus N. Aging affects neural precision of speech encoding. J Neurosci 2012; 32 (41) 14156-14164
  • 37 Krizman J, Skoe E, Kraus N. Sex differences in auditory subcortical function. Clin Neurophysiol 2012; 123 (03) 590-597
  • 38 Hornickel J, Anderson S, Skoe E, Yi H-G, Kraus N. Subcortical representation of speech fine structure relates to reading ability. Neuroreport 2012; 23 (01) 6-9
  • 39 Hornickel J, Knowles E, Kraus N. Test-retest consistency of speech-evoked auditory brainstem responses in typically-developing children. Hear Res 2012; 284 (1-2): 52-58
  • 40 Song JH, Skoe E, Banai K, Kraus N. Training to improve hearing speech in noise: biological mechanisms. Cereb Cortex 2012; 22 (05) 1180-1190
  • 41 Gonçalves IC. Aspectos audiológicos da gagueira: evidências comportamentais e eletrofisiológicas [thesis]. São Paulo: Faculdade de Medicina, Universidade de São Paulo; 2013
  • 42 Laroche M, Dajani HR, Prévost F, Marcoux AM. Brainstem auditory responses to resolved and unresolved harmonics of a synthetic vowel in quiet and noise. Ear Hear 2013; 34 (01) 63-74
  • 43 Anderson S, Parbery-Clark A, White-Schwoch T, Drehobl S, Kraus N. Effects of hearing loss on the subcortical representation of speech cues. J Acoust Soc Am 2013; 133 (05) 3030-3038
  • 44 Hornickel J, Lin D, Kraus N. Speech-evoked auditory brainstem responses reflect familial and cognitive influences. Dev Sci 2013; 16 (01) 101-110
  • 45 Hornickel J, Kraus N. Unstable representation of sound: a biological marker of dyslexia. J Neurosci 2013; 33 (08) 3500-3504
  • 46 Fujihira H, Shiraishi K. Correlations between word intelligibility under reverberation and speech auditory brainstem responses in elderly listeners. Clin Neurophysiol 2015; 126 (01) 96-102
  • 47 Ahadi M, Pourbakht A, Jafari AH, Jalaie S. Effects of stimulus presentation mode and subcortical laterality in speech-evoked auditory brainstem responses. Int J Audiol 2014; 53 (04) 243-249
  • 48 Ahadi M, Pourbakht A, Jafari AH, Shirjian Z, Jafarpisheh AS. Gender disparity in subcortical encoding of binaurally presented speech stimuli: an auditory evoked potentials study. Auris Nasus Larynx 2014; 41 (03) 239-243
  • 49 Strait DL, O'Connell S, Parbery-Clark A, Kraus N. Musicians' enhanced neural differentiation of speech sounds arises early in life: developmental evidence from ages 3 to 30. Cereb Cortex 2014; 24 (09) 2512-2521
  • 50 Jafari Z, Malayeri S, Rostami R. Subcortical encoding of speech cues in children with attention deficit hyperactivity disorder. Clin Neurophysiol 2015; 126 (02) 325-332
  • 51 Bellier L, Veuillet E, Vesson JF, Bouchet P, Caclin A, Thai-Van H. Speech Auditory Brainstem Response through hearing aid stimulation. Hear Res 2015; 325: 49-54
  • 52 Skoe E, Krizman J, Anderson S, Kraus N. Stability and plasticity of auditory brainstem function across the lifespan. Cereb Cortex 2015; 25 (06) 1415-1426
  • 53 Akhoun I, Moulin A, Jeanvoine A, et al. Speech auditory brainstem response (speech ABR) characteristics depending on recording conditions, and hearing status: an experimental parametric study. J Neurosci Methods 2008; 175 (02) 196-205
  • 54 Akhoun I, Gallégo S, Moulin A, et al. The temporal relationship between speech auditory brainstem responses and the acoustic pattern of the phoneme /ba/ in normal-hearing adults. Clin Neurophysiol 2008; 119 (04) 922-933
  • 55 Klatt D. Software for cascade/parallel formant synthesizer. J Acoust Soc Am 1976; 67 (03) 971-975


