Open Access
CC BY-NC-ND 4.0 · Laryngorhinootologie 2023; 102(S 01): S3-S11
DOI: 10.1055/a-1973-5087
Review

Hearing and Cognition in Childhood

Andrej Kral
1   Institute of AudioNeuroTechnology (VIANNA) & Department of Experimental Otology, Cluster of Excellence Hearing4All, Hannover Medical School (Head of Department and Institute: Prof. Dr. A. Kral) & Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, Australia

Abstract

The human brain shows extensive postnatal development of the cerebral cortex. This development is profoundly altered by the absence of auditory input: the development of cortical synapses in the auditory system is delayed and their degradation is increased. Recent work shows that the synapses responsible for corticocortical processing of stimuli, and for embedding them into multisensory interactions and cognition, are particularly affected. Since the brain is extensively and reciprocally interconnected, congenital deafness manifests not only in deficits of auditory processing but also in cognitive (non-auditory) functions, which are affected differently between individuals. Therapy of deafness in childhood therefore requires individualized approaches.


1. Introduction

Pediatric hearing loss has far-reaching consequences for brain development [1], because the cerebral cortex develops depending on sensory and motor (i. e. active) experiences [2]. Children learn sensorimotor skills and access the environment using an internal (mental) model of the environment. Conscious human experience takes place within this model, and the model is permanently aligned with the environment via the sensory organs.

An essential tool in this process is human language, which is laid down in the first months of life [3] [4]. Language creates a specifically human, abstract level of representation through which the environment can be mentally anchored and processed. Mental processes make use of these linguistic representations; in this way, language shapes our thoughts (the so-called Sapir-Whorf hypothesis) [5] [6]. Linguistically defined categories indeed also influence elementary sensory perception, e. g., early visual processing of such basic features as color [7] [8] [9] [10] [11].

Language, as an essential component of cognition, interacts with other cognitive functions. Sensory systems interact with each other to generate multisensory representations that further feed cognition. Moreover, sensory systems themselves have a cardinal function for cognition: each generates a specific form of representation and provides the brain with a high-resolution subprocessor and storage unit [12] that cognition can make use of.

Sensory impairments in early development, especially congenital hearing impairment, consequently influence the development of cognition both directly and indirectly (through effects on language) [13]. This review discusses such influences and gives an overview of the consequences of congenital hearing loss for brain maturation and the development of cognitive functions. The consequences of congenital hearing loss are specific: complete sensory deprivation ("total deprivation") differs in its consequences from partial deprivation (with residual hearing or a period of residual hearing) [1].


2. Proximal effects of deafness: speech and hearing

Cochlear function normally begins in midgestation [14] [15] [16], and from this time onward the development of the human brain can be shaped by auditory activity. There is an ontogenetic difference between subcortical and cortical structures: development typically proceeds from peripheral to central, so the peripheral structures mature earlier than the central ones. Correspondingly, the individual functions of the auditory system develop from simpler to more complex ones in a nested, mutually dependent, step-by-step sequence [17].

While the brainstem largely completes its development in utero, cortical development is only at an early stage at birth and is not completed until adulthood. For example, most myelination in the brainstem and thalamus is largely completed at birth [18], while myelination in the cortex continues into adulthood [19] [20]. However, it is mainly synaptic function that defines neural processing. Synaptogenesis is largely completed in the brainstem at birth, whereas in the cortex it is just beginning around the time of birth and, in humans, is not completed before the age of 20 (human cortex: [21], cat cortex: [22], overview in [1] [23]).

Consequently, the structuring influence of sensory experience on the ontogenesis of the auditory system is mainly observed in the cerebral cortex. In congenitally deaf cats, the influence of experience on cortical synaptic development could be studied precisely: in the absence of auditory experience, synaptogenesis was delayed and, ultimately, functional synapses in the auditory cortex were extensively lost [22] (reviewed in [24]). This process is closely related to the critical period of neuronal plasticity for cochlear implantation in the same animal model [25] [26], demonstrating that sensitive periods are closed by synaptic degradation and thereby acquire their critical character (review in [1] [2]).

Cortical synapses can be divided into two groups: (i) thalamocortical synapses, which mediate the sensory input to the cortex and have a strong influence on cortical activity, and (ii) corticocortical synapses, which mediate the actual integration of sensory input into ongoing cortical processing and are thus responsible for the integrative function of the cortex (see below). The latter synapses have a weaker individual influence on activity, but they act through their sheer number and through their role in so-called recurrent processing (see below).
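How many individually weak synapses can nevertheless shape cortical activity through recurrence can be illustrated with a minimal rate model. The following Python sketch (purely illustrative parameters, not a model from the cited studies) compares the steady-state response of a small network with and without its weak recurrent connections:

```python
import numpy as np

# Minimal sketch (illustrative parameters): a strong thalamocortical
# (feedforward) input vs. many individually weak recurrent
# (corticocortical) synapses whose effect accumulates through recurrence.
rng = np.random.default_rng(0)

n = 100
w_ff = 1.0                                   # strong feedforward weight
W_rec = rng.uniform(0.0, 0.018, (n, n))      # many weak recurrent weights
np.fill_diagonal(W_rec, 0.0)                 # no self-connections

x = 1.0                                      # constant sensory drive
r = np.zeros(n)                              # firing rates
for _ in range(200):                         # relax to steady state
    r = np.tanh(w_ff * x + W_rec @ r)

print(f"mean rate with recurrence: {r.mean():.2f}")        # ~0.95
print(f"rate without recurrence:   {np.tanh(w_ff * x):.2f}")  # ~0.76
```

Although each recurrent weight here is two orders of magnitude smaller than the feedforward weight, their summed, re-entrant effect substantially raises the steady-state response; it is this collective contribution that degrades when corticocortical synapses are lost.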

If synaptic development plays a crucial role in closing critical periods, the question arises which synapses are actually lost in deafness: all of them, randomly distributed; predominantly the thalamocortical ones (which form the sensory input to the cortex and thus primarily mediate the detection of auditory stimuli); or the corticocortical ones that are specific for subsequent cortical processing (and thus enable discrimination and pattern formation)? Until recently, this had not been clarified in either the visual or the auditory system.

If thalamocortical synapses are lost, the responsiveness of the auditory cortex is primarily impaired. If corticocortical synapses are lost, the discriminative and pattern-forming properties of the auditory cortex are primarily impaired. For a pattern to emerge, an object or category must be determined from the sensory (acoustic) properties that are represented according to biological meaning in the primary auditory areas [1] [27]. An auditory object is defined as a neural representation of a defined acoustic pattern that can be the subject of foreground-background discrimination [1]. For this purpose, features in the stimulus that are distinctive (discriminative) for the object must be recognized, and variations in non-distinctive features ignored. Categorization is then the processing step that generates an auditory object (the category) from concrete acoustic events. The resulting categories often do not exist in the real world as such; only concrete examples of them do. William of Ockham used the term "rose" for illustration: it describes a category of flowers, formed on the basis of examples from the real world, yet the category as such does not exist in the environment (the problem of universals in philosophy).

Examples of auditory categories are a door falling shut, a bottle falling over, or the ringing of a telephone. Different events can have correspondingly different acoustic (spectral) properties (in the case of the door: an office door or a front door) and still be identified as the same category of event (a door falling shut). Phonological units are also such categories, formed from the phonetics of speech by abstraction. For example, three formants of a periodic sound event define a vowel [28]. Variations of sound properties within one category, which normally occur even in the same speaker, are ignored: we always hear the same phoneme. Phonemes are further grouped into syllables, morphemes, words, and utterances. Thus, one categorization is nested in the next, forming the hierarchical system of language. Corresponding to this hierarchical structure of language, a cortical circuit can be defined in which individual areas (such as Broca's and Wernicke's areas) are assigned to different language functions [29] [30] [31].
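Categorization of this kind can be made concrete with a toy example. The following Python sketch (not from the cited studies; the centroid frequencies are rough, textbook-style averages assumed only for the demonstration) assigns a vowel category from its first two formants by a nearest-centroid rule:

```python
from math import hypot

# Toy sketch of phoneme categorization from formants. The (F1, F2)
# centroids in Hz are rough illustrative averages, not measured data.
VOWEL_CENTROIDS = {
    "i": (270, 2290),
    "a": (730, 1090),
    "u": (300, 870),
}

def categorize(f1: float, f2: float) -> str:
    """Assign the vowel whose centroid is nearest in (F1, F2) space."""
    return min(VOWEL_CENTROIDS,
               key=lambda v: hypot(f1 - VOWEL_CENTROIDS[v][0],
                                   f2 - VOWEL_CENTROIDS[v][1]))

# Acoustically different tokens fall into the same category: the
# non-distinctive variation is ignored, as described in the text.
print(categorize(700, 1100))  # a
print(categorize(760, 1050))  # a  (different acoustics, same phoneme)
print(categorize(280, 2200))  # i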

Interestingly, brain development in the affected areas occurs in a corresponding temporal sequence, beginning with phonological analysis (in the first year of life), followed by morphosyntactic and lexical analysis (in the second year of life), and ending with sentence-structure analysis (in the third year of life and beyond) (overview in [32]). Note that these steps overlap in time: the next step typically begins before the previous one is completed.

Two parallel processes are evident in postnatal auditory development [27] [33] ([Fig. 1]): (i) The ability to respond to differences in acoustic properties (i. e., to discriminate stimuli) is innate, laid down by a genetic program, but is further enhanced and stabilized by experience. (ii) The ability to recognize differences that play no role in the listener's life circumstances is lost during development. This gives rise to auditory categories.

Fig. 1 The psychophysical development of auditory skills can be divided into two parts: the development of the ability to discriminate acoustic features (discrimination ability) and the ability to categorize them into auditory objects. A: Regarding the ability to discriminate acoustic features, there can be both improvement and loss of ability after birth. B: Categorization depends on experience, as categories are typically first developed through interaction with the environment. Taken from [27].

These developmental processes can also be observed in language development. In the first year of life, the ability to form categories of phonemes (as described above) develops. The formation of categories must inevitably lead to abandoning the recognition of their unimportant acoustic variations [27]. Indeed, this has been observed: in parallel with the emerging ability to ignore unimportant acoustic variation in the mother tongue and still recognize the correct phonological category (the phoneme), the ability to discriminate phonetic differences that are not distinctive in the native language is lost [3]. This happens in normally hearing children around the 8th month of life [3]. The brain learns phonological categories by means of statistical correlations in the speech stream of parents and caregivers, i. e., the frequency of phonemes and of the transitions from one phoneme to another [34] [35] [36], probably recognizing groups of phonemes first ("chunks" [37] [38] [39] [40]), with individual phonemes being established secondarily. This is followed by the development of the lexicon, where words are associated with content and stored [41] [42] [43]; the corresponding "vocabulary spurt" takes place in the 2nd year of life. Grammar crystallizes later, in the third and subsequent years of life.
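This statistical-learning mechanism can be sketched in a few lines. The following Python example (a toy illustration with a hypothetical syllable stream and an arbitrary decision threshold, not data from the cited studies) estimates transitional probabilities between adjacent syllables and posits chunk boundaries wherever a transition is improbable, in the spirit of the cited experiments [34] [35] [36]:

```python
import random
from collections import Counter

# Hypothetical stream built from three trisyllabic "words": within-word
# syllable transitions are frequent, between-word transitions are rare.
random.seed(1)
words = ["bidaku", "padoti", "golabu"]
stream = [w[i:i + 2] for w in random.choices(words, k=300)
          for i in range(0, 6, 2)]

# Estimate transitional probabilities P(next syllable | current syllable).
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])
tp = {(a, b): c / syll_counts[a] for (a, b), c in pair_counts.items()}

# Segment the stream: posit a chunk boundary at improbable transitions.
chunks, current = [], [stream[0]]
for a, b in zip(stream, stream[1:]):
    if tp[(a, b)] < 0.5:          # arbitrary threshold for this demo
        chunks.append("".join(current))
        current = []
    current.append(b)
chunks.append("".join(current))

print(chunks[:5])  # the recovered chunks correspond to the embedded words
```

Within-chunk transitions here have probability near 1 and between-chunk transitions near 1/3, so a simple threshold recovers the embedded chunks; the statistical contrast that infants appear to exploit is of exactly this kind.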

Language acquisition shows a critical period: if hearing is not restored before the age of 3 years, the success of therapy is limited [44] [45] (overview in [2] [24]). When the data on the critical period in the congenitally deaf cat model were extrapolated to human cortical development, an age of 3 years was likewise obtained [46]. This suggests that in children, too, the closure of the sensitive period for language acquisition is based on synaptic development in auditory areas. Even within the first 3 years of life, however, the earlier auditory therapy starts, the better the outcome [47] [48]. The existence of a critical developmental period is in principle no different from other sensory systems: in the visual system, for example, recognition of faces of another species is learned effectively only during a critical period [49].


3. Neuronal processes of discrimination and categorization

The data discussed demonstrate that therapy for congenital deafness must begin early in life. Only then can the categorization of acoustic features into phonological categories, a function with a correlate in the auditory cortex, be established during the developmental phase when the brain is highly plastic, and serve as the foundation for other linguistic functions. The data further confirm that this critical period depends on the process of cortical synaptogenesis and that synaptic elimination closes it.

But which part of neuronal network function is affected by congenital deafness: the complex analysis of sound, or its subsequent embedding into broader categorical and linguistic processing? Experience with late-implanted prelingually deaf patients revealed deficits in auditory discrimination but less so in stimulus detection [50] [51] [52], suggesting problems of discrimination and categorization, i. e., of the integrative function of the cortex, rather than of stimulus detection.

In order to identify the synapses lost in the deaf brain, corticocortical processing must be experimentally separated from thalamocortical processing. Fortunately, this can be achieved by separating the activity that is closely time- and phase-coupled to the stimulus from the remaining stimulus-related (but not phase-coupled) activity ([Fig. 2]). These two components are called evoked (phase-coupled) and induced (non-phase-coupled) activity; they are best separated in time-frequency space [53] [54]. Since corticocortical processing is permanently ongoing and determines spontaneous activity that is not synchronous with the presented stimulus, its correlate varies when the stimulus is repeated many times. This distinguishes it from thalamocortical activity, which is strictly stimulus-related and therefore strictly phase-coupled across repeated stimulation. The separation of evoked and induced activity thus allows the activity caused by thalamocortical input to be considered separately from the corticocortical processing of the stimulus.

Fig. 2 Separation of evoked (thalamocortical) and induced (corticocortical) activity using the example of a recording in the primary auditory cortex of the cat. Top left: 30 repetitions of an auditory stimulus (condensation click, 50 µs, presented at 0 sec), single measurements ("trials") shown in different colors. A strong phase-coupled (and therefore repeatable across trials) response is visible at 0–0.2 sec. Activity after about 0.2 sec also differs from the activity before the stimulus (−0.4 to 0 sec); however, this activity is not phase-coupled and varies considerably across trials. Bottom left: after frequency analysis (Morlet wavelets), activity is clearly seen around 0 sec, but also between 0.2–0.6 sec. Middle: when the time signals are averaged, one sees the part of the activity that is reproducible over trials (phase-coupled); it is limited to 0–0.2 sec. Right: the difference between total and evoked activity is the induced activity resulting from corticocortical interactions. Reproduced from [55].
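The logic of this separation can be sketched in a few lines of Python (synthetic data; this is not the analysis code of the cited studies, and the burst frequency and timing are arbitrary demo values):

```python
import numpy as np

def morlet(freq, fs, n_cycles=7):
    """Complex Morlet wavelet for a given center frequency (Hz)."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))

def evoked_induced(trials, freq, fs):
    """Split activity into evoked (phase-coupled) and induced power.
    trials: array of shape (n_trials, n_samples)."""
    w = morlet(freq, fs)
    tf = np.array([np.convolve(tr, w, mode="same") for tr in trials])
    total = (np.abs(tf) ** 2).mean(axis=0)    # single-trial power, averaged
    evoked = np.abs(np.convolve(trials.mean(axis=0), w, mode="same")) ** 2
    return evoked, total - evoked              # induced = total - evoked

# Synthetic demo mimicking Fig. 2: a phase-locked 40-Hz burst at 0-0.2 s
# and a later burst (0.25-0.6 s) with a random phase in every trial.
fs = 1000
t = np.arange(-0.4, 0.8, 1 / fs)
rng = np.random.default_rng(0)
trials = np.array([
    np.where((t > 0) & (t < 0.2), np.sin(2 * np.pi * 40 * t), 0.0)
    + np.where((t > 0.25) & (t < 0.6),
               np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi)), 0.0)
    + 0.1 * rng.standard_normal(t.size)
    for _ in range(30)
])

evoked, induced = evoked_induced(trials, freq=40.0, fs=fs)
print(f"evoked power peaks at  {t[np.argmax(evoked)]:+.2f} s")   # within 0-0.2 s
print(f"induced power peaks at {t[np.argmax(induced)]:+.2f} s")  # within 0.25-0.6 s
```

Averaging in the time domain before the wavelet transform preserves only the phase-coupled part of the signal; stimulus-related power that is present in single trials but cancels in the average is, by this definition, induced.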

Recently, it could be shown in deaf cats that the synapses lost due to deafness are less the thalamocortical ones than those responsible for corticocortical processing [55]. Subsequent studies demonstrated that the loss mainly affects synapses involved in the so-called top-down interaction between secondary and primary auditory cortex ([Fig. 3], see [56] [57]). These synapses are responsible for the influence of higher representations on lower ones, e. g., from auditory object to acoustic properties, or from word to phoneme [27]. These functional data had a correlate in cortical morphology: the deep layers V and VI, the main sources of top-down influences, showed dystrophic changes in primary and secondary auditory areas of deaf cats [58] and became disconnected from the upper layers [56]. Such interlaminar connections are key for so-called recurrent cortical processing, which boosts the influence of the individually weak corticocortical synapses on ongoing activity [56].

Fig. 3 Results of the connectivity analysis in hearing (left) and deaf (right) cats. Both the primary area (A1) and the secondary area (PAF) receive a strong thalamic input that causes the evoked activity in both areas. Subsequently, computation takes place in the cortex, in which the areas are connected to each other via bottom-up (A1→PAF) as well as top-down (PAF→A1) interactions. In the deaf animal, the evoked responses are fully (A1) or partially (PAF) preserved, whereas the corticocortical interactions, especially the top-down interactions, are deficient. D=dorsal; V=ventral; C=caudal; R=rostral. Taken from [56].

Such results are consistent with the theory of predictive processing [27] [59] [60] [61] [62], which states that the brain constantly generates predictions about possible sensory inputs and fully processes inputs only when they are inconsistent with the prediction. This substantially reduces the brain's computational effort. Congenital deafness prevents the top-down interactions that are critical for predictive processing [56]. Since the representation of auditory objects can only be established through experience and is absent in deafness, the synapses necessary for this purpose are lost through non-use or do not develop at all (ibid.). The absence of predictions makes auditory processing more effortful, requiring more active bottom-up processing and thus greater listening effort [27]. This is consistent with findings supporting the ELU theory ("ease of language understanding") [63]. Prediction error is also a crucial factor in the control of learning processes; the lack of the neuronal substrate for top-down interactions, and hence for prediction, therefore also impedes auditory learning.
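The core loop of predictive processing can be caricatured in a few lines. In the following Python sketch (arbitrary numbers, a caricature rather than a model from the cited literature), a higher level maintains a prediction, only the prediction error is propagated bottom-up, and the same error drives learning:

```python
import numpy as np

rng = np.random.default_rng(0)
true_cause = 3.0        # hidden cause of the sensory input (demo value)
mu, lr = 0.0, 0.2       # top-down prediction and its update rate

for step in range(16):
    x = true_cause + 0.1 * rng.standard_normal()  # noisy sensory input
    error = x - mu       # bottom-up traffic: only the prediction error
    mu += lr * error     # error-driven update of the internal model
    if step % 5 == 0:
        print(f"step {step:2d}: prediction={mu:5.2f}  |error|={abs(error):4.2f}")

# With an intact top-down pathway, the transmitted error shrinks toward
# the noise floor, i.e. processing effort falls. Without it (as in
# congenital deafness), the full input must be processed bottom-up on
# every occasion, and the error signal that drives learning is missing.
```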

The conclusion that auditory areas (and not language areas) are responsible for the closure of critical periods in children is supported by the observation that the critical period is closely related to components of event-related potentials generated in primary and secondary auditory areas [64] [65]. This is exactly in line with the predictions of the cat model [24]. These findings are consistent with the observation mentioned above that auditory circuits develop earlier than the circuits responsible for higher-level language competence, e. g., lexicon or grammar. Thus, the bottleneck of development lies in the auditory-phonetic analysis of linguistic input.

The many lines of evidence that all point in the same direction allow the conclusion that the critical period for the therapy of deafness is not due to higher speech processing (and thus the higher speech areas), but mainly to the acoustic-phonetic-phonological transformation. This transformation must occur extremely effectively and rapidly, and it is one of the first linguistic skills to develop after birth. All subsequent steps of language acquisition depend on it and are thus also (secondarily) affected.


4. Distal effects of deafness: multisensory and cognitive sequelae

Hearing is not isolated in the brain. All cerebral structures are interconnected in many ways. This enables the brain's integrative performance and yields a holistic perception of the world.

Cognitive functions exert a top-down influence on auditory perception and speech processing, so that part of speech comprehension is shaped by these functions [66] [67]. Even in postlingually deafened patients, cognitive performance explains part of the interindividual variability in the outcomes of cochlear implantation, which is why some authors propose testing these functions clinically [68]. Such testing is, of course, more complex in children, but it is quantifiable by means of questionnaires even at preschool age [69].

Not only do cognitive functions influence hearing; hearing also has a reciprocal influence on cognition, especially in childhood [13]. Cognition uses the auditory system, for example, for representations in the temporal domain; in this respect, the auditory system has been compared to a blackboard on which cognition writes [12]. In this model, hearing serves to calibrate a mental time axis. For this purpose, too, auditory representations have to be accessed via top-down connections.

Thus, in addition to the proximal effects of hearing impairment, distal effects of congenital hearing loss on the other sensory systems and on cognition can be expected [13]. The auditory system is critical for temporal analysis in the brain. A visual task based on counting flashes (i. e., visual stimuli) can be disturbed by ignored acoustic stimuli presented in parallel, but not vice versa [70] [71]. A common explanation is the far higher temporal precision of the auditory system: hearing faithfully represents the phase of acoustic stimuli up to a frequency of 4000 Hz, whereas in vision individual stimuli already fuse in the range of 40–60 Hz (where the illusion of motion starts), a difference of roughly a factor of 100. In spatial localization tasks, by contrast, hearing loses against vision, as becomes clearly evident in the ventriloquist effect [72] [73]. The much more precise visual system wins: its spatial resolution of about 1 arc minute is far finer than the minimum audible angle (8°) that hearing is able to distinguish. Multisensory interactions (between sensory systems), however, require postnatal experience, without which they cannot develop [74] [75] [76]. In congenital deafness, negative cross-modal effects on other sensory functions have been documented, e. g., on visual sequence learning [77] [78] [79] [80]. Multisensory perceptual interactions are affected in prelingually deaf patients [81], as are fine motor skills [82]. Impairments in the temporal domain depend on the exact task [80]. In more complex, combined spatial-temporal tasks, the spatial aspects may compensate for the temporal deficits; this must be kept in mind when planning and interpreting such examinations [83].

In summary, congenital deafness has an impact on temporal processing in other sensory systems. Since, in effect, all natural stimuli are multimodal, and cognition makes use of the multimodal and amodal objects that arise from them, congenital hearing loss has significant consequences in this area as well. In the auditory cortex, evoked (i. e., thalamocortical) responses to auditory input are preserved in deafness [84], but beyond the auditory cortex they are reduced [85], and multisensory interactions with the deprived sensory system do not develop [76] [86] [87]. This is also seen in speech, e. g., in the influence of lip-reading on the perception of syllables, as in the McGurk effect. Prelingually deaf children implanted after the age of 2 years showed an absence of multisensory fusion and a visual dominance in perception that was not seen in hearing controls [75]. Earlier implantation prevented this effect and allowed more effective multisensory fusion (ibid.). This is important for the multisensory processing of speech.

Hearing has a decisive advantage over the other sensory systems in orientation, since it does not depend on attentional focus, on the visual field, or on visual obstructions such as vegetation. Attention is automatically co-directed by hearing, an effect that is absent in individuals born deaf. Consequently, congenital deafness leads to a change in the distribution of visual attention in space, with higher distractibility and more attention directed to the visual periphery [88] [89] [90]. The time of sustained joint attention to the same object with the parent is reduced [91] [92], which is crucial for early learning and child development. (With reduced sustained attention, the hearing parent may also contribute to the problem, because hearing parents do not pay enough attention to the child's gaze direction [91] [93].) Fortunately, at later ages (around 9–10 years), this problem is no longer observed [92]. The related problem of higher distractibility and impulsivity of deaf children (an executive-function problem) remains present beyond 9–10 years of age [92] and negatively affects learning processes. Congenital hearing disorders thus lead to a relevant reorganization of the attentional system and of executive functions in the affected child.

Hearing allows the establishment of phonological categories that largely form the basis for the written word. Deaf children (who have not received oral education) do not establish the phonological level (and other features) of spoken language [94] [95] [96]. Written language, however, derives from spoken language. Accordingly, the reading ability of deaf (signing) teenagers is delayed by many years on average compared with hearing peers [97], because when learning to read they must additionally acquire an unfamiliar phonological system (review in [98]). Acoustically mediated language thus dramatically improves the options in an educational system oriented toward a hearing society.

In the context of case reports on "feral children" [99], interindividual effects on cognition have been reported, which unfortunately were investigated in detail in only a few of these subjects [100] [101] and whose origin could not always be definitively clarified. In some of these children (such as Peter of Hannover), autism has also been suspected as a concomitant disorder; in other case reports, other concomitant disorders have been considered. Nevertheless, interindividually varying deficits in different cognitive functions have been reported in these children [102]. Such individually differing deficits in individual cognitive functions, under the assumption of the same underlying disease, are called cognitive scatter. Cognitive scatter has also been reported in deaf children [103] [104] [105] [106], suggesting that the absence of hearing increases the risk of cognitive abnormalities. While the above-mentioned effects of congenital deafness on attention explain well the changes in executive functions and the reduced impulse control of deaf children [88] [92], the dependence of cognitive functions on language competence (in the sense of the Sapir-Whorf hypothesis) needs further investigation. It should also be emphasized that such findings may be influenced by tests that are language-dependent [107]. Animal models allow the effects of language deficits and hearing deficits on cognition to be separated, which is crucial in this field; experiments in this direction are also being conducted in our labs.

To conclude, the congenitally deaf brain is not simply a hearing brain without a functioning inner ear. The deaf brain is adapted to deafness, with far more profound consequences for the child. These adaptations go beyond the auditory system and affect many other functions of the brain. The connectome is the sum of all synaptic connections in the brain; understood as a functional connectome, it defines our thinking processes and all our perception. The connectome model of congenital deafness [13] proposes viewing the consequences of hearing loss on brain development from the perspective of the whole brain ([Fig. 4]). The model emphasizes the high interconnectivity of the auditory system with the rest of the brain and its reciprocal dependencies in multisensory and cognitive functions, including speech, attention, memory, and executive functions. This may (but need not) lead to cognitive changes in congenital deafness that depend on the "strategy" the brain uses to compensate for the absence of hearing [13]. The cognitive changes typically show a highly individual pattern (cognitive scatter).

Fig. 4 The connectome model of congenital deafness. Exemplary representation of the cortical connections of the auditory system with the rest of the cortex. Bottom-up interactions are shown in green, top-down interactions in red. The auditory system is strongly connected to the other subsystems of the brain, and in deafness these must adapt to the absence of hearing. This adaptation varies interindividually. Taken from [13]. "Reprinted from Lancet Neurology 15, Kral A, Kronenberger WG, Pisoni DB, O'Donoghue GM, Neurocognitive factors in sensory restoration of early deafness: a connectome model, 610-621, Copyright (2016), with permission from Elsevier."

A risk of deficits in cognitive functions must therefore be considered, diagnosed, and addressed in congenital hearing loss, since such deficits can have far-reaching consequences for the later life of the deaf child. An essential task is to develop methods that counteract the cognitive effects of deafness. Since these effects vary from individual to individual, this requires an individualized medical approach in the context of cochlear implantation.



Conflict of Interest

The author declares that there is no conflict of interest.


Correspondence Address

Prof. Dr. A. Kral
VIANNA
Stadtfelddamm 34
30625 Hannover
Germany

Publication History

Article published online:
02 May 2023

© 2023. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/).

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

