The present section describes how neuronal activity in the primary auditory cortex of the low-trained monkeys depends on the meaning of a sound and on the unconditioned stimuli alone. To reveal these changes, we presented three passive conditions to the low-trained monkeys. To give meaning to the acoustic stimulation, the tones were paired with water delivery (CS+, Figure 1 and Figure 2). The animals were water-deprived, which ensured that the water drop gave meaning to the acoustic stimulation. A drop of water was delivered shortly before the offset of the pure tone. Before or after the CS+ condition, another condition (CS-) was presented to control for how neuronal activity changes when the acoustic stimuli carry no meaning. The acoustic stimuli in the CS- condition were the same but were not paired with water delivery. Lastly, to control for the effect of the unconditioned stimuli, a third condition (US) was presented before, after, or between the CS- and CS+ conditions. In the US condition, one drop of water was regularly delivered to the monkeys. The three conditions were presented to two monkeys with little experience. One monkey had never been trained to perform any auditory task. The other monkey had participated in an earlier experiment with an auditory task, but that training had ended two years before the beginning of the current experiment. In addition to this lack of experience, the analysis of the monkeys' behavior revealed a lack of learning from the first to the last session in which the three conditions were presented (see section 3.1.4). Therefore, we refer to the two animals as "low-trained" monkeys. For the present study, we used 37 units (nine recording sessions) and 38 units (nine recording sessions) recorded in the auditory cortex of the two monkeys (Supplementary table 1). The neuronal activities were similar between the monkeys, so we combined the two samples into one.
The first-spike latencies of these units were 16.7 ± 11.9 ms after the onsets of the pure tones. All 75 units in the sample responded to at least one type of acoustic stimulus: to the onset of the pure tone or of the noise, or to the offset of one of them.
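As a side note on method, first-spike latency and its population statistics can be computed as in the following sketch (Python with hypothetical spike trains and an assumed 100-ms analysis window; not the analysis code of the study):

```python
import numpy as np

def first_spike_latency(spike_times_s, onset_s, window_s=0.1):
    """Latency (s) of the first spike after stimulus onset, or None
    if no spike falls within the analysis window."""
    spike_times_s = np.asarray(spike_times_s)
    after = spike_times_s[(spike_times_s >= onset_s) &
                          (spike_times_s < onset_s + window_s)]
    return float(after[0] - onset_s) if after.size else None

# Hypothetical spike trains (in seconds) from three units; tone onset at 0.5 s
trains = [[0.10, 0.512, 0.530],
          [0.20, 0.521],
          [0.49, 0.515, 0.90]]
lats = [first_spike_latency(t, onset_s=0.5) for t in trains]
lats = [l for l in lats if l is not None]
print(np.mean(lats), np.std(lats))  # population mean and SD of latency
```

Units with no spike in the window are simply excluded from the population statistics, mirroring the usual treatment of non-responsive trials.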
Vagus nerve stimulation (VNS) is a method for driving therapeutic, targeted neuroplasticity in clinical populations suffering from tinnitus and stroke. VNS facilitates specific cortical changes through the phasic release of plasticity-promoting neuromodulators, paired simultaneously with the delivery of a sensory stimulus, such as tones or speech. Recent clinical evidence and ongoing pre-clinical experiments in rats show that VNS paired with near-threshold somatosensation of the hand/paw can significantly reduce elevated sensory thresholds resulting from neural injuries after only one week of therapy. A possible explanation for this quick and robust recovery is that VNS is more effective at driving neuroplasticity in cortical circuits when paired with stimuli just above the response threshold of neural receptive fields. To date, all auditory VNS therapies have used stimuli considerably above auditory thresholds, potentially diminishing the therapeutic effect of VNS-paired treatments. To test the effectiveness of VNS-paired, near-threshold stimuli in driving auditory neuroplasticity, unimpaired adult rats will receive VNS repeatedly paired with the brief presentation of a 10 dB SPL, 9 kHz tone for one week. A cortical map of receptive field properties in primary auditory cortex will be made one day later and compared to the maps of naïve rats.
The neurophysiological data showed that the strength of inhibition, as quantified by comparing several response parameters between predrug conditions and GABA blockade, was not significantly layer-dependent. This is consistent with data from a recent study (Chen and Jen 2000) showing a similarity of the expanded tuning curves measured during BIC application within an orthogonal penetration in the auditory cortex of a bat species. The high percentage of silent neurons found in layer VI suggests that inhibition is strong in the deep layers of A1, similar to data from the somatosensory cortex (Dykes et al. 1984). The percentage of GABAergic neurons found in the gerbil (15%) lies between that reported for the cat (24.6%; Prieto et al. 1994a) and the ferret auditory cortex (10.4%; Gao et al. 1999). The proportion of GABAergic neurons appears to be species-specific, since values for sensory cortical areas of the ferret are generally lower than those reported for cat neocortex. In cat A1, the percentage of GABAergic neurons peaks in layers I and V, which is comparable to the present data. However, the respective percentages are higher in the cat, especially in layer I. Since most neurophysiological data are from pyramidal cells in layers II, III, V and VI, puncta on pyramidal cells were counted in these layers. The number of puncta was highest for layer V pyramidal cells. However, one has to take into account the larger somatic size of these cells. Indeed, when the number of puncta was related to the cell perimeter, no significant difference was found across layers.
In the visual system it has been argued that the lateral geniculate nucleus (LGN) performs such a whitening step [10], but the initial transformations employed in the auditory system are quite different, making this sort of preprocessing harder to justify. Whitening for cochleagrams would essentially mean that neural activities do not encode energies in frequency bands but deviations from a mean energy relative to energy variances. Adaptation effects to means and variances over time are well known for regions upstream of the cortex, such as the auditory nerve and inferior colliculus [11–14]. However, this adaptation should not be equated with whitening. If it were, this would imply that the absence of any signal energy should lead to (on average) responses as strong as those to energies above the mean. If we do not assume a whitening stage for cochleagrams or a similar preprocessing to obtain mean-free stimuli, then we are confronted with the question: how do measured STRFs with their positive and negative subfields emerge? In vision, after an assumed whitening stage, stimuli contain positive and negative parts, which directly cause the components extracted by sparse coding to have negative and positive subfields. For the non-negative energy representation of cochleagrams, it is so far unclear how negative subfields can emerge without a whitening stage. Statistical data models that do not require whitening suggest alternative mechanisms, commonly referred to as “explaining away” effects, which have so far not been linked to negative subfields of neural response properties. As an example of “explaining away”, consider the situation of sitting in a park. It is a nice warm day, you have your eyes closed, and you are just listening to the sounds around you. There is a small orchestra somewhere with musicians practicing for a concert, and there are birds in the trees. If you now perceive a very short melodic sequence, it may have been generated by a bird or by a musician’s flute.
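To make the “explaining away” intuition concrete, the park scenario can be cast as a toy Bayesian model with two independent binary causes (“bird” and “flute”) and a noisy-OR likelihood; all probabilities below are invented for illustration and are not a model from the literature:

```python
# Toy "explaining away" model for the park example; all numbers invented.
p_bird, p_flute = 0.1, 0.1           # hypothetical prior probabilities

def p_obs(bird, flute):
    # noisy-OR likelihood: either active cause produces the melody w.p. 0.9
    return 1.0 - (1.0 - 0.9 * bird) * (1.0 - 0.9 * flute)

# Unnormalized joint posterior over (bird, flute) given the observation
joint = {(b, f): (p_bird if b else 1 - p_bird)
                 * (p_flute if f else 1 - p_flute)
                 * p_obs(b, f)
         for b in (0, 1) for f in (0, 1)}
z = sum(joint.values())
post = {k: v / z for k, v in joint.items()}

p_bird_given_obs = post[(1, 0)] + post[(1, 1)]
# Additionally learning that the flute played "explains away" the bird:
p_bird_given_obs_and_flute = post[(1, 1)] / (post[(0, 1)] + post[(1, 1)])
print(p_bird_given_obs, p_bird_given_obs_and_flute)
```

Hearing the melody raises the probability of a bird well above its prior, but additionally learning that the flute played pushes it back towards the prior: one cause explains the observation away from the other.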
The function of the cerebral cortex essentially depends on the ability to form functional assemblies across different cortical areas serving different functions. Here we investigated how developmental hearing experience affects functional and effective interareal connectivity in the auditory cortex in an animal model with years-long and complete auditory deprivation (deafness) from birth, the congenitally deaf cat (CDC). Using intracortical multielectrode arrays, neuronal activity of adult hearing controls and CDCs was recorded in the primary auditory cortex and the secondary posterior auditory field (PAF). Ongoing activity as well as responses to acoustic stimulation (in adult hearing controls) and electric stimulation applied via cochlear implants (in adult hearing controls and CDCs) were analyzed. Pairwise phase consistency and Granger causality were used as measures of functional connectivity. While the number of coupled sites was nearly identical between controls and CDCs, a reduced coupling strength between the primary and the higher-order field was found in CDCs under auditory stimulation. Such stimulus-related decoupling was particularly pronounced in the alpha band and in the top–down direction. Ongoing connectivity did not show such a decoupling. These findings suggest that developmental experience is essential for functional interareal interactions during sensory processing. The outcomes demonstrate that corticocortical couplings, particularly top-down connectivity, are compromised following congenital sensory deprivation.
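Of the two connectivity measures, pairwise phase consistency can be sketched compactly. The following toy example (simulated phases, not data from this study) uses the bias-free estimator of Vinck et al. (2010), the average cosine of the angular distance over all trial pairs:

```python
import numpy as np

def ppc(phase_diffs):
    """Pairwise phase consistency for a set of relative phases (radians),
    one per trial; unbiased by trial count (Vinck et al., 2010)."""
    z = np.exp(1j * np.asarray(phase_diffs))
    n = z.size
    # Equivalent to averaging cos(theta_i - theta_j) over all trial pairs
    return (np.abs(z.sum())**2 - n) / (n * (n - 1))

rng = np.random.default_rng(1)
coupled   = ppc(rng.normal(0.0, 0.3, size=500))    # clustered phases
uncoupled = ppc(rng.uniform(-np.pi, np.pi, 500))   # uniform phases -> ~0
print(coupled, uncoupled)
```

Tightly clustered relative phases between two recording sites yield a PPC near 1, whereas uniformly distributed phases yield a value near 0, without the sample-size bias of the plain phase-locking value.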
independent inhibition is implemented, acting as a mechanism to enhance the motion-direction sensitivity of neurons. In brief, the detector shown in Fig. 1 works as follows: a stimulus moving from left to right evokes an excitatory response from receptor 1, which arrives at the nonlinear interaction stage earlier than the inhibitory response from receptor 2, and can thus pass this stage. If a stimulus moves in the opposite direction, the delayed inhibitory response from receptor 2 blocks the response from receptor 1 at the nonlinear interaction stage, provided receptor 1 is stimulated with the same delay as implemented in the detector. Kautz & Wagner (1998) demonstrated the existence of a direction-independent inhibition in the IC of barn owls using microiontophoretic application of the γ-aminobutyric acid-A (GABAA) receptor antagonist bicuculline methiodide (BMI). However, a GABAergic direction-dependent inhibition could not be shown. Thus, a complete understanding of the cellular mechanisms underlying motion-direction sensitivity in barn owls has still not been achieved. However, the notion that inhibition is involved in creating motion-direction sensitivity is further supported by other studies. Sanes et al. (1998) proposed that the responses of neurons in the IC of gerbils to dynamically varying interaural level differences are controlled by synaptic inhibition. Furthermore, the motion-direction-sensitive response of neurons in the primary auditory cortex of cats was determined by inhibition (Altman & Nikitin, 1985). In contrast, using dynamic interaural phase cues and free-field auditory apparent motion, McAlpine et al. (2000) and Ingham et al. (2001) found no special inhibitory mechanism for the processing of auditory motion in the IC of guinea pigs.
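The delay-and-inhibit logic of the detector can be illustrated with a minimal discrete-time simulation (arbitrary units and delays, a sketch rather than the published model):

```python
import numpy as np

def detector_response(stim, delay=3):
    """Toy direction-selective unit: excitation from receptor 1 passes the
    nonlinear interaction stage unless inhibition from receptor 2, delayed
    by `delay` samples, coincides with it. `stim` is a (2, T) array of
    binary activations at the two receptors."""
    exc = stim[0].astype(float)
    inh = np.zeros_like(exc)
    inh[delay:] = stim[1][:-delay]            # delayed inhibitory branch
    return np.clip(exc - inh, 0.0, None).sum()

T, delay = 20, 3
preferred = np.zeros((2, T)); null = np.zeros((2, T))
# Preferred direction: receptor 1 first, receptor 2 `delay` samples later,
# so the delayed inhibition arrives after the excitation has passed
preferred[0, 5] = 1; preferred[1, 5 + delay] = 1
# Null direction: receptor 2 first, so its delayed inhibition arrives
# exactly when receptor 1 is activated and blocks the response
null[1, 5] = 1; null[0, 5 + delay] = 1
print(detector_response(preferred), detector_response(null))
```

The unit responds to motion in the preferred direction and is silenced in the null direction, reproducing the asymmetry described in the text.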
They concluded that shifts of spatial RF position were due to adaptation of excitation, defined as the ‘reduced capacity of a neuron to respond to subsequent excitatory stimuli following presentation of a stimulus that is itself excitatory’ (Ingham et al., 2001, p. 24). Thus, no special auditory motion detection mechanism would exist.
Direct recordings from neurosurgical patients undergoing invasive monitoring for epilepsy provide an opportunity to resolve distinct auditory cortical fields on the supratemporal plane. Short-latency (< 20 ms) responses and prominent phase locking to 100-Hz click trains are consistently found in posteromedial Heschl's gyrus (HG), indicating a primary field. More anterolateral recording sites, presumably in a non-primary field, respond with longer latency and much less phase locking. To characterise the electrophysiological properties of primary and non-primary auditory cortex more fully, we analysed activity in the frequency domain, prior to and during the presentation of clear and degraded sentences. In contrast with posteromedial HG, where power in the high-gamma range (70-150 Hz) dominated, anterolateral HG was characterised by high spontaneous alpha (7-10 Hz) activity that was strongly suppressed from 300 ms after the onset of a stimulus until its offset. This suppression was more pronounced in response to clear than to degraded speech. We consider possible explanations for these differences in the spectral profile of primary and non-primary auditory cortical activity.
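The spectral contrast described here (high-gamma-dominated versus alpha-dominated sites) rests on band-power estimates; a minimal sketch with a simulated signal (not patient data, and a plain FFT rather than the study's actual spectral analysis):

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean spectral power of signal x within [lo, hi] Hz (plain FFT;
    a toy stand-in for a proper spectral estimator)."""
    spec = np.abs(np.fft.rfft(x))**2 / x.size
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].mean()

fs = 1000.0
t = np.arange(int(2 * fs)) / fs
rng = np.random.default_rng(3)
# Toy field potential: a strong 9-Hz alpha rhythm plus broadband noise
x = 2.0 * np.sin(2 * np.pi * 9 * t) + rng.normal(0, 0.5, t.size)
alpha  = band_power(x, fs, 7, 10)     # alpha band (7-10 Hz)
hgamma = band_power(x, fs, 70, 150)   # high-gamma band (70-150 Hz)
print(alpha > hgamma)
```

For an alpha-dominated site such as anterolateral HG, the alpha estimate exceeds the high-gamma estimate by orders of magnitude; the comparison reverses for a high-gamma-dominated site.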
A clear disadvantage of the aforementioned studies is that investigating the sluggish BOLD response does not allow one to assess to what extent the relevant neural populations indeed track AM rates by aligning their activity to the temporal envelope of the acoustic stimulus. These temporal features of neural responses are much better captured using electrophysiological techniques. A classical approach applies rhythmic sensory stimulation to elicit a so-called steady-state response (Galambos, 1980). In the auditory modality, the auditory steady-state response (aSSR) is maximal in auditory cortex at ~40 Hz with a right-sided lateralization (Ross et al., 2005). However, whether the aSSR also exhibits a topographical representation, as suggested by the aforementioned fMRI studies, is not conclusively known. This was also not shown by the classically cited MEG study by Langner et al. (1997), which focused its analysis on low-pass filtered transient evoked responses, in particular the M100, M200 and the sustained field. Liegeois-Chauvel et al. (2004) investigated the aSSR in epilepsy patients implanted with stereotactic electrodes, but were unable to identify clear modulation-rate-dependent spatial gradients. However, it could be argued that the spatial sampling with electrodes was insufficient to uncover fine spatially distributed patterns. In a noninvasive electrophysiological work using electroencephalography (EEG) and a predefined source montage, Herdman et al. (2002) reported that aSSRs driven by a modulation frequency of 39 Hz have generators dominantly in primary auditory cortex (this finding is in line with several other reports also using alternative approaches; see, e.g., Draganova et al., 2007). A faster modulation at 88 Hz elicits maximum responses in the brainstem, which conforms with the notion outlined previously that the capacity of auditory cortex to track faster amplitude modulations
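The principle behind aSSR measurement can be sketched as follows: a response component phase-locked to the modulation rate survives trial averaging and appears as a spectral peak at that rate (simulated toy data with arbitrary parameters, not an analysis of any of the cited recordings):

```python
import numpy as np

fs, dur, f_mod = 1000.0, 2.0, 40.0   # sampling rate (Hz), duration (s), AM rate (Hz)
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(2)

# Toy "steady-state response": a small oscillation locked to the 40-Hz
# modulation rate, buried in trial-by-trial noise
n_trials = 100
trials = 0.2 * np.sin(2 * np.pi * f_mod * t) + rng.normal(0, 1, (n_trials, t.size))
evoked = trials.mean(axis=0)          # averaging preserves phase-locked activity

spec = np.abs(np.fft.rfft(evoked))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin
print(peak)
```

Non-phase-locked noise averages out across trials, so the amplitude spectrum of the evoked average peaks at the modulation rate even when single trials are noise-dominated.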
The second pathway leading from the retina is mainly formed by fibers of the Y- and W-cell type, which project to the superior colliculi (SC) in the tectum (Wässle & Illing, 1980). Each SC is composed of seven layers. The three superficial layers receive direct visual input from the retina and primary visual cortex, which is arranged to form a retinotopic representation of the contralateral visual field. The deeper layers receive information from the auditory, somatosensory, and motor systems. Likewise, these incoming signals are used to build tonotopic and somatotopic representations, respectively (Meredith & Stein, 1986). The different sensory maps are combined to create a single multisensory map with the optical axis as its primary reference (Stein et al., 2013). The SC is thought to play a central role in the control of saccades and orienting responses toward an object or a location of interest (Stein et al., 2013). Anatomically, the SC sends ascending projections to the visual thalamic nuclei, including the LGN and Pulvinar, to the caudal and rostral intralaminar thalamic nuclei and to the suprageniculate nucleus. The descending projections terminate in brainstem structures and the spinal cord to exert control over motor responses in orienting behavior. In addition to these ascending and descending pathways the SC is heavily interconnected with the basal ganglia and substantia nigra (Krout et al., 2001; McHaffie et al., 2005; Hoshino et
A second possibility would rely on improvements in chronic implantation techniques. Intracortical microelectrodes, albeit more invasive than subdural electrodes, might provide a more effective prosthetic approach, targeting specific subregions of the cortex and using smaller currents. Using arrays with a high number of electrodes and a sufficiently fine spatial resolution, visual information could be conveyed through increasingly complex patterns of electrical stimulation (subjects can typically resolve phosphenes produced by electrodes separated by as little as 500 µm (Bak et al., 1990; Schmidt et al., 1996)). However, combining phosphenes to obtain more detailed images is not so straightforward, since concurrent stimulation of multiple sites in visual cortex produces multiple phosphenes that do not combine to form a coherent shape (Dobelle et al., 1976; Schmidt et al., 1996). One explanation for the failure of the conventional electrical stimulation paradigm is the unnatural activity that it evokes in cortex. When viewing natural scenes, only a small fraction of neurons in early visual cortex are active. In contrast to the selective activation of neurons produced by real visual stimuli, electrical stimulation activates an effectively random set of neurons in the immediate region of the electrode (Histed et al., 2009). Even though this electrical activation can result in a roundish percept, the non-selective activation of spatially contiguous neurons might not propagate to higher areas to produce complex percepts as normally occurs with natural vision.
The primary visual cortex is one of the most intensely studied areas of the mammalian cerebral cortex and its functional architecture is now known in considerable detail (Hubel and Wiesel, 1977). The early stages of visual processing are devoted to extracting relatively simple features like oriented contrast contours (lines and bars) from the visual input. One type of feature detector frequently encountered in the visual cortex of cats and monkeys is the orientation-selective simple cell. Their localized receptive fields are subdivided into a few elongated subfields of alternating ON or OFF response to small light-spot stimuli (Jones and Palmer, 1987). Several investigations using optical imaging techniques showed these simple cells to be organized into piecewise continuous orientation preference maps along the cortical surface containing ±1/2-vortices, where orientation preferences change by ±180° along a closed path around the vortex center (Blasdel, 1992; Bonhoeffer and Grinvald, 1991). There are also cortical regions, so-called iso-orientation domains, where all neurons have similar orientation preferences, lin-
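The ±180° property of a ±1/2-vortex can be checked numerically on a schematic pinwheel map (an idealized construction, not imaging data):

```python
import numpy as np

# A +1/2 pinwheel: orientation preference equals half the polar angle,
# so preferences change by 180 degrees along a closed path around the center
angles = np.linspace(0, 2 * np.pi, 400, endpoint=False)   # closed path
pref = (np.degrees(angles) / 2.0) % 180.0                 # preference in [0, 180)

# Accumulated change of preference around the loop, unwrapped modulo 180
steps = np.diff(np.concatenate([pref, pref[:1]]))
steps = (steps + 90.0) % 180.0 - 90.0                     # wrap to (-90, 90]
print(steps.sum())                                        # winding of the map
```

The accumulated change of +180° over one loop identifies the +1/2 vortex; a -1/2 vortex (preference decreasing with polar angle) would yield -180°.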
We compared fMRI responses in humans and fUS recordings in ferrets to speech/music and their model-matched counterparts. Interestingly, we observed speech-selective regions in the ferret auditory cortex. However, and contrary to the real speech- and music-selective response components observed in human non-primary regions (Norman-Haigneré, 2015/2018), model-matched stimuli evoked similar responses in the ferret. Because speech and music are not ecologically relevant sounds for ferrets, we wanted to test whether ferret auditory cortex could discriminate between ferret pup vocalizations and their corresponding model-matched versions. We observed differences in animal motor activity for original vocalizations compared to model-matched stimuli, indicating that the animal is able to perceptually discriminate these two classes of sounds. We are currently investigating the neural correlates of this capability in auditory cortex responses. Follow-up work will test whether ferrets can innately discriminate original vs. synthetic speech, or whether perceptual learning is necessary to do so.
Alternatively, it is currently debated to what extent branched projections to both MGB and IC exist (Lee et al., 2011; Slater et al., 2013), though they were frequently considered to represent only a minor fraction of largely independent auditory corticofugal projection streams (Wong and Kelly, 1981; Ojima, 1994). Therefore, terminals of layer V CT cells could also activate thalamic inhibitory interneurons to quench the afferent signals from the IC via presynaptic inhibition. However, the percentage of GABAergic neurons in the MGB was found to be <1% in rodents – although it has not been investigated in gerbils – (Winer and Larue, 1996), which may decrease the likelihood and effectiveness of such a mechanism. The inclusion of additional synaptic relays, on the other hand, could make the circuitry too slow to account for the observed cortical patterns. In AC, then, the afferent input could lead to both (direct) excitation of cortical (e.g., excitatory pyramidal) cells and (indirect) inhibition, e.g., disynaptic feedforward shunting inhibition (Sun et al., 2006) or feedback inhibition. This would explain why the elongated afferent input emerges only after muscimol (no intracortical inhibition anymore) and in layer V lesioned animals (generation of persisting activation). In non-lesioned animals, the afferent input will be weaker and shorter even without intracortical inhibition; thus the model works for both animal groups.
Finally, discrimination learning in an aversive Go/NoGo paradigm showed no effect on AEP gating or on long-term phase locking between the auditory cortex and the ventral striatum. Intuitively, during the shuttle-box training a stimulus-response association is formed that is reinforced by successful foot-shock avoidance. All information needed to perform hits and correct rejections in the task is already given during the first stimulus presentation in the train of FM tones that were used as CS+ and CS- cues. The paradigm differs from other animal studies that demonstrated changes in auditory gating due to unavoidable chronic stressors, and hence might be better suited to model real-world situations. The finding further suggests that in a controllable situation, auditory gating functions robustly and that subtle changes in brain physiology due to learning cannot alter this process. To gain a deeper understanding of the role of attention during gating, future investigations could alter the CS stimuli such that attention has to be paid not to the first stimulus but to the second or a later tone within a stimulation train.
Our awareness of being-in-the-world is often triggered by the intensity of multi-sensory stimuli. The experience of walking through a cavern, feeling a fresh breeze that contrasts with the pure solid rock under one's feet, hearing echoes of footsteps and water drops, serves as a good example of this: all the simultaneously sensed impressions make us aware of our body and its integration into the cavern. The lack of a single sense, or a misleading impression, would change the overall impression. In traditional computer-related work, many senses such as hearing, taste, or smell are underused. Historically developed paradigms such as the prominent Graphical User Interface (GUI) are not able to fully embed the user into the information to be mediated. The explanation for their nevertheless widespread use lies more in their (historically developed) technical feasibility [Sut63] than in usability and user-oriented simplicity. For about the last ten years, though, there has been a shift towards better representations of computer-based processes and abstract data, which try to close the gap between the users' reality and the abstract environment of data and algorithms. These fields take advantage of both display and controlling strategies by primarily incorporating modalities other than vision. Currently, these systems make use of either alternative display technologies such as auditory or haptic displays, or advanced controlling approaches like multi-touch or tangibility. I argue that these already promising achievements will be even better if auditory and tactile displays are complemented by direct controlling approaches. Furthermore, I believe that their combination will unfold the true potential of interfacing technologies [Roh08]. In this thesis, I present such an approach with Tangible Auditory Interfaces, a combination of Auditory Displays and Tangible Interfaces.
computation in which data is combined with prior knowledge in order to estimate the underlying causes of a [visual] scene (Mumford 1994, Knill & Richards 1996, Rao et al. 2002, Kersten, Mamassian et al. 2004). This is due to the fact that natural images are full of ambiguity. The causal properties of images - illumination, surface geometry, reflectance (material properties), and so forth - are entangled in complex relationships among pixel values. In order to tease these apart, aspects of scene structure must be estimated simultaneously” (Olshausen 2010). The ability of the brain to assess a multivariate visual scene likely requires multiple processing nodes acting in concert. Ouellette and Casanova (2006) showed in the anesthetized cat that when stimulus properties are optimized for each of the following visual areas - area 17, PMLS, LP-pulvinar, AMLS, and AEV - there is no statistical difference in latencies between the areas. The areas, when engaged with their ‘preferred’ stimulus properties, could therefore function in concert. Recordings in the anesthetized cat have revealed concerted neural activity in the form of synchrony between the SC, pMS, and primary visual cortex (Brecht et al., 1998). The simultaneous functioning of multiple cortical nodes, combined with the internal behavioral motivation of the awake cat, could turn the classically understood processing hierarchy on its head, with internal motivation and neural states driving neural activity and the external stimulus acting as a modifying influence.
Humans have precocial hearing, allowing the human fetus to learn seemingly complex features about the extrauterine environment via sound transmission into the womb. What is the neural architecture that facilitates auditory learning during this early developmental period? In general, primary sensory cortex matures in advance of nonprimary sensory cortex, but the timing differences between primary (pAC) and nonprimary auditory cortex (nAC) maturation in humans have been unclear. Tracking human perinatal cortical maturation in vivo has been made possible with modern advancements in magnetic resonance imaging (MRI). Diffusion MRI metrics of fractional anisotropy, mean diffusivity, axial diffusivity, and radial diffusivity show systematic changes as neuronal genesis, differentiation, and migration occur in the developing brain. Here we used diffusion MRI metrics to track these cortical maturational processes and microstructural changes along the length of Heschl’s gyrus. We analyzed longitudinal data from infants born preterm who each underwent diffusion MRI up to four times between 26 and 40 weeks postmenstrual age. We also tested for associations between diffusion metrics in infancy and language developmental outcomes at age 2 years. We were able to distinguish between pAC and nAC as early as 28 weeks postmenstrual age, a time at which the sulcal boundaries of Heschl’s gyrus are just beginning to appear. Our analysis revealed differing rates of maturation along the axis of Heschl’s gyrus as cortex transitions from pAC to nAC. While pAC was further advanced along the developmental timeline at each timepoint, nAC showed much larger changes from 26 to 40 weeks postmenstrual age than did pAC. Disturbed maturation of nAC (but not pAC) in infancy was associated with poorer language performance at age 2 years.
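The four diffusion metrics are standard functions of the diffusion tensor's eigenvalues; as a reference, a short sketch with illustrative eigenvalues (not values from this study):

```python
import numpy as np

def dti_metrics(evals):
    """Standard diffusion-tensor metrics from sorted eigenvalues
    (lambda1 >= lambda2 >= lambda3, in mm^2/s)."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
    ad = l1                                        # axial diffusivity
    rd = (l2 + l3) / 2.0                           # radial diffusivity
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))        # fractional anisotropy
    return fa, md, ad, rd

# Illustrative eigenvalues for a strongly anisotropic voxel
fa, md, ad, rd = dti_metrics((1.7e-3, 0.3e-3, 0.2e-3))
print(fa, md, ad, rd)
```

Fractional anisotropy ranges from 0 (isotropic diffusion, all eigenvalues equal) to 1 (diffusion along a single axis), which is what makes it sensitive to the microstructural changes tracked in the developing cortex.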
This study is, to our knowledge, the first to assess the effect of dual transcranial direct current stimulation (tDCS) over the primary motor cortices on a gesture-language integration task. The hypothesis of our study was based on the fact that the primary motor cortex (M1) has been found to be activated both when executing actions and when observing actions of others (4, 30, 36, 61, 62). As gestures are a subtype of actions, the M1 is expected to be activated by them, most notably when perceiving action-related gestures such as instrumental ones. Furthermore, gestures are considered to have been essential in the evolution of language and have been shown to enhance language understanding (20, 21, 27). Therefore, dual tDCS over the primary motor cortices was expected to have a positive effect on gesture comprehension. In our study specifically, we aimed to upregulate activity in the right M1 through anodal stimulation and to downregulate activity in the left M1 through cathodal stimulation, in order to maximize the influence of the right M1 on the semantic integration task. The basis of this set-up lies in the findings that the right hemisphere has been demonstrated to play a role in understanding gestural aspects of communication (17, 55, 69). Therefore, stimulation of these specific areas was expected to lead to an enhancement of gesture-language integration, exhibited by higher performance on a semantic task.
Reprint requests to Dr. S. Rotter. Fax: (0761) 2032860.
Koch, 1993). Accordingly, synchronized spikes are considered a property of neuronal signals which can indeed be detected and propagated by other neurons. In addition, these spike correlations must be expected to be dynamic, reflecting varying affiliations of the neurons depending on the stimulus or behavioral context. Such dynamic modulations of spike correlation at various levels of precision have in fact been observed in different cortical areas, namely visual (Aertsen and Arndt, 1993; Eckhorn et al., 1988; Gray and Singer, 1989; Roelfsema et al., 1996; Singer and Gray, 1995), auditory (Ahissar et al., 1992; deCharms and Merzenich, 1995; Eggermont, 1994; Sakurai, 1996), somatosensory (Nicolelis et al., 1995), motor (Murthy and Fetz, 1992; Sanes and Donoghue, 1993), and frontal (Abeles et al., 1993b; Vaadia et al., 1995). Little is known, however, about the functional role of temporal organisation in such signals.