Auditory system
From Wikipedia, the free encyclopedia
The auditory system is the sensory system for the sense of hearing. It includes both the sensory organs (the ears) and the auditory parts of the sensory system.[1]
The outer ear funnels sound vibrations to the eardrum, increasing the sound pressure in the middle frequency range. The middle-ear ossicles further amplify the vibration pressure roughly 20 times. The base of the stapes couples the vibrations into the cochlea via the oval window, which sets the perilymph (the fluid found throughout much of the inner ear) in motion and causes the round window to bulge out as the oval window bulges in.[citation needed]
The vestibular and tympanic ducts are filled with perilymph, and the smaller cochlear duct between them is filled with endolymph, a fluid with a very different ion concentration and voltage.[2][3][4] Perilymph vibrations in the vestibular duct bend the outer hair cells of the organ of Corti (arranged in four rows), activating prestin at the cell tips. This makes the cells elongate and shrink (the somatic motor) and shifts the hair bundles, which in turn electrically affects the basilar membrane's movement (the hair-bundle motor). These outer-hair-cell motors amplify the traveling wave amplitudes over 40-fold.[5] The outer hair cells (OHC) are minimally innervated by the spiral ganglion in slow (unmyelinated) reciprocal communicative bundles (30+ hair cells per nerve fiber); this contrasts with the inner hair cells (IHC), which have only afferent innervation (30+ nerve fibers per hair cell) but are heavily connected. There are three to four times as many OHCs as IHCs. The basilar membrane (BM) is a barrier between the scalae, along the edge of which the IHCs and OHCs sit. Basilar membrane width and stiffness vary to control the frequencies best sensed by the IHCs: at the cochlear base the BM is narrowest and stiffest (high frequencies), while at the cochlear apex it is widest and least stiff (low frequencies). The tectorial membrane (TM) helps facilitate cochlear amplification by stimulating the OHCs (directly) and the IHCs (via endolymph vibrations). TM width and stiffness parallel the BM's and similarly aid in frequency differentiation.[6][7][8][9][10][11][12][13][14]
The superior olivary complex (SOC), in the pons, is the first point of convergence of the left and right cochlear signals. The SOC has 14 described nuclei; their abbreviations are used here (see Superior olivary complex for their full names). The MSO determines the angle a sound came from by measuring time differences between the left and right signals. The LSO normalizes sound levels between the ears; it uses sound intensities to help determine the sound angle. The LSO innervates the IHCs, and the VNTB innervates the OHCs. The MNTB inhibits the LSO via glycine. The LNTB is glycine-immune and is used for fast signalling. The DPO is high-frequency and tonotopic; the DLPO is low-frequency and tonotopic. The VLPO has the same function as the DPO but acts in a different area. The PVO, CPO, RPO, VMPO, ALPO and SPON (inhibited by glycine) are various signalling and inhibiting nuclei.[15][16][17][18]
The trapezoid body is where most of the cochlear nucleus (CN) fibers decussate (cross from left to right and vice versa); this crossing aids in sound localization.[19] The CN divides into ventral (VCN) and dorsal (DCN) regions. The VCN has three nuclei.[clarification needed] Bushy cells transmit timing information; their shape averages out timing jitter. Stellate (chopper) cells encode sound spectra (peaks and valleys) through spatial neural firing rates based on auditory input strength (rather than frequency). Octopus cells have close to the best temporal precision while firing; they decode the auditory timing code. The DCN has two nuclei and also receives information from the VCN. Fusiform cells integrate information to determine spectral cues to location (for example, whether a sound originated in front or behind). The cochlear nerve fibers (30,000+) each have a most sensitive frequency and respond over a wide range of levels.[20][21]
Simplified, the nerve fibers' signals are transported by bushy cells to the binaural areas in the olivary complex, while signal peaks and valleys are noted by stellate cells and signal timing is extracted by octopus cells. The lateral lemniscus has three nuclei: the dorsal nuclei respond best to bilateral input and have complexity-tuned responses; the intermediate nuclei have broad tuning responses; and the ventral nuclei have broad and moderately complex tuning curves. The ventral nuclei of the lateral lemniscus help the inferior colliculus (IC) decode amplitude-modulated sounds by giving both phasic and tonic responses (short and long notes, respectively).

The IC also receives inputs from other areas, including visual areas (the pretectal area, which moves the eyes toward a sound, and the superior colliculus, which handles orientation and behavior toward objects as well as eye movements (saccades)), the pons (the superior cerebellar peduncle, a thalamus-to-cerebellum connection for hearing a sound and learning a behavioral response), the spinal cord (the periaqueductal grey, for hearing a sound and instinctively moving), and the thalamus. These connections implicate the IC in the startle response and ocular reflexes. Beyond multisensory integration, the IC responds to specific amplitude-modulation frequencies, allowing for the detection of pitch. The IC also determines time differences in binaural hearing.[22]

The medial geniculate nucleus divides into a ventral division (relay and relay-inhibitory cells: frequency, intensity and binaural information relayed topographically), a dorsal division (broadly and complexly tuned nuclei: connection to somatosensory information), and a medial division (broadly, complexly and narrowly tuned nuclei: relaying intensity and sound duration). The auditory cortex (AC) brings sound into awareness and perception. The AC identifies sounds (sound-name recognition) and also identifies the sound's origin location. The AC is a topographic frequency map, with bundles reacting to different harmonies, timing and pitch.
The right-hand AC is more sensitive to tonality, while the left-hand AC is more sensitive to minute sequential differences in sound.[23][24] The rostromedial and ventrolateral prefrontal cortices are involved in activation during tonal space processing and in storing short-term memories, respectively.[25] Heschl's gyrus (the transverse temporal gyrus) includes Wernicke's area and its functionality; it is heavily involved in emotion-sound, emotion-facial-expression and sound-memory processes. The entorhinal cortex is the part of the hippocampal system that aids and stores visual and auditory memories.[26][27] The supramarginal gyrus (SMG) aids in language comprehension and is responsible for compassionate responses. The SMG links sounds to words with the angular gyrus and aids in word choice, and it integrates tactile, visual and auditory information.[28][29]
The folds of cartilage surrounding the ear canal are called the auricle. Sound waves are reflected and attenuated when they hit the auricle, and these changes provide additional information that helps the brain determine the direction of the sound.
The sound waves enter the auditory canal, a deceptively simple tube. The ear canal amplifies sounds between 3 and 12 kHz.[citation needed] The tympanic membrane, at the far end of the ear canal, marks the beginning of the middle ear.
Sound waves travel through the ear canal and hit the tympanic membrane, or eardrum. This wave information travels across the air-filled middle ear cavity via a series of delicate bones: the malleus (hammer), incus (anvil) and stapes (stirrup). These ossicles act as a lever, converting the lower-pressure eardrum sound vibrations into higher-pressure sound vibrations at another, smaller membrane called the oval window or vestibular window. The manubrium (handle) of the malleus articulates with the tympanic membrane, while the footplate (base) of the stapes articulates with the oval window. Higher pressure is necessary at the oval window than at the tympanic membrane because the inner ear beyond the oval window contains liquid rather than air. The stapedius reflex of the middle ear muscles helps protect the inner ear from damage by reducing the transmission of sound energy when the stapedius muscle is activated in response to sound. The middle ear still contains the sound information in wave form; it is converted to nerve impulses in the cochlea.
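The pressure gain described above comes from two factors: the eardrum's larger area concentrating force onto the much smaller oval window, and the ossicles' lever action. A minimal sketch of that arithmetic, using typical textbook values that are assumptions not stated in this article (effective eardrum area of about 55 mm², stapes footplate of about 3.2 mm², lever ratio of about 1.3):

```python
import math

# Assumed textbook values, not from this article:
eardrum_area_mm2 = 55.0    # effective area of the tympanic membrane
footplate_area_mm2 = 3.2   # area of the stapes footplate at the oval window
lever_ratio = 1.3          # mechanical advantage of the malleus/incus lever

# Same force on a smaller area means higher pressure; the lever adds a bit more.
area_ratio = eardrum_area_mm2 / footplate_area_mm2
pressure_gain = area_ratio * lever_ratio
gain_db = 20 * math.log10(pressure_gain)

print(f"pressure gain ~ {pressure_gain:.1f}x ({gain_db:.0f} dB)")
```

With these assumed numbers the combined gain comes out near the roughly 20-fold figure quoted earlier, which is why an air-to-liquid interface that would otherwise reflect most sound energy can still drive the cochlear fluids.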
The inner ear consists of the cochlea and several non-auditory structures. The cochlea has three fluid-filled sections (the scala media, scala tympani and scala vestibuli) and supports a fluid wave driven by pressure across the basilar membrane separating two of the sections. One section, called the cochlear duct or scala media, contains endolymph. The organ of Corti is located in this duct on the basilar membrane, and transforms mechanical waves into electric signals in neurons. The other two sections, the scala tympani and the scala vestibuli, are located within the bony labyrinth and are filled with a fluid called perilymph, similar in composition to cerebrospinal fluid. The chemical difference between endolymph and perilymph is important for the function of the inner ear because of the electrical potential differences their differing potassium and calcium ion concentrations create.[citation needed]
The plan view of the human cochlea (typical of all mammals and most vertebrates) shows where specific frequencies occur along its length. Frequency is an approximately exponential function of position along the cochlea within the organ of Corti. In some species, such as bats and dolphins, the relationship is expanded in specific areas to support their active sonar capability.
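The exponential frequency-place relationship is commonly modeled with the Greenwood function, f = A(10^(ax) − k). The constants below are the widely quoted fits for the human cochlea; they are an assumption here, since the article itself gives no numbers:

```python
def greenwood_hz(x, A=165.4, a=2.1, k=0.88):
    """Best frequency (Hz) at relative position x along the basilar
    membrane, where x = 0 is the apex and x = 1 is the base.
    Constants are the commonly cited human fits (assumed)."""
    return A * (10 ** (a * x) - k)

# Sample the map from apex (low frequencies) to base (high frequencies).
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> {greenwood_hz(x):8.0f} Hz")
```

With these constants the map spans roughly 20 Hz at the apex to about 20 kHz at the base, matching the usual range of human hearing; each equal step along the membrane multiplies the best frequency by a constant factor, which is the "approximately exponential" behavior described above.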
The organ of Corti forms a ribbon of sensory epithelium that runs lengthwise down the cochlea's entire scala media. Its hair cells transform the fluid waves into nerve signals; this is the first step in a chain of processing that leads to the full range of auditory reactions and sensations.
Hair cells are columnar cells, each with a "hair bundle" of 100–200 specialized stereocilia at the top, for which they are named. There are two types of hair cell specific to the auditory system: inner and outer hair cells. Inner hair cells are the mechanoreceptors for hearing: they transduce the vibration of sound into electrical activity in nerve fibers, which is transmitted to the brain. Outer hair cells are a motor structure: sound energy causes changes in the shape of these cells, which amplifies sound vibrations in a frequency-specific manner. Lightly resting atop the longest cilia of the inner hair cells is the tectorial membrane, which moves back and forth with each cycle of sound, tilting the cilia and thereby eliciting the hair cells' electrical responses.
Inner hair cells, like the photoreceptor cells of the eye, show a graded response, instead of the spikes typical of other neurons. These graded potentials are not bound by the "all or none" properties of an action potential.
At this point, one may ask how such a small deflection of the hair bundle triggers a difference in membrane potential. The current model is that the cilia are attached to one another by "tip links", structures that link the tip of one cilium to another. As they stretch and compress, the tip links may open an ion channel and produce the receptor potential in the hair cell. It has recently been shown that cadherin-23 (CDH23) and protocadherin-15 (PCDH15) are the adhesion molecules associated with these tip links.[30] It is thought that a calcium-driven motor shortens these links to regenerate tension, and this regeneration of tension allows the cell to remain sensitive to prolonged auditory stimulation.[31]
Afferent neurons innervate cochlear inner hair cells, at synapses where the neurotransmitter glutamate communicates signals from the hair cells to the dendrites of the primary auditory neurons.
There are far fewer inner hair cells in the cochlea than afferent nerve fibers – many auditory nerve fibers innervate each hair cell. The neural dendrites belong to neurons of the auditory nerve, which in turn joins the vestibular nerve to form the vestibulocochlear nerve, or cranial nerve number VIII.[32] The region of the basilar membrane supplying the inputs to a particular afferent nerve fibre can be considered to be its receptive field.
Efferent projections from the brain to the cochlea also play a role in the perception of sound, although this is not well understood. Efferent synapses occur on outer hair cells and on afferent (towards the brain) dendrites under inner hair cells.
The cochlear nucleus is the first site of the neuronal processing of the newly converted "digital" data from the inner ear (see also binaural fusion). In mammals, this region is anatomically and physiologically split into two regions, the dorsal cochlear nucleus (DCN), and ventral cochlear nucleus (VCN). The VCN is further divided by the nerve root into the posteroventral cochlear nucleus (PVCN) and the anteroventral cochlear nucleus (AVCN).[33]
The trapezoid body is a bundle of decussating fibers in the ventral pons that carry information used for binaural computations in the brainstem. Some of these axons come from the cochlear nucleus and cross over to the other side before traveling on to the superior olivary nucleus. This is believed to help with localization of sound.[34]
The superior olivary complex is located in the pons, and receives projections predominantly from the ventral cochlear nucleus, although the dorsal cochlear nucleus projects there as well, via the ventral acoustic stria. Within the superior olivary complex lies the lateral superior olive (LSO) and the medial superior olive (MSO). The former is important in detecting interaural level differences while the latter is important in distinguishing interaural time difference.[17]
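The interaural time difference the MSO is thought to detect can be sketched with a standard simplification that is an assumption here, not part of the article: treat the ears as two points separated by about 0.21 m in free air, so a source at azimuth θ arrives earlier at the nearer ear by roughly d·sin(θ)/c (real heads add diffraction effects that lengthen this slightly):

```python
import math

def itd_seconds(azimuth_deg, ear_distance_m=0.21, c_m_per_s=343.0):
    """Approximate interaural time difference for a distant source at the
    given azimuth (0 = straight ahead, 90 = directly to one side).
    Uses the simple two-point model ITD = d * sin(theta) / c (assumed)."""
    return ear_distance_m * math.sin(math.radians(azimuth_deg)) / c_m_per_s

# ITD grows from zero straight ahead to a maximum directly to the side.
for angle in (0, 30, 60, 90):
    print(f"{angle:3d} deg -> {itd_seconds(angle) * 1e6:6.1f} microseconds")
```

Even the maximum delay in this model is only about 0.6 milliseconds, which illustrates why MSO neurons must resolve timing differences on a scale of tens of microseconds to distinguish nearby source angles.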
The lateral lemniscus is a tract of axons in the brainstem that carries information about sound from the cochlear nucleus to various brainstem nuclei and ultimately the contralateral inferior colliculus of the midbrain.
The inferior colliculi (IC) are located just below the visual processing centers known as the superior colliculi. The central nucleus of the IC is a nearly obligatory relay in the ascending auditory system, and most likely acts to integrate information (specifically regarding sound source localization from the superior olivary complex[16] and dorsal cochlear nucleus) before sending it to the thalamus and cortex.[1] The inferior colliculus also receives descending inputs from the auditory cortex and auditory thalamus (or medial geniculate nucleus).[35]
The medial geniculate nucleus is part of the thalamic relay system.
The primary auditory cortex is the first region of cerebral cortex to receive auditory input.
Perception of sound is associated with the left posterior superior temporal gyrus (STG). The superior temporal gyrus contains several important structures of the brain, including Brodmann areas 41 and 42, marking the location of the primary auditory cortex, the cortical region responsible for the sensation of basic characteristics of sound such as pitch and rhythm. We know from research in nonhuman primates that the primary auditory cortex can probably be divided further into functionally differentiable subregions.[36][37][38][39] [40][41][42] The neurons of the primary auditory cortex can be considered to have receptive fields covering a range of auditory frequencies and have selective responses to harmonic pitches.[43] Neurons integrating information from the two ears have receptive fields covering a particular region of auditory space.
The primary auditory cortex is surrounded by secondary auditory cortex, and interconnects with it. These secondary areas interconnect with further processing areas in the superior temporal gyrus, in the dorsal bank of the superior temporal sulcus, and in the frontal lobe. In humans, connections of these regions with the middle temporal gyrus are probably important for speech perception. The frontotemporal system underlying auditory perception allows us to distinguish sounds as speech, music, or noise.
From the primary auditory cortex emerge two separate pathways: the auditory ventral stream and auditory dorsal stream.[44] The auditory ventral stream includes the anterior superior temporal gyrus, anterior superior temporal sulcus, middle temporal gyrus and temporal pole. Neurons in these areas are responsible for sound recognition, and extraction of meaning from sentences. The auditory dorsal stream includes the posterior superior temporal gyrus and sulcus, inferior parietal lobule and intra-parietal sulcus. Both pathways project in humans to the inferior frontal gyrus. The most established role of the auditory dorsal stream in primates is sound localization. In humans, the auditory dorsal stream in the left hemisphere is also responsible for speech repetition and articulation, phonological long-term encoding of word names, and verbal working memory.
Proper function of the auditory system is required to be able to sense, process, and understand sound from the surroundings. Difficulty in sensing, processing and understanding sound input can adversely affect an individual's ability to communicate, learn and effectively complete routine daily tasks.[45]
In children, early diagnosis and treatment of impaired auditory system function is an important factor in ensuring that key social, academic and speech/language developmental milestones are met.[46]
Impairment of the auditory system can occur at any of the stages described above.