Audiometry
| Audiometry | |
|---|---|
| ICD-10-PCS | F13Z1 - F13Z6 |
| ICD-9-CM | 95.41 |
| MeSH | D001299 |
| MedlinePlus | 003341 |
Audiometry (from Latin audīre 'to hear' and metria 'to measure') is a branch of audiology and the science of measuring hearing acuity for variations in sound intensity and pitch and for tonal purity, involving thresholds and differing frequencies.[1] Typically, audiometric tests determine a subject's hearing levels with the help of an audiometer, but may also measure ability to discriminate between different sound intensities, recognize pitch, or distinguish speech from background noise. Acoustic reflex and otoacoustic emissions may also be measured. Results of audiometric tests are used to diagnose hearing loss or diseases of the ear, and often make use of an audiogram.[2]
History
The basic requirements of the field were the ability to produce a repeating sound, a way to attenuate its amplitude, a way to transmit the sound to the subject, and a means to record and interpret the subject's responses to the test.
Mechanical "acuity meters" and tuning forks
For many years there was desultory use of various devices capable of producing sounds of controlled intensity. The first types were clock-like, giving off air-borne sound to the tubes of a stethoscope; the sound distributor head had a valve that could be gradually closed. Another model used a tripped hammer to strike a metal rod and produce the testing sound; in another, a tuning fork was struck. The first such measurement device for testing hearing was described by Wolke (1802).[3]
Pure tone audiometry and audiograms
Following development of the induction coil in 1849 and audio transducers (telephone) in 1876, a variety of audiometers were invented in the United States and overseas. These early audiometers were known as induction-coil audiometers because they used an induction coil to generate the test signal.
- Hughes 1879
- Hartmann 1878
In 1885, Arthur Hartmann designed an "Auditory Chart" which included left and right ear tuning fork representation on the x-axis and percent of hearing on the y-axis.
In 1899, Carl E. Seashore, professor of psychology at the University of Iowa, introduced the audiometer as an instrument to measure the "keenness of hearing" whether in the laboratory, schoolroom, or office of the psychologist or aurist. The instrument operated on a battery and presented a tone or a click; it had an attenuator with a scale of 40 steps. His machine became the basis of the audiometers later manufactured at Western Electric.
- Cordia C. Bunch 1919
The concept of a frequency versus sensitivity (amplitude) audiogram plot of human hearing sensitivity was conceived by German physicist Max Wien in 1903. The first vacuum-tube implementations followed in November 1919, when two groups of researchers — K.L. Schaefer and G. Gruschke, and B. Griessmann and H. Schwarzkopf — demonstrated before the Berlin Otological Society two instruments designed to test hearing acuity. Both were built with vacuum tubes, and their designs were characteristic of the two basic types of electronic circuits used in most electronic audio devices for the next two decades. Neither device was developed commercially for some time, although the second was later manufactured under the name "Otaudion." In the United States, Western Electric built the 1A audiometer in 1922, developed by physicists Harvey Fletcher and Robert Wegel. That same year, otolaryngologist Edmund P. Fowler, together with Fletcher and Wegel of Western Electric Co., first plotted frequency at octave intervals along the x-axis and intensity downward along the y-axis as a degree of hearing loss. Fletcher et al. also coined the term "audiogram" at that time.
With further technologic advances, bone conduction testing capabilities became a standard component of all Western Electric audiometers by 1928.
Electrophysiologic audiometry
In 1967, Sohmer and Feinmesser were the first to publish auditory brainstem responses (ABR), recorded with surface electrodes in humans, showing that cochlear potentials could be obtained non-invasively.
Otoacoustic audiometry
In 1978, David Kemp reported that sound energy produced by the ear could be detected in the ear canal—otoacoustic emissions. The first commercial system for detecting and measuring otoacoustic emissions was produced in 1988.
Auditory system
Components
The auditory system is composed of epithelial, osseous, vascular, neural, and neocortical tissues. The anatomical divisions are the external ear canal and tympanic membrane, the middle ear, the inner ear, the eighth cranial (auditory) nerve, and the central auditory processing portions of the neocortex.
Hearing process
Sound waves enter the outer ear and travel through the external auditory canal until they reach the tympanic membrane, causing the membrane and the attached chain of auditory ossicles to vibrate. The motion of the stapes against the oval window sets up waves in the fluids of the cochlea, causing the basilar membrane to vibrate. This stimulates the sensory cells of the organ of Corti, atop the basilar membrane, to send nerve impulses to the central auditory processing areas of the brain, the auditory cortex, where sound is perceived and interpreted.
Sensory and psychodynamics of human hearing
Non-linearity
Temporal synchronization – sound localization and echo location
Parameters of human hearing
[edit]Audiometric testing
- Objectives: integrity, structure, function, and freedom from infirmity.
Normative standards
- ISO 7029:2000 and BS 6951
Types of audiometry
Subjective audiometry
Subjective audiometry requires the cooperation of the subject and relies upon subjective responses, which may be both qualitative and quantitative and involve attention (focus), reaction time, etc.
- Differential testing is conducted with a low-frequency (usually 512 Hz) tuning fork. These tests are used to assess asymmetrical hearing and air/bone conduction differences. They are simple manual physical tests and do not result in an audiogram.
- Weber test
- Bing test
- Rinne test
- Schwabach test, which compares the patient's bone conduction with the examiner's
- Pure tone audiometry is a standardized hearing test in which air conduction hearing thresholds in decibels (dB) for a set of fixed frequencies between 250 Hz and 8,000 Hz are plotted on an audiogram for each ear independently. A separate set of measurements is made for bone conduction. There is also high-frequency pure tone audiometry covering the frequency range from 8,000 Hz to 16,000 Hz.
- Threshold equalizing noise (TEN) test
- Masking level difference (MLD) test
- Psychoacoustic (or psychophysical) tuning curve test
- Speech audiometry is a diagnostic hearing test designed to test word or speech recognition. It has become a fundamental tool in hearing-loss assessment. In conjunction with pure-tone audiometry, it can aid in determining the degree and type of hearing loss. Speech audiometry also provides information regarding discomfort or tolerance to speech stimuli and information on word recognition abilities. In addition, information gained by speech audiometry can help determine proper gain and maximum output of hearing aids and other amplifying devices for patients with significant hearing losses and help assess how well they hear in noise. Speech audiometry also facilitates audiological rehabilitation management.
Speech audiometry may include:
- Speech awareness threshold
- Speech recognition threshold
- Suprathreshold word-recognition
- Sentence testing
- Dichotic listening test
- Loudness levels determination
- Békésy audiometry, also called decay audiometry: audiometry in which the subject controls increases and decreases in intensity as the frequency of the stimulus is gradually changed, so that the subject traces back and forth across the threshold of hearing over the frequency range of the test. The test is quick and reliable, so it was frequently used in military and industrial contexts.
- Audiometry of children
Objective audiometry
Objective audiometry is based on physical, acoustic or electrophysiologic measurements and does not depend on the cooperation or subjective responses of the subject.
- Caloric stimulation/reflex test uses temperature difference between hot and cold water or air delivered into the ear to test for neural damage. Caloric stimulation of the ear results in rapid side-to-side eye movements called nystagmus. Absence of nystagmus may indicate auditory nerve damage. This test will often be done as part of another test called electronystagmography.
- Electronystagmography (ENG) uses skin electrodes and an electronic recording device to measure nystagmus evoked by procedures such as caloric stimulation of the ear.
- Acoustic immittance audiometry - Immittance audiometry is an objective technique which evaluates middle ear function by three procedures: static immittance, tympanometry, and the measurement of acoustic reflex threshold sensitivity. Immittance audiometry is superior to pure tone audiometry in detecting middle ear pathology.
- Tympanometry
- Acoustic reflex thresholds
- Acoustic reflectometry
- Wide-band absorbance audiometry, also called 3D tympanometry
- Evoked potential audiometry
- N1-P2 cortical audio evoked potential (CAEP) audiometry
- ABR is a neurologic test of auditory brainstem function in response to auditory (click) stimuli.
- Electrocochleography, a variant of ABR, tests the impulse transmission function of the cochlea in response to auditory (click) stimuli. It is most often used to detect endolymphatic hydrops in the diagnosis/assessment of Meniere's disease.
- Auditory steady-state response (ASSR) audiometry
- Vestibular evoked myogenic potential (VEMP) test, a variant of ABR that tests the integrity of the saccule
- Otoacoustic emission audiometry - this test can differentiate between the sensory and neural components of sensorineural hearing loss.
- Distortion product otoacoustic emissions (DPOAE) audiometry
- Transient evoked otoacoustic emissions (TEOAE) audiometry
- Sustained frequency otoacoustic emissions (SFOAE) audiometry - At present, SFOAEs are not used clinically.
- In situ audiometry: a technique for measuring not only the condition of the person's auditory system, but also the characteristics of sound reproduction devices, in-the-canal hearing aids, vents and sound tubes of hearing aids.[4][5]
Audiograms
The result of most audiometry is an audiogram plotting some measured dimension of hearing, either graphically or tabularly.
The most common type of audiogram is the result of a pure tone audiometry hearing test which plots frequency versus amplitude sensitivity thresholds for each ear along with bone conduction thresholds at 8 standard frequencies from 250 Hz to 8000 Hz. A pure tone audiometry hearing test is the gold standard for evaluation of hearing loss or disability.[medical citation needed] Other types of hearing tests also generate graphs or tables of results that may be loosely called 'audiograms', but the term is universally used to refer to the result of a pure tone audiometry hearing test.
Hearing assessment
Apart from testing hearing, part of the function of audiometry is in assessing or evaluating hearing from the test results. The most commonly used assessment of hearing is the determination of the threshold of audibility, i.e. the level of sound required to be just audible. This level can vary for an individual over a range of up to 5 decibels from day to day and from determination to determination, but it provides an additional and useful tool in monitoring the potential ill effects of exposure to noise. Hearing loss may be unilateral or bilateral, and bilateral hearing loss may not be symmetrical. The most common types of hearing loss, due to age and noise exposure, are usually bilateral and symmetrical.
In addition to traditional audiometry, hearing assessment can be performed with mobile applications that test a standard set of frequencies (an audiogram) to detect possible hearing impairments.[6]

Hearing loss classification
The primary focus of audiometry is assessment of hearing status and hearing loss, including extent, type and configuration.
- There are four defined degrees of hearing loss: mild, moderate, severe and profound.
- Hearing loss may be divided into four types: conductive hearing loss, sensorineural hearing loss, central auditory processing disorders, and mixed types.
- Hearing loss may be unilateral or bilateral, of sudden onset or progressive, and temporary or permanent.
Hearing loss may be caused by a number of factors including heredity, congenital conditions, age-related (presbycusis) and acquired factors like noise-induced hearing loss, ototoxic chemicals and drugs, infections, and physical trauma.
Clinical practice
Audiometric testing may be performed by a general practitioner, an otolaryngologist (a specialist physician also called an ENT), an audiologist holding the CCC-A (Certificate of Clinical Competence in Audiology), a certified school audiometrist (a practitioner analogous to an optometrist who tests eyes), and sometimes other trained practitioners. Practitioners are certified by the American Board of Audiology (ABA) and licensed by various state boards regulating workplace health & safety, occupational professions, or ...
Schools
Occupational testing
Noise-induced hearing loss
Workplace and environmental noise is the most prevalent cause of hearing loss in the United States and elsewhere.
Research
- Computer modeling of patterns of hearing deficit
- 3D longitudinal profiles of hearing loss including age axis (presbycusis study)
See also
References
[edit]- ^ Willems, Patrick J. (2004). Genetic hearing loss. CRC Press. pp. 34–. ISBN 978-0-8247-4309-3. Retrieved 23 June 2011.
- ^ Roeser, Ross J. (2013). Roeser's audiology desk reference (2nd ed.). New York: Thieme. ISBN 9781604063981. OCLC 704384422.
- ^ Feldmann, H (September 1992). "[History of instrumental measuring of hearing acuity: the first acumeter]". Laryngo- Rhino- Otologie. 71 (9): 477–82. doi:10.1055/s-2007-997336. PMID 1388477. S2CID 260205193.
- ^ Vashekevich, M.I.; Azarov, I.S.; Petrovskiy, A.A. Cosine-Modulated Filter Banks with Phase Conversion: Realization and Use in Hearing Aids. Moscow: Goryachaya Liniya-Telecom, 2014. 210 pp.
- ^ Vonlanthen, A.; Horst, A. Hearing Aids. Rostov-on-Don: Phoenix, 2009. 304 pp.
- ^ Majumder, Sumit; Deen, M. Jamal (2019-05-09). "Smartphone Sensors for Health Monitoring and Diagnosis". Sensors. 19 (9): 2164. Bibcode:2019Senso..19.2164M. doi:10.3390/s19092164. ISSN 1424-8220. PMC 6539461. PMID 31075985.
Auditory System Basics
Anatomical Components
The outer ear, or external ear, consists of the auricle (pinna) and the external auditory canal. The pinna, composed of elastic cartilage covered by skin, serves to collect and funnel sound waves into the ear canal, enhancing sound localization and directing airborne vibrations toward the middle ear.[7] The external auditory canal, a curved tube about 2.5 cm long lined with skin, cerumen-producing glands, and hair, further amplifies sound frequencies around 2-5 kHz while protecting the inner structures from foreign particles.[8]
The middle ear is an air-filled cavity within the temporal bone, separated from the outer ear by the tympanic membrane (eardrum) and connected to the nasopharynx via the Eustachian tube for pressure equalization. The tympanic membrane, a thin, fibrous structure, vibrates in response to sound waves entering the ear canal. These vibrations are transmitted and amplified by the three ossicles: the malleus (hammer), attached to the tympanic membrane; the incus (anvil), articulating with the malleus; and the stapes (stirrup), which connects to the oval window of the inner ear. This ossicular chain functions primarily in impedance matching, overcoming the acoustic impedance mismatch between air and the fluid-filled inner ear by increasing the force and decreasing the velocity of the vibrations, thereby efficiently transferring sound energy.[9][10]
The inner ear, housed in the bony labyrinth of the temporal bone, includes the cochlea, a spiral-shaped, fluid-filled structure approximately 35 mm long in humans that transduces mechanical vibrations into neural signals. The cochlea consists of three interconnected scalae: the scala vestibuli and scala tympani, filled with perilymph (similar to extracellular fluid), and the central scala media (cochlear duct), filled with endolymph (high in potassium).[11] Within the scala media lies the organ of Corti, a complex sensory epithelium resting on the basilar membrane, which separates the scala media from the scala tympani. The organ of Corti contains specialized mechanoreceptor hair cells—inner hair cells for signal transmission and outer hair cells for amplification—whose stereocilia are embedded in the overlying tectorial membrane. These hair cells are tonotopically organized along the basilar membrane, with high-frequency sounds stimulating the basal (stiffer, narrower) region and low-frequency sounds affecting the apical (more flexible, wider) region, enabling frequency discrimination.[12][13]
The auditory nerve, or cochlear division of cranial nerve VIII (vestibulocochlear nerve), originates from the spiral ganglion neurons in the cochlea and carries the transduced electrical signals from the hair cells as action potentials. These bipolar neurons synapse with hair cells at one end and project centrally via myelinated axons that bundle into the auditory nerve, entering the brainstem at the pontomedullary junction to terminate in the cochlear nuclei. This pathway initiates the central auditory processing, conveying frequency, intensity, and temporal information from the periphery.[14][15]
Physiological Hearing Process
The physiological process of hearing begins when sound waves, consisting of pressure variations in air, enter the external auditory canal and strike the tympanic membrane, causing it to vibrate in synchrony with the incoming waves.[14] These vibrations are transmitted to the middle ear ossicles—the malleus, incus, and stapes—which act as a lever system to amplify the mechanical energy. The ossicles provide an impedance-matching function, increasing the pressure by a factor of approximately 20-30 dB to efficiently couple the low-impedance air medium to the high-impedance fluid of the inner ear.[13][16]
In the cochlea, the stapes footplate pushes against the oval window, displacing perilymph fluid in the scala vestibuli and creating pressure waves that propagate through the cochlear duct. This fluid motion causes a traveling wave along the basilar membrane, a flexible structure within the cochlea that separates the scala media from the scala tympani. The traveling wave peaks at specific locations determined by the sound's frequency due to the membrane's tonotopic organization, where stiffness decreases progressively from the base (high frequencies) to the apex (low frequencies), resulting in greater displacement at frequency-matched sites.[14][13]
The peaking wave shears the basilar membrane against the overlying tectorial membrane, deflecting the stereocilia of hair cells in the organ of Corti. This mechanical deflection opens mechanically gated potassium channels, leading to depolarization of the hair cells and subsequent calcium influx, which triggers the release of the neurotransmitter glutamate at synapses with auditory nerve fibers. Inner hair cells primarily transduce the signal to the auditory nerve, while outer hair cells enhance sensitivity through electromotility, actively contracting and amplifying basilar membrane motion by up to 40 dB via prestin-mediated length changes in response to electrical signals.[14][13][16]
Auditory nerve fibers encode the mechanical signal into neural activity through distinct mechanisms: phase-locking, where action potentials synchronize to specific phases of low-frequency sounds (below ~1 kHz); rate coding, where firing rates increase with sound intensity for higher frequencies; and the volley theory, whereby synchronized groups of fibers collectively represent intermediate frequencies (1-5 kHz) to maintain temporal precision. This neural encoding preserves both temporal and intensity information as signals ascend the auditory pathway.[14][13]
Human Hearing Parameters
Frequency Range and Sensitivity
The human audible frequency range typically spans from 20 Hz to 20 kHz, encompassing the spectrum of sounds detectable by the ear under optimal conditions.[17] This range narrows with age due to presbycusis, a form of sensorineural hearing loss that primarily affects high frequencies, often shifting the upper limit to approximately 8-10 kHz in older adults.[18] Children and young individuals generally retain sensitivity up to 20 kHz, while many adults experience a decline, with detection thresholds rising sharply above 16 kHz.[19][20]
Human sensitivity to frequencies varies significantly, as illustrated by equal-loudness contours, originally mapped by Fletcher and Munson in their seminal 1933 study. These contours demonstrate that the ear's peak sensitivity occurs between 2 and 5 kHz, where the lowest detection threshold is approximately 0 dB SPL at around 4 kHz.[21] Thresholds across the audible range can differ by up to 100 dB, with much higher intensities required to perceive low frequencies below 100 Hz or high frequencies above 10 kHz at equivalent perceived loudness levels.[22]
The perceptual organization of frequencies is further structured into critical bands, approximately 25 in number, as defined by the Bark scale—a psychoacoustic model that approximates the ear's frequency resolution. These bands, ranging from 1 to 24 Barks, cover the audible spectrum and form the basis for auditory masking effects, where a stronger tone within the same band obscures a weaker one.[23][24]
Frequency discrimination relies on tonotopic mapping along the cochlea, where specific locations on the basilar membrane resonate to particular frequencies, aligning with the place theory of pitch perception. This spatial organization enables the inner ear to differentiate tones based on their resonant positions, supporting precise frequency selectivity.[25][26]
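The frequency-to-Bark mapping can be approximated in closed form. Below is a minimal sketch using Zwicker and Terhardt's analytic approximation; the choice of that particular formula is an assumption here, since the text above cites only the Bark scale itself:

```python
import math

def bark(f_hz: float) -> float:
    """Approximate critical-band rate (Bark) for a frequency in hertz,
    per Zwicker and Terhardt's analytic fit to the Bark scale."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

# The audible spectrum maps onto roughly 24-25 Barks.
for f in (100, 500, 1000, 4000, 10000, 15500):
    print(f"{f:>6} Hz -> {bark(f):4.1f} Bark")
```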
Amplitude Thresholds and Dynamics
The absolute threshold of hearing (ATH) represents the minimum sound intensity that a person with normal hearing can detect 50% of the time in a quiet environment. This threshold is standardized at 0 dB hearing level (HL) for a pure tone at 1 kHz, equivalent to approximately 7 dB sound pressure level (SPL) (about 35.5 μPa) under reference conditions.[27][28] The ATH is frequency-dependent, with greatest sensitivity typically between 1 and 4 kHz.[29]
Sound pressure level, the standard measure for intensity, is defined by the equation $L_p = 20 \log_{10}(p/p_0)$ dB, where $p$ is the root-mean-square sound pressure in pascals and $p_0 = 20\ \mu$Pa is the reference pressure approximating the ATH.[30] Human hearing exhibits a wide dynamic range, extending from the ATH at 0 dB SPL to the pain threshold of 120-140 dB SPL, beyond which sounds become physically painful.[31][32]
In cases of cochlear damage, such as sensorineural hearing loss, this range compresses due to a phenomenon called recruitment, where loudness increases abnormally rapidly for intensities above the elevated threshold, reducing the difference between comfortable and uncomfortable levels.[33][34] The uncomfortable loudness level (UCL), the intensity at which sounds become intolerable, typically occurs at 100-110 dB HL for speech stimuli in individuals with normal hearing.[35][36]
Perceived loudness does not scale linearly with physical intensity but approximates the Weber-Fechner law, where loudness $L$ is proportional to the logarithm of intensity $I$, expressed as $L = k \log(I/I_0)$.[37] To quantify subjective loudness, the sone scale is used, with 1 sone defined as the loudness of a 1 kHz tone at 40 dB SPL (equivalent to 40 phons); each doubling of sones corresponds to a perceived doubling of loudness.[38]
Prolonged exposure to sounds exceeding 85 dB SPL can induce temporary threshold shift (TTS), a reversible elevation in hearing threshold that typically recovers within hours to days but signals risk for permanent damage with repeated occurrences.[39][40]
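A worked sketch of the two relations above, assuming the 20 μPa reference pressure and the standard phon-to-sone rule (loudness doubles for every 10 phons above 40 phons):

```python
import math

P0 = 20e-6  # reference pressure in pascals (20 micropascals)

def spl_db(p_rms: float) -> float:
    """Sound pressure level in dB SPL for an RMS pressure in pascals."""
    return 20.0 * math.log10(p_rms / P0)

def sones_from_phons(phons: float) -> float:
    """Sone scale: 1 sone at 40 phons, doubling every 10 phons above that."""
    return 2.0 ** ((phons - 40.0) / 10.0)

print(spl_db(20e-6))         # 0.0 dB SPL: nominal threshold of hearing
print(spl_db(2.0))           # 100.0 dB SPL: nearing the discomfort range
print(sones_from_phons(40))  # 1.0 sone by definition
print(sones_from_phons(60))  # 4.0 sones: perceived as four times as loud
```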
Historical Development
Early Mechanical and Tuning Fork Methods
In the 19th century, audiometry's origins relied on simple mechanical devices to roughly evaluate hearing acuity through distance-based assessments. The watch tick test, for instance, involved determining the maximum distance at which a patient could hear the ticking of a pocket watch, with results recorded as fractions relative to normal hearing thresholds, such as 12/36 inches.[41] Similarly, the whisper test used spoken words delivered at varying intensities—from a faint whisper to a shout—to gauge the distance over which speech was audible, often standardized at 60-70 feet for conversational levels in healthy individuals.[41] These methods, advocated by figures like J.S. Prout in 1872 for the watch test and H. Knapp in 1887 for voice testing, provided crude estimates of hearing sensitivity but lacked precision due to environmental noise and subjective interpretation.[41]
Tuning forks emerged as a more standardized tool in mid-19th-century otology, typically using a 512 Hz fork to assess conduction pathways and differentiate types of hearing loss. The Weber test, developed by Ernst Heinrich Weber in 1834, involves placing the vibrating fork on the midline forehead or vertex; in conductive hearing loss, sound lateralizes to the affected ear, while in sensorineural loss, it shifts to the unaffected ear.[42] The Rinne test, introduced by Heinrich Adolf Rinne in 1855, compares air conduction (fork held near the ear) to bone conduction (stem placed on the mastoid process); normally, air conduction exceeds bone conduction, but reversal indicates conductive impairment, whereas both are reduced proportionally in sensorineural cases.[42] These tests, later explained in detail by D.B. St. John Roosa in 1881, represented a significant advance by exploiting differences in sound transmission through air and bone.[41]
Pioneering otologists like Joseph Toynbee contributed to early diagnostic tools in the 1860s, inventing an auscultation tube in 1850—later termed an otoscope—to auscultate ear sounds and visualize the tympanic membrane, aiding indirect hearing evaluations through anatomical inspection.[43] Similarly, Prosper Menière's 1861 presentation to the French Academy of Medicine described episodic vertigo accompanied by hearing loss and tinnitus as originating from inner ear pathology, such as hemorrhage, challenging prior cerebral attributions and emphasizing auditory-vestibular connections.[44] Devices like Adam Politzer's 1877 acoumeter, a hand-held mallet-struck iron cylinder for consistent air and bone conduction tones, further refined these mechanical approaches but remained limited by high cost and variability.[41]
Despite their innovations, early methods suffered from inherent limitations: they were highly subjective, required patient cooperation, offered no frequency-specific analysis, and could not quantify loss degree accurately, often yielding inconsistent results influenced by tester technique or ambient conditions.[41]
Emergence of Pure-Tone and Electrophysiological Techniques
The emergence of pure-tone audiometry in the early 20th century represented a pivotal shift toward electronic instrumentation in hearing assessment, enabling precise measurement of auditory thresholds. In 1922, physicists Harvey Fletcher and R.L. Wegel at Bell Laboratories developed the first electronic audiometer, which generated pure sinusoidal tones across a range of frequencies and allowed for systematic threshold determination.[45] This device facilitated the creation of the modern audiogram, a graphical representation plotting hearing thresholds in decibels hearing level (dB HL) against frequencies spaced in octaves from 125 Hz to 8000 Hz, capturing the primary range of human speech and environmental sounds.[3] The audiogram's standardized format provided a quantifiable baseline for diagnosing hearing impairments, moving beyond subjective mechanical tests.
Commercialization accelerated adoption, with Western Electric introducing the Model 1-A audiometer in 1922 as the first widely available electronic device for clinical use, followed by the improved Model 2-A in 1923.[46] These vacuum tube-based instruments were calibrated to deliver tones at controlled intensities, supporting both air conduction testing through earphones and early explorations of bone conduction. In the 1930s, bone conduction testing became a standard component of audiometry, utilizing a vibrator placed on the mastoid process to transmit sound directly to the cochlea and isolate middle or inner ear pathologies from outer ear obstructions.[47] This addition enhanced diagnostic specificity, as discrepancies between air and bone conduction thresholds could differentiate conductive from sensorineural losses.
To promote global consistency, the International Organization for Standardization (ISO) published Recommendation R 389 in 1964, establishing a reference zero level for pure-tone audiometer calibration based on young, otologically normal listeners under controlled conditions.[48] Audiometers evolved into distinct types: clinical models compliant with ANSI S3.6 standards for comprehensive diagnostic testing, including narrowband masking and extended frequency ranges, and screening audiometers for rapid, basic threshold checks in non-clinical settings.[49] By the 1980s, the field transitioned from analog vacuum tube and transistor circuits to digital microprocessor systems, improving signal purity, automation, and data storage while reducing size and cost.[50]
Parallel advancements in electrophysiological techniques introduced objective methods independent of patient response. In 1970, Don L. Jewett and colleagues identified brainstem auditory evoked potentials (BAEPs) as a series of short-latency waves recorded from the scalp in response to clicks, originating from the auditory nerve and brainstem nuclei.[51] This discovery enabled non-behavioral assessment of neural integrity, particularly useful for infants and uncooperative individuals. Complementing this, David T. Kemp reported the first observation of otoacoustic emissions (OAEs) in 1978, detecting faint sounds emitted from the cochlea in response to acoustic stimuli, which indicated active outer hair cell function.[52] These innovations laid the groundwork for objective audiometry, expanding beyond pure-tone thresholds to physiological validation.
Audiometric Testing Fundamentals
Normative Standards and Protocols
Audiometric testing adheres to established international standards to ensure reliability, reproducibility, and comparability of results across clinical settings. The American National Standards Institute (ANSI) S3.6-2025 specification outlines requirements for the calibration of audiometers, including electroacoustic performance, signal stability, and output limits to maintain measurement accuracy within specified tolerances.[53] Similarly, the International Organization for Standardization (ISO) 8253-1:2010 provides guidelines for pure-tone air-conduction and bone-conduction threshold audiometry, emphasizing standardized procedures for test conditions, transducer placement, and response criteria to minimize variability. These standards form the foundation for professional audiometric practice, with updates reflecting advances in equipment and methodology.
The test environment must be controlled to avoid external influences on hearing thresholds. Testing typically occurs in a sound-treated booth where ambient noise levels meet the maximum permissible ambient noise levels (MPANLs) in octave bands from 125 Hz to 8 kHz, as specified in ANSI S3.1-1999 (R2018) and ISO 8253-1:2010, to prevent masking of low-level signals.[54] Patients receive clear instructions prior to testing, such as raising a hand or pressing a button to indicate detection of the tone, ensuring consistent subjective responses.
Standard protocols begin with otoscopy to visualize the ear canal and rule out obstructions like cerumen impaction, which could affect results. Pure-tone audiometry then proceeds by testing frequencies of 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz, and 8000 Hz, starting with the better ear and progressing bilaterally. Masking is applied when necessary to isolate the test ear: for air conduction when the intensity difference between the ears exceeds the interaural attenuation (≈40-60 dB), and for bone conduction, where interaural attenuation is ≈0 dB, masking of the non-test ear is typically required.[55] The Hughson-Westlake ascending method is widely used to determine thresholds, involving presentation of tones in 5 dB steps until detection, followed by descending to confirm, achieving accuracy within ±5 dB.
For pediatric populations, protocols incorporate age-appropriate modifications to elicit reliable responses. Visual reinforcement audiometry (VRA), effective from around 6 months of age, uses lights or toys as rewards for head-turning toward sounds, adapting the standard procedure for non-verbal children while adhering to core ISO and ANSI guidelines.
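The ascending bracketing logic lends itself to a short sketch. The following is a simplified model of the modified Hughson-Westlake procedure (descend 10 dB after a response, ascend 5 dB after a miss, threshold confirmed on repeated ascending detections); the `responds` callback and the stopping rule are illustrative, not a clinical implementation:

```python
from typing import Callable

def hughson_westlake(responds: Callable[[int], bool],
                     start_db: int = 40,
                     floor_db: int = -10,
                     ceil_db: int = 100) -> int:
    """Simplified modified Hughson-Westlake search: descend 10 dB after
    each detection, ascend 5 dB after each miss; the threshold is the
    level first detected on two ascending presentations."""
    level = start_db
    hits_on_ascent = {}   # level -> detections counted on ascending runs
    ascending = False
    while floor_db <= level <= ceil_db:
        if responds(level):
            if ascending:
                hits_on_ascent[level] = hits_on_ascent.get(level, 0) + 1
                if hits_on_ascent[level] >= 2:
                    return level        # confirmed threshold
            level -= 10                 # down 10 after a response
            ascending = False
        else:
            level += 5                  # up 5 after a miss
            ascending = True
    return ceil_db                      # no response within output limits

# Deterministic "listener" whose true threshold is 25 dB HL:
print(hughson_westlake(lambda db: db >= 25))  # -> 25
```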
Subjective Testing Methods
Subjective testing methods in audiometry rely on the patient's conscious responses to auditory stimuli, providing insights into perceived hearing thresholds and speech understanding. These techniques require active participation, such as raising a hand or pressing a button when a sound is detected, making them suitable for cooperative adults and older children but challenging for infants or those with cognitive impairments. Key procedures include pure-tone audiometry, speech audiometry, and automated tracing methods like Békésy audiometry, often complemented by immittance measures to assess middle ear function indirectly through patient tolerance.
Pure-tone audiometry determines the softest sound levels (thresholds) a patient can detect across frequencies, typically from 250 Hz to 8000 Hz, using air conduction via headphones or inserts and bone conduction via a vibrator placed on the mastoid process. Air conduction thresholds evaluate the entire auditory pathway from the outer ear to the cochlea, while bone conduction bypasses the outer and middle ear to directly stimulate the cochlea, helping differentiate conductive from sensorineural hearing loss. Testing follows protocols like the Hughson-Westlake method, ascending in 5 dB steps until detection, with responses confirming the threshold. Interaural attenuation—the sound energy loss when crossing the skull—is approximately 40-60 dB for air conduction, necessitating masking of the non-test ear to prevent crossover responses, whereas it is about 0 dB for bone conduction, requiring masking in nearly all cases to isolate the test ear.
Speech audiometry extends pure-tone testing by assessing functional hearing for verbal stimuli. The speech reception threshold (SRT) measures the lowest intensity level at which 50% of spondaic words—two-syllable words with equal stress, such as "baseball" or "hotdog"—are correctly repeated, providing an estimate of everyday speech detection. Word recognition score (WRS) evaluates speech discrimination by presenting monosyllabic words at a comfortable level, typically 40 dB sensation level (SL) above the SRT, where the percentage of correctly identified words indicates clarity of understanding. Additional measures include the most comfortable level (MCL), the intensity preferred for listening without discomfort, and the uncomfortable level (UCL), the threshold where sounds become intolerable, aiding in hearing aid fitting.
Békésy audiometry offers an automated variant of pure-tone testing, where the patient traces thresholds by holding a button to attenuate tones that are audible and releasing it when inaudible, producing continuous or interrupted tone tracings across frequencies. This method reveals threshold fluctuations, such as Type I (normal, overlapping continuous and pulsed tones) or Type III (conductive loss, wider pulsed tracing), and is useful for detecting nonorganic hearing loss through inconsistent patterns. It reduces examiner bias but still demands patient attention.
Immittance measures, while largely objective, incorporate subjective elements in patient tolerance during probe insertion and include tympanometry to evaluate middle ear compliance via pressure changes in the ear canal.
Tympanograms are classified by Jerger's system: Type A shows normal peak compliance at ambient pressure, indicating intact middle ear function; Type B is flat with no peak, suggesting fluid or perforation; Type C displays a peak shifted to negative pressure, denoting Eustachian tube dysfunction; Type D features a shallow peak for reduced mobility, as in otosclerosis; and Types As and Ad indicate abnormally shallow or deep compliance, respectively. These curves correlate with conductive issues but require patient stillness.
Despite their utility, subjective methods have limitations, including invalidity for infants or uncooperative patients who cannot provide reliable responses, often necessitating objective alternatives. They are also susceptible to bias from malingering or nonorganic factors, where patients may exaggerate thresholds, detectable through inconsistencies like those in Békésy tracings or SRT discrepancies with pure-tone averages.
Objective Audiometry Techniques
Electrophysiological Assessments
Electrophysiological assessments in audiometry involve objective techniques that measure electrical potentials generated by the auditory nervous system in response to sound stimuli, providing insights into neural function without relying on patient cooperation. These methods record bioelectric activity from the auditory nerve through the brainstem and higher centers, typically using surface electrodes on the scalp. They are particularly valuable for evaluating infants, uncooperative individuals, and cases of suspected retrocochlear pathology.[56]
The auditory brainstem response (ABR) is a cornerstone of these assessments, capturing synchronized neural firing along the auditory pathway from the cochlea to the inferior colliculus within the first 10 milliseconds post-stimulus. ABR testing employs a standard electrode montage with the active electrode at the vertex (Cz), reference electrodes on the ipsilateral mastoid or earlobe (A1 or A2), and a ground electrode on the forehead or posterior scalp. Clicks or tone bursts at intensities starting from 80 dB normalized hearing level (nHL) elicit the response, which manifests as five vertex-positive waves: Wave I (distal auditory nerve, ~1.5 ms), Wave II (~2.5 ms), Wave III (cochlear nucleus, ~3.5 ms), Wave IV (~4.5 ms), and Wave V (lateral lemniscus, ~5.5-6 ms for an 80 dB nHL click in adults with normal hearing). ABR is used to assess neural synchrony and estimate hearing thresholds, with ABR thresholds typically 10-20 dB higher than behavioral pure-tone thresholds due to differences in stimulus specificity and neural recruitment.[56][57]
Electrocochleography (ECochG) provides more peripheral detail by recording responses directly from the cochlea and auditory nerve, often via a transtympanic electrode placed on the promontory after piercing the tympanic membrane. It measures the cochlear microphonic (CM), a receptor potential reflecting outer hair cell activity and basilar membrane motion; the summating potential (SP), a DC shift from inner hair cells and stria vascularis; and the compound action potential (AP), corresponding to auditory nerve firing (~1.5 ms latency). In diagnosing Ménière's disease, ECochG detects endolymphatic hydrops through an elevated SP/AP ratio (≥0.50 for clicks), with extratympanic variants showing sensitivity of 47.6% and specificity of 83.8%, though combining with other audiological measures improves sensitivity to 63.5% without significant specificity gains.[58][59]
Middle latency responses (MLR) extend the assessment to cortical levels, recording potentials 12-75 ms post-stimulus from thalamocortical pathways using electrodes at Cz or temporal sites (C3/C4, T3/T4) referenced to the earlobe or chin. Key components include Na (~12-21 ms), Pa (~21-38 ms), Nb, and Pb (~50 ms), which evaluate higher-order auditory processing and are sensitive to cortical lesions (sensitivity 0.52-0.64 when paired with ABR). These responses aid in localizing central auditory dysfunction.[60]
Applications of electrophysiological assessments include universal newborn hearing screening, implemented in many countries since the 1990s using automated ABR protocols with 35 dB nHL clicks to detect bilateral hearing loss >35 dB HL, achieving high referral rates for confirmation. ABR also serves in intraoperative monitoring during surgeries like acoustic neuroma resection to preserve auditory function by tracking real-time changes in wave latencies and amplitudes.[61][56]
Otoacoustic Emission Testing
Otoacoustic emission (OAE) testing measures low-level sounds produced by the outer hair cells in the cochlea in response to acoustic stimuli, providing an objective assessment of cochlear function and outer hair cell integrity. These emissions, first discovered by David Kemp in 1978 as the "Kemp effect," occur when the cochlea actively amplifies incoming sounds through electromotility of outer hair cells, generating measurable echoes that travel back through the middle ear to the ear canal. OAE testing is particularly valuable because it does not require behavioral responses from the patient, making it ideal for infants, young children, or uncooperative individuals.[62]
There are several types of evoked OAEs, with transient-evoked OAEs (TEOAEs) and distortion-product OAEs (DPOAEs) being the most commonly used in clinical practice. TEOAEs are elicited by brief broadband stimuli such as clicks, producing emissions across a wide frequency range that reflect the overall health of the cochlea.[62] In contrast, DPOAEs are generated by presenting two simultaneous pure tones of different frequencies (f1 and f2, with f2 > f1), typically at a frequency ratio of 1.2, resulting in nonlinear distortion products that can be recorded at specific frequencies like 2f1 - f2 for frequency-specific evaluation.[62] These types allow for targeted assessment of cochlear regions, with TEOAEs providing a general screen and DPOAEs enabling more precise frequency mapping.
The procedure involves inserting a small probe into the external ear canal, which contains a speaker to deliver the stimuli and a sensitive microphone to record the emissions while minimizing noise interference. The probe seals the canal to ensure accurate measurement, and the test typically lasts 30-60 seconds per ear, with automated analysis determining pass/refer criteria based on emission amplitude thresholds relative to noise floors (e.g., signal-to-noise ratio >6 dB).[63] Emissions are strongest and most reliably detected in the 1-4 kHz frequency range, corresponding to the primary speech frequencies, but they are often absent or weak above 4 kHz in up to 50% of normal adult ears due to olivocochlear efferent suppression.[64]
In clinical applications, OAE testing is a cornerstone of universal neonatal hearing screening programs, achieving approximately 90% sensitivity for detecting moderate hearing loss (≥30-40 dB HL) when combined with automated auditory brainstem response testing.[62] It is also used for monitoring ototoxicity in patients receiving chemotherapy or aminoglycosides, where serial DPOAE measurements can detect early cochlear damage before changes appear on pure-tone audiometry, as recommended by guidelines from the American Academy of Audiology.[65]
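The DPOAE primary-tone arithmetic above reduces to a couple of lines; in this sketch the example frequencies are illustrative:

```python
def dpoae_primaries(f2_hz: float, ratio: float = 1.2) -> tuple:
    """Given f2 and the f2/f1 ratio, return (f1, f2, 2*f1 - f2),
    the cubic distortion product recorded in the ear canal."""
    f1 = f2_hz / ratio
    return f1, f2_hz, 2.0 * f1 - f2_hz

f1, f2, dp = dpoae_primaries(2400.0)
print(f"f1 = {f1:.0f} Hz, f2 = {f2:.0f} Hz, distortion product at {dp:.0f} Hz")
# f1 = 2000 Hz, f2 = 2400 Hz, distortion product at 1600 Hz
```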
Audiograms and Data Interpretation
Audiogram Construction and Features
An audiogram is a graphical representation of hearing thresholds obtained from pure-tone audiometry, plotting the softest detectable sounds across various frequencies against intensity levels. The horizontal x-axis displays sound frequencies in hertz (Hz), typically ranging from 125 Hz to 8000 Hz on a logarithmic scale to reflect the human ear's perceptual spacing of pitches, with lower frequencies on the left and higher on the right. The vertical y-axis represents hearing level in decibels hearing level (dB HL), inverted such that better hearing (lower thresholds) appears at the top, usually spanning from -10 dB HL to 120 dB HL.[4][66]
Standard symbols denote thresholds for air conduction and bone conduction testing, differentiated by ear and masking status to ensure accurate interpretation. For the right ear, air conduction thresholds (unmasked) are marked with a circle (O), while left ear air conduction uses a cross (X). Bone conduction thresholds use angle brackets when unmasked (< for the right ear, > for the left) and square brackets when masked ([ for the right ear, ] for the left), masking being applied to prevent cross-hearing. These conventions, established by the American Speech-Language-Hearing Association (ASHA), facilitate clear visualization of conductive versus sensorineural components.[67][4]
Audiogram configurations describe the pattern of threshold elevations across frequencies, aiding in identifying the nature of hearing impairment. Normal hearing is characterized by thresholds of 0 to 25 dB HL across tested frequencies. A flat configuration shows relatively equal loss at all frequencies, often seen in certain sensorineural losses. Sloping configurations exhibit progressively worse thresholds at higher frequencies, common in age-related or noise-induced hearing loss. Rising configurations display greater loss at lower frequencies, improving toward higher pitches, as may occur in otosclerosis or Meniere's disease.[4][66]
Masking is applied to the contralateral (non-test) ear during testing to isolate responses when interaural differences exceed the interaural attenuation—approximately 40 dB for supra-aural headphones in air conduction or 0 dB for bone conduction—calculated as the presentation level to the test ear minus the non-test ear's threshold. This ensures the signal is heard primarily by the intended ear, preventing erroneous thresholds from cross-hearing.[55][4]
The speech banana is an overlaid curve on the audiogram highlighting the frequency range critical for speech intelligibility, spanning approximately 300 to 3400 Hz, where most phonemes and conversational sounds reside at moderate intensities (20 to 50 dB HL). This visual aid contextualizes how hearing loss within this band impacts daily communication.[4]
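A minimal plotting sketch of these conventions (inverted dB HL axis, logarithmic frequency axis, O/X air-conduction symbols), using invented thresholds purely for illustration:

```python
import matplotlib.pyplot as plt

freqs = [250, 500, 1000, 2000, 4000, 8000]   # standard octave test frequencies (Hz)
right_ac = [10, 10, 15, 20, 35, 45]          # right-ear air conduction (dB HL), invented data
left_ac = [15, 15, 20, 30, 50, 60]           # left-ear air conduction (dB HL), invented data

fig, ax = plt.subplots()
ax.plot(freqs, right_ac, marker="o", color="red", label="Right AC (O)")
ax.plot(freqs, left_ac, marker="x", color="blue", label="Left AC (X)")
ax.set_xscale("log")                         # log axis mirrors perceptual octave spacing
ax.set_xticks(freqs)
ax.set_xticklabels([str(f) for f in freqs])
ax.set_ylim(120, -10)                        # inverted y-axis: better hearing at the top
ax.set_xlabel("Frequency (Hz)")
ax.set_ylabel("Hearing level (dB HL)")
ax.set_title("Pure-tone audiogram (illustrative)")
ax.legend()
plt.show()
```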
Classification of Hearing Loss
The classification of hearing loss in audiometry primarily involves assessing the degree (severity) and type (site of lesion) based on pure-tone thresholds obtained from an audiogram. The degree is typically determined using the pure-tone average (PTA), calculated as the mean threshold at 500 Hz, 1000 Hz, and 2000 Hz for the ear being assessed, which correlates with speech detection capabilities.[68] This PTA provides a standardized metric for categorizing impairment levels, guiding clinical management and intervention needs. Degrees of hearing loss are delineated as follows:
| Degree | PTA Range (dB HL) |
|---|---|
| Normal | ≤25 |
| Mild | 26–40 |
| Moderate | 41–55 |
| Moderately severe | 56–70 |
| Severe | 71–90 |
| Profound | >90 |
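A sketch of the PTA computation and the degree lookup from the table above (cutoffs exactly as tabulated; the function names are illustrative):

```python
def pure_tone_average(t500: float, t1000: float, t2000: float) -> float:
    """Mean threshold at 500, 1000 and 2000 Hz, in dB HL."""
    return (t500 + t1000 + t2000) / 3.0

def degree_of_loss(pta_db_hl: float) -> str:
    """Map a pure-tone average onto the degree categories tabulated above."""
    if pta_db_hl <= 25: return "Normal"
    if pta_db_hl <= 40: return "Mild"
    if pta_db_hl <= 55: return "Moderate"
    if pta_db_hl <= 70: return "Moderately severe"
    if pta_db_hl <= 90: return "Severe"
    return "Profound"

pta = pure_tone_average(30, 45, 60)   # 45.0 dB HL
print(pta, degree_of_loss(pta))       # 45.0 Moderate
```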
Clinical and Occupational Applications
Educational and School-Based Screening
Universal newborn hearing screening (UNHS) programs utilize objective audiometric techniques, such as otoacoustic emissions (OAE) testing and automated auditory brainstem response (AABR), to detect congenital hearing loss in all infants before hospital discharge or within the first month of life, as recommended by the Joint Committee on Infant Hearing (JCIH) 2019 position statement.[72] These methods enable early identification, with screening coverage exceeding 95% in developed countries like the United States, where over 98% of newborns are screened annually according to Centers for Disease Control and Prevention (CDC) data.[73] The prevalence of permanent congenital hearing loss, including profound cases, ranges from 1 to 3 per 1,000 newborns, underscoring the importance of universal implementation to ensure timely diagnosis and intervention.[73]
In educational settings, school-based hearing screening protocols typically involve pure-tone audiometry for children aged 6 years and older, presenting tones at frequencies of 1,000 Hz, 2,000 Hz, and 4,000 Hz at 20-25 dB hearing level (dB HL) to identify potential hearing thresholds above normal limits.[74] For younger children in preschool or early elementary programs, visual reinforcement audiometry (VRA) or play audiometry is employed, where responses to sounds are reinforced with visual stimuli or toys to engage toddlers and assess hearing behaviorally.[74] These screenings occur annually or biennially, often in kindergarten through 12th grade, to detect both permanent and acquired losses that could impact academic performance.
Otitis media with effusion (OME), a common middle ear condition in children and the leading cause of temporary conductive hearing loss in pediatric populations, leads to fluctuating mild-to-moderate impairments that resolve with treatment but may contribute to speech and language challenges if persistent.[75]
Interventions in school settings include the provision of frequency modulation (FM) systems, which enhance the signal-to-noise ratio by wirelessly transmitting the teacher's voice directly to the child's hearing aid or personal receiver, facilitating better comprehension in noisy classrooms.[76] Early amplification through hearing aids, fitted promptly after diagnosis, significantly prevents language delays, with studies showing improved speech perception and vocabulary development in children identified via UNHS and school protocols.[77]
Workplace Hearing Conservation Programs
Workplace hearing conservation programs (HCPs) are mandated by the Occupational Safety and Health Administration (OSHA) under 29 CFR 1910.95 to protect workers from noise-induced hearing loss in occupational settings where noise exposure reaches or exceeds 85 decibels (dBA) as an 8-hour time-weighted average (TWA).[78] Established in 1983, this standard requires employers to implement comprehensive programs including noise monitoring, audiometric testing, provision of hearing protection devices (HPDs), employee training, and recordkeeping, with audiometry serving as a core component for early detection and prevention.[79]
Audiometric testing within HCPs begins with a baseline audiogram conducted within six months of an employee's initial exposure to noise at or above the action level, followed by annual testing to monitor hearing health.[78] A standard threshold shift (STS) is defined as an average hearing level change of 10 dB or greater at 2000, 3000, and 4000 Hz in either ear compared to the baseline, triggering follow-up measures such as retesting, HPD refitting, and referral for further evaluation.[78] Engineering controls, such as noise-reducing machinery, are prioritized over administrative controls or HPDs like earplugs and earmuffs, though all are integrated to minimize exposure.[79]
High-risk industries include manufacturing, where approximately 46% of workers face hazardous noise exposure, and mining, with 56% prevalence, alongside construction and utilities.[80][81] Exposure to ototoxic chemicals, such as organic solvents (e.g., toluene, styrene) and metals (e.g., lead, mercury), often compounds the effects of noise, increasing the risk of hearing damage even at moderate noise levels.[82][83]
Emerging tools like mobile applications for self-screening, including the NIOSH Sound Level Meter app for noise assessment and validated hearing test apps such as hearWHO, are supplementing traditional audiometry by enabling quick, on-site evaluations in workplaces.[84][85] Studies indicate that compliant HCPs can reduce the overall incidence of noise-induced hearing loss, with one analysis showing workers 28% less likely to experience hearing shifts when enrolled in such programs compared to those without.[86][87] Effective implementation, including consistent audiometric follow-up, has been associated with improved hearing thresholds over time and lower rates of occupational hearing claims.[88]
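The STS criterion is a direct computation; below is a minimal sketch, assuming audiograms stored as frequency-to-threshold dictionaries (the data layout is illustrative):

```python
STS_FREQS = (2000, 3000, 4000)  # Hz, per 29 CFR 1910.95

def standard_threshold_shift(baseline: dict, annual: dict) -> bool:
    """True if the average threshold change at 2, 3 and 4 kHz is
    10 dB or more in an ear relative to its baseline audiogram."""
    shift = sum(annual[f] - baseline[f] for f in STS_FREQS) / len(STS_FREQS)
    return shift >= 10.0

baseline = {2000: 10, 3000: 15, 4000: 20}   # dB HL at baseline test
annual   = {2000: 20, 3000: 30, 4000: 30}   # dB HL at annual retest
print(standard_threshold_shift(baseline, annual))  # True -> trigger follow-up
```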
Advanced Topics and Research
Speech and Non-Linear Hearing Phenomena
The cocktail party effect describes the human ability to selectively attend to a single speech stream amid competing background sounds, such as in a noisy social gathering. This phenomenon relies on binaural cues, including interaural time differences (ITDs) and interaural level differences (ILDs), which help segregate the target sound from interferers, as well as linguistic context that aids in parsing meaningful content from irrelevant noise.[89] Psychoacoustic studies have shown that early auditory processing stages contribute to this selection, with attention modulating the perceptual grouping of sounds based on spatial and semantic features.[90]
Speech intelligibility, a key measure of how well acoustic signals are understood, is influenced by factors like frequency band contributions and background noise. The Speech Intelligibility Index (SII), an evolution of the earlier Articulation Index (AI), quantifies this through the formula $\mathrm{SII} = \sum_i I_i A_i$, where $I_i$ represents the importance weight of the $i$-th frequency band for speech recognition, and $A_i$ is the audibility factor (ranging from 0 to 1 based on signal audibility in that band).[91] In noisy environments, the signal-to-noise ratio (SNR) is pivotal; for normal-hearing listeners, an SNR of approximately 0 dB yields about 50% word or sentence understanding in standard tests, highlighting the narrow margin for effective communication.[92]
Non-linear hearing phenomena arise primarily from active processes in the cochlea, where outer hair cells amplify incoming signals through electromotility, providing a compressive gain of 50-60 dB at low sound levels to enhance sensitivity across the dynamic range.[93] This nonlinearity manifests in effects like two-tone suppression, where one tone reduces the response to a nearby frequency tone at the basilar membrane, and distortion products, such as cubic difference tones generated when two simultaneous tones interact, which can be measured as otoacoustic emissions.[94] These mechanisms sharpen frequency selectivity but also introduce intermodulation distortions that influence overall auditory perception.[95]
Human auditory temporal resolution, essential for parsing rapid speech elements, is demonstrated by gap detection thresholds of approximately 2-10 ms in broadband noise for normal-hearing individuals, varying with stimulus bandwidth and listener age.[96] Sound localization further exemplifies binaural processing, utilizing ITDs smaller than 1 ms (with thresholds around 10-20 μs for low frequencies) and ILDs ranging from 1-20 dB (with thresholds near 1 dB for high frequencies) to determine azimuth.[97] In blind individuals, human echolocation extends these capabilities, where self-generated tongue clicks produce broadband pulses that reflect off objects, allowing spatial mapping with acuity comparable to visual perception in trained users.[98][99]
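A small sketch of the band-importance sum above; the four-band weights and audibility factors are invented for illustration (real SII procedures use standardized tables with many more bands):

```python
def speech_intelligibility_index(importances, audibilities):
    """SII = sum_i I_i * A_i, with band importances I_i summing to 1
    and audibility factors A_i each in [0, 1]."""
    assert abs(sum(importances) - 1.0) < 1e-9, "importances must sum to 1"
    return sum(i * a for i, a in zip(importances, audibilities))

# Illustrative 4-band example: mid-frequency bands carry most of the weight,
# and audibility degrades toward the high frequencies.
I = [0.15, 0.35, 0.35, 0.15]
A = [1.0, 0.8, 0.5, 0.2]
print(speech_intelligibility_index(I, A))  # 0.635
```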