Audiometry

from Wikipedia
ICD-10-PCS: F13Z1-F13Z6
ICD-9-CM: 95.41
MeSH: D001299
MedlinePlus: 003341

Audiometry (from Latin audīre 'to hear' and metria 'to measure') is a branch of audiology and the science of measuring hearing acuity for variations in sound intensity and pitch and for tonal purity, involving thresholds and differing frequencies.[1] Typically, audiometric tests determine a subject's hearing levels with the help of an audiometer, but may also measure ability to discriminate between different sound intensities, recognize pitch, or distinguish speech from background noise. Acoustic reflex and otoacoustic emissions may also be measured. Results of audiometric tests are used to diagnose hearing loss or diseases of the ear, and often make use of an audiogram.[2]

History

The basic requirements of the field were a means of producing a repeatable sound, a way to attenuate its amplitude, a way to transmit the sound to the subject, and a means of recording and interpreting the subject's responses to the test.

Mechanical "acuity meters" and tuning forks

For many years there was desultory use of various devices capable of producing sounds of controlled intensity. The first types were clock-like, giving off air-borne sound to the tubes of a stethoscope; the sound distributor head had a valve that could be gradually closed. Another model used a tripped hammer to strike a metal rod and produce the testing sound; in another, a tuning fork was struck. The first such measurement device for testing hearing was described by Wolke (1802).[3]

Pure tone audiometry and audiograms

Following the development of the induction coil in 1849 and audio transducers (the telephone) in 1876, a variety of audiometers were invented in the United States and overseas. These early audiometers were known as induction-coil audiometers due to...

  • Hughes 1879
  • Hartmann 1878

In 1885, Arthur Hartmann designed an "Auditory Chart" which included left and right ear tuning fork representation on the x-axis and percent of hearing on the y-axis.

In 1899, Carl E. Seashore, professor of psychology at the University of Iowa, introduced the audiometer as an instrument to measure the "keenness of hearing" whether in the laboratory, schoolroom, or office of the psychologist or aurist. The instrument operated on a battery and presented a tone or a click; it had an attenuator set in a scale of 40 steps. His machine became the basis of the audiometers later manufactured at Western Electric.

  • Cordia C. Bunch 1919

The concept of a frequency versus sensitivity (amplitude) audiogram plot of human hearing sensitivity was conceived by German physicist Max Wien in 1903. The first vacuum tube implementations came in November 1919, when two groups of researchers — K.L. Schaefer and G. Gruschke, and B. Griessmann and H. Schwarzkopf — demonstrated before the Berlin Otological Society two instruments designed to test hearing acuity. Both were built with vacuum tubes. Their designs were characteristic of the two basic types of electronic circuits used in most electronic audio devices for the next two decades. Neither of the two devices was developed commercially for some time, although the second was to be manufactured under the name "Otaudion." The Western Electric 1A was built in 1922 in the United States. It was not until 1922 that otolaryngologist Dr. Edmund P. Fowler and physicists Dr. Harvey Fletcher and Robert Wegel of Western Electric Co. first employed frequency at octave intervals plotted along the x-axis and intensity downward along the y-axis as a degree of hearing loss. Fletcher et al. also coined the term "audiogram" at that time.

With further technologic advances, bone conduction testing capabilities became a standard component of all Western Electric audiometers by 1928.

Electrophysiologic audiometry

In 1967, Sohmer and Feinmesser were the first to publish auditory brainstem responses (ABR), recorded with surface electrodes in humans, showing that cochlear potentials could be obtained non-invasively.

Otoacoustic audiometry

In 1978, David Kemp reported that sound energy produced by the ear could be detected in the ear canal—otoacoustic emissions. The first commercial system for detecting and measuring otoacoustic emissions was produced in 1988.

Auditory system

Components

The auditory system is composed of epithelial, osseous, vascular, neural and neocortical tissues. The anatomical divisions are the external ear canal and tympanic membrane, middle ear, inner ear, eighth cranial (auditory) nerve, and central auditory processing portions of the neocortex.

Hearing process

Sound waves enter the outer ear and travel through the external auditory canal until they reach the tympanic membrane, causing the membrane and the attached chain of auditory ossicles to vibrate. The motion of the stapes against the oval window sets up waves in the fluids of the cochlea, causing the basilar membrane to vibrate. This stimulates the sensory cells of the organ of Corti, atop the basilar membrane, to send nerve impulses to the central auditory processing areas of the brain, the auditory cortex, where sound is perceived and interpreted.

Sensory and psychodynamics of human hearing

Non-linearity

Temporal synchronization – sound localization and echo location

Parameters of human hearing

Frequency range

Amplitude sensitivity

Audiometric testing

  • objectives: integrity, structure, function, freedom from infirmity.

Normative standards

  • ISO 7029:2000 and BS 6951

Types of audiometry

Subjective audiometry

Subjective audiometry requires the cooperation of the subject and relies upon subjective responses, which may be both qualitative and quantitative, and involve attention (focus), reaction time, etc.

  • Differential testing is conducted with a low-frequency (usually 512 Hz) tuning fork. Such tests are used to assess asymmetrical hearing and air/bone conduction differences. They are simple manual physical tests and do not result in an audiogram.
  • Pure tone audiometry is a standardized hearing test in which air conduction hearing thresholds in decibels (dB) for a set of fixed frequencies between 250 Hz and 8,000 Hz are plotted on an audiogram for each ear independently. A separate set of measurements is made for bone conduction. There is also high-frequency pure tone audiometry covering the range from 8,000 Hz up to 16,000 Hz.
  • Threshold equalizing noise (TEN) test
  • Masking level difference (MLD) test
  • Psychoacoustic (or psychophysical) tuning curve test
  • Speech audiometry is a diagnostic hearing test designed to test word or speech recognition. It has become a fundamental tool in hearing-loss assessment. In conjunction with pure-tone audiometry, it can aid in determining the degree and type of hearing loss. Speech audiometry also provides information regarding discomfort or tolerance to speech stimuli and information on word recognition abilities. In addition, information gained by speech audiometry can help determine proper gain and maximum output of hearing aids and other amplifying devices for patients with significant hearing losses and help assess how well they hear in noise. Speech audiometry also facilitates audiological rehabilitation management.

Speech audiometry may include:

  • Békésy audiometry, also called decay audiometry - audiometry in which the subject controls increases and decreases in intensity as the frequency of the stimulus is gradually changed, so that the subject traces back and forth across the threshold of hearing over the frequency range of the test. The test is quick and reliable, and so was frequently used in military and industrial contexts.
  • Audiometry of children

Objective audiometry

Objective audiometry is based on physical, acoustic or electrophysiologic measurements and does not depend on the cooperation or subjective responses of the subject.

  • Caloric stimulation/reflex test uses the temperature difference between hot and cold water or air delivered into the ear to test for neural damage. Caloric stimulation of the ear results in rapid side-to-side eye movements called nystagmus. Absence of nystagmus may indicate auditory nerve damage. This test is often done as part of another test called electronystagmography.
  • Electronystagmography (ENG) uses skin electrodes and an electronic recording device to measure nystagmus evoked by procedures such as caloric stimulation of the ear.
  • Acoustic immittance audiometry - Immittance audiometry is an objective technique which evaluates middle ear function by three procedures: static immittance, tympanometry, and the measurement of acoustic reflex threshold sensitivity. Immittance audiometry is superior to pure tone audiometry in detecting middle ear pathology.
  • Evoked potential audiometry
    • N1-P2 cortical audio evoked potential (CAEP) audiometry
    • ABR is a neurologic test of auditory brainstem function in response to auditory (click) stimuli.
    • Electrocochleography, a variant of ABR, tests the impulse transmission function of the cochlea in response to auditory (click) stimuli. It is most often used to detect endolymphatic hydrops in the diagnosis/assessment of Ménière's disease.
    • Audio steady state response (ASSR) audiometry
    • Vestibular evoked myogenic potential (VEMP) test, a variant of ABR that tests the integrity of the saccule
  • Otoacoustic emission audiometry - this test can differentiate between the sensory and neural components of sensorineural hearing loss.
    • Distortion product otoacoustic emissions (DPOAE) audiometry
    • Transient evoked otoacoustic emissions (TEOAE) audiometry
    • Sustained frequency otoacoustic emissions (SFOAE) audiometry - At present, SFOAEs are not used clinically.
  • In situ audiometry: a technique for measuring not only the condition of the person's auditory system, but also the characteristics of sound reproduction devices, in-the-canal hearing aids, vents and sound tubes of hearing aids.[4][5]

Audiograms

The result of most audiometry is an audiogram plotting some measured dimension of hearing, either graphically or tabularly.

The most common type of audiogram is the result of a pure tone audiometry hearing test which plots frequency versus amplitude sensitivity thresholds for each ear along with bone conduction thresholds at 8 standard frequencies from 250 Hz to 8000 Hz. A pure tone audiometry hearing test is the gold standard for evaluation of hearing loss or disability. Other types of hearing tests also generate graphs or tables of results that may be loosely called 'audiograms', but the term is universally used to refer to the result of a pure tone audiometry hearing test.

Hearing assessment

Apart from testing hearing, part of the function of audiometry is in assessing or evaluating hearing from the test results. The most commonly used assessment of hearing is the determination of the threshold of audibility, i.e. the level of sound required to be just audible. This level can vary for an individual over a range of up to 5 decibels from day to day and from determination to determination, but it provides an additional and useful tool in monitoring the potential ill effects of exposure to noise. Hearing loss may be unilateral or bilateral, and bilateral hearing loss may not be symmetrical. The most common types of hearing loss, due to age and noise exposure, are usually bilateral and symmetrical.

In addition to the traditional audiometry, hearing assessment can be performed using a standard set of frequencies (audiogram) with mobile applications to detect possible hearing impairments.[6]

Hearing care professional performing a hearing test on a client, 2015

Hearing loss classification

The primary focus of audiometry is assessment of hearing status and hearing loss, including extent, type and configuration.

  • There are four defined degrees of hearing loss: mild, moderate, severe and profound.
  • Hearing loss may be divided into four types: conductive hearing loss, sensorineural hearing loss, central auditory processing disorders, and mixed types.
  • Hearing loss may be unilateral or bilateral, of sudden onset or progressive, and temporary or permanent.

Hearing loss may be caused by a number of factors including heredity, congenital conditions, age-related (presbycusis) and acquired factors like noise-induced hearing loss, ototoxic chemicals and drugs, infections, and physical trauma.

Clinical practice

Audiometric testing may be performed by a general practitioner, an otolaryngologist (a specialized MD, also called an ENT), an audiologist holding the CCC-A (Certificate of Clinical Competence in Audiology), a certified school audiometrist (a practitioner analogous to an optometrist who tests eyes), and sometimes other trained practitioners. Practitioners are certified by the American Board of Audiology (ABA). Practitioners are licensed by various state boards regulating workplace health & safety, occupational professions, or ...

Schools

Occupational testing

Noise-induced hearing loss

Workplace and environmental noise is the most prevalent cause of hearing loss in the United States and elsewhere.

Research

  • Computer modeling of patterns of hearing deficit
  • 3D longitudinal profiles of hearing loss including age axis (presbycusis study)

from Grokipedia
Audiometry is the systematic measurement of hearing sensitivity and acuity, involving a series of standardized tests to evaluate the function of the entire auditory system, from the outer ear to the brain. It assesses how well an individual can detect and discriminate sounds based on their intensity (measured in decibels, dB) and frequency (measured in hertz, Hz), with normal hearing spanning approximately 20 to 20,000 Hz and speech primarily in the 500 to 3,000 Hz range. The procedure helps identify the presence, type, degree, and configuration of hearing loss, enabling clinicians to diagnose underlying conditions and recommend appropriate interventions such as hearing aids, medical treatment, or surgical options.

The core components of audiometry include several specialized tests tailored to different aspects of auditory function. Pure-tone audiometry, the foundational and most widely used method, determines the lowest audible intensity (threshold) for pure tones delivered via air conduction (through the ear canal) or bone conduction (directly to the cochlea), typically at frequencies from 250 Hz to 8,000 Hz. This test distinguishes between conductive hearing loss (due to outer or middle ear issues), sensorineural hearing loss (involving cochlear or neural damage, often from noise exposure or aging), and mixed losses combining both. Speech audiometry complements this by measuring the speech reception threshold—the level at which 50% of words are understood—and word recognition scores, assessing real-world communication abilities in noisy or quiet conditions. Additional tests, such as immittance audiometry (including tympanometry to evaluate middle ear pressure and eardrum mobility) and auditory brainstem response (ABR) audiometry (which records neural responses to sounds via electrodes), provide insights into middle ear mechanics and central auditory pathways, respectively.

Audiometric testing is performed by trained audiologists in sound-treated rooms or booths to minimize environmental noise interference, using calibrated equipment like audiometers that generate precise tones and speech stimuli. Patients respond to stimuli—often by raising a hand or pressing a button—while thresholds are established using ascending-descending methods, such as the Hughson-Westlake procedure, where intensity decreases in 5- or 10-dB steps until the sound is inaudible. Results are documented on an audiogram, a graphical representation plotting thresholds against frequency, with air conduction shown as red "O" symbols for the right ear and blue "X" for the left, and bone conduction as brackets; an air-bone gap greater than 10-15 dB indicates conductive impairment. Hearing levels are classified as normal (≤25 dB hearing level, HL), mild (26-40 dB HL), moderate (41-55 dB HL), moderately severe (56-70 dB HL), severe (71-90 dB HL), or profound (>90 dB HL), based on pure-tone averages at 500, 1,000, and 2,000 Hz.

Early detection through audiometry is crucial: as of 2024, hearing loss affects approximately 30 million people in the United States aged 12 years or older, with around 28.8 million adults who could benefit from wearing hearing aids, impacting communication, quality of life, and potentially cognitive health.

Auditory System Basics

Anatomical Components

The outer ear, or external ear, consists of the auricle (pinna) and the external auditory canal. The pinna, composed of cartilage covered by skin, serves to collect and funnel sound waves into the ear canal, enhancing and directing airborne vibrations toward the tympanic membrane. The external auditory canal, a curved tube about 2.5 cm long lined with skin, cerumen-producing glands, and hair, further amplifies sound frequencies around 2-5 kHz while protecting the inner structures from foreign particles.

The middle ear is an air-filled cavity within the temporal bone, separated from the outer ear by the tympanic membrane (eardrum) and connected to the nasopharynx via the Eustachian tube for pressure equalization. The tympanic membrane, a thin, fibrous structure, vibrates in response to sound waves entering the canal. These vibrations are transmitted and amplified by the three auditory ossicles: the malleus (hammer), attached to the tympanic membrane; the incus (anvil), articulating with the malleus; and the stapes (stirrup), which connects to the oval window of the inner ear. This ossicular chain functions primarily in impedance matching, overcoming the mismatch between air and the fluid-filled inner ear by increasing the force and decreasing the velocity of the vibrations, thereby efficiently transferring sound energy.

The inner ear, housed in the petrous part of the temporal bone, includes the cochlea, a spiral-shaped, fluid-filled structure approximately 35 mm long in humans that transduces mechanical vibrations into neural signals. The cochlea consists of three interconnected scalae: the scala vestibuli and scala tympani, filled with perilymph (similar to cerebrospinal fluid), and the central scala media (cochlear duct), filled with endolymph (high in potassium). Within the scala media lies the organ of Corti, a complex sensory epithelium resting on the basilar membrane, which separates the scala media from the scala tympani. The organ of Corti contains specialized mechanoreceptor hair cells—inner hair cells for signal transmission and outer hair cells for amplification—whose stereocilia are embedded in the overlying tectorial membrane. These hair cells are tonotopically organized along the basilar membrane, with high-frequency sounds stimulating the basal (stiffer, narrower) region and low-frequency sounds affecting the apical (more flexible, wider) region, enabling frequency discrimination.

The auditory nerve, or cochlear division of cranial nerve VIII (the vestibulocochlear nerve), originates from the spiral ganglion neurons in the cochlea and carries the transduced electrical signals from the hair cells as action potentials. These bipolar neurons synapse with hair cells at one end and project centrally via myelinated axons that bundle into the auditory nerve, entering the brainstem at the pontomedullary junction to terminate in the cochlear nuclei. This pathway initiates central auditory processing, conveying frequency, intensity, and temporal information from the periphery.

Physiological Hearing Process

The physiological process of hearing begins when sound waves, consisting of pressure variations in air, enter the external auditory canal and strike the tympanic membrane, causing it to vibrate in synchrony with the incoming waves. These vibrations are transmitted to the middle ear ossicles—the malleus, incus, and stapes—which act as a lever system to amplify the signal. The ossicles provide an impedance-matching function, increasing the pressure by a factor of approximately 20-30 dB to efficiently couple the low-impedance air medium to the high-impedance fluid of the cochlea.

In the cochlea, the stapes footplate pushes against the oval window, displacing fluid in the scala vestibuli and creating pressure waves that propagate through the cochlear duct. This fluid motion causes a traveling wave along the basilar membrane, a flexible structure within the cochlea that separates the scala media from the scala tympani. The traveling wave peaks at specific locations determined by the sound's frequency due to the membrane's tonotopic organization, where stiffness decreases progressively from the base (high frequencies) to the apex (low frequencies), resulting in greater displacement at frequency-matched sites.

The peaking wave shears the basilar membrane against the overlying tectorial membrane, deflecting the stereocilia of hair cells in the organ of Corti. This mechanical deflection opens mechanically gated potassium channels, leading to depolarization of the hair cells and subsequent calcium influx, which triggers the release of the neurotransmitter glutamate at synapses with auditory nerve fibers. Inner hair cells primarily transduce the signal to the auditory nerve, while outer hair cells enhance sensitivity through electromotility, actively contracting and amplifying basilar membrane motion by up to 40 dB via prestin-mediated length changes in response to electrical signals.

Auditory nerve fibers encode the mechanical signal into neural activity through distinct mechanisms: phase-locking, where action potentials synchronize to specific phases of low-frequency sounds (below ~1 kHz); rate coding, where firing rates increase with sound intensity for higher frequencies; and the volley theory, whereby synchronized groups of fibers collectively represent intermediate frequencies (1-5 kHz) to maintain temporal precision. This neural encoding preserves both temporal and intensity information as signals ascend the auditory pathway.

Human Hearing Parameters

Frequency Range and Sensitivity

The human audible frequency range typically spans from 20 Hz to 20 kHz, encompassing the spectrum of sounds detectable by the ear under optimal conditions. This range narrows with age due to presbycusis, a form of sensorineural hearing loss that primarily affects high frequencies, often shifting the upper limit to approximately 8-10 kHz in older adults. Children and young individuals generally retain sensitivity up to 20 kHz, while many adults experience a decline, with detection thresholds rising sharply above 16 kHz.

Human sensitivity varies significantly across frequencies, as illustrated by equal-loudness contours, originally mapped by Fletcher and Munson in their seminal 1933 study. These contours demonstrate that the ear's peak sensitivity occurs between 2 and 5 kHz, where the lowest detection threshold is approximately 0 dB SPL at around 4 kHz. Thresholds across the audible range can differ by up to 100 dB, with much higher intensities required to perceive low frequencies below 100 Hz or high frequencies above 10 kHz at equivalent perceived levels.

The perceptual organization of frequencies is further structured into critical bands, approximately 25 in number, as defined by the Bark scale—a psychoacoustic model that approximates the ear's frequency resolution. These bands, ranging from 1 to 24 Barks, cover the audible spectrum and form the basis for masking effects, where a stronger tone within the same band obscures a weaker one.

Frequency discrimination relies on tonotopic mapping along the cochlea, where specific locations on the basilar membrane resonate to particular frequencies, aligning with the place theory of pitch perception. This spatial organization enables the auditory system to differentiate tones based on their resonant positions, supporting precise frequency selectivity.
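As a rough illustration of the Bark scale mentioned above, the following Python sketch uses the Zwicker-Terhardt analytic approximation to convert frequency to critical-band rate; this is one published approximation among several, and the sample frequencies are arbitrary.

```python
import math

def hz_to_bark(f_hz: float) -> float:
    """Approximate critical-band rate (Bark) for a frequency in Hz,
    using the Zwicker-Terhardt (1980) analytic approximation."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

# Tones falling within the same critical band (similar Bark values)
# are the strongest candidates for mutual masking.
for f in (100, 500, 1000, 4000, 10000):
    print(f"{f:>6} Hz -> {hz_to_bark(f):5.2f} Bark")  # e.g. 1000 Hz -> ~8.5 Bark
```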

Amplitude Thresholds and Dynamics

The absolute threshold of hearing (ATH) represents the minimum sound intensity that a person with normal hearing can detect 50% of the time in a quiet environment. This threshold is standardized at 0 dB hearing level (HL) for a pure tone at 1 kHz, equivalent to approximately 7 dB sound pressure level (SPL) (about 45 μPa) under reference conditions. The ATH is frequency-dependent, with greatest sensitivity typically between 1 and 4 kHz. Sound pressure level, the standard measure for intensity, is defined by the equation

\text{SPL} = 20 \log_{10}\left(\frac{P}{P_0}\right)

where P is the root-mean-square sound pressure in pascals and P_0 = 20\,\mu\text{Pa} is the reference pressure approximating the ATH.

Human hearing exhibits a wide dynamic range, extending from the ATH at 0 dB SPL to the pain threshold of 120-140 dB SPL, beyond which sounds become physically painful. In cases of cochlear damage, such as sensorineural hearing loss, this range compresses due to a phenomenon called recruitment, where loudness increases abnormally rapidly for intensities above the elevated threshold, reducing the difference between comfortable and uncomfortable levels. The uncomfortable loudness level (UCL), the intensity at which sounds become intolerable, typically occurs at 100-110 dB HL for speech stimuli in individuals with normal hearing.

Perceived loudness does not scale linearly with physical intensity but approximates the Weber-Fechner law, where loudness L is proportional to the logarithm of intensity I, expressed as L = k \log I. To quantify subjective loudness, the sone scale is used, with 1 sone defined as the loudness of a 1 kHz tone at 40 dB SPL (equivalent to 40 phons); each doubling of sones corresponds to a perceived doubling of loudness. Prolonged exposure to sounds exceeding 85 dB SPL can induce temporary threshold shift (TTS), a reversible elevation in hearing threshold that typically recovers within hours to days but signals risk for permanent damage with repeated occurrences.
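A short Python sketch of the SPL relation defined above; the example pressures are illustrative values only.

```python
import math

P0 = 20e-6  # reference pressure, 20 micropascals (approximate ATH at 1 kHz)

def spl_db(p_rms: float) -> float:
    """Sound pressure level in dB SPL for an RMS pressure in pascals."""
    return 20.0 * math.log10(p_rms / P0)

def pressure_pa(spl: float) -> float:
    """Inverse: RMS pressure in pascals for a given dB SPL."""
    return P0 * 10.0 ** (spl / 20.0)

print(spl_db(20e-6))       # 0.0 dB SPL at the reference pressure
print(spl_db(1.0))         # ~94 dB SPL for 1 Pa, a common calibrator level
print(pressure_pa(120.0))  # ~20 Pa, near the pain threshold
```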

Historical Development

Early Mechanical and Tuning Fork Methods

In the 19th century, audiometry's origins relied on simple mechanical devices to roughly evaluate hearing acuity through distance-based assessments. The watch tick test, for instance, involved determining the maximum distance at which a patient could hear the ticking of a pocket watch, with results recorded as fractions relative to normal hearing thresholds, such as 12/36 inches. Similarly, the whisper test used spoken words delivered at varying intensities—from a faint whisper to a shout—to gauge the distance over which speech was audible, often standardized at 60-70 feet for conversational levels in healthy individuals. These methods, advocated by figures like J.S. Prout in 1872 for the watch test and H. Knapp in 1887 for voice testing, provided crude estimates of hearing sensitivity but lacked precision due to environmental noise and subjective interpretation.

Tuning forks emerged as a more standardized tool in mid-19th-century otology, typically using a 512 Hz fork to assess conduction pathways and differentiate types of hearing loss. The Weber test, developed by Ernst Heinrich Weber in 1834, involves placing the vibrating fork on the midline or vertex; in conductive hearing loss, sound lateralizes to the affected ear, while in sensorineural loss, it shifts to the unaffected ear. The Rinne test, introduced by Heinrich Adolf Rinne in 1855, compares air conduction (fork held near the ear) to bone conduction (stem placed on the mastoid process); normally, air conduction exceeds bone conduction, but reversal indicates conductive impairment, whereas both are reduced proportionally in sensorineural cases. These tests, later explained in detail by D.B. St. John Roosa in 1881, represented a significant advance by exploiting differences in sound transmission through air and bone.

Pioneering otologists like Joseph Toynbee contributed to early diagnostic tools in the 1860s; Toynbee invented an ear tube in 1850—later termed an otoscope—to auscultate ear sounds and visualize the tympanic membrane, aiding indirect hearing evaluations through anatomical inspection. Similarly, Prosper Menière's 1861 presentation to the French Academy of Medicine described episodic vertigo accompanied by tinnitus and hearing loss as originating from inner ear pathology, such as hemorrhage, challenging prior cerebral attributions and emphasizing auditory-vestibular connections. Devices like Adam Politzer's 1877 acoumeter, a hand-held mallet-struck iron cylinder producing tones of consistent intensity, further refined these mechanical approaches but remained limited by high cost and variability.

Despite their innovations, early methods suffered from inherent limitations: they were highly subjective, required patient cooperation, offered no frequency-specific analysis, and could not quantify degree of loss accurately, often yielding inconsistent results influenced by tester technique or ambient conditions.

Emergence of Pure-Tone and Electrophysiological Techniques

The emergence of pure-tone audiometry in the early 20th century represented a pivotal shift toward electronic instrumentation in hearing assessment, enabling precise measurement of auditory thresholds. In 1922, physicists Harvey Fletcher and R.L. Wegel at Bell Laboratories developed the first electronic audiometer, which generated pure sinusoidal tones across a range of frequencies and allowed for systematic threshold determination. This device facilitated the creation of the modern audiogram, a graphical representation plotting hearing thresholds in decibels hearing level (dB HL) against frequencies spaced in octaves from 125 Hz to 8000 Hz, capturing the primary range of human speech and environmental sounds. The audiogram's standardized format provided a quantifiable baseline for diagnosing hearing impairments, moving beyond subjective mechanical tests.

Commercialization accelerated adoption, with Western Electric introducing the Model 1-A audiometer in 1922 as the first widely available electronic device for clinical use, followed by the improved Model 2-A in 1923. These vacuum tube-based instruments were calibrated to deliver tones at controlled intensities, supporting both air conduction testing through earphones and early explorations of bone conduction. In the 1930s, bone conduction testing became a standard component of audiometry, utilizing a vibrator placed on the mastoid process to transmit sound directly to the cochlea and isolate middle or inner ear pathologies from outer ear obstructions. This addition enhanced diagnostic specificity, as discrepancies between air and bone conduction thresholds could differentiate conductive from sensorineural losses.

To promote global consistency, the International Organization for Standardization (ISO) published Recommendation R 389 in 1964, establishing a reference zero level for pure-tone audiometer calibration based on young, otologically normal listeners under controlled conditions. Audiometers evolved into distinct types: clinical models compliant with ANSI S3.6 standards for comprehensive diagnostic testing, including narrowband masking and extended frequency ranges, and screening audiometers for rapid, basic threshold checks in non-clinical settings. By the 1980s, the field transitioned from analog circuitry to digital systems, improving signal purity and data storage while reducing size and cost.

Parallel advancements in electrophysiological techniques introduced objective methods independent of patient response. In 1970, Don L. Jewett and colleagues identified brainstem auditory evoked potentials (BAEPs) as a series of short-latency waves recorded from the scalp in response to clicks, originating from the auditory nerve and brainstem nuclei. This discovery enabled non-behavioral assessment of neural integrity, particularly useful for infants and uncooperative individuals. Complementing this, David T. Kemp reported the first observation of otoacoustic emissions (OAEs) in 1978, detecting faint sounds emitted from the cochlea in response to acoustic stimuli, which indicated active outer hair cell function. These innovations laid the groundwork for objective audiometry, expanding beyond pure-tone thresholds to physiological validation.

Audiometric Testing Fundamentals

Normative Standards and Protocols

Audiometric testing adheres to established international standards to ensure reliability, reproducibility, and comparability of results across clinical settings. The ANSI S3.6-2025 specification outlines requirements for the calibration of audiometers, including electroacoustic performance, signal stability, and output limits to maintain measurement accuracy within specified tolerances. Similarly, ISO 8253-1:2010 provides guidelines for pure-tone air-conduction and bone-conduction threshold audiometry, emphasizing standardized procedures for test conditions, earphone placement, and response criteria to minimize variability. These standards form the foundation for professional audiometric practice, with updates reflecting advances in equipment and methodology.

The test environment must be controlled to avoid external influences on hearing thresholds. Testing typically occurs in a sound-treated booth where ambient levels meet the maximum permissible ambient noise levels (MPANLs) in octave bands from 125 Hz to 8 kHz, as specified in ANSI S3.1-1999 (R2018) and ISO 8253-1:2010, to prevent masking of low-level signals. Patients receive clear instructions prior to testing, such as raising a hand or pressing a button to indicate detection of the tone, ensuring consistent subjective responses.

Standard protocols begin with otoscopy to visualize the ear canal and tympanic membrane and rule out obstructions like cerumen impaction, which could affect results. Pure-tone testing then proceeds through frequencies of 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz, and 8000 Hz, starting with the better ear and progressing bilaterally. Masking is applied when necessary to isolate the test ear: for air conduction when the intensity difference between ears exceeds the interaural attenuation (≈40-60 dB), and for bone conduction, where interaural attenuation is ≈0 dB, typically requiring masking of the non-test ear. The Hughson-Westlake ascending method is widely used to determine thresholds, involving presentation of tones in 5 dB steps until detection, followed by descending to confirm, achieving accuracy within ±5 dB.

For pediatric populations, protocols incorporate age-appropriate modifications to elicit reliable responses. Visual reinforcement audiometry (VRA), effective from around 6 months of age, uses lights or toys as rewards for head-turning toward sounds, adapting the standard procedure for non-verbal children while adhering to core ISO and ANSI guidelines.
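The "down 10, up 5" logic of the modified Hughson-Westlake procedure can be sketched in Python as below; `subject_hears` is a hypothetical callback standing in for the patient's response button, and the acceptance rule is simplified to two responses at the same ascending level.

```python
def hughson_westlake_threshold(subject_hears, start_db=40, floor_db=-10, ceiling_db=120):
    """Simplified sketch of the modified Hughson-Westlake ("down 10, up 5")
    threshold search. `subject_hears(level_db)` is a hypothetical callback
    returning True when the patient signals detection."""
    level = start_db
    # Familiarization/descent: drop 10 dB after each response until inaudible.
    while level - 10 >= floor_db and subject_hears(level):
        level -= 10
    ascending_hits = {}  # level -> number of ascending presentations heard
    while level <= ceiling_db:
        if subject_hears(level):
            hits = ascending_hits.get(level, 0) + 1
            ascending_hits[level] = hits
            if hits >= 2:          # heard on two ascending runs: accept
                return level
            level -= 10            # descend 10 dB and re-ascend
        else:
            level += 5             # continue ascending in 5 dB steps
    return None                    # no response within audiometer limits
```

Clinical implementations add retest rules, masking decisions, and response-validity checks on top of this basic search.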

Subjective Testing Methods

Subjective testing methods in audiometry rely on the patient's conscious responses to auditory stimuli, providing insights into perceived hearing thresholds and speech understanding. These techniques require active participation, such as raising a hand or pressing a button when a sound is detected, making them suitable for cooperative adults and older children but challenging for infants or those with cognitive impairments. Key procedures include pure-tone audiometry, speech audiometry, and automated tracing methods like Békésy audiometry, often complemented by immittance measures to assess middle ear function indirectly through patient tolerance.

Pure-tone audiometry determines the softest sound levels (thresholds) a patient can detect across frequencies, typically from 250 Hz to 8000 Hz, using air conduction via headphones or inserts and bone conduction via a vibrator placed on the mastoid process. Air conduction thresholds evaluate the entire auditory pathway from the outer ear to the cochlea, while bone conduction bypasses the outer and middle ear to directly stimulate the cochlea, helping differentiate conductive from sensorineural hearing loss. Testing follows protocols like the Hughson-Westlake method, ascending in 5 dB steps until detection, with repeated responses confirming the threshold. Interaural attenuation—the sound energy loss when crossing the skull—is approximately 40-60 dB for air conduction, necessitating masking of the non-test ear to prevent crossover responses, whereas it is about 0 dB for bone conduction, requiring masking in nearly all cases to isolate the test ear.

Speech audiometry extends pure-tone testing by assessing functional hearing for verbal stimuli. The speech reception threshold (SRT) measures the lowest intensity level at which 50% of spondaic words—two-syllable words with equal stress, such as "baseball" or "hotdog"—are correctly repeated, providing an estimate of everyday speech detection. Word recognition score (WRS) evaluates speech discrimination by presenting monosyllabic words at a comfortable level, typically 40 dB sensation level (SL) above the SRT, where the percentage of correctly identified words indicates clarity of understanding. Additional measures include the most comfortable level (MCL), the intensity preferred for listening without discomfort, and the uncomfortable level (UCL), the threshold where sounds become intolerable, aiding in hearing aid fitting.

Békésy audiometry offers an automated variant of pure-tone testing, where the patient traces thresholds by holding a button to attenuate tones that are audible and releasing it when inaudible, producing continuous or interrupted tone tracings across frequencies. This method reveals threshold fluctuations, such as Type I (normal, overlapping continuous and pulsed tones) or Type III (conductive loss, wider pulsed tracing), and is useful for detecting nonorganic hearing loss through inconsistent patterns. It reduces examiner bias but still demands patient attention.

Immittance measures, while largely objective, incorporate subjective elements in patient tolerance during probe insertion and include tympanometry to evaluate middle ear compliance via pressure changes in the ear canal. Tympanograms are classified by Jerger's system: Type A shows normal peak compliance at ambient pressure, indicating intact middle ear function; Type B is flat with no peak, suggesting fluid or perforation; Type C displays a peak shifted to negative pressure, denoting Eustachian tube dysfunction; and Type As or Ad indicate abnormally shallow or deep compliance, respectively, with shallow (As) peaks seen in conditions of reduced ossicular mobility such as otosclerosis. These curves correlate with conductive issues but require patient stillness.

Despite their utility, subjective methods have limitations, including invalidity for infants or uncooperative patients who cannot provide reliable responses, often necessitating objective alternatives. They are also susceptible to bias from malingering or nonorganic factors, where patients may exaggerate thresholds, detectable through inconsistencies like those in Békésy tracings or SRT discrepancies with pure-tone averages.

Objective Audiometry Techniques

Electrophysiological Assessments

Electrophysiological assessments in audiometry involve objective techniques that measure electrical potentials generated by the auditory system in response to acoustic stimuli, providing insights into neural function without relying on patient cooperation. These methods record bioelectric activity from the auditory nerve through the brainstem and higher centers, typically using surface electrodes on the scalp. They are particularly valuable for evaluating infants, uncooperative individuals, and cases of suspected retrocochlear pathology.

The auditory brainstem response (ABR) is a cornerstone of these assessments, capturing synchronized neural firing along the auditory pathway from the auditory nerve to the brainstem within the first 10 milliseconds post-stimulus. ABR testing employs a standard electrode montage with the active electrode at the vertex (Cz), reference electrodes on the ipsilateral mastoid or earlobe (A1 or A2), and a ground on the forehead. Clicks or tone bursts at intensities starting from 80 dB normalized hearing level (nHL) elicit the response, which manifests as five vertex-positive waves: Wave I (distal auditory nerve, ~1.5 ms), Wave II (~2.5 ms), Wave III (cochlear nucleus, ~3.5 ms), Wave IV (~4.5 ms), and Wave V (~5.5-6 ms for an 80 dB nHL click in adults with normal hearing). ABR is used to assess neural synchrony and estimate hearing thresholds, with ABR thresholds typically 10-20 dB higher than behavioral pure-tone thresholds due to differences in stimulus specificity and neural recruitment.

Electrocochleography (ECochG) provides more peripheral detail by recording responses directly from the cochlea and auditory nerve, often via a transtympanic electrode placed on the promontory after piercing the tympanic membrane. It measures the cochlear microphonic (CM), an AC potential reflecting outer hair cell activity and basilar membrane motion; the summating potential (SP), a DC shift from inner hair cells and the stria vascularis; and the compound action potential (AP), corresponding to auditory nerve firing (~1.5 ms latency). In diagnosing Ménière's disease, ECochG detects endolymphatic hydrops through an elevated SP/AP ratio (≥0.50 for clicks), with extratympanic variants showing sensitivity of 47.6% and specificity of 83.8%, though combining with other audiological measures improves sensitivity to 63.5% without significant specificity gains.

Middle latency responses (MLR) extend the assessment to cortical levels, recording potentials 12-75 ms post-stimulus from thalamocortical pathways using electrodes at Cz or temporal sites (C3/C4, T3/T4) referenced to the earlobe or chin. Key components include Na (~12-21 ms), Pa (~21-38 ms), Nb, and Pb (~50 ms), which evaluate higher-order auditory processing and are sensitive to cortical lesions (sensitivity 0.52-0.64 when paired with ABR). These responses aid in localizing central auditory dysfunction.

Applications of electrophysiological assessments include universal newborn hearing screening, implemented in many countries since the 1990s using automated ABR protocols with 35 dB nHL clicks to detect bilateral hearing loss greater than 35 dB HL, achieving high referral accuracy for confirmation. ABR also serves in intraoperative monitoring during surgeries like acoustic neuroma resection to preserve auditory function by tracking real-time changes in wave latencies and amplitudes.

Otoacoustic Emission Testing

Otoacoustic emission (OAE) testing measures low-level sounds produced by the outer hair cells of the cochlea in response to acoustic stimuli, providing an objective assessment of cochlear function and outer hair cell integrity. These emissions, first discovered by David Kemp in 1978 as the "Kemp effect," occur when the cochlea actively amplifies incoming sounds through the electromotility of outer hair cells, generating measurable echoes that travel back through the middle ear to the ear canal. OAE testing is particularly valuable because it does not require behavioral responses from the patient, making it ideal for infants, young children, or uncooperative individuals.

There are several types of evoked OAEs, with transient-evoked OAEs (TEOAEs) and distortion-product OAEs (DPOAEs) being the most commonly used in clinical practice. TEOAEs are elicited by brief broadband stimuli such as clicks, producing emissions across a wide frequency range that reflect the overall health of the cochlea. In contrast, DPOAEs are generated by presenting two simultaneous pure tones of different frequencies (f1 and f2, with f2 > f1), typically at a frequency ratio of 1.2, resulting in nonlinear distortion products that can be recorded at specific frequencies like 2f1 - f2 for frequency-specific evaluation. These types allow for targeted assessment of cochlear regions, with TEOAEs providing a general screen and DPOAEs enabling more precise frequency mapping.

The procedure involves inserting a small probe into the external ear canal, which contains a speaker to deliver the stimuli and a sensitive microphone to record the emissions while minimizing noise interference. The probe seals the canal to ensure accurate measurement, and the test typically lasts 30-60 seconds per ear, with automated analysis determining pass/refer criteria based on emission amplitude thresholds relative to noise floors (e.g., signal-to-noise ratio >6 dB). Emissions are strongest and most reliably detected in the 1-4 kHz frequency range, corresponding to the primary speech frequencies, but they are often absent or weak above 4 kHz in up to 50% of normal adult ears due to olivocochlear efferent suppression.

In clinical applications, OAE testing is a cornerstone of universal neonatal hearing screening programs, achieving approximately 90% sensitivity for detecting moderate hearing loss (≥30-40 dB HL) when combined with automated testing. It is also used for ototoxicity monitoring in patients receiving cisplatin or aminoglycosides, where serial DPOAE measurements can detect early cochlear damage before changes appear on pure-tone audiograms, as recommended by guidelines from the American Academy of Audiology.
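The primary-tone arithmetic described above is simple enough to show directly; in this Python sketch, the f2 value and the 1.2 ratio are just the commonly cited example parameters.

```python
def dpoae_frequencies(f2_hz: float, ratio: float = 1.2):
    """Given the upper primary f2 and the f2/f1 ratio (commonly ~1.2),
    return the lower primary f1 and the cubic distortion product 2*f1 - f2."""
    f1 = f2_hz / ratio
    dp = 2.0 * f1 - f2_hz
    return f1, dp

f1, dp = dpoae_frequencies(4000.0)
print(round(f1), round(dp))  # ~3333 Hz primary, ~2667 Hz distortion product
```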

Audiograms and Data Interpretation

Audiogram Construction and Features

An audiogram is a graphical representation of hearing thresholds obtained from pure-tone audiometry, plotting the softest detectable sound levels across various frequencies. The horizontal x-axis displays frequencies in hertz (Hz), typically ranging from 125 Hz to 8000 Hz on a logarithmic scale to reflect the human ear's perceptual spacing of pitches, with lower frequencies on the left and higher on the right. The vertical y-axis represents hearing level in decibels hearing level (dB HL), inverted such that better hearing (lower thresholds) appears at the top, usually spanning from -10 dB HL to 120 dB HL.

Standard symbols denote thresholds for air conduction and bone conduction testing, differentiated by ear and masking status to ensure accurate interpretation. For the right ear, air conduction thresholds (unmasked) are marked with a circle (O), while left ear air conduction uses a cross (X). Unmasked bone conduction thresholds are indicated with angle brackets, < for the right ear and > for the left, with square brackets [ and ] used for masked bone conduction to prevent cross-hearing. These conventions, established by the American Speech-Language-Hearing Association (ASHA), facilitate clear visualization of conductive versus sensorineural components.

Audiogram configurations describe the pattern of threshold elevations across frequencies, aiding in identifying the nature of hearing impairment. Normal hearing is characterized by thresholds of 0 to 25 dB HL across tested frequencies. A flat configuration shows relatively equal loss at all frequencies, often seen in certain sensorineural losses. Sloping configurations exhibit progressively worse thresholds at higher frequencies, common in age-related or noise-induced hearing loss. Rising configurations display greater loss at lower frequencies, improving toward higher pitches, as may occur in Ménière's disease.

Masking is applied to the contralateral (non-test) ear during testing to isolate responses when interaural differences exceed the interaural attenuation—approximately 40 dB for supra-aural earphones in air conduction or 0 dB for bone conduction—calculated as the presentation level to the test ear minus the non-test ear's threshold. This ensures the signal is heard primarily by the intended ear, preventing erroneous thresholds from cross-hearing.

The speech banana is an overlaid curve on the audiogram highlighting the frequency range critical for speech intelligibility, spanning approximately 300 to 3400 Hz, where most phonemes and conversational sounds reside at moderate intensities (20 to 50 dB HL). This visual aid contextualizes how hearing loss within this band impacts daily communication.
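As a rough illustration of these plotting conventions, the following Python/matplotlib sketch draws a two-ear air-conduction audiogram with the inverted dB HL axis and logarithmic frequency axis described above; the threshold values are invented for the example.

```python
import matplotlib.pyplot as plt

# Hypothetical air-conduction thresholds (dB HL), for illustration only.
freqs   = [250, 500, 1000, 2000, 4000, 8000]
right_o = [10, 10, 15, 20, 35, 45]   # right ear: red circles (O)
left_x  = [10, 15, 15, 25, 40, 50]   # left ear: blue crosses (X)

fig, ax = plt.subplots()
ax.semilogx(freqs, right_o, 'o-', color='red', label='Right (O)')
ax.semilogx(freqs, left_x, 'x-', color='blue', label='Left (X)')
ax.set_xticks(freqs)
ax.set_xticklabels([str(f) for f in freqs])
ax.set_xlabel('Frequency (Hz)')
ax.set_ylabel('Hearing level (dB HL)')
ax.set_ylim(120, -10)        # inverted axis: better hearing at the top
ax.grid(True, which='both')
ax.legend()
plt.show()
```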

Classification of Hearing Loss

The classification of hearing loss in audiometry primarily involves assessing the degree (severity) and type (site of lesion) based on pure-tone thresholds obtained from an audiogram. The degree is typically determined using the pure-tone average (PTA), calculated as the mean threshold at 500 Hz, 1000 Hz, and 2000 Hz for the ear being assessed, which correlates with speech detection capabilities. This PTA provides a standardized metric for categorizing impairment levels, guiding clinical management and intervention needs. Degrees of hearing loss are delineated as follows:
Degree              PTA range (dB HL)
Normal              ≤25
Mild                26–40
Moderate            41–55
Moderately severe   56–70
Severe              71–90
Profound            >90
These thresholds reflect the intensity required for sound detection, with normal hearing allowing perception of conversational speech at typical levels, while profound loss often necessitates alternative communication strategies. Some classifications, such as that of the American Speech-Language-Hearing Association (ASHA), include a "slight" category (16–25 dB HL) that may highlight early impairments affecting daily function.

Types of hearing loss are identified by comparing air-conduction and bone-conduction thresholds on the audiogram. Conductive hearing loss arises from outer or middle ear issues, characterized by an air-bone gap exceeding 10 dB, where bone-conduction thresholds remain normal or near-normal. Sensorineural hearing loss involves inner ear (cochlear) or auditory nerve pathology, showing no significant air-bone gap with elevated thresholds for both conduction types. Mixed hearing loss combines elements of both, with an air-bone gap present alongside elevated bone-conduction thresholds, while central (retrocochlear) loss affects neural pathways beyond the cochlea, often presenting as sensorineural but with additional auditory processing irregularities.

Specific audiometric patterns warrant further investigation. Asymmetric hearing loss, defined as a difference exceeding 20 dB between ears at any frequency, may indicate retrocochlear pathology such as tumors and requires further workup. Hyperacusis, an abnormal sensitivity to everyday sounds, is flagged by average loudness discomfort levels below 100 dB HL, compressing the dynamic range between hearing threshold and discomfort level.

Audiogram configurations provide etiologic clues. A cookie-bite pattern, with greater mid-frequency loss (e.g., 500–2000 Hz) and relative sparing of low and high frequencies, is commonly associated with genetic or hereditary conditions. In contrast, a notched configuration, typically a sharp dip at 3000–6000 Hz with recovery at higher frequencies, points to noise-induced etiology from acoustic trauma. These patterns, interpreted alongside degree and type, inform targeted diagnostic pursuits without altering the core classification framework.
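The degree and type rules above translate directly into code; this Python sketch applies the PTA table and a simple >10 dB air-bone-gap criterion, with the threshold values invented for the example.

```python
def pure_tone_average(t500, t1000, t2000):
    """PTA over 500, 1000 and 2000 Hz (dB HL)."""
    return (t500 + t1000 + t2000) / 3.0

def degree(pta):
    """Degree of hearing loss per the table above."""
    if pta <= 25: return "normal"
    if pta <= 40: return "mild"
    if pta <= 55: return "moderate"
    if pta <= 70: return "moderately severe"
    if pta <= 90: return "severe"
    return "profound"

def loss_type(air_pta, bone_pta, gap_criterion=10):
    """Rough type classification from the air-bone gap (see text)."""
    gap = air_pta - bone_pta
    if air_pta <= 25:
        return "normal hearing"
    if gap > gap_criterion:
        return "mixed" if bone_pta > 25 else "conductive"
    return "sensorineural"

air  = pure_tone_average(45, 50, 55)   # hypothetical air-conduction thresholds
bone = pure_tone_average(10, 15, 20)   # hypothetical bone-conduction thresholds
print(degree(air), loss_type(air, bone))  # "moderate conductive"
```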

Clinical and Occupational Applications

Educational and School-Based Screening

Universal newborn hearing screening (UNHS) programs utilize objective audiometric techniques, such as otoacoustic emissions (OAE) testing and automated auditory brainstem response (AABR), to detect hearing loss in all infants before hospital discharge or within the first month of life, as recommended by the Joint Committee on Infant Hearing (JCIH) 2019 position statement. These methods enable early identification, with screening coverage exceeding 95% in developed countries like the United States, where over 98% of newborns are screened annually according to Centers for Disease Control and Prevention (CDC) data. The prevalence of permanent hearing loss, including profound cases, ranges from 1 to 3 per 1,000 newborns, underscoring the importance of universal implementation to ensure timely diagnosis and intervention.

In educational settings, school-based hearing screening protocols typically involve pure-tone screening for children aged 6 years and older, presenting tones at frequencies of 1,000 Hz, 2,000 Hz, and 4,000 Hz at 20-25 dB hearing level (dB HL) to identify potential hearing thresholds above normal limits. For younger children in preschool or early elementary programs, visual reinforcement audiometry (VRA) or play audiometry is employed, where responses to sounds are reinforced with visual stimuli or toys to engage toddlers and assess hearing behaviorally. These screenings occur annually or biennially, often in kindergarten through 12th grade, to detect both permanent and acquired losses that could impact academic performance.

Otitis media with effusion (OME), a common condition in children and the leading cause of temporary conductive hearing loss in pediatric populations, leads to fluctuating mild-to-moderate impairments that resolve with treatment but may contribute to speech and language challenges if persistent. Interventions in school settings include the provision of frequency modulation (FM) systems, which enhance the signal-to-noise ratio by wirelessly transmitting the teacher's voice directly to the child's hearing aid or personal receiver, facilitating better comprehension in noisy classrooms. Early amplification through hearing aids, fitted promptly after diagnosis, significantly reduces the risk of language delays, with studies showing improved speech and language outcomes in children identified via UNHS and school screening protocols.

Workplace Hearing Conservation Programs

Workplace hearing conservation programs (HCPs) are mandated by the Occupational Safety and Health Administration (OSHA) under 29 CFR 1910.95 to protect workers from noise-induced hearing loss in occupational settings where noise exposure reaches or exceeds 85 decibels (dBA) as an 8-hour time-weighted average (TWA). Established in 1983, this standard requires employers to implement comprehensive programs including noise monitoring, audiometric testing, provision of hearing protection devices (HPDs), employee training, and recordkeeping, with audiometry serving as a core component for early detection and prevention.

Audiometric testing within HCPs begins with a baseline audiogram conducted within six months of an employee's initial exposure to noise at or above the action level, followed by annual testing to monitor hearing health. A standard threshold shift (STS) is defined as an average hearing level change of 10 dB or greater at 2000, 3000, and 4000 Hz in either ear compared to the baseline, triggering follow-up measures such as retesting, HPD refitting, and referral for further evaluation. Engineering controls, such as noise-reducing machinery, are prioritized over administrative controls or HPDs like earplugs and earmuffs, though all are integrated to minimize exposure.

High-risk industries include mining and construction, where sector estimates place roughly 46-56% of workers at hazardous exposure levels, alongside manufacturing and utilities. Exposure to ototoxic chemicals, such as organic solvents (e.g., toluene, styrene) and metals (e.g., lead, mercury), often compounds the effects of noise, increasing the risk of hearing damage even at moderate levels. Emerging tools like mobile applications for self-screening, including the NIOSH Sound Level Meter app for noise assessment and validated apps such as hearWHO, are supplementing traditional audiometry by enabling quick, on-site evaluations in workplaces.

Studies indicate that compliant HCPs can reduce the overall incidence of noise-induced hearing loss, with one analysis showing workers 28% less likely to experience hearing shifts when enrolled in such programs compared to those without. Effective implementation, including consistent audiometric follow-up, has been associated with improved hearing thresholds over time and lower rates of occupational hearing claims.
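The OSHA STS criterion lends itself to a direct calculation; this Python sketch uses invented baseline and annual thresholds and omits the optional age-correction allowance the regulation permits.

```python
def standard_threshold_shift(baseline, current):
    """OSHA standard threshold shift: average change of >= 10 dB at
    2000, 3000 and 4000 Hz relative to the baseline audiogram.
    `baseline` and `current` map frequency (Hz) -> threshold (dB HL)."""
    freqs = (2000, 3000, 4000)
    shift = sum(current[f] - baseline[f] for f in freqs) / len(freqs)
    return shift, shift >= 10.0

baseline = {2000: 10, 3000: 15, 4000: 20}   # hypothetical baseline audiogram
annual   = {2000: 20, 3000: 30, 4000: 30}   # hypothetical annual retest
shift, sts = standard_threshold_shift(baseline, annual)
print(f"average shift {shift:.1f} dB, STS: {sts}")  # 11.7 dB, True
```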

Advanced Topics and Research

Speech and Non-Linear Hearing Phenomena

The cocktail party effect describes the human ability to selectively attend to a single speech stream amid competing background sounds, such as in a noisy social gathering. This phenomenon relies on binaural cues, including interaural time differences (ITDs) and interaural level differences (ILDs), which help segregate the target sound from interferers, as well as linguistic context that aids in parsing meaningful content from irrelevant noise. Psychoacoustic studies have shown that early auditory processing stages contribute to this selection, with attention modulating the perceptual grouping of sounds based on spatial and semantic features.

Speech intelligibility, a key measure of how well acoustic signals are understood, is influenced by factors like frequency band contributions and audibility. The Speech Intelligibility Index (SII), an evolution of the earlier Articulation Index (AI), quantifies this through the formula

\text{SII} = \sum_i (I_i \times A_i)

where I_i represents the importance weight of the i-th frequency band for speech intelligibility, and A_i is the audibility factor (ranging from 0 to 1 based on signal audibility in that band). In noisy environments, the signal-to-noise ratio (SNR) is pivotal; for normal-hearing listeners, an SNR of approximately 0 dB yields about 50% word or sentence understanding in standard tests, highlighting the narrow margin for effective communication.
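A toy Python illustration of the SII sum above; the band importance weights and audibility factors below are made up for the example, not the standardized values from ANSI S3.5.

```python
# Hypothetical six-band example: importance weights sum to 1.0,
# audibility factors range from 0 (inaudible) to 1 (fully audible).
band_importance = [0.10, 0.15, 0.25, 0.25, 0.15, 0.10]
band_audibility = [1.00, 0.90, 0.70, 0.40, 0.20, 0.05]

sii = sum(i * a for i, a in zip(band_importance, band_audibility))
print(f"SII = {sii:.2f}")  # about 0.55: roughly half the speech cues are audible
```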
Non-linear hearing phenomena arise primarily from active processes in the cochlea, where outer hair cells amplify incoming signals through electromotility, providing a compressive gain of 50-60 dB at low sound levels to enhance sensitivity. This nonlinearity manifests in effects like two-tone suppression, where one tone reduces the response to a nearby tone at the basilar membrane, and distortion products, such as cubic difference tones generated when two simultaneous tones interact, which can be measured as otoacoustic emissions. These mechanisms sharpen frequency selectivity but also introduce distortions that influence overall auditory perception.

Human auditory temporal resolution, essential for parsing rapid speech elements, is demonstrated by gap detection thresholds of approximately 2-10 ms for normal-hearing individuals, varying with stimulus bandwidth and listener age. Sound localization further exemplifies binaural processing, utilizing ITDs as small as less than 1 ms (thresholds around 10-20 μs for low frequencies) and ILDs ranging from 1-20 dB (with thresholds near 1 dB for high frequencies) to determine source direction. In blind individuals, echolocation extends these capabilities: self-generated tongue clicks produce sound pulses that reflect off objects, allowing spatial mapping with considerable acuity in trained users.

Current Innovations and Future Directions

Recent advancements in digital technologies have transformed audiometry by enabling accessible, remote, and automated hearing assessments. AI-driven applications for audiometry have demonstrated high diagnostic accuracy, with studies reporting sensitivities of 89% and specificities of 93% when compared to conventional audiometry for detecting hearing thresholds. These apps utilize algorithms to analyze user responses to calibrated tones delivered via device speakers or headphones, achieving up to 97.5% accuracy in identifying hearing loss in adults. The surge in tele-audiology during the 2020s, accelerated by the COVID-19 pandemic, has further expanded access, with surveys indicating that 86% of audiologists view it as essential for post-pandemic service delivery, facilitating remote fittings and follow-ups through video consultations and app-based tools.

Objective testing methods have also evolved, particularly for pediatric populations where behavioral responses are unreliable. Cortical auditory evoked potentials (CAEPs) provide a non-invasive way to assess auditory cortical activity in infants, serving as objective markers for hearing aid validation and maturation tracking, with recent studies highlighting their utility in confirming aided thresholds as low as 30-40 dB HL. Wideband tympanometry represents another innovation, measuring absorbance across a broad frequency range (250-8000 Hz) in a single pressure sweep, offering superior detection of subtle middle ear dysfunctions compared to traditional 226 Hz probes.

Emerging research in audiometry focuses on regenerative and prosthetic interventions to address underlying pathologies. Gene therapy targeting the ATOH1 gene, which promotes hair cell differentiation, has shown preclinical promise in rodent models of sensorineural hearing loss, with a 2025 meta-analysis demonstrating average reductions in thresholds of approximately 21 dB. A prior phase 1/2 clinical trial was deemed safe but did not show significant hearing improvement. As of 2025, research continues to explore approaches for hair cell regeneration. Integration of neural prosthetics, such as cochlear implants, with audiometric protocols has advanced through electrically evoked potentials, allowing precise mapping of implant performance across cochlear regions to optimize sound coding strategies.

Regulatory and predictive tools are bridging global disparities; the World Health Organization estimates that 430 million people live with disabling hearing loss as of 2025. The U.S. Food and Drug Administration's 2022 approval of over-the-counter hearing aids has democratized access for mild-to-moderate cases, enabling self-fitting devices without prescriptions. Machine learning models applied to serial audiograms can now predict progression of age-related or noise-induced loss with accuracies exceeding 80%, using features like threshold slopes to forecast declines over 5-10 years and guide early interventions.
