Absolute threshold of hearing
from Wikipedia

The absolute threshold of hearing (ATH), also known as the absolute hearing threshold or auditory threshold, is the minimum sound level of a pure tone that an average human ear with normal hearing can hear with no other sound present. The absolute threshold relates to the sound that can just be heard by the organism.[1][2] The absolute threshold is not a discrete point and is therefore classed as the point at which a sound elicits a response a specified percentage of the time.[1]

The threshold of hearing is generally reported in reference to the RMS sound pressure of 20 micropascals, i.e. 0 dB SPL, corresponding to a sound intensity of 0.98 pW/m² at 1 atmosphere and 25 °C.[3] It is approximately the quietest sound a young human with undamaged hearing can detect at 1 kHz.[4] The threshold of hearing is frequency-dependent and it has been shown that the ear's sensitivity is best at frequencies between 2 kHz and 5 kHz,[5] where the threshold reaches as low as −9 dB SPL.[6][7][8]

Average hearing thresholds in decibels (SPL) (the unit of 'dB(HL)' shown on the vertical axis is incorrect) are plotted from 125 to 8000 Hz for younger (18-30 year olds, red circles) and older adults (60-67 year olds, black diamonds). The hearing of older adults is shown to be significantly less sensitive than that of younger adults at frequencies of 4000 and 8000 Hz.

Psychophysical methods for measuring thresholds


Measurement of the absolute hearing threshold provides some basic information about our auditory system.[4] The tools used to collect such information are called psychophysical methods. Through these, the perception of a physical stimulus (sound) and our psychological response to it are measured.[9]

Several psychophysical methods can measure absolute threshold. These vary, but certain aspects are identical. Firstly, the test defines the stimulus and specifies the manner in which the subject should respond. The test presents the sound to the listener and manipulates the stimulus level in a predetermined pattern. The absolute threshold is defined statistically, often as an average of all obtained hearing thresholds.[4]

Some procedures use a series of trials, with each trial using the 'single-interval "yes"/"no" paradigm'. This means that sound may be present or absent in the single interval, and the listener has to say whether they thought the stimulus was there. When the interval does not contain a stimulus, it is called a "catch trial".[4]

Classical methods


Classical methods date back to the 19th century and were first described by Gustav Theodor Fechner in his work Elements of Psychophysics.[9] Three methods are traditionally used for testing a subject's perception of a stimulus: the method of limits, the method of constant stimuli, and the method of adjustment.[4]

Series of descending and ascending runs in Method of Limits
Method of limits
In the method of limits, the tester controls the level of the stimuli. The single-interval yes/no paradigm is used, but there are no catch trials.
The trial uses several series of descending and ascending runs.
The trial starts with the descending run, where a stimulus is presented at a level well above the expected threshold. When the subject responds correctly to the stimulus, the level of intensity of the sound is decreased by a specific amount and presented again. The same pattern is repeated until the subject stops responding to the stimuli, at which point the descending run is finished.
In the ascending run, which comes after, the stimulus is first presented well below the threshold and then gradually increased in two-decibel (dB) steps until the subject responds. As there is no sharp boundary between 'hearing' and 'not hearing', the threshold for each run is determined as the midpoint between the last audible and first inaudible level.
The subject's absolute hearing threshold is calculated as the mean of all obtained thresholds in both ascending and descending runs.
There are several issues related to the method of limits. First is anticipation, which is caused by the subject's awareness that the turn-points determine a change in response. Anticipation produces better ascending thresholds and worse descending thresholds.
Habituation creates the opposite effect, and occurs when the subject becomes accustomed to responding "yes" in the descending runs and/or "no" in the ascending runs. For this reason, thresholds are raised in ascending runs and improved in descending runs.
Another problem may be related to step size. Too large a step compromises accuracy of the measurement as the actual threshold may be just between two stimulus levels.
Finally, since the tone is always present, "yes" is always the correct answer.[4]
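The threshold computation described above (midpoint of each run, then the mean over all runs) can be sketched in a few lines of Python. This is a hypothetical illustration; the function names and the example run data are invented for the sketch.

```python
# Sketch of threshold estimation in the method of limits: each run
# records the last audible and first inaudible levels (dB SPL); the
# run threshold is their midpoint, and the absolute threshold is the
# mean over all descending and ascending runs.

def run_threshold(last_audible_db, first_inaudible_db):
    """Midpoint between the last 'heard' and first 'not heard' level."""
    return (last_audible_db + first_inaudible_db) / 2.0

def absolute_threshold(runs):
    """Mean of the midpoints of all descending and ascending runs."""
    mids = [run_threshold(a, i) for a, i in runs]
    return sum(mids) / len(mids)

# Example: two descending and two ascending runs (levels in dB SPL)
runs = [(6, 4), (8, 6), (2, 4), (4, 6)]
print(absolute_threshold(runs))  # → 5.0
```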
Method of constant stimuli
In the method of constant stimuli, the tester sets the levels of the stimuli and presents them in a completely random order.
Thus, there are no ascending or descending trials.
The subject responds "yes"/"no" after each presentation.
The stimuli are presented many times at each level and the threshold is defined as the stimulus level at which the subject scored 50% correct. "Catch" trials may be included in this method.
The method of constant stimuli has several advantages over the method of limits. Firstly, the random order of stimuli means that the correct answer cannot be predicted by the listener. Secondly, as the tone may be absent (catch trial), "yes" is not always the correct answer. Finally, catch trials help to estimate how much the listener is guessing.
The main disadvantage lies in the large number of trials needed to obtain the data, and therefore time required to complete the test.[4]
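Defining the threshold as the level at which the subject scores 50% can be illustrated by linear interpolation between tested levels. A minimal sketch, with invented function names and example data:

```python
def threshold_50(levels_db, prop_yes):
    """Linearly interpolate the level yielding 50% 'yes' responses.
    levels_db must be sorted ascending, with prop_yes crossing 0.5."""
    pairs = list(zip(levels_db, prop_yes))
    for (l0, p0), (l1, p1) in zip(pairs, pairs[1:]):
        if p0 <= 0.5 <= p1:
            return l0 + (0.5 - p0) * (l1 - l0) / (p1 - p0)
    raise ValueError("50% point not bracketed by the tested levels")

levels = [0, 2, 4, 6, 8]                  # stimulus levels in dB SPL
props = [0.05, 0.20, 0.45, 0.80, 0.95]    # proportion of 'yes' responses
print(round(threshold_50(levels, props), 2))  # → 4.29
```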
Method of adjustment
The method of adjustment shares some features with the method of limits, but differs in others. There are descending and ascending runs, and the listener knows that the stimulus is always present.
The subject reduces or increases the level of the tone
However, unlike in the method of limits, here the stimulus is controlled by the listener. The subject reduces the level of the tone until it cannot be detected anymore, or increases until it can be heard again.
The stimulus level is varied continuously via a dial and the stimulus level is measured by the tester at the end. The threshold is the mean of the just audible and just inaudible levels.
This method can also produce several biases. To avoid giving cues about the actual stimulus level, the dial must be unlabeled. Apart from the already mentioned anticipation and habituation, stimulus persistence (perseveration) could influence the results of the method of adjustment.
In the descending runs, the subject may continue to reduce the level of the sound as if the sound was still audible, even though the stimulus is already well below the actual hearing threshold.
In contrast, in the ascending runs, the subject may perseverate on the absence of the stimulus until the hearing threshold is exceeded by a certain amount.[10]

Modified classical methods


Forced-choice methods


Two intervals are presented to a listener, one with a tone and one without a tone. The listener must decide which interval had the tone in it. The number of intervals can be increased, but this may cause problems for the listener who has to remember which interval contained the tone.[4][11]

Adaptive methods


Unlike the classical methods, where the pattern for changing the stimuli is preset, in adaptive methods the subject's response to the previous stimuli determines the level at which a subsequent stimulus is presented.[12]

Staircase (up-down) methods

Series of descending and ascending trial runs and turning points

The simple 1-down-1-up method consists of a series of descending and ascending trial runs and turning points (reversals). The stimulus level is increased if the subject does not respond and decreased when a response occurs. As in the method of limits, the stimuli are adjusted in predetermined steps. After six to eight reversals are obtained, the first one is discarded and the threshold is defined as the average of the midpoints of the remaining runs. This simple procedure converges on only the 50% point of the psychometric function.[12] To target higher performance levels, it can be further modified by requiring several consecutive correct responses before the level is decreased, e.g. the 2-down-1-up and 3-down-1-up methods.[4]
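The 1-down-1-up rule can be sketched as a short simulation. This is an illustrative toy, not a clinical procedure: the "listener" is a deterministic stand-in that hears any tone at or above a true threshold of 5 dB, and all names are invented for the example.

```python
def staircase_1down1up(respond, start_db, step_db, n_reversals=8):
    """Simple 1-down-1-up staircase: lower the level after a 'heard'
    response, raise it after 'not heard'; stop after n_reversals,
    discard the first reversal, and average the rest."""
    level = start_db
    last = None          # previous response, None on the first trial
    reversals = []
    while len(reversals) < n_reversals:
        heard = respond(level)
        if last is not None and heard != last:
            reversals.append(level)   # response changed: a turning point
        last = heard
        level += -step_db if heard else step_db
    kept = reversals[1:]
    return sum(kept) / len(kept)

# Deterministic stand-in listener with a true threshold of 5 dB
listener = lambda level: level >= 5
print(round(staircase_1down1up(listener, start_db=15, step_db=2), 2))
```

With this noiseless listener the track oscillates between 3 and 5 dB, so the estimate lands between the two, near the 4 dB midpoint.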

Békésy's tracking method

The threshold being tracked by the listener

Békésy's method combines aspects of the classical methods and the staircase methods. The level of the stimulus is varied automatically at a fixed rate. The subject is asked to press a button while the stimulus is detectable. Once the button is pressed, the level is automatically decreased by a motor-driven attenuator, and increased when the button is not pressed. The threshold is thus tracked by the listener, and calculated as the mean of the midpoints of the runs as recorded by the apparatus.[4]

Hysteresis effect

Descending runs give better hearing thresholds than ascending runs

Hysteresis can be defined roughly as 'the lagging of an effect behind its cause'. When measuring hearing thresholds it is always easier for the subject to follow a tone that is audible and decreasing in amplitude than to detect a tone that was previously inaudible.

This is because 'top-down' influences mean that the subject expects to hear the sound and is, therefore, more motivated with higher levels of concentration.

The 'bottom-up' theory explains that unwanted external (from the environment) and internal (e.g., heartbeat) noise results in the subject only responding to the sound if the signal-to-noise ratio is above a certain point.

In practice this means that when measuring threshold with sounds decreasing in amplitude, the point at which the sound becomes inaudible is always lower than the point at which it returns to audibility. This phenomenon is known as the 'hysteresis effect'.

Psychometric function of absolute hearing threshold


Psychometric function 'represents the probability of a certain listener's response as a function of the magnitude of the particular sound characteristic being studied'.[13]

To give an example, this could be the probability of the subject detecting a sound, plotted as a function of the sound level. When the stimulus is presented to the listener, one would expect that the sound would either be audible or inaudible, resulting in a step function. In reality a grey area exists where the listener is uncertain as to whether they have actually heard the sound or not, so their responses are inconsistent, resulting in a psychometric function.

The psychometric function is a sigmoid function characterised by being 's' shaped in its graphical representation.

Minimal audible field vs minimal audible pressure


Two methods can be used to measure the minimal audible stimulus[2] and therefore the absolute threshold of hearing. Minimal audible field involves the subject sitting in a sound field and stimulus being presented via a loudspeaker.[2][14] The sound level is then measured at the position of the subject's head with the subject not in the sound field.[2] Minimal audible pressure involves presenting stimuli via headphones[2] or earphones[1][14] and measuring sound pressure in the subject's ear canal using a very small probe microphone.[2] The two different methods produce different thresholds[1][2] and minimal audible field thresholds are often 6 to 10 dB better than minimal audible pressure thresholds.[2] It is thought that this difference is due to:

  • monaural vs binaural hearing. With minimal audible field both ears are able to detect the stimuli, but with minimal audible pressure only one ear is able to detect the stimuli. Binaural hearing is more sensitive than monaural hearing.[1]
  • physiological noises heard when ear is occluded by an earphone during minimal audible pressure measurements.[2] When the ear is covered the subject hears body noises, such as heart beat, and these may have a masking effect.

Minimal audible field and minimal audible pressure are important when considering calibration issues, and they also illustrate that human hearing is most sensitive in the 2–5 kHz range.[2]

Temporal summation


Temporal summation is the relationship between stimulus duration and intensity when the presentation time is less than 1 second. Auditory sensitivity changes when the duration of a sound becomes less than 1 second. The threshold intensity decreases by about 10 dB when the duration of a tone burst is increased from 20 to 200 ms.

For example, suppose that the quietest sound a subject can hear is 16 dB SPL if the sound is presented at a duration of 200 ms. If the same sound is then presented for a duration of only 20 ms, the quietest sound that can now be heard by the subject goes up to 26 dB SPL. In other words, if a signal is shortened by a factor of 10 then the level of that signal must be increased by as much as 10 dB to be heard by the subject.

The ear operates as an energy detector that samples the amount of energy present within a certain time frame. A certain amount of energy is needed within a time frame to reach the threshold. This can be done by using a higher intensity for less time or by using a lower intensity for more time. Sensitivity to sound improves as the signal duration increases up to about 200 to 300 ms, after that the threshold remains constant.[2]
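The energy-detector view predicts the worked example above directly: shortening a tone by a factor of 10 raises the threshold by 10·log₁₀(10) = 10 dB. A minimal sketch of that prediction (the function name is invented for the example):

```python
import math

def threshold_shift_db(t_short, t_long):
    """Energy-detector prediction: equal total energy implies the
    threshold rises by 10*log10(t_long / t_short) dB when the
    duration shrinks from t_long to t_short (both in seconds)."""
    return 10 * math.log10(t_long / t_short)

# Shortening a tone from 200 ms to 20 ms (a factor of 10):
shift = threshold_shift_db(0.020, 0.200)
print(round(shift, 1))  # → 10.0, so e.g. 16 dB SPL becomes 26 dB SPL
```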

The tympanic membrane (eardrum) operates as a sound pressure sensor, much as a microphone does; neither responds directly to sound intensity.

from Grokipedia
The absolute threshold of hearing (ATH) is the minimum level of a sound that produces an auditory sensation in a quiet environment for an observer with normal hearing, typically defined as the intensity detected at least 50% of the time. This threshold marks the lower boundary of human audibility and varies significantly with stimulus frequency, sound duration, and individual factors such as age and otological health. The ATH exhibits a characteristic U-shaped curve across the audible frequency spectrum, with the highest sensitivity (lowest threshold) occurring between approximately 1 and 5 kHz, where it reaches about 0 dB sound pressure level (SPL), equivalent to 20 μPa. At lower frequencies (below 500 Hz) and higher frequencies (above 8 kHz), the threshold rises sharply, requiring sound pressures up to 80 dB SPL or more for detection, limiting the effective hearing range to roughly 20 Hz to 20 kHz for young adults with normal hearing. This frequency dependence reflects the biomechanical properties of the cochlea and the transmission efficiency of the outer and middle ear. The standard curve for the ATH is formalized in ISO 226, which defines the 0-phon contour based on empirical data from otologically normal listeners. Measurement of the ATH employs psychophysical techniques to account for perceptual variability, including the method of constant stimuli, the method of limits, and adaptive forced-choice procedures, often conducted in soundproof chambers using pure tones presented via earphones or free-field speakers. These methods ensure thresholds are determined relative to chance performance, typically yielding values expressed in dB hearing level (HL) or dB SPL, with clinical audiometry focusing on frequencies from 250 Hz to 8 kHz to assess hearing status. The ATH plays a foundational role in fields such as audiology for diagnosing hearing loss, psychoacoustics for modeling auditory perception, and acoustics for designing audio systems that respect human sensitivity limits.
Individual thresholds can deviate by 10–15 dB from the average due to factors such as age-related hearing loss (presbycusis), which elevates thresholds progressively above 2 kHz.

Fundamentals

Definition and Scope

The absolute threshold of hearing refers to the lowest level, expressed in decibels sound pressure level (dB SPL), of a pure tone that is detectable at least 50% of the time by a listener in a quiet environment at a specified frequency. This threshold represents the minimum stimulus intensity necessary for auditory detection under ideal, noise-free conditions, serving as a fundamental measure in auditory psychophysics. For young adults with normal hearing, the threshold is typically around 0 dB SPL at 1 kHz, corresponding to a reference sound pressure of 20 μPa. The scope of the absolute threshold of hearing is confined to pure-tone detection in silent surroundings, focusing solely on the perceptual boundary for sound presence without interference. It does not encompass masked thresholds, where a competing sound influences detection, nor does it extend to complex stimuli such as speech or frequency discrimination tasks. This delimitation ensures the concept remains centered on unmasked, basic auditory sensitivity, distinct from broader psychoacoustic or clinical assessments. The threshold intensity, denoted I_threshold, is the minimum detectable sound intensity, commonly quantified in decibels using the formula L = 10 log10(I / I0), where I0 is the reference intensity of 10⁻¹² W/m², equivalent to a pressure of 20 μPa. This logarithmic scale captures the wide dynamic range of human hearing, from the faintest detectable tones to intense sounds, with the absolute threshold defining the lower limit. The threshold varies with frequency across the audible spectrum (approximately 20 Hz to 20 kHz), reflecting the ear's differential sensitivity. The standard curve for the ATH is defined in ISO 226:2023, based on empirical data from otologically normal young adults. Thresholds are lowest in the mid-frequency range of 2–4 kHz, reaching as low as about −5 dB SPL.
At the extremes, sensitivity decreases markedly; for instance, at 20 Hz, the threshold rises to about 80 dB SPL, requiring substantially higher intensity for detection due to reduced cochlear responsiveness at low frequencies.
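The decibel formula above can be checked numerically. A small sketch (the function name is invented for the example) converting sound intensities to dB relative to the 10⁻¹² W/m² reference:

```python
import math

I0 = 1e-12  # reference intensity in W/m², corresponding to 20 μPa

def intensity_to_db(intensity_w_m2):
    """L = 10 * log10(I / I0), the formula given above."""
    return 10 * math.log10(intensity_w_m2 / I0)

print(intensity_to_db(1e-12))  # → 0.0 dB SPL (the 1 kHz reference threshold)
print(intensity_to_db(1e-4))   # → 80.0 dB (the order of the 20 Hz threshold)
```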

Historical Development

The study of the absolute threshold of hearing traces its roots to the 19th century, when early psychophysicists began exploring the limits of auditory perception. Thomas Young contributed foundational insights in the early 1800s by investigating the relationship between sound frequency and pitch, as well as the approximate intensity limits of human hearing, through experiments linking vibration frequencies to musical tones. Building on this, Hermann von Helmholtz in the 1860s conducted systematic measurements of auditory thresholds using resonance-based models, establishing key concepts in frequency selectivity that informed later threshold research. In the 1920s and 1930s, Georg von Békésy advanced the field through pioneering experiments on cochlear mechanics and threshold measurements at the University of Budapest and later Harvard. His work involved direct observations of basilar membrane vibrations and threshold sensitivity across frequencies, revealing the traveling wave mechanism and providing early empirical data on hearing sensitivity curves that shaped understanding of the cochlea. A landmark contribution came from Harvey Fletcher and Wilden A. Munson in 1933, who conducted psychophysical tests at Bell Laboratories to map equal-loudness contours; their findings, based on listener judgments of pure tones at various intensities, indirectly defined the threshold of hearing as the 0-phon contour, offering the first comprehensive frequency-dependent profile of minimal audibility. Post-World War II research marked a shift from subjective reports to standardized psychophysical testing, yielding the first reliable audiometric data in the 1940s. Von Békésy's development of an automatic audiometer in the late 1940s enabled self-recording threshold traces, improving precision in clinical and experimental settings by automating frequency and intensity sweeps.
This evolution facilitated broader adoption of pure-tone audiometry, with early standardized data from military and industrial studies establishing normative thresholds around 0 dB hearing level (HL) for young adults. Standardization efforts culminated in the 1950s through the International Organization for Standardization (ISO), which incorporated insights from von Békésy, Fletcher, and Munson into ISO/R 226 (1961), defining normal equal-loudness-level contours and absolute thresholds based on averaged listener data from multiple labs. This standard has been revised several times, with the current version, ISO 226:2023, providing updated contours as of 2023. These milestones transitioned auditory research from qualitative observations to quantifiable, reproducible metrics essential for modern audiometry.

Measurement Techniques

Classical Psychophysical Methods

Classical psychophysical methods, originally formalized by Gustav Theodor Fechner in his seminal work Elements of Psychophysics, provide foundational techniques for estimating the absolute threshold of hearing by relying on the observer's subjective judgments of tone detectability. These methods, including the method of limits, the method of constant stimuli, and the method of adjustment, were adapted for auditory research to determine the minimum sound intensity detectable at various frequencies, often using pure tones presented via headphones or in free-field conditions. They emphasize manual control and repeated trials to account for variability in human perception, though they are susceptible to biases inherent in yes/no response paradigms. The method of limits involves presenting tones in ascending series (starting below threshold and increasing intensity until detected) and descending series (starting above threshold and decreasing until undetectable), with the threshold estimated as the average of the reversal points where the observer's response changes from "not heard" to "heard" or vice versa. Typically, multiple pairs of ascending and descending runs (e.g., 5 pairs per frequency) are conducted to improve reliability, making this approach simple and efficient for clinical audiometry. However, it is prone to expectation bias, where observers anticipate the stimulus and report detection prematurely in ascending trials or belatedly in descending ones, leading to systematic errors in threshold estimation. In the method of constant stimuli, a fixed set of 5–9 tone intensities, spanning below and above the expected threshold, is presented in random order multiple times (often 20 or more trials in total), and the proportion of "heard" responses is plotted against intensity to form a psychometric curve; the threshold is defined as the intensity yielding 50% detection, interpolated if necessary.
This technique minimizes order effects through randomization and provides a statistical estimate of threshold, enhancing precision for auditory sensitivity measurements. Its primary drawback is the time required for sufficient trials, as fewer presentations increase variability from guessing or internal noise. The method of adjustment allows the observer to manually control the tone intensity, typically alternating between ascending adjustments (increasing from inaudible to just audible) and descending adjustments (decreasing from audible to just inaudible), with the threshold taken as the average of these settings across several trials. In audiometry, this is exemplified by Békésy audiometry, where the listener traces threshold excursions using a response button, enabling rapid assessment of hearing sensitivity across frequencies. While quick and intuitive, it can introduce errors from overshooting or motor inconsistencies in adjustment, though alternating directions helps mitigate anticipation bias. Common error sources across these methods include practice effects, where repeated exposure improves detection and lowers apparent thresholds; fatigue, which elevates thresholds in prolonged sessions; and criterion shifts, where the observer's decision standard for reporting a tone varies due to motivation or attention. To address these, a typical measurement session for the absolute threshold at a single frequency involves 20–50 trials, distributed across methods to balance efficiency and accuracy, often with breaks to reduce fatigue.

Adaptive and Forced-Choice Methods

Forced-choice methods represent a class of psychophysical procedures designed to minimize response bias in threshold measurements by requiring the observer to identify the interval containing the auditory signal from multiple alternatives, such as two, three, or four options. In a typical two-interval forced-choice (2IFC) task, tones are presented in two sequential intervals, one with the signal and one without, and the observer selects which interval contained the tone; this approach leverages signal detection theory to separate sensitivity from decision criteria. Thresholds are defined at performance levels corresponding to 75% correct responses for 2IFC, approximately 79% for three-interval forced-choice (3IFC), and 84% for four-interval forced-choice (4IFC), ensuring reliable estimates without reliance on subjective "yes/no" judgments. These methods, rooted in signal detection theory, effectively reduce the influence of observer conservatism or optimism, providing more objective thresholds for hearing sensitivity. Staircase, or up-down, methods integrate adaptive intensity adjustments with forced-choice paradigms to efficiently converge on the threshold by altering stimulus levels based on observer responses. In a basic up-down procedure, intensity decreases after a correct response (hit) and increases after an incorrect one (miss), while variants such as the 2-down-1-up rule (requiring two consecutive correct responses to decrease intensity and one incorrect response to increase it) target convergence at the 70.7% correct performance level on the psychometric function. These transformed up-down techniques, widely adopted in auditory psychophysics, allow rapid estimation by focusing trials near the threshold region, with initial step sizes of 2–5 dB that are typically halved after each reversal to refine precision. Seminal work formalized these methods for psychoacoustic applications, emphasizing their robustness for small sample sizes and minimal assumptions about the underlying response distribution.
Békésy's tracking method employs a continuous sweep with automated intensity modulation: the observer holds down a button while the tone is audible, which causes the intensity to fall, and releases it when the tone becomes inaudible, which causes the intensity to rise, producing a self-recorded threshold trace. Developed for clinical audiometry, it operates in fixed-speed mode (constant sweep rate across frequencies) or variable-speed mode (observer-paced), enabling the generation of threshold traces that reveal patterns such as Type I (normal) or Type V (non-organic loss) configurations. This technique facilitates detailed profiling of hearing sensitivity without discrete trials, and is particularly useful for differentiating conductive and sensorineural impairments. These adaptive and forced-choice approaches offer significant advantages over classical methods, including reduced trial numbers (typically 10–20 per point) for faster testing and enhanced reproducibility through computer automation, which minimizes experimenter bias and enables precise control of step sizes and convergence criteria. By converging efficiently on targeted performance levels, they improve reliability in threshold estimation, making them standard in modern audiological assessments.
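The convergence levels quoted for the transformed up-down rules follow from a simple identity: an n-down-1-up staircase settles where the probability of n consecutive correct responses equals 0.5, i.e. p = 0.5^(1/n). A minimal sketch (function name invented for the example):

```python
def convergence_point(n_down):
    """Correct-response probability at which an n-down-1-up staircase
    converges: p such that p**n_down == 0.5."""
    return 0.5 ** (1.0 / n_down)

for n in (1, 2, 3):
    print(f"{n}-down-1-up converges at {convergence_point(n):.1%} correct")
# 1-down-1-up → 50.0%, 2-down-1-up → 70.7%, 3-down-1-up → 79.4%
```

The 70.7% figure for the 2-down-1-up rule in the text is exactly √0.5 expressed as a percentage.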

Key Phenomena

Hysteresis Effect

The hysteresis effect in the absolute threshold of hearing refers to the directional dependency observed in psychophysical measurements, where thresholds obtained during descending intensity scans (decreasing from audible to inaudible levels) are typically 1–3 dB lower than those obtained during ascending scans (increasing from inaudible to audible levels), particularly at mid frequencies around 1–2 kHz. This discrepancy arises because the point at which a tone becomes inaudible in a descending scan occurs at a lower intensity than the point at which it becomes detectable in an ascending scan, reflecting a lag in perceptual response influenced by scan direction. The effect is minimal at low frequencies below 500 Hz, where differences are often negligible, but it increases with frequency up to a peak near 1–2 kHz before diminishing at higher frequencies. This phenomenon was first systematically observed by Georg von Békésy in 1947 using tracking audiometry, a method in which subjects adjusted intensity via a button to trace their threshold over frequency sweeps. In these experiments, the resulting threshold tracings formed characteristic looped curves, with the ascending and descending paths not overlapping, illustrating the hysteresis as a closed contour on the plot. Subsequent studies confirmed these loops in automated Békésy procedures, attributing the separation to slight but consistent offsets in response timing and sensitivity during intensity modulation. Possible causes include auditory adaptation, where prolonged exposure to suprathreshold sounds in descending scans reduces sensitivity, leading to delayed detection of fading signals; off-frequency listening, in which listeners may shift attention to adjacent frequencies during scans to compensate for faint tones; and criterion shifts, where decision biases (e.g., anticipation of hearing or not hearing) alter the perceptual boundary based on recent stimulus history.
These factors contribute to the observed hysteresis without implying pathology, though differences exceeding 5 dB may indicate non-organic influences. To mitigate the hysteresis effect and estimate a more reliable true threshold, standard protocols recommend averaging multiple ascending and descending trials, as this balances directional biases and reduces variability to within 1 dB at most frequencies. This averaging approach is particularly important in clinical audiometry to ensure accurate representation of the true threshold, especially since the effect peaks where human hearing sensitivity is highest.

Psychometric Function

The psychometric function characterizes the sigmoid-shaped relationship between stimulus intensity and the probability of detection at the absolute threshold of hearing. As stimulus intensity increases from inaudible levels, the detection probability rises gradually from the chance level (50% in two-interval forced-choice paradigms) to near 100% correct responses. The threshold is defined as the intensity yielding 50% detection in yes/no procedures, or an equivalent performance level adjusted for the task's guess rate in forced-choice setups, providing a standardized measure of auditory sensitivity. The slope of the psychometric function, representing its steepness, quantifies how sharply detection probability changes with intensity, often around 2–5% per dB near threshold at frequencies of 1–4 kHz, where auditory sensitivity peaks; the function broadens at lower and higher frequencies due to reduced resolution. This width reflects variability in perceptual discrimination, with steeper slopes indicating lower internal uncertainty. Psychometric functions are typically modeled parametrically using Weibull or logistic distributions to fit empirical data and estimate threshold and slope parameters. A common formulation employs the Weibull function: P(d) = γ + (1 − γ)[1 − exp(−(I/α)^β)], where P(d) is the detection probability, γ is the guess rate (e.g., 0.5 for two-interval forced choice), I is the stimulus intensity, α scales the threshold location, and β controls the slope's steepness. The shape and position of the psychometric function are influenced by sensory noise, which adds variability to the neural representation of the stimulus, and by decision criteria, where listeners set internal boundaries for reporting detection based on signal detection theory principles. These functions are fitted to experimental trial data using maximum likelihood estimation to derive reliable estimates of threshold and slope, accounting for individual response patterns.
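The Weibull formulation above is easy to evaluate directly. A minimal sketch, using arbitrary linear intensity units and illustrative parameter values (α = 1, β = 2, γ = 0.5):

```python
import math

def weibull(I, alpha, beta, gamma=0.5):
    """P(d) = gamma + (1 - gamma) * (1 - exp(-(I/alpha)**beta)),
    the Weibull psychometric function given in the text."""
    return gamma + (1 - gamma) * (1 - math.exp(-((I / alpha) ** beta)))

# Evaluating around I = alpha shows the sigmoid rise from the 50%
# guess rate toward 100% detection
for I in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"I = {I:>4}: P = {weibull(I, alpha=1.0, beta=2.0):.3f}")
```

At I = α the function reaches γ + (1 − γ)(1 − e⁻¹) ≈ 0.816, so α marks a fixed performance level rather than the 50% point itself.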

Temporal Summation

Temporal summation refers to the phenomenon in which the absolute threshold of hearing decreases as the duration of an auditory stimulus increases, reflecting the auditory system's ability to integrate acoustic energy over time. For pure tones, the threshold typically drops by approximately 10 \log_{10}(T) dB, where T is the stimulus duration in seconds, for durations up to 200-300 ms, after which it plateaus, indicating the limit of complete temporal integration. This results in complete summation for durations shorter than about 200 ms, where the system behaves as if integrating total energy, and partial summation beyond that point, with shallower slopes.

The physiological basis for this integration lies in the auditory periphery, where responses to the stimulus envelope are temporally summed at the first synapse between inner hair cells and auditory nerve fibers, enabling detection thresholds to depend on the cumulative acoustic energy over time rather than on instantaneous levels. This process follows an intensity-duration trade-off described by the equation

I \cdot T^k = \text{constant},

where I is intensity, T is duration, and k \approx 1 for short tones, implying near-perfect energy summation (since a tenfold increase in T permits a 10 dB reduction in I).

In measurements, absolute thresholds for a 10 ms tone are typically 10-15 dB higher than for a 500 ms tone at frequencies around 1 kHz, with the effect being steeper at low frequencies (e.g., below 500 Hz), where longer integration times yield greater threshold reductions. Temporal summation breaks down for intermittent stimuli, with no effective integration across gaps exceeding 200 ms, leading to higher thresholds for pulsed sounds compared to continuous ones of equivalent total energy, which has implications for designing auditory signals in noisy environments.

Measurement Modalities

Minimal Audible Field

The minimal audible field (MAF) represents the lowest sound pressure level detectable by a listener in a free or diffuse sound field, standardized for measurement in anechoic or reverberation rooms to simulate natural acoustic environments. This threshold is defined relative to the sound pressure at the listener's head position in the absence of the listener, typically using pure-tone stimuli presented from a loudspeaker under controlled conditions. In the measurement procedure, the observer is seated at a fixed position, usually facing the sound source directly, while pure tones are emitted from a calibrated loudspeaker; detection thresholds are determined through psychophysical methods such as the method of limits or the method of constant stimuli.

Due to acoustic diffraction around the head and body, MAF thresholds at mid-frequencies (500-4000 Hz) are approximately 6 dB lower than those measured via earphones, as the effective sound pressure at the eardrum differs from the free-field level. MAF testing provides key advantages by replicating real-world binaural hearing, enabling the use of natural head and torso cues for localization and detection that are absent in monaural earphone methods. Calibration follows ISO 389-7:2019, which specifies reference equivalent threshold sound pressure levels (RETSPLs) for pure tones in free-field conditions with frontal incidence, ensuring reproducibility across setups. These standards support audiometric equipment validation in environments mimicking everyday listening scenarios.

Frequency dependence in MAF thresholds arises from anatomical factors, including pinna gain that amplifies high-frequency sounds directed toward the ear canal, resulting in lower (better) thresholds relative to what would occur without such directional enhancement. Typical RETSPL values from ISO 389-7:2019 for binaural free-field listening are summarized below for select audiometric frequencies (values approximate those from foundational studies such as Poulsen and Han, 2000):
Frequency (Hz)    RETSPL (dB re 20 μPa)
125               22
250               11
500               4
1,000             2
2,000             -1.5
4,000             -6.5
8,000             11.5
In contrast to minimal audible pressure measurements, which apply sound directly to the ear canal via transducers, MAF emphasizes environmental propagation and listener interaction.
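Using the RETSPL values tabulated above, the level of a free-field tone relative to the normal binaural reference threshold can be computed directly. A minimal sketch (the function name and dictionary layout are illustrative, not from the standard):

```python
# RETSPL values (dB re 20 uPa) for binaural free-field listening with
# frontal incidence, as tabulated above (after ISO 389-7).
RETSPL_FREE_FIELD = {
    125: 22.0, 250: 11.0, 500: 4.0, 1000: 2.0,
    2000: -1.5, 4000: -6.5, 8000: 11.5,
}

def level_above_reference_threshold(freq_hz, level_db_spl):
    """dB by which a free-field pure tone exceeds the normal-hearing
    reference threshold at that frequency (negative = below it)."""
    return level_db_spl - RETSPL_FREE_FIELD[freq_hz]

# The same 40 dB SPL tone sits much further above threshold at 4 kHz
# (where the reference threshold is -6.5 dB SPL) than at 125 Hz.
margin_4k = level_above_reference_threshold(4000, 40.0)
margin_125 = level_above_reference_threshold(125, 40.0)
```

This makes the frequency dependence concrete: equal-SPL tones are not equally audible, because the reference threshold varies by nearly 30 dB across the table.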

Minimal Audible Pressure

The minimal audible pressure (MAP) is defined as the lowest detectable sound pressure at the tympanic membrane, measured monaurally using calibrated insert earphones or probe tubes that seal the ear canal and deliver controlled acoustic stimuli directly to the tympanic membrane. This approach isolates the pressure at the eardrum, expressed in decibels sound pressure level (dB SPL) relative to a reference of 20 μPa, the standard for 0 dB SPL. In the measurement procedure, pure-tone thresholds are established by presenting stimuli through sealed transducers, with pressure verified in the ear canal or in a standardized 6-cc coupler to ensure accuracy and avoid influences from ambient acoustics.

These thresholds serve as the basis for hearing level (HL) calibration, where 0 dB HL at 1 kHz aligns with the average normal threshold of approximately 9 dB SPL (equivalent to about 56 μPa). The ANSI S3.6 standard specifies reference equivalent threshold sound pressure levels (RETSPLs) for earphones using this method, ensuring reproducibility across clinical devices. MAP offers advantages in precision and control, enabling isolated monaural assessment without contributions from binaural summation or environmental reflections, which is ideal for diagnostic audiometry. Relative to free-field methods, MAP yields lower inter-subject variability and a relatively flat threshold curve across frequencies (e.g., 9-15 dB SPL from 500 Hz to 8 kHz), as the sealed delivery eliminates head-related acoustic cues.
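The dB SPL scale used throughout this section is a logarithmic ratio against the 20 μPa reference, so levels and pressures interconvert directly; the sketch below reproduces the ~56 μPa figure quoted above for a 9 dB SPL threshold:

```python
import math

P_REF = 20e-6  # reference RMS pressure: 20 micropascals = 0 dB SPL

def db_spl_to_pascals(level_db_spl):
    """Convert a level in dB SPL to RMS pressure in pascals."""
    return P_REF * 10.0 ** (level_db_spl / 20.0)

def pascals_to_db_spl(pressure_pa):
    """Convert an RMS pressure in pascals to a level in dB SPL."""
    return 20.0 * math.log10(pressure_pa / P_REF)

# The ~9 dB SPL average threshold at 1 kHz corresponds to ~56.4 uPa,
# matching the "about 56 uPa" figure in the text above.
p_threshold = db_spl_to_pascals(9.0)
```

Because the scale is a 20 log10 pressure ratio, every 20 dB step corresponds to a tenfold change in pressure (and a hundredfold change in intensity).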

Applications and Variations

Audiological Standards

International standards for audiometric testing ensure consistency and reliability in measuring the absolute threshold of hearing in clinical settings. The International Organization for Standardization (ISO) provides key guidelines through ISO 8253-1:2010, which specifies procedures for pure-tone air-conduction audiometry, covering frequencies from 125 Hz to 8000 Hz in octave intervals and using 5 dB intensity steps to determine thresholds. This standard outlines masking requirements, test conditions, and reporting formats to minimize variability in clinical assessments. In the United States, the Acoustical Society of America (ASA) maintains complementary specifications via ANSI/ASA S3.6-2018, which defines performance criteria for audiometers, including signal generation, output levels, and calibration tolerances for pure-tone testing. A central feature is the establishment of 0 dB hearing level (HL) as the reference zero, calibrated to the average pure-tone thresholds of otologically normal young adults at each standard frequency.

Audiometers must undergo regular calibration to maintain accuracy, typically using an artificial ear or coupler to verify output levels against reference equivalents, with checks recommended annually or after repairs. For normal hearing, thresholds across octave frequencies from 250 Hz to 8000 Hz generally fall within -10 to 20 dB HL, reflecting the typical range for young adults without auditory pathology. Extended high-frequency testing up to 16 kHz facilitates early detection of noise-induced hearing loss, particularly in occupational or noise-exposed populations, as outlined in standards such as ISO 389-5:2006 and related ISO 389 series references for reference equivalent threshold sound pressure levels above 8 kHz. These extensions specify additional requirements and procedures for frequencies beyond 8 kHz, promoting standardized clinical protocols for comprehensive threshold assessment. The 2025 revision of ANSI/ASA S3.6 reaffirms the existing specifications without introducing new technical changes.
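The 5 dB step procedure mentioned above is commonly implemented clinically as the modified Hughson-Westlake method: descend 10 dB after each response, ascend in 5 dB steps after each miss, and take as threshold the lowest level heard on repeated ascending runs. The sketch below is a simplified illustration of that logic (the "two responses on ascent" stopping rule and the deterministic listener model are simplifying assumptions, not the full ISO 8253-1 procedure):

```python
def hughson_westlake(responds, start_db=40, floor_db=-10, ceiling_db=120):
    """Simplified modified Hughson-Westlake sketch with 5 dB steps.

    `responds(level)` models the listener (True = tone heard); the model
    is assumed deterministic here for illustration.
    """
    level = start_db
    # Familiarization: descend in 10 dB steps until the tone is inaudible.
    while responds(level) and level - 10 >= floor_db:
        level -= 10
    heard_at = {}  # level -> number of ascending runs with a response
    while True:
        # Ascend in 5 dB steps until the listener responds.
        while not responds(level):
            level += 5
            if level > ceiling_db:
                return None  # no measurable threshold in range
        heard_at[level] = heard_at.get(level, 0) + 1
        if heard_at[level] >= 2:
            return level  # heard on two ascending runs at this level
        level = max(level - 10, floor_db)  # drop 10 dB, start a new run

# A deterministic listener who hears any tone at or above 25 dB HL:
threshold = hughson_westlake(lambda level: level >= 25)
```

Real listeners respond probabilistically near threshold, which is why the clinical rule requires agreement across multiple ascending runs rather than a single response.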

Individual and Population Differences

The absolute threshold of hearing exhibits significant individual and population-level variations, primarily influenced by age, gender, noise exposure, and other biological factors. Age-related changes, known as presbycusis, lead to a progressive elevation in hearing thresholds, particularly at higher frequencies. Longitudinal studies indicate an average threshold shift of approximately 0.7 to 1 dB per year in high frequencies among older adults, equating to 7-10 dB per decade, with cumulative losses reaching up to 40 dB at 8 kHz by age 70 in otologically normal populations. These shifts follow age-graded norms outlined in ISO 7029, which provide median thresholds for adults aged 18 to 80 years across frequencies from 125 Hz to 8 kHz, showing steeper increases in males and at frequencies above 2 kHz.

Gender differences contribute subtle but consistent variations, with males typically exhibiting 2-5 dB higher thresholds than females at high frequencies (3-10 kHz), attributed to greater cumulative noise exposure and hormonal factors. Occupational noise exposure further exacerbates these differences, often resulting in permanent threshold shifts of 10-20 dB in the 3-6 kHz range among exposed workers, independent of age. Population data from ISO 7029 establish these as normative benchmarks, while ethnic variations in thresholds are minimal after controlling for socioeconomic and exposure factors, though non-Hispanic Black individuals may show slightly better sensitivity (1-3 dB lower thresholds) at certain frequencies.

Additional factors such as ototoxic drugs and genetics also influence individual thresholds. Medications such as aminoglycoside antibiotics and cisplatin can induce high-frequency threshold elevations of 20-50 dB, progressing from the basal turn of the cochlea outward. Genetic predispositions, particularly autosomal dominant nonsyndromic hearing loss loci (e.g., DFNA2, DFNA5), contribute to earlier or more severe threshold shifts in affected families, often starting in mid-adulthood.
Longitudinal research confirms a general progression rate of approximately 0.7 dB per year (or 7 dB per decade) across populations when accounting for these multifactorial influences, emphasizing the need for personalized audiometric monitoring.
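The per-year rates cited above can be turned into a crude linear projection of high-frequency threshold elevation with age. The sketch below is not the ISO 7029 model (which uses sex- and frequency-specific curves); the onset age of 30 and the 0.85 dB/year default (midpoint of the 0.7-1 dB range quoted above) are illustrative assumptions:

```python
def projected_high_freq_threshold(age, baseline_db_hl=0.0,
                                  onset_age=30, rate_db_per_year=0.85):
    """Crude linear projection of high-frequency (e.g., 8 kHz) threshold
    elevation with age. onset_age and rate_db_per_year are illustrative
    assumptions, not values from ISO 7029."""
    if age <= onset_age:
        return baseline_db_hl  # no projected age-related shift yet
    return baseline_db_hl + rate_db_per_year * (age - onset_age)

# At age 70 this projects ~34 dB of elevation at 8 kHz, within the
# "up to 40 dB by age 70" range cited above.
shift_at_70 = projected_high_freq_threshold(70)
```

Such back-of-envelope projections are only indicative; individual trajectories depend heavily on noise history, ototoxic exposure, and genetics, which is why the text emphasizes personalized audiometric monitoring.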

References
