Audio frequency
from Wikipedia
Sound measurements

Characteristic | Symbols
Sound pressure | p, SPL, L_PA
Particle velocity | v, SVL
Particle displacement | δ
Sound intensity | I, SIL
Sound power | P, SWL, L_WA
Sound energy | W
Sound energy density | w
Sound exposure | E, SEL
Acoustic impedance | Z
Audio frequency | AF
Transmission loss | TL

An audio frequency or audible frequency (AF) is a periodic vibration whose frequency is audible to the average human. The SI unit of frequency is the hertz (Hz). It is the property of sound that most determines pitch.[1]

The generally accepted standard hearing range for humans is 20 to 20,000 Hz (20 kHz).[2][3][4] In air at atmospheric pressure, these represent sound waves with wavelengths of 17 metres (56 ft) to 1.7 centimetres (0.67 in). Frequencies below 20 Hz are generally felt rather than heard, assuming the amplitude of the vibration is great enough. Sound waves that have frequencies below 20 Hz are called infrasonic and those above 20 kHz are called ultrasonic.

Sound propagates as mechanical vibration waves of pressure and displacement in air or other substances.[5] In general, the frequency components of a sound determine its "color", its timbre. When the frequency of a sound is referred to in the singular, it means the property that most determines its pitch.[6] Higher pitches have higher frequencies, and lower pitches have lower frequencies.

The frequencies an ear can hear are limited to a specific range. The audible frequency range for humans is typically given as between about 20 Hz and 20,000 Hz (20 kHz), though the high-frequency limit usually declines with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz (60 kHz).[7]

In many media, such as air, the speed of sound is approximately independent of frequency, so the wavelength of the sound waves (distance between repetitions) is approximately inversely proportional to frequency.

Frequencies and descriptions

Frequency (Hz) | Octave | Description
16 to 32 | 1st | The lower human threshold of hearing, and the lowest pedal notes of a pipe organ.
32 to 512 | 2nd to 5th | Rhythm frequencies, where the lower and upper bass notes lie.
512 to 2,048 | 6th to 7th | Defines human speech intelligibility; gives a horn-like or tinny quality to sound.
2,048 to 8,192 | 8th to 9th | Gives presence to speech, where labial and fricative sounds lie.
8,192 to 16,384 | 10th | Brilliance, the sounds of bells and the ringing of cymbals, and sibilance in speech.
16,384 to 32,768 | 11th | Beyond brilliance; nebulous sounds approaching and just passing the upper human threshold of hearing.
Oscillogram of a pure tone middle C (262 Hz). (Scale: 1 square is equal to 1 millisecond)
C5, an octave above middle C. The frequency is twice that of middle C (523 Hz).
C3, an octave below middle C. The frequency is half that of middle C (131 Hz).
MIDI note | Frequency (Hz) | Description
0 | 8.17578125 | Lowest organ note (fundamental frequency inaudible)
12 | 16.3515625 | Lowest note for tuba, large pipe organs, and the Bösendorfer Imperial grand piano (fundamental frequency inaudible under average conditions)
24 | 32.703125 | Lowest C on a standard 88-key piano
36 | 65.40625 | Lowest note for cello
48 | 130.8125 | Lowest note for viola, mandola
60 | 261.625 | Middle C
72 | 523.25 | C in the middle of the treble clef
84 | 1,046.5 | Approximately the highest note reproducible by the average female human voice
96 | 2,093 | Highest note for a flute
108 | 4,186 | Highest note on a standard 88-key piano
120 | 8,372 | 
132 | 16,744 | Approximately the tone that a typical CRT television emits while running

from Grokipedia
Audio frequency refers to the oscillation rates of sound waves or electrical signals that fall within the range audible to the human ear, generally from 20 Hz to 20 kHz, encompassing the full spectrum of pitches perceivable by healthy young adults under quiet conditions. This range defines the audible portion of the acoustic spectrum, where low frequencies produce deep bass tones and high frequencies yield sharp treble, with human auditory sensitivity peaking between approximately 2 kHz and 5 kHz. In acoustics, audio frequencies represent pressure variations in air caused by vibrating sources, measured in hertz (Hz) as cycles per second, and are fundamental to sound perception via the ear's cochlea, which tonotopically maps frequencies to neural signals for processing.

The lower limit of 20 Hz corresponds to infrasonic vibrations just entering audibility, while the upper 20 kHz boundary marks the onset of ultrasonic waves beyond typical human detection, though individual variation exists—newborns may hear up to 20 kHz, but presbycusis often reduces high-frequency sensitivity with age. Complex sounds, like speech or music, comprise multiple audio frequencies superimposed, with speech fundamentals around 100–300 Hz and harmonics extending to 8 kHz or higher for sibilant sounds.

Audio engineering standards emphasize reproducing this full 20 Hz to 20 kHz range to achieve high-fidelity sound, dividing it into sub-bands for design: sub-bass (20–60 Hz) for rumble, bass (60–250 Hz) for warmth, low mids (250–500 Hz) for body, midrange (500–2 kHz) for clarity and vocals, upper mids (2–4 kHz) for presence, and treble (4–20 kHz) for air and sparkle. Devices like loudspeakers and amplifiers are specified by their frequency response across this band, ideally flat within ±3 dB to minimize coloration, as deviations can alter perceived timbre or tonal balance. Beyond hearing, audio systems may extend slightly for headroom, but the core focus remains on this audible spectrum to support applications ranging from recording to spatial audio.

Fundamentals

Definition

Audio frequency refers to the range of periodic vibrations, typically mechanical in the form of sound waves or electrical/electromagnetic signals, that fall within the spectrum perceivable by the average human ear, generally from 20 Hz to 20 kHz. Frequency is quantified in hertz (Hz), the SI unit representing the number of cycles or oscillations per second. This range distinguishes audio frequencies from infrasound, which encompasses vibrations below 20 Hz, and ultrasound, which includes those above 20 kHz; both are typically inaudible to humans without specialized equipment. Human perception of pitch within audio frequencies follows a logarithmic scale, where musical intervals such as octaves are defined by a doubling of frequency (e.g., from 440 Hz to 880 Hz), ensuring equal perceptual steps across the spectrum rather than linear frequency increments. The term "audio frequency" emerged in the early 20th century amid advancements in radio and telephony, where it became necessary to differentiate low-frequency audible signals from the higher radio frequencies used for transmission, as seen in vacuum-tube amplifier designs around 1912 and engineering treatises of the 1910s.

Physical Properties

Audio frequencies manifest as sound waves, which are longitudinal pressure waves propagating through a medium such as air. In this context, the wave consists of alternating regions of compression and rarefaction, where air molecules oscillate parallel to the direction of wave travel, creating variations in local pressure. The frequency of these oscillations, measured in hertz (Hz), represents the number of complete cycles per second and fundamentally determines the pitch of the sound, though pitch perception varies with human hearing. The wavelength $\lambda$ of an audio frequency wave is inversely related to its frequency $f$ and directly proportional to the speed of sound $c$ in the medium, given by the formula $\lambda = \frac{c}{f}$. At standard temperature (20 °C), the speed of sound in dry air is approximately 343 m/s. For example, a 1 kHz tone has a wavelength of about 34 cm, illustrating how lower frequencies produce longer waves that can interact differently with environments. While amplitude primarily governs the intensity and loudness of sound—quantified using the decibel (dB) scale, where sound pressure level (SPL) in dB is calculated as $L_p = 20 \log_{10}\left(\frac{p}{p_0}\right)$ with reference pressure $p_0 = 20\ \mu\text{Pa}$—the spectral content plays a key role in defining timbre through the distribution of frequencies. Complex audio signals comprise multiple frequencies, and their relative amplitudes contribute to the unique quality distinguishing, say, one instrument from another. Audio frequencies are measured using instruments like oscilloscopes, which visualize time-domain waveforms to determine period and thus frequency, and spectrum analyzers, which display frequency-domain content. The Fourier transform, particularly the fast Fourier transform (FFT) algorithm, decomposes complex waveforms into their constituent sinusoidal components, enabling precise identification of frequency spectra in audio signals. These tools are essential for analyzing both pure tones and complex sounds within the typical audible range of 20 Hz to 20 kHz.
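To make these relationships concrete, here is a minimal Python sketch (not from the source text; the 1 kHz tone, 48 kHz sample rate, and use of NumPy are illustrative choices) that synthesizes a pure tone, identifies its frequency with an FFT, and derives the corresponding wavelength from $\lambda = c/f$.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in dry air at 20 °C (value from the text)
SAMPLE_RATE = 48_000     # Hz, an arbitrary but common audio sample rate
TONE_FREQ = 1_000.0      # Hz, the example tone from the text

# Synthesize one second of a pure sine tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = np.sin(2 * np.pi * TONE_FREQ * t)

# Decompose the waveform with an FFT and locate the dominant component.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)
peak_freq = freqs[np.argmax(spectrum)]

# Wavelength follows from lambda = c / f.
wavelength = SPEED_OF_SOUND / peak_freq

print(f"Estimated frequency: {peak_freq:.1f} Hz")
print(f"Wavelength in air:   {wavelength * 100:.1f} cm")  # about 34 cm for 1 kHz
```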

Human Auditory Range

Audible Spectrum Limits

The audible spectrum for human hearing is conventionally defined as the frequency range from 20 Hz to 20,000 Hz (20 kHz) for young adults with normal hearing under standard conditions. This range represents the boundaries where pure tones can be detected at threshold levels, as established through psychoacoustic measurements of minimum audible fields. The ISO 226 standard on equal-loudness contours provides the foundational data for these limits, specifying levels and frequencies perceived as equally loud by otologically normal listeners aged 18 to 25 years. At the lower end, frequencies below 20 Hz are generally not perceived as discrete auditory tones but rather as tactile vibrations or pressure sensations, marking the transition to infrasound. This limit arises because the cochlea's basilar membrane responds less effectively to very low frequencies, requiring high levels of approximately 78 dB SPL for just-detectable tonal perception near 20 Hz. The upper limit of 20 kHz typically applies to adolescents and young adults, but it declines progressively with age due to presbycusis, a degenerative condition affecting the inner ear's hair cells. For instance, individuals over 40 years often experience thresholds shifting below 15 kHz, with high-frequency sensitivity dropping first and most severely, as evidenced by audiometric data showing steeper losses above 8 kHz in older populations. Individual variations further influence these boundaries; women generally exhibit slightly higher sensitivity, particularly at high frequencies, with thresholds about 2 dB better than men across tested ranges, possibly due to sex-specific differences in cochlear mechanics and hormonal influences. Additionally, chronic noise exposure accelerates boundary shifts by damaging outer hair cells, primarily impacting frequencies from 3 to 6 kHz initially and extending to broader high-frequency loss with prolonged exposure above 85 dBA.

Variations in Hearing Sensitivity

Human hearing sensitivity varies significantly across the audio frequency spectrum, with the ear exhibiting peak responsiveness in the mid-frequency range of approximately 2 to 5 kHz, where the threshold of hearing is lowest, and reduced sensitivity at both low and high extremes. This frequency response curve reflects the combined effects of the outer ear's resonance and the inner ear's mechanical properties, resulting in a dip in sensitivity below 500 Hz and above 8 kHz for most adults. The Fletcher-Munson curves, also known as equal-loudness contours, illustrate these variations by mapping the sound pressure levels required for tones of different frequencies to be perceived as equally loud relative to a 1 kHz reference. Developed through experimental measurements, these contours show that at moderate listening levels, low frequencies demand substantially higher levels for equivalent perceived loudness; for instance, a 100 Hz tone requires about 10 dB more intensity than a 1 kHz tone to sound equally loud. The contours flatten at higher sound pressure levels, indicating less pronounced sensitivity differences, but the mid-frequency peak persists across all levels. This uneven sensitivity profile profoundly influences audio system design, particularly in emphasizing the midrange frequencies (roughly 1 to 3 kHz) that carry the bulk of speech intelligibility information, such as consonant sounds, which are crucial for clear communication. Audio engineers prioritize a balanced frequency response in this range to ensure intelligible speech perception, as under-emphasis here can degrade understanding even if overall volume is adequate. Exposure to intense noise can induce temporary threshold shifts (TTS), a reversible decrease in hearing sensitivity that typically affects high frequencies first and recovers within hours to days, serving as an early indicator of potential damage. Repeated or prolonged exposures may lead to permanent threshold shifts (PTS), resulting in lasting high-frequency hearing loss, often starting around 4 kHz, due to damage to cochlear hair cells in the basal region. Such shifts are closely linked to tinnitus, a perception of phantom noise frequently matching the affected high-frequency bands, which can persist even after threshold recovery and significantly impacts quality of life.
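In measurement practice, this frequency dependence of loudness is often approximated with standardized weighting curves rather than the full equal-loudness contours. The sketch below applies the common A-weighting formula (an illustrative stand-in, not data from the Fletcher-Munson measurements themselves) to show how strongly a low-frequency tone is discounted relative to 1 kHz.

```python
import math

def a_weighting_db(f: float) -> float:
    """Approximate A-weighting gain in dB for frequency f in Hz (IEC 61672 form)."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0  # offset so that A(1 kHz) is about 0 dB

for freq in (100, 1_000, 4_000, 10_000):
    print(f"{freq:>6} Hz: {a_weighting_db(freq):+6.1f} dB")
# 100 Hz is discounted by roughly 19 dB relative to 1 kHz, reflecting
# the ear's reduced low-frequency sensitivity at moderate levels.
```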

Frequency Classification

Low Frequencies

Low audio frequencies, spanning approximately 20 Hz to 250 Hz, form the bass range essential for conveying depth, weight, and emotional impact in sound reproduction. These frequencies correspond to longer wavelengths—ranging from about 17 meters at 20 Hz to 1.4 meters at 250 Hz—compared to higher pitches, influencing how low-frequency sound interacts with environments and equipment. Natural sources of low frequencies abound in everyday acoustics. Thunder produces rumbling sounds primarily in the 20-120 Hz range, creating a sense of vast power through infrasonic components that can extend below 10 Hz. Bass drums generate fundamental tones between 40-100 Hz, delivering the punchy attack in percussion ensembles. Male vocals typically feature fundamental frequencies around 100-200 Hz, contributing to the resonant quality of lower-pitched speech and singing. The physical characteristics of low frequencies lead to distinct acoustic behaviors. Their extended wavelengths readily excite room modes—resonant frequencies determined by room dimensions—resulting in standing waves that cause bass buildup or nulls at specific locations, unevenly distributing bass energy. To counter this, subwoofers are employed as dedicated drivers capable of handling these wavelengths with high output, often placed strategically to minimize modal peaks. Human hearing sensitivity drops markedly below 200 Hz, requiring higher levels for equivalent perceived loudness compared to mid frequencies. Challenges in handling low frequencies stem from their high energy demands and recording complexities. Achieving adequate reproduction necessitates substantially more amplifier power—often 10 times or more than for mid frequencies—to drive speakers against air resistance and achieve comparable volume, due to driver inefficiencies and room effects on bass propagation. In recording, phase misalignment between microphones or tracks can cause destructive interference in the low end, producing a thin or hollow bass response that undermines mix clarity.
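Because axial room modes follow directly from room dimensions, they can be estimated with a small sketch; the rectangular room dimensions below are hypothetical examples, and the formula $f_n = \frac{n c}{2L}$ covers only the simplest (axial) modes.

```python
SPEED_OF_SOUND = 343.0  # m/s

# Example (hypothetical) rectangular room dimensions in metres.
room = {"length": 5.0, "width": 4.0, "height": 2.5}

def axial_modes(dimension_m: float, count: int = 3) -> list[float]:
    """First few axial room-mode frequencies for one dimension: f_n = n*c/(2L)."""
    return [n * SPEED_OF_SOUND / (2 * dimension_m) for n in range(1, count + 1)]

for name, size in room.items():
    modes = ", ".join(f"{f:.1f} Hz" for f in axial_modes(size))
    print(f"{name:>6} ({size} m): {modes}")
# A 5 m length yields axial modes near 34, 69, and 103 Hz, squarely in the
# bass range where standing waves cause peaks and nulls.
```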

Mid Frequencies

Mid frequencies, encompassing approximately 250 Hz to 4,000 Hz, constitute the central portion of the audible spectrum. This range is particularly important because it includes most human voices, melodic instruments, and the detail that gives sound its clarity, definition, and presence. It primarily handles the articulation of speech consonants and the rich harmonic structures that define many acoustic and electronic sounds. Prominent sources within this band include the human voice, where formants—the resonant peaks of the vocal tract—typically occur between 500 Hz and 2,000 Hz, shaping vowel quality and consonant sharpness for effective communication. In music, electric guitars derive their core tonal presence and note attack from harmonics concentrated in the 250 Hz to 4,000 Hz region, while the piano's mid-octave keys and overtones, spanning similar frequencies, provide melodic warmth and sustain. Perceptually, the mid frequencies align with peak human hearing sensitivity, especially from 1,000 Hz to 4,000 Hz, where the ear perceives sounds most acutely and sensitivity on the equal-loudness contours is greatest. This heightened responsiveness ensures that this band conveys the bulk of informational content, such as phonetic details essential for speech intelligibility in both quiet and noisy environments. Excessive energy in the mid frequencies, however, can introduce harshness, particularly in the upper portion around 2,000 Hz to 4,000 Hz, leading to listener discomfort and reduced enjoyment. In complex audio mixes, overlapping elements from vocals, guitars, and other instruments frequently cause clutter, muddying separation and demanding careful balancing to maintain transparency.

High Frequencies

High audio frequencies, typically encompassing the range from approximately 4 kHz to 20 kHz, correspond to the treble portion of the audible spectrum and play a key role in imparting airiness, sparkle, and intricate detail to soundscapes. This band is essential for capturing transient elements that enhance perceived clarity and spatial depth in audio reproduction. Sibilance, the sharp hissing quality in consonants like "s" and "sh," predominantly occurs within the 5-10 kHz subrange, where excessive emphasis can lead to harshness if not balanced properly. Prominent sources of high frequencies include percussion instruments such as cymbals and hi-hats, whose metallic attacks and decays generate harmonics extending up to 10 kHz and beyond, providing rhythmic bite and shimmer in musical contexts. Natural examples encompass bird calls, which often feature shrill components in this range for communication, and breathy vocal elements, where upper harmonics add an ethereal quality to human voices. These sources collectively contribute to the vividness and nuance in environmental and artistic audio. The short wavelengths of high-frequency sounds—ranging from about 8.6 cm at 4 kHz to 1.7 cm at 20 kHz—facilitate precise auditory imaging and localization, as these dimensions allow the head to create significant interaural level differences by shadowing waves arriving at the far ear. This property supports enhanced stereo separation and directionality cues in listening environments. However, high frequencies attenuate more rapidly in air than lower ones due to increased molecular absorption and scattering effects, which scale with frequency and limit their effective range over longer distances. Presbycusis, the progressive age-related decline in hearing, initially impairs sensitivity to frequencies above 4 kHz, resulting in a loss of detail that affects the discernment of consonants, harmonics, and subtle environmental sounds. This high-frequency vulnerability arises from degenerative changes in the cochlea's basal turn, leading to reduced auditory acuity for treble elements and diminished overall clarity.

Psychoacoustic Effects

Perceived Pitch

Human perception of pitch is fundamentally logarithmic with respect to frequency, meaning that equal intervals in perceived pitch correspond to multiplicative changes in frequency rather than additive ones. This non-linear relationship arises because the auditory system processes sound in a way that compresses higher frequencies, making the perceived difference between tones more uniform on a logarithmic scale. A classic example of this logarithmic scaling is the octave interval, where the perceived pitch rises by one octave when the frequency exactly doubles; for instance, A5 at 880 Hz is heard as one octave above A4 at 440 Hz. To model this psychoacoustically, the mel scale provides an approximation of perceived pitch, transforming linear frequency $f$ (in Hz) to mels $m$ via the formula:

$m = 2595 \log_{10}\left(1 + \frac{f}{700}\right)$

This equation, derived from empirical studies, captures the scale's near-linear behavior at low frequencies and logarithmic behavior at higher ones, aiding applications such as speech recognition and audio processing. Another key aspect of pitch perception is the missing fundamental phenomenon, where the brain infers the fundamental frequency of a complex tone from its higher harmonics even if the fundamental itself is absent from the signal. This illusory pitch arises because the auditory system analyzes the periodicity and spectral structure of the harmonics, reconstructing the missing low-frequency component cognitively rather than from direct cochlear stimulation. Originally described in early psychoacoustic research, this effect underscores how pitch is a constructed percept rather than a simple reflection of physical frequency. In cultural contexts, musical scales like equal temperament divide the octave into 12 equal semitones, each with a frequency ratio of $2^{1/12} \approx 1.0595$, allowing consistent transposition across keys in Western music traditions. This system, while an approximation of just intonation's simple ratios (e.g., 3:2 for a perfect fifth), prioritizes versatility over perfect harmonic purity, influencing composition and performance globally.
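Both relationships quoted above lend themselves to a short sketch; the MIDI-style note numbering (A4 = note 69) is a common convention used here only for illustration.

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """Mel-scale approximation of perceived pitch: m = 2595 * log10(1 + f/700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def midi_to_hz(note: int, a4_hz: float = 440.0) -> float:
    """Equal-tempered frequency of a MIDI note (semitone ratio 2**(1/12), A4 = note 69)."""
    return a4_hz * 2.0 ** ((note - 69) / 12.0)

# Doubling the frequency raises the pitch by one octave (A4 -> A5).
print(midi_to_hz(69), midi_to_hz(81))      # 440.0, 880.0
# The same notes on the mel scale of perceived pitch.
print(hz_to_mel(440.0), hz_to_mel(880.0))  # roughly 550 and 917 mels
```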

Frequency Masking

Frequency masking is a psychoacoustic phenomenon in which the audibility of one sound is diminished or obscured by the presence of another sound, due to the limited resolution of the human auditory system in processing simultaneous or closely timed acoustic events. This effect arises from the nonlinear behavior of the cochlea, where stronger neural responses to a dominant frequency suppress weaker responses to nearby frequencies. Simultaneous masking occurs when a louder sound, such as a low-frequency tone, hides quieter sounds at adjacent frequencies within the same critical band, as the auditory filters cannot resolve them distinctly. Temporal masking, on the other hand, involves sounds that precede (pre-masking) or follow (post-masking) the target sound by a short duration, typically up to 200 milliseconds, where the masker's excitation lingers in the auditory pathway, rendering the target inaudible. These processes are quantified through masking thresholds, which define the minimum level at which a sound becomes perceptible in the presence of a masker. The concept of critical bands underlies frequency masking, representing frequency regions within which the ear's resolution is roughly constant, leading to intra-band interactions that amplify masking effects. The Bark scale models these bands, spanning approximately 24 critical bands across the audible spectrum from 20 Hz to 16 kHz, with band widths increasing from about 100 Hz at low frequencies to over 3 kHz at high frequencies. The Bark number $z$ for a given frequency $f$ in hertz is calculated as:

$z = 13 \arctan(0.00076 f) + 3.5 \arctan\left( \left( \frac{f}{7500} \right)^2 \right)$

This scale ensures that masking is primarily confined within each band, facilitating models of auditory perception. In audio compression applications, frequency masking enables efficient data reduction by exploiting these thresholds to eliminate inaudible spectral components. The MP3 codec, developed in the early 1990s, employs a psychoacoustic model to compute simultaneous and temporal masking curves, allocating fewer bits to masked regions while preserving audible content, achieving compression ratios up to 12:1 with minimal perceptual degradation. For example, in music production, a prominent bass line can mask vocal formants around 200-500 Hz, reducing lyrical intelligibility unless addressed through spectral separation. Similarly, in recordings, the inherent noise floor—typically -90 dB or lower in professional setups—is masked by louder program material, ensuring that subtle hiss or hum remains below the auditory threshold during playback.
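A minimal sketch of the Bark-number formula quoted above; the test frequencies are arbitrary examples.

```python
import math

def hz_to_bark(f_hz: float) -> float:
    """Critical-band (Bark) number: z = 13*atan(0.00076 f) + 3.5*atan((f/7500)^2)."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

# Two tones falling inside the same critical band interact (mask) far more
# strongly than tones landing in different bands.
for f in (100, 1_000, 4_000, 16_000):
    print(f"{f:>6} Hz -> Bark {hz_to_bark(f):5.2f}")
```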

Technical Applications

Audio Reproduction Systems

Audio reproduction systems are engineered to faithfully capture, store, and reproduce sound waves across the audible spectrum, ensuring minimal distortion and coverage of frequencies from approximately 20 Hz to 20 kHz to align with human hearing capabilities. These systems encompass components for input (microphones), digital or analog storage (via sampling or mechanical means), and output (speakers), each optimized to handle specific frequency bands without introducing artifacts such as aliasing or phase issues. Microphones serve as the primary capture devices in audio reproduction, converting acoustic pressure variations into electrical signals while aiming for a flat frequency response to accurately represent the input sound. Studio condenser microphones, for instance, typically exhibit a response that is flat within ±3 dB from 20 Hz to 20 kHz, allowing precise recording of bass fundamentals, harmonics, and high-frequency transients in professional environments. This design minimizes coloration, ensuring the captured signal mirrors the original sound's spectral content as closely as possible. In storage, the Nyquist-Shannon sampling theorem dictates that the sampling rate must exceed twice the highest frequency of interest to prevent aliasing, necessitating a rate greater than 40 kHz for the full audible band up to 20 kHz. The compact disc (CD) standard adopts 44.1 kHz as its sampling rate, providing a margin above the Nyquist limit to accommodate practical anti-aliasing filters while capturing the complete spectrum without loss. This rate originated from adaptations in video recording technologies but has become the benchmark for consumer digital audio, enabling storage formats that preserve fidelity during playback. Modern high-resolution formats employ higher sampling rates, such as 96 kHz and 192 kHz at 24-bit depth, to extend beyond the audible range for enhanced detail and reduced quantization noise, supported by streaming services and lossless file formats. Speakers reproduce stored audio by converting electrical signals back into acoustic waves, often using specialized drivers to manage different frequency ranges efficiently. Woofers handle low and lower midrange frequencies (typically 40 Hz to 2-3 kHz in two-way systems), leveraging larger diaphragms for efficient bass reproduction, while tweeters manage high frequencies (above 2-3 kHz), employing smaller, lighter diaphragms to achieve the rapid response needed for treble detail. Crossover networks, consisting of passive filters with inductors and capacitors, divide the signal at these band boundaries—around 2-3 kHz for two-way systems—to direct appropriate frequencies to each driver, preventing damage and overlap distortions. The historical evolution of audio reproduction systems reflects progressive expansions in frequency coverage, driven by technological advancements. Early phonographs, reliant on mechanical needles and acoustic horns, were limited to a narrow band of roughly 100 Hz to 5 kHz due to groove constraints and playback mechanics, restricting reproduction to midrange tones with significant high- and low-frequency roll-off. By the 1950s, the advent of high-fidelity (hi-fi) systems, incorporating vacuum-tube amplifiers and improved electrostatic or dynamic drivers, achieved near-full-range response from 20 Hz to 20 kHz, enabling balanced playback from orchestral lows to treble highs in home environments.
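The Nyquist condition can be illustrated with a short sketch (the sample rates and the 25 kHz test tone are example values): a component above half the sampling rate folds back, or aliases, to a lower apparent frequency, which is why the capture chain band-limits the signal before sampling.

```python
import numpy as np

def dominant_freq(signal: np.ndarray, sample_rate: float) -> float:
    """Return the strongest frequency component of a sampled signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def sample_tone(tone_hz: float, sample_rate: float, seconds: float = 1.0) -> np.ndarray:
    t = np.arange(int(sample_rate * seconds)) / sample_rate
    return np.sin(2 * np.pi * tone_hz * t)

TONE = 25_000.0  # Hz, deliberately above the audible band

# At 44.1 kHz the Nyquist limit is 22.05 kHz, so a 25 kHz tone aliases.
aliased = dominant_freq(sample_tone(TONE, 44_100), 44_100)
# At 96 kHz the same tone is represented correctly.
correct = dominant_freq(sample_tone(TONE, 96_000), 96_000)

print(f"25 kHz tone sampled at 44.1 kHz appears at ~{aliased:.0f} Hz (aliased)")
print(f"25 kHz tone sampled at 96 kHz appears at ~{correct:.0f} Hz")
```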

Equalization and Processing

Equalization (EQ) is a fundamental process in audio engineering that involves adjusting the balance of frequency components within an audio signal to achieve desired tonal characteristics during production and playback. This technique allows engineers to shape sound by boosting or attenuating specific frequency ranges, often to correct imbalances or enhance artistic intent. Common types of equalizers include parametric and graphic EQs. A parametric equalizer provides precise control over three main parameters: the center frequency, the gain (boost or cut amount), and the Q-factor (bandwidth, which determines the steepness of the filter). This flexibility makes it ideal for surgical adjustments, such as targeting narrow resonances. In contrast, a graphic equalizer uses fixed bands, typically aligned with ISO standard third-octave centers, allowing users to adjust gain sliders for each band without altering the center frequencies. Shelving filters, often integrated into parametric EQs, provide broad adjustments to high or low frequencies; for example, a high-shelf filter boosts or cuts all frequencies above a specified point with a gradual slope, commonly used to add airiness to the upper range. The primary goals of equalization include compensating for room acoustics and enhancing overall clarity. Room acoustics can introduce peaks and dips in frequency response due to reflections and standing waves, which EQ helps mitigate by attenuating problematic frequencies to achieve a flatter response. For instance, to enhance clarity, engineers often cut low-mid frequencies around 200-400 Hz to reduce "mud," a buildup that obscures definition in mixes. In digital audio workstations (DAWs), equalization frequently relies on Fast Fourier Transform (FFT)-based processing for real-time spectral analysis and manipulation, enabling efficient handling of complex signals without introducing significant latency. Multi-band compression, which applies dynamic range control across frequency bands (such as the low, mid, and high ranges referenced in frequency classification), requires careful phase considerations; linear-phase modes preserve transient timing by minimizing phase shifts from crossover filters, preventing smearing in the recombined signal. A notable standardization in equalization is the RIAA curve for vinyl records, established in 1954 by the Recording Industry Association of America. This curve attenuates low frequencies (below 500 Hz) by up to 20 dB during recording to limit groove width and reduce noise, while boosting high frequencies (above 2 kHz) to improve the signal-to-noise ratio; the inverse curve is applied during playback to restore the original balance.
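As an illustration of a single parametric EQ band, the sketch below uses a common biquad peaking-filter formulation (the widely circulated "Audio EQ Cookbook" coefficients; the 300 Hz cut, -4 dB gain, and Q of 1.5 are arbitrary example settings, not a recommended recipe).

```python
import math
import numpy as np

def peaking_eq_coeffs(fs: float, f0: float, gain_db: float, q: float):
    """Biquad coefficients for a peaking EQ band (common 'Audio EQ Cookbook' form)."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    cos_w0 = math.cos(w0)
    b = np.array([1 + alpha * a, -2 * cos_w0, 1 - alpha * a])
    a_coeffs = np.array([1 + alpha / a, -2 * cos_w0, 1 - alpha / a])
    return b / a_coeffs[0], a_coeffs / a_coeffs[0]

def biquad_filter(x: np.ndarray, b: np.ndarray, a: np.ndarray) -> np.ndarray:
    """Direct-form I: y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] - a1 y[n-1] - a2 y[n-2]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = (b[0] * x[n]
                + (b[1] * x[n - 1] if n >= 1 else 0.0)
                + (b[2] * x[n - 2] if n >= 2 else 0.0)
                - (a[1] * y[n - 1] if n >= 1 else 0.0)
                - (a[2] * y[n - 2] if n >= 2 else 0.0))
    return y

# Example: cut 4 dB at 300 Hz (the "mud" region) with a moderate Q of 1.5.
fs = 48_000
b, a = peaking_eq_coeffs(fs, f0=300.0, gain_db=-4.0, q=1.5)
t = np.arange(fs) / fs
test_tone = np.sin(2 * np.pi * 300.0 * t)
out = biquad_filter(test_tone, b, a)
print(f"300 Hz level change: {20 * np.log10(out[fs // 2:].std() / test_tone.std()):.1f} dB")
```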
