Acoustic location
from Wikipedia
Swedish soldiers operating an acoustic locator in 1940

Acoustic location is a method of determining the position of an object or sound source by using sound waves. Location can take place in gases (such as the atmosphere), liquids (such as water), and in solids (such as in the earth).

Location can be done actively or passively:

  • Active acoustic location involves the creation of sound in order to produce an echo, which is then analyzed to determine the location of the object in question.
  • Passive acoustic location involves the detection of sound or vibration created by the object being detected, which is then analyzed to determine the location of the object in question.

Both of these techniques, when used in water, are known as sonar; passive sonar and active sonar are both widely used.

Acoustic mirrors and dishes, when using microphones, are a means of passive acoustic localization, but when using speakers are a means of active localization. Typically, more than one device is used, and the location is then triangulated between the several devices.

As a military air defense tool, passive acoustic location was used from mid-World War I[1] to the early years of World War II to detect enemy aircraft by picking up the noise of their engines. It was rendered obsolete before and during World War II by the introduction of radar, which was far more effective (but interceptable). Acoustic techniques had the advantage that they could 'see' around corners and over hills, due to sound diffraction.

Civilian uses include locating wildlife[2] and locating the shooting position of a firearm.[3]

Summary


Acoustic source localization[4] is the task of locating a sound source given measurements of the sound field. The sound field can be described using physical quantities like sound pressure and particle velocity. By measuring these properties it is (indirectly) possible to obtain a source direction.

Traditionally, sound pressure is measured using microphones. Microphones have a polar pattern describing their sensitivity as a function of the direction of the incident sound. Many microphones have an omnidirectional polar pattern, meaning their sensitivity is independent of the direction of the incident sound. Microphones with other polar patterns are more sensitive in certain directions, but this alone does not solve the localization problem, since the goal is to determine either an exact direction or a point of origin. Besides microphones that measure sound pressure, it is also possible to use a particle velocity probe to measure the acoustic particle velocity directly. Particle velocity is another quantity related to acoustic waves, but unlike sound pressure it is a vector; by measuring particle velocity one obtains a source direction directly. Other, more complicated methods using multiple sensors are also possible. Many of these methods use the time difference of arrival (TDOA) technique.

Some have termed acoustic source localization an "inverse problem" in that the measured sound field is translated to the position of the sound source.

Methods


Different methods for obtaining either source direction or source location are possible.

Time difference of arrival


The traditional method to obtain the source direction is using the time difference of arrival (TDOA) method. This method can be used with pressure microphones as well as with particle velocity probes.

With a sensor array (for instance a microphone array) consisting of at least two probes it is possible to obtain the source direction using the cross-correlation function between each probe's signal. The cross-correlation function between two microphones is defined as

$R_{x_1,x_2}(\tau) = \sum_{n=-\infty}^{\infty} x_1(n)\, x_2(n+\tau),$

which defines the level of correlation between the outputs of the two sensors $x_1$ and $x_2$. In general, a higher level of correlation means that the argument $\tau$ is relatively close to the actual time difference of arrival. For two sensors next to each other the TDOA is given by

$\tau_{\text{true}} = \frac{d_{\text{spacing}}}{c},$

where $c$ is the speed of sound in the medium surrounding the sensors and the source.
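The cross-correlation approach above can be sketched in a few lines: the lag that maximizes the cross-correlation of the two microphone signals is taken as the TDOA estimate. The sample rate and synthetic signals below are illustrative assumptions.

```python
import numpy as np

def estimate_tdoa(x1, x2, fs):
    """Estimate the TDOA between two microphone signals from the peak
    of their cross-correlation. A negative result means the sound
    reached microphone 1 first."""
    corr = np.correlate(x1, x2, mode="full")
    lag = np.argmax(corr) - (len(x2) - 1)  # lag in samples
    return lag / fs                         # lag in seconds

# Synthetic demo: the same pulse reaches microphone 2 five samples later.
fs = 8000
pulse = np.hanning(32)
x1 = np.zeros(256); x1[100:132] = pulse
x2 = np.zeros(256); x2[105:137] = pulse
print(estimate_tdoa(x1, x2, fs))  # → -0.000625 (i.e. -5 samples at 8000 Hz)
```

In practice the correlation is often computed in the frequency domain with weighting (e.g. generalized cross-correlation) to cope with noise and reverberation.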

A well-known example of TDOA is the interaural time difference, the difference in arrival time of a sound between the two ears. It is given by

$\Delta t = \frac{x \sin \theta}{c}$

where

$\Delta t$ is the time difference in seconds,
$x$ is the distance between the two sensors (ears) in meters,
$\theta$ is the angle between the baseline of the sensors (ears) and the incident sound, in degrees,
$c$ is the speed of sound in meters per second.
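A short worked example of the interaural time difference formula; the head width and angle of incidence below are illustrative assumptions.

```python
import math

# Interaural time difference: dt = x * sin(theta) / c
c = 343.0     # speed of sound in air, m/s
x = 0.215     # distance between the ears, m (typical adult head, assumed)
theta = 45.0  # angle of incidence, degrees

dt = x * math.sin(math.radians(theta)) / c
print(f"{dt * 1000:.3f} ms")  # → 0.443 ms
```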

Triangulation


In trigonometry and geometry, triangulation is the process of determining the location of a point by measuring angles to it from known points at either end of a fixed baseline, rather than measuring distances to the point directly (trilateration). The point can then be fixed as the third point of a triangle with one known side and two known angles.

For acoustic localization this means that if the source direction is measured at two or more locations in space, it is possible to triangulate its location.
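Triangulation from two bearing measurements amounts to intersecting two lines. A minimal sketch (coordinates and angles are illustrative; bearings are measured counter-clockwise from the +x axis):

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing lines observed from known points p1 and p2
    (angles in degrees); returns the estimated source position."""
    # Each bearing defines a line p_i + t_i * (cos theta_i, sin theta_i).
    dx1, dy1 = math.cos(math.radians(theta1)), math.sin(math.radians(theta1))
    dx2, dy2 = math.cos(math.radians(theta2)), math.sin(math.radians(theta2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 via Cramer's rule.
    det = dx1 * (-dy2) - (-dx2) * dy1
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-dy2) - (-dx2) * ry) / det
    return (p1[0] + t1 * dx1, p1[1] + t1 * dy1)

# Source at (5, 5): seen at 45 deg from the origin and 135 deg from (10, 0).
print(triangulate((0, 0), 45.0, (10, 0), 135.0))  # → approximately (5.0, 5.0)
```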

Indirect methods


Steered response power (SRP) methods are a class of indirect acoustic source localization methods. Instead of estimating a set of time-differences of arrival (TDOAs) between pairs of microphones and combining the acquired estimates to find the source location, indirect methods search for a candidate source location over a grid of spatial points. In this context, methods such as the steered-response power with phase transform (SRP-PHAT)[5] are usually interpreted as finding the candidate location that maximizes the output of a delay-and-sum beamformer. The method has been shown to be very robust to noise and reverberation, motivating the development of modified approaches aimed at increasing its performance in real-time acoustic processing applications.[6]
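The grid-search idea behind SRP can be sketched as follows: for each candidate point, undo the propagation delays that point would imply and keep the point whose delay-and-sum output has maximum power. This is a simplified sketch with integer-sample delays and no PHAT weighting, unlike full SRP-PHAT; the geometry and signals are assumptions for illustration.

```python
import numpy as np

def srp_grid_search(signals, mic_pos, grid, fs, c=343.0):
    """Return the grid point maximizing delay-and-sum output power."""
    signals = np.asarray(signals, dtype=float)
    best_point, best_power = None, -np.inf
    for point in grid:
        delays = np.linalg.norm(mic_pos - point, axis=1) / c  # seconds
        delays -= delays.min()                                # relative delays
        shifts = np.round(delays * fs).astype(int)
        aligned = np.array([np.roll(s, -k) for s, k in zip(signals, shifts)])
        power = np.sum(aligned.sum(axis=0) ** 2)
        if power > best_power:
            best_point, best_power = np.asarray(point), power
    return best_point

# Demo: three microphones receive a pulse with the relative delays that the
# point (0.3, 0.6) would produce (precomputed as 8, 20, 0 samples at 16 kHz).
fs, pulse = 16000, np.hanning(16)
mic_pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
signals = np.zeros((3, 256))
for i, shift in enumerate([8, 20, 0]):
    signals[i, 60 + shift : 76 + shift] = pulse
grid = [(x, y) for x in (0.1, 0.3, 0.5, 0.7) for y in (0.2, 0.4, 0.6, 0.8)]
print(srp_grid_search(signals, mic_pos, grid, fs))  # → [0.3 0.6]
```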

Military use

T3 sound locator 1927
Pre-World War II photograph of Japanese Emperor Shōwa (Hirohito) inspecting military acoustic locators mounted on 4-wheel carriages

Military uses have included locating submarines[7] and aircraft.[8] The first use of this type of equipment was claimed by Commander Alfred Rawlinson of the Royal Naval Volunteer Reserve, who in the autumn of 1916 was commanding a mobile anti-aircraft battery on the east coast of England. He needed a means of locating Zeppelins during cloudy conditions and improvised an apparatus from a pair of gramophone horns mounted on a rotating pole. Several of these instruments were able to give a fairly accurate fix on the approaching airships, allowing the guns to be directed at them despite being out of sight.[9] Although no hits were obtained by this method, Rawlinson claimed to have forced a Zeppelin to jettison its bombs on one occasion.[10]

The air-defense instruments usually consisted of large horns or microphones connected to the operators' ears using tubing, much like a very large stethoscope.[11][12]

Sound location equipment in Germany, 1939. It consists of four acoustic horns, a horizontal pair and a vertical pair, connected by rubber tubes to stethoscope-type earphones worn by the two technicians left and right. The stereo earphones enabled one technician to determine the direction and the other the elevation of the aircraft.

At the end of the 1920s, an operational comparison of large acoustic listening devices from several nations, conducted at the Meetgebouw in the Netherlands, revealed their drawbacks. Fundamental research showed that the human ear performed better than had been understood in the 1920s and 1930s, and new listening devices, placed closer to the ears and with airtight connections, were developed. Moreover, because sound travels slowly compared to the increasingly fast aircraft of the day, mechanical prediction equipment and height corrections were needed to point the searchlight operators and anti-aircraft gunners to the predicted position of the detected aircraft. Since searchlights and guns had to be located at a distance from the listening device, electric direction-indicator devices were developed.[13]

Most of the work on anti-aircraft sound ranging was done by the British. They developed an extensive network of sound mirrors that were used from World War I through World War II.[14][15] Sound mirrors normally work by using moveable microphones to find the angle that maximizes the amplitude of sound received, which is also the bearing angle to the target. Two sound mirrors at different positions will generate two different bearings, which allows the use of triangulation to determine a sound source's position.

As World War II neared, radar began to become a credible alternative to the sound location of aircraft. For typical aircraft speeds of that time, sound location only gave a few minutes of warning.[8] The acoustic location stations were left in operation as a backup to radar, as exemplified during the Battle of Britain.[16] Today, the abandoned sites are still in existence and are readily accessible.[14]

After World War II, sound ranging played no further role in anti-aircraft operations.

Active / passive locators


Active locators have some sort of signal generation device, in addition to a listening device. The two devices do not have to be located together.

Sonar


Sonar (sound navigation and ranging) is a technique that uses sound propagation under water (or occasionally in air) to navigate, communicate or to detect other vessels. There are two kinds of sonar – active and passive. A single active sonar can localize in range and bearing as well as measuring radial speed. However, a single passive sonar can only localize in bearing directly, though Target Motion Analysis can be used to localize in range, given time. Multiple passive sonars can be used for range localization by triangulation or correlation, directly.
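Active sonar ranging follows directly from the echo round-trip time: the sound travels to the target and back, so the range is half the round-trip distance. The sound speed and echo time below are illustrative assumptions.

```python
# Active sonar range from echo round-trip time: r = c * t / 2
c_water = 1480.0  # nominal speed of sound in seawater, m/s (assumed)
t_echo = 2.5      # round-trip time of the ping, s (assumed)

r = c_water * t_echo / 2
print(r)  # → 1850.0 metres to the target
```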

Biological echo location


Dolphins, whales and bats use echolocation to detect prey and avoid obstacles.

Time-of-arrival localization


With speakers or ultrasonic transmitters emitting sound from known positions at known times, the position of a target equipped with a microphone or ultrasonic receiver can be estimated from the times of arrival of the sound. Accuracy is usually poor under non-line-of-sight conditions, where obstructions lie between the transmitters and the receivers.[17]

Seismic surveys

A three-dimensional echo-sounding representation of a canyon under the Red Sea by survey vessel HMS Enterprise

Seismic surveys involve the generation of sound waves to measure underground structures. Source waves are generally created by percussion mechanisms located near the ground or water surface, typically dropped weights, vibroseis trucks, or explosives. Data are collected with geophones, then stored and processed by computer. Current technology allows the generation of 3D images of underground rock structures using such equipment.

Other


Because the cost of the associated sensors and electronics is dropping, the use of sound ranging technology is becoming accessible for other uses, such as for locating wildlife.[18]

from Grokipedia
Acoustic location, also known as acoustic source localization, is the process of determining the position of one or more sound sources in an environment by analyzing acoustic signals captured by an array of sensors, such as microphones. The technique exploits the physical properties of wave propagation, including time delays, phase differences, and intensity variations, to estimate the source coordinates in two- or three-dimensional space. Fundamental to the method is the use of algorithms that account for factors like the speed of sound, environmental reflections, and interference to achieve accurate positioning. The core principles of acoustic location rely on measuring differences in signal arrival times or directions across multiple sensors, enabling techniques such as time difference of arrival (TDoA), angle of arrival (AoA), and beamforming. For instance, TDoA uses cross-correlation to compute delays between sensors, while beamforming scans potential source directions by delaying and summing signals to reinforce phases from the true source. Advanced methods incorporate near-field acoustic holography (NAH) for reconstructing sound fields or inverse boundary element methods (IBEM) for complex geometries, often limited to specific frequency ranges, such as up to 6.4 kHz for NAH. Recent developments integrate machine learning, including convolutional neural networks (CNNs) and transformers, which can achieve accuracies up to 98% in controlled settings by learning from acoustic features. Acoustic location finds diverse applications across military, industrial, and consumer domains, addressing challenges from reverberant indoor environments to noisy outdoor scenarios. In defense contexts, it enables detection with over 93% accuracy and unmanned aerial vehicle (UAV) tracking with errors as small as 0.13 meters. Industrially, it identifies noise sources in machinery, such as engines or vehicles, using beamforming for exterior diagnostics or sound intensity mapping for energy flow analysis. Consumer uses include finger tracking via phase-shift methods like FingerIO at millimeter resolution, and applications such as mobile sensing or videoconferencing speaker localization. Despite its versatility, challenges like reverberation and low signal-to-noise ratios persist, particularly indoors, necessitating hybrid approaches for robust performance.

Introduction

Definition and Principles

Acoustic location is the process of estimating the position of a sound source or a reflector using acoustic waves propagating through media such as air, water, or solids. The technique distinguishes between passive localization, which identifies the position of an emitting source by analyzing received signals, and active localization, which determines the position of a reflector (such as an object or target) by emitting acoustic pulses and measuring echoes. The core principles of acoustic location rely on the propagation of sound waves, which are mechanical disturbances traveling through an elastic medium at a finite speed. The speed of sound, denoted $c$, is approximately 343 m/s in dry air at 20°C but varies with factors like temperature, humidity, pressure, and the medium; it is around 1480 m/s in water and up to several thousand m/s in solids such as steel. These variations affect the accuracy of position estimates, as the propagation time and wave characteristics depend on the environmental conditions. During propagation, sound waves are subject to reflection at boundaries between media, refraction due to speed gradients (such as temperature inversions in air), and attenuation from absorption or scattering, which reduces intensity over distance. A fundamental relation for ranging is $d = c \times t$, where $d$ is the distance to the source or reflector and $t$ is the travel time of the wave. Basic wave theory underpins these processes, with the wavelength given by $\lambda = \frac{c}{f}$, where $f$ is the frequency; this relationship determines how waves interact with obstacles and influences resolution in localization.
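The two relations above, $d = c \times t$ and $\lambda = c / f$, can be checked with a short numeric example; the travel time and frequency below are illustrative assumptions.

```python
# Range and wavelength from d = c * t and wavelength = c / f
c = 343.0           # speed of sound in dry air at 20 °C, m/s
t = 0.5             # one-way travel time of a wavefront, s (assumed)
f = 1000.0          # tone frequency, Hz (assumed)

d = c * t           # distance covered: 171.5 m
wavelength = c / f  # wavelength of the tone: 0.343 m
print(d, wavelength)
```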

Historical Development

The origins of acoustic location trace back to World War I, when the threat of aerial attacks prompted the development of passive sound detection systems for early warning. In Britain, acoustic mirrors, large concrete structures designed to reflect and focus engine noise from approaching aircraft, were first constructed around 1916 along coastal defenses to provide advance notice of raids. These early devices, such as one completed in 1917, could detect aircraft up to 15 miles away under ideal conditions, marking a pivotal shift toward organized acoustic early warning in air defense. During the interwar period and into World War II, Britain expanded its network under physicist Dr. William Sansome Tucker, building around 30 structures with parabolic designs up to 200 feet in length, particularly along the south and east coasts. These systems, including the prominent examples at Hythe and Denge, amplified faint propeller sounds to guide anti-aircraft defenses, offering up to 15 minutes of warning before radar's widespread adoption rendered them obsolete by the early 1940s. Concurrently, acoustic location evolved for underwater applications, with hydrophone arrays deployed for submarine detection; British efforts in underwater sound ranging and early ASDIC precursors laid the groundwork for pulsed acoustic signaling that improved directional accuracy against U-boats. Following World War II, acoustic location for air defense declined sharply as radar and other advanced technologies provided superior range and reliability, leading to the decommissioning of most mirrors. Acoustic methods nevertheless persisted in niche domains, such as seismic exploration, where hydrophone arrays derived from wartime research enabled mapping of underwater geological structures for oil and gas prospecting. In the modern era, since the early 2000s, acoustic location has experienced a revival through microphone arrays and digital signal processing (DSP), enabling precise sound source localization in complex environments without relying on large physical reflectors.
These advancements, including beamforming algorithms for multichannel audio, have been integrated with artificial intelligence (AI) for real-time applications, such as urban noise monitoring in smart cities, where distributed microphone networks detect incidents like gunshots or emergencies with sub-second latency. By 2025, machine-learning-enhanced acoustic systems have become integral to autonomous vehicles, with AI-driven microphone setups, exemplified by Fraunhofer's "Hearing Car", analyzing ambient sounds to identify hazards like sirens or pedestrians obscured from visual sensors, improving safety in dynamic traffic scenarios.

Localization Methods

Time-Based Methods

Time-based methods for acoustic location estimate the position of a sound source by measuring the time it takes for acoustic signals to reach multiple sensors, leveraging the known speed of sound to compute distances or distance differences. These techniques form the foundation of multilateration in acoustic systems, where the propagation time of sound waves provides range information without requiring direct line-of-sight or angular measurements. The time-of-arrival (TOA) method relies on precisely measuring the absolute arrival time of the sound signal at each sensor, assuming the clocks of the source and the receivers are synchronized. With the speed of sound $c$ (typically around 343 m/s in air at standard conditions), the distance $d_i$ from the source at position $(x, y, z)$ to the $i$-th sensor at $(x_i, y_i, z_i)$ is given by $d_i = c \cdot t_i$, where $t_i$ is the measured travel time. The source position is then solved via multilateration using the range equations:

$\sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} = c \cdot t_i$
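The range equations can be solved by subtracting the equation of one reference sensor from the others, which cancels the quadratic terms and leaves a linear system in the source coordinates. A minimal sketch, with sensor geometry and a synthetic source chosen for illustration:

```python
import numpy as np

def locate_toa(sensors, times, c=343.0):
    """Least-squares multilateration from times of arrival.
    Subtracting sensor 0's range equation from the others linearizes
    the problem into A p = b, solved for p = (x, y, z)."""
    sensors = np.asarray(sensors, dtype=float)
    d = c * np.asarray(times, dtype=float)  # ranges d_i = c * t_i
    s0, d0 = sensors[0], d[0]
    A = 2.0 * (sensors[1:] - s0)
    b = (d0**2 - d[1:]**2
         + np.sum(sensors[1:]**2, axis=1) - np.sum(s0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Demo: four sensors, source at (2, 1, 0); travel times from true ranges.
sensors = [(0, 0, 0), (5, 0, 0), (0, 5, 0), (0, 0, 5)]
source = np.array([2.0, 1.0, 0.0])
times = [np.linalg.norm(np.array(s) - source) / 343.0 for s in sensors]
print(locate_toa(sensors, times))  # → approximately [2. 1. 0.]
```

With more than four sensors the same least-squares solve averages out measurement noise in the arrival times.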