Reflection seismology

Reflection seismology (or seismic reflection) is a method of exploration geophysics that uses the principles of seismology to estimate the properties of the Earth's subsurface from reflected seismic waves. The method requires a controlled seismic source of energy, such as dynamite or Tovex blast, a specialized air gun or a seismic vibrator. Reflection seismology is similar to sonar and echolocation.

History
Reflections and refractions of seismic waves at geologic interfaces within the Earth were first observed on recordings of earthquake-generated seismic waves. The basic model of the Earth's deep interior is based on observations of earthquake-generated seismic waves transmitted through the Earth's interior (e.g., Mohorovičić, 1910).[1] The use of human-generated seismic waves to map in detail the geology of the upper few kilometers of the Earth's crust followed shortly thereafter and has developed mainly due to commercial enterprise, particularly the petroleum industry.
Seismic reflection exploration grew out of the seismic refraction exploration method, which was used to find oil associated with salt domes.[2] Ludger Mintrop, a German mine surveyor, devised a mechanical seismograph in 1914 that he successfully used to detect salt domes in Germany. He applied for a German patent in 1919 that was issued in 1926. In 1921 he founded the company Seismos, which was hired to conduct seismic exploration in Texas and Mexico, resulting in the first commercial discovery of oil using the refraction seismic method in 1924.[3] The 1924 discovery of the Orchard salt dome in Texas led to a boom in seismic refraction exploration along the Gulf Coast, but by 1930 the method had led to the discovery of most of the shallow Louann Salt domes, and the refraction seismic method faded.[2]
After WWI, those involved in the development of commercial applications of seismic waves included Mintrop, Reginald Fessenden, John Clarence Karcher, E. A. Eckhardt, William P. Haseman, and Burton McCollum. In 1920, Haseman, Karcher, Eckhardt and McCollum founded the Geological Engineering Company. In June 1921, Karcher, Haseman, I. Perrine and W. C. Kite recorded the first exploration reflection seismograph near Oklahoma City, Oklahoma.[4]: 4–10
Early reflection seismology was viewed with skepticism by many in the oil industry. An early advocate of the method commented:
- "As one who personally tried to introduce the method into general consulting practice, the senior writer can definitely recall many times when reflections were not even considered on a par with the divining rod, for at least that device had a background of tradition."[5]
The Geological Engineering Company folded due to a drop in the price of oil. In 1925, oil prices had rebounded, and Karcher helped to form Geophysical Research Corporation (GRC) as part of the oil company Amerada. In 1930, Karcher left GRC and helped to found Geophysical Service Incorporated (GSI). GSI was one of the most successful seismic contracting companies for over 50 years and was the parent of an even more successful company, Texas Instruments. Early GSI employee Henry Salvatori left that company in 1933 to found another major seismic contractor, Western Geophysical. Many other companies using reflection seismology in hydrocarbon exploration, hydrology, engineering studies, and other applications have been formed since the method was first invented. Major service companies in recent years have included CGG, ION Geophysical, Petroleum Geo-Services, Polarcus, TGS and WesternGeco, but since the oil price crash of 2015, providers of seismic services such as Polarcus[6] have struggled financially, whilst companies that were seismic acquisition industry leaders just ten years ago, such as CGG[7] and WesternGeco,[8] have now withdrawn from the seismic acquisition environment entirely and restructured to focus upon their existing seismic data libraries, seismic data management and non-seismic related oilfield services.
Summary of the method
Seismic waves are mechanical perturbations that travel in the Earth at a speed governed by the acoustic impedance of the medium in which they are travelling. The acoustic (or seismic) impedance, Z, is defined by the equation:
- Z = ρv,
where v is the seismic wave velocity and ρ (Greek rho) is the density of the rock.
When a seismic wave travelling through the Earth encounters an interface between two materials with different acoustic impedances, some of the wave energy will reflect off the interface and some will refract through the interface. At its most basic, the seismic reflection technique consists of generating seismic waves and measuring the time taken for the waves to travel from the source, reflect off an interface and be detected by an array of receivers (as geophones or hydrophones) at the surface.[9] Knowing the travel times from the source to various receivers, and the velocity of the seismic waves, a geophysicist then attempts to reconstruct the pathways of the waves in order to build up an image of the subsurface.
In common with other geophysical methods, reflection seismology may be seen as a type of inverse problem. That is, given a set of data collected by experimentation and the physical laws that apply to the experiment, the experimenter wishes to develop an abstract model of the physical system being studied. In the case of reflection seismology, the experimental data are recorded seismograms, and the desired result is a model of the structure and physical properties of the Earth's crust. In common with other types of inverse problems, the results obtained from reflection seismology are usually not unique (more than one model adequately fits the data) and may be sensitive to relatively small errors in data collection, processing, or analysis.[10] For these reasons, great care must be taken when interpreting the results of a reflection seismic survey.
The reflection experiment
The general principle of seismic reflection is to send elastic waves (using an energy source such as dynamite explosion or Vibroseis) into the Earth, where each layer within the Earth reflects a portion of the wave's energy back and allows the rest to refract through. These reflected energy waves are recorded over a predetermined time period (called the record length) by receivers that detect the motion of the ground in which they are placed. On land, the typical receiver used is a small, portable instrument known as a geophone, which converts ground motion into an analogue electrical signal. In water, hydrophones are used, which convert pressure changes into electrical signals. Each receiver's response to a single shot is known as a “trace” and is recorded onto a data storage device, then the shot location is moved along and the process is repeated. Typically, the recorded signals are subjected to significant amounts of signal processing.[4]: 2–3, 21
Reflection and transmission at normal incidence
When a seismic P-wave encounters a boundary between two materials with different acoustic impedances, some of the energy in the wave will be reflected at the boundary, while some of the energy will be transmitted through the boundary. The amplitude of the reflected wave is predicted by multiplying the amplitude of the incident wave by the seismic reflection coefficient R, determined by the impedance contrast between the two materials.[4]
For a wave that hits a boundary at normal incidence (head-on), the expression for the reflection coefficient is simply
- R = (Z₂ − Z₁) / (Z₂ + Z₁),
where Z₁ and Z₂ are the impedances of the first and second medium, respectively.[4]
Similarly, the amplitude of the incident wave is multiplied by the transmission coefficient T to predict the amplitude of the wave transmitted through the boundary. The formula for the normal-incidence transmission coefficient is
- T = 1 + R = 2Z₂ / (Z₂ + Z₁).[4]
As the sum of the energies of the reflected and transmitted wave has to be equal to the energy of the incident wave, it is easy to show that
- R² + (Z₁/Z₂)T² = 1.
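The normal-incidence relations above can be checked numerically. A minimal sketch follows; the shale and sandstone values are illustrative, not taken from the article:

```python
# Normal-incidence reflection/transmission at an interface (illustrative values).
# Medium 1 (upper): shale-like; medium 2 (lower): sandstone-like.
# Densities in kg/m^3, velocities in m/s.
rho1, v1 = 2400.0, 2800.0
rho2, v2 = 2600.0, 3500.0

Z1 = rho1 * v1          # acoustic impedance of the upper medium
Z2 = rho2 * v2          # acoustic impedance of the lower medium

R = (Z2 - Z1) / (Z2 + Z1)   # amplitude reflection coefficient
T = 2 * Z2 / (Z2 + Z1)      # amplitude transmission coefficient (equals 1 + R)

# Energy conservation: reflected plus transmitted energy equals incident energy.
energy_sum = R**2 + (Z1 / Z2) * T**2
```

For any positive pair of impedances, `energy_sum` comes out to exactly 1, which is the conservation statement in the text.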
By observing changes in the strength of reflections, seismologists can infer changes in the seismic impedances. In turn, they use this information to infer changes in the properties of the rocks at the interface, such as density and wave velocity,[4] by means of seismic inversion.
Reflection and transmission at non-normal incidence
The situation becomes much more complicated in the case of non-normal incidence, due to mode conversion between P-waves and S-waves, and is described by the Zoeppritz equations. In 1919, Karl Zoeppritz derived 4 equations that determine the amplitudes of reflected and refracted waves at a planar interface for an incident P-wave as a function of the angle of incidence and six independent elastic parameters.[9] These equations have 4 unknowns and can be solved but they do not give an intuitive understanding for how the reflection amplitudes vary with the rock properties involved.[11]
The reflection and transmission coefficients, which govern the amplitude of each reflection, vary with angle of incidence and can be used to obtain information about (among many other things) the fluid content of the rock. Practical use of non-normal incidence phenomena, known as AVO (see amplitude versus offset), has been facilitated by theoretical work to derive workable approximations to the Zoeppritz equations and by advances in computer processing capacity. AVO studies attempt with some success to predict the fluid content (oil, gas, or water) of potential reservoirs, to lower the risk of drilling unproductive wells and to identify new petroleum reservoirs. The 3-term simplification of the Zoeppritz equations that is most commonly used was developed in 1985 and is known as the "Shuey equation". A further 2-term simplification, known as the "Shuey approximation", is valid for angles of incidence less than 30 degrees (usually the case in seismic surveys) and is given below:[12]
- R(θ) = R(0) + G sin²θ,
where R(0) = the reflection coefficient at zero offset (normal incidence), G = the AVO gradient, describing reflection behaviour at intermediate offsets, and θ = the angle of incidence. This equation reduces to that of normal incidence at θ = 0.
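The two-term Shuey approximation is simple enough to evaluate directly. A minimal sketch, with an illustrative intercept and gradient (not values from the article):

```python
import math

def shuey_reflectivity(r0, g, theta_deg):
    """Two-term Shuey approximation: R(theta) = R0 + G * sin^2(theta).
    Valid for incidence angles below roughly 30 degrees."""
    theta = math.radians(theta_deg)
    return r0 + g * math.sin(theta) ** 2

# Illustrative values: a weak positive interface with a negative AVO gradient,
# so the reflection amplitude decreases with offset.
r0, g = 0.1, -0.3
r_normal = shuey_reflectivity(r0, g, 0.0)   # reduces to R0 at normal incidence
r_far = shuey_reflectivity(r0, g, 30.0)     # dimmer at far offsets
```

The sign and size of the gradient G relative to R(0) is what AVO analysis interprets in terms of fluid content.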
Interpretation of reflections
The time it takes for a reflection from a particular boundary to arrive at the geophone is called the travel time. If the seismic wave velocity in the rock is known, then the travel time may be used to estimate the depth to the reflector. For a simple vertically travelling wave, the travel time t from the surface to the reflector and back is called the two-way time (TWT) and is given by the formula
- t = 2d / V,
where d is the depth of the reflector and V is the wave velocity in the rock.[4]: 81
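The TWT formula and its inversion to depth can be sketched in a few lines (the reflector depth and velocity below are illustrative):

```python
def twt_from_depth(d, v):
    """Two-way time (s) for a vertically travelling wave: t = 2d / V."""
    return 2.0 * d / v

def depth_from_twt(t, v):
    """Invert the TWT formula to estimate reflector depth: d = V * t / 2."""
    return v * t / 2.0

# Illustrative: a reflector at 2000 m in rock with a velocity of 2500 m/s.
t = twt_from_depth(2000.0, 2500.0)   # two-way time in seconds
d = depth_from_twt(t, 2500.0)        # recovers the input depth
```

In practice the velocity itself is uncertain and varies with depth, which is why depth conversion is a processing step in its own right.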
A series of apparently related reflections on several seismograms is often referred to as a reflection event. By correlating reflection events, a seismologist can create an estimated cross-section of the geologic structure that generated the reflections.[4]: 196–199
Sources of noise
In addition to reflections off interfaces within the subsurface, a number of other seismic responses are detected by receivers and are either unwanted or unneeded:
Air wave
The airwave travels directly from the source to the receiver and is an example of coherent noise. It is easily recognizable because it travels at the speed of sound in air, approximately 330 m/s.
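What makes the airwave easy to recognise is its linear moveout: arrival time grows with offset at a constant ~330 m/s. A minimal sketch with a hypothetical receiver line (the 25 m geophone spacing is an assumption for illustration):

```python
AIR_SPEED = 330.0  # m/s, approximate speed of sound in air

def airwave_arrival(offset_m):
    """Arrival time (s) of the direct air wave at a given source-receiver
    offset. The linear moveout at ~330 m/s identifies it on a shot record."""
    return offset_m / AIR_SPEED

# Hypothetical geometry: geophones every 25 m out to 500 m.
offsets = [25.0 * i for i in range(1, 21)]
arrivals = [airwave_arrival(x) for x in offsets]
```

On a shot gather these arrivals plot as a straight line whose slope is 1/330 s per metre, much steeper than reflections from depth.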
Ground roll / Rayleigh wave / Scholte wave / Surface wave
A Rayleigh wave typically propagates along a free surface of a solid, but the elastic constants and density of air are very low compared to those of rocks so the surface of the Earth is approximately a free surface. Low velocity, low frequency and high amplitude Rayleigh waves are frequently present on a seismic record and can obscure signal, degrading overall data quality. They are known within the industry as ‘Ground Roll’ and are an example of coherent noise that can be attenuated with a carefully designed seismic survey.[13] The Scholte wave is similar to ground roll but occurs at the sea-floor (fluid/solid interface) and it can possibly obscure and mask deep reflections in marine seismic records.[14] The velocity of these waves varies with wavelength, so they are said to be dispersive and the shape of the wavetrain varies with distance.[15]
Refraction / Head wave / Conical wave
A head wave refracts at an interface, travelling along it within the lower medium, and produces oscillatory motion parallel to the interface. This motion causes a disturbance in the upper medium that is detected on the surface.[9] The same phenomenon is utilised in seismic refraction.
Multiple reflection
An event on the seismic record that has incurred more than one reflection is called a multiple. Multiples can be either short-path (peg-leg) or long-path, depending upon whether they interfere with primary reflections or not.[16][17]
Multiples from the bottom of a body of water and the air-water interface are common in marine seismic data, and are suppressed by seismic processing.
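At vertical incidence, a water-bottom multiple is simply the wave bouncing one or more extra times in the water layer, so its arrival time is an integer multiple of the water-bottom two-way time. A simplified sketch of that timing prediction (values illustrative):

```python
def multiple_twt(water_bottom_twt, order):
    """Predicted arrival time (s) of an nth-order water-bottom multiple in a
    simple vertical-incidence model: each extra bounce in the water layer
    adds one more water-bottom two-way time to the primary arrival."""
    return water_bottom_twt * (order + 1)

# Illustrative: water bottom at 0.5 s TWT.
wb = 0.5
first_multiple = multiple_twt(wb, 1)    # one extra bounce
second_multiple = multiple_twt(wb, 2)   # two extra bounces
```

Processing algorithms exploit this predictability (periodicity in time) to identify and suppress water-layer multiples.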
Cultural noise
Cultural noise includes noise from weather effects, planes, helicopters, electrical pylons, and ships (in the case of marine surveys), all of which can be detected by the receivers.
Electromagnetic noise
Electromagnetic noise is particularly important in urban environments (e.g. from power lines) and is difficult to remove. Specialised sensors, such as microelectromechanical systems (MEMS), are used to decrease this interference when operating in such environments.[18]
2D versus 3D
The original seismic reflection method involved acquisition along a two-dimensional vertical profile through the crust, now referred to as 2D data. This approach worked well in areas of relatively simple geological structure where dips are low. However, in areas of more complex structure, the 2D technique failed to properly image the subsurface due to out-of-plane reflections and other artefacts. Spatial aliasing is also an issue with 2D data due to the lack of resolution between the lines. Beginning with initial experiments in the 1960s, the seismic technique explored the possibility of full three-dimensional acquisition and processing. In the late 1970s the first large 3D datasets were acquired, and by the 1980s and 1990s this method became widely used.[19][20]
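Spatial aliasing has a quantitative side: a dipping event is only sampled without aliasing if the trace spacing is fine enough for its apparent wavelength. One common form of the criterion is dx ≤ v / (2 · f_max · sin θ); a minimal sketch, with the caveat that conventions (and factors of two) vary between texts:

```python
import math

def max_unaliased_spacing(v, f_max, dip_deg):
    """One common form of the spatial-sampling (anti-aliasing) criterion:
    trace spacing must satisfy dx <= v / (2 * f_max * sin(dip)).
    Illustrative sketch; exact conventions differ between references."""
    dip = math.radians(dip_deg)
    return v / (2.0 * f_max * math.sin(dip))

# Illustrative: 2500 m/s medium, 60 Hz maximum frequency, 30-degree dip.
dx = max_unaliased_spacing(2500.0, 60.0, 30.0)
```

Steeper dips and higher frequencies demand denser sampling, which is part of why complex-structure areas pushed the industry toward dense 3D geometries.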
Applications
Reflection seismology is used extensively in a number of fields and its applications can be categorised into three groups,[21] each defined by their depth of investigation:
- Near-surface applications – an application that aims to understand geology at depths of up to approximately 1 km, typically used for engineering and environmental surveys, as well as coal[22] and mineral exploration.[23] A more recently developed application for seismic reflection is for geothermal energy surveys,[24] although the depth of investigation can be up to 2 km deep in this case.[25]
- Hydrocarbon exploration – used by the hydrocarbon industry to provide a high resolution map of acoustic impedance contrasts at depths of up to 10 km within the subsurface. This can be combined with seismic attribute analysis and other exploration geophysics tools and used to help geologists build a geological model of the area of interest.
- Mineral exploration – The traditional approach to near-surface (<300 m) mineral exploration has been to employ geological mapping, geochemical analysis and the use of aerial and ground-based potential field methods, in particular for greenfield exploration.[26] In recent decades, reflection seismology has become a valid method for exploration in hard-rock environments.
- Crustal studies – investigation into the structure and origin of the Earth's crust, through to the Moho discontinuity and beyond, at depths of up to 100 km.
A method similar to reflection seismology which uses electromagnetic instead of elastic waves, and has a smaller depth of penetration, is known as Ground-penetrating radar or GPR.
Hydrocarbon exploration
Reflection seismology, more commonly referred to as "seismic reflection" or abbreviated to "seismic" within the hydrocarbon industry, is used by petroleum geologists and geophysicists to map and interpret potential petroleum reservoirs. The size and scale of seismic surveys has increased alongside the significant increases in computer power since the late 20th century. This led the seismic industry from laboriously – and therefore rarely – acquiring small 3D surveys in the 1980s to routinely acquiring large-scale high resolution 3D surveys. The goals and basic principles have remained the same, but the methods have slightly changed over the years.
The primary environments for seismic hydrocarbon exploration are land, the transition zone and marine:
Land – The land environment covers almost every type of terrain that exists on Earth, each bringing its own logistical problems. Examples of this environment are jungle, desert, arctic tundra, forest, urban settings, mountain regions and savannah.
Transition Zone (TZ) – The transition zone is considered to be the area where the land meets the sea, presenting unique challenges because the water is too shallow for large seismic vessels but too deep for the use of traditional methods of acquisition on land. Examples of this environment are river deltas, swamps and marshes,[27] coral reefs, beach tidal areas and the surf zone. Transition zone seismic crews will often work on land, in the transition zone and in the shallow water marine environment on a single project in order to obtain a complete map of the subsurface.

Marine – The marine zone is either in shallow water areas (water depths of less than 30 to 40 metres would normally be considered shallow water areas for 3D marine seismic operations) or in the deep water areas normally associated with the seas and oceans (such as the Gulf of Mexico).
Seismic data acquisition
Seismic data acquisition is the first of the three distinct stages of seismic exploration, the other two being seismic data processing and seismic interpretation.[28]
Seismic surveys are typically designed by national oil companies and international oil companies, who hire service companies such as CGG, Petroleum Geo-Services and WesternGeco to acquire them. Another company is then hired to process the data, although this can often be the same company that acquired the survey. Finally, the finished seismic volume is delivered to the oil company so that it can be geologically interpreted.
Land survey acquisition
Land seismic surveys tend to be large entities, requiring hundreds of tons of equipment and employing anywhere from a few hundred to a few thousand people, deployed over vast areas for many months.[29] There are a number of options available for a controlled seismic source in a land survey and particularly common choices are Vibroseis and dynamite. Vibroseis is a non-impulsive source that is cheap and efficient but requires flat ground to operate on, making its use more difficult in undeveloped areas. The method comprises one or more heavy, all-terrain vehicles lowering a steel plate onto the ground, which is then vibrated with a specific frequency distribution and amplitude.[30] It produces a low energy density, allowing it to be used in cities and other built-up areas where dynamite would cause significant damage, though the large weight attached to a Vibroseis truck can cause its own environmental damage.[31] Dynamite is an impulsive source that is regarded as the ideal geophysical source due to it producing an almost perfect impulse function but it has obvious environmental drawbacks. For a long time, it was the only seismic source available until weight dropping was introduced around 1954,[32] allowing geophysicists to make a trade-off between image quality and environmental damage. Compared to Vibroseis, dynamite is also operationally inefficient because each source point needs to be drilled and the dynamite placed in the hole.
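The Vibroseis pilot signal described above is typically a swept-frequency sine wave whose instantaneous frequency rises through the band of interest. A minimal sketch of a linear sweep; the 10–80 Hz band, 12 s length and 2 ms sample interval are illustrative choices, not values from the article:

```python
import math

def linear_sweep(f_start, f_end, sweep_length, dt):
    """Generate a linear Vibroseis-style sweep: the instantaneous frequency
    rises linearly from f_start to f_end over sweep_length seconds.
    Illustrative sketch of the pilot signal, not any vendor's implementation."""
    n = round(sweep_length / dt)
    rate = (f_end - f_start) / sweep_length  # Hz per second
    samples = []
    for i in range(n):
        t = i * dt
        # Phase is the time integral of the instantaneous frequency.
        phase = 2.0 * math.pi * (f_start * t + 0.5 * rate * t * t)
        samples.append(math.sin(phase))
    return samples

# Typical-looking parameters: 10-80 Hz over 12 s, sampled at 2 ms.
sweep = linear_sweep(10.0, 80.0, 12.0, 0.002)
```

The recorded traces are later cross-correlated with this pilot sweep to compress the long source signal into a short effective pulse, which is what lets a low-energy-density vibrator substitute for an impulsive source like dynamite.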
Unlike in marine seismic surveys, land geometries are not limited to narrow paths of acquisition, meaning that a wide range of offsets and azimuths is usually acquired and the largest challenge is increasing the rate of acquisition. The rate of production is obviously controlled by how fast the source (Vibroseis in this case) can be fired and then move on to the next source location. Attempts have been made to use multiple seismic sources at the same time in order to increase survey efficiency and a successful example of this technique is Independent Simultaneous Sweeping (ISS).[33]
A land seismic survey requires substantial logistical support; in addition to the day-to-day seismic operation itself, there must also be support for the main camp for resupply activities, medical support, camp and equipment maintenance tasks, security, personnel crew changes and waste management. Some operations may also operate smaller 'fly' camps that are set up remotely where the distance is too far to travel back to the main camp on a daily basis and these will also need logistical support on a frequent basis.
Marine survey acquisition (Towed Streamer)
Towed streamer marine seismic surveys are conducted using specialist seismic vessels that tow one or more cables known as streamers just below the surface (typically between 5 and 15 metres deep, depending upon the project specification) that contain groups of hydrophones, or receiver groups, along their length (see diagram). Modern streamer vessels normally tow multiple streamers astern, which can be secured to underwater wings, commonly known as doors or vanes, that allow a number of streamers to be towed out wide to the port and starboard sides of a vessel. Current streamer towing technology, such as that seen on the PGS-operated Ramform series of vessels built between 2013 and 2017,[34] has pushed the number of streamers up to 24 in total on these vessels. For vessels of this capacity, it is not uncommon for the streamer spread across the stern from 'door to door' to be in excess of one nautical mile. The precise configuration of the streamers on any project, in terms of streamer length, streamer separation, hydrophone group length and the offset or distance between the source centre and the receivers, will depend upon the geological area of interest below the sea floor that the client is trying to get data from.
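The 'door to door' figure follows from simple arithmetic on the spread geometry. A sketch, assuming an illustrative 100 m streamer separation (the article does not specify one):

```python
def spread_width(n_streamers, separation_m):
    """Door-to-door width of a towed streamer spread: (n - 1) gaps of
    `separation_m` between adjacent streamers."""
    return (n_streamers - 1) * separation_m

NAUTICAL_MILE_M = 1852.0

# Illustrative: 24 streamers at an assumed 100 m separation.
width = spread_width(24, 100.0)
exceeds_nm = width > NAUTICAL_MILE_M
```

With these assumed numbers the spread is 2,300 m wide, comfortably over one nautical mile, consistent with the figure quoted in the text.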
Streamer vessels also tow high energy sources, principally high-pressure air gun arrays operating at 2,000 psi, that fire together to create a tuned energy pulse into the seabed, from which the reflected energy waves are recorded on the streamer receiver groups. Gun arrays are tuned; that is, the frequency response of the resulting air bubble from the array when fired can be changed depending upon the combination and number of guns in a specific array and their individual volumes. Guns can be located individually on an array or combined to form clusters. Typically, source arrays have a volume of 2,000 to 7,000 cubic inches, but this will depend upon the specific geology of the survey area.
Marine seismic surveys generate a significant quantity of data[35] due to the size of modern towed streamer vessels and their towing capabilities.
A seismic vessel with 2 sources and towing a single streamer is known as a Narrow-Azimuth Towed Streamer (or NAZ or NATS). By the early 2000s, it was accepted that this type of acquisition was useful for initial exploration but inadequate for development and production,[36] in which wells had to be accurately positioned. This led to the development of the Multi-Azimuth Towed Streamer (MAZ) which tried to break the limitations of the linear acquisition pattern of a NATS survey by acquiring a combination of NATS surveys at different azimuths (see diagram).[37] This successfully delivered increased illumination of the subsurface and a better signal to noise ratio.
The seismic properties of salt pose an additional problem for marine seismic surveys: it attenuates seismic waves and its structure contains overhangs that are difficult to image. This led to another variation on the NATS survey type, the wide-azimuth towed streamer (or WAZ or WATS), which was first tested on the Mad Dog field in 2004.[38] This type of survey involved one vessel solely towing a set of 8 streamers and 2 separate vessels towing seismic sources that were located at the start and end of the last receiver line (see diagram). This configuration was "tiled" 4 times, with the receiver vessel moving further away from the source vessels each time, eventually creating the effect of a survey with 4 times the number of streamers. The end result was a seismic dataset with a larger range of wider azimuths, delivering a breakthrough in seismic imaging.[36] These are now the three common types of marine towed streamer seismic surveys.
Marine survey acquisition (Ocean Bottom Seismic (OBS))
Marine survey acquisition is not just limited to seismic vessels; it is also possible to lay cables of geophones and hydrophones on the sea bed in a similar way to how cables are used in a land seismic survey, and use a separate source vessel. This method was originally developed out of operational necessity in order to enable seismic surveys to be conducted in areas with obstructions, such as production platforms, without having to compromise the resultant image quality.[39] Ocean bottom cables (OBC) are also extensively used in other areas where a seismic vessel cannot be used, for example in shallow marine (water depth <300 m) and transition zone environments, and can be deployed by remotely operated underwater vehicles (ROVs) in deep water when repeatability is valued (see 4D, below). Conventional OBC surveys use dual-component receivers, combining a pressure sensor (hydrophone) and a vertical particle velocity sensor (vertical geophone), but more recent developments have expanded the method to use four-component sensors, i.e. a hydrophone and three orthogonal geophones. Four-component sensors have the advantage of being able to also record shear waves,[40] which do not travel through water but can still contain valuable information.
In addition to the operational advantages, OBC also has geophysical advantages over a conventional NATS survey that arise from the increased fold and wider range of azimuths associated with the survey geometry.[41] However, much like a land survey, the wider azimuths and increased fold come at a cost, and the feasibility of large-scale OBC surveys is severely limited.
In 2005, ocean bottom nodes (OBN) – an extension of the OBC method that uses battery-powered cableless receivers placed in deep water – were first trialled over the Atlantis Oil Field in a partnership between BP and Fairfield Geotechnologies.[42] The placement of these nodes can be more flexible than the cables in OBC, and they are easier to store and deploy due to their smaller size and lower weight.
Marine survey acquisition (Ocean Bottom Nodes (OBN))
Node technology developed directly from ocean bottom cable technology, i.e. the ability to place a hydrophone in direct contact with the seafloor, eliminating the seafloor-to-hydrophone seawater gap that exists with towed streamer technology. The ocean bottom hydrophone concept itself is not new and has been used for many years in scientific research, but its rapid adoption as a data acquisition methodology in oil and gas exploration is relatively recent.
Nodes are self-contained four-component units which include a hydrophone and three motion sensors on orthogonal axes (one vertical and two horizontal). Their physical dimensions vary depending on the design requirement and the manufacturer, but in general nodes tend to weigh in excess of 10 kilograms per unit to counteract buoyancy and to lessen the chance of movement on the seabed due to currents or tides.
Nodes are usable in areas where streamer vessels may not be able to safely enter, so for the safe navigation of node vessels, and prior to the deployment of nodes, a bathymetric seabed survey of the survey area is normally conducted using side-scan technology to map the seabed topography in detail. This will identify any possible hazards that could impact the safe navigation of node and source vessels, and also identify any issues for node deployment, including subsea obstructions, wrecks, oilfield infrastructure, or sudden changes in water depth from underwater cliffs, canyons or other locations where nodes may not be stable or may not make a good connection to the seabed.
Unlike ocean bottom cables, which must be physically attached to a recorder vessel to record data in real-time, a node vessel does not connect to a node line. Until the nodes are recovered and the data 'reaped' from them (reaping is the industry term for removing data from a recovered node, when it is placed within a computerised system that copies the hard drive data from the node), it is assumed that the data have been recorded: there is no real-time quality control of a node's operating status, as nodes are self-contained and not connected to any system once deployed. The technology is now well-established and very reliable, and once a node and its battery system have passed all of their set-up criteria there is a high degree of confidence that a node will work as specified. Technical downtime during node projects, i.e. individual node failures during deployment, is usually in single figures as a percentage of the total nodes deployed.
Nodes are powered by either rechargeable internal lithium-ion battery packs or replaceable non-rechargeable batteries; the design and specification of the node determines which battery technology is used. The battery life of a node unit is a critical consideration in the design of a node project, because once the battery runs out, all data recorded on the solid-state hard drive since the node was deployed on the sea floor are lost. Therefore, a node with a 30-day battery life must be deployed, record data, be recovered and reaped within that 30-day period. The number of nodes to be deployed is closely related to battery life too: if too many nodes are deployed and the OBN crew's resources are not sufficient to recover them in time, or external factors such as adverse weather limit recovery operations, batteries can expire and data can be lost. Disposable or non-rechargeable batteries can also create a significant waste management issue, as batteries must be transported to and from an operation and the drained batteries disposed of ashore by a licensed contractor.
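The battery-life constraint described above reduces to a simple feasibility check at the planning stage. A simplified sketch, assuming recovery starts after the recording period and proceeds at a fixed daily rate (all figures illustrative):

```python
def node_recovery_feasible(n_nodes, battery_days, recovery_rate_per_day,
                           record_days):
    """Check whether all deployed nodes can be recovered (and their data
    reaped) before the batteries expire. Simplified planning sketch:
    recording for `record_days`, then recovery at a constant rate."""
    recovery_days = n_nodes / recovery_rate_per_day
    return record_days + recovery_days <= battery_days

# Illustrative: 30-day batteries, 200 nodes recovered per day,
# 14 days of recording.
ok = node_recovery_feasible(3000, 30, 200, 14)        # 14 + 15 = 29 days
too_many = node_recovery_feasible(5000, 30, 200, 14)  # 14 + 25 = 39 days
```

Real planning would also budget contingency days for weather downtime, which this sketch omits.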
Another important consideration is synchronising the timing of individual node clocks, with an internal clock-drift correction. Any error in synchronising nodes properly before they are deployed can create unusable data. Because node acquisition is often multi-directional and from a number of sources simultaneously across a 24-hour time frame, it is vital for accurate data processing that all of the nodes are working to the same clock time.
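A common simple form of clock-drift correction assumes the node clock drifts linearly between deployment and recovery, with the total drift measured by comparing the node clock against a reference (e.g. GPS time) at recovery. A minimal sketch of that linear correction; the numbers are illustrative:

```python
def correct_clock_drift(timestamps, drift_seconds, deploy_time, recover_time):
    """Apply a linear clock-drift correction: the node clock is assumed to
    accumulate `drift_seconds` of error linearly between deployment and
    recovery (drift measured against reference time at recovery).
    Simplified sketch of a standard correction."""
    span = recover_time - deploy_time
    return [t - drift_seconds * (t - deploy_time) / span for t in timestamps]

# Illustrative: a node deployed at t=0 s, recovered at t=1000 s,
# whose clock gained 0.2 s over the deployment.
corrected = correct_clock_drift([0.0, 500.0, 1000.0], 0.2, 0.0, 1000.0)
```

Samples early in the deployment are barely adjusted while late samples receive the full measured drift, which is the behaviour a linear drift model implies.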
The node type and specification will determine the node handling system design and the deployment and recovery modes. At present there are two mainstream approaches: node on a rope and ROV operations.
Node on a rope
This method requires the node to be attached to a steel wire or a high-specification rope. Each node is evenly spaced along the rope, which has special fittings to securely connect the node to the rope, for example every 50 metres, depending upon the prospect design. This rope is then laid by a specialist node vessel using a node handling system, usually with dynamic positioning, along a pre-defined node line. The nodes are 'landed' onto pre-plotted positions with an agreed and acceptable error radius; for example, a node must be placed within a 12.5 metre radius of the navigation pre-plot position. They are often accompanied by pingers, small transponders that can be detected by an underwater acoustic positioning transducer, which allows a pinging vessel or the node vessel itself to establish a definite sea floor position for each node on deployment. Depending on the contract, pingers can be located on every node or every third node, for example. 'Pinging' and pinging equipment are industry shorthand for the use of USBL (ultra-short baseline) acoustic positioning systems, which are interfaced with vessel-based differential GPS (Differential Global Positioning System) navigation equipment.
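The placement tolerance above is just a radial distance check between the pre-plot and the pinged landing position. A minimal sketch using local easting/northing coordinates (the example positions are hypothetical; the 12.5 m radius mirrors the example tolerance in the text):

```python
import math

def within_radius(preplot, landed, radius_m=12.5):
    """Check whether a node landed within the allowed radius of its
    pre-plot position. Positions are (easting, northing) in metres."""
    dx = landed[0] - preplot[0]
    dy = landed[1] - preplot[1]
    return math.hypot(dx, dy) <= radius_m

# Hypothetical positions: one node ~9.4 m off pre-plot, one 15 m off.
ok = within_radius((1000.0, 2000.0), (1005.0, 2008.0))
out_of_spec = within_radius((1000.0, 2000.0), (1015.0, 2000.0))
```

A navigation quality control system would run this check on every node at both deployment and recovery, flagging out-of-spec nodes for re-laying.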
Node lines are usually recovered by dragging an anchor or grapple hook to bring the node line back on board the vessel. Handling systems on node vessels are used to store, deploy and recover nodes, and their specific design depends upon the node design. Small nodes can include a manual handling element, whereas larger nodes are handled automatically by robotic systems for moving, storing, recharging and reaping nodes. Node vessels also use systems such as spoolers to manage rope lines and rope bins to store the many kilometres of rope often carried on board node-on-a-rope vessels.
Node on a rope is normally used where there is shallow water within the prospect, for example less than 100 metres, or a transition-zone area close to a beach. For deeper-water operations, a dynamic positioning vessel is used to ensure accurate deployment of nodes, but these larger vessels are limited in how far inshore they can safely navigate; the usual cutoff is between 15 and 20 metres water depth, depending on the vessel and its in-water equipment. Specialist shallow-water boats can then be used to deploy and recover nodes in water depths as shallow as 1 to 3 metres. These shallow-water nodes can tie in with geophones on the shore to provide a consistent seismic-line transition from water to land.
This approach has some vulnerabilities to damage or loss on a project, all of which must be risk-assessed and mitigated. Because nodes connected together on a rope sit on the sea floor unattended, they can be moved by strong currents, the ropes can snag on seabed obstructions, and the lines can be dragged by third-party vessel anchors or caught by trawling fishing vessels. These potential hazards should normally be identified and assessed during the project planning phase, especially in oilfield locations where well heads, pipelines and other subsea structures exist and where any contact with them must be avoided, normally by adopting exclusion zones. Since node lines can move after deployment, node position on recovery is critical, and positioning during both deployment and recovery is therefore a standard navigation quality-control check. In some cases, node lines may need to be recovered and re-laid if the nodes have moved outside the contract specifications.
ROV deployment
This method uses ROV (remotely operated underwater vehicle) technology to handle and place nodes at their pre-plotted positions. A basket full of nodes is lowered into the water; an ROV connects with the compatible node basket and removes individual nodes from a tray in a pre-defined order, placing each node on its allocated pre-plot position. On recovery, the process works in reverse: each node is picked up by the ROV and placed into the node basket tray; when the basket is full it is lifted back to the surface and recovered onto the node vessel, where the nodes are removed and reaped.
ROV operations are normally used for deep-water node projects, often in water depths of up to 3,000 metres in the open ocean. However, there are some issues with ROV operations that need to be considered. ROV operations, especially in deep water, tend to be complex, and their periodic maintenance demands may impact production. Umbilicals and other high-technology spares for ROVs can be extremely expensive, and repairs that require onshore or third-party specialist support will stop a node project. Because of the extreme water depths, the node deployment and recovery rate is much lower owing to the basket's transit time between surface and seafloor, and there will almost certainly be weather or sea-condition limitations for ROV operations in open-ocean areas. The logistics of supporting operations far from shore can also complicate regular resupply, bunkering and crew-change activities.
Time lapse acquisition (4D)
Time-lapse or 4D surveys are 3D seismic surveys repeated after a period of time; the term 4D refers to the fourth dimension, which in this case is time. Time-lapse surveys are acquired in order to observe reservoir changes during production and to identify areas where barriers to flow may not be detectable in conventional seismic. A time-lapse survey consists of a baseline survey and a monitor or repeat survey, acquired after the field has been in production. Most of these surveys have been repeated NATS surveys, as they are cheaper to acquire and most fields historically already had a NATS baseline survey. Some of these surveys are collected using ocean-bottom cables because the cables can be accurately placed in their previous location after being removed. Better repetition of the exact source and receiver locations leads to improved repeatability and better signal-to-noise ratios. A number of 4D surveys have also been set up over fields in which ocean-bottom cables have been purchased and permanently deployed. This method can be known as life-of-field seismic (LoFS) or permanent reservoir monitoring (PRM).[36]
4D seismic surveys using towed-streamer technology can be very challenging, as the aim of a 4D survey is to repeat the original or baseline survey as accurately as possible. Weather, tides, currents and even the time of year can have a significant impact upon how accurately such a survey can achieve that repeatability goal.
OBN has proven to be another very good way to repeat a seismic acquisition accurately. The world's first 4D survey using nodes was acquired over the Atlantis Oil Field in 2009, with the nodes placed by an ROV in water depths of 1,300–2,200 metres to within a few metres of where they had previously been placed in 2005.[43]
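Repeatability between a baseline and a monitor survey is commonly quantified trace-by-trace with the NRMS metric, NRMS = 200 · rms(m − b) / (rms(m) + rms(b)), expressed as a percentage (0% for identical traces, 200% for anti-correlated ones). A minimal sketch on synthetic traces:

```python
# NRMS repeatability metric between a baseline and a monitor trace.
# The two example traces below are synthetic and purely illustrative.

import math

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def nrms(baseline, monitor):
    """NRMS in percent: 0 for identical traces, 200 for anti-correlated."""
    diff = [m - b for m, b in zip(monitor, baseline)]
    return 200.0 * rms(diff) / (rms(monitor) + rms(baseline))

base = [0.0, 1.0, 0.5, -0.8, 0.2]
mon  = [0.0, 0.9, 0.55, -0.75, 0.2]   # small production-related change
print(round(nrms(base, mon), 1))      # roughly 9, i.e. ~9% NRMS
```

Lower NRMS values indicate better-repeated acquisition geometry, which is the advantage claimed for node placement by ROV.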
Seismic data processing
There are three main processes in seismic data processing: deconvolution, common-midpoint (CMP) stacking and migration.[44]
Deconvolution is a process that tries to extract the reflectivity series of the Earth, under the assumption that a seismic trace is just the reflectivity series of the Earth convolved with distorting filters.[45] This process improves temporal resolution by collapsing the seismic wavelet, but it is nonunique unless further information is available such as well logs, or further assumptions are made. Deconvolution operations can be cascaded, with each individual deconvolution designed to remove a particular type of distortion.
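The convolutional model behind deconvolution can be demonstrated with a least-squares spiking filter. This sketch assumes the wavelet is known (in practice it must be estimated, which is one source of the non-uniqueness mentioned above); the wavelet, reflectivity and filter length are all illustrative choices:

```python
# Illustrative least-squares spiking deconvolution under the convolutional
# model: trace = wavelet * reflectivity. The wavelet here is assumed known.

import numpy as np

# Synthetic reflectivity series and a simple decaying (minimum-phase) wavelet.
reflectivity = np.zeros(100)
reflectivity[[20, 45, 70]] = [1.0, -0.7, 0.5]
wavelet = np.array([1.0, 0.6, 0.3, 0.1])
trace = np.convolve(reflectivity, wavelet)[:100]   # the recorded trace

# Design an inverse filter f such that wavelet * f ~ [1, 0, 0, ...].
n = 20                                    # inverse-filter length
W = np.zeros((len(wavelet) + n - 1, n))   # convolution matrix of the wavelet
for i in range(n):
    W[i:i + len(wavelet), i] = wavelet
desired = np.zeros(W.shape[0])
desired[0] = 1.0                          # target output: a spike at zero lag
f, *_ = np.linalg.lstsq(W, desired, rcond=None)

# Applying the filter collapses the wavelet, recovering the reflectivity.
estimate = np.convolve(trace, f)[:100]
print(np.allclose(estimate, reflectivity, atol=1e-2))
```

Collapsing the wavelet towards a spike is exactly the temporal-resolution improvement the text describes; cascaded deconvolutions would apply several such filters in sequence.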
CMP stacking is a robust process that exploits the fact that a particular subsurface location will have been sampled numerous times and at different offsets. This allows a geophysicist to construct a group of traces with a range of offsets that all sample the same subsurface location, known as a common midpoint gather.[46] The average amplitude is then calculated along each time sample, significantly reducing random noise but also losing all valuable information about the relationship between seismic amplitude and offset. Less significant processes applied shortly before the CMP stack are normal-moveout (NMO) correction and statics correction. Unlike marine seismic data, land seismic data must be corrected for the elevation differences between the shot and receiver locations. This correction takes the form of a vertical time shift to a flat datum and is known as a statics correction, but it needs further correction later in the processing sequence because the velocity of the near surface is not accurately known. This further correction is known as a residual statics correction.
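The moveout-plus-stack idea can be sketched for a single gather. This toy example assumes a constant stacking velocity and a single flat reflector whose arrival time follows the hyperbola t(x) = sqrt(t0² + x²/v²); grid sizes and values are illustrative:

```python
# Minimal normal-moveout (NMO) correction and stack of one synthetic CMP
# gather, assuming a constant stacking velocity (illustrative values).

import numpy as np

dt = 0.004                    # sample interval, s
nt = 500                      # samples per trace (2 s of data)
offsets = np.array([100.0, 400.0, 800.0, 1200.0])  # source-receiver offsets, m
v = 2000.0                    # stacking velocity, m/s
t0 = 0.8                      # zero-offset two-way time of the reflector, s

# Build the gather: one spike per trace at its hyperbolic moveout time.
gather = np.zeros((len(offsets), nt))
for i, x in enumerate(offsets):
    tx = np.sqrt(t0**2 + (x / v) ** 2)
    gather[i, int(round(tx / dt))] = 1.0

# NMO correction: map each output time t0 back from its moveout time t(x),
# using nearest-sample lookup (real processing interpolates).
corrected = np.zeros_like(gather)
t_axis = np.arange(nt) * dt
for i, x in enumerate(offsets):
    tx = np.sqrt(t_axis**2 + (x / v) ** 2)
    src = np.round(tx / dt).astype(int)
    valid = src < nt
    corrected[i, valid] = gather[i, src[valid]]

stack = corrected.mean(axis=0)                       # the CMP stack
print(int(np.argmax(stack)) == int(round(t0 / dt)))  # spike aligned at t0
```

After correction the reflection aligns at t0 on every trace, so averaging reinforces it while random noise (absent in this clean synthetic) would be suppressed.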
Seismic migration is the process by which seismic events are geometrically re-located, in either space or time, to the location at which the event occurred in the subsurface rather than the location at which it was recorded at the surface, thereby creating a more accurate image of the subsurface.
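One classical way to perform this re-location is diffraction summation (a simple form of Kirchhoff migration): each candidate image point collects the energy lying along its diffraction hyperbola t = 2·sqrt(z² + (x − xr)²)/v. The constant-velocity toy below is only a sketch of the principle, with an assumed grid and velocity:

```python
# Toy constant-velocity diffraction-summation migration of a zero-offset
# section: energy scattered along a hyperbola is collapsed back to the
# point that produced it. All parameters are illustrative.

import numpy as np

v, dt, dx = 2000.0, 0.002, 25.0
nx, nt = 81, 400
xs = np.arange(nx) * dx                 # receiver positions, m

# Synthetic zero-offset data from a single point diffractor.
x0, z0 = 1000.0, 600.0
data = np.zeros((nx, nt))
for ir, xr in enumerate(xs):
    t = 2.0 * np.sqrt(z0**2 + (xr - x0) ** 2) / v
    it = int(round(t / dt))
    if it < nt:
        data[ir, it] = 1.0

# Migrate: for each image point, sum amplitudes along its hyperbola.
zs = np.arange(1, 160) * 10.0           # depth axis, 10 m cells
image = np.zeros((nx, len(zs)))
for ix, x in enumerate(xs):
    for iz, z in enumerate(zs):
        t = 2.0 * np.sqrt(z**2 + (x - xs) ** 2) / v   # over all receivers
        it = np.round(t / dt).astype(int)
        ok = it < nt
        image[ix, iz] = data[np.arange(nx)[ok], it[ok]].sum()

ix_max, iz_max = np.unravel_index(np.argmax(image), image.shape)
print(xs[ix_max], zs[iz_max])  # energy focuses at the diffractor position
```

The diffraction hyperbola recorded at the surface collapses to a focused point at the diffractor's true position, which is the re-location the text describes.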
Seismic interpretation
The goal of seismic interpretation is to obtain a coherent geological story from the map of processed seismic reflections.[47] At its simplest, seismic interpretation involves tracing and correlating continuous reflectors throughout the 2D or 3D dataset and using these as the basis for the geological interpretation. The aim is to produce structural maps that reflect the spatial variation in depth of certain geological layers. From these maps, hydrocarbon traps can be identified and models of the subsurface created that allow volume calculations to be made. However, a seismic dataset rarely gives a picture clear enough to do this. This is mainly because of limited vertical and horizontal seismic resolution,[48] but noise and processing difficulties also often degrade the picture. Because of this, there is always a degree of uncertainty in a seismic interpretation, and a particular dataset may have more than one solution that fits the data. In such a case, more data will be needed to constrain the solution, for example in the form of further seismic acquisition, borehole logging, or gravity and magnetic survey data. Similarly to the mentality of a seismic processor, a seismic interpreter is generally encouraged to be optimistic, in order to encourage further work rather than abandonment of the survey area.[49] Seismic interpretation is carried out by both geologists and geophysicists, with most seismic interpreters having an understanding of both fields.
In hydrocarbon exploration, the features that the interpreter is particularly trying to delineate are the parts that make up a petroleum reservoir – the source rock, the reservoir rock, the seal and trap.
Seismic attribute analysis
Seismic attribute analysis involves extracting or deriving a quantity from seismic data that can be analysed to enhance information that might be more subtle in a traditional seismic image, leading to a better geological or geophysical interpretation of the data.[50] Examples of attributes that can be analysed include mean amplitude, which can lead to the delineation of bright spots and dim spots, coherency, and amplitude versus offset. Attributes that can show the presence of hydrocarbons are called direct hydrocarbon indicators.
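A simple amplitude attribute of the kind mentioned above is windowed RMS amplitude along a trace, often used to highlight bright spots. The trace and window length below are illustrative assumptions:

```python
# Sliding-window RMS amplitude attribute of a single synthetic trace.
# The "bright" interval and window length are chosen for illustration.

import numpy as np

def rms_amplitude(trace, window):
    """Centered sliding-window RMS amplitude of a 1-D trace."""
    pad = window // 2
    padded = np.pad(trace**2, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(padded, kernel, mode="valid")[: len(trace)])

trace = np.zeros(200)
trace[90:110] = np.sin(np.linspace(0, 4 * np.pi, 20))   # a "bright" interval
attr = rms_amplitude(trace, 21)
print(80 < int(np.argmax(attr)) < 120)  # attribute peaks in the bright zone
```

The attribute is high only where the trace carries strong amplitudes, which is how such maps delineate bright and dim spots across a 3D volume.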
Crustal studies
The use of reflection seismology in studies of tectonics and the Earth's crust was pioneered in the 1970s by groups such as the Consortium for Continental Reflection Profiling (COCORP), which inspired deep seismic exploration in other countries, such as BIRPS in Great Britain and ECORS in France.[51] The British Institutions Reflection Profiling Syndicate (BIRPS) was started as a result of hydrocarbon exploration in the North Sea, when it became clear that there was a lack of understanding of the tectonic processes that had formed the geological structures and sedimentary basins being explored. The effort produced some significant results and showed that it is possible, with marine seismic surveys, to profile features such as thrust faults that penetrate through the crust to the upper mantle.[52]
Environmental impact
As with all human activities, seismic reflection surveys have some impact on the Earth's natural environment, and both the hydrocarbon industry and environmental groups undertake research to investigate these effects.
Land
On land, conducting a seismic survey may require the building of roads for transporting equipment and personnel, and vegetation may need to be cleared for the deployment of equipment. If the survey is in a relatively undeveloped area, significant habitat disturbance may occur, and many governments require seismic companies to follow strict rules regarding destruction of the environment; for example, the use of dynamite as a seismic source may be disallowed. Seismic processing techniques allow seismic lines to deviate around natural obstacles or to use pre-existing non-straight tracks and trails. With careful planning, this can greatly reduce the environmental impact of a land seismic survey. The more recent use of inertial navigation instruments instead of theodolites for land surveying has further decreased the impact by allowing survey lines to wind between trees.
The potential impact of any seismic survey on land needs to be assessed at the planning stage and managed effectively. Well-regulated environments generally require Environmental and Social Impact Assessment (ESIA) or Environmental Impact Assessment (EIA) reports before any work starts. Project planning also needs to consider what impact, if any, will be left behind once a project has completed. It is the contractor's and client's responsibility to manage the remediation plan in accordance with the contract and the laws of the country where the project is conducted.
Depending upon the size of a project, land seismic operations can have a significant local impact and a sizeable physical footprint, especially where storage facilities, camp utilities, waste-management facilities (including black- and grey-water management), general and seismic vehicle parking areas, workshops and maintenance facilities, and living accommodation are required. Contact with local people can disrupt their normal lives through increased noise, 24-hour operations and increased traffic, and these effects have to be assessed and mitigated.
Archaeological considerations are also important, and project planning must accommodate the legal, cultural and social requirements that apply. Specialist techniques can be used to assess safe working distances from buildings and archaeological structures, to minimise the survey's impact and prevent damage.
Marine
The main environmental concern for marine seismic surveys is the potential for noise associated with the high-energy seismic source to disturb or injure animal life, especially cetaceans such as whales, porpoises, and dolphins, as these mammals use sound as their primary method of communication with one another.[53] High-level and long-duration sound can cause physical damage, such as hearing loss, whereas lower-level noise can cause temporary threshold shifts in hearing, obscuring sounds that are vital to marine life, or behavioural disturbance.[54]
A study has shown[55] that migrating humpback whales will leave a minimum 3 km gap between themselves and an operating seismic vessel, with resting humpback whale pods with cows exhibiting increased sensitivity and leaving an increased gap of 7–12 km. Conversely, the study found that male humpback whales were attracted to a single operating airgun as they were believed to have confused the low-frequency sound with that of whale breaching behaviour. In addition to whales, sea turtles, fish and squid all showed alarm and avoidance behaviour in the presence of an approaching seismic source. It is difficult to compare reports on the effects of seismic survey noise on marine life because methods and units are often inadequately documented.
Gray whales have been observed to keep more than 30 km from their regular migratory and feeding grounds in areas of seismic testing.[citation needed] Similarly, the breathing of gray whales was shown to be more rapid, indicating discomfort and panic. It is circumstantial evidence such as this that has led researchers to believe that avoidance and panic might be responsible for increased whale beachings, although research into these questions is ongoing.
Even so, airguns are shut down only when cetaceans are seen at very close range, usually under 1 km.[56]
Offering another point of view, a joint paper from the International Association of Geophysical Contractors (IAGC) and the International Association of Oil and Gas Producers (IOGP) argues that the noise created by marine seismic surveys is comparable to natural sources of seismic noise.[57]
The Joint Nature Conservation Committee (JNCC), the UK public body "that advises the UK Government and devolved administrations on UK-wide and international nature conservation",[58] has had a long-term interest in the impact of geophysical or seismic surveys on the marine environment. As early as the 1990s, it was understood at government level that the sound energy produced by seismic surveys needed to be investigated and monitored.[59] JNCC guidelines have been, and continue to be, one of the references used internationally as a possible baseline standard in seismic contracts worldwide, such as the 'JNCC guidelines for minimising the risk of injury to marine mammals from geophysical surveys (seismic survey guidelines)' (2017).[60]
A complicating factor in the discussion of seismic sound energy as a disruption to marine mammals is the size and scale of seismic surveys as conducted in the 21st century. Historically, seismic surveys tended to be localised and to last weeks or months, but with OBN technology surveys can cover thousands of square kilometres of ocean and continue for years, putting sound energy into the ocean 24 hours a day from multiple energy sources. One current example is the 85,000-square-kilometre mega seismic survey contract[61] signed by the Abu Dhabi national oil company ADNOC in 2018, with an estimated duration into 2024 across a range of deep-water areas, coastal areas, islands and shallow-water locations. It may be very difficult to assess the long-term impact of these huge operations on marine life.
In 2017, IOGP recommended[62] that, to avoid disturbance whilst surveying:
- Protective measures are employed to address site-specific environmental conditions of each operation to ensure that sound exposure and vessel traffic do not harm marine mammals.
- Surveys are planned to avoid known sensitive areas and time periods, such as breeding and feeding grounds and seasons.
- Exclusion zones are typically established around the seismic source to further protect marine fauna from any potentially detrimental effects of sound. The exclusion zone is typically a circle with a radius of at least 500 meters around the sound source.
- Trained observers and listening devices are used to visually and acoustically monitor that zone for marine mammals and other protected species before any sound-producing operations begin. These observers help ensure adherence to the protective practices during operations and their detailed reports provide information on the biodiversity of the survey area to the local governments.
- Sound production typically begins with a “soft-start” or “ramp-up” that involves a gradual increase of the sound level from the air gun source from a very low level to full operational levels at the beginning of the seismic lines – usually over 20 to 40 minutes. This soft-start procedure is intended to allow time for any animal that may be close to the sound source to move away as the sound grows louder.
A second factor is the regulatory environment where the seismic survey is taking place. In highly regulated locations such as the North Sea or the Gulf of Mexico, the legal requirements will be clearly stated at the contract level, and both contractor and client will comply with the regulations, as the consequences of non-compliance can be severe, such as substantial fines or the withdrawal of permits for exploration blocks. However, some countries with varied and rich marine biomes have weak environmental laws and regulators that are ineffective or even non-existent. A weak regulatory framework can severely compromise any attempt at protecting marine environments; this is frequently found where state-owned oil and gas companies are dominant in a country and the regulator is also a state-owned and operated entity, and therefore not considered truly independent.
See also
- Deconvolution
- Depth conversion, the conversion of acoustic-wave two-way travel time to actual depth
- Exploration geophysics
- LIGO
- Passive seismic
- SEG-Y, a popular file format for seismic reflection data
- Seismic migration
- Seismic refraction
- Seismic Unix, open source software for processing of seismic reflection data
- Seismic wave
- Seismic wide-angle reflection and refraction
- Swell filter
- Synthetic seismogram
References
- ^ Grubišić, Vanda; Orlić, Mirko (2007). "Early Observations of Rotor Clouds by Andrija Mohorovičić" (PDF). Bulletin of the American Meteorological Society. 88 (5): 693–700. Bibcode:2007BAMS...88..693G. doi:10.1175/BAMS-88-5-693.
- ^ a b Telford, W. M.; et al. (1976). Applied Geophysics. Cambridge University Press. p. 220.
- ^ Sheriff, R. E.; Geldart, L. P. (1995). Exploration Seismology (2nd ed.). Cambridge University Press. pp. 3–6.
- ^ a b c d e f g h Sheriff, R. E.; Geldart, L. P. (1982). Exploration seismology, Volume 1, History, theory, and data acquisition. Cambridge: Cambridge University Press. p. 67. ISBN 0521243734.
- ^ Rosaire, E. E.; Adler, Joseph H. (January 1934). "Applications and limitations of the dip method". Bulletin of the American Association of Petroleum Geologists. 18 (1): 121.
- ^ "Polarcus: Appointment of Joint Provisional Liquidators". Polarcus.
- ^ "CGG: Delivering Geoscience Leadership".
- ^ "WesternGeco".
- ^ a b c Sheriff, R. E., Geldart, L. P., (1995), 2nd Edition. Exploration Seismology. Cambridge University Press.
- ^ Bube, Kenneth P.; Burridge, Robert (1 October 1983). "The One-Dimensional Inverse Problem of Reflection Seismology". SIAM Review. 25 (4): 497–559. doi:10.1137/1025122. ISSN 0036-1445.
- ^ Shuey, R. T. (1985). "A simplification of the Zoeppritz equations". Geophysics. 50 (4): 609–614. Bibcode:1985Geop...50..609S. doi:10.1190/1.1441936.
- ^ Avseth, P, T Mukerji and G Mavko (2005). Quantitative seismic interpretation. Cambridge University Press, Cambridge, p. 183
- ^ "Ground Roll". Schlumberger Oilfield Glossary. Archived from the original on 31 May 2012. Retrieved 8 September 2013.
- ^ Zheng, Yingcai; Fang, Xinding; Liu, Jing; Fehler, Michael C. (2013). "Scholte waves generated by seafloor topography". arXiv:1306.4383 [physics.geo-ph].
- ^ Dobrin, M. B., 1951, Dispersion in seismic surface waves, Geophysics, 16, 63–80.
- ^ "Multiples Reflection". Schlumberger Oilfield Glossary. Archived from the original on 2 June 2012. Retrieved 8 September 2013.
- ^ Pendrel, J. (2006). "Seismic Inversion—A Critical Tool in Reservoir Characterization". Scandinavian Oil-Gas Magazine (5/6): 19–22.
- ^ Malehmir, Alireza; Zhang, Fengjiao; Dehghannejad, Mahdieh; Lundberg, Emil; Döse, Christin; Friberg, Olof; Brodic, Bojan; Place, Joachim; Svensson, Mats; Möller, Henrik (1 November 2015). "Planning of urban underground infrastructure using a broadband seismic landstreamer — Tomography results and uncertainty quantifications from a case study in southwestern Sweden". Geophysics. 80 (6): B177 – B192. Bibcode:2015Geop...80B.177M. doi:10.1190/geo2015-0052.1. ISSN 0016-8033.
- ^ Galbraith, M. (2001). "3D Seismic Surveys – Past, Present and Future". CSEG Recorder. 26 (6). Canadian Society of Exploration Geophysicists.
- ^ Cartwright, J.; Huuse, M. (2005). "3D seismic technology: the geological 'Hubble'". Basin Research. 17 (1): 1–20. Bibcode:2005BasR...17....1C. doi:10.1111/j.1365-2117.2005.00252.x. S2CID 129218651.
- ^ Yilmaz, Öz (2001). Seismic data analysis. Society of Exploration Geophysicists. p. 1. ISBN 1-56080-094-1.
- ^ Gochioco, Lawrence M. (1990). "Seismic surveys for coal exploration and mine planning". The Leading Edge. 9 (4): 25–28. Bibcode:1990LeaEd...9...25G. doi:10.1190/1.1439738.
- ^ Milkereit, B.; Eaton, D.; Salisbury, M.; Adam, E.; Bohlen, Thomas (2003). "3D Seismic Imaging for Mineral Exploration" (PDF). Commission on Controlled-Source Seismology: Deep Seismic Methods. Retrieved 8 September 2013.
- ^ "The Role of Geophysics In Geothermal Exploration". Quantec Geoscience. Archived from the original on 5 February 2013. Retrieved 8 September 2013.
- ^ Louie, John N.; Pullammanappallil, S. K. (2011). "Advanced seismic imaging for geothermal development" (PDF). New Zealand Geothermal Workshop 2011 Proceedings. Archived from the original (PDF) on 12 July 2012. Retrieved 8 September 2013.
- ^ Dentith, Michael; Mudge, Stephen T. (24 April 2014). Geophysics for the Mineral Exploration Geoscientist. Cambridge University Press. Bibcode:2014gmeg.book.....D. doi:10.1017/cbo9781139024358. ISBN 9780521809511. S2CID 127775731.
- ^ "Transition Zone". Geokinetics. Retrieved 8 September 2013.
- ^ Yilmaz, Öz (2001). Seismic data analysis : processing, inversion, and interpretation of seismic data (2nd ed.). Society of Exploration Geophysicists. ISBN 978-1-56080-094-1.
- ^ Jon Cocker (2011). "Land 3-D Seismic Survey Designed To Meet New Objectives". E & P. Hart Energy. Archived from the original on 19 February 2013. Retrieved 12 March 2012.
- ^ Gluyas, J; Swarbrick, R (2004). Petroleum Geoscience. Blackwell Publishing. p. 22. ISBN 978-0-632-03767-4.
- ^ E. Sheriff, Robert; Geldart, L. P. (1995). Exploration Seismology (2nd ed.). Cambridge University Press. pp. 209–210. ISBN 0-521-46826-4.
- ^ E. Sheriff, Robert; Geldart, L. P. (1995). Exploration Seismology (2nd ed.). Cambridge University Press. p. 200. ISBN 0-521-46826-4.
- ^ Howe, Dave; Foster, Mark; Allen, Tony; Taylor, Brian; Jack, Ian (2008). "Independent simultaneous sweeping -a method to increase the productivity of land seismic crews". SEG Technical Program Expanded Abstracts 2008. pp. 2826–2830. doi:10.1190/1.3063932.
- ^ "PGS fleet | Seismic vessels". 19 November 2015.
- ^ E. Sheriff, Robert; Geldart, L. P. (1995). Exploration Seismology (2nd ed.). Cambridge University Press. p. 260. ISBN 0-521-46826-4.
- ^ a b c Barley, Brian; Summers, Tim (2007). "Multi-azimuth and wide-azimuth seismic: Shallow to deep water, exploration to production". The Leading Edge. 26 (4): 450–458. Bibcode:2007LeaEd..26..450B. doi:10.1190/1.2723209.
- ^ Howard, Mike (2007). "Marine seismic surveys with enhanced azimuth coverage: Lessons in survey design and acquisition" (PDF). The Leading Edge. 26 (4): 480–493. Bibcode:2007LeaEd..26..480H. doi:10.1190/1.2723212. Retrieved 8 September 2013.
- ^ Threadgold, Ian M.; Zembeck-England, Kristin; Aas, Per Gunnar; Fontana, Philip M.; Hite, Damian; Boone, William E. (2006). "Implementing a wide azimuth towed streamer field trial: The what, why and mostly how of WATS in Southern Green Canyon". SEG Technical Program Expanded Abstracts 2006. pp. 2901–2904. doi:10.1190/1.2370129.
- ^ "Ocean Bottom Cable". Schlumberger Oilfield Glossary. Archived from the original on 28 July 2012. Retrieved 8 September 2013.
- ^ "Four-Component Seismic Data". Schlumberger Oilfield Glossary. Archived from the original on 16 July 2012. Retrieved 8 September 2013.
- ^ Stewart, Jonathan; Shatilo, Andrew; Jing, Charlie; Rape, Tommie; Duren, Richard; Lewallen, Kyle; Szurek, Gary (2004). "A comparison of streamer and OBC seismic data at Beryl Alpha field, UK North Sea". SEG Technical Program Expanded Abstracts 2004. pp. 841–844. doi:10.1190/1.1845303.
- ^ Beaudoin, Gerard (2010). "Imaging the invisible — BP's path to OBS nodes". SEG Technical Program Expanded Abstracts 2010. Society of Exploration Geophysicists. pp. 3734–3739. doi:10.1190/1.3513626.
- ^ Reasnor, Micah; Beaudoin, Gerald; Pfister, Michael; Ahmed, Imtiaz; Davis, Stan; Roberts, Mark; Howie, John; Openshaw, Graham; Longo, Andrew (2010). "Atlantis time-lapse ocean bottom node survey: A project team's journey from acquisition through processing". SEG Technical Program Expanded Abstracts 2010. Society of Exploration Geophysicists. pp. 4155–4159. doi:10.1190/1.3513730.
- ^ Yilmaz, Öz (2001). Seismic data analysis. Society of Exploration Geophysicists. p. 4. ISBN 1-56080-094-1.
- ^ E. Sheriff, Robert; Geldart, L. P. (1995). Exploration Seismology (2nd ed.). Cambridge University Press. p. 292. ISBN 0-521-46826-4.
- ^ "Common-midpoint". Schlumberger Oilfield Glossary. Archived from the original on 31 May 2012. Retrieved 8 September 2013.
- ^ Gluyas, J; Swarbrick, R (2004). Petroleum Geoscience. Blackwell Publishing. p. 24. ISBN 978-0-632-03767-4.
- ^ Basics of Seismic Interpretation
- ^ E. Sheriff, Robert; Geldart, L. P. (1995). Exploration Seismology (2nd ed.). Cambridge University Press. p. 349. ISBN 0-521-46826-4.
- ^ "Petrel Seismic Attribute Analysis". Schlumberger. Archived from the original on 29 July 2013. Retrieved 8 September 2013.
- ^ "Consortium for Continental Reflection Profiling". Retrieved 6 March 2012.
- ^ Crustal Architecture and Images. "BIRPS". Retrieved 6 March 2012.
- ^ Richardson, W. John; et al. (1995). Marine Mammals and Noise. Academic Press. p. 1. ISBN 978-0-12-588441-9.
- ^ Gausland, Ingebret (2000). "Impact of seismic surveys on marine life" (PDF). The Leading Edge. 19 (8): 903–905. Bibcode:2000LeaEd..19..903G. doi:10.1190/1.1438746. Archived from the original (PDF) on 28 May 2013. Retrieved 8 March 2012.
- ^ McCauley, R.D.; et al. (2000). "Marine seismic surveys: A study of environmental implications" (PDF). The APPEA Journal. 40: 692–708. doi:10.1071/AJ99048. hdl:20.500.11937/80308. Archived from the original (PDF) on 28 May 2013. Retrieved 8 March 2012.
- ^ Cummings, Jim (January 2004). "Sonic Impact: a Precautionary Assessment of Noise Pollution From Ocean Seismic Surveys". Greenpeace. 45 Pp. Retrieved 16 November 2021.
- ^ Scientific Surveys and Marine Mammals – Joint OGP/IAGC Position Paper, December 2008 – "Archived copy" (PDF). Archived from the original (PDF) on 16 July 2011. Retrieved 12 September 2010.
- ^ "Who we are | JNCC - Adviser to Government on Nature Conservation".
- ^ Cetacean observations during seismic surveys in 1996.
- ^ "JNCC guidelines for minimising the risk of injury to marine mammals from geophysical surveys (Seismic survey guidelines) | JNCC Resource Hub".
- ^ "ADNOC Awards $519m Contract for World's Largest 3D Seismic Survey". 26 November 2020.
- ^ Recommended monitoring and mitigation measures for cetaceans during marine seismic survey geophysical operations. IOGP. 2017.
Further reading
The following books cover important topics in reflection seismology. Most require some knowledge of mathematics, geology, and/or physics at the university level or above.
- Brown, Alistair R. (2004). Interpretation of three-dimensional seismic data (sixth ed.). Society of Exploration Geophysicists and American Association of Petroleum Geologists. ISBN 0-89181-364-0.
- Biondi, B. (2006). 3d Seismic Imaging: Three Dimensional Seismic Imaging. Society of Exploration Geophysicists. ISBN 0-07-011117-0.
- Claerbout, Jon F. (1976). Fundamentals of geophysical data processing. McGraw-Hill. ISBN 1-56080-137-9.
- Ikelle, Luc T. & Lasse Amundsen (2005). Introduction to Petroleum Seismology. Society of Exploration Geophysicists. ISBN 1-56080-129-8.
- Scales, John (1997). Theory of seismic imaging. Golden, Colorado: Samizdat Press. Archived from the original on 18 August 2015.
- Yilmaz, Öz (2001). Seismic data analysis. Society of Exploration Geophysicists. ISBN 1-56080-094-1.
- Milsom, J., University College of London (2005). Field Geophysics. Wiley Publications. ISBN 978-0-470-84347-5.
- Chapman, C. H. (2004). Fundamentals of Seismic Wave Propagation. Cambridge University Press. ISBN 978-0-521-81538-3.
Further research in reflection seismology may be found particularly in books and journals of the Society of Exploration Geophysicists, the American Geophysical Union, and the European Association of Geoscientists and Engineers.
External links
- Biography of Henry Salvatori
- Proving That The Seismic Reflection Method Really Works – Geophysical Society of Tulsa
- Reflection Seismology Literature at Stanford Exploration Project
- Website of the International Association of Geophysical Contractors
- IAGC/IOGP position paper on seismic surveys and marine mammals (PDF)
- Tutorial on seismic reflection data processing
- Information about using Seismic Survey in oil and gas exploration in Australia Archived 13 June 2013 at the Wayback Machine
Reflection seismology
Reflection seismology is a method of exploration geophysics that uses the principles of seismology to estimate properties of the Earth's subsurface from reflected seismic waves generated by controlled sources and recorded at the surface.[1] Acoustic waves propagate downward, reflect at interfaces where seismic velocity or density changes create acoustic impedance contrasts—defined as Z = ρv, with v as velocity and ρ as density—and the reflection coefficient quantifies the amplitude of reflected energy relative to incident waves.[2][3] Developed in the early 1920s through experiments by pioneers like J. Clarence Karcher, it enabled the first commercial applications in petroleum prospecting by the late 1920s, transforming resource exploration by imaging stratigraphic traps and structural features otherwise undetectable.[4] On land, vibroseis trucks or explosive charges serve as sources with geophone arrays as receivers; offshore, air guns and streamer hydrophones predominate, yielding data processed via stacking, deconvolution, and migration to mitigate distortions from wave propagation and produce interpretable two- or three-dimensional sections revealing depths via the two-way travel time t = 2d/v.[1] Beyond hydrocarbons, it maps crustal architecture, groundwater aquifers, and engineering hazards, though marine surveys have drawn environmental concerns over marine mammal disturbance from high-amplitude sources.[2] Its empirical success stems from causal wave propagation governed by elastic theory, yielding probabilistic reservoir models when integrated with well data, with ongoing advances in full-waveform inversion enhancing resolution amid complex overburdens.[5]
Principles and Fundamentals
Basic Physics of Seismic Wave Reflection
Seismic waves in reflection seismology are primarily compressional P-waves, which propagate through the subsurface as longitudinal elastic disturbances, displacing particles parallel to the direction of travel.[2] These waves reflect at geological interfaces where there is a discontinuity in acoustic properties, governed by the continuity of stress and particle displacement across the boundary under the principles of elastodynamics.[6] Reflection arises from the partial return of wave energy due to impedance contrasts, with the incident wave partitioning into reflected and transmitted components.[7] Acoustic impedance Z, defined as the product of rock density ρ and P-wave velocity v_p, quantifies a medium's opposition to wave passage: Z = ρ v_p.[8] P-wave velocity depends on the elastic moduli, specifically v_p = √((λ + 2μ)/ρ), where λ and μ are Lamé parameters reflecting bulk and shear stiffness.[9] Interfaces with a significant impedance contrast ΔZ produce strong reflections; no contrast yields no reflection, even across lithologic changes. For normal incidence, the amplitude reflection coefficient is R = (Z_2 - Z_1)/(Z_2 + Z_1), representing the ratio of reflected to incident wave amplitude, with positive R indicating phase preservation and negative R indicating reversal.[6][10] The transmission coefficient for amplitude is T = 2 Z_2 / (Z_2 + Z_1), ensuring energy conservation, where R² + (Z_1/Z_2) T² = 1 for the incident energy fractions.[11] At normal incidence, no mode conversion to S-waves occurs, as particle motion aligns with propagation, simplifying analysis to P-waves only.[12] The two-way traveltime to depth d and back is t = 2d/v, assuming constant velocity v; in heterogeneous media, integration along the ray path accounts for varying v.[13] Reflection strength scales with ΔZ, with |R| typically 0.1-0.3 for sedimentary interfaces, enabling detection of thin layers if their thickness exceeds about a quarter of the wavelength.[6] These principles underpin seismic imaging, where recorded reflections map subsurface structure via impedance variations.[7]
Wave Propagation, Reflection, and Transmission
Seismic waves in reflection seismology primarily consist of compressional P-waves that propagate through subsurface media as elastic disturbances governed by the wave equation derived from Hooke's law and Newton's second law.[14] Propagation velocity depends on medium properties, with P-wave speed typically ranging from 1.5 km/s in unconsolidated sediments to over 8 km/s in crystalline basement rocks.[7] In homogeneous isotropic media, waves spread spherically from the source, undergoing geometric spreading and intrinsic attenuation due to viscoelastic damping, which reduces amplitude exponentially with distance as e^{-α r}, where α is the attenuation coefficient.[15] At an interface between two media with differing elastic properties, an incident P-wave undergoes partial reflection and transmission. The acoustic impedance Z = ρ v_p, product of density ρ and P-wave velocity v_p, quantifies this contrast; reflections arise from discontinuities in Z.[7] For normal incidence, the amplitude reflection coefficient R, ratio of reflected to incident wave amplitude, is R = (Z_2 - Z_1)/(Z_2 + Z_1), where subscript 1 denotes the incident medium and 2 the transmitting medium.[16] A positive R indicates no phase reversal (impedance increase), while negative R (impedance decrease) causes a 180° phase shift. The amplitude transmission coefficient T = 2 Z_2 / (Z_2 + Z_1). 
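A minimal numeric sketch of these normal-incidence coefficients may help; the two layer properties below are hypothetical illustrative values, not figures from the text.

```python
# Normal-incidence reflection/transmission coefficients for an interface,
# per R = (Z2 - Z1)/(Z2 + Z1) and T = 2*Z2/(Z2 + Z1).
# Layer densities and velocities are hypothetical example values.

def reflection_coefficient(z1, z2):
    """Amplitude reflection coefficient R = (Z2 - Z1) / (Z2 + Z1)."""
    return (z2 - z1) / (z2 + z1)

def transmission_coefficient(z1, z2):
    """Amplitude transmission coefficient T = 2*Z2 / (Z2 + Z1)."""
    return 2 * z2 / (z2 + z1)

# Acoustic impedance Z = rho * v_p for two hypothetical layers
z_shale = 2400 * 2700       # density (kg/m^3) * P-velocity (m/s)
z_sandstone = 2300 * 3200

r = reflection_coefficient(z_shale, z_sandstone)
t = transmission_coefficient(z_shale, z_sandstone)

# Energy conservation for a lossless interface: R^2 + (Z1/Z2) * T^2 = 1
energy = r**2 + (z_shale / z_sandstone) * t**2
print(f"R = {r:.3f}, T = {t:.3f}, energy sum = {energy:.6f}")
```

The small positive R (a few percent) is typical of a moderate sedimentary impedance contrast, and the energy sum confirms the coefficients are mutually consistent.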
Energy conservation holds such that the reflected energy fraction is R² and transmitted is (Z_1 / Z_2) T², summing to unity for lossless interfaces.[17] For oblique incidence, common in seismic surveys with source-receiver offsets, refraction follows Snell's law: sin θ_1 / v_1 = sin θ_2 / v_2, where θ is the incidence or refraction angle.[15] Full wave behavior is described by Zoeppritz equations, coupling P- and SV-wave amplitudes for reflected and transmitted waves, accounting for mode conversions.[16] These yield angle-dependent R(θ), approximated for small contrasts and angles by the Shuey equation R(θ) ≈ R(0) + G sin² θ, where R(0) is the normal incidence coefficient and G relates to velocity and density contrasts, enabling amplitude variation with offset (AVO) analysis.[18] Transmission similarly varies, with critical angles possible if v_2 > v_1, leading to total internal reflection beyond θ_c = arcsin(v_1 / v_2). In practice, marine surveys use near-normal approximations due to water-layer multiples, while land data incorporate surface waves and anisotropy effects.[19]
Key Concepts in Seismic Imaging
Seismic imaging in reflection seismology transforms raw seismic recordings into interpretable images of subsurface structures by accounting for wave propagation effects and focusing reflected energy at its origin. This process relies on the principle that seismic waves reflect at interfaces where acoustic impedance changes, with acoustic impedance defined as the product of density ρ and seismic velocity v, Z = ρv. The strength of reflection is quantified by the normal incidence reflection coefficient R = (Z_2 - Z_1)/(Z_2 + Z_1), where Z_1 and Z_2 are the impedances of the incident and transmitting media, respectively.[20][21] A fundamental step in imaging is the organization of data into common midpoint (CMP) gathers, where traces share the same subsurface midpoint between source and receiver. These gathers exhibit hyperbolic moveout due to varying source-receiver offsets, described by the normal moveout (NMO) equation t²(x) = t_0² + x²/v_RMS², with t as two-way traveltime, t_0 as zero-offset time, x as offset, and v_RMS as root-mean-square (RMS) velocity. Velocity analysis iteratively estimates v_RMS by aligning hyperbolas across gathers to maximize stack coherence, enabling NMO correction that flattens events for subsequent stacking, which sums corrected traces to enhance signal-to-noise ratio and suppress noise.[22][23] Stacking produces a zero-offset time section approximating a collection of normal-incidence reflections, but it suffers from geometric distortions such as diffractions from point reflectors and mispositioning of dipping events. Migration corrects these by extrapolating the wavefield backward in time or depth using the wave equation, repositioning amplitudes to their true subsurface locations. Post-stack time migration applies to the stacked section assuming a velocity model, while pre-stack depth migration processes individual traces for complex velocity fields, incorporating anisotropy and advanced algorithms like reverse-time migration for handling turning waves.
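The hyperbolic moveout and NMO correction described above can be sketched numerically; the reflector time, velocity, and offsets below are illustrative values, not data from the text.

```python
import math

def nmo_traveltime(t0, x, v_rms):
    """Hyperbolic moveout: t(x) = sqrt(t0^2 + x^2 / v_rms^2)."""
    return math.sqrt(t0**2 + (x / v_rms)**2)

def nmo_correction(t_obs, x, v_rms):
    """Map an observed traveltime back to its zero-offset time t0."""
    return math.sqrt(max(t_obs**2 - (x / v_rms)**2, 0.0))

# Illustrative CMP gather: one reflector at t0 = 1.0 s, v_rms = 2500 m/s
t0, v_rms = 1.0, 2500.0
offsets = [0.0, 500.0, 1000.0, 1500.0, 2000.0]   # metres
times = [nmo_traveltime(t0, x, v_rms) for x in offsets]

# After NMO correction with the right velocity, every trace flattens to t0,
# so the corrected event can be stacked constructively.
corrected = [nmo_correction(t, x, v_rms) for t, x in zip(times, offsets)]
print(corrected)  # each value ~1.0
```

Picking a wrong v_rms leaves residual curvature in `corrected`, which is exactly the misalignment that semblance-based velocity analysis searches for.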
Accurate velocity models, derived from well logs, tomography, or full-waveform inversion, are critical, as errors propagate into imaging inaccuracies.[22][24] Additional concepts include amplitude variation with offset (AVO) analysis, which extends the Zoeppritz equations to oblique incidence via approximations like the Shuey equation R(θ) ≈ R(0) + G sin² θ, aiding fluid detection by exploiting reflectivity dependence on angle. Resolution in imaging is limited by wavelength, with vertical resolution approximately λ/4, where λ = v/f and f is the dominant frequency, influencing the ability to discern thin layers or faults. These techniques collectively enable high-fidelity subsurface models for resource exploration and geohazard assessment.[22][2]
Historical Development
Pioneering Experiments (1910s-1920s)
The development of reflection seismology emerged from wartime geophysical research and petroleum exploration needs in the United States during the late 1910s. J. Clarence Karcher, a physicist who had worked on acoustic detection of submarines during World War I, filed patents in 1919 for a reflection seismograph that utilized controlled explosions to generate seismic waves and recorded their echoes from subsurface interfaces.[25] These early concepts built on refraction techniques but emphasized distinguishing reflections by their shorter travel times and specific wave patterns, aiming to map stratigraphic layers for oil traps rather than deep crustal refractions.[26] Initial laboratory-scale experiments confirmed the feasibility of reflection recording. On April 12, 1919, Karcher and collaborators obtained the first intentional seismic reflections within a Maryland rock quarry using dynamite charges and rudimentary geophones, detecting echoes from shallow bedrock interfaces at depths of tens of feet.[27] This demonstrated the principle of wave reflection at density contrasts, though field application remained untested amid skepticism from geologists accustomed to surface mapping and refraction surveys.[28] Field pioneering shifted to Oklahoma in 1921, where petroleum demand drove practical trials. On June 4, 1921, Karcher, along with University of Oklahoma physicists William P. Haseman and John L. Sherburne, conducted the first outdoor reflection tests near Oklahoma City, employing dynamite shots detonated in shallow holes and a line of geophones connected to galvanometers and photographic recorders.[29] These experiments captured reflections from depths around 600 feet, revealing layered sedimentary reflectors beneath the surface. 
By early August, the team recorded the world's first reflection seismic profile over a geologic structure along Vines Branch near Dougherty, Oklahoma, on August 9, 1921, identifying potential anticlinal features indicative of hydrocarbon reservoirs.[26][30] Subsequent 1920s efforts refined these methods amid technical hurdles like ambient noise from ground roll and refractions overpowering weaker reflections. Karcher, joined by geophysicists Irving A. Perrine and Daniel W. Ohern, formed exploration parties that surveyed structures in southern Oklahoma, correlating reflection times with known well logs to validate depths using average velocities of 5,000-6,000 feet per second in shales and sandstones.[28] Despite initial industry doubt—many operators favored wildcatting over "experimental" geophysics—these profiles demonstrated reflection's superiority for imaging shallow salt domes and faults invisible to refraction, laying groundwork for commercialization by mid-decade. Patents by others, such as John William Evans and Bevan Whitney in 1920, further spurred parallel developments in reflection timing circuits.[31][26]
Commercialization and Early Exploration (1930s-1950s)
[Image: Workers performing seismic tests, Seismic Explorations, Inc.]
The commercialization of reflection seismology began in the late 1920s, transitioning from experimental refraction methods to systematic reflection surveys for oil exploration beyond simple salt dome structures. The first productive oil well drilled using reflection data was completed on December 4, 1928, marking a pivotal shift toward commercial application.[32] In 1930, Geophysical Service Incorporated (GSI) was founded by J. Clarence Karcher and Eugene McDermott, initially focusing on reflection surveys and rapidly expanding to over 30 crews by 1934, servicing major oil companies in the United States.[33][28] Early commercial surveys in the 1930s employed dynamite as the primary energy source, with geophone arrays spread over distances of several hundred feet to capture reflected waves from subsurface interfaces. Data were recorded in analog form on photographic paper using 10 to 12 channels, allowing for basic stacking of traces to improve signal-to-noise ratios in areas like the Gulf Coast and Permian Basin. This period saw reflection methods credited with the discovery of 131 oil fields in the U.S. Gulf Coast alone, demonstrating their efficacy in delineating stratigraphic traps and faulted structures where refraction alone failed.[34] By the 1940s and into the 1950s, exploration expanded offshore, with modified land equipment adapted for marine use, including dynamite charges detonated from boats and hydrophones trailed behind vessels. Continuous reflection profiling emerged as the dominant technique by the mid-1950s, enabling denser data coverage and initial ventures into deeper water targets.[35][28] Crews proliferated, with companies like Western Geophysical, founded in 1933 by former GSI employee Henry Salvatori, contributing to widespread adoption across North American basins.
These efforts laid the groundwork for seismic's role as a core exploration tool, though limited by analog processing and single-fold coverage until later digital innovations.[36]
Technological Maturation (1960s-1980s)
During the 1960s, reflection seismology transitioned from analog to digital recording systems, enabling the acquisition of significantly more seismic traces and facilitating advanced processing techniques such as common depth point (CDP) stacking. This shift, which began with experimental digital systems in the late 1950s but gained widespread adoption by the mid-1960s, allowed for increased channel counts—from dozens to hundreds—and improved signal-to-noise ratios through digital filtering and stacking of multiple traces from the same subsurface reflection point.[28][37] The CDP method, conceptualized in the early 1950s, became routinely implemented with digital tools, providing redundancy for velocity analysis and noise suppression by stacking up to 24 or more traces per depth point, dramatically enhancing subsurface imaging resolution.[38][39] Marine acquisition matured with the invention of the air gun source in the early 1960s by Stephen Chelminski, which replaced explosives by releasing high-pressure air bubbles to generate repeatable seismic waves with reduced environmental hazards and better low-frequency content. By 1966, air guns were commercially deployed, often in arrays to optimize bubble suppression and signature control, paired with hydrophone streamers developed from World War II antisubmarine technology for continuous profiling.[40][41] On land, Vibroseis technology advanced, utilizing hydraulic vibrators mounted on trucks to sweep frequencies (typically 10-80 Hz) via correlated sweeps, offering controlled energy input and minimal surface disruption compared to dynamite shots; commercial systems proliferated in the 1960s, enabling higher fold coverage and safer operations.[42] In processing, the 1960s and 1970s saw the evolution of migration algorithms from graphical and analog models to digital implementations, correcting for wave diffraction and focusing energy to true subsurface positions. 
Key advancements included Sherwood's continuous automatic migration (CAM) in 1967 on early computers and diffraction summation methods in the late 1960s, which handled complex structures like faults and salt domes more accurately than time sections alone.[43] By the 1980s, pre-stack migration and finite-difference techniques emerged, supported by mainframe computing power, allowing for velocity model refinements and the onset of 3D surveys in complex terrains, though full 3D adoption was limited by data volume until decade's end.[43] These developments collectively increased exploration success rates, with seismic data volumes growing exponentially due to 48- to 120-fold stacks becoming standard.[28]
Digital and Advanced Techniques (1990s-Present)
The 1990s marked a transition to fully digital workflows in reflection seismology, driven by advances in computing power and data storage, which enabled routine processing of large-scale 3D seismic datasets. Three-dimensional surveys, initially exploratory tools in the 1980s, became standard for regional exploration by the mid-1990s, incorporating true-amplitude preservation to better quantify rock properties and amplitudes for reservoir characterization.[44][45] This shift facilitated prestack depth migration (PSDM), which addressed limitations of time migration in structurally complex areas by accounting for velocity variations in depth domains; a practical 3D PSDM scheme was implemented by Unocal in 1990 for the Gulf of Suez, yielding superior imaging over poststack methods on a 0.5 km grid.[46] Beam migration variants, such as Gaussian-beam methods, also emerged in the early 1990s as efficient alternatives to Kirchhoff PSDM for handling steep dips and irregular velocities.[47][48] Subsequent decades saw the refinement of wave-equation-based imaging, with reverse-time migration (RTM) gaining prominence in the 2000s for its ability to model wave propagation bidirectionally, improving resolution in salt-dome provinces and subsalt targets. Full-waveform inversion (FWI), theoretically formulated earlier but computationally infeasible until high-performance computing matured, became viable for quantitative velocity model updates by the late 2000s; it minimizes waveform misfits to derive high-resolution subsurface images, outperforming ray-based tomography in resolving fine-scale heterogeneities. 
Applications of FWI expanded in the 2010s to elastic and viscoelastic domains, enhancing AVO (amplitude versus offset) analysis for fluid detection.[49] Four-dimensional (4D) time-lapse seismic, repeating 3D surveys to monitor reservoir changes, transitioned from experimental pilots in the 1980s to commercial use by the late 1990s, particularly in the North Sea, where it quantified fluid movements and pressure variations with repeat accuracies under 5% in controlled settings.[50][51] Since the 2010s, machine learning integration has accelerated data processing, with convolutional neural networks applied for coherent and random noise attenuation, achieving signal-to-noise improvements of 10-20 dB in field datasets without physics-based assumptions.[52] Automated fault detection and horizon picking via deep learning models have reduced interpretation times by factors of 5-10, while hybrid physics-ML approaches, such as learned inversion priors in FWI, mitigate cycle-skipping issues where low-frequency data are scarce.[53] Wide-azimuth (WAZ) and ocean-bottom node acquisitions, combined with these techniques, have further enhanced illumination in anisotropic media, supporting subsalt and unconventional resource plays.[45] These developments collectively prioritize data fidelity and computational efficiency, though challenges persist in scaling FWI to full-frequency bands and validating ML outputs against physical principles.
Methodology and Techniques
Seismic Data Acquisition
Seismic data acquisition in reflection seismology requires generating controlled seismic waves using artificial sources and recording their reflections with arrays of detectors to image subsurface structures. The process aims to achieve dense sampling of wavefields for subsequent processing into interpretable sections or volumes.[54] Acquisition parameters, such as source-receiver offset and coverage fold, are designed to optimize resolution and signal-to-noise ratio while minimizing costs.[55] On land, sources commonly employ vibrators—hydraulic or mechanical devices that produce swept-frequency signals typically ranging from low to high frequencies—or impulsive methods like dynamite charges detonated in shallow holes. These generate primarily compressional (P-) waves that penetrate the subsurface, with velocities around 5-8 km/s in typical rocks. Receivers are geophones, electromechanical sensors that measure particle velocity and convert it to electrical signals, often grouped in arrays to suppress ground roll noise. Geophones are deployed in linear spreads for 2D surveys or dense grids for 3D surveys, with sources positioned at intervals along interwoven patterns to ensure multiple coverage per subsurface midpoint (common midpoint or CMP gather).[56][55][56] Marine acquisition utilizes vessel-towed systems, where air gun arrays—clusters of high-pressure air chambers—release bubbles to create broadband acoustic pulses as the primary source. These arrays are tuned for desired frequency content, typically emphasizing lower frequencies for deeper penetration. Receivers comprise hydrophones, pressure-sensitive transducers housed in buoyant streamers up to several kilometers long, towed behind the vessel at depths of 5-10 meters. The vessel sails predefined sail lines, firing sources at regular intervals (e.g., every 25-50 meters) while streamers provide near- to far-offset recordings. 
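The interplay of channel count, receiver spacing, and shot spacing determines how many traces share each common midpoint. A standard back-of-envelope relation for nominal 2D fold (a textbook formula, not stated in the text; the geometry values are hypothetical) is:

```python
def nominal_fold(n_channels, receiver_interval_m, shot_interval_m):
    """Nominal 2D CMP fold: (channels * receiver spacing) / (2 * shot spacing).

    The factor 2 arises because the midpoint spacing is half the
    receiver group interval.
    """
    return n_channels * receiver_interval_m / (2 * shot_interval_m)

# Hypothetical towed-streamer geometry: 240 channels at 12.5 m group
# interval, shots every 25 m along the sail line.
fold = nominal_fold(n_channels=240, receiver_interval_m=12.5, shot_interval_m=25.0)
print(fold)  # 60.0
```

A fold of 60 sits comfortably in the 20-120 range typical of modern surveys; halving the shot interval would double it at the cost of acquisition time.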
Streamer positioning is maintained using steerable birds and GPS for precise geometry.[56] Survey geometry distinguishes 2D from 3D methods: 2D acquisition follows single linear profiles, yielding vertical cross-sections suitable for regional mapping but limited in lateral resolution; 3D surveys employ orthogonal grids of source and receiver lines, producing cubic data volumes that enable detailed imaging of complex structures through multi-azimuth sampling. Fold, defined as the number of traces contributing to each CMP, typically ranges from 20-120 in modern surveys to stack and attenuate noise. Maximum offsets are selected to illuminate target depths, often 3-5 times the depth for adequate velocity information. Acquisition must account for environmental factors, such as terrain variability on land or streamer feathering in currents at sea, to maintain data quality.[55][54]
Data Processing and Migration
Raw seismic data acquired in reflection seismology consist of traces recording wave arrivals from multiple sources and receivers, requiring extensive processing to mitigate distortions from acquisition geometry, near-surface effects, and noise, ultimately yielding interpretable subsurface images.[57] The standard workflow begins with quality control and header editing to identify and remove noisy or defective traces, followed by sorting into common midpoint (CMP) gathers based on source-receiver midpoints.[58] Geometry assignment ensures accurate positioning of sources, receivers, and midpoints, enabling subsequent corrections for offset-dependent travel times.[57] Noise attenuation precedes signal enhancement, targeting coherent noise like ground roll or multiples via frequency-wavenumber (f-k) filtering or predictive deconvolution, which exploits their periodicity to suppress them while preserving primary reflections.[3] Deconvolution then compresses the seismic wavelet to approximate a spike, countering the mixed-phase source signature and absorption effects that broaden reflections; predictive or spiking deconvolution operators are designed from autocorrelation analysis of pilot traces.[57] Static corrections, including elevation and refraction statics, compensate for time delays from weathered layers or topography, using uphole surveys or tomography to derive time shifts applied uniformly across traces.[59] Velocity analysis iteratively picks semblance maxima on CMP gathers to estimate root-mean-square (RMS) velocities, which guide normal moveout (NMO) corrections that flatten hyperbolae of primary reflections for a given offset, assuming a layered medium.[57] Overcorrected tails from steep dips or low velocities are muted, and the aligned traces are stacked to sum constructively primaries while attenuating random noise, producing a zero-offset time section with improved signal-to-noise ratio—typically by a factor of the square root of the fold (number of 
traces summed).[58] Post-stack filtering, such as trace equalization or balancing, further enhances coherency. Migration constitutes the final imaging step, repositioning dipping reflectors and diffraction hyperbolae from their apparent downdip positions in the stacked section to true subsurface locations, thereby resolving lateral velocity variations and improving structural accuracy.[59] In time migration, such as Kirchhoff or phase-shift methods, traveltime operators trace wave paths using a smoothed velocity model to sum amplitudes along hyperbolic trajectories, effectively collapsing diffractions; this assumes a gently varying overburden and handles moderate dips up to about 30-40 degrees.[3] Depth migration, including pre-stack variants like reverse time migration (RTM), extrapolates data into depth using two-way wave equations solved forward and backward in time, accommodating strong lateral velocity contrasts in complex geology such as subsalt or thrust belts, though computationally intensive—requiring grid sizes on the order of gigavoxels for 3D surveys.[58] Velocity model building, often via tomographic inversion of residuals, iteratively refines inputs to minimize imaging artifacts like misties at well ties.[59] Pre-stack migration preserves offset information for amplitude versus offset (AVO) analysis, while post-stack variants suffice for structural imaging in simpler settings.[57]
Interpretation and Attribute Analysis
Seismic interpretation in reflection seismology entails the systematic analysis of processed seismic data volumes to identify and map subsurface geological features, such as horizons, faults, salt bodies, and stratigraphic units, thereby constructing structural and stratigraphic models of the subsurface.[60] This process typically begins with visual examination of post-stack migrated sections or volumes, where interpreters manually or semi-automatically pick continuous reflectors representing stratigraphic interfaces, calibrated against well logs and core data for depth conversion and velocity model refinement.[61] Quantitative aspects incorporate velocity analysis and depth migration to resolve imaging ambiguities, with techniques like horizon autotracking using pattern recognition algorithms to handle complex geometries in 3D datasets covering areas up to thousands of square kilometers.[62] Attribute analysis complements interpretation by deriving quantitative measures from seismic traces to highlight subtle geological variations not apparent in raw amplitude data, enabling enhanced detection of reservoir heterogeneities, fluid contacts, and fracture networks.[63] Common attributes include instantaneous amplitude, which quantifies reflection strength and aids in delineating gross rock volume; phase, which reveals timing shifts indicative of thin-bed tuning effects; and frequency, which correlates with lithology changes due to attenuation in porous media.[64] Structural attributes such as coherence measure lateral continuity to delineate faults, with semblance-based algorithms computing values between 0 and 1 across trace ensembles, where low values (<0.5) signal discontinuities; dip and azimuth attributes estimate reflector orientation, facilitating curvature analysis for anticline trap identification.[65] Stratigraphic and amplitude attributes, including root-mean-square (RMS) amplitude and average energy, support reservoir characterization by correlating high 
amplitudes with hydrocarbon saturation via bright-spot anomalies, though calibration with rock physics models is essential to distinguish gas sands (acoustic impedance contrasts up to 20-30% lower than brine-filled equivalents) from lithologic tuning.[66] Advanced applications employ multi-attribute fusion, such as self-organizing maps on 10-20 attributes to cluster seismic facies below tuning thickness (e.g., resolving beds <1/4 wavelength), improving prediction accuracy for net pay thickness by 15-25% in clastic reservoirs when integrated with machine learning classifiers trained on well control.[61] Quantitative interpretation extends this via pre-stack AVO (amplitude versus offset) analysis, where intercept-gradient decomposition per Zoeppritz equations classifies Class III sands (negative AVO gradients > -0.1) indicative of gas, and post-stack acoustic impedance inversion, which reconstructs P-impedance volumes (in g/cm³·km/s) from band-limited data using sparse-spike algorithms like model-based inversion, achieving resolutions down to 10-20 m in 2-4 kHz bandwidth surveys.[67] These methods demand rigorous uncertainty quantification, as amplitude distortions from multiples or anisotropic effects can bias attribute reliability by up to 10-15%, necessitating cross-validation with borehole data.[68]
Noise Sources and Mitigation
Common Noise Types and Artifacts
Seismic reflection data are contaminated by various noise types that degrade signal quality and complicate interpretation. Noise is classified into coherent noise, which displays organized spatial or temporal patterns, and random noise, characterized by uncorrelated fluctuations. Coherent noise often arises from wave propagation effects or acquisition geometry, while random noise stems from environmental or instrumental sources.[69] Ground roll represents a primary coherent noise in land-based surveys, manifesting as low-velocity Rayleigh surface waves with velocities typically ranging from 100 to 300 m/s, slower than primary P-waves (2000–6000 m/s). These waves generate high-amplitude, dispersive arrivals that mask reflections, particularly at low frequencies below 20–30 Hz, and their energy decays slowly with distance due to minimal attenuation near the surface.[70][71] Multiples constitute another prevalent coherent noise, resulting from repeated reflections off strong impedance contrasts such as the sea surface or shallow layers. Short-path multiples, including water-layer reverberations in marine data, produce periodic echoes with time intervals determined by twice the layer thickness divided by the propagation velocity; for example, a 10 m water depth yields a period of roughly 13 ms at 1500 m/s velocity. Long-path multiples involve deeper subsurface bounces and can mimic primary events, complicating velocity analysis.[69][72] Air waves, direct acoustic waves traveling through the air rather than the subsurface, appear as low-velocity (typically 340 m/s) linear events in marine or land-airgun data, often overlapping near-offset traces and attenuating less than ground waves due to lower coupling. Guided waves, trapped in low-velocity layers like weathered zones, propagate as dispersive modes with characteristic velocities and frequencies, generating tube-wave-like patterns.
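The reverberation-period relation above is easy to check numerically; for a 10 m water column at 1500 m/s the two-way period comes out to about 13 ms. A one-line sketch:

```python
def multiple_period_s(layer_thickness_m, velocity_m_s):
    """Two-way reverberation period of a water-layer multiple: 2 * h / v."""
    return 2 * layer_thickness_m / velocity_m_s

# 10 m water column, 1500 m/s water velocity (values from the text)
period = multiple_period_s(10.0, 1500.0)
print(f"{period * 1e3:.1f} ms")  # prints "13.3 ms"
```

Because this period is so short in shallow water, the reverberations ring within the seismic bandwidth and appear as near-periodic notches, which is what predictive deconvolution exploits to suppress them.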
Side-scattered energy from irregularities produces localized coherent arrivals, while refracted head waves from high-velocity interfaces contribute linear, low-amplitude noise.[73][74] Random noise includes cultural sources such as traffic or machinery vibrations, wind-induced geophone motion, and instrumental errors, lacking predictable patterns but elevating overall data variance. In marine environments, swell noise from boat motion introduces low-frequency coherent swells, while seismic interference from nearby surveys creates overlapping shot-like artifacts. Acquisition artifacts, like spatial aliasing from undersampled arrays, manifest as high-wavenumber distortions, and footprint effects from periodic sampling grids produce grid-aligned amplitude modulations in stacked sections.[75][76][72]
Strategies for Noise Reduction
In reflection seismology, noise reduction strategies primarily target the enhancement of signal-to-noise ratio (SNR) through acquisition design and processing algorithms that exploit differences in signal and noise characteristics, such as coherence, velocity, and predictability. Acquisition-phase measures include deploying linear arrays of sources and receivers to achieve directional filtering, which attenuates surface waves like ground roll in land surveys by constructive interference of primary reflections and destructive interference of low-velocity noise.[77] In marine environments, tuned airgun arrays suppress bubble-pulse noise by phasing multiple airgun firings to minimize low-frequency oscillations. These array designs can improve initial SNR by 10-20 dB depending on array length and geometry.[78] Post-acquisition processing begins with domain transformations to separate signal from noise. Frequency-wavenumber (f-k) filtering transforms data into the f-k domain, where coherent noise with distinct apparent velocities (e.g., ground roll at low wavenumbers) is rejected via velocity-specific passbands, preserving reflections with higher velocities; this method effectively removes linear coherent events but risks signal leakage if velocity overlaps occur.[79] Tau-p (slant stack) transforms similarly decompose data into slowness (p) and intercept time (tau) domains, enabling adaptive rejection of hyperbolic source-generated noise, as demonstrated in shallow seismic surveys where it separated reflections from backscattered surface waves.[80] Predictive deconvolution targets short-period multiples by estimating the seismic wavelet from autocorrelation and subtracting periodic predictions, compressing the source wavelet and attenuating reverberations; applied pre-stack, it can reduce water-bottom multiples by factors of 5-10 in amplitude in areas with strong impedance contrasts.[81] Common midpoint (CMP) stacking serves as a cornerstone for random noise 
suppression, involving normal moveout (NMO) correction of traces from the same subsurface reflection point followed by summation, which diminishes incoherent noise by a factor of 1/√N (where N is the fold of coverage) while reinforcing coherent primaries; for typical 48-fold stacks, this yields roughly a 7-fold SNR improvement.[82] For coherent noise like multiples, Radon transforms (linear, parabolic, or hyperbolic) model events as integrals over parameterized curves, allowing subtraction of predicted noise trajectories; parabolic Radon variants excel in pre-stack multiple attenuation for curved reflectors, outperforming f-k filters in complex geology by preserving amplitude fidelity.[83] Post-stack filtering, such as dip or structure-oriented filters, further mitigates residual artifacts by aligning with local reflection dip, reducing random noise without distorting stratigraphy.[76] Emerging techniques leverage machine learning for adaptive denoising, such as convolutional neural networks (CNNs) trained on clean-noisy trace pairs to suppress random noise while retaining edges and faults; denoising CNNs (DnCNNs) applied to common-reflection-point gathers have demonstrated 5-15 dB SNR gains in field data, with self-supervised variants removing the need for clean labels.[84] These methods, however, demand large computational resources and risk over-smoothing if training data inadequately represent geologic variability, underscoring the need for hybrid approaches combining physics-based transforms with data-driven refinement.[85] Overall, effective noise reduction sequences iteratively apply these tools, validated by pre- and post-process SNR metrics, to ensure primary reflections dominate for subsequent migration and interpretation.[77]
Applications
Hydrocarbon and Energy Resource Exploration
Reflection seismology serves as the foundational geophysical technique for hydrocarbon exploration, allowing geoscientists to image subsurface rock layers and identify structural and stratigraphic traps capable of accumulating oil and natural gas.[2] Acoustic waves are generated via sources such as vibroseis trucks on land or airgun arrays in marine settings, propagate downward, reflect at acoustic impedance contrasts between rock types, and are recorded by geophones or hydrophones to construct two-way travel time profiles that reveal potential reservoir geometries.[86] This method revolutionized petroleum prospecting in the 1920s, transitioning from refraction-based salt dome detection to reflection profiling, which enabled broader structural mapping and led to numerous field discoveries by the 1930s.[36] In practice, seismic surveys delineate anticlinal folds, fault blocks, and pinch-outs where hydrocarbons may migrate and seal, with data processing correcting for velocity variations and migrating reflections to their true subsurface positions for accurate depth imaging.[28] The adoption of 3D seismic arrays in the 1980s onward provided volumetric datasets, enhancing resolution to detect subtle traps and reservoir heterogeneities, thereby reducing exploratory drilling risks.[87] Field success rates for wildcat wells improved from approximately 65% in 1985 to 75% by 1994 following widespread 3D implementation, as these surveys minimized dry holes by better predicting commercial accumulations.[87] Overall, seismic methods have halved dry hole drilling incidences compared to pre-seismic eras, optimizing capital allocation in frontier basins like the Gulf of Mexico and Permian Basin.[88] Beyond initial exploration, reflection seismology supports reservoir characterization through amplitude anomalies indicating fluid content—such as bright spots for gas—and integration with well logs for porosity and permeability estimates, guiding development drilling and enhanced 
recovery strategies.[89] In marine environments, where over 30% of global oil production occurs, towed streamer acquisitions with airgun sources provide dense coverage for deepwater plays, as demonstrated in North Sea developments starting in the 1960s.[90] For unconventional resources like shale gas, high-resolution surveys map fracture networks and sweet spots, contributing to production booms in basins such as the Marcellus Shale since the 2000s.[91] While primarily hydrocarbon-focused, the technique extends to other energy resources, including coal seam delineation via shallow reflections, though with adaptations for lower velocities and thinner targets.[92] The method's efficacy stems from acoustic impedance contrasts at reservoir-seal interfaces, where hydrocarbons reduce interval velocities, producing diagnostic time lags and amplitude variations verifiable against direct well control, so that predictions can be tested against drilling outcomes.[25] Despite successes, interpretive pitfalls like multiples or anisotropic effects necessitate validation, underscoring seismic's role as a probabilistic tool that de-risks but does not guarantee discoveries.[93]
Geothermal, Mineral, and Non-Hydrocarbon Uses
Reflection seismology aids geothermal exploration by imaging faults, fractures, and reservoir structures that control heat and fluid flow, enabling identification of viable drilling targets prior to invasive operations. In regions like Xian County, North China, seismic surveys have mapped geological features linked to geothermal potential, revealing structural traps and permeable zones at depths suitable for resource extraction.[94] For enhanced geothermal systems targeting hot dry rock, 3D reflection surveys using vibroseis sources have delineated crustal-scale features in research boreholes, such as those exceeding 5 km depth, with data acquired in 2019 demonstrating resolution of subtle reflectors indicative of fracture networks.[95] High-resolution profiling further refines site evaluation by highlighting drilling hazards like fault intersections, as applied in 2023 studies to optimize penetration into fractured reservoirs.[96] In hardrock mineral exploration, the technique images deep ore deposits and host structures via reflections from acoustic impedance contrasts between mineralized zones and surrounding lithologies, adapting petroleum-era processing to rugged terrains. 
Sparse 3D surveys have targeted iron oxide-copper-gold deposits, achieving detection of features at depths over 2 km through optimized source-receiver geometries that mitigate near-surface scattering.[97] Numerical modeling validates acquisition parameters for hardrock settings, emphasizing high-fold coverage to resolve thin, high-contrast layers akin to those in porphyry copper systems, with applications dating to campaigns in the early 2010s that informed drill targeting.[98] Such surveys provide structural frameworks for deposit delineation, reducing exploratory drilling by prioritizing intersections of faults and intrusions that localize mineralization.[99] Non-hydrocarbon applications extend to groundwater delineation, CO2 storage monitoring, and engineering site assessments, where reflection data resolve shallow-to-intermediate depth interfaces not easily captured by refraction alone. Offshore reflection profiling characterizes aquifer extents and confining layers for coastal groundwater studies, as in 2020 Mediterranean surveys revealing freshwater-saline interfaces via velocity pull-up effects.[100] In carbon sequestration, pre-injection 3D reflection volumes assess caprock integrity and fault seals, while time-lapse surveys track plume evolution, with modeling at sites like Atzbach-Schwanenstadt in 2012 confirming sensitivity to saturation changes exceeding 10%.[101] For civil engineering, near-surface reflection detects bedrock topography, karst voids, and landslide planes, supporting infrastructure planning; for instance, mini-vibroseis lines have mapped failure surfaces in slopes with resolutions under 5 m, as used in Asian site investigations since the 2000s.[102][103]
Academic and Crustal Structure Studies
Reflection seismology enables high-resolution imaging of the continental crust, typically penetrating 30–50 km depth, far beyond routine hydrocarbon targets, to map internal layering, faults, and the Moho discontinuity. Adapted from exploration techniques in the 1970s, deep reflection profiling uses vibroseis or explosives with dense geophone arrays and advanced processing to resolve subhorizontal reflectors indicative of compositional boundaries or deformation fabrics. Unlike refraction seismology, which averages velocities vertically and misses thin low-velocity zones, reflection methods highlight lateral heterogeneity and anisotropic features, providing causal insights into crustal evolution through empirical velocity contrasts and amplitude variations.[104][105][106] The Consortium for Continental Reflection Profiling (COCORP), launched in 1975 by the National Science Foundation, marked a pivotal academic initiative, conducting over 50 profiles across the United States to systematically probe the lithosphere. COCORP data revealed a reflective lower crust dominated by subhorizontal events, often 10–20 km thick, attributed to aligned minerals, fluid-filled cracks, or mylonitic shear zones from ancient tectonics, contrasting with the more transparent upper crust. Profiles in the southern Appalachians exposed westward-dipping reflections linked to Paleozoic thrusting, while Kansas surveys uncovered Proterozoic mafic intrusions at 3–5 km depth beneath Phanerozoic sediments, corroborated by velocity modeling and gravity data. In the Southern Oklahoma Aulacogen, reflections delineated failed rift structures extending to 20 km, illustrating multi-stage rifting and inversion.[107][108][109][110] These studies have informed global tectonic models, showing crustal fabrics like cratonic diffractions from scattered basement blocks versus orogenic dipping reflectors from collisional imbrication. 
For instance, Basin and Range profiles displayed listric normal faults soling into ductile lower crust, supporting extension models with measured heave of 50–100 km. Internationally, analogous profiles in the Baltic Sea and Taiwan have imaged subduction relics and collision zones, with lower crustal reflectivity linked to ductile flow rather than magmatic underplating in some cases. Recent academic efforts, such as using local earthquakes for reflection stacking, achieve similar depths with lower costs in active margins, as demonstrated in Japanese arcs where PmP phases map crustal thickness variations of 5–10 km. COCORP's legacy persists in programs like Earth's Crust, emphasizing integration with refraction for robust velocity-depth models, though interpretations remain non-unique without petrophysical constraints.[111][112][113][114]
Technological Advancements
Evolution to 3D and 4D Surveys
The transition from two-dimensional (2D) to three-dimensional (3D) seismic surveys addressed inherent limitations in 2D data, such as ambiguities in interpreting dipping reflectors and lateral discontinuities, which often led to erroneous structural models in complex subsurface environments.[115] Early experiments in 3D acquisition began in the 1960s, with the first cross-spread configuration tested in 1964 by Esso (now ExxonMobil) to capture volumetric data beyond linear profiles.[116] Exxon conducted one of the initial 3D surveys over the Friendswood field near Houston, Texas, in 1967, demonstrating improved resolution of salt domes and faults compared to 2D lines.[117] By the mid-1970s, full-scale 3D surveys emerged, including the first land-based effort in 1975 by Nederlandse Aardolie Maatschappij (NAM) and Shell in the Netherlands, covering complex gas fields and revealing structural details unattainable with 2D methods.[118] Widespread adoption of 3D surveys accelerated in the 1980s, driven by advancements in digital recording and computing power that enabled processing of vast datasets—often exceeding gigabytes per survey—allowing for migration algorithms to produce isotropic images of subsurface volumes.[119] These surveys typically employed dense geophone arrays in areal grids, with source-receiver offsets optimized for illumination of targets up to several kilometers deep, reducing drilling risks by 20-50% in mature fields through precise delineation of reservoirs and hazards.[28] By the late 1980s, 3D became standard for exploration in geologically challenging areas like salt provinces, where 2D interpretations frequently underestimated trap volumes.[43] Four-dimensional (4D) or time-lapse seismic surveys extended 3D methodology by repeating acquisitions over the same area at intervals of months to years, enabling detection of temporal changes in acoustic properties due to production-induced alterations like fluid saturation shifts or pressure 
depletion.[120] The concept was pioneered by ARCO (later acquired by BP) in the early 1980s, with initial applications monitoring steam injection in heavy oil reservoirs, where baseline 3D surveys were differenced against monitors to quantify sweep efficiency.[121] Commercial viability grew in the 1990s as high-repeatability acquisition techniques—such as permanent seabed sensors and source wavelet stabilization—minimized non-repeatability noise below 5%, allowing quantitative inversion for reservoir properties like porosity changes exceeding 10%.[122] In fields like the North Sea's Troll or Valhall, 4D data have guided infill drilling by mapping bypassed hydrocarbons, with empirical success rates improving recovery by up to 15% through causal links between seismic anomalies and production logs.[123]
Integration of Machine Learning and Inversion Methods
Machine learning, particularly deep learning architectures such as convolutional neural networks and transformers, has been integrated with inversion methods in reflection seismology to enhance subsurface imaging by addressing the nonlinear ill-posedness and computational demands of techniques like full waveform inversion (FWI). Traditional FWI relies on iterative optimization to minimize data residuals but often encounters local minima and cycle-skipping due to low-frequency limitations in seismic data. Hybrid approaches combine data-driven ML models, which learn direct mappings from observed seismic traces to subsurface parameters like velocity or reflectivity, with physics-informed constraints derived from the acoustic wave equation, thereby improving convergence and resolution without requiring exhaustive low-frequency components.[124][125] Physics-informed machine learning (PIML) exemplifies this integration by embedding forward wave propagation physics into neural networks, constraining the solution space and enabling inversion with datasets orders of magnitude smaller than those needed for purely supervised learning—often succeeding on local subsets of data to invert over 90% of test cases. For example, PIML variants outperform classical adjoint-state FWI in resisting local minima, achieving forward and inverse modeling speeds exceeding 1000 times faster post-training, though initial training remains resource-intensive. 
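The iterative residual minimization at the heart of FWI can be illustrated with a deliberately simplified toy: recovering a single layer slowness from two-way reflection times by gradient descent on a least-squares misfit. The depths, noise level, and step size below are invented for the sketch; real FWI minimizes a full waveform misfit through wave-equation modeling and is far more nonlinear.

```python
import numpy as np

# Toy residual minimization in the spirit of FWI's iterative optimization:
# recover a layer slowness s from two-way times t = 2 * z * s.

rng = np.random.default_rng(0)
z = np.array([500.0, 1000.0, 1500.0, 2000.0])        # reflector depths, m (assumed)
s_true = 1.0 / 2500.0                                 # true slowness, s/m
t_obs = 2.0 * z * s_true + rng.normal(0, 1e-4, z.size)  # noisy "observed" times

s = 1.0 / 4000.0            # starting model, deliberately wrong
lr = 1e-9                   # step size chosen for this problem's scale
for _ in range(5000):
    residual = 2.0 * z * s - t_obs          # data misfit per reflector
    grad = np.sum(2.0 * residual * 2.0 * z) # gradient of sum of squared misfits
    s -= lr * grad

print(f"recovered velocity ~ {1.0 / s:.0f} m/s")  # close to 2500 m/s
```

Because this toy misfit is quadratic in s, descent converges from any starting model; the cycle-skipping and local-minima problems discussed above arise precisely because the true waveform misfit is not quadratic in the model parameters.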
In reflection seismology, these methods facilitate high-resolution reflectivity inversion directly from prestack data, as demonstrated by Transformer-CNN hybrids using adaptive spatial feature fusion, which map noisy seismic inputs (SNR 5-15 dB) to reflection models with superior structural fidelity compared to least-squares reverse time migration, particularly in high-amplitude zones of synthetic 3D models like Overthrust.[125][126] Recent advancements include transfer learning-accelerated FWI, where pretrained neural networks provide warm-start initial models to reduce iterations in complex velocity updates (2025), and self-supervised multiparameter inversion that jointly estimates velocity and density while mitigating overfitting through unlabeled data augmentation (2024). Siamese network-based FWI further refines waveform matching by embedding observed and simulated data for comparative learning, enhancing robustness to geological complexities like faults and salt bodies (2024). These integrations extend to prestack inversion, where generative adversarial networks or Boltzmann machines estimate elastic properties independent of low-frequency starting models, yielding geologically plausible outputs for reflection-based hydrocarbon delineation. Despite benefits in efficiency and detail recovery, challenges include sensitivity to training data quality and limited generalization across diverse lithologies, prompting ongoing hybrid refinements.[127][128][129]
Limitations and Challenges
Technical and Interpretive Pitfalls
Technical pitfalls in reflection seismology frequently originate in data acquisition and processing, where multiples—seismic events involving multiple reflections—generate coherent noise that violates single-scattering assumptions, leading to false subsurface images. Internal multiples, in particular, arise from shallow interfaces and can masquerade as primary reflections, complicating imaging in layered media.[130] Ghost reflections in marine surveys, caused by the free-surface air-water interface, produce spectral notches that attenuate high frequencies, reducing vertical resolution unless mitigated through deghosting techniques.[131] Inaccurate velocity models during processing distort depth conversions and migration results, yielding mispositioned reflectors and structural inaccuracies, especially in anisotropic media where vertical transverse isotropy exacerbates imaging errors.[132] Near-surface processing challenges, such as residual static corrections and inadequate migration, introduce artifacts in shallow data, propagating errors into deeper interpretations.[133] Crooked-line acquisition geometries, common in rugged terrains, violate straight-line assumptions, fostering boundary artifacts and suboptimal imaging without specialized 3D joint processing.[134] Interpretive pitfalls often involve subjective correlation and modeling errors, including miscorrelation of reflections across faults due to lateral velocity variations or fault shadows, which can invert apparent dips and mislead structural mapping.[135] Overreliance on simplistic geologic models ignores complexity, such as interference from multiples or resolution limits, leading to erroneous horizon picks; experiments reveal interpreter biases influenced by heuristics and image quality, with fault throw uncertainties averaging 10% and heave 13-23% across repeated analyses.[136][137] Amplitude interpretation hazards include mistaking tuning effects for thickness variations or ignoring polarity 
reversals at interfaces with opposite reflection coefficients, where thin beds amplify perceived anomalies without corresponding lithologic changes.[138] Limited seismic resolution further compounds fault detection errors, as subtle discontinuities may be overlooked or exaggerated, particularly in low-quality images where quantitative analysis shows heightened uncertainty in throw and displacement.[139] Bayesian approaches to depth prediction highlight additional risks from unaccounted pre-stack deghosting failures, amplifying velocity model errors in time-depth conversions.[140]
Economic and Operational Constraints
The acquisition of reflection seismic data imposes significant economic burdens, primarily due to the high costs of equipment, personnel, and logistics. For 3D marine surveys, acquisition expenses typically range from $6,000 to $7,500 per square kilometer, influenced by vessel operations and streamer deployment, as reported in industry estimates from 2021.[141] On land, costs escalate to approximately $30,000–$40,000 per square kilometer (equivalent to $75,000–$100,000 per square mile), driven by challenges in deploying geophones and sources across varied terrains.[142][143] These figures exclude processing, which can add 20–50% to total expenses through computationally intensive migration and inversion techniques, further straining budgets in low-margin exploration projects.[144] Operational constraints exacerbate economic pressures by limiting survey efficiency and scalability. In land environments, difficult terrain such as mountains, swamps, or deserts slows vibrator truck and receiver deployment, often requiring weeks for setups that marine surveys complete in days, while surface noise from cultural sources complicates data quality and necessitates denser arrays that inflate costs.[145] Marine operations face high daily vessel rates averaging $250,000 in 2023, coupled with weather downtime and streamer feathering issues that reduce productive shooting time to 60–80% of vessel uptime.[146] Health, safety, and access permitting further constrain designs, as regulatory approvals for source arrays or line clearances can delay projects by months and add 10–20% to budgets, particularly in remote or populated areas.[147] These factors collectively restrict reflection seismology to targets justifying multimillion-dollar investments, often prioritizing proven basins over frontier exploration unless subsidized by joint ventures or government incentives.
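The per-area figures above translate into survey budgets with simple arithmetic. A back-of-envelope sketch (the function, the 500 km² area, and the mid-range 35% processing uplift are illustrative assumptions, not industry quotes):

```python
# Rough survey budget from a per-square-kilometer acquisition rate plus a
# processing uplift (the passage cites a 20-50% range for processing).

def survey_cost_usd(area_km2: float, rate_per_km2: float,
                    processing_uplift: float = 0.35) -> float:
    """Acquisition cost scaled by area, plus a fractional processing add-on."""
    acquisition = area_km2 * rate_per_km2
    return acquisition * (1.0 + processing_uplift)

# 500 km2 of marine 3D at $7,000/km2 with a 35% processing add-on:
print(f"${survey_cost_usd(500, 7_000):,.0f}")  # $4,725,000
```

The same arithmetic applied at the quoted land rates ($30,000–$40,000/km²) makes clear why land 3D coverage of comparable area quickly reaches tens of millions of dollars.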
Budget limitations historically led to sparse 2D grids in the mid-20th century, but even modern 3D surveys demand trade-offs in fold coverage or bin size to balance resolution against affordability, as evidenced by efforts to develop lower-cost nodal systems for mineral exploration that still exceed $10,000 per square kilometer in rugged settings.[148][149]
Environmental Considerations
Impacts on Terrestrial Ecosystems
Land-based reflection seismology employs vibroseis trucks and heavy vehicle traffic to generate seismic waves, resulting in primary environmental impacts through soil compaction, vegetation compression, and trail formation rather than direct vibrational harm to biota. These activities disturb terrestrial ecosystems by altering microtopography and permafrost stability, particularly in sensitive regions like Arctic tundra. Empirical studies indicate that while short-term disruptions occur, long-term effects vary by terrain and vegetation type, with recovery often incomplete in ice-rich soils.[150] In Arctic coastal plain tundra, 2D seismic surveys conducted in 1984–1985 compressed the vegetation mat by up to 20 cm, deepening the active layer by 10–15 cm and initiating thermokarst subsidence in areas with high ice content (up to 50% volumetric ice). Moist sedge–willow communities, comprising 37% of surveyed areas, exhibited persistent damage, with 3% of trails still visible in 2018—33 years post-disturbance—due to slowed recovery in ice-rich permafrost. Dry and wet vegetation types recovered faster, but overall, 12% of the disturbed area developed irreversible thermokarst features, expanding beyond original trails via altered hydrology including ponding and channel incision. Proposed 3D surveys could generate 63,000 km of trails, potentially affecting 122 km² with medium-to-high disturbance, amplifying risks under climate warming.[150][151] Soil compaction from vibroseis trucks reduces infiltration and root penetration, with vehicle tracks persisting for decades in boreal forests and tundra, facilitating erosion and invasive species ingress. In upland tundra, post-seismic plant communities shifted directionally 20–30 years after exploration, reflecting altered species composition. 
However, peer-reviewed assessments note negligible cumulative soil impacts in some arid or non-permafrost settings, where natural recovery predominates absent confounding factors like overuse.[152][153] Wildlife responses to land seismic operations show limited empirical evidence of population-level declines, with habitat fragmentation from trails posing greater threats than noise or vibrations. A 2012 study on giant and short-nosed kangaroo rats in California found no significant changes in burrow densities or population metrics during vibroseis surveys, attributing resilience to fossorial habits. In Arctic contexts, trail compression disrupts insect, small mammal, and bird habitats by reducing vegetation diversity, indirectly affecting herbivores such as caribou during calving; yet, direct displacement data remain sparse, with maternal polar bear dens potentially vulnerable to undetected vehicle incursions. Vibrational noise studies suggest possible behavioral alterations in soil fauna, but vibroseis-specific field trials are scarce, precluding firm causal links.[154][150]
Marine Surveys: Empirical Effects on Wildlife
Marine seismic surveys utilize airgun arrays to emit pulsed sounds with source levels typically ranging from 230 to 250 dB re 1 μPa at 1 m, propagating through water and eliciting responses in nearby wildlife. Empirical observations from field studies indicate primarily short-term behavioral disruptions rather than permanent injury or mortality in free-ranging populations.[155][156] In marine mammals, cetaceans such as harbor porpoises (Phocoena phocoena) demonstrate avoidance by increasing densities at distances greater than several kilometers from arrays up to 470 in³, with altered click intervals observed during exposure in the North Sea.[155] Humpback whales (Megaptera novaeangliae) off eastern Australia reduced dive durations to 45-60 seconds and increased blow rates by 20% within 3 km of a 3,130 in³ array at received levels exceeding 140 dB re 1 μPa² s⁻¹, with elevated blow rates persisting post-survey.[155] Sperm whales (Physeter macrocephalus) in the Gulf of Mexico showed reduced foraging but no strong avoidance turns during surveys with 1,680-3,090 in³ arrays.[155] Bowhead whales (Balaena mysticetus) decreased call rates near 3,147 in³ operations in the Alaskan Beaufort Sea.[155] Gray whales (Eschrichtius robustus) exhibited no changes in abundance or feeding near Sakhalin Island surveys.[155] These responses often correlate with noise levels but are confounded by factors like prey distribution and sea state, limiting causal attribution.[155] For fish, Atlantic cod (Gadus morhua) in Norwegian waters displayed no displacement from spawning grounds during 40 in³ surveys but showed disrupted diurnal feeding patterns and bradycardia indicative of stress.[155] Field evidence for hearing damage in wild fish remains scarce, with laboratory studies documenting temporary threshold shifts (TTS) that recover, though standardized exposure thresholds are undefined.[156] No significant impacts on demersal fish assemblages or catch rates were observed post-survey.[155] 
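The gap between the quoted source levels (230–250 dB re 1 μPa at 1 m) and received levels measured kilometers away can be approximated with a standard first-order spherical-spreading model, RL = SL − 20·log₁₀(r). This sketch is a textbook transmission-loss approximation, not a calculation from the cited studies; real propagation also involves cylindrical spreading in shallow water, absorption, and bathymetric effects.

```python
import math

# First-order spherical-spreading estimate of received sound level:
# RL = SL - 20 * log10(range), with SL referenced to 1 m from the source.
# Ignores absorption, refraction, and multipath, so it is only indicative.

def received_level_db(source_level_db: float, range_m: float) -> float:
    """Received level (dB re 1 uPa) at range_m metres, spherical spreading."""
    return source_level_db - 20.0 * math.log10(range_m)

# A 240 dB re 1 uPa @ 1 m source drops to 160 dB at 10 km under this model:
print(round(received_level_db(240.0, 10_000.0)))  # 160
```

Under this simple model a drop to the ~140 dB exposure levels mentioned above would require ranges of tens of kilometers, which is one reason empirical studies report behavioral responses at multi-kilometer distances.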
Invertebrates experience more pronounced physiological effects at close range; spiny lobsters (Jasus edwardsii) off Tasmania sustained statocyst damage and impaired righting ability for up to 365 days after 150 in³ exposure, with juveniles showing prolonged intermoult periods.[155] Zooplankton abundances declined by at least 50% with increased mortality following similar exposures in Australian waters.[155] Snow crabs (Chionoecetes opilio) in Newfoundland showed no correlation between catch rates and 4,880 in³ surveys.[155] Long-term population-level effects remain empirically unsupported, with most disruptions reversible upon cessation of operations and no verified links to declines in abundance or reproduction across monitored species.[155][156] Reviews emphasize knowledge gaps, particularly for cumulative or synergistic impacts, underscoring the need for controlled, long-duration studies beyond correlative field data.[157]
Empirical Evidence, Mitigation, and Debates
Empirical studies document short-term behavioral responses in marine mammals to seismic airgun noise, such as avoidance by harbor porpoises at distances exceeding 8-12 km and reduced vocalizations in bowhead and humpback whales.[155] Physiological effects include temporary hearing threshold shifts in porpoises and elevated stress indicators, like a 20% increase in humpback whale blow rates.[155] Invertebrates exhibit persistent damage, with spiny lobsters showing statocyst impairment and reduced righting ability lasting up to 365 days post-exposure.[158] Zooplankton mortality reaches ≥50% at ranges of 409-808 m from airguns, potentially disrupting food webs.[159] Fish responses vary, with some species like Atlantic cod showing no spawning ground displacement but potential energy budget disruptions from altered foraging.[155] Mitigation protocols include gradual ramp-up of airgun arrays to allow animals to vacate zones and deployment of protected species observers (PSOs) to enforce shutdowns if marine mammals enter exclusion radii, typically 500-1,500 m depending on species sensitivity.[160] Operations resume after 30 minutes without detections, aiming to prevent injury-level exposures.[160] These measures, mandated by regulators like the U.S. 
Bureau of Ocean Energy Management (BOEM), prioritize real-time monitoring over predictive modeling.[160] Debates center on effect magnitudes and attribution, with peer-reviewed syntheses highlighting context-dependent responses confounded by factors like prey distribution, while industry analyses cite BOEM conclusions of no measurable population-level harms or injuries to mammals and fish.[155][160] Critics argue for understudied cumulative and long-term impacts, given scarce data on recovery and variability across taxa, whereas proponents emphasize absence of strandings linked to surveys and sound levels below permanent injury thresholds.[161][160] Empirical gaps persist in standardizing exposure metrics and scaling lab findings to wild populations, fueling regulatory tensions between conservation caution and operational feasibility.[156]
References
- https://wiki.seg.org/wiki/Reflection_coefficient
- https://wiki.seg.org/wiki/Mathematical_foundation_of_elastic_wave_propagation
- https://wiki.seg.org/wiki/Introduction_to_noise_and_multiple_attenuation
- https://wiki.seg.org/wiki/Wave_types
- https://wiki.seg.org/wiki/Coherent_linear_noise
