Remote sensing

Remote sensing is the acquisition of information about an object or phenomenon without making physical contact with the object, in contrast to in situ or on-site observation. The term is applied especially to acquiring information about Earth and other planets. Remote sensing is used in numerous fields, including geophysics, geography, land surveying and most Earth science disciplines (e.g. exploration geophysics, hydrology, ecology, meteorology, oceanography, glaciology, geology). It also has military, intelligence, commercial, economic, planning, and humanitarian applications, among others.
In current usage, the term remote sensing generally refers to the use of satellite- or aircraft-based sensor technologies to detect and classify objects on Earth, covering the surface, the atmosphere and the oceans, based on propagated signals (e.g. electromagnetic radiation). It may be split into "active" remote sensing (when a signal is emitted by a sensor mounted on a satellite or aircraft and its reflection from the object is detected by the sensor) and "passive" remote sensing (when the reflection of sunlight is detected by the sensor).[1][2][3][4][5]
Overview
Remote sensing can be divided into two types of methods: passive remote sensing and active remote sensing. Passive sensors gather radiation that is emitted or reflected by the object or surrounding areas. Reflected sunlight is the most common source of radiation measured by passive sensors. Examples of passive remote sensors include film photography, infrared sensors, charge-coupled devices, and radiometers. Active collection, on the other hand, emits energy in order to scan objects and areas, whereupon a sensor detects and measures the radiation that is reflected or backscattered from the target. RADAR and LiDAR are examples of active remote sensing where the time delay between emission and return is measured, establishing the location, speed and direction of an object (see the sketch below).
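As a minimal illustration of the active-sensing principle just described, the following Python sketch converts a measured round-trip pulse delay into a range; the delay value is hypothetical.

```python
# Range from round-trip pulse delay, as used by RADAR and LiDAR.
C = 299_792_458.0  # speed of light in m/s

def range_from_delay(round_trip_s: float, wave_speed: float = C) -> float:
    """One-way range = wave speed x half the round-trip travel time."""
    return wave_speed * round_trip_s / 2.0

# A pulse returning after 2 microseconds implies a target ~300 m away.
print(range_from_delay(2e-6))  # ~299.79 m
```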

Remote sensing makes it possible to collect data on dangerous or inaccessible areas. Remote sensing applications include monitoring deforestation in areas such as the Amazon Basin, glacial features in Arctic and Antarctic regions, and depth sounding of coastal and ocean waters. Military collection during the Cold War made use of stand-off collection of data about dangerous border areas. Remote sensing also replaces costly and slow data collection on the ground, ensuring in the process that areas or objects are not disturbed.
Orbital platforms collect and transmit data from different parts of the electromagnetic spectrum, which, in conjunction with larger-scale aerial or ground-based sensing and analysis, provides researchers with enough information to monitor trends such as El Niño and other natural long- and short-term phenomena. Other uses include different areas of the earth sciences such as natural resource management, agricultural fields such as land usage and conservation,[6][7] greenhouse gas monitoring,[8] oil spill detection and monitoring,[9] and national security and overhead, ground-based and stand-off collection on border areas.[10]
Types of data acquisition techniques
The basis for multispectral collection and analysis is that examined areas or objects reflect or emit radiation that stands out from surrounding areas. For a summary of major remote sensing satellite systems, see the overview table.
Applications of remote sensing
Conventional radar is mostly associated with air traffic control, early warning, and certain large-scale meteorological data. Doppler radar is used by local law enforcement to monitor speed limits and in enhanced meteorological collection such as wind speed and direction within weather systems, in addition to precipitation location and intensity. Other types of active collection include sensing of plasmas in the ionosphere. Interferometric synthetic aperture radar is used to produce precise digital elevation models of large-scale terrain (see RADARSAT, TerraSAR-X, Magellan). Laser and radar altimeters on satellites have provided a wide range of data. By measuring the bulges of water caused by gravity, they map features on the seafloor to a resolution of a mile or so. By measuring the height and wavelength of ocean waves, the altimeters measure wind speed and direction, and surface ocean currents and their directions. Ultrasound (acoustic) and radar tide gauges measure sea level, tides and wave direction at coastal and offshore sites.
Light detection and ranging (LiDAR) is used for weapon ranging and laser-illuminated homing of projectiles, and to detect and measure the concentration of various chemicals in the atmosphere, while airborne LiDAR can be used to measure the heights of objects and features on the ground more accurately than radar technology. LiDAR can be used to detect ground surface changes, typically by creating Digital Surface Models (DSMs) or Digital Elevation Models (DEMs).[11] Vegetation remote sensing is a principal application of LiDAR.[12]
The most common instruments in use are radiometers and photometers, which collect reflected and emitted radiation in a wide range of frequencies. The most prevalent of these frequencies are visible and infrared sensors, followed by microwave, gamma-ray and, rarely, ultraviolet. They may also be used to detect the emission spectra of various chemicals, providing data on chemical concentrations in the atmosphere. Radiometers are also used at night, as artificial light emissions are a key signature of human activity.[13] Applications include remote sensing of population, GDP, and damage to infrastructure from war or disasters. Radiometers and radar onboard satellites can also be used to monitor volcanic eruptions[14][15] or be interfaced with oceanographic research vessels.[16]
Spectropolarimetric imaging has been reported to be useful for target tracking purposes by researchers at the U.S. Army Research Laboratory. They determined that manmade items possess polarimetric signatures that are not found in natural objects. These conclusions were drawn from the imaging of military trucks, like the Humvee, and trailers with their acousto-optic tunable filter dual hyperspectral and spectropolarimetric VNIR Spectropolarimetric Imager.[17][18]
Stereographic pairs of aerial photographs have often been used to make topographic maps by imagery and terrain analysts in trafficability and highway departments for potential routes, in addition to modelling terrestrial habitat features.[19][20][21]
Simultaneous multi-spectral platforms such as Landsat have been in use since the early 1970s. These thematic mappers take images in multiple wavelengths and are usually found on Earth observation satellites, including (for example) the Landsat program or the IKONOS satellite. Maps of land cover and land use from thematic mapping can be used to prospect for minerals, detect or monitor land usage, invasive vegetation and deforestation, and examine the health of indigenous plants and crops (satellite crop monitoring), including entire farming regions or forests.[22] Prominent scientists using remote sensing for this purpose include Janet Franklin and Ruth DeFries. Landsat images are used by regulatory agencies such as KYDOW to indicate water quality parameters including Secchi depth, chlorophyll density, and total phosphorus content. Weather satellites are used in meteorology and climatology.
Hyperspectral imaging produces image cubes in which each pixel carries full spectral information, imaging narrow spectral bands over a contiguous spectral range (see the sketch after this paragraph). Hyperspectral imagers are used in various applications including mineralogy, biology, defence, and environmental measurements. Within the scope of the combat against desertification, remote sensing allows researchers to follow up and monitor risk areas in the long term, to determine desertification factors, to support decision-makers in defining relevant measures of environmental management, and to assess their impacts.[23] Remotely sensed multi- and hyperspectral images can be used for assessing biodiversity at different spatial scales. Since the spectral properties of different plant species are unique, it is possible to obtain information about properties related to biodiversity such as habitat heterogeneity, spectral diversity and plant functional traits.[24][25][26] Remote sensing has also been used to detect rare plants to aid in conservation efforts. Prediction, detection, and the ability to record biophysical conditions were possible at medium to very high resolutions.[27] Remote sensing is often utilized in the collection of agricultural and environmental statistics, usually combining classified satellite images with ground truth data collected on a sample selected from an area sampling frame.[28]
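A hyperspectral image cube can be pictured as a three-dimensional array indexed by row, column and band. The following Python sketch uses hypothetical random data with Hyperion-like dimensions to show the two natural slicing directions.

```python
import numpy as np

# Hypothetical 100x100-pixel cube with 220 contiguous bands (Hyperion-like).
rows, cols, bands = 100, 100, 220
cube = np.random.rand(rows, cols, bands)  # stand-in for calibrated reflectance

spectrum = cube[42, 17, :]    # full spectrum of a single pixel
band_image = cube[:, :, 100]  # single-band image across the whole scene
print(spectrum.shape, band_image.shape)  # (220,) (100, 100)
```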
Geodetic
Geodetic remote sensing can be gravimetric or geometric. Overhead gravity data collection was first used in aerial submarine detection. This data revealed minute perturbations in the Earth's gravitational field that may be used to determine changes in the mass distribution of the Earth, which in turn may be used for geophysical studies, as in GRACE. Geometric remote sensing includes position and deformation imaging using InSAR, LiDAR, etc.[29]
Acoustic and near-acoustic
Three main types of acoustic and near-acoustic remote sensing exist:
- Sonar – passive sonar, listening for the sound made by another object (a vessel, a whale etc.); active sonar, emitting pulses of sound and listening for echoes, used for detecting, ranging and measuring underwater objects and terrain.
- Seismograms taken at different locations can locate and measure earthquakes (after they occur) by comparing the relative intensity and precise timings.
- Ultrasound acoustic sensors emit high-frequency pulses and listen for echoes, used for detecting water waves and water level, as in tide gauges or for towing tanks.
To coordinate a series of large-scale observations, most sensing systems depend on the platform's location and the orientation of the sensor. High-end instruments now often use positional information from satellite navigation systems. The rotation and orientation are often provided within a degree or two by electronic compasses. Compasses can measure not just azimuth (i.e. degrees to magnetic north), but also altitude (degrees above the horizon), since the magnetic field curves into the Earth at different angles at different latitudes. More exact orientations require gyroscopic-aided orientation, periodically realigned by different methods including navigation from stars or known benchmarks.
Data characteristics
The quality of remote sensing data is characterized by its spatial, spectral, radiometric and temporal resolutions.
- Spatial resolution
- The size of a pixel that is recorded in a raster image – typically pixels may correspond to square areas ranging in side length from 1 to 1,000 metres (3.3 to 3,280.8 ft).
- Spectral resolution
- The bandwidth of the different frequency bands recorded – usually, this is related to the number of frequency bands recorded by the platform. Current Landsat collection is that of seven bands, including several in the infrared spectrum, ranging from a spectral resolution of 0.7 to 2.1 μm. The Hyperion sensor on Earth Observing-1 resolves 220 bands from 0.4 to 2.5 μm, with a spectral resolution of 0.10 to 0.11 μm per band.
- Radiometric resolution
- The number of different intensities of radiation the sensor is able to distinguish. Typically, this ranges from 8 to 14 bits, corresponding to 256 up to 16,384 intensities or "shades" of colour in each band (see the sketch after this list). It also depends on the instrument noise.
- Temporal resolution
- The frequency of flyovers by the satellite or plane; this is only relevant in time-series studies or those requiring an averaged or mosaic image, as in deforestation monitoring. It was first used by the intelligence community, where repeated coverage revealed changes in infrastructure, the deployment of units or the modification/introduction of equipment. Cloud cover over a given area or object makes it necessary to repeat collection over that location.
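The figures quoted under radiometric resolution follow from the number of levels a b-bit sensor can encode, 2^b; a quick illustrative Python check:

```python
# Distinguishable intensity levels for a b-bit sensor: 2 ** b per band.
for bits in (8, 10, 12, 14):
    print(bits, "bits ->", 2 ** bits, "levels")
# 8 bits -> 256 levels; 14 bits -> 16384 levels, matching the range above.
```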
Data processing
In order to create sensor-based maps, most remote sensing systems expect to extrapolate sensor data in relation to a reference point, including distances between known points on the ground. This depends on the type of sensor used. For example, in conventional photographs, distances are accurate in the center of the image, with the distortion of measurements increasing the farther you get from the center. Another factor is that the platen against which the film is pressed can cause severe errors when photographs are used to measure ground distances. The step in which this problem is resolved is called georeferencing and involves computer-aided matching of points in the image (typically 30 or more points per image) which is extrapolated with the use of an established benchmark, "warping" the image to produce accurate spatial data (a minimal sketch follows). As of the early 1990s, most satellite images are sold fully georeferenced.
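As a hedged sketch of the "warping" step, the Python fragment below fits a six-parameter affine transform from a few hypothetical ground control points by least squares; operational georeferencing uses many more points and often higher-order or rubber-sheet models.

```python
import numpy as np

# Hypothetical control points: image (col, row) vs. map (x, y) coordinates.
pixels = np.array([[10, 12], [480, 25], [250, 300], [30, 410]], dtype=float)
mapxy = np.array([[500010.0, 4200480.0], [500480.0, 4200468.0],
                  [500252.0, 4200190.0], [500031.0, 4200082.0]])

# Fit map = [col, row, 1] @ coeffs (a 3x2 affine matrix) by least squares.
A = np.hstack([pixels, np.ones((len(pixels), 1))])
coeffs, *_ = np.linalg.lstsq(A, mapxy, rcond=None)

def to_map(col, row):
    return np.array([col, row, 1.0]) @ coeffs

print(to_map(10, 12))  # should land near the first control point's map x, y
```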
In addition, images may need to be radiometrically and atmospherically corrected.
- Radiometric correction
- Allows avoidance of radiometric errors and distortions. The illumination of objects on the Earth's surface is uneven because of different properties of the relief. This factor is taken into account in the method of radiometric distortion correction.[30] Radiometric correction gives a scale to the pixel values, e.g. a monochromatic scale of 0 to 255 is converted to actual radiance values (a combined sketch of the corrections follows this list).
- Topographic correction (also called terrain correction)
- In rugged mountains, as a result of terrain, the effective illumination of pixels varies considerably. In a remote sensing image, a pixel on a shady slope receives weak illumination and has a low radiance value; by contrast, a pixel on a sunny slope receives strong illumination and has a high radiance value. For the same object, the pixel radiance value on the shady slope will differ from that on the sunny slope. Additionally, different objects may have similar radiance values. These ambiguities seriously affect the accuracy of information extraction from remote sensing images in mountainous areas and are a major obstacle to their further application. The purpose of topographic correction is to eliminate this effect, recovering the true reflectivity or radiance of objects as if they lay in horizontal conditions. It is a premise of quantitative remote sensing application.
- Atmospheric correction
- Elimination of atmospheric haze by rescaling each frequency band so that its minimum value (usually realised in water bodies) corresponds to a pixel value of 0. The digitizing of data also makes it possible to manipulate the data by changing gray-scale values.
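The three corrections above can be sketched in a few lines of Python; all coefficients and angles below are hypothetical stand-ins for values that operationally come from sensor metadata and sun/terrain geometry.

```python
import numpy as np

dn = np.array([[30.0, 120.0], [60.0, 255.0]])  # raw digital numbers (DN)

# Radiometric correction: rescale DN to radiance, L = gain * DN + offset.
gain, offset = 0.037, -0.5                     # hypothetical coefficients
radiance = gain * dn + offset

# Topographic (cosine) correction: boost weakly lit shady-slope pixels by
# the ratio of cosines of the solar zenith and local incidence angles.
sun_zenith = np.deg2rad(35.0)
incidence = np.deg2rad([[35.0, 50.0], [20.0, 70.0]])  # per-pixel local angles
radiance_topo = radiance * np.cos(sun_zenith) / np.cos(incidence)

# Atmospheric (dark-object) correction: subtract the band minimum so the
# darkest pixels (usually water) map to zero.
corrected = radiance_topo - radiance_topo.min()
print(corrected)
```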
Interpretation is the critical process of making sense of the data. The first application was aerial photographic collection, which used the following process: spatial measurement with a light table in both conventional single and stereographic coverage; added skills such as photogrammetry, the use of photomosaics and repeat coverage; and making use of objects' known dimensions in order to detect modifications. Image analysis is the more recently developed, automated, computer-aided application that is in increasing use.
Object-Based Image Analysis (OBIA) is a sub-discipline of GIScience devoted to partitioning remote sensing (RS) imagery into meaningful image-objects and assessing their characteristics across spatial, spectral and temporal scales.
Old data from remote sensing is often valuable because it may provide the only long-term data for a large extent of geography. At the same time, the data is often complex to interpret, and bulky to store. Modern systems tend to store the data digitally, often with lossless compression. The difficulty with this approach is that the data is fragile, the format may be archaic, and the data may be easy to falsify. One of the best systems for archiving data series is as computer-generated machine-readable ultrafiche, usually in typefonts such as OCR-B, or as digitized half-tone images. Ultrafiches survive well in standard libraries, with lifetimes of several centuries. They can be created, copied, filed and retrieved by automated systems. They are about as compact as archival magnetic media, and yet can be read by human beings with minimal, standardized equipment.
Generally speaking, remote sensing works on the principle of the inverse problem: while the object or phenomenon of interest (the state) may not be directly measured, there exists some other variable that can be detected and measured (the observation) which may be related to the object of interest through a calculation. The common analogy given to describe this is trying to determine the type of animal from its footprints. For example, while it is impossible to directly measure temperatures in the upper atmosphere, it is possible to measure the spectral emissions from a known chemical species (such as carbon dioxide) in that region. The frequency of the emissions may then be related via thermodynamics to the temperature in that region.
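A toy Python illustration of this inverse-problem framing: assume a known forward model from state (temperature) to observation (radiance), then recover the state that best explains a measurement. The linear model and all numbers are hypothetical.

```python
import numpy as np

def forward(temperature_k):
    """Hypothetical forward model: radiance predicted from temperature."""
    return 0.02 * temperature_k - 1.0

observed = 3.5                           # a measured radiance (made up)
grid = np.linspace(150.0, 350.0, 2001)   # candidate atmospheric temperatures
best = grid[np.argmin((forward(grid) - observed) ** 2)]
print(best)  # ~225 K: the state whose predicted observation matches best
```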
Data processing levels
To facilitate the discussion of data processing in practice, several processing "levels" were first defined in 1986 by NASA as part of its Earth Observing System[31] and have been steadily adopted since then, both internally at NASA (e.g.[32]) and elsewhere (e.g.[33]); these definitions are:
| Level | Description |
|---|---|
| 0 | Reconstructed, unprocessed instrument and payload data at full resolution, with any and all communications artifacts (e.g., synchronization frames, communications headers, duplicate data) removed. |
| 1a | Reconstructed, unprocessed instrument data at full resolution, time-referenced, and annotated with ancillary information, including radiometric and geometric calibration coefficients and georeferencing parameters (e.g., platform ephemeris) computed and appended but not applied to the Level 0 data (or, if applied, in a manner that Level 0 is fully recoverable from Level 1a data). |
| 1b | Level 1a data that have been processed to sensor units (e.g., radar backscatter cross section, brightness temperature, etc.); not all instruments have Level 1b data; Level 0 data is not recoverable from Level 1b data. |
| 2 | Derived geophysical variables (e.g., ocean wave height, soil moisture, ice concentration) at the same resolution and location as Level 1 source data. |
| 3 | Variables mapped on uniform spacetime grid scales, usually with some completeness and consistency (e.g., missing points interpolated, complete regions mosaicked together from multiple orbits, etc.). |
| 4 | Model output or results from analyses of lower-level data (i.e., variables that were not measured by the instruments but instead are derived from these measurements). |
A Level 1 data record is the most fundamental (i.e., highest reversible level) data record that has significant scientific utility, and is the foundation upon which all subsequent data sets are produced. Level 2 is the first level that is directly usable for most scientific applications; its value is much greater than that of the lower levels. Level 2 data sets tend to be less voluminous than Level 1 data because they have been reduced temporally, spatially, or spectrally. Level 3 data sets are generally smaller than lower-level data sets and thus can be dealt with without incurring a great deal of data-handling overhead; they tend to be more useful for many applications. The regular spatial and temporal organization of Level 3 datasets makes it feasible to readily combine data from different sources.
While these processing levels are particularly suitable for typical satellite data processing pipelines, other data level vocabularies have been defined and may be appropriate for more heterogeneous workflows.
Applications
Satellite images provide very useful information for producing statistics on topics closely related to the territory, such as agriculture, forestry or land cover in general. The first large project to apply Landsat 1 images for statistics was LACIE (Large Area Crop Inventory Experiment), run by NASA, NOAA and the USDA in 1974–77.[34][35] Many other application projects on crop area estimation have followed, including the Italian AGRIT project and the MARS project of the Joint Research Centre (JRC) of the European Commission.[36] Forest area and deforestation estimation have also been a frequent target of remote sensing projects,[37][38] as have land cover and land use.[39]
Ground truth or reference data to train and validate image classification require a field survey if we are targeting annual crops or individual forest species, but may be substituted by photointerpretation if we look at wider classes that can be reliably identified on aerial photos or satellite images. It is relevant to highlight that probabilistic sampling is not critical for the selection of training pixels for image classification, but it is necessary for accuracy assessment of the classified images and area estimation.[40][41][42] Additional care is recommended to ensure that training and validation datasets are not spatially correlated.[43]
Suppose now that we have classified images or a land cover map produced by visual photo-interpretation, with a legend of mapped classes that suits our purpose, taking again the example of wheat. The straightforward approach is counting the number of pixels classified as wheat and multiplying by the area of each pixel. Many authors have noticed that this estimator is generally biased, because commission and omission errors in a confusion matrix do not compensate for each other (see the sketch below).[44][45][46]
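A small Python example of that bias, using a hypothetical confusion matrix of pixel counts (rows: classified label; columns: reference label) and a hypothetical pixel area:

```python
import numpy as np

# Rows = classified as [wheat, other]; columns = reference [wheat, other].
conf = np.array([[ 80,  30],   # classified wheat: 80 correct, 30 commission
                 [ 20, 870]])  # classified other: 20 wheat pixels omitted

pixel_area_ha = 0.09  # hypothetical 30 m pixels, ~0.09 ha each

naive_wheat = conf[0].sum() * pixel_area_ha         # count classified pixels
reference_wheat = conf[:, 0].sum() * pixel_area_ha  # reference column total
print(round(naive_wheat, 2), round(reference_wheat, 2))  # 9.9 ha vs 9.0 ha:
# the 30 commission and 20 omission errors do not cancel, so the naive
# pixel count overestimates the true wheat area.
```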
The main strength of classified satellite images, or of other indicators computed on satellite images, is providing cheap information on the whole target area or most of it. This information usually has a good correlation with the target variable (ground truth), which is usually expensive to observe in an unbiased and accurate way and can therefore only be observed on a probabilistic sample selected from an area sampling frame. Traditional survey methodology provides different methods to combine accurate information on a sample with less accurate, but exhaustive, data for a covariable or proxy that is cheaper to collect. For agricultural statistics, field surveys are usually required, while photo-interpretation may be better for land cover classes that can be reliably identified on aerial photographs or high-resolution satellite images. Additional uncertainty can appear because of imperfect reference data (ground truth or similar).[47][48]
Some options are the ratio estimator, the regression estimator,[49] calibration estimators[50] and small area estimators.[39]
If we target other variables, such as crop yield or leaf area, we may need different indicators to be computed from the images, such as the NDVI (normalized difference vegetation index), a good proxy for chlorophyll activity; a minimal computation follows.[28]
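NDVI is computed as (NIR - Red) / (NIR + Red); the Python sketch below uses hypothetical reflectance values.

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red). Healthy vegetation reflects strongly in
# the near-infrared and absorbs red light, pushing NDVI towards +1.
nir = np.array([0.45, 0.30, 0.05])  # hypothetical near-infrared reflectances
red = np.array([0.08, 0.10, 0.04])  # hypothetical red reflectances

ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))  # [0.7, 0.5, 0.11]: dense vegetation, sparse, bare/water
```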
History

The modern discipline of remote sensing arose with the development of flight. The balloonist G. Tournachon (alias Nadar) made photographs of Paris from his balloon in 1858.[51] Messenger pigeons, kites, rockets and unmanned balloons were also used for early images. With the exception of balloons, these first, individual images were not particularly useful for map making or for scientific purposes.
Systematic aerial photography was developed for military surveillance and reconnaissance purposes beginning in World War I.[52] After WWI, remote sensing technology was quickly adapted to civilian applications.[53] This is demonstrated by the first line of a 1941 textbook titled "Aerophotography and Aerosurveying", which stated the following:
"There is no longer any need to preach for aerial photography-not in the United States- for so widespread has become its use and so great its value that even the farmer who plants his fields in a remote corner of the country knows its value."
— James Bagley[53]
The development of remote sensing technology reached a climax during the Cold War with the use of modified combat aircraft such as the P-51, P-38, RB-66 and the F-4C, or specifically designed collection platforms such as the U2/TR-1, SR-71, A-5 and the OV-1 series, in both overhead and stand-off collection.[54] A more recent development is that of increasingly smaller sensor pods, such as those used by law enforcement and the military, in both manned and unmanned platforms. The advantage of this approach is that it requires minimal modification to a given airframe. Later imaging technologies would include infrared, conventional, Doppler and synthetic aperture radar.[55]
The development of artificial satellites in the latter half of the 20th century allowed remote sensing to progress to a global scale by the end of the Cold War.[56] Instrumentation aboard various Earth observing and weather satellites such as Landsat, the Nimbus series and more recent missions such as RADARSAT and UARS provided global measurements of various data for civil, research, and military purposes. Space probes to other planets have also provided the opportunity to conduct remote sensing studies in extraterrestrial environments: synthetic aperture radar aboard the Magellan spacecraft provided detailed topographic maps of Venus, while instruments aboard SOHO allowed studies of the Sun and the solar wind, just to name a few examples.[57][58]
Beginning in the 1960s and 1970s, image processing of satellite imagery developed. The use of the term "remote sensing" began in the early 1960s when Evelyn Pruitt realized that advances in science meant that aerial photography was no longer an adequate term to describe the data streams being generated by new technologies.[59][60] With assistance from her fellow staff member at the Office of Naval Research, Walter Bailey, she coined the term "remote sensing".[61][62] Several research groups in Silicon Valley, including NASA Ames Research Center, GTE, and ESL Inc., developed Fourier transform techniques leading to the first notable enhancement of imagery data. In 1999 the first commercial satellite to collect very high resolution imagery (IKONOS) was launched.[63]
Training and education
Remote sensing has a growing relevance in the modern information society. It represents a key technology within the aerospace industry and bears increasing economic relevance – new sensors such as TerraSAR-X and RapidEye are developed constantly and the demand for skilled labour is increasing steadily. Furthermore, remote sensing increasingly influences everyday life, ranging from weather forecasts to reports on climate change or natural disasters. As an example, 80% of German students use the services of Google Earth; in 2006 alone the software was downloaded 100 million times. But studies have shown that only a fraction of them know much about the data they are working with.[64] There exists a huge knowledge gap between the application and the understanding of satellite images. Remote sensing only plays a tangential role in schools, regardless of the political claims to strengthen the support for teaching on the subject.[65] Much of the computer software explicitly developed for school lessons has not yet been implemented due to its complexity. As a result, the subject is either not integrated into the curriculum at all or does not pass beyond the step of interpreting analogue images. In fact, the subject of remote sensing requires a consolidation of physics and mathematics as well as competences in the fields of media and methods, beyond mere visual interpretation of satellite images.
Many teachers have great interest in the subject of remote sensing and are motivated to integrate this topic into teaching, provided that the curriculum allows for it. In many cases, this encouragement fails because of confusing information.[66] In order to integrate remote sensing in a sustainable manner, organizations like the EGU or Digital Earth[67] encourage the development of learning modules and learning portals. Examples include FIS – Remote Sensing in School Lessons,[68] Geospektiv,[69] Ychange,[70] and Spatial Discovery,[71] which promote media and method qualifications as well as independent learning.
Software
Remote sensing data are processed and analyzed with computer software, known as a remote sensing application. A large number of proprietary and open source applications exist to process remote sensing data.
Remote sensing with gamma rays
There are applications of gamma rays to mineral exploration through remote sensing. In 1972 more than $2 million were spent on remote sensing applications of gamma rays to mineral exploration. Gamma rays are used to search for deposits of uranium. By observing radioactivity from potassium, porphyry copper deposits can be located. A high ratio of uranium to thorium has been found to be related to the presence of hydrothermal copper deposits. Radiation patterns have also been known to occur above oil and gas fields, but some of these patterns were thought to be due to surface soils rather than oil and gas.[72]
Satellites
An Earth observation satellite or Earth remote sensing satellite is a satellite used or designed for Earth observation (EO) from orbit, including spy satellites and similar ones intended for non-military uses such as environmental monitoring, meteorology, cartography and others. The most common type are Earth imaging satellites, that take satellite images, analogous to aerial photographs; some EO satellites may perform remote sensing without forming pictures, such as in GNSS radio occultation.
The first occurrence of satellite remote sensing can be dated to the launch of the first artificial satellite, Sputnik 1, by the Soviet Union on October 4, 1957.[73] Sputnik 1 sent back radio signals, which scientists used to study the ionosphere.[74] The United States Army Ballistic Missile Agency launched the first American satellite, Explorer 1, for NASA's Jet Propulsion Laboratory on January 31, 1958. The information sent back from its radiation detector led to the discovery of the Earth's Van Allen radiation belts.[75] The TIROS-1 spacecraft, launched on April 1, 1960, as part of NASA's Television Infrared Observation Satellite (TIROS) program, sent back the first television footage of weather patterns to be taken from space.[73]
In 2008, more than 150 Earth observation satellites were in orbit, recording data with both passive and active sensors and acquiring more than 10 terabits of data daily.[73] By 2021, that total had grown to over 950, with the largest number of satellites operated by US-based company Planet Labs.[76]
Most Earth observation satellites carry instruments that should be operated at a relatively low altitude. Most orbit at altitudes above 500 to 600 kilometers (310 to 370 mi); lower orbits have significant air-drag, which makes frequent orbit reboost maneuvers necessary. The Earth observation satellites ERS-1, ERS-2 and Envisat of the European Space Agency, as well as the MetOp spacecraft of EUMETSAT, are all operated at altitudes of about 800 km (500 mi). The Proba-1, Proba-2 and SMOS spacecraft of the European Space Agency observe the Earth from an altitude of about 700 km (430 mi). The Earth observation satellites of the UAE, DubaiSat-1 and DubaiSat-2, are also placed in low Earth orbit (LEO) and provide satellite imagery of various parts of the Earth.[77][78]
To get global coverage with a low orbit, a polar orbit is used. A low orbit will have an orbital period of roughly 100 minutes, and the Earth will rotate around its polar axis about 25° between successive orbits. The ground track thus moves westward by 25° with each orbit, allowing a different section of the globe to be scanned on each pass (see the sketch below). Most are in Sun-synchronous orbits.
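The 25° figure follows directly from the ratio of the orbital period to Earth's rotation period, as this illustrative Python check shows:

```python
# Westward drift of the ground track per orbit = 360 degrees times the
# ratio of the orbital period to Earth's sidereal rotation period.
orbital_period_min = 100.0           # typical low-Earth-orbit period
sidereal_day_min = 23 * 60 + 56      # ~1436 minutes

drift_deg = 360.0 * orbital_period_min / sidereal_day_min
print(round(drift_deg, 1))  # ~25.1 degrees of longitude per orbit
```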
A geostationary orbit, at 36,000 km (22,000 mi), allows a satellite to hover over a constant spot on the Earth, since the orbital period at this altitude is 24 hours. This allows uninterrupted coverage of more than 1/3 of the Earth per satellite, so three satellites, spaced 120° apart, can cover the whole Earth. This type of orbit is mainly used for meteorological satellites.
See also
[edit]- Coastal management – Preventing flooding and erosion of shorelines
- Geophysical survey – Systematic collection of geophysical data for spatial studies
- Land change science – Interdisciplinary study of changes in climate, land use, and land cover
- List of Earth observation satellites
- Normalized difference water index – Remote sensing-derived indexes related to liquid water
- Orthophoto – Geometrically corrected aerial photograph
- Pictometry – Aerial survey technique
- Radiometry – Techniques for measuring electromagnetic radiation
References
[edit]- ^ Njoku, Eni, ed. (2014). Encyclopedia of remote sensing. The encyclopedia of earth sciences series. New York: Springer Reference. ISBN 978-0-387-36698-2. OCLC 880118890.
- ^ Schowengerdt, Robert A. (2007). Remote sensing: models and methods for image processing (3rd ed.). Academic Press. p. 2. ISBN 978-0-12-369407-2. Archived from the original on 1 May 2016. Retrieved 15 November 2015.
- ^ Schott, John Robert (2007). Remote sensing: the image chain approach (2nd ed.). Oxford University Press. p. 1. ISBN 978-0-19-517817-3. Archived from the original on 24 April 2016. Retrieved 15 November 2015.
- ^ Guo, Huadong; Huang, Qingni; Li, Xinwu; Sun, Zhongchang; Zhang, Ying (2013). "Spatiotemporal analysis of urban environment based on the vegetation–impervious surface–soil model" (PDF). Journal of Applied Remote Sensing. 8 (1) 084597. Bibcode:2014JARS....8.4597G. doi:10.1117/1.JRS.8.084597. S2CID 28430037. Archived (PDF) from the original on 19 July 2018. Retrieved 27 October 2021.
- ^ Liu, Jian Guo & Mason, Philippa J. (2009). Essential Image Processing for GIS and Remote Sensing. Wiley-Blackwell. p. 4. ISBN 978-0-470-51032-2. Archived from the original on 18 April 2023. Retrieved 2 April 2023.
- ^ "Saving the monkeys". SPIE Professional. Archived from the original on 4 February 2016. Retrieved 1 January 2016.
- ^ Howard, A.; et al. (19 August 2015). "Remote sensing and habitat mapping for bearded capuchin monkeys (Sapajus libidinosus): landscapes for the use of stone tools". Journal of Applied Remote Sensing. 9 (1) 096020. doi:10.1117/1.JRS.9.096020. S2CID 120031016.
- ^ Innocenti, Fabrizio; Robinson, Rod; Gardiner, Tom; Finlayson, Andrew; Connor, Andy (2017). "Differential Absorption Lidar (DIAL) Measurements of Landfill Methane Emissions". Remote Sensing. 9 (9): 953. Bibcode:2017RemS....9..953I. doi:10.3390/rs9090953.
- ^ C. Bayindir; J. D. Frost; C. F. Barnes (January 2018). "Assessment and enhancement of SAR noncoherent change detection of sea-surface oil spills". IEEE J. Ocean. Eng. 43 (1): 211–220. Bibcode:2018IJOE...43..211B. doi:10.1109/JOE.2017.2714818. hdl:11729/1296. S2CID 44706251.
- ^ "Science@nasa – Technology: Remote Sensing". Archived from the original on 29 September 2006. Retrieved 18 February 2009.
- ^ Hu, Liuru; Navarro-Hernández, María I.; Liu, Xiaojie; Tomás, Roberto; Tang, Xinming; Bru, Guadalupe; Ezquerro, Pablo; Zhang, Qingtao (October 2022). "Analysis of regional large-gradient land subsidence in the Alto Guadalentín Basin (Spain) using open-access aerial LiDAR datasets". Remote Sensing of Environment. 280 113218. Bibcode:2022RSEnv.28013218H. doi:10.1016/j.rse.2022.113218. hdl:10045/126163. ISSN 0034-4257.
- ^ Zhao, Kaiguang; Suarez, Juan C; Garcia, Mariano; Hu, Tongxi; Wang, Cheng; Londo, Alexis (2018). "Utility of multitemporal lidar for forest and carbon monitoring: Tree growth, biomass dynamics, and carbon flux". Remote Sensing of Environment. 204: 883–897. Bibcode:2018RSEnv.204..883Z. doi:10.1016/j.rse.2017.09.007.
- ^ Levin, Noam; Kyba, Christopher C.M.; Zhang, Qingling; Sánchez de Miguel, Alejandro; Román, Miguel O.; Li, Xi; Portnov, Boris A.; Molthan, Andrew L.; Jechow, Andreas; Miller, Steven D.; Wang, Zhuosen; Shrestha, Ranjay M.; Elvidge, Christopher D. (February 2020). "Remote sensing of night lights: A review and an outlook for the future" (PDF). Remote Sensing of Environment. 237 111443. Bibcode:2020RSEnv.23711443L. doi:10.1016/j.rse.2019.111443. hdl:10871/40052. S2CID 214254543.
- ^ Corradino, Claudia; Ganci, Gaetana; Bilotta, Giuseppe; Cappello, Annalisa; Del Negro, Ciro; Fortuna, Luigi (January 2019). "Smart Decision Support Systems for Volcanic Applications". Energies. 12 (7): 1216. doi:10.3390/en12071216.
- ^ Corradino, Claudia; Ganci, Gaetana; Cappello, Annalisa; Bilotta, Giuseppe; Hérault, Alexis; Del Negro, Ciro (January 2019). "Mapping Recent Lava Flows at Mount Etna Using Multispectral Sentinel-2 Images and Machine Learning Techniques". Remote Sensing. 11 (16): 1916. Bibcode:2019RemS...11.1916C. doi:10.3390/rs11161916.
- ^ Just Sit Right Back and You'll Hear a Tale, a Tale of a Plankton Trip Archived 10 August 2021 at the Wayback Machine NASA Earth Expeditions, 15 August 2018.
- ^ Goldberg, A.; Stann, B.; Gupta, N. (July 2003). "Multispectral, Hyperspectral, and Three-Dimensional Imaging Research at the U.S. Army Research Laboratory" (PDF). Proceedings of the International Conference on International Fusion [6th]. 1: 499–506.
- ^ Makki, Ihab; Younes, Rafic; Francis, Clovis; Bianchi, Tiziano; Zucchetti, Massimo (1 February 2017). "A survey of landmine detection using hyperspectral imaging". ISPRS Journal of Photogrammetry and Remote Sensing. 124: 40–53. Bibcode:2017JPRS..124...40M. doi:10.1016/j.isprsjprs.2016.12.009. ISSN 0924-2716.
- ^ Mills, J.P.; et al. (1997). "Photogrammetry from Archived Digital Imagery for Seal Monitoring". The Photogrammetric Record. 15 (89): 715–724. Bibcode:1997PgRec..15..715M. doi:10.1111/0031-868X.00080. S2CID 140189982.
- ^ Twiss, S.D.; et al. (2001). "Topographic spatial characterisation of grey seal Halichoerus grypus breeding habitat at a sub-seal size spatial grain". Ecography. 24 (3): 257–266. doi:10.1111/j.1600-0587.2001.tb00198.x.
- ^ Stewart, J.E.; et al. (2014). "Finescale ecological niche modeling provides evidence that lactating gray seals (Halichoerus grypus) prefer access to fresh water in order to drink" (PDF). Marine Mammal Science. 30 (4): 1456–1472. Bibcode:2014MMamS..30.1456S. doi:10.1111/mms.12126. Archived (PDF) from the original on 13 July 2021. Retrieved 27 October 2021.
- ^ Zhang, Chuanrong; Li, Xinba (September 2022). "Land Use and Land Cover Mapping in the Era of Big Data". Land. 11 (10): 1692. Bibcode:2022Land...11.1692Z. doi:10.3390/land11101692.
- ^ "Begni G. Escadafal R. Fontannaz D. and Hong-Nga Nguyen A.-T. (2005). Remote sensing: a tool to monitor and assess desertification. Les dossiers thématiques du CSFD. Issue 2. 44 pp". Archived from the original on 26 May 2019. Retrieved 27 October 2021.
- ^ Wang, Ran; Gamon, John A. (15 September 2019). "Remote sensing of terrestrial plant biodiversity". Remote Sensing of Environment. 231 111218. Bibcode:2019RSEnv.23111218W. doi:10.1016/j.rse.2019.111218. ISSN 0034-4257. S2CID 197567301.
- ^ Rocchini, Duccio; Boyd, Doreen S.; Féret, Jean-Baptiste; Foody, Giles M.; He, Kate S.; Lausch, Angela; Nagendra, Harini; Wegmann, Martin; Pettorelli, Nathalie (February 2016). Skidmore, Andrew; Chauvenet, Alienor (eds.). "Satellite remote sensing to monitor species diversity: potential and pitfalls". Remote Sensing in Ecology and Conservation. 2 (1): 25–36. Bibcode:2016RSEC....2...25R. doi:10.1002/rse2.9. hdl:11585/720672. ISSN 2056-3485. S2CID 59446258.
- ^ Schweiger, Anna K.; Cavender-Bares, Jeannine; Townsend, Philip A.; Hobbie, Sarah E.; Madritch, Michael D.; Wang, Ran; Tilman, David; Gamon, John A. (June 2018). "Plant spectral diversity integrates functional and phylogenetic components of biodiversity and predicts ecosystem function". Nature Ecology & Evolution. 2 (6): 976–982. Bibcode:2018NatEE...2..976S. doi:10.1038/s41559-018-0551-1. ISSN 2397-334X. PMID 29760440. S2CID 256718584. Archived from the original on 4 April 2023. Retrieved 4 April 2023.
- ^ Cerrejón, Carlos; Valeria, Osvaldo; Marchand, Philippe; Caners, Richard T.; Fenton, Nicole J. (18 February 2021). "No place to hide: Rare plant detection through remote sensing". Diversity and Distributions. 27 (6): 948–961. Bibcode:2021DivDi..27..948C. doi:10.1111/ddi.13244. ISSN 1366-9516. S2CID 233886263.
- ^ a b Carfagna, E. (2005). "Using remote sensing for agricultural statistics". International Statistical Review. 73 (3): 389–404. doi:10.1111/j.1751-5823.2005.tb00155.x.
- ^ "Geodetic Imaging". Archived from the original on 2 October 2016. Retrieved 29 September 2016.
- ^ Grigoriev А.N. (2015). "Мethod of radiometric distortion correction of multispectral data for the earth remote sensing". Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 15 (4): 595–602. doi:10.17586/2226-1494-2015-15-4-595-602.
- ^ NASA (1986), Report of the EOS data panel, Earth Observing System, Data and Information System, Data Panel Report, Vol. IIa., NASA Technical Memorandum 87777, June 1986, 62 pp. Available at http://hdl.handle.net/2060/19860021622 Archived 27 October 2021 at the Wayback Machine
- ^ C. L. Parkinson, A. Ward, M. D. King (Eds.) Earth Science Reference Handbook – A Guide to NASA's Earth Science Program and Earth Observing Satellite Missions, National Aeronautics and Space Administration Washington, D. C. Available at http://eospso.gsfc.nasa.gov/ftp_docs/2006ReferenceHandbook.pdf Archived 15 April 2010 at the Wayback Machine
- ^ GRAS-SAF (2009), Product User Manual, GRAS Satellite Application Facility, Version 1.2.1, 31 March 2009. Available at http://www.grassaf.org/general-documents/products/grassaf_pum_v121.pdf Archived 26 July 2011 at the Wayback Machine
- ^ Houston, A.H. "Use of satellite data in agricultural surveys". Communications in Statistics – Theory and Methods (23): 2857–2880.
- ^ Allen, J.D. "A Look at the Remote Sensing Applications Program of the National Agricultural Statistics Service". Journal of Official Statistics. 6 (4): 393–409.
- ^ Taylor, J (1997). Regional Crop Inventories in Europe Assisted by Remote Sensing: 1988–1993. Synthesis Report. Luxembourg: Office for Publications of the EC.
- ^ Foody, G.M. (1994). "Estimation of tropical forest extent and regenerative stage using remotely sensed data". Journal of Biogeography. 21 (3): 223–244. Bibcode:1994JBiog..21..223F. doi:10.2307/2845527. JSTOR 2845527.
- ^ Achard, F (2002). "Determination of deforestation rates of the world's humid tropical forests". Science. 297 (5583): 999–1002. Bibcode:2002Sci...297..999A. doi:10.1126/science.1070656. PMID 12169731.
- ^ a b Ambrosio Flores, L (2000). "Land cover estimation in small areas using ground survey and remote sensing". Remote Sensing of Environment. 74 (2): 240–248. Bibcode:2000RSEnv..74..240F. doi:10.1016/S0034-4257(00)00114-0.
- ^ Congalton, Russell G.; Green, Kass (25 January 2019). Assessing the Accuracy of Remotely Sensed Data: Principles and Practices (Third ed.). Boca Raton: CRC Press. doi:10.1201/9780429052729. ISBN 978-0-429-05272-9.
- ^ Stehman, S. (2013). "Estimating Area from an Accuracy Assessment Error Matrix". Remote Sensing of Environment. 132 (132): 202–211. Bibcode:2013RSEnv.132..202S. doi:10.1016/j.rse.2013.01.016.
- ^ Stehman, S. (2019). "Key issues in rigorous accuracy assessment of land cover products". Remote Sensing of Environment. 231 (231) 111199. Bibcode:2019RSEnv.23111199S. doi:10.1016/j.rse.2019.05.018.
- ^ Zhen, Z (2013). "Impact of training and validation sample selection on classification accuracy and accuracy assessment when using reference polygons in object-based classification". International Journal of Remote Sensing. 34 (19): 6914–6930. Bibcode:2013IJRS...34.6914Z. doi:10.1080/01431161.2013.810822.
- ^ Czaplewski, R.L. "Misclassification bias in areal estimates". Photogrammetric Engineering and Remote Sensing (39): 189–192.
- ^ Bauer, M.E. (1978). "Area estimation of crops by digital analysis of Landsat data". Photogrammetric Engineering and Remote Sensing (44): 1033–1043.
- ^ Olofsson, P. (2014). "Good practices for estimating area and assessing accuracy of land change". Remote Sensing of Environment. 148 (148): 42–57. Bibcode:2014RSEnv.148...42O. doi:10.1016/j.rse.2014.02.015.
- ^ Mcroberts, R (2018). "The effects of imperfect reference data on remote sensing-assisted estimators of land cover class proportions". ISPRS Journal of Photogrammetry and Remote Sensing. 142 (142): 292–300. Bibcode:2018JPRS..142..292M. doi:10.1016/j.isprsjprs.2018.06.002.
- ^ Foody, G.M. (2010). "Assessing the accuracy of land cover change with imperfect ground reference data". Remote Sensing of Environment. 114 (10): 2271–2285. Bibcode:2010RSEnv.114.2271F. doi:10.1016/j.rse.2010.05.003. Archived from the original on 17 June 2024. Retrieved 12 August 2024.
- ^ Sannier, C (2014). "Using the regression estimator with landsat data to estimate proportion forest cover and net proportion deforestation in gabon". Remote Sensing of Environment. 151 (151): 138–148. Bibcode:2014RSEnv.151..138S. doi:10.1016/j.rse.2013.09.015.
- ^ Gallego, F.J. (2004). "Remote sensing and land cover area estimation". International Journal of Remote Sensing. 25 (5): 3019–3047. Bibcode:2004IJRS...25.3019G. doi:10.1080/01431160310001619607.
- ^ Maksel, Rebecca. "Flight of the Giant". Air & Space Magazine. Archived from the original on 18 August 2021. Retrieved 19 February 2019.
- ^ Wakefield, Alan (Head of photographs at IWM) (4 April 2014). "A bird's-eye view of the battlefield: aerial photography". The Daily Telegraph. ISSN 0307-1235. Archived from the original on 18 April 2014. Retrieved 19 February 2019.
- ^ a b Bagley, James (1941). Aerophotography and Aerosurveying (1st ed.). York, PA: The Maple Press Company.
- ^ "Air Force Magazine". www.airforcemag.com. Archived from the original on 19 February 2019. Retrieved 19 February 2019.
- ^ "Military Imaging and Surveillance Technology (MIST)". www.darpa.mil. Archived from the original on 18 August 2021. Retrieved 19 February 2019.
- ^ The Indian Society of International Law – Newsletter: VOL. 15, No. 4, October – December 2016 (Report). Brill. 2018. doi:10.1163/2210-7975_hrd-9920-2016004.
- ^ "In Depth | Magellan". Solar System Exploration: NASA Science. Archived from the original on 19 October 2021. Retrieved 19 February 2019.
- ^ Garner, Rob (15 April 2015). "SOHO – Solar and Heliospheric Observatory". NASA. Archived from the original on 18 September 2021. Retrieved 19 February 2019.
- ^ Campbell, James B.; Wynne, Randolph H. (21 June 2011). Introduction to Remote Sensing (5th ed.). New York London: The Guilford Press. ISBN 978-1-60918-176-5.
- ^ Ryerson, Robert A. (2010). Why 'where' matters : understanding and profiting from GPS, GIS, and remote sensing: practical advice for individuals, communities, companies and countries. Internet Archive. Manotick, ON : Kim Geomatics Corp. ISBN 978-0-9866376-0-5.
- ^ Fussell, Jay; Rundquist, Donald; Harrington, John A. (September 1986). "On defining remote sensing" (PDF). Photogrammetric Engineering and Remote Sensing. 52 (9): 1507–1511. Archived from the original (PDF) on 4 October 2021.
- ^ Pruitt, Evelyn L. (1979). "The Office of Naval Research and Geography". Annals of the Association of American Geographers. 69 (1): 103–108. doi:10.1111/j.1467-8306.1979.tb01235.x. ISSN 0004-5608. JSTOR 2569553.
- ^ Colen, Jerry (8 April 2015). "Ames Research Center Overview". NASA. Archived from the original on 28 September 2021. Retrieved 19 February 2019.
- ^ Ditter, R., Haspel, M., Jahn, M., Kollar, I., Siegmund, A., Viehrig, K., Volz, D., Siegmund, A. (2012) Geospatial technologies in school – theoretical concept and practical implementation in K-12 schools. In: International Journal of Data Mining, Modelling and Management (IJDMMM): FutureGIS: Riding the Wave of a Growing Geospatial Technology Literate Society; Vol. X
- ^ Stork, E.J., Sakamoto, S.O., and Cowan, R.M. (1999) "The integration of science explorations through the use of earth images in middle school curriculum", Proc. IEEE Trans. Geosci. Remote Sensing 37, 1801–1817
- ^ Bednarz, S.W.; Whisenant, S.E. (July 2000). "Mission Geography: Linking national geography standards, innovative technologies and NASA". IGARSS 2000. IEEE 2000 International Geoscience and Remote Sensing Symposium. Taking the Pulse of the Planet: The Role of Remote Sensing in Managing the Environment. Proceedings (Cat. No.00CH37120). Vol. 6. pp. 2780–2782 vol.6. doi:10.1109/IGARSS.2000.859713. ISBN 0-7803-6359-0. S2CID 62414447.
- ^ "Digital Earth". Archived from the original on 10 September 2015.
- ^ "FIS – Remote Sensing in School Lessons". Archived from the original on 26 October 2012. Retrieved 25 October 2012.
- ^ "geospektiv". Archived from the original on 2 May 2018. Retrieved 1 June 2018.
- ^ "YCHANGE". Archived from the original on 17 August 2018. Retrieved 1 June 2018.
- ^ "Landmap – Spatial Discovery". Archived from the original on 29 November 2014. Retrieved 27 October 2021.
- ^ Grasty, R (1976). Applications of Gamma Radiation in Remote Sensing (1st ed.). Berlin: Springer-Verlag. p. 267. ISBN 978-3-642-66238-6.
- ^ a b c Tatem, Andrew J.; Goetz, Scott J.; Hay, Simon I. (2008). "Fifty Years of Earth-observation Satellites". American Scientist. 96 (5): 390–398. doi:10.1511/2008.74.390. PMC 2690060. PMID 19498953.
- ^ Kuznetsov, V.D.; Sinelnikov, V.M.; Alpert, S.N. (June 2015). "Yakov Alpert: Sputnik-1 and the first satellite ionospheric experiment". Advances in Space Research. 55 (12): 2833–2839. Bibcode:2015AdSpR..55.2833K. doi:10.1016/j.asr.2015.02.033.
- ^ "James A. Van Allen". nmspacemuseum.org. New Mexico Museum of Space History. Retrieved 14 May 2018.
- ^ "How many Earth observation satellites are orbiting the planet in 2021?". 18 August 2021.
- ^ "DubaiSat-2, Earth Observation Satellite of UAE". Mohammed Bin Rashid Space Centre. Archived from the original on 17 January 2019. Retrieved 4 July 2016.
- ^ "DubaiSat-1, Earth Observation Satellite of UAE". Mohammed Bin Rashid Space Centre. Archived from the original on 4 March 2016. Retrieved 4 July 2016.
External links
Media related to Remote sensing at Wikimedia Commons
Remote sensing
Fundamentals
Definition and Principles
Remote sensing constitutes the acquisition of information about physical objects, areas, or phenomena by measuring reflected or emitted electromagnetic radiation from a distance, without physical contact between the sensor and the target.[1] This process fundamentally depends on the interaction of electromagnetic waves with matter, where incident radiation from sources such as the Sun or artificial emitters interacts with atmospheric constituents and surface materials through mechanisms including reflection, absorption, transmission, and emission, altering the wave's properties based on the target's composition, geometry, and state.[7]

Central principles encompass multiple dimensions of resolution that govern data quality and interpretability. Spatial resolution defines the finest resolvable detail, typically expressed as the ground sample distance corresponding to one pixel, enabling detection of features from meters to kilometers depending on sensor altitude and optics. Spectral resolution specifies the sensor's capacity to distinguish wavelengths, quantified by the number and bandwidth of spectral bands, which allows differentiation of materials based on unique reflectance signatures across the electromagnetic spectrum. Temporal resolution measures revisit frequency, critical for monitoring dynamic processes like vegetation growth or urban expansion, often constrained by orbital mechanics or flight schedules. Radiometric resolution quantifies the number of detectable intensity levels, typically in bits per pixel, influencing sensitivity to subtle variations in radiance.[7]

Retrieving target properties from observed signals poses an inverse problem, wherein forward models simulate radiance from assumed surface states, but ill-posedness arises as multiple configurations (such as varying soil moisture or aerosol loads) can yield indistinguishable measurements, necessitating regularization techniques and prior knowledge for unique solutions like estimating land cover fractions.[8] Causal factors including sensor-target distance and atmospheric path introduce signal degradation; for example, gaseous absorption by water vapor and oxygen attenuates microwave signals, with empirical models showing losses of about 0.01 dB/km in dry air at 22 GHz, accumulating to 1 dB over a 100 km slant path, thereby reducing signal-to-noise ratios and biasing retrievals without correction.[9]
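The attenuation arithmetic in the last sentence can be checked directly by converting the accumulated dB loss to a linear power ratio (illustrative Python):

```python
# Cumulative attenuation from ~0.01 dB/km over a 100 km slant path.
loss_db = 0.01 * 100
fraction_received = 10 ** (-loss_db / 10)  # dB -> linear power ratio
print(loss_db, round(fraction_received, 3))  # 1.0 dB, ~0.794 of power left
```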
Physical Basis and Electromagnetic Interactions
Remote sensing operates on the principle that electromagnetic radiation interacts with atmospheric constituents and surface materials through absorption, reflection, transmission, and emission, altering the radiation's intensity, direction, and spectral composition before detection by sensors. All objects with temperatures above absolute zero emit electromagnetic radiation, with the emitted spectrum approximating a blackbody curve shifted by emissivity, which measures radiative efficiency and varies by wavelength and material (ranging from 0 to 1). For opaque targets, the relationship between absorption (A) and reflection (R) follows A + R = 1, while transmission (T) is negligible; Kirchhoff's law equates absorptivity to emissivity at thermal equilibrium, enabling passive thermal infrared sensing of surface temperatures.[10][11][12]

The electromagnetic spectrum relevant to remote sensing spans ultraviolet (below 0.4 μm), visible (0.4–0.7 μm), near-infrared (0.7–1.3 μm), shortwave infrared (1.3–3 μm), thermal infrared (3–100 μm), and microwaves (above 1 mm), with interactions determined by molecular structure, electronic transitions, and geometry. In the visible and near-infrared, vegetation exhibits strong absorption in red wavelengths (around 0.65 μm) due to chlorophyll pigments capturing photons for photosynthesis, contrasted by high reflection (up to 50–60%) in near-infrared from internal leaf scattering in mesophyll cells, which lack absorption centers at those wavelengths. Soil and water show opposite patterns, with water absorbing strongly beyond 0.7 μm due to molecular vibrations, while bare soils reflect more uniformly but with lower near-infrared values than vegetation. Microwave interactions involve dielectric properties, where surface roughness and moisture content influence backscattering via volume and surface mechanisms.[13][14][15]

Atmospheric effects modify upwelling radiation through gaseous absorption by species like water vapor, carbon dioxide, and ozone (peaking at specific bands, e.g., 9.6 μm for O3), and scattering: Rayleigh scattering by molecules dominates shorter wavelengths (intensity proportional to λ^{-4}, explaining blue sky dominance), while Mie scattering by aerosols and cloud droplets affects visible to infrared with less wavelength dependence. These processes attenuate signals and add path radiance, necessitating corrections via radiative transfer models like MODTRAN, which simulates layered atmospheric transmission, molecular/particle absorption-emission, and multiple scattering for wavelengths from ultraviolet to far-infrared.[16][17]

Longer wavelengths like microwaves penetrate clouds effectively because cloud droplet diameters (typically 10–50 μm) are much smaller than radar wavelengths (centimeters), rendering Rayleigh scattering cross-sections negligible (σ ∝ (2πa/λ)^4 a^2, where a is droplet radius), with minimal attenuation compared to optical bands where droplet sizes approximate wavelengths, causing strong forward scattering and obscuration. This contrasts with visible/near-infrared limitations, where cumulative scattering and absorption by hydrometeors block surface signals, underscoring wavelength-scale dependencies in propagation physics.[15]
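Both scaling claims above lend themselves to a quick numerical check; the Python snippet below evaluates the λ^-4 Rayleigh dependence for blue versus red light and the size parameter 2πa/λ for a hypothetical 20 μm cloud droplet at optical and radar wavelengths.

```python
import math

# Rayleigh intensity scales as wavelength**-4: blue scatters more than red.
blue, red = 450e-9, 650e-9   # wavelengths in metres
print(round((red / blue) ** 4, 1))  # ~4.4x more scattering for blue light

# Size parameter 2*pi*a/lambda for a 20 um droplet radius (hypothetical):
a = 20e-6
for lam in (0.5e-6, 0.05):   # 0.5 um visible light vs 5 cm radar
    print(lam, round(2 * math.pi * a / lam, 4))
# ~251 for visible (droplet >> wavelength) vs ~0.0025 for radar, which is
# why the Rayleigh (negligible-scattering) regime holds only for radar.
```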
Platforms
Spaceborne Platforms
Spaceborne platforms enable remote sensing at global scales through satellites in low Earth orbit (LEO) and geostationary orbit (GEO), leveraging orbital mechanics for extensive coverage independent of terrestrial constraints. LEO altitudes, typically 500-800 km for Earth observation missions, position satellites close enough for detailed imaging while allowing sun-synchronous paths to minimize illumination variability across revisits.[19][20] However, the rapid orbital velocity (approximately 7.8 km/s) necessitates multiple passes or constellations to achieve practical temporal resolution, as single satellites cover only narrow swaths per orbit. GEO platforms, stationed at 35,786 km, match Earth's rotation for stationary viewpoints over equatorial regions, providing uninterrupted hemispheric views but with inherent resolution limits due to greater distance.[21]

The Landsat series illustrates LEO capabilities, with Landsat 9, launched September 27, 2021, operating at 705 km in a near-polar, sun-synchronous orbit, yielding a 185 km swath width and 16-day revisit interval that halves to 8 days when paired with Landsat 8's offset phasing.[22][23][24] The Copernicus Sentinel constellation, commencing with Sentinel-1A on April 3, 2014, deploys pairs in 693-700 km LEO orbits 180° apart, enabling 6-12 day global revisits scalable with additional units for enhanced temporal density.[25][26] Commercial LEO fleets, such as Planet Labs' Dove nanosatellites, amplify coverage via large constellations exceeding 150 units at varied altitudes under 600 km, delivering near-daily imaging of all land surfaces since achieving full deployment around 2017.[27][28]

These systems exploit orbital trade-offs: proximity in LEO boosts ground resolution for fixed apertures but constrains instantaneous field-of-view to tens of kilometers, demanding high orbital inclination for pole-to-pole access and increasing vulnerability to atmospheric drag that shortens mission life without propulsion.[29] Elevated altitudes expand swaths for broader synoptic data but dilute resolution proportionally to distance, elevating requirements for larger apertures or enhanced signal processing to counter diminished received power from inverse-square attenuation.[30] GEO satellites like the GOES-R series prioritize persistence, imaging full Western Hemisphere disks every 5-15 minutes from fixed positions, ideal for real-time monitoring of transient events such as storms, though pixel scales degrade to 0.5-4 km owing to the 36,000 km vantage.[21][31] This configuration avoids revisit gaps but limits utility for fine-scale terrestrial features, underscoring the causal interplay where altitude inversely scales resolution and power budgets while directly enhancing coverage continuity.[32]
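The velocity and period figures quoted for LEO and GEO follow from the circular-orbit relation v = sqrt(GM/r); a small illustrative Python check (altitudes and constants approximate):

```python
import math

GM = 3.986004418e14    # Earth's gravitational parameter, m^3 s^-2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def orbit(alt_m):
    """Circular-orbit speed (m/s) and period (minutes) at a given altitude."""
    r = R_EARTH + alt_m
    v = math.sqrt(GM / r)
    return v, 2 * math.pi * r / v / 60

print(orbit(705_000))     # Landsat-like LEO: ~7.5 km/s, ~99 min
print(orbit(35_786_000))  # geostationary: ~3.1 km/s, ~1436 min (sidereal day)
```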
[Image: USAF U-2 aircraft, precursor to NASA's ER-2]

Airborne platforms encompass manned aircraft and unmanned aerial vehicles (UAVs) deployed for remote sensing, operating at altitudes from tens of meters to over 20 km to deliver enhanced spatial resolution and deployment flexibility relative to spaceborne systems. These platforms facilitate rapid-response missions and repeated observations over targeted regions, with manned high-altitude aircraft like NASA's ER-2 flying at approximately 21 km (70,000 feet) to simulate satellite perspectives while minimizing atmospheric interference, as the aircraft operates above 95% of the Earth's atmosphere.[33][34] The ER-2 supports flights of up to 12 hours and carries diverse sensors for Earth observation, including radar and hyperspectral instruments.[34]

UAVs, such as the DJI Matrice series, enable low-altitude targeted surveys, integrating hyperspectral sensors for detailed spectral analysis in applications like environmental monitoring.[35] For instance, the DJI Matrice 300 RTK has been used to acquire hyperspectral data over protected areas, offering precise control over flight paths and sensor orientation.[35] At altitudes around 100 m, these systems achieve ground sample distances (GSD) in the sub-meter range, far surpassing typical satellite resolutions for fine-scale features.[36]

Key advantages of airborne platforms include superior revisit frequency for dynamic regional monitoring and reduced costs compared to satellite operations for localized tasks, allowing on-demand data collection without orbital constraints.[5][37] Recent developments in 2024-2025 emphasize UAV-satellite data fusion techniques, such as pixel-based and feature-based integration, to combine high-resolution UAV imagery with broader satellite coverage for multi-scale analysis in areas like crop stress detection.[38][39] This fusion enhances temporal and spatial data complementarity, addressing limitations in individual platform revisit times and coverage.[40]

Ground-Based and Proximal Platforms
Ground-based remote sensing employs sensors mounted on static structures like tripods, towers, or scaffolds, or mobile platforms such as vehicles, to collect data directly from terrestrial surfaces at short ranges, typically enabling resolutions down to centimeters.[41] These platforms facilitate detailed measurements of surface properties, including spectral reflectance via tripod-mounted spectrometers and structural features through terrestrial laser scanning (TLS) systems, which emit laser pulses to map vegetation height or terrain topography with sub-centimeter precision.[42] Proximal sensing, a subset operating at distances under a few meters, often integrates optical sensors like hyperspectral radiometers or proximal LiDAR to capture near-field data on soil moisture, crop canopies, or atmospheric profiles, minimizing path-length effects inherent in elevated or orbital systems.[43]

In practice, these platforms serve as critical tools for ground truthing, where proximal spectrometers measure in-situ reflectance spectra to calibrate and validate models derived from airborne or spaceborne imagery, ensuring spectral signatures align with empirical surface interactions rather than distorted proxies.[44] For instance, vehicle-mounted proximal sensors, such as those combining electromagnetic induction and optical spectroscopy, provide simultaneous soil property profiles during field campaigns, correlating proximal data with laboratory analyses to refine remote sensing algorithms for variables like organic carbon content.[45] Acoustic sensors, deployed ground-based for near-surface applications, detect subsurface features via sound wave propagation, complementing optical methods in environments with high particulate interference.[46]

The causal advantage of ground-based and proximal approaches lies in their negligible atmospheric traversal, which empirically reduces signal attenuation from absorption and scattering—effects quantified in proximal soil sensing studies as lowering error variances by up to 20-30% compared to aerial equivalents due to direct surface-to-sensor coupling.[47] This proximity preserves raw electromagnetic or acoustic signatures, enabling higher fidelity in causal inference for local phenomena, such as vegetation water content via proximal fluorescence measurements, without the confounding variables of tropospheric water vapor or aerosols prevalent in longer-range acquisitions.[48] Consequently, these methods underpin precision agriculture and site-specific validation, where empirical datasets from proximal platforms anchor broader remote sensing interpretations against overgeneralized atmospheric models.[49]

Sensing Technologies
Passive Sensing Methods
Passive remote sensing methods detect electromagnetic radiation emitted or reflected by natural sources, such as solar illumination on Earth's surface or thermal emissions from terrestrial objects, without the sensor providing its own energy source. These techniques rely on the physical principles of radiative transfer, where sensors measure radiance arriving from the target scene after interaction with the atmosphere. Common implementations include optical systems for reflected sunlight and radiometers for emitted thermal infrared, operating primarily in the visible to shortwave infrared (0.4–2.5 μm) and thermal infrared (8–14 μm) spectral regions, respectively.[50][3]

Optical passive sensors, such as multispectral cameras, capture reflected solar radiation in discrete wavelength bands to quantify surface reflectance properties, enabling material identification through spectral contrast. For instance, the Thematic Mapper instrument on Landsat 5, launched on March 1, 1984, acquired data in seven bands with 30-meter spatial resolution, supporting long-term monitoring of land cover changes despite its decommissioning in January 2013.[51] Advanced hyperspectral variants extend this to hundreds of contiguous narrow bands for finer spectral resolution; the PRISMA satellite, launched March 22, 2019, by the Italian Space Agency, images in over 200 bands from 400 to 2500 nm at 30-meter resolution, enhancing discrimination of subtle biochemical signatures in vegetation and minerals.[52] Signal-to-noise ratio (SNR) in these systems is fundamentally limited by photon arrival statistics, detector noise, and atmospheric attenuation, with empirical data showing SNR degradation at high solar zenith angles due to reduced incident flux.[53]

Thermal infrared radiometers measure blackbody-like emissions from surfaces, governed by Planck's law and the Stefan-Boltzmann relation, under which total emitted radiance scales with the fourth power of kinetic temperature, modulated by emissivity. These sensors detect heat contrasts day or night, independent of sunlight, but remain constrained by atmospheric absorption in water vapor bands and by cloud opacity, which blocks surface emissions entirely.[54] SNR in thermal systems varies with target temperature differential and integration time, often achieving 100–300 in clear conditions for mid-resolution sensors, though empirical tests reveal drops below 50 under partial cloud interference from scattered thermal noise.[55]

Overall, passive methods' efficacy hinges on external illumination or emission strength, imposing inherent temporal and weather dependencies absent in active counterparts, as validated by field-calibrated datasets showing null returns in darkness for reflective bands.[56]
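The Planck-law relationship underlying thermal radiometry can be written out directly. The sketch below assumes an ideal blackbody (unit emissivity) and no atmospheric attenuation, both simplifications relative to real retrievals, and computes the 10 μm spectral radiance of a 300 K surface before inverting it to recover a brightness temperature:

```python
import math

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m: float, temp_k: float) -> float:
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1 (Planck's law)."""
    prefactor = 2 * H * C**2 / wavelength_m**5
    return prefactor / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1)

def brightness_temperature(wavelength_m: float, radiance: float) -> float:
    """Invert Planck's law: the blackbody temperature reproducing the radiance."""
    prefactor = 2 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * math.log(prefactor / radiance + 1))

lam = 10e-6                        # 10 um, inside the 8-14 um thermal window
L = planck_radiance(lam, 300.0)    # emitted radiance of a 300 K blackbody
print(f"B(10 um, 300 K) = {L:.3e} W m^-2 sr^-1 m^-1")
print(f"recovered brightness temperature = {brightness_temperature(lam, L):.1f} K")
```

For real surfaces the measured radiance is scaled by emissivity, so an uncorrected brightness temperature underestimates the kinetic temperature.

Active Sensing Methods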
Active remote sensing methods employ sensors that actively transmit electromagnetic energy toward a target and detect the backscattered signal to derive information about the target's properties, distance, and motion. Unlike passive methods reliant on natural illumination, active techniques operate independently of solar or ambient light, enabling continuous data acquisition during darkness or in shadowed areas. Microwave-based systems, such as radar, additionally penetrate atmospheric clouds, rain, and vegetation to varying degrees depending on wavelength, providing all-weather capabilities essential for consistent monitoring.[57]

Radar systems, operating in the microwave portion of the spectrum (wavelengths from millimeters to meters), transmit pulses or continuous waves and measure the time delay and phase shift of echoes for ranging and imaging. Synthetic Aperture Radar (SAR) enhances resolution by simulating a large aperture through platform motion, achieving ground resolutions down to meters from spaceborne platforms. For instance, the European Space Agency's Sentinel-1 satellites, equipped with C-band SAR (wavelength approximately 5.6 cm), provide interferometric wide-swath imaging at resolutions of 5 m by 20 m, supporting applications requiring high temporal revisit rates of 6-12 days. Longer wavelengths, such as L-band (around 23 cm), exhibit greater penetration into vegetation canopies, with empirical studies demonstrating signal interaction with underlying terrain in forested areas up to several meters deep, as evidenced by backscatter analyses from spaceborne missions. Interferometric SAR (InSAR) exploits phase differences between multiple acquisitions to generate digital elevation models with centimeter-level accuracy over large areas.[58][59]

Doppler radar variants utilize the frequency shift in returned signals caused by relative motion between the sensor and target, enabling velocity measurements with precisions on the order of millimeters per second. This effect arises from the compression or extension of wavefronts, directly quantifying radial speeds for detecting dynamic phenomena like surface deformation or fluid flows. In remote sensing contexts, Doppler processing in SAR modes supports motion estimation, complementing amplitude-based imaging.[60]

Light Detection and Ranging (LiDAR) systems transmit short laser pulses, typically in the near-infrared spectrum (e.g., 1064 nm), and compute distances from the round-trip time of flight, yielding high-precision three-dimensional point clouds. Spaceborne LiDAR, such as NASA's ICESat-2 mission launched on September 15, 2018, employs the Advanced Topographic Laser Altimeter System (ATLAS) to measure surface elevations with vertical accuracies better than 10 cm along its strong beam tracks, whose footprints are roughly 17 m in diameter. While LiDAR offers sub-meter horizontal resolutions and dense sampling for topographic mapping, its penetration is limited to translucent media like sparse vegetation or snow, unlike radar's broader subsurface access in certain bands. Active methods' self-illumination principle ensures direct causal measurement of target response, minimizing dependencies on external variables like solar geometry.[61]
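Both ranging and Doppler estimation reduce to one-line relations: one-way range R = c·t/2 from the round-trip delay t, and a two-way Doppler shift f_d = 2·v_r/λ for a monostatic radar observing radial velocity v_r. A minimal sketch with illustrative numbers:

```python
C = 2.99792458e8  # speed of light, m/s

def range_from_delay(round_trip_s: float) -> float:
    """One-way range from a pulse's round-trip time: R = c * t / 2."""
    return C * round_trip_s / 2.0

def doppler_shift(radial_velocity_ms: float, wavelength_m: float) -> float:
    """Two-way Doppler shift for a monostatic radar: f_d = 2 * v_r / lambda."""
    return 2.0 * radial_velocity_ms / wavelength_m

# A pulse returning after ~4.7 ms corresponds to a ~705 km slant range.
print(f"range:   {range_from_delay(4.7e-3) / 1e3:.0f} km")
# A 1 m/s radial motion at C-band (5.6 cm) shifts the echo by ~36 Hz.
print(f"Doppler: {doppler_shift(1.0, 0.056):.1f} Hz")
```

Multispectral, Hyperspectral, and Radar Techniques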
Multispectral remote sensing acquires reflectance data across a limited number of discrete, relatively broad spectral bands, typically 3 to 10, enabling differentiation of surface materials by exploiting distinct reflectance patterns in visible, near-infrared, and sometimes thermal wavelengths.[7] The Moderate Resolution Imaging Spectroradiometer (MODIS), deployed on NASA's Terra and Aqua satellites since 1999 and 2002 respectively, exemplifies this approach with 36 bands spanning 0.4 to 14.5 μm and nadir resolutions of 250 m (bands 1-2), 500 m (bands 3-7), and 1 km (bands 8-36).[62] This configuration balances coverage and computational feasibility but limits fine-grained material identification due to coarser spectral sampling.

Hyperspectral remote sensing advances material discrimination by capturing data in hundreds of contiguous narrow bands, often 200 or more, yielding continuous spectra that reveal subtle absorption features tied to molecular composition.[63] The EnMAP satellite, launched April 1, 2022, by the German Aerospace Center, delivers 246 bands from 420 to 2450 nm at 30 m spatial resolution, calibrated for quantitative spectroscopic analysis.[64] Assessments as recent as October 2025 confirm EnMAP's utility in deriving detailed endmember libraries for sub-pixel material mapping, with innovations in preprocessing enhancing signal-to-noise ratios for low-reflectance targets.[65] Techniques like spectral unmixing further exploit this density by linearly decomposing mixed pixels into pure endmember spectra and abundance fractions, assuming pixels comprise convex combinations of spectrally distinct components, thus enabling resolution of heterogeneity below the native pixel scale.[66]

Radar techniques complement optical methods through active microwave illumination, penetrating clouds and operating day or night to probe surface geometry via backscattering. Synthetic aperture radar (SAR) polarimetry quantifies roughness by transmitting and receiving in orthogonal polarizations (e.g., HH, VV, HV), yielding a covariance matrix decomposable into surface, volume, and double-bounce scattering contributions per the Pauli or Freeman-Durden models.[67] Entropy-alpha decomposition, for instance, parameterizes roughness via the alpha angle derived from eigenvector analysis of the coherency matrix, with higher entropy indicating diffuse scattering from irregular surfaces.[68] Fully polarimetric data at X-band (8-12 GHz), as in airborne systems, resolve roughness variations on scales comparable to wavelength, distinguishing smooth from corrugated terrains through cross-polarization ratios exceeding -20 dB for rough interfaces.[69]
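The linear unmixing model described above can be sketched in a few lines. The endmember spectra below are invented illustrative reflectances, not a published spectral library, and the solver is a plain least squares with clipping and renormalization rather than the fully constrained formulations used operationally:

```python
import numpy as np

# Toy endmember matrix E (rows: 4 bands; columns: soil, vegetation, water).
E = np.array([
    [0.30, 0.05, 0.020],   # red
    [0.35, 0.50, 0.010],   # near-infrared
    [0.40, 0.25, 0.010],   # shortwave infrared 1
    [0.38, 0.15, 0.005],   # shortwave infrared 2
])

def unmix(pixel: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """Abundance fractions for pixel ~= E @ a: least squares, then clip to
    non-negative and renormalize to sum to one (a crude stand-in for
    constrained solvers such as fully constrained least squares)."""
    a, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()

truth = np.array([0.6, 0.3, 0.1])   # 60% soil, 30% vegetation, 10% water
pixel = E @ truth                   # noiseless mixed-pixel spectrum
print("recovered abundances:", np.round(unmix(pixel, E), 3))
```

Data Management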
Data Acquisition Characteristics
Remote sensing data acquisition yields raw datasets characterized primarily by four resolution types: spatial, spectral, radiometric, and temporal. Spatial resolution determines the smallest discernible feature on the ground, typically measured in meters per pixel, with values ranging from sub-meter for high-end commercial satellites to hundreds of meters for coarse sensors like MODIS.[3][70] Spectral resolution specifies the number and width of electromagnetic bands captured, enabling differentiation of materials based on reflectance signatures, as in multispectral systems with 4-10 bands or hyperspectral with hundreds.[3][71] Radiometric resolution, quantified by bit depth (e.g., 8-bit yielding 256 gray levels or 12-bit offering 4096), governs the sensor's ability to distinguish subtle intensity variations, with higher depths preserving fidelity in low-contrast scenes but increasing data size.[3][72] Temporal resolution reflects revisit frequency, often 1-16 days for sun-synchronous orbits like Landsat, constrained by orbital mechanics and swath width.[3][73]

Raw data is commonly stored in self-describing formats like HDF5, which supports hierarchical structures for multidimensional arrays, metadata, and extensibility, as used in NASA Earth Observing System satellites for efficient handling of petabyte-scale archives.[74][75] Accompanying metadata includes geolocation coordinates, acquisition timestamps, sensor orientation, and platform ephemeris, with absolute geolocation accuracy varying from meters in optical systems to sub-meter in SAR due to precise range-azimuth measurements.[76][77]

Geometric distortions inherent to raw acquisitions arise from platform motion, off-nadir viewing, and terrain relief, manifesting as relief displacement in optical imagery or foreshortening and layover in side-looking radar, independent of post-acquisition correction.[78][79] These effects scale with elevation and incidence angle, potentially shifting features by tens of pixels in uncorrected data from airborne or agile satellites.[80]

Acquisition from satellite constellations generates vast volumes, often exceeding petabytes annually—e.g., NASA's Earth science archive at 40 PB as of 2020—balancing extensive coverage against per-scene quality trade-offs like reduced signal-to-noise in miniaturized CubeSats versus dedicated platforms.[81][82] Finer spatial resolution inflates volume quadratically with linear pixel size, necessitating onboard compression or selective downlinking to manage bandwidth limits.[83][84]
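The interplay of bit depth, band count, and pixel size with data volume is straightforward arithmetic. A sketch for a hypothetical Landsat-like scene (uncompressed sizes; real products vary with framing, overhead, and compression):

```python
def scene_size_gb(rows: int, cols: int, bands: int, bits_per_sample: int) -> float:
    """Uncompressed scene size in gigabytes (10^9 bytes)."""
    return rows * cols * bands * bits_per_sample / 8 / 1e9

# A 185 km swath at 30 m GSD is roughly 6166 x 6166 pixels.
px = int(185e3 / 30)
print(f"7 bands,  8-bit: {scene_size_gb(px, px, 7, 8):.2f} GB")
print(f"7 bands, 12-bit: {scene_size_gb(px, px, 7, 12):.2f} GB")
# Halving the GSD to 15 m quadruples the pixel count for the same area:
print(f"15 m GSD, 12-bit: {scene_size_gb(2 * px, 2 * px, 7, 12):.2f} GB")
```

Preprocessing and Calibration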
Preprocessing in remote sensing involves initial corrections to raw sensor data to mitigate distortions arising from instrumental, environmental, and platform-specific factors, enabling conversion of digital numbers (DN) to physically meaningful quantities such as radiance or reflectance.[85] Key error sources include sensor calibration inaccuracies and drift due to degradation over time, which can introduce systematic biases if unaddressed from first principles of radiative transfer.[85] These steps precede higher-level analysis and focus on empirical validation against ground truth to achieve sub-pixel accuracy where feasible.[86]

Radiometric calibration standardizes sensor response by transforming raw DN values into at-sensor radiance or top-of-atmosphere reflectance, often using pre-launch laboratory measurements adjusted via in-flight vicarious methods. Vicarious calibration employs stable ground sites, such as the Dunhuang test site in China's Gobi Desert, where simultaneous surface reflectance measurements from field instruments validate satellite data; for instance, experiments on December 14, 2021, at Dunhuang assessed multispectral imager accuracy to within 5% for select bands.[87] Networks like RadCalNet provide automated, global vicarious reference for absolute calibration, reducing reliance on manufacturer coefficients prone to post-launch drift.[86] Destriping addresses striping artifacts from detector non-uniformity or calibration errors in pushbroom scanners, employing variational models that minimize stripe directionality while preserving edges, as demonstrated in hyperspectral data where stripe noise arises from sensor response variations.[88]

Geometric rectification corrects spatial distortions from sensor viewing geometry, platform motion, and terrain relief, typically through orthorectification that projects imagery onto a map grid using digital elevation models (DEMs) and ground control points (GCPs). This process removes relief displacement, achieving accuracies often below 1 pixel RMSE when validated against independent GCPs; for example, assessments of orthorectified products report RMSE values of 0.5-2 pixels depending on DEM resolution and sensor type.[89] Empirical validation via RMSE quantifies residual errors, with lower values indicating effective tie-point distribution and model fidelity, though unmodeled platform variations can propagate if not accounted for in bundle adjustment.[90]

Atmospheric correction compensates for scattering and absorption effects that attenuate and alter upwelling signals, converting at-sensor radiance to surface reflectance via radiative transfer models. The FLAASH algorithm, based on MODTRAN4, performs this for visible to shortwave infrared hyperspectral and multispectral data by inverting path radiance and transmittance along the line-of-sight, incorporating aerosol optical depth estimates from image histograms or ancillary data; it handles adjacency effects and nonuniform atmospheres, yielding corrections accurate to 2-5% in validation against in-situ spectra.[91] Such methods prioritize causal error propagation from molecular and particulate scattering, validated empirically rather than assumed neutral, to ensure downstream usability.[92]
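The radiometric step has a standard closed form: at-sensor radiance L = gain · DN + offset, and top-of-atmosphere reflectance ρ = π · L · d² / (ESUN · cos θ_s), with d the Earth-Sun distance in astronomical units and θ_s the solar zenith angle. The sketch below uses invented calibration coefficients and solar irradiance; real values come from each scene's metadata:

```python
import math

def dn_to_radiance(dn: float, gain: float, offset: float) -> float:
    """At-sensor spectral radiance from a raw digital number: L = gain * DN + offset."""
    return gain * dn + offset

def toa_reflectance(radiance: float, esun: float, sun_elevation_deg: float,
                    earth_sun_dist_au: float = 1.0) -> float:
    """Top-of-atmosphere reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_s))."""
    theta_s = math.radians(90.0 - sun_elevation_deg)   # zenith = 90 deg - elevation
    return math.pi * radiance * earth_sun_dist_au**2 / (esun * math.cos(theta_s))

# Illustrative values only, not from any specific sensor's metadata:
L = dn_to_radiance(dn=128, gain=0.75, offset=-2.0)        # W m^-2 sr^-1 um^-1
rho = toa_reflectance(L, esun=1536.0, sun_elevation_deg=45.0)
print(f"radiance = {L:.1f} W m^-2 sr^-1 um^-1, TOA reflectance = {rho:.3f}")
```

Atmospheric correction then carries ρ from top-of-atmosphere to surface reflectance via the radiative transfer inversions described above.

Analysis Pipelines and Levels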
Remote sensing data analysis pipelines follow a hierarchical structure, transforming raw observations into actionable insights through sequential processing stages. These pipelines typically adhere to standardized levels defined by agencies like NASA, where Level 0 (L0) consists of reconstructed, unprocessed instrument data at full resolution, including both signal and ancillary data without calibration.[93] Level 1 (L1) data incorporate radiometric and geometric corrections, yielding calibrated and geolocated instrument measurements suitable for initial analysis.[93] Higher levels build upon these: Level 2 (L2) derives specific geophysical variables, such as surface reflectance or vegetation indices, from L1 inputs using algorithms tailored to sensor characteristics; Level 3 (L3) aggregates L2 data onto uniform spatiotemporal grids for statistical analysis; and Level 4 (L4) integrates model assimilations or simulations, producing synthesized outputs like climate forecasts that combine remote sensing with ground truth or numerical models.[93][94]

Core methods within these pipelines include pixel-based or object-based classification to categorize land cover or features, employing supervised techniques (e.g., maximum likelihood or support vector machines trained on labeled datasets) or unsupervised approaches (e.g., clustering via k-means) to partition imagery. Change detection pipelines compare multi-temporal datasets to identify alterations, such as post-classification comparison or spectral differencing, often benchmarked by metrics like the Kappa coefficient, which measures agreement between classified maps beyond chance, with values above 0.8 indicating strong performance in validated studies.[95] These methods propagate through levels, ensuring derived products at L2 and above retain traceability to raw inputs via metadata on processing history and algorithmic parameters.[96]

Recent advancements incorporate artificial intelligence to automate and enhance pipeline efficiency, particularly in onboard processing for real-time applications; for instance, zero-shot AI models enable automated image segmentation without extensive retraining, reducing computational demands for large-scale remote sensing datasets as demonstrated in 2025 frameworks.[97] Deep learning architectures, such as convolutional neural networks, have been integrated into classification and change detection at L2 stages, improving accuracy in complex scenes like urban expansion monitoring by handling nonlinear feature interactions that traditional methods overlook.[98]

Uncertainty propagation remains integral, employing first-order error analysis or Monte Carlo simulations to quantify how radiometric noise or geometric distortions at L0 amplify into L4 model outputs, thereby supporting causal inference in downstream applications like environmental modeling.[99][100] This rigorous handling ensures derived products include error bounds, with peer-reviewed benchmarks showing propagated uncertainties typically under 5-10% for well-calibrated sensors in L2 vegetation indices.[101]
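The Kappa coefficient cited above has a simple closed form, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance from the confusion-matrix margins. A minimal sketch with an invented three-class accuracy assessment:

```python
import numpy as np

def kappa(confusion: np.ndarray) -> float:
    """Cohen's kappa: map-versus-reference agreement beyond chance."""
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    p_chance = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (p_observed - p_chance) / (1 - p_chance)

# Rows: reference classes; columns: classified map (toy values).
cm = np.array([
    [50,  3,  2],
    [ 4, 45,  1],
    [ 1,  2, 42],
])
print(f"kappa = {kappa(cm):.3f}")   # ~0.87 here; above 0.8 indicates strong agreement
```

Applications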
Environmental Monitoring and Earth Science
Remote sensing provides empirical observations of Earth's dynamic environmental systems, enabling the quantification of changes in land, ocean, and atmospheric variables over decadal scales. Satellite platforms deliver repeatable, global coverage that surpasses ground-based networks in spatial extent, supporting causal analyses of phenomena like vegetation dynamics and hydrological cycles. For instance, time-series data from missions such as Landsat have documented forest cover losses, while altimetry and ocean color sensors track sea level and productivity shifts, offering baselines for validating process-based models.[102][103]

In climate tracking, radar altimetry from the TOPEX/Poseidon and Jason series satellites has measured global mean sea level rise at 111 mm from 1993 to 2023, with the rate doubling from 2.1 mm per year initially to 4.5 mm per year by 2024, attributed to thermal expansion and ice melt contributions discernible through precise orbit and instrument calibrations.[104][105] Landsat-derived indices, such as normalized difference vegetation index (NDVI) time-series, have quantified deforestation rates in the Amazon, where annual losses exceeded 10,000 km² between 2019 and 2022, informing policy responses despite variability from seasonal cloud cover and limits on detecting selective logging.[106] These datasets underpin IPCC reports by providing observational constraints on essential climate variables, such as land degradation neutrality progress, though integration requires cross-validation with in-situ measurements to mitigate algorithmic assumptions.[107]

Oceanographic applications leverage passive sensors like MODIS on the Aqua satellite to estimate surface chlorophyll-a concentrations via bio-optical algorithms, proxying phytoplankton biomass and revealing spatiotemporal patterns in marine productivity linked to nutrient upwelling and temperature stratification.[103] Such data highlight global baselines for biodiversity hotspots, yet optical methods suffer from cloud cover biases that skew tropical and high-latitude sampling, potentially underestimating variability by up to 15-20% in discharge or productivity estimates without active radar supplementation.[108] Empirical strengths lie in long-term consistency, as evidenced by multi-decadal archives, but causal inferences demand caution against overreliance, given propagation of preprocessing errors into downstream analyses and the need for ground-truthed calibration to distinguish signal from noise in heterogeneous terrains.[6]

Military and Intelligence Operations
Remote sensing has been integral to military reconnaissance since the Cold War era, enabling surveillance over denied territories without risking personnel. The Corona program, initiated by the U.S. in 1959, launched its first successful mission on August 18, 1960, from Vandenberg Air Force Base, capturing photographic imagery via film-return satellites that produced over 800,000 images across 145 missions until 1972, providing critical intelligence on Soviet capabilities.[109][110] Declassified in 1995, these images demonstrated remote sensing's capacity for strategic monitoring, filling gaps left by U-2 overflights after the 1960 U-2 incident. The U-2 aircraft, operational since 1956, conducted high-altitude missions up to 70,000 feet, employing optical and radar sensors for signals intelligence and imagery in operations like the 1991 Gulf War, where it delivered near-real-time data to commanders.[111]

Synthetic aperture radar (SAR) enhances military operations by providing all-weather, day-night imaging capable of penetrating camouflage and foliage to detect concealed targets, such as vehicle movements or underground structures. Deployed on platforms from aircraft to satellites, SAR supports target acquisition, surveillance, and battle damage assessment, as evidenced by its use in tracking enemy positions and infrastructure in modern conflicts.[112][113] While susceptible to jamming, empirical successes in operations underscore its strategic value, offering superior situational awareness over optical methods limited by weather.[114]

In contemporary intelligence, remote sensing verifies arms control treaties through national technical means, including satellite monitoring of nuclear sites and missile deployments, as protected under agreements like the 1972 SALT I treaty.[115] During the 2022 Russia-Ukraine conflict, commercial providers like Maxar supplied high-resolution imagery under U.S. government contracts, enabling real-time analysis of troop movements, infrastructure damage, and debunking of propaganda, with datasets confirming widespread destruction via change detection algorithms.[116][117] These integrations highlight hybrid commercial-military models, where firms secure multimillion-dollar defense deals for geospatial intelligence, augmenting national assets despite vulnerabilities like signal interference.[118]

Agriculture, Resource Management, and Disaster Response
Remote sensing enables precision agriculture by providing data for site-specific crop management, such as using the Normalized Difference Vegetation Index (NDVI) derived from satellite imagery to predict yields. For instance, time-integrated NDVI from Landsat imagery has been modeled to forecast wheat yields through linear mixed-effects approaches, correlating vegetation health over growing seasons with harvest outcomes.[119] Empirical studies demonstrate that integrating multispectral remote sensing with machine learning improves yield estimation accuracy, allowing farmers to optimize fertilizer and water application, thereby reducing input costs by up to 20-30% while maintaining or increasing productivity.[120][121] Variable rate technology guided by these data minimizes nutrient losses and soil compaction, enhancing long-term soil resilience without yield penalties.[122]

In resource management, remote sensing supports mining surveys by mapping surface alterations and vegetation recovery post-extraction. Satellite and aerial imagery assess land disturbance, with hyperspectral data identifying mineral compositions for exploration efficiency.[123] Case studies from the U.S. Geological Survey illustrate how multi-temporal remote sensing tracks mine site reclamation, quantifying vegetation regrowth rates and erosion risks to inform regulatory compliance and sustainable practices.[124] This approach reduces exploratory drilling needs by prioritizing high-potential areas, though accuracy depends on resolution matching terrain variability.

For disaster response, remote sensing facilitates rapid damage assessment and early warnings, as seen in the Copernicus Emergency Management Service's activation for the February 2023 Turkey-Syria earthquakes (magnitudes 7.8 and 7.5), where Sentinel-1 synthetic aperture radar generated displacement maps across affected zones within days.[125][126] The Famine Early Warning Systems Network (FEWS NET) employs satellite-derived vegetation indices to monitor drought impacts on crops, enabling predictions of food insecurity phases that guide aid distribution in regions like East Africa.[127] However, limitations include data latency from processing delays, which can hinder real-time response to acute events, and reduced efficacy in rugged terrain where cloud cover or topographic shadows obscure optical sensors.[128] These constraints underscore the need for complementary active sensing methods like radar to ensure reliable coverage.
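NDVI, used throughout these applications, is computed per pixel from red and near-infrared surface reflectances as (NIR − Red) / (NIR + Red). A minimal sketch with illustrative reflectance values:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects strongly in NIR and absorbs red, pushing
    NDVI toward +1; bare soil sits near zero and water below it."""
    nir = nir.astype(float)
    red = red.astype(float)
    return np.where(nir + red == 0.0, 0.0, (nir - red) / (nir + red))

# Illustrative reflectances for three pixels: dense crop, bare soil, water.
nir = np.array([0.50, 0.30, 0.02])
red = np.array([0.05, 0.25, 0.03])
print(np.round(ndvi(nir, red), 2))   # -> [ 0.82  0.09 -0.2 ]
```

Urban Planning, Infrastructure, and Commercial Uses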
Remote sensing technologies, including multispectral imaging and LiDAR, enable detailed mapping of urban land use and expansion patterns, supporting planners in assessing growth trajectories and zoning decisions. For instance, satellite-derived data processed through platforms like Google Earth Engine have been used to quantify urban sprawl in regions such as Ambon City, Indonesia, by analyzing Landsat imagery from 1990 to 2020 to detect impervious surface increases and inform sustainable development strategies.[129] Similarly, in Jakarta, Earth Engine applications have mapped sprawl over three decades, revealing annual expansion rates exceeding 5% in peri-urban areas through classification of built-up versus vegetated lands.[130] These tools provide repeatable, large-scale analyses that traditional ground surveys cannot match in scope or frequency, though initial data processing requires computational expertise.[131]

In infrastructure management, LiDAR systems facilitate non-contact inspections of critical assets like bridges, generating high-resolution 3D point clouds to detect deformations, cracks, and corrosion without halting traffic. A Transportation Research Board study demonstrated mobile LiDAR's efficacy in capturing structural geometries during routine scans, achieving sub-centimeter accuracy for change detection over multiple inspections.[132] For example, drone-mounted LiDAR has been applied to assess bridge decks and towers, reducing inspection times from days to hours while minimizing worker exposure to hazards, as evidenced in U.S. Department of Transportation pilots.[133] Such applications enhance predictive maintenance but face limitations from high equipment costs, often exceeding $100,000 per system, and atmospheric interference in adverse weather.[134]

Traffic analysis in urban settings benefits from remote sensing via satellite and aerial imagery, allowing extraction of vehicle counts, speeds, and flow patterns across entire cities. High-resolution WorldView satellite data, combined with deep learning algorithms, has enabled monitoring of traffic volumes at scales beyond fixed sensor networks, as shown in studies of UK urban intersections where daily vehicle densities were estimated with 85-90% accuracy.[135] Thermal and optical remote sensing further quantifies congestion impacts, such as heat island effects from roadways during low-traffic periods like COVID-19 lockdowns, aiding in infrastructure capacity planning.[136] These methods offer efficiency gains over manual counts but are constrained by cloud cover obscuring optical sensors and the need for ground-truth validation to mitigate algorithmic errors in complex scenes.[137]

Commercially, remote sensing drives revenue through services like hyperspectral detection for hydrocarbon exploration and spill response, where spectral signatures distinguish oil from water with over 90% classification accuracy in controlled tests.[138] The global remote sensing technology market, encompassing these applications, is projected to reach $21.11 billion in 2025, fueled by demand in urban and industrial sectors for efficient, scalable data over labor-intensive alternatives.[139] Despite advantages in rapid deployment—such as UAV hyperspectral surveys covering kilometers in minutes—adoption barriers include data processing expenses and regulatory hurdles for commercial satellite operations.[140]

Historical Development
Pre-20th Century Origins
The conceptual precursors to remote sensing emerged from 17th- and 19th-century advancements in optics and aerial observation, enabling distant acquisition of environmental data without physical contact. Isaac Newton's 1672 experiments with prisms demonstrated that white light disperses into a spectrum of colors, revealing the heterogeneous nature of sunlight and establishing foundational principles of refraction and spectral decomposition that underpin later spectroscopic identification of materials via reflected or emitted radiation.[141] These optical insights, grounded in empirical refraction measurements, facilitated causal understanding of how electromagnetic interactions with matter produce detectable signatures, a core mechanism in remote sensing.[142]

By the mid-19th century, the invention of photography intersected with ballooning to produce the first elevated imagery. In 1858, French photographer Gaspard Félix Tournachon (Nadar) captured the earliest known aerial photograph from a tethered balloon over the Bièvre Valley near Paris at an altitude of about 1,200 feet (365 meters), using wet-collodion plates to record landscape features from afar. This marked an initial application of non-contact imaging for topographic depiction, though limited by exposure times and balloon stability, with Nadar's subsequent tethered ascents in 1859-1860 aimed at systematic land surveying despite technical challenges like motion blur.[4]

Military contexts adapted these elevation techniques for reconnaissance during conflicts. In the American Civil War, starting in 1861, Union aeronaut Thaddeus S. C. Lowe conducted balloon ascents—such as his June 18 demonstration over Washington, D.C., at 500 feet (152 meters)—transmitting real-time visual observations of terrain and troop movements via telegraph to ground commanders, providing strategic overviews unattainable from surface positions.[143] Lowe's balloons, inflated with coal gas and tethered for controlled observation, supported over 3,000 ascents by war's end, emphasizing causal advantages in visibility for artillery spotting and enemy positioning without direct exposure, though reliant on human visual interpretation rather than recorded imagery.[144] These efforts highlighted remote sensing's potential for operational intelligence, predating photographic integration in warfare.[145]

Mid-20th Century Advancements
During World War II, military demands accelerated remote sensing through enhanced aerial photography and radar systems for reconnaissance, with Allied forces employing oblique and vertical photography to map enemy positions and infrastructure.[146] Postwar, the U.S. utilized captured German V-2 rockets for suborbital sounding missions from White Sands Proving Ground starting in 1946, equipping them with 35mm motion picture cameras to capture the first ground images from altitudes exceeding 100 km, demonstrating the feasibility of space-based observation.[147]

In the 1950s, the U.S. military developed side-looking airborne radar (SLAR) systems, such as those pioneered by Westinghouse, enabling all-weather terrain imaging from high-altitude aircraft for mapping and surveillance, with operational tests occurring by the mid-decade.[148] The Lockheed U-2, operational from 1956, extended these capabilities with high-resolution photography from 70,000 feet, proving pivotal in the 1962 Cuban Missile Crisis when imagery from October 14 missions revealed Soviet medium-range ballistic missile sites in western Cuba, informing U.S. naval quarantine decisions and averting escalation.[149][150]

The Corona satellite program, initiated in 1959 under CIA auspices, introduced orbital remote sensing with film-return capsules, successfully recovering the first images on August 19, 1960, and producing over 800,000 photographs by its 1972 conclusion, primarily for strategic intelligence during the Cold War; the imagery remained classified until 1995.[151] Paralleling military advances, civilian applications emerged in the 1960s through NASA and USGS aircraft-based multispectral scanning experiments, which tested wavelength-specific sensors for resource identification, directly informing the design of the Earth Resources Technology Satellite-1 (ERTS-1), launched July 23, 1972, as the first satellite multispectral imager.[152]

Late 20th to Early 21st Century Expansion
The launch of the IKONOS satellite on September 24, 1999, marked the advent of commercial high-resolution remote sensing, delivering panchromatic imagery at 1-meter resolution and multispectral data at 4 meters globally.[153] This development privatized access to meter-scale detail previously limited to government programs, spurring applications in mapping and urban analysis while challenging regulatory frameworks on data export.[154] Concurrently, the 1990s saw widespread integration of GPS with remote sensing for precise georeferencing, enabling overlay of satellite imagery with ground-truthed coordinates to correct distortions and enhance feature extraction accuracy in GIS environments.[155]

In 2008, the U.S. Geological Survey opened the Landsat archive to free public access, releasing over 2 million scenes from Landsat 1 through 7 dating back to 1972, which democratized petabyte-scale datasets for global users and accelerated longitudinal studies of land cover change.[156] This policy shift, effective by December 2008, reduced barriers to entry and fostered international collaboration, with download volumes surging from thousands to millions of scenes annually.[157]

The 2010s witnessed explosive growth in small satellite constellations, exemplified by CubeSat deployments for frequent Earth revisits, such as Planet Labs' Dove fleet providing daily global coverage at 3-meter resolution starting around 2014.[158] These low-cost, proliferated systems—numbering hundreds by mid-decade—enabled near-real-time monitoring, contrasting with the infrequent revisits of earlier missions.[159] Remote sensing data volumes escalated from terabytes to approaching exabytes cumulatively by the late 2010s, driven by higher-resolution sensors and denser orbital networks, necessitating advances in cloud-based processing.[160] During the 2020 COVID-19 outbreak, such capabilities facilitated rapid mapping of mobility patterns and urban density shifts via integrated satellite and derived datasets.[161]

[Image: A-Train satellite constellation]

This era's globalization extended to multinational missions, including Europe's Sentinel series from 2014, enhancing data interoperability and coverage equity beyond U.S.-centric archives.[162]

Challenges and Limitations
Technical and Operational Constraints
Remote sensing systems face fundamental physical constraints on spatial resolution due to wave diffraction, where the minimum resolvable angle is approximated by the Rayleigh criterion, θ ≈ 1.22 λ / D, with λ as the wavelength and D as the aperture diameter. For visible-light sensors (λ ≈ 500 nm) on satellites with apertures of 0.5–2 m, this yields angular resolutions of roughly 0.06–0.25 arcseconds, translating to diffraction-limited ground resolutions of about 0.15–1 m at low Earth orbit altitudes of 500–800 km, though practical limits are often coarser due to pixel sampling and atmospheric turbulence.[163][164]

Atmospheric interference severely limits optical remote sensing, as clouds, aerosols, and water vapor attenuate or scatter signals, rendering passive visible and near-infrared imagery unusable over 50–70% of Earth's surface on average, with tropical regions experiencing persistent cloud cover exceeding 80% during certain seasons. Synthetic aperture radar (SAR) mitigates some weather effects but suffers from signal decorrelation in vegetated or dynamic surfaces and speckle noise, reducing effective resolution. Empirical studies report classification error rates for land cover mapping from optical data at 10–30%, depending on vegetation heterogeneity and sensor resolution, with finer classes like shrubs or crops often misclassified due to spectral similarities and mixed pixels.[165][166][167]

Inverting remote sensing measurements to retrieve geophysical parameters—such as surface reflectance or biomass from radiance data—constitutes an ill-posed inverse problem, where multiple surface states can produce identical observations due to non-uniqueness and sensitivity to noise, necessitating prior assumptions or regularization that introduce model-dependent biases. Atmospheric path radiance and bidirectional reflectance effects exacerbate this, with retrieval uncertainties often exceeding 20% for key variables like leaf area index without ground validation.[168][169]

Operational logistics impose additional constraints, including high costs for satellite deployment and maintenance; launching small remote sensing satellites to low Earth orbit runs on the order of thousands of dollars per kilogram, with full missions exceeding $50–100 million including launches, while data downlink and processing add recurring expenses of millions annually. Low Earth orbit platforms, essential for high-resolution imaging, experience atmospheric drag-induced orbital decay, with satellites below 600 km altitude deorbiting within 1–5 years absent propulsion, limiting mission lifetimes and requiring frequent replacements.[170][171][172]
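The diffraction figures above follow mechanically from the Rayleigh criterion; a short sketch reproduces them (diffraction only, ignoring the pixel-sampling and turbulence effects that coarsen practical systems):

```python
import math

ARCSEC_PER_RAD = 206265.0

def rayleigh_angle(wavelength_m: float, aperture_m: float) -> float:
    """Minimum resolvable angle in radians: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

lam = 500e-9   # visible light, as in the text
for D in (0.5, 2.0):
    theta = rayleigh_angle(lam, D)
    print(f"D = {D} m: theta = {theta * ARCSEC_PER_RAD:.2f} arcsec, "
          f"ground resolution = {theta * 500e3:.2f} m at 500 km, "
          f"{theta * 800e3:.2f} m at 800 km")
```

Ethical, Privacy, and Surveillance Controversies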
Remote sensing technologies, particularly high-resolution commercial satellite imagery, have sparked significant ethical debates over privacy erosion, as persistent monitoring capabilities enable the tracking of individual movements without consent. A 2023 study surveying 99 participants highlighted public concerns that commercial satellites' high temporal and spatial resolution—such as daily imaging from constellations like those operated by Planet Labs—could facilitate granular surveillance of private activities, including vehicle tracking and behavioral pattern analysis, potentially conflicting with expectations of seclusion in yards or homes.[173] This capability raises legal and ethical challenges, as unfettered access to such data by private entities or governments could exacerbate national security threats or enable misuse, though few respondents favored unrestricted availability despite its utility.[174] Balancing these risks, proponents argue that anonymization and regulatory frameworks could mitigate harms while preserving societal benefits from Earth observation.[175]

In surveillance applications, remote sensing for ceasefire monitoring has been critiqued for inadvertently incentivizing noncompliance, as lower detection costs for minor violations may encourage parties to test boundaries or escalate subtly. A September 2025 analysis in Surveillance & Society examined how remote sensing technology (RST) in monitored ceasefires—intended to enhance compliance—can motivate new violence through mechanisms like cheaper probing actions, devaluing traditional verification methods, and creating informational asymmetries that provoke retaliation.[176] Empirical evidence from conflict zones suggests that while RST augments observational power, it often fails to deter behavioral changes, potentially undermining fragile truces rather than ensuring peace.[177] This challenges overly optimistic views of surveillance as a panacea, emphasizing causal pathways where monitoring alters incentives in ways that amplify rather than suppress violations, though security gains in verified compliance persist in select cases.[178]

Counterbalancing these concerns, remote sensing has demonstrably advanced human rights accountability by exposing atrocities that ground access might obscure. For instance, the Australian Strategic Policy Institute's 2018 report utilized commercial satellite imagery to map over 380 suspected internment facilities in Xinjiang, China, revealing the scale of Uyghur detention camps through structural analysis and temporal changes, corroborated by open-source intelligence.[179] Organizations like Amnesty International have employed such data since 2007 to document abuses, integrating imagery with witness testimony to validate mass graves and conflict incidents, thereby providing verifiable evidence for international tribunals.[180] These applications underscore remote sensing's role in causal realism for justice—enabling empirical verification of hidden violations—while ethical guidelines for data use in investigations address veracity risks from private providers.[181] Despite institutional biases in some advocacy sources, the technology's evidentiary value holds when grounded in multi-sourced analysis.[182]

Geopolitical and Accessibility Barriers
Geopolitical barriers to remote sensing arise from national assertions of data sovereignty, which often restrict the collection, dissemination, or use of satellite imagery over sensitive territories. Under the Outer Space Treaty, no state can claim sovereignty over space itself, yet nations impose domestic regulations limiting foreign remote sensing activities; for instance, the United States enforces the Kyl-Bingaman Amendment, prohibiting licenses for high-resolution commercial satellite imagery of Israel to protect allied security interests. Similarly, export controls under the International Traffic in Arms Regulations (ITAR) and Export Administration Regulations (EAR) classify high-resolution imaging technologies as dual-use items, constraining transfers to non-allied nations and maintaining U.S. strategic advantages in reconnaissance capabilities. These measures, while aimed at preventing proliferation, can hinder global scientific collaboration and data sharing for non-military applications.

A pronounced North-South divide exacerbates accessibility issues, with developing countries in the Global South experiencing empirical gaps in remote sensing coverage despite acute needs for monitoring agriculture, disasters, and resources. Studies indicate that intergovernmental factors, including limited technical capacity and high costs of data processing, impede adoption in these regions, where local infrastructure often lacks the computing power or expertise to utilize advanced imagery effectively. For example, while Northern nations dominate satellite constellations and analysis, Southern counterparts rely heavily on imported data, facing delays and incomplete datasets that widen disparities in applications like environmental management. This divide persists amid uneven global satellite orbits and licensing, leaving vast areas underserved and perpetuating reliance on foreign providers subject to geopolitical strings.

Military-commercial entanglements further complicate access, as private satellite firms increasingly supply data for defense purposes, blurring lines between civilian and strategic uses. During the 2022 Russian invasion of Ukraine, Ukraine's government requested and received high-resolution imagery from at least eight commercial providers, including Maxar and Planet Labs, which aided targeting and situational awareness but raised concerns over data weaponization and potential retaliatory restrictions from adversaries. Such integrations demonstrate how commercial remote sensing supports hybrid warfare, prompting nations like Russia to jam signals or develop countermeasures, thereby indirectly limiting peacetime data flows and heightening tensions over dual-use technologies. These dynamics underscore causal risks where strategic dependencies on private actors can politicize ostensibly open data markets.

Future Directions
Technological Innovations
Recent advancements in hyperspectral imaging emphasize sensor miniaturization to enable deployment on smaller platforms, reducing component costs and facilitating broader applications in remote sensing. Developments as of late 2024 target compact designs suitable for unmanned aerial systems (UAS) and low-Earth orbit satellites, improving spectral resolution for material identification without sacrificing portability.[183][184]

Quantum sensors represent a nascent hardware frontier, leveraging atomic-level precision for enhanced remote sensing measurements, including Rydberg-based systems for hyperspectral data acquisition. NASA's exploratory efforts demonstrate prototypes integrating these sensors to achieve finer detection limits in environmental and atmospheric monitoring, outperforming classical optics in signal fidelity under varying conditions.[185] Market analyses project quantum sensor adoption in remote platforms growing significantly by 2035, driven by sensitivity gains in magnetic and gravitational field mapping.[186]

Multi-sensor fusion techniques have advanced to streamline UAS-satellite data pipelines, combining high-resolution aerial imagery with orbital multispectral inputs for pixel- and feature-level integration. A 2025 review highlights optimized workflows yielding improved temporal coverage and accuracy in land-use mapping, with fusion algorithms processing complementary datasets to mitigate individual sensor gaps like cloud interference in satellites or limited swath in UAS.[38][187]

The trend toward smaller satellites, as outlined in Lockheed Martin's 2025 space technology outlook, supports proliferated constellations for persistent remote sensing coverage. Platforms like the LM 50 and LM 400 series enable rapid deployment of Earth observation payloads, with production scaling to meet demands for frequent revisits in disaster monitoring and resource surveying.[188][189]

Satellite swarms offer empirical pathways to sub-meter resolutions, approaching centimeter-scale through coordinated multi-view imaging and interferometric synthesis. Conceptual designs project swarms achieving 30 cm ground sampling distance via dense orbital arrays, enhancing stereo reconstruction for topographic and change detection tasks beyond single-satellite limits.[190][191]

Integration with AI and Emerging Systems
Artificial intelligence enhances remote sensing by automating anomaly detection and multisource data fusion, enabling the identification of subtle patterns in large datasets that exceed human capabilities. In anomaly detection, unsupervised AI methods applied to Landsat-8 imagery have successfully pinpointed mineral deposits like iron ore by isolating deviations from baseline spectral signatures, demonstrating superior performance over traditional statistical detectors such as RX in empirical tests on hyperspectral data.[192][193] Data fusion integrates complementary remote sensing modalities—e.g., optical and SAR—for improved inference, as reviewed in studies showing AI models achieving over 90% accuracy in flood mapping by combining Sentinel-1 SAR with optical data, outperforming threshold-based approaches reliant on manual feature selection.[194]

NASA's Dynamic Targeting technology exemplifies AI-driven autonomy in remote sensing, allowing Earth-observing satellites to analyze lookahead sensor data in under 90 seconds and reorient primary instruments toward high-value targets without ground intervention. Tested successfully in July 2025 on orbit, this system processes real-time imagery to prioritize dynamic events like wildfires or storms, enhancing data yield by focusing acquisitions causally linked to observed precursors rather than predefined schedules.[195][196]

Integration with unmanned aerial vehicles (UAVs) via AI-enabled 6G satellite communications supports real-time remote sensing for applications requiring low-latency processing, such as urban monitoring or disaster response. AI optimizes UAV trajectories and spectrum allocation in satellite-UAV networks, enabling edge-computed fusion of onboard multispectral data with satellite feeds to achieve near-instantaneous anomaly alerts, though spectral efficiency gains depend on predictive interference mitigation models.[197] Empirical evaluations report classification accuracies improving by 15-20% over non-AI baselines in land-use mapping when AI handles UAV-satellite data streams, attributed to reduced noise from adaptive fusion rather than raw sensor upgrades.[194][198]

Challenges persist in AI opacity, where deep learning models function as "black boxes," obscuring causal pathways from inputs to outputs and complicating validation in geoscientific contexts like remote sensing interpretation.[199][200] Despite this, explainable AI techniques, such as attention mechanisms in convolutional networks, mitigate risks by highlighting influential spectral bands, fostering trust through verifiable decision traces. Emerging systems prioritize causal inference models to supplant correlative patterns, potentially diminishing interpretive biases inherent in human-led analysis by enforcing physically grounded priors over data-driven approximations alone.[198]
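The RX detector cited above as a classical baseline scores each pixel by the Mahalanobis distance of its spectrum from the scene-wide background statistics. A minimal global-RX sketch on synthetic data (operational variants typically use local background windows and robust covariance estimation):

```python
import numpy as np

def rx_scores(cube: np.ndarray) -> np.ndarray:
    """Global RX anomaly detector. cube: (rows, cols, bands) array;
    returns per-pixel Mahalanobis distances from the background."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)  # regularized for stability
    diff = X - mu
    scores = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return scores.reshape(rows, cols)

# Synthetic 32 x 32 scene with 5 bands and one spectrally anomalous pixel.
rng = np.random.default_rng(0)
scene = rng.normal(0.3, 0.02, size=(32, 32, 5))
scene[10, 20] += 0.2   # inject the anomaly
print("highest-scoring pixel:", np.unravel_index(rx_scores(scene).argmax(), (32, 32)))
```

References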
- https://www.mdpi.com/2073-4433/11/5/517