Geographic data and information
from Wikipedia

Geographic data and information is defined in the ISO/TC 211 series of standards as data and information having an implicit or explicit association with a location relative to Earth (a geographic location or geographic position).[1][2] It is also called geospatial data and information, georeferenced data and information, geodata, and geoinformation.

Geographic data and information is stored in geographic databases and geographic information systems (GIS). There are many different formats of geodata, including vector files, raster files, web files, and multi-temporal data.

Spatial data or spatial information is a broader class of data whose geometry is relevant but which is not necessarily georeferenced, as in computer-aided design (CAD); see geometric modeling.

Fields of study

Geographic data and information are the subject of a number of overlapping fields of study.

"Geospatial technology" may refer to any of "geomatics", "geoinformatics", or "geographic information technology."

These fields overlap with a number of other related disciplines as well.

from Grokipedia
Geographic data and information refers to the collection of raw and processed elements that describe the spatial location, shape, attributes, and relationships of features on Earth's surface, such as coordinates, boundaries, elevations, and associated characteristics like land cover or population. This encompasses both spatial data, which captures positional aspects through points, lines, polygons, or grids, and attribute data, which provides descriptive details about those features. At its core, geographic data serves as the foundational input for deriving meaningful geographic information through spatial analysis, enabling insights into patterns and trends that inform decision-making across various domains.

In practice, geographic data is primarily modeled in two formats: vector data, which represents discrete features using geometric primitives like points for locations (e.g., cities), lines for paths (e.g., roads), and polygons for areas (e.g., administrative boundaries), offering high precision for scalable representations; and raster data, which structures continuous phenomena as a grid of cells with assigned values, commonly used for imagery, elevation models, or environmental variables like temperature. These models allow for efficient storage, manipulation, and visualization in systems like geographic information systems (GIS), where data from sources such as satellite remote sensing, ground surveys, and GPS are integrated to produce maps and analytical outputs. For instance, vector formats excel in topological analysis, while raster suits surface modeling and large-scale simulations.

The significance of geographic data and information lies in its role in enabling spatial analysis to uncover relationships not evident in tabular data alone, supporting applications in urban planning, environmental management, transportation, and public health. By linking location-specific attributes (such as land use, elevation, or infrastructure) to broader contexts like climate or demographics, it facilitates predictive modeling and policy formulation; for example, identifying flood-prone areas by overlaying rainfall patterns with topographic data. Advances in technologies, including cloud computing and high-resolution satellite imagery, continue to enhance accuracy and accessibility, making geographic information indispensable for addressing global challenges like climate change and disaster risk.

Overview and Fundamentals

Definition and Scope

Geographic data and information refers to data and information having an implicit or explicit association with a location relative to the Earth. This includes location-based representations of spatial features, such as points, lines, polygons, and surfaces, along with their associated attributes and interrelationships on the Earth's surface. Raw geographic data typically consists of unprocessed observations, such as coordinates for specific sites or measured values from surveys, while derived geographic information involves contextualized outputs like digital maps or spatial models that integrate multiple data layers for analysis.

The scope of geographic data and information encompasses physical, human, and environmental dimensions of the planet. Physical elements include natural landforms, water bodies, and climate; human elements cover settlements, transportation networks, and demographic patterns; and environmental elements address ecosystems, biodiversity, and atmospheric conditions. This domain excludes purely aspatial data, such as standalone numerical statistics on population counts or economic outputs without tied locational references, as these lack the spatial component essential for geographic analysis.

Central to this field is the differentiation between spatial and aspatial data. Spatial data explicitly incorporates positional elements, often through coordinates or place names, allowing for the examination of geographic distributions, proximities, and interactions. In contrast, aspatial data provides non-locational attributes, such as categorical types or numeric values, which gain full utility only when linked to spatial references. Geographic data also possesses inherent properties like scale, spanning extents from local neighborhoods to global phenomena, and resolution, which dictates the precision and granularity of spatial detail, influencing the applicability of a dataset for various analytical purposes.

A key conceptual distinction lies between raw geographic data and processed geographic information. Raw data represents fundamental, unrefined observations gathered through measurement or sensing, whereas information arises from applying analytical methods, standards, and context to this data, transforming it into interpretable knowledge for applications like planning or resource management.

Historical Development

The origins of geographic data trace back to ancient civilizations, where systematic efforts to map and record spatial knowledge laid the groundwork for modern practices. In the 2nd century AD, the Greek scholar Claudius Ptolemy compiled Geographia, a seminal work that introduced a coordinate-based system using latitude and longitude to describe nearly 8,000 locations across the known world, enabling more precise representations of the Earth's surface. This approach marked a shift from qualitative descriptions to quantitative spatial referencing, influencing cartography for centuries. Early cartographic tools, such as astrolabes developed from the 2nd century BCE onward, facilitated position determination by measuring altitudes of celestial bodies, supporting the creation of accurate maps and nautical charts essential for navigation and trade.

The 19th and early 20th centuries saw institutional advancements in geographic data collection through the establishment of national mapping agencies, which standardized geodetic surveying and topographic mapping on a large scale. For instance, the United States Geological Survey (USGS) was founded in 1879 by an act of Congress to systematically document the nation's landscape, natural resources, and geology, producing foundational datasets that informed resource management and infrastructure development. A pivotal innovation during this period was the introduction of aerial photography in the 1910s, particularly during World War I, when it was first integrated into map compilation processes to capture detailed terrain views from aircraft, dramatically improving the efficiency and accuracy of topographic surveys.

The mid-20th century ushered in the digital era of geographic data with the advent of computerized systems for storage, analysis, and visualization. In the 1960s, geographer Roger Tomlinson led the development of the Canada Geographic Information System (CGIS), the world's first operational GIS, commissioned by the Canadian government to inventory land resources across vast territories using overlay analysis of thematic maps digitized from aerial photographs. This breakthrough enabled complex spatial queries and resource planning, setting the stage for broader GIS adoption. Complementing this, the Landsat program launched its first satellite in 1972, providing the initial systematic collection of multispectral Earth imagery from space, which generated petabytes of open-access data for monitoring land cover changes and environmental dynamics.

In recent decades, geographic data has evolved toward openness and automation, driven by collaborative and computational innovations. The launch of OpenStreetMap in 2004 democratized mapping by creating a crowdsourced, editable global database of geographic features, fostering volunteered geographic information initiatives that have amassed billions of data points contributed by volunteers worldwide. Post-2010, the integration of machine learning, particularly deep learning techniques like convolutional neural networks, has enabled automated feature extraction from imagery and vector data, accelerating tasks such as land-use classification and change detection in geospatial datasets.

Types of Geographic Data

Vector Data Models

Vector data models in geographic information systems (GIS) represent discrete spatial features using geometric primitives that capture the location, shape, and attributes of real-world entities such as landmarks, transportation networks, and administrative boundaries. Developed as a foundational approach in early GIS systems, this model emerged in the 1960s with the Canada Geographic Information System (CGIS), pioneered by Roger Tomlinson, which utilized vector-based representations to overlay and analyze land resource data for management purposes. Unlike raster models, which suit continuous surfaces like elevation, vector models excel in depicting sharp, discrete boundaries with high precision.

The core components of vector data models are points, lines, and polygons, each corresponding to a different dimension of spatial features. Points are zero-dimensional objects defined by a single pair of X and Y coordinates, suitable for representing discrete locations such as individual buildings, wells, or sampling sites. Lines, or polylines, are one-dimensional features formed by sequences of connected points (vertices), ideal for linear entities like roads, rivers, or utility lines, and they inherently possess measurable length. Polygons are two-dimensional closed rings of lines that enclose areas, used for bounded regions such as lakes, land parcels, or country borders, with calculable area and perimeter attributes.

Topology is a critical aspect of vector models, encoding spatial relationships and connectivity among features to enable efficient analysis and maintain data integrity. It encompasses concepts like arc-node topology for line connectivity (where nodes represent endpoints and arcs the segments between them), polygon-arc topology for defining enclosed areas, and contiguity for shared boundaries between adjacent polygons. This structure allows GIS software to detect and correct errors, such as undershoots or slivers, and supports operations like network routing or adjacency queries without redundant coordinate storage.

In terms of data organization, vector models separate geometric representations from descriptive attributes, typically linked through unique identifiers. For instance, the widely adopted shapefile format exemplifies this by storing geometry in a binary .shp file (containing shape types such as point, polyline, and polygon, plus coordinate sequences) and attributes in a .dbf file using dBASE format, with records aligned one-to-one by order for each feature. This georelational approach facilitates querying and visualization while keeping files compact for sparse feature distributions.

Vector data models offer several advantages, particularly for scalable and precise representations of discrete phenomena. They provide exact coordinate-based definitions, ensuring clarity at any zoom level without pixelation, and result in smaller file sizes compared to equivalent raster data due to efficient encoding of only the relevant vertices. The inherent topology simplifies complex spatial operations, such as overlay analysis or proximity calculations, making them suitable for applications like cadastral mapping or network routing. However, vector models have limitations, including their unsuitability for modeling continuous fields like elevation gradients, where transitions lack discrete boundaries. Creating and maintaining topological structure can be computationally intensive for large, complex datasets, potentially leading to increased processing times and storage demands when topologies become intricate.

A practical example is modeling a city's road network, where linear features represent streets as polylines with attributes such as speed limits, traffic volume, and surface type stored in an associated table, enabling analyses like optimal routing or accessibility assessments.
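The road-network example above can be sketched in code. The snippet below is a minimal illustration (street names and attribute fields are invented for the example): it stores polyline geometries and an attribute table linked by a shared feature ID, and computes planar length from the vertex coordinates.

```python
import math

# Each feature: a polyline geometry (list of (x, y) vertices, in meters)
# plus a row of descriptive attributes linked by a shared feature ID.
geometries = {
    1: [(0.0, 0.0), (300.0, 400.0)],
    2: [(300.0, 400.0), (300.0, 900.0), (600.0, 900.0)],
}
attributes = {
    1: {"name": "Main St", "speed_limit_kmh": 50, "surface": "asphalt"},
    2: {"name": "Oak Ave", "speed_limit_kmh": 30, "surface": "gravel"},
}

def polyline_length(vertices):
    """Sum of Euclidean distances between consecutive vertices."""
    return sum(math.dist(a, b) for a, b in zip(vertices, vertices[1:]))

for fid, verts in geometries.items():
    attrs = attributes[fid]
    print(f"{attrs['name']}: {polyline_length(verts):.0f} m")
```

The separation of `geometries` from `attributes` mirrors the georelational design described above: geometry and descriptive records live in different structures and are joined only through the feature ID.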

Raster Data Models

Raster data models represent geographic phenomena using a grid of cells, often referred to as pixels, where each cell holds a value corresponding to an attribute such as elevation, temperature, or reflectance. This grid-based structure divides the Earth's surface into discrete units, with the resolution determined by the size of each cell; for instance, a cell size of 30 meters by 30 meters means each cell represents a 900 square meter area on the ground. Smaller cell sizes yield higher resolution and greater detail but sharply increase data volume, as halving the cell size quadruples the number of cells needed to cover the same area. Common applications include digital elevation models (DEMs), where cell values denote height above sea level, enabling the modeling of continuous surfaces like terrain.

Data organization in raster models typically involves storing cell values in a sequential matrix format, accompanied by a header that specifies geographic properties such as the coordinate reference system, extent, and cell dimensions. Single-band rasters contain one value per cell, suitable for representations like elevation data, while multi-band rasters stack multiple layers, as in RGB or multispectral satellite data where each band captures a different portion of the spectrum. To manage storage demands, compression techniques such as run-length encoding (RLE) are employed, which exploit spatial autocorrelation by encoding consecutive identical values efficiently; for example, the ARC GRID format uses adaptive RLE on block-structured tiles to reduce file sizes without loss of data fidelity.

Raster models offer several advantages for handling continuous geographic data, including simplicity of structure, which facilitates uniform processing across large areas, and efficient modeling of continuous surfaces, such as deriving slope from elevation grids. They are particularly well suited for overlay operations and statistical computations on phenomena that vary gradually, like rainfall or soil properties, due to the grid's inherent regularity. However, limitations include high storage requirements for high-resolution datasets, which can result in files gigabytes in size, and potential spatial inaccuracies or generalization effects at coarser scales, where fine details are averaged or lost within larger cells.

A prominent example is satellite imagery, such as Landsat data, where each cell stores reflectance values across spectral bands to enable land cover classification into categories like forest, urban, or water; the U.S. Geological Survey (USGS) has used such raster-based approaches to map land cover change by interpreting spectral patterns and validating with ancillary data.
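The run-length idea can be demonstrated in a few lines. The sketch below is a simplified illustration (it is not the ARC GRID block scheme): it encodes one raster row of repeated land-cover codes as (value, count) pairs and decodes it back losslessly.

```python
from itertools import groupby

def rle_encode(row):
    """Collapse consecutive identical cell values into (value, count) pairs."""
    return [(value, len(list(group))) for value, group in groupby(row)]

def rle_decode(pairs):
    """Expand (value, count) pairs back to the original sequence of cells."""
    return [value for value, count in pairs for _ in range(count)]

# One row of land-cover class codes (e.g., 1 = forest, 2 = water, 3 = urban)
row = [1, 1, 1, 1, 2, 2, 3, 3, 3, 3, 3]
encoded = rle_encode(row)
print(encoded)                       # [(1, 4), (2, 2), (3, 5)]
assert rle_decode(encoded) == row    # lossless round trip
```

Because neighboring raster cells often share the same class (spatial autocorrelation), long runs are common, which is exactly what makes RLE effective on this kind of data.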

Data Acquisition Methods

Remote Sensing Techniques

Remote sensing techniques enable the acquisition of geographic data from airborne or spaceborne platforms, capturing electromagnetic radiation or other signals to map and monitor Earth's surface features without direct contact. These methods are fundamental for generating large-scale datasets on land cover, terrain, and environmental changes, supporting applications in environmental monitoring and resource management. Sensors detect energy across various wavelengths, producing imagery and derived products that form the basis of many geographic information systems.

Remote sensing is categorized into passive and active systems based on the energy source. Passive systems, such as optical sensors on satellites like Landsat, capture sunlight reflected or emitted by the Earth's surface in multispectral bands spanning visible, near-infrared, and thermal infrared wavelengths, enabling detection of vegetation health and land cover patterns. In contrast, active systems generate their own energy pulses; for instance, synthetic aperture radar (SAR) transmits microwaves that penetrate clouds and vegetation, providing all-weather, day-night imaging through backscatter measurements for applications like flood mapping and terrain analysis.

Platforms vary by orbit and altitude to balance coverage, resolution, and revisit frequency. Polar-orbiting satellites, including Landsat with an 8-day revisit time at 30-meter resolution and MODIS offering near-daily global coverage at 250-1000 meters, traverse from pole to pole to achieve comprehensive global coverage. Geostationary satellites, positioned at about 36,000 kilometers altitude, maintain fixed positions over the equator for continuous regional monitoring but with coarser resolutions due to their high altitude. Unmanned aerial vehicles (UAVs or drones) serve as low-altitude platforms for high-resolution local mapping, achieving sub-centimeter spatial detail over targeted areas like agricultural fields or coastal zones.

Key techniques enhance the specificity and dimensionality of geographic data collection. Hyperspectral imaging divides the spectrum into hundreds of narrow contiguous bands, allowing precise material identification on the surface, such as distinguishing mineral types or crop stresses based on unique spectral signatures. Light detection and ranging (lidar) employs laser pulses emitted at rates exceeding 150 kHz, recording multiple returns per pulse to generate dense 3D point clouds that model terrain elevation and vegetation structure with vertical accuracies under 10 centimeters.

Data products from remote sensing range from raw sensor imagery to processed outputs tailored for geographic analysis. Orthorectified products geometrically correct for sensor orientation, terrain relief, and Earth curvature, yielding distortion-free maps suitable for overlay with vector data; these are commonly distributed in raster formats for pixel-based representation. A prominent example is the MODIS vegetation indices, which have monitored global phenology and biomass since the launch of the Terra satellite in 1999 and the Aqua satellite in 2002, providing time-series data on the normalized difference vegetation index (NDVI) at 250-meter resolution.

Challenges in remote sensing include atmospheric interference from scattering, absorption, and cloud cover, which distort signal intensity and fidelity. Corrections often employ radiative transfer models to simulate radiation paths through the atmosphere, estimating and subtracting path radiance for accurate surface retrieval; for instance, the 6S model integrates aerosol optical depth and atmospheric profiles to achieve corrections with errors below 5% in clear conditions.
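Since NDVI is a simple band ratio, it is easy to sketch. The toy example below uses synthetic reflectance values (not real MODIS data) and computes NDVI = (NIR - red) / (NIR + red) for each cell of a small raster row; healthy vegetation reflects strongly in near-infrared and weakly in red, so dense canopy yields values near 1.

```python
def ndvi(red, nir):
    """Normalized difference vegetation index for paired reflectance bands."""
    return [(n - r) / (n + r) if (n + r) != 0 else 0.0
            for r, n in zip(red, nir)]

# Synthetic surface reflectances (0-1) for one row of cells.
red_band = [0.10, 0.30, 0.05]
nir_band = [0.50, 0.35, 0.45]
values = ndvi(red_band, nir_band)
print([round(v, 3) for v in values])  # high values indicate dense vegetation
```

The zero-denominator guard matters in practice: water and shadow pixels can have near-zero reflectance in both bands.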

Field Survey Methods

Field survey methods involve direct, on-site collection of geographic data by personnel using specialized instruments to measure positions, elevations, and features with high precision. These techniques are essential for establishing ground truth that complements or validates other acquisition methods, ensuring accurate mapping in areas where remote sensing may be limited by obstructions or atmospheric conditions. Traditional and modern approaches alike emphasize human observation and measurement to capture spatial relationships and attributes.

Traditional field survey methods rely on optical and mechanical instruments for precise measurements. Triangulation uses theodolites to measure angles from known baselines, enabling the calculation of distances and positions across networks of points without direct measurement to each location. Theodolites, which measure horizontal and vertical angles to seconds of arc, are fundamental for establishing control points in baseline surveys, as seen in early geodetic networks. Leveling, another core technique, employs levels and rods to determine elevation differences along profiles, providing vertical control for topographic mapping by sighting on benchmarks to compute height differences incrementally. These methods formed the basis of national survey frameworks, such as those developed by the U.S. Geological Survey in the late 19th century.

Modern tools have enhanced efficiency and accuracy in field surveys through electronic integration. Global Positioning System (GPS) receivers, particularly real-time kinematic (RTK) systems, achieve centimeter-level horizontal accuracy by correcting satellite signals using a fixed base station, making them ideal for real-time positioning in diverse terrains. Total stations combine electronic distance measurement (EDM) via infrared or laser with the angular capabilities of theodolites, allowing simultaneous recording of distances (accurate to millimeters plus parts per million) and angles for three-dimensional point capture. These instruments automate data logging and reduce transcription errors, supporting rapid surveys over larger areas.

Protocols in field surveys ensure data reliability through standardized procedures. Ground control points (GCPs) are established as fixed, surveyed markers with known coordinates to georeference measurements, improving absolute accuracy by tying local data to global reference frames. Crowdsourced data collection via mobile applications enables public participation in recording locations and attributes, generating vast datasets for environmental monitoring when validated against professional surveys. Survey results from these methods are often represented as vector data models, storing points and lines for subsequent analysis.

Specific examples illustrate the application of field survey methods. Hydrographic surveys employ sonar systems, such as multibeam echosounders, to measure water depth by emitting acoustic pulses and recording return times from the seafloor, achieving resolutions down to centimeters for nautical charting. Ecological transects involve walking linear paths to sample vegetation and wildlife, recording species occurrences and environmental variables at intervals to map habitat distributions and assess ecosystem health. These approaches provide detailed, verifiable data for environmental monitoring.

Accuracy in field surveys is influenced by various error sources and mitigation strategies. In GPS-based methods, multipath errors arise when signals reflect off surfaces like buildings or water, causing pseudorange distortions that can degrade position accuracy by meters in obstructed environments. Post-processed kinematic techniques refine raw GPS data by resolving carrier-phase ambiguities using base station corrections, achieving centimeter-level precision retrospectively for applications requiring high fidelity. Protocols often include redundancy, such as multiple instrument readings, to quantify and minimize these errors.
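The incremental leveling computation described above can be sketched as follows. This is a standard differential-leveling reduction with invented rod readings, not tied to any particular instrument: each setup adds the backsight to the known elevation to get the height of instrument, then subtracts the foresight to get the next point's elevation.

```python
def run_levels(benchmark_elev, readings):
    """Differential leveling: at each setup, the height of instrument (HI)
    is the known elevation plus the backsight; the next point's elevation
    is HI minus the foresight. Readings are (backsight, foresight) in meters."""
    elev = benchmark_elev
    elevations = []
    for backsight, foresight in readings:
        hi = elev + backsight          # height of instrument
        elev = hi - foresight          # elevation of the next turning point
        elevations.append(round(elev, 3))
    return elevations

# Rod readings for three instrument setups (hypothetical values, meters)
readings = [(1.500, 1.200), (1.850, 0.950), (0.700, 1.600)]
print(run_levels(100.000, readings))  # elevations of successive turning points
```

A closed leveling loop would end back at the benchmark, and the residual after the last setup gives the misclosure used to assess survey quality.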

Data Representation and Standards

Coordinate Reference Systems

Coordinate reference systems (CRS) provide a framework for defining and representing locations on the Earth's surface, accounting for its irregular shape and curvature. A CRS typically consists of a horizontal component for positioning on the surface and, optionally, a vertical component for elevation. These systems enable the integration of geographic data from various sources, such as GPS measurements, by standardizing how positions are expressed relative to a common reference.

The core components of a CRS include a datum and a set of coordinates. A datum is a reference model that approximates the Earth's shape using an ellipsoid or geoid, defining the origin and orientation for measurements. For instance, the World Geodetic System 1984 (WGS84) is a geocentric datum whose ellipsoid closely matches GRS 1980, with parameters such as a semi-major axis of 6,378,137 meters and a flattening of 1/298.257223563, designed for global applications like satellite navigation. Coordinates in a geographic coordinate system (GCS), which is a type of CRS, are expressed as latitude and longitude in degrees, with latitude ranging from -90° to 90° relative to the equator and longitude from -180° to 180° relative to the prime meridian at Greenwich.

To represent the curved Earth on flat maps or screens, CRS often incorporate map projections that transform spherical coordinates into planar ones, inevitably introducing some distortion in shape, area, distance, or direction. Cylindrical projections, such as the Mercator projection, wrap a cylinder around the globe tangent at the equator, preserving angles (conformal) for navigation purposes but distorting areas, especially at high latitudes where regions such as Greenland appear vastly exaggerated. Conic projections, like the Albers equal-area conic, are suitable for mid-latitudes and use a cone tangent or secant to the globe at one or two standard parallels, minimizing area distortion across regions such as the conterminous United States.

Transformations within CRS allow conversion between different systems, such as from geographic coordinates (latitude/longitude) to projected coordinates (e.g., easting/northing in meters). The Universal Transverse Mercator (UTM) system exemplifies this by dividing the Earth into 60 longitudinal zones, each 6° wide, and applying a transverse Mercator projection within each zone to achieve low distortion (scale factor of 0.9996 at the central meridian). Datum shifts, such as between the North American Datum 1983 (NAD83) and WGS84, involve parameter-based methods like 3-parameter geocentric translations (shifts in X, Y, Z) or grid-based models like NADCON, with differences typically around a meter in North America.

Vertical datums extend CRS to include height or depth relative to a reference surface. The mean sea level (MSL) datum defines elevations based on averaged tidal observations at gauges, serving as a practical reference for coastal and engineering applications. More advanced geoid models, such as the Earth Gravitational Model 2008 (EGM2008), provide a global surface approximating MSL at 5 arc-minute resolution with accuracies of about 15-20 cm over land, enabling conversion between ellipsoidal heights (from GPS) and orthometric heights (above MSL).

A practical example of CRS application is transforming GPS-derived latitude and longitude in WGS84 to Web Mercator coordinates for online mapping. Web Mercator (EPSG:3857), a spherical variant of the Mercator projection, uses a pseudo-Mercator formula to project coordinates onto a square grid in meters, facilitating tiled web maps where straight lines represent rhumb lines and distortion is accepted for global visualization in services like Google Maps. This transformation ensures seamless zooming and panning but requires awareness of area distortions at high latitudes.

Spatial Data Formats and Standards

Spatial data formats provide structured ways to store, manage, and exchange geographic information, encompassing both vector and raster representations that include details tied to coordinate reference systems. These formats ensure that spatial relationships, attributes, and geometries are preserved during data handling, facilitating interoperability across software and platforms.

Common vector formats include the shapefile, developed by Esri as a binary format for storing point, line, and polygon features along with associated attributes in multiple files, widely adopted due to its simplicity and broad support in GIS applications. Another prominent vector format is GeoJSON, an open standard based on JSON that encodes geographic features like points, lines, and polygons in a lightweight, human-readable structure suitable for web-based mapping and APIs. For raster data, GeoTIFF extends the TIFF image format by embedding georeferencing information, such as coordinate transformations and projections, directly into the file metadata, making it ideal for satellite imagery, elevation models, and other gridded datasets. The NetCDF (Network Common Data Form) format, developed by Unidata, supports multidimensional arrays for scientific data, particularly climate and atmospheric variables, with built-in metadata for dimensions, variables, and attributes to handle time-series and spatiotemporal data efficiently.

Open standards from the Open Geospatial Consortium (OGC) promote interoperability through specifications like the Geography Markup Language (GML), an XML-based encoding for geographic features that enables the exchange of complex spatial data models, including topologies and coverages, across heterogeneous systems. Web service standards such as the Web Map Service (WMS) provide HTTP interfaces for retrieving georeferenced map images from distributed servers, while the Web Feature Service (WFS) allows querying and updating vector feature data over the web, supporting transactions for editing geographic information.

Metadata standards are essential for describing spatial data's content, quality, and usability. The ISO 19115 standard defines a comprehensive schema for geographic metadata, covering elements like lineage, quality assessments, spatial extent, and identification to support data discovery and evaluation in catalogs. For simpler cataloging, Dublin Core offers a minimal set of 15 elements, including coverage for spatial and temporal extents, often used in conjunction with geographic thesauri to describe resources like maps and datasets.

Despite these advancements, interoperability challenges persist, particularly with proprietary formats from vendors like Esri, such as the File Geodatabase, which can lead to lock-in and complicate data exchange without licensed software. Tools like the Geospatial Data Abstraction Library (GDAL) address these issues by providing open-source translation capabilities for over 200 raster and vector formats, enabling seamless conversion and access without altering underlying data structures.

An illustrative example is KML (Keyhole Markup Language), an OGC standard for encoding geographic visualizations, which allows users to create interactive 3D tours and overlays in applications like Google Earth by combining placemarks, paths, and imagery in an XML format.
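Because GeoJSON is plain JSON, it can be produced and consumed with a standard JSON library alone. The sketch below (the feature name and properties are invented for illustration) builds a minimal Feature, serializes it for exchange, and reads the coordinates back.

```python
import json

# A minimal GeoJSON Feature: geometry plus free-form properties.
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [30.0, 45.0]},  # [lon, lat]
    "properties": {"name": "Sample site", "elevation_m": 412},
}

text = json.dumps(feature)            # serialize for exchange over the web
parsed = json.loads(text)             # any JSON parser can read it back

geom = parsed["geometry"]
print(geom["type"], geom["coordinates"])  # Point [30.0, 45.0]
```

Note that GeoJSON orders coordinates as [longitude, latitude], the opposite of the spoken "lat/lon" convention, which is a common source of bugs when mixing formats.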

Processing and Analysis

Geospatial Analysis Techniques

Geospatial analysis techniques encompass a range of computational methods designed to extract meaningful patterns and relationships from geographic data, enabling the integration, transformation, and interpretation of spatial information. These techniques operate on vector and raster data models to perform operations such as overlaying layers, deriving surface properties, quantifying spatial dependencies, and optimizing paths across networks. Fundamental to geographic information science, they facilitate decision-making in diverse contexts by revealing hidden spatial structures without relying on integrated software systems.

Overlay analysis involves combining multiple spatial layers to identify areas of intersection, union, or difference, often using polygons or raster cells to generate new datasets that highlight spatial coincidences or conflicts. For instance, intersecting land use polygons with environmental hazard layers can delineate risk zones where specific conditions overlap. This method, rooted in map algebra concepts, supports suitability modeling by applying logical operators like AND, OR, and NOT to thematic layers, producing outputs that aggregate or filter geographic features. Pioneered in early GIS frameworks, overlay operations ensure topological consistency through edge-matching and resolution reconciliation.

Surface analysis derives terrain characteristics from digital elevation models (DEMs), employing finite-difference approximations to compute metrics such as slope and aspect, which quantify steepness and orientation, respectively. Slope is typically calculated using a third-order estimator across a 3x3 neighborhood, where the rate of elevation change is approximated from differences between adjacent cells weighted by distance. Aspect, representing the downhill direction, is derived from the arctangent of the east-west and north-south gradients, often refined via vector normalization to handle flat areas. These computations, originally formalized for photogrammetric applications, enable the modeling of erosional processes and solar exposure. In hydrology modeling, flow accumulation extends surface analysis by tracing upslope contributing areas to each cell, simulating drainage patterns through deterministic algorithms like the D8 method, which assigns flow to one of eight cardinal directions based on steepest descent. This technique aggregates cell counts or weights to delineate stream networks from DEMs, critical for watershed simulation.

Spatial statistics techniques assess the degree of clustering or dispersion in geographic data, with Moran's I serving as a key measure of global spatial autocorrelation that evaluates whether similar values tend to occur near one another. The statistic is computed as:

I = \frac{n}{S_0} \cdot \frac{\sum_i \sum_j w_{ij} (x_i - \bar{x})(x_j - \bar{x})}{\sum_i (x_i - \bar{x})^2}

where n is the number of observations, x_i and x_j are attribute values at locations i and j, \bar{x} is the mean, w_{ij} are spatial weights (often inverse distance), and S_0 = \sum_i \sum_j w_{ij}. Values range from -1 (perfect dispersion) to +1 (perfect clustering), with significance tested against a null hypothesis of spatial randomness. Introduced in the context of stochastic processes, Moran's I underpins exploratory spatial data analysis by detecting non-random patterns in point or areal data.

Network analysis optimizes connectivity in linear features like roads or rivers, employing shortest-path algorithms to determine minimal-cost routes between origins and destinations based on impedance factors such as distance or travel time. Dijkstra's algorithm, a foundational greedy method, iteratively selects the lowest-cost unvisited node from a priority queue, propagating distances until the target is reached, assuming non-negative edge weights. Applied to graph representations of transportation networks, it computes optimal paths by relaxing adjacent edges, with O((V + E) \log V) time complexity using a binary heap. This approach, originally developed for communication networks, has been adapted for geospatial routing to minimize cumulative costs across interconnected features.

An illustrative application of these techniques is hotspot detection, where the Getis-Ord Gi* statistic identifies statistically significant clusters of high or low values by comparing local sums to global means, adjusted for spatial dependence. It is defined for a location i as:

G_i^* = \frac{\sum_j w_{ij} x_j - \bar{x} \sum_j w_{ij}}{s \sqrt{\dfrac{n \sum_j w_{ij}^2 - \left(\sum_j w_{ij}\right)^2}{n - 1}}}

where s is the standard deviation of the attribute values; large positive values of G_i^* indicate clusters of high values (hotspots), and large negative values indicate cold spots.
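The Moran's I formula above can be checked against a small hand-computable case. The sketch below implements the statistic directly, using binary adjacency weights along a line of four cells with invented values; two clustered pairs yield I = 1/3, a positive value consistent with spatial clustering.

```python
def morans_i(values, weights):
    """Global Moran's I; weights is a dict mapping (i, j) -> w_ij."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]                 # deviations from the mean
    s0 = sum(weights.values())                       # S_0 = sum of all weights
    num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four cells on a line with values clustered at each end.
values = [1.0, 1.0, 10.0, 10.0]
# Binary adjacency: each cell neighbors the next one, in both directions.
weights = {}
for i in range(3):
    weights[(i, i + 1)] = 1.0
    weights[(i + 1, i)] = 1.0

I = morans_i(values, weights)
print(round(I, 4))  # 0.3333: positive, indicating spatial clustering
```

Swapping the values to an alternating pattern like [1, 10, 1, 10] drives the cross-products negative, giving a negative I, the dispersion case described in the text.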