Metashape
from Wikipedia
Metashape
Developer: Agisoft LLC
Initial release: 2010
Stable release: 2.1.2 build 18358 (20 June 2024)
Operating system: Microsoft Windows, Linux, macOS
Type: 3D computer graphics software
License: Proprietary
Website: www.agisoft.com

Agisoft Metashape (previously known as Agisoft PhotoScan[1]) is a software tool for photogrammetric processing. It is available in Standard and Pro versions: the Standard version is sufficient for interactive media tasks, while the Pro version is designed for authoring GIS content. The software is developed by Agisoft LLC, based in St. Petersburg, Russia.

It is widely used by archaeologists.[2][3][4][5][6] Many UAV companies are also using it.[7][8][9]

The software runs on Microsoft Windows, macOS, and Linux.

Use in industry


References

from Grokipedia
Agisoft Metashape is a stand-alone software product developed by Agisoft LLC for photogrammetric processing of digital images, generating 3D spatial data such as point clouds, textured polygonal models, and digital elevation models for use in geographic information systems (GIS) applications, cultural heritage documentation, and visual effects production. Previously known as Agisoft PhotoScan, it was renamed in 2019 to reflect advancements in its capabilities. The software supports processing of images from RGB, multispectral, or thermal cameras, including those from multi-camera systems, and is optimized for accuracy, speed, and hardware efficiency through GPU acceleration.

Available in Standard and Professional editions, Metashape caters to a range of users from hobbyists to professionals in fields like surveying, mapping, and 3D reconstruction. The Standard edition focuses on core functionalities such as photogrammetric triangulation, dense point cloud generation, 3D model texturing, and panorama stitching, making it suitable for applications like digitizing artifacts, architectural elements, and small-scale scenes. In contrast, the Professional edition extends these with advanced geospatial analysis tools, including georeferenced orthomosaic generation, DSM/DTM creation and editing, vegetation classification, and support for ground control points, enabling large-area mapping and precise measurements.

Founded in 2006 as a research company specializing in computer vision and digital image processing, Agisoft LLC designed Metashape to provide cost-effective solutions for 3D visualization, indirect object measurement, and mapping across scales from microscopic to planetary. Its workflow typically involves image alignment, dense point cloud generation and editing, mesh building, and texture application, producing outputs compatible with formats like OBJ, PLY, and LAS for integration into CAD, animation, or GIS software. Widely adopted in surveying, cultural heritage documentation, forensics, and visual effects, Metashape emphasizes intelligent automation to handle complex datasets efficiently while maintaining high fidelity in results.

History and development

Founding of Agisoft

Agisoft LLC was established in 2006 in St. Petersburg, Russia, as an innovative research company dedicated to advancing computer vision technology. The founding marked the beginning of efforts to develop cutting-edge solutions in image processing and spatial data generation, positioning the company within the burgeoning field of photogrammetry applications. From its inception, Agisoft placed a strong emphasis on research and development (R&D) in computer vision and image-processing algorithms. This focus allowed the company to explore techniques for creating accurate three-dimensional models from two-dimensional images, addressing needs in industries such as surveying, cultural heritage preservation, and mapping. Key milestones in Agisoft's early years included intensive R&D that built substantial expertise in these areas, culminating in the creation of the first software prototypes. These prototypes laid the groundwork for commercial tools that would later evolve into Metashape, demonstrating the company's commitment to translating research innovations into practical applications.

Release history

Agisoft PhotoScan, the predecessor to Metashape, was first commercially released on December 5, 2010, as version 0.7.0, introducing core capabilities such as camera alignment, 3D model generation from images, and export options in several 3D and GIS formats, including Arc/Info ASCII Grid. This debut marked Agisoft's entry into accessible photogrammetric software, building on earlier beta versions from 2009 that focused on structure-from-motion techniques for dense reconstruction. In late 2018, with the preview release of version 1.5.0 on October 6, the software was renamed Agisoft Metashape to encompass its broadening scope beyond basic photo scanning, including advanced 3D reconstruction and geospatial applications; the full stable release followed in early 2019. This version introduced key enhancements like photo-invariant camera calibration for greater flexibility in lens modeling, improved depth map-based model generation for high-detail outputs in the Standard edition, and advanced editing tools such as marker residual displays and refined texture generation algorithms.

Major updates continued to evolve the software's core algorithms for better accuracy and efficiency. Version 2.0, previewed in October 2022 and fully released by December 2022, added support for multispectral imagery processing to generate orthomosaics and vegetation indices, alongside aerial laser scan integration, external scan registration, and accelerated dense point cloud building via GPU optimizations. Subsequent releases, including version 2.1 in December 2023, enhanced tie point handling and elevation-based views, while version 2.2 in December 2024 introduced CSV imports and planetary coordinate systems for bodies such as Mars. By 2025, updates in versions 2.2.x and 2.3.0 (October 4, 2025) further refined core reconstruction algorithms, with improvements in orthomosaic blending, GNSS bias handling for precise georeferencing, and parallel processing for faster alignments in large datasets compared to prior iterations. These releases also integrated a Timeline pane for 4D dynamic model reconstruction, enabling temporal analysis of changing scenes, and deepened cloud processing integration via Agisoft Cloud for scalable, remote computation without local hardware limits. Over time, algorithmic advancements, such as refined structure-from-motion solvers and adaptive depth filtering, have consistently boosted reconstruction accuracy to sub-millimeter levels in controlled tests while optimizing memory usage for handling datasets exceeding 10,000 images.

Software features

Photogrammetry pipeline

Metashape employs structure-from-motion (SfM) as its core technique to reconstruct three-dimensional scenes from two-dimensional overlapping images, estimating camera positions and orientations while generating geometric data such as point clouds and meshes. This process begins with the analysis of image sequences captured from various viewpoints, leveraging common features across overlapping views to infer spatial relationships without requiring prior knowledge of camera locations.

The pipeline's initial key stage involves feature detection and matching, where distinctive keypoints, such as corners or edges, are identified in each image using algorithms akin to the scale-invariant feature transform (SIFT), which detect interest points robust to scale, rotation, and illumination changes. These keypoints are then matched across overlapping images to establish correspondences, forming tie points that represent projected 3D scene points; parameters like key point limits (typically up to 40,000 per image) and tie point limits (around 10,000) control the density and accuracy of these matches to balance computational efficiency and precision. Following matching, bundle adjustment optimizes the initial estimates through a least-squares minimization of reprojection errors, simultaneously refining 3D point coordinates, camera interior (focal length, principal point) and exterior (position, orientation) parameters, and lens distortion models, using collinearity equations central to photogrammetric aerotriangulation. This step ensures consistency across the entire dataset, reducing geometric inconsistencies and achieving sub-pixel accuracy in camera pose estimation.

For dense reconstruction, Metashape generates depth maps from the aligned images via multi-view stereo matching, computing per-pixel disparities between overlapping views to estimate surface depths, which are then filtered for outliers and merged into a high-resolution dense point cloud. The software handles diverse image types effectively: RGB images for general-purpose reconstruction with color texturing; multispectral imagery for applications like vegetation analysis, incorporating band-specific alignments and reflectance calibration; and thermal images for heat signature-based reconstructions, such as in fire monitoring, while maintaining compatibility with multi-camera systems and non-standard projections like fisheye or spherical lenses.
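The matching and alignment stages described above can also be driven through the Professional edition's Python API (module Metashape). The following is a minimal sketch rather than a definitive workflow: parameter names follow the 2.x API and may differ in other releases, and the photo folder and project paths are hypothetical.

```python
import glob
import Metashape

# Gather a set of overlapping photos (hypothetical folder path).
photos = glob.glob("/data/survey_photos/*.JPG")

doc = Metashape.Document()
doc.save("/data/survey.psx")   # create the project file on disk
chunk = doc.addChunk()
chunk.addPhotos(photos)

# Feature detection and matching; the key point and tie point limits mirror
# the typical values mentioned above (40,000 and 10,000), and downscale=1
# corresponds to the "High" accuracy preset.
chunk.matchPhotos(downscale=1,
                  generic_preselection=True,
                  reference_preselection=True,
                  keypoint_limit=40000,
                  tiepoint_limit=10000)

# Incremental pose estimation plus bundle adjustment: produces the sparse
# point cloud (tie points) and refined camera calibrations.
chunk.alignCameras()
doc.save()
```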

Supported outputs and formats

Metashape generates a variety of 3D and spatial data outputs derived from photogrammetric processing, enabling integration with downstream applications in geospatial analysis, visualization, and mapping. Point clouds, both sparse (tie points) and dense, represent reconstructed 3D point data and are exported in formats suitable for geospatial workflows, such as LAS and LAZ for LiDAR-compatible systems, as well as PLY, E57, PTS, and XYZ for broader compatibility. These outputs preserve coordinate information and attributes like color or intensity, facilitating uses in surveying and environmental modeling.

Textured polygonal meshes, which form detailed 3D surface models, are created from dense point clouds and exported in standard 3D formats including OBJ, PLY, and STL for additive manufacturing and CAD integration, along with FBX, DAE, and GLB for animation and web-based visualization. These meshes include high-resolution textures generated from input imagery, supporting applications in cultural heritage documentation and visual effects.

Orthomosaic maps, which provide georeferenced aerial imagery corrected for distortion, and digital elevation models (DEMs), representing terrain heights as raster grids, are exported primarily in GeoTIFF for geospatial software compatibility, with additional image formats for visual overviews and XYZ or BIL for elevation data interchange. DEMs utilize floating-point TIFF to maintain precise elevation values, essential for topographic analysis. Additional outputs include panoramic images for 360-degree views, exported in formats such as JPEG, TIFF, and PNG, with tiled options to support web mapping tools; video animations of fly-throughs in AVI, MOV, MP4, or WMV for presentation purposes; and GIS-compatible vector layers, such as shapefiles for polygons, polylines, and markers, enabling overlay in systems like QGIS or ArcGIS.
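As a brief illustration of these export paths, the sketch below writes the dense point cloud to LAS and the textured mesh to OBJ. It assumes the 2.x Python API and a chunk that has already been processed; the file paths are hypothetical.

```python
import Metashape

doc = Metashape.Document()
doc.open("/data/survey.psx")   # previously processed project (hypothetical path)
chunk = doc.chunk

# Dense point cloud in LAS for LiDAR-oriented GIS tools.
chunk.exportPointCloud("/data/exports/cloud.las",
                       source_data=Metashape.PointCloudData)

# Textured polygonal mesh in OBJ; the texture image is written alongside it.
chunk.exportModel("/data/exports/model.obj")
```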

Versions and licensing

Standard edition

The Standard edition of Metashape provides essential tools tailored for hobbyists, educators, and users focused on basic tasks, enabling the processing of digital images into three-dimensional models without the need for advanced professional workflows. It supports core functionalities such as photogrammetric triangulation for camera alignment, dense point cloud generation and editing, and 3D model creation with texture mapping, allowing users to generate textured meshes and panoramic images from still photographs or video sources. These capabilities make it suitable for non-professional applications like creating 3D content for personal projects, educational demonstrations, or simple visualizations derived from consumer-grade imagery.

Unlike the Professional edition, the Standard edition lacks support for advanced geospatial features, including georeferenced digital elevation model (DEM) and orthomosaic exports, as well as tools for vegetation indices, classification, and hierarchical tiled model processing. It also does not include Python scripting for automated, headless operation, limiting its use to smaller-scale projects without integration into geospatial analysis pipelines or custom scripting environments. Basic referencing elements, such as ground control points and markers, are available but insufficient for professional surveying or GIS applications.

Metashape Standard operates under a perpetual licensing model priced at $179 for a node-locked license, with no renewal fees required and free updates to all subsequent versions, including major releases. A free 30-day trial version is available for evaluation, providing full access to the edition's features during the trial period. This pricing structure positions it as an accessible entry point for users not requiring the Professional edition's extensive toolkit.

Professional edition

The Professional edition of Metashape is designed for advanced users in geospatial, surveying, and mapping applications, offering exclusive tools that extend beyond the basic capabilities shared with the Standard edition. It enables high-precision photogrammetric processing tailored to professional workflows, including the georeferencing and coordinate system support essential for survey-grade outputs. A core strength of the edition lies in its full geospatial capabilities, which include ground control point (GCP) import for accurate georeferencing and error control, as well as integration with real-time kinematic (RTK) and post-processed kinematic (PPK) data through metadata, flight logs, or GCPs to achieve centimeter-level accuracy in mapping projects. Additionally, it supports multi-channel processing for hyperspectral and multispectral imagery, allowing the generation of multichannel orthomosaics, vegetation indices like NDVI, and exports compatible with specialized analysis in environmental and agricultural monitoring. These features facilitate compliance with professional standards for measurement accuracy, such as those involving EPSG coordinate systems like WGS84 and UTM, ensuring reliable results in regulated settings.

The edition also provides advanced tools for handling complex datasets, including hierarchical chunking, which enables efficient modeling of large-scale projects like city environments while preserving original image resolution. For automation and scalability, it includes a Python scripting API that allows users to customize workflows, integrate with external pipelines, and perform batch operations on projects, chunks, and processing parameters. Network processing further enhances performance by distributing computations across a local computer network, optimizing use of CPU, GPU, and RAM resources for massive datasets.

Licensing for the edition is available as a perpetual node-locked license priced at $3,499 USD, with free updates for existing users transitioning from prior versions like PhotoScan. Educational institutions and qualified users can access discounted rates, though subscription options are not offered; upgrades from the Standard edition are possible at an additional cost.
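As an example of the scripting and batch capabilities exclusive to this edition, the sketch below loops over every chunk in a project and aligns those not yet processed. It is a minimal illustration under the 2.x Python API; the project path is hypothetical.

```python
import Metashape

doc = Metashape.Document()
doc.open("/data/multi_site_project.psx")   # hypothetical project with several chunks

for chunk in doc.chunks:
    # Skip chunks whose cameras are already aligned.
    if any(camera.transform is not None for camera in chunk.cameras):
        continue
    chunk.matchPhotos(downscale=1, keypoint_limit=40000, tiepoint_limit=10000)
    chunk.alignCameras()
    doc.save()   # persist progress after each chunk
```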

Processing workflow

Data input and preparation

Metashape accepts overlapping photographs as primary input for photogrammetric reconstruction, typically captured using sources such as drones for aerial surveys, conventional cameras for terrestrial photogrammetry, or laser scanners for point cloud integration. Supported image formats include JPEG, TIFF, PNG, and DNG (for RAW files), among others, allowing flexibility in capture hardware while prioritizing high-resolution files of at least 5 megapixels for optimal results. A minimum of 60-80% overlap between adjacent images is recommended to ensure sufficient feature matching, with aerial datasets ideally achieving 60% side overlap and 80% forward overlap to facilitate accurate 3D model generation.

Preparation begins with initial quality checks to assess image sharpness, exposure, and lighting consistency, excluding low-quality images that may score below 0.5 units in the software's image quality estimation tool to prevent alignment failures. Camera calibration is performed either automatically using embedded data or manually through the Camera Calibration dialog, supporting models like frame, fisheye, and spherical lenses; for precise setups, users can employ calibration patterns or precalibrated parameters from formats such as XML or TXT. Masking unwanted areas, such as backgrounds or edges, is applied via tools like automatic AI generation, manual selection with intelligent scissors, or import from alpha-channel PNG files, which helps focus processing on relevant features and reduces noise in the dataset. Metadata handling involves extracting EXIF tags for essential parameters like focal length, sensor size, and GPS coordinates, which aid in initial camera positioning and subsequent alignment; additional GNSS data can be imported via CSV files in the Reference pane, including accuracy estimates for georeferenced projects. For multispectral or thermal inputs, calibration uses radiometric panels or metadata to normalize exposure variations. These steps ensure data quality before proceeding to alignment, where prepared inputs enable robust feature detection.

Best practices emphasize even coverage around the subject to minimize blind zones, where points must be visible in at least two images for reconstruction; for small objects, datasets should include multiple viewpoints to avoid gaps, while avoiding ultra-wide lenses or uniform textures that hinder feature detection. Users are advised to plan captures in controlled lighting to maintain consistency and incorporate ground control points (GCPs) early for scaled accuracy, particularly in geospatial applications.
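The quality check and reference import described above can also be scripted. The sketch below disables images whose estimated quality falls below the 0.5 threshold and loads camera coordinates from a CSV; it assumes the 2.x Python API, where the quality-estimation call is named analyzeImages (earlier releases used analyzePhotos), and the paths and CSV column layout are hypothetical.

```python
import Metashape

doc = Metashape.Document()
doc.open("/data/survey.psx")
chunk = doc.chunk

# Estimate per-image quality (method name varies between API versions).
chunk.analyzeImages()

# Disable blurry or otherwise low-quality images below the 0.5 threshold.
for camera in chunk.cameras:
    quality = float(camera.meta["Image/Quality"])
    if quality < 0.5:
        camera.enabled = False

# Import GNSS camera positions from a CSV laid out as label, x, y, z
# (column string and delimiter are assumptions about the file's format).
chunk.importReference("/data/camera_positions.csv",
                      format=Metashape.ReferenceFormatCSV,
                      columns="nxyz",
                      delimiter=",")
doc.save()
```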

Alignment and reconstruction

The alignment and reconstruction phase in Agisoft Metashape constitutes the core computational pipeline for transforming input photographs into initial 3D models, leveraging structure-from-motion (SfM) techniques to estimate scene geometry and camera parameters. This process is divided into photo alignment, which generates a sparse point cloud, followed by dense reconstruction to produce a high-fidelity dense point cloud and optional mesh. SfM here involves incremental camera pose estimation and bundle adjustment to minimize reprojection errors across images, ensuring robust 3D recovery even from unordered photo sets.

Photo alignment commences with the detection of keypoints, distinctive feature points identified via scale-invariant algorithms that operate at multiple pyramid levels to handle varying image resolutions. A configurable key point limit, typically set to 40,000 for efficiency, caps the number of detected features per image to balance computational load and matching accuracy. Subsequent matching identifies correspondences between keypoints in overlapping images, forming tie points; this step employs preselection strategies such as reference-based or sequential modes to prioritize likely pairs and reduce exhaustive comparisons, with matching performed across scales to enhance robustness against viewpoint changes. Camera pose estimation then refines these tie points through aerial triangulation and bundle block adjustment, solving for exterior orientations (three translation components and three rotation angles: yaw, pitch, roll) and interior parameters using collinearity equations, optionally incorporating XMP metadata for initial guesses. The resulting alignment yields estimated camera positions visualized in the software, with reprojection errors reported to assess quality.

To refine alignment accuracy further, particularly for georeferenced projects, users can incorporate manual markers (also known as ground control points or GCPs). Markers are placed on recognizable features in multiple images and assigned real-world coordinates via the Reference pane, where they contribute to the bundle adjustment by minimizing their reprojection errors alongside those of tie points. High reprojection errors for markers (typically RMS >0.5–1 pixel) often indicate inaccurate manual placement or poor visibility in some images. To address this (a scripted version of the same checks follows the list below):
  • Open the Reference pane and review the Error (pix) column for each marker.
  • For markers with high error, double-click the marker in the Reference pane to inspect projections in each photo, zoom in, and manually adjust the marker position for better accuracy.
  • Use "Show Marker Residuals" in Photo view to visualize errors.
  • After refinements, run Workflow > Optimize Cameras to update the alignment.
  • If error remains high, uncheck the marker (treat as check point to exclude it from influencing alignment) or remove it if unreliable.
  • Ensure markers have projections on at least 2–3 well-distributed images and are clearly visible.
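A scripted version of this marker review, under the 2.x Python API, might look like the sketch below. The 0.5-pixel threshold mirrors the rule of thumb above, and the per-projection error computed here is a simplified reprojection check rather than the exact RMS statistic shown in the Reference pane; the project path is hypothetical.

```python
import Metashape

doc = Metashape.Document()
doc.open("/data/survey.psx")
chunk = doc.chunk

ERROR_THRESHOLD = 0.5   # pixels, rule of thumb discussed above

for marker in chunk.markers:
    if marker.position is None:
        continue
    errors = []
    for camera in chunk.cameras:
        projection = marker.projections[camera]
        if projection is None or camera.transform is None:
            continue
        estimated = camera.project(marker.position)   # marker reprojected into this photo
        if estimated is None:
            continue
        measured = Metashape.Vector([projection.coord.x, projection.coord.y])
        errors.append((estimated - measured).norm())
    if errors and max(errors) > ERROR_THRESHOLD:
        # Demote the marker to a check point so it no longer constrains the adjustment.
        marker.reference.enabled = False
        print("%s: max error %.2f px -> check point" % (marker.label, max(errors)))

# Update the alignment using the remaining control points.
chunk.optimizeCameras()
doc.save()
```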
From the aligned cameras and tie points, Metashape generates a sparse point cloud, a low-density representation of the scene comprising triangulated 3D coordinates of matched features, often numbering in the tens of thousands for typical datasets. This initial cloud, displayed as colored tie points in the model view, serves as a foundational estimate of the 3D structure without further densification, enabling quick validation of alignment before proceeding to denser modeling; it can be exported independently for applications requiring only coarse geometry.

Dense reconstruction builds upon this sparse foundation using multi-view stereo (MVS) to create a high-density point cloud and mesh. It begins with depth map calculation for each aligned image or overlapping pairs, employing dense stereo matching algorithms that propagate tie points to estimate per-pixel depths relative to the camera viewpoint, filtered for noise and optionally stored for reuse. These depth maps are then fused across multiple views, merging partial point clouds into a unified dense cloud with sampled colors and normals from source images; the process supports GPU acceleration on compatible NVIDIA and AMD hardware to expedite matching and filtering. For meshing, the dense cloud is processed into a polygonal surface via surface reconstruction methods, though this remains an initial model prior to refinement. Fine-level task subdivision allows handling large datasets by processing in manageable chunks, mitigating memory constraints during these steps.

Quality settings in Metashape allow users to trade detail against processing efficiency, with options spanning Lowest to Ultra high that downscale input images for dense reconstruction; for instance, Ultra high processes at full original resolution for maximum sharpness, High at half resolution, Medium at quarter, and Low at one-eighth linear resolution for rapid previews. Quality presets (Lowest, Low, Medium, High, Ultra high) primarily affect depth map resolution and point density, where Ultra high yields the finest detail but demands substantial resources; on standard hardware with 16 GB RAM, aligning and reconstructing 300–500 images at High quality can take several hours, escalating to days for Ultra settings on larger sets without GPU support or distributed processing. These configurations ensure adaptability to project scale, with higher tiers recommended for research-grade outputs despite the extended computation time.
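In the 2.x Python API, these quality presets correspond to a downscale factor passed to depth map generation (1 for Ultra high, 2 for High, 4 for Medium, 8 for Low, 16 for Lowest). The sketch below continues from an already aligned chunk; the mild filtering mode and texture size are illustrative choices rather than values the source prescribes.

```python
import Metashape

# Quality preset -> linear downscale factor for depth map generation.
QUALITY_DOWNSCALE = {"ultra_high": 1, "high": 2, "medium": 4, "low": 8, "lowest": 16}

doc = Metashape.Document()
doc.open("/data/survey.psx")
chunk = doc.chunk   # assumed to be aligned already

# Depth maps via multi-view stereo; MildFiltering preserves more fine detail
# than AggressiveFiltering at the cost of additional noise.
chunk.buildDepthMaps(downscale=QUALITY_DOWNSCALE["high"],
                     filter_mode=Metashape.MildFiltering)

# Fuse the depth maps into the dense point cloud, then mesh and texture it.
chunk.buildPointCloud()
chunk.buildModel(surface_type=Metashape.Arbitrary,
                 source_data=Metashape.DepthMapsData)
chunk.buildUV()
chunk.buildTexture(texture_size=8192)
doc.save()
```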

Post-processing and export

After initial reconstruction, Metashape enables detailed post-processing to refine 3D models, perform measurements, apply georeferencing, and prepare outputs for various applications. This stage builds on the dense point cloud, mesh, and texture generated earlier, allowing users to optimize models for efficiency and accuracy before export.

Mesh editing tools in Metashape facilitate optimization of reconstructed models by reducing complexity and improving quality. Decimation reduces the polygon count through the Tools > Decimate Mesh command, with presets like High (retaining 1/5 of points), Medium (1/15), and Low (1/45), or custom targets such as 100,000–200,000 faces for export suitability; this process targets faces based on point cloud density while preserving geometry. Smoothing refines surface irregularities via interpolation modes (Disabled, Enabled, or Extrapolated) with adjustable strength, applicable to selected faces for targeted enhancement. Hole filling addresses gaps in the mesh using the Close Holes... command with a maximum size slider or during mesh building with Extrapolated mode, ensuring complete surfaces. Texture baking transfers high-resolution details to optimized maps, including diffuse, normal, occlusion, and displacement types, often followed by atlas rebuilding and resizing through Tools > Resize Texture to balance detail and file size. These edits collectively produce lightweight, high-fidelity models suitable for rendering or analysis.

Measurement tools provide precise analysis capabilities directly within the software interface. The ruler tool measures point-to-point distances on 3D models or digital elevation models (DEMs), while scale bars between points or cameras enable relative scaling. Area calculations use polygons in Model or Ortho views, or derive from DSM/DTM surfaces for geospatial extents. Volume estimation computes enclosed spaces via the Measure Area and Volume... command on closed meshes or DEM bases, supporting applications like stockpile assessment. Annotations add markers, polylines, or polygons with labels and attributes via the Properties panel, facilitating documentation and collaboration without altering the model. These tools require a scaled or georeferenced project for absolute units.

Georeferencing integrates real-world coordinates to align models accurately. Ground control points (GCPs) and GPS data from images or external files (e.g., CSV) are imported via File > Import Cameras or manual marker placement, with at least 10 GCPs recommended for high precision, distributed evenly across the area; markers are placed on multiple images to identify GCP locations. Coordinate systems like WGS84 or custom EPSG projections are selected, incorporating geoid data for elevation accuracy. Optimization refines alignment through bundle adjustment in Tools > Optimize Cameras, adjusting camera parameters and incorporating GCPs for sub-centimeter error reduction in surveyed projects; high marker reprojection errors can be diagnosed and corrected using the marker review steps described in the alignment section above. This process scales and orients the model to global references, essential for mapping and surveying outputs.

Export workflows streamline output preparation with batch automation and versatile formats. The Workflow > Batch Process command automates tasks across chunks using templates (e.g., {chunklabel}), supporting network distribution for large datasets split into blocks. Supported formats include OBJ, PLY, and FBX for meshes; GeoTIFF for orthomosaics and DEMs; LAS/LAZ for point clouds; plus STL, DXF, KML, and CSV for specialized needs. Compression options like LZW, JPEG (with quality sliders), Packbits, or Deflate reduce file sizes without significant loss, while metadata such as coordinate system information, processing parameters, and error reports are embedded, often via accompanying World files or project summaries. Video exports include adjustable texture sizes for animations, ensuring compatibility with downstream software like GIS or CAD systems. These export capabilities are current as of version 2.2.2 (2025), with minor enhancements in the 2.3.0 pre-release.
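Several of these post-processing and export steps can be chained in a script as well. The sketch below, again a non-authoritative example under the 2.x Python API, decimates the mesh to an export-friendly face count, builds a DEM and orthomosaic, and writes both as GeoTIFF; the paths and face-count target are illustrative.

```python
import Metashape

doc = Metashape.Document()
doc.open("/data/survey.psx")
chunk = doc.chunk   # assumed to already contain a dense point cloud and mesh

# Reduce mesh complexity to a target face count suitable for export.
chunk.decimateModel(face_count=200000)

# Build a DEM from the dense point cloud, then drape an orthomosaic over it.
chunk.buildDem(source_data=Metashape.PointCloudData)
chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)

# Export both rasters as GeoTIFF for GIS software.
chunk.exportRaster("/data/exports/dem.tif",
                   source_data=Metashape.ElevationData)
chunk.exportRaster("/data/exports/orthomosaic.tif",
                   source_data=Metashape.OrthomosaicData)
doc.save()
```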

Applications

Geospatial and surveying

Metashape facilitates the generation of orthophotos, digital elevation models (DEMs), and contour lines from aerial and drone imagery, enabling detailed topographic mapping for geospatial applications. Orthophotos are created by projecting the processed 3D model onto a horizontal plane, producing georeferenced, distortion-free aerial images suitable for mapping large areas. DEMs represent terrain surfaces as raster grids of elevation values, derived from dense point clouds or mesh models, while contour lines are generated directly from DEMs to visualize elevation changes at specified intervals, supporting applications in land surveying and environmental analysis. These outputs are produced through the software's built-in tools, allowing users to customize resolution and projection for accuracy in topographic surveys.

The software supports high-precision surveying when integrated with real-time kinematic (RTK)-enabled drones, achieving centimeter-level accuracy in 3D reconstructions without extensive ground control points (GCPs). By importing geotagged images from RTK drones, Metashape optimizes camera alignment using the precise positioning data, typically reducing errors to 1-3 cm horizontally and vertically when combined with post-processing kinematic (PPK) corrections. This capability is particularly valuable for large-scale surveys where traditional methods are time-consuming, allowing for rapid data capture and processing to produce reliable geospatial models. Users can further refine accuracy by adjusting camera position uncertainties in the reference settings to match RTK specifications, such as 0.015 m horizontal and 0.03 m vertical, as sketched in the example below.

Metashape integrates with GIS software such as QGIS and ArcGIS through standard export formats such as GeoTIFF for orthophotos and DEMs, and DXF or SHP for contour lines, facilitating further analysis and visualization. Exported data retains georeferencing information, enabling overlay with vector layers for tasks like land-use assessment or flood modeling. This supports workflows where Metashape handles photogrammetric processing, while GIS tools perform spatial queries and visualization. In coastal erosion studies, the software has been used to generate DEMs and orthophotos from drone surveys, allowing quantification of shoreline changes over time; for instance, researchers in northwest Iberia applied the software to multi-temporal surveys for precise erosion rate calculations, achieving sub-meter accuracy in volume loss estimates. For infrastructure inspection, this integration enables the creation of orthomosaics that reveal anomalies like leaks or structural weaknesses.
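Setting those camera position uncertainties can be done in the Reference settings dialog or scripted; the snippet below is a minimal sketch under the 2.x Python API using the 0.015 m horizontal and 0.03 m vertical figures quoted above, with a hypothetical project path.

```python
import Metashape

doc = Metashape.Document()
doc.open("/data/rtk_survey.psx")
chunk = doc.chunk

# Horizontal (x, y) and vertical (z) accuracy of the RTK camera positions, in metres.
chunk.camera_location_accuracy = Metashape.Vector([0.015, 0.015, 0.03])

# Re-run the optimization so the adjustment weights the RTK positions accordingly.
chunk.optimizeCameras()
doc.save()
```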

Cultural heritage and archaeology

Metashape has been widely adopted in archaeology and cultural heritage documentation for creating detailed 3D models of monuments, excavation sites, and artifacts through close-range photogrammetry, enabling virtual archiving and preservation of irreplaceable sites. In projects like the Yale Monastic Archaeology Project at Atripe, Egypt, researchers used Metashape to process thousands of photographs captured during excavations, generating photorealistic 3D reconstructions of structures such as the Repit temple and Six-Pillared Hall from 2016 to 2019. These models serve as digital repositories, allowing non-destructive documentation of fragile archaeological remains and supporting long-term analysis without physical handling. Texture mapping in Metashape enhances these models by applying high-resolution photographic textures, producing photorealistic representations that capture surface details, colors, and material properties essential for restoration planning and virtual reality (VR) experiences. Similarly, studies on immersive photogrammetry have demonstrated Metashape's utility in generating VR-compatible models, such as walkthroughs of historical interiors, by integrating video frames into dense point clouds for interactive tours that educate and engage audiences remotely.

Collaborative initiatives leverage Metashape to share 3D heritage models through online platforms, fostering international access and collaboration. One such project (2022) digitized over 100 artifacts from the Kaptol–Čemernica necropolis in Croatia using Metashape, processing 24,848 photographs to produce 123 textured models uploaded to an online platform for public viewing and integration into virtual museum exhibitions via Unity. This approach supports cross-border collaboration among museums and archaeologists, enabling shared databases that enhance global heritage preservation efforts.

Compared to traditional methods like manual surveying or laser scanning, Metashape offers non-invasive documentation that avoids physical contact with delicate items, while being cost-effective through its reliance on standard cameras rather than expensive equipment. In low-cost setups, such as those using consumer-grade Nikon cameras, mask preparation times were optimized to approximately 25 seconds per image, yielding high-fidelity models suitable for fragile artifacts without risking damage. This accessibility democratizes 3D documentation, allowing smaller institutions to contribute to heritage conservation on a limited budget.

Other specialized uses

In forensic analysis, Metashape facilitates the 3D reconstruction of crime scenes from photographic evidence, enabling precise documentation and analysis of spatial relationships without disturbing the site. Researchers have demonstrated its effectiveness in capturing small-scale scenes and detailed evidence, such as footprints or tool marks, using structure-from-motion algorithms to generate accurate models with sub-millimeter precision when calibrated properly. For instance, studies comparing photogrammetry to LiDAR highlight Metashape's role in producing high-fidelity point clouds from digital camera images, particularly for non-reflective surfaces, though it requires multiple overlapping photos to mitigate errors from shiny or transparent objects.

In the film and gaming industries, Metashape supports asset creation by processing multi-camera rig data to reconstruct 3D models of real-world objects, which serve as bases for visual effects assets, virtual sets, and game environments. This workflow is particularly valuable for volumetric filmmaking and virtual production, where scanned models integrate into game engines and production tools for cinematic VR or interactive experiences. The software's ability to generate textured meshes from photogrammetric data reduces manual modeling time, allowing artists to import high-detail scans directly into production pipelines for enhanced realism in animations and visual effects.

For biomedical imaging, Metashape aids in modeling anatomical structures and prosthetics from multi-view photographs, supporting personalized device fabrication through accurate 3D surface reconstruction. In orthotic and prosthetic applications, it processes photogrammetric data to create patient-specific scans, complementing other modalities like structured light scanning for non-invasive limb or facial modeling with resolutions suitable for additive manufacturing. Such models enable clinicians to visualize and refine designs, as seen in orbital prosthesis rehabilitation where structure-from-motion techniques yield printable meshes that match patient anatomy closely.

In agricultural monitoring, Metashape processes multispectral imagery to generate orthomosaics for assessing plant health, identifying stress indicators like nutrient deficiencies or disease through vegetation indices derived from aligned image bands. This capability supports precision farming by mapping crop vigor and yield potential over large areas, with workflows that handle visible and near-infrared data to detect subtle variations in canopy reflectance. For example, UAV-captured datasets processed in Metashape produce georeferenced maps that quantify biomass and health metrics, aiding targeted interventions without exhaustive ground surveys.

Use in industry

Notable projects and case studies

In 2021, the United States Geological Survey (USGS) utilized Agisoft Metashape Professional Edition (version 1.6) to process shoreline aerial imagery for creating three-dimensional spatial products essential to coastal change assessments, including erosion mapping along vulnerable coastlines. This workflow enabled the generation of orthomosaics, digital elevation models, and point clouds from repeated image collections, supporting real-time hazard monitoring and recovery planning in dynamic coastal environments. The project's application of structure-from-motion techniques highlighted Metashape's role in producing high-accuracy geospatial data for environmental management.

International archaeological teams have employed Metashape in the digitization of Pompeii, Italy, to generate interactive 3D models of excavated structures and stratigraphic units, enhancing documentation and preservation efforts. For instance, the Pompeii I.14 Project, a collaboration among global partners, integrated drone-captured imagery processed through Metashape to build location-aware digital twins of excavation sites, linking 3D reconstructions to geospatial data for efficient on-site analysis. Similarly, the Skydio X10 drone initiative at Pompeii used Metashape to process high-resolution imagery into detailed 3D models, facilitating the study of inaccessible areas and public accessibility via virtual platforms. These efforts underscore Metashape's contribution to non-invasive heritage recording, preserving Pompeii's ruins for scholarly and educational purposes.

In the entertainment industry, Metashape has supported visual effects workflows for high-profile productions, including prop scanning to create digital assets from physical models. Notably, EA DICE integrated Metashape into its pipeline for Star Wars Battlefront, using photogrammetry to scan real-world props and terrains into detailed 3D models for game environments, ensuring photorealistic integration within the Star Wars universe. This approach allowed for efficient asset creation, blending scanned elements with virtual production techniques to enhance immersive storytelling in the franchise's gaming extensions.

Drone-based forest inventory projects in Europe have leveraged Metashape for biomass estimation, providing precise 3D reconstructions to support sustainable forest management. In one European study, researchers applied UAV multispectral imagery processed with Metashape Pro to develop species-specific models for forest health monitoring, enabling accurate volume and biomass calculations across inventory plots. This demonstrated Metashape's utility in generating canopy height models and point clouds from drone data, correlating structural metrics with field measurements to estimate aboveground biomass with high reliability in mixed European woodlands. Such applications aid in carbon stock assessment and forest management planning, illustrating Metashape's scalability for large-scale ecological surveys. In 2025, Metashape was used for 3D reconstruction of cell towers in infrastructure inspections, processing drone imagery to create detailed models for maintenance and safety assessments.

Integration with hardware and software

Metashape demonstrates strong compatibility with various hardware devices commonly used in photogrammetry workflows, enabling seamless data capture and processing. It supports input from DJI drones equipped with RTK modules, such as the Phantom 4 RTK and Matrice 300 RTK, where RTK coordinates from the drone's GNSS receiver are directly imported to enhance georeferencing accuracy during alignment. For close-range applications, the software processes images from DSLR cameras, including models like Canon EOS and Nikon D series, leveraging their high-resolution sensors for detailed 3D reconstructions without requiring specialized hardware interfaces. Additionally, Metashape facilitates hybrid data fusion by importing LiDAR point clouds from sensors on drones or terrestrial systems, such as those integrated with DJI platforms, allowing users to align and merge LiDAR data with photogrammetric models for improved density and accuracy in challenging environments.

On the software side, Metashape integrates with 3D content creation tools and game engines through export formats that support asset import into tools like Blender and Unity. Users can export textured meshes in OBJ, PLY, or FBX formats, which are natively compatible with Blender for further editing and animation, or with Unity for real-time rendering and VR applications. For CAD workflows, the software provides linkages via Python scripting to automate data transfer, including export to DXF format for importing meshes, point clouds, and vector data directly into CAD environments. Collaborative processing is enhanced through integration with Agisoft Cloud, a service that allows uploading projects for distributed computation on remote servers, enabling team-based workflows without local resource strain.

Automation features in Metashape are powered by its Python scripting interface, which supports batch processing of multiple projects and custom pipelines for enterprise-scale operations. Scripts can automate tasks like image alignment, dense cloud generation, and export, with official examples available for integrating external data sources or optimizing parameters. This scripting capability, compatible with Python 3.9, extends to Java bindings for broader interoperability in automated environments.

Metashape's system requirements emphasize GPU acceleration for efficient processing across operating systems. It leverages GPU-based processing on Windows, Linux, and macOS, with recommended hardware including NVIDIA GeForce RTX series GPUs featuring at least 4 GB VRAM to handle large datasets. AMD GPUs are supported via OpenCL, though NVIDIA CUDA provides the most optimized performance for core algorithms like depth map estimation and image matching. Minimum specifications include 16 GB RAM and a multi-core CPU (e.g., 4+ cores at 2.0+ GHz, such as an Intel Core i5 or equivalent), but 32 GB RAM and multi-core CPUs are advised for professional use.

