X-ray microtomography

from Wikipedia
3D rendering of a micro CT of a treehopper.
3D rendering of a μCT scan of a leaf piece, resolution circa 40 μm/voxel.
Two phase μCT analysis of Ti2AlC/Al MAX phase composite[1]

In radiography, X-ray microtomography uses X-rays to create cross-sections of a physical object that can be used to recreate a virtual model (3D model) without destroying the original object. It is similar to tomography and X-ray computed tomography. The prefix micro- (symbol: μ) indicates that the pixel sizes of the cross-sections are in the micrometre range.[2] These pixel sizes have also given rise to the synonyms high-resolution X-ray tomography, micro-computed tomography (micro-CT or μCT), and similar terms. Sometimes the terms high-resolution computed tomography (HRCT) and micro-CT are differentiated,[3] but in other cases the term high-resolution micro-CT is used.[4] Virtually all tomography today is computed tomography.

Micro-CT has applications both in medical imaging and in industrial computed tomography. In general, there are two types of scanner setups. In one setup, the X-ray source and detector are typically stationary during the scan while the sample/animal rotates. The second setup, much more like a clinical CT scanner, is gantry based where the animal/specimen is stationary in space while the X-ray tube and detector rotate around. These scanners are typically used for small animals (in vivo scanners), biomedical samples, foods, microfossils, and other studies for which minute detail is desired.

The first X-ray microtomography system was conceived and built by Jim Elliott in the early 1980s. The first published X-ray microtomographic images were reconstructed slices of a small tropical snail, with pixel size about 50 micrometers.[5]

Working principle


Imaging system


Fan beam reconstruction


The fan-beam system is based on a one-dimensional (1D) X-ray detector and an electronic X-ray source, creating 2D cross-sections of the object. This geometry is typically used in human computed tomography systems.

Cone beam reconstruction


The cone-beam system is based on a 2D X-ray detector (camera) and an electronic X-ray source, creating projection images that are later used to reconstruct cross-sections of the object.

Open/Closed systems


Open X-ray system


In an open system, X-rays may escape or leak out, so the operator must stay behind a shield, wear special protective clothing, or operate the scanner from a distance or from a different room. Typical examples are human-scale scanners and systems designed for large objects.

Closed X-ray system


In a closed system, X-ray shielding is built around the scanner, so it can be placed on a desk or a special table. Although the scanner is shielded, care must be taken and the operator usually carries a dosimeter, since X-rays absorbed by metal components can be re-emitted as scattered or fluorescent radiation. Although a typical scanner produces a relatively harmless dose of X-rays, repeated scans in a short timeframe could pose a danger. Digital detectors with small pixel pitches and micro-focus X-ray tubes are usually employed to yield high-resolution images.[6]

Closed systems tend to become very heavy because lead is used to shield the X-rays. Therefore, the smaller scanners only have a small space for samples.

3D image reconstruction

Series of projections from scanning a webcam with a cone-beam μCT system at 100 kV tube voltage and 12 W tube power.

The principle


Because microtomography scanners offer isotropic, or near isotropic, resolution, display of images does not need to be restricted to the conventional axial images. Instead, it is possible for a software program to build a volume by 'stacking' the individual slices one on top of the other. The program may then display the volume in an alternative manner.[7]
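The slice-stacking described above can be sketched in a few lines of NumPy (a minimal illustration; the dimensions and random data are invented stand-ins for reconstructed slices):

```python
import numpy as np

# Hypothetical stand-ins for reconstructed axial slices (100 slices,
# 256x256 pixels each); real data would come from the reconstruction step.
n_slices, height, width = 100, 256, 256
slices = [np.random.rand(height, width) for _ in range(n_slices)]

# 'Stacking' the individual slices one on top of the other builds a volume.
volume = np.stack(slices, axis=0)      # shape: (100, 256, 256)

# Because the resolution is (near) isotropic, the volume can be resliced
# along any axis, not just the original axial direction:
axial = volume[50]                     # one original axial slice
coronal = volume[:, 128, :]            # cross-section along another axis
sagittal = volume[:, :, 128]           # and a third orientation
```

Reslicing is just array indexing once the volume exists, which is why isotropic voxels make alternative viewing directions essentially free.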

Image reconstruction software


For X-ray microtomography, powerful open-source software is available, such as the ASTRA Toolbox.[8][9] The ASTRA Toolbox is a MATLAB and Python toolbox of high-performance GPU primitives for 2D and 3D tomography, developed from 2009 to 2014 by iMinds-Vision Lab, University of Antwerp, and since 2014 jointly by iMinds-VisionLab, UAntwerpen and CWI, Amsterdam. The toolbox supports parallel-, fan-, and cone-beam geometries, with highly flexible source/detector positioning. A large number of reconstruction algorithms are available, including FBP, ART, SIRT, SART, and CGLS.[10]

For 3D visualization, tomviz is a popular open-source tool for tomography.[citation needed]

Volume rendering


Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set, such as that produced by a microtomography scanner. The slices are usually acquired in a regular pattern (e.g., one slice every millimeter), with each slice containing the same number of pixels in a regular grid. This is an example of a regular volumetric grid, in which each volume element, or voxel, is represented by a single value obtained by sampling the immediate area surrounding it.

Image segmentation


Where different structures have similar threshold density, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove the unwanted structures from the image.[11][12]
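The idea can be sketched with synthetic voxel data (all values and region boundaries here are invented for illustration): thresholding alone keeps both structures, while a spatial mask removes the unwanted one.

```python
import numpy as np

# Synthetic example: two structures with nearly identical attenuation,
# so no gray-value threshold can separate them. All values are invented.
volume = np.zeros((1, 10, 10))
volume[0, 1:4, 1:4] = 50.0     # structure of interest
volume[0, 6:9, 6:9] = 49.5     # unwanted structure of similar density

binary = volume > 25.0          # thresholding keeps BOTH structures

# Segmentation removes the unwanted structure using spatial criteria;
# here a manual region-of-interest mask, though automatic methods
# (connected-component labeling, watershed, ...) follow the same idea.
roi = np.zeros_like(binary)
roi[0, :5, :5] = True                           # region containing the target
segmented = np.where(binary & roi, volume, 0.0)  # unwanted structure removed
```

After masking, only the structure of interest survives for rendering, which is exactly what adjusting rendering parameters alone could not achieve.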

Typical use


Archaeology


Biomedical

  • Both in vitro and in vivo small animal imaging
  • Neurons[14]
  • Human skin samples
  • Bone samples, including teeth,[15] ranging in size from rodents to human biopsies
  • Lung imaging using respiratory gating
  • Cardiovascular imaging using cardiac gating
  • Imaging of the human eye, ocular microstructures and tumors[16]
  • Tumor imaging (may require contrast agents)
  • Soft tissue imaging[17]
  • Insects[18] – Insect development[19][20]
  • Parasitology – migration of parasites,[21] parasite morphology[22][23]
  • Tablet consistency checks[24]

Developmental biology

  • Tracing the development of the extinct Tasmanian tiger during growth in the pouch[25]
  • Model and non-model organisms (elephants,[26] zebrafish,[27] and whales[28])

Electronics

  • Small electronic components, e.g. a DRAM IC in a plastic package.

Microdevices


Composite materials and metallic foams

  • Ceramics and Ceramic–Metal composites.[1] Microstructural analysis and failure investigation
  • Composite material with glass fibers 10 to 12 micrometres in diameter

Polymers, plastics


Diamonds

  • Detecting defects in a diamond and finding the best way to cut it.

Food and seeds

  • 3-D imaging of foods[29]
  • Analysing heat and drought stress on food crops[30]
  • Bubble detection in squeaky cheese[31]

Wood and paper


Building materials


Geology


In geology, micro-CT is used to analyze micropores in reservoir rocks[32][33] and can support microfacies analysis for sequence stratigraphy. In petroleum exploration it is used to model petroleum flow through micropores and around nanoparticles. Synchrotron-based instruments can reach resolutions down to the nanometre scale.

Fossils


Microfossils

X-ray microtomography of a radiolarian, Triplococcus acanthicus
This is a microfossil from the Middle Ordovician with four nested spheres. The innermost sphere is highlighted red. Each segment is shown at the same scale.[37]
  • Benthonic foraminifers

Palaeography


Space


Stereo images

  • Visualizing with paired color filters (e.g. red and green or blue) to perceive depth

Others


from Grokipedia
X-ray microtomography, also known as micro-computed tomography (micro-CT or μCT), is a non-destructive technique that uses X-rays to generate high-resolution three-dimensional (3D) reconstructions of a sample's internal microstructure, typically achieving voxel resolutions between 1 and 100 micrometers without requiring physical sectioning. The method captures hundreds to thousands of two-dimensional (2D) projection images by rotating the sample relative to an X-ray source and detector, then computationally reconstructs these projections into a volumetric model using algorithms such as filtered back-projection. Developed in the early 1980s as an extension of clinical computed tomography (CT) principles pioneered in the 1970s, micro-CT adapted X-ray technology for microscopic scales, with initial applications in biological imaging such as the visualization of snail anatomy. Key advancements include the use of microfocus X-ray tubes with focal spots as small as 5–50 micrometers and high-resolution detectors featuring pixel sizes of 10–55 micrometers, enabling geometric magnification and reduced blurring for enhanced spatial resolution. Systems can operate in laboratory settings with tube voltages up to 240 kV, or at synchrotron facilities where hard X-rays allow even higher resolution, down to the nanometer scale. The technique's core components (a microfocus source, a precision rotating sample stage, and a 2D detector) provide contrast based on X-ray absorption, phase shifts, or fluorescence, making it suitable for diverse materials from soft biological tissues to dense geological samples. Resolution is influenced by factors such as source spot size, detector pixel size, and sample dimensions, with optimal performance when the voxel size is approximately 1000 times smaller than the sample width.
Micro-CT finds broad application across fields including biomedicine, for 3D visualization of bones, lungs, vasculature, and neuronal structures in small animals and other organisms; materials science, for analyzing composites, foams, and defects; and the geosciences and paleontology, for non-invasive study of fossils and rock textures. Its primary advantages lie in providing quantitative, artifact-free internal imaging that supports further analyses, such as finite element modeling of mechanical properties, surpassing traditional methods like serial sectioning in speed and fidelity.

History

Early Development

The origins of X-ray microtomography trace back to the early 1980s, when James C. Elliott, a researcher at Queen Mary College (now Queen Mary University of London), invented the first such system to investigate the mineral distribution in biological hard tissues such as teeth and bone. Motivated by the need for non-destructive three-dimensional mapping of X-ray absorption in small samples, Elliott collaborated with S. D. Dover to develop this pioneering technology, marking a significant advance over earlier two-dimensional techniques. Elliott's prototype used a 40 kV microfocus X-ray source with a 20 μm focal spot size to generate projections at 1° intervals over 180°, recorded on film and subsequently digitized using a scanning microdensitometer for computer reconstruction. This setup achieved a spatial resolution of approximately 15 μm, enabling micron-scale imaging suitable for detailed analysis of internal structures in compact specimens. The system's design emphasized quantitative measurement of mineral content, with initial experiments demonstrating reconstructed images of test objects and a snail shell (Biomphalaria glabrata), the first published biological application, thus establishing its utility in dental research for studying enamel and dentine composition. These early applications extended to hard-tissue studies, providing insights into mineral gradients without sample destruction.

Despite its groundbreaking potential, early X-ray microtomography systems faced substantial challenges, including limited resolution, typically 15 to 100 μm, due to source and detector constraints, which restricted visualization of finer sub-micron features. Acquisition times were also protracted, often several hours per scan, owing to manual film handling and digitization and to the need for numerous projections to ensure reconstruction accuracy. These limitations, while hindering widespread adoption, underscored the foundational innovations that paved the way for subsequent improvements in resolution and speed.

Key Milestones and Commercialization

The introduction of synchrotron-based X-ray microtomography in the 1990s at facilities like the European Synchrotron Radiation Facility (ESRF) represented a pivotal advance, enabling sub-micron spatial resolutions for non-destructive 3D imaging of complex microstructures in materials and biological samples. Beamline ID19 at the ESRF, operational since 1996, facilitated early experiments demonstrating that high-brilliance X-ray sources could achieve resolutions below 1 μm, far surpassing contemporary laboratory capabilities. This development built on prior synchrotron work but emphasized parallel-beam geometries optimized for microtomography, driving applications in fields like geology and biomedicine. In the late 1990s, the commercialization of laboratory-based systems accelerated adoption by reducing dependency on large-scale facilities and making high-resolution imaging more accessible and cost-effective. Scanco Medical AG, founded in 1988 as a spin-off from ETH Zürich, developed its first micro-CT system (µCT 20) in 1995 and expanded its commercial offerings for bone and material analysis throughout the decade. Similarly, SkyScan (acquired by Bruker in 2012) shipped its first commercial micro-CT system in 1997, focusing on desktop units for 3D imaging in the life sciences and materials research, with resolutions initially approaching 5–10 μm. These systems democratized the technology, enabling routine use in academic and industrial labs without the logistical challenges of synchrotron access. By 2000, laboratory micro-CT systems had reached a milestone of 1-micron nominal resolution, allowing detailed visualization of sub-millimeter features in diverse samples such as trabecular bone and composites, which significantly broadened applications. Subsequently, the integration of phase-contrast imaging into these systems enhanced sensitivity to refractive-index variations, improving contrast for low-attenuating materials such as polymers and soft tissues without requiring chemical staining.
This advancement, leveraging propagation-based or grating-interferometry methods, was particularly impactful in biomedical imaging, as demonstrated in early studies of small animal models. Parallel to hardware progress, software standardization emerged in the early 2010s, with open-source tools such as ImageJ plugins enabling efficient reconstruction, segmentation, and quantification of micro-CT datasets. The BoneJ plugin, first released in 2010 for skeletal analysis, exemplified this trend by providing accessible algorithms for trabecular metrics and 3D morphometry, fostering community-driven improvements in image analysis. These developments collectively propelled X-ray microtomography into widespread commercial and academic use.

Fundamentals

Physical Principles

X-ray microtomography relies on the attenuation of X-rays as they pass through a sample, which provides the contrast necessary for imaging internal structures at micron-scale resolutions. The fundamental principle governing this attenuation is the Beer–Lambert law, expressed as I = I_0 e^{-\mu x}, where I is the transmitted intensity, I_0 is the incident intensity, \mu is the linear attenuation coefficient, and x is the path length through the material. This law describes how X-ray intensity decreases exponentially due to absorption and scattering interactions within the sample. At micron scales, the attenuation coefficient \mu is particularly sensitive to the material's composition and density: higher atomic numbers lead to greater absorption (scaling approximately as Z^4, where Z is the atomic number), and denser materials exhibit stronger overall attenuation. These dependencies enable differentiation between materials such as bone (high Z and density) and soft tissue in biological samples, or metals and polymers in materials science, though challenges such as beam hardening arise from energy-dependent variations in \mu. In addition to absorption contrast, X-ray microtomography can utilize phase contrast, which arises from the phase shift of X-rays passing through the sample due to refractive-index variations. This is particularly useful for imaging low-density materials with weak absorption, such as soft biological tissues, where boundaries and internal details are revealed without additional contrast agents. Phase contrast is achieved through propagation-based, grating-based, or analyzer-based methods, often requiring coherent sources like synchrotrons for optimal sensitivity. X-ray fluorescence can also provide contrast, by detecting characteristic X-rays emitted from excited atoms and thereby enabling chemical mapping in 3D, though it typically requires longer acquisition times. Projection acquisition in X-ray microtomography involves capturing multiple two-dimensional radiographs of the sample from different angles, typically over a 180° or 360° rotation, to record the line integrals of attenuation along various paths.
These projections form the basis for three-dimensional reconstruction via the Radon transform, which mathematically represents the integral of the object's attenuation function along straight lines at specified angles, producing a sinogram dataset. The inverse Radon transform, often implemented through filtered back-projection algorithms, then reconstructs the 3D attenuation map from these projections, as first formalized by Johann Radon in 1917 and later applied in seminal work on synchrotron-based X-ray microtomography. This process is central to microtomography, allowing volumetric imaging without destructive sectioning, though it requires hundreds to thousands of projections for sufficient angular sampling at high resolutions. Unlike macro-scale computed tomography (CT) used in medical imaging, which achieves resolutions of hundreds of microns to millimeters with larger samples, X-ray microtomography targets sub-millimeter features with voxel sizes typically ranging from 0.5 to 50 microns, enabling detailed visualization of microstructures such as pores or trabeculae. A key distinction lies in beam characteristics: laboratory micro-CT systems often employ polychromatic X-ray beams from microfocus tubes, which span a broad energy spectrum (e.g., 20–100 keV) and can introduce artifacts like beam hardening due to preferential absorption of lower-energy photons, whereas synchrotron sources provide monochromatic beams for more uniform attenuation and higher fidelity. Resolution in microtomography is fundamentally limited by the X-ray source spot size, typically 1–10 microns in microfocus tubes, which determines the geometric unsharpness and sets the practical limit for achievable detail before detector pixel size or sample constraints dominate. Voxel size directly influences contrast and noise: smaller voxels improve spatial resolution but reduce signal-to-noise ratio, while partial volume effects at edges can enhance boundary contrast through apparent sharpening, aiding feature delineation in low-contrast samples.
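The Beer–Lambert relation and its link to the line integrals used in reconstruction can be illustrated numerically (all coefficients below are arbitrary examples, not measured values):

```python
import numpy as np

# Numerical illustration of the Beer-Lambert law and the line integrals
# measured by each projection; all coefficients are arbitrary examples.
I0 = 1000.0    # incident intensity (arbitrary units)
mu = 0.5       # linear attenuation coefficient (1/mm)
x = 2.0        # path length (mm)
I = I0 * np.exp(-mu * x)               # transmitted intensity

# For a voxelized sample the exponent becomes a sum over the ray path:
mu_map = np.array([0.5, 0.2, 0.5])     # attenuation per voxel (1/mm)
dx = 1.0                               # voxel size (mm)
I_ray = I0 * np.exp(-mu_map.sum() * dx)

# Taking -log(I/I0) recovers the line integral that reconstruction
# algorithms (e.g. filtered back-projection) operate on:
line_integral = -np.log(I_ray / I0)    # equals sum(mu_map) * dx
```

This log-normalization step is what turns measured intensities into the Radon-transform data that the inverse transform then reconstructs.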

X-ray Sources and Detection Systems

X-ray microtomography relies on specialized sources to generate X-rays with sufficient intensity and collimation for high-resolution imaging. Laboratory-based systems commonly employ microfocus X-ray tubes, which produce focal spot sizes of 1–5 microns, enabling spatial resolutions down to sub-micron levels through geometric magnification. These tubes operate with a polychromatic spectrum in a cone-beam geometry and are valued for their accessibility and cost-effectiveness in routine applications. In contrast, synchrotron sources provide significantly brighter beams, several orders of magnitude higher in flux than tube sources, with parallel, highly collimated, and often monochromatic radiation, facilitating superior contrast and artifact-free imaging at micron resolutions, though at the expense of limited availability and higher operational complexity. Detection systems in X-ray microtomography typically utilize indirect flat-panel detectors that couple scintillators to photodiode or CCD arrays for converting X-rays into measurable electrical signals. Common scintillators, such as CsI:Tl, absorb X-rays and emit visible light, which is then captured by the sensor array, achieving pixel sizes typically in the range of 20–200 microns, with high-resolution variants down to 5–10 microns in specialized CCD-based systems suitable for micron-scale imaging. These detectors offer high dynamic ranges of 12–16 bits, allowing capture of subtle intensity variations across a wide signal range, with technologies such as fiber-optic light guides enhancing spatial resolution and efficiency by minimizing light spread. Critical performance parameters for these sources and detectors include flux, typically measured in photons per second, which governs signal-to-noise ratio and scan speed; energy range, spanning roughly 10–100 keV to penetrate diverse samples while maintaining resolution; and inherent trade-offs whereby achieving finer resolutions (e.g., via smaller focal spots or pixels) demands higher flux, thereby increasing dose to the sample.
For instance, microfocus tube systems balance flux limitations by using larger detector pixels (around 50 microns) to accumulate sufficient photons, but this can elevate dose in high-resolution modes, necessitating careful optimization for dose-sensitive applications such as biological imaging. Synchrotron setups mitigate this through their elevated flux, enabling lower doses at comparable resolutions. To ensure data quality, detectors undergo calibration procedures such as flat-field correction, which normalizes pixel responses using uniform exposures acquired without a sample to account for variations in scintillator thickness or defective elements. This method effectively suppresses ring artifacts, concentric distortions in reconstructed images arising from inconsistent detector sensitivities, by applying pixel-wise corrections via techniques such as wavelet decomposition or filtering, improving overall image fidelity in microtomography datasets.
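The flat-field correction described above can be sketched as follows (tiny synthetic 4x4 frames stand in for real detector images, with one deliberately low-sensitivity pixel):

```python
import numpy as np

# Sketch of dark-/flat-field calibration; the 4x4 arrays are toy stand-ins
# for full detector frames, with one deliberately less sensitive pixel.
dark = np.full((4, 4), 5.0)       # beam off: detector offset/noise floor
flat = np.full((4, 4), 105.0)     # beam on, no sample in place
flat[1, 1] = 55.0                 # low-sensitivity pixel (would cause a ring)

# Simulated raw frame: the sample transmits half of the beam everywhere.
raw = 0.5 * (flat - dark) + dark

# Pixel-wise normalization removes gain/offset variation between pixels,
# suppressing the ring artifacts such variation causes after reconstruction:
corrected = (raw - dark) / (flat - dark)   # true transmittance, 0.5 everywhere
```

After correction every pixel reports the same transmittance, including the less sensitive one, which is why this step precedes reconstruction.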

System Configurations

Scanning Geometries

In X-ray microtomography, scanning geometries define the arrangement of source, sample, and detector used to capture projection images during rotation, enabling three-dimensional reconstruction. These configurations balance resolution, field of view, and acquisition speed, with cone-beam setups dominating modern systems owing to their efficiency for volumetric imaging. Fan-beam geometry employs a point X-ray source and a linear (1D) detector array to acquire two-dimensional projections in a diverging fan-shaped beam, and is often suitable for slice-by-slice imaging of smaller samples. This setup typically involves rotating the sample through 180° to 360° to collect parallel-like projections, approximating uniform beam paths for reduced distortion in compact objects. While less common in full three-dimensional microtomography than in clinical applications, fan-beam configurations can be stacked for multi-slice acquisition, providing high resolution for targeted regions with sub-millimeter features. Cone-beam geometry, the predominant approach in microtomography, uses a point X-ray source and a two-dimensional area detector to capture full three-dimensional projections in a diverging cone-shaped beam, allowing direct volumetric data acquisition over a single rotation. This enables faster scans, often completing in minutes, for samples up to several centimeters, though it introduces cone-angle artifacts that require specialized corrections; rotations span 180° to 360° to ensure complete angular coverage. Cone-beam systems excel for smaller samples by leveraging the full detector area, achieving isotropic resolutions down to micrometers without needing multiple slice acquisitions. Sample positioning in these geometries relies on precision rotation stages mounted between the source and detector, offering sub-micron accuracy to maintain stability during rotation and minimize motion blur.
Geometric magnification, typically ranging from 10× to 100×, is controlled by adjusting the source-to-sample distance relative to the sample-to-detector distance, enhancing resolution for fine structures while keeping the sample within the beam's field of view. Acquisition parameters are tailored to the geometry and sample, with 1000 to 3000 projections commonly collected per full rotation to balance data density and scan duration. Angular steps between projections range from 0.1° to 0.5°, ensuring sufficient sampling for artifact-free reconstruction, while exposure times per projection vary from 0.1 to 10 seconds depending on source intensity and the desired signal-to-noise ratio.
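The geometric relationships above can be expressed directly (the distances and projection count below are illustrative, not from any particular scanner):

```python
# Cone-beam geometric magnification; all distances are illustrative.
source_to_sample = 10.0        # mm
source_to_detector = 500.0     # mm
detector_pixel = 0.05          # mm (50 um pixel pitch)

# Magnification is the ratio of the two distances; moving the sample
# closer to the source magnifies it further onto the detector.
magnification = source_to_detector / source_to_sample    # 50x

# The effective voxel size is the detector pixel divided by magnification
# (until the source spot size starts to dominate the blur).
voxel_size_mm = detector_pixel / magnification           # 0.001 mm = 1 um

# Angular sampling: 1800 projections over a full rotation.
n_projections = 1800
angular_step_deg = 360.0 / n_projections                 # 0.2 deg/projection
```

This shows why the same detector can deliver very different resolutions purely by repositioning the sample along the beam axis.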

Laboratory versus Synchrotron Setups

Laboratory-based X-ray microtomography setups typically employ compact systems with microfocus tube sources, utilizing cone-beam geometry and often featuring rotating gantries that allow for open configurations. These designs enable easy sample access and manipulation, making them suitable for routine, in-house experiments without the need for specialized facilities. A rotating gantry, in which the source and detector encircle a stationary sample, facilitates continuous imaging during in-situ processes, such as those involving fluid flow or mechanical loading, by accommodating connections like tubing or sensors without interrupting the scan. However, the lower X-ray flux from tube sources results in longer acquisition times, typically ranging from minutes to hours for high-resolution scans, and the polychromatic beam can introduce artifacts such as beam hardening. In contrast, synchrotron setups are beamline-based installations with fixed, high-brilliance sources providing parallel, monochromatic beams of exceptional flux, often orders of magnitude higher than laboratory systems. These closed configurations, housed within shielded hutches, prioritize radiation safety and beam stability, supporting ultra-fast scans that can complete in seconds and enabling dynamic imaging of rapid processes. The fixed beam path allows advanced experimental modes, such as imaging under mechanical load or controlled environmental conditions, with sub-micron spatial resolutions below 1 μm routinely achievable thanks to the coherent and tunable beam properties. Sample rotation occurs within a precisely controlled stage, but the enclosed environment limits mid-scan access compared with open designs. The distinction between open and closed systems underscores key practical differences: laboratory open configurations excel in flexibility for in-situ experiments requiring ongoing sample interaction, while synchrotron closed setups offer superior shielding, flux, and beam consistency for high-precision, artifact-free imaging.
Regarding accessibility and cost, laboratory systems are highly practical for widespread use, with commercial units priced between approximately $150,000 and $500,000, allowing dedicated installation in research labs for frequent, non-competitive operation. Synchrotron access, by contrast, relies on competitive beamtime proposals at national facilities, incurring no direct purchase cost but involving scheduling constraints and travel, while delivering unmatched performance for specialized, high-impact studies.
Aspect | Laboratory Setups | Synchrotron Setups
Configuration | Open, rotating gantry; cone-beam geometry | Closed, fixed beamline; parallel-beam geometry
Flux and Scan Time | Lower flux; minutes to hours | High flux; seconds
Resolution | Typically 1–10 μm | Sub-1 μm possible
In-Situ Suitability | High flexibility for sample access | Advanced modes with stability
Cost/Accessibility | $150k–$500k; routine lab use | Beamtime proposals; competitive access

Image Acquisition and Reconstruction

Data Acquisition Process

The data acquisition process in X-ray microtomography involves a systematic workflow to collect a series of two-dimensional projection images, known as radiographs, which form the raw dataset for subsequent reconstruction. This process typically occurs in laboratory or synchrotron setups, where the sample is precisely positioned relative to the X-ray source and detector to capture data across multiple angles. The workflow emphasizes stability and calibration to ensure high-fidelity projections, with the entire sequence often lasting from minutes to hours depending on resolution and sample size. Pre-scan preparation begins with sample mounting, where specimens are secured using low-density, non-attenuating materials such as cardboard tubes, florist foam, or adhesive to prevent movement during rotation; for example, biological samples such as small animals are often encased in foam to maintain orientation, while rock cores are placed on a rotary stage between the source and detector. Alignment follows, positioning the sample at the rotation center to minimize artifacts from offset, typically verified using low-exposure scout scans or optical aids. Parameter selection is critical and includes choosing the voxel size (aiming for approximately 1/1000 of the sample dimension, such as 6–75 μm for millimeter- to centimeter-scale objects), X-ray energy (30–150 kV, adjusted for sample density to optimize contrast, e.g., 140 keV for dense rocks), number of projections (often 1800–3600 over a 180°–360° rotation, depending on geometry), exposure time per projection (e.g., 500 ms), and magnification to achieve the desired field of view. The scanning sequence commences with calibration references: dark-field images (acquired with the beam off to capture detector noise) and flat-field images (beam on, sample removed, to account for beam inhomogeneities) are recorded for normalization.
The sample then undergoes automated rotation in either stepwise (pausing at each angle) or continuous mode, capturing 2D projection images at incremental angles; for instance, a typical high-resolution scan might involve 3200 angular positions with 2–5 averaged exposures per position to reduce noise, using tube currents of 140–250 μA. Thin metal filters (e.g., 0.15 mm) may be applied to harden the beam and mitigate low-energy artifacts, with the rotation spanning 360° in cone-beam geometries or 180° in parallel-beam setups to cover the necessary angular range. The raw data output consists of sinograms, which organize the projection images into a two-dimensional array in which each row corresponds to a projection angle and each column to a detector pixel's ray path through the sample. These are commonly stored as stacks of 16-bit TIFF files or in NetCDF format for efficient handling, with typical file sizes of several gigabytes per scan (e.g., 6.3 GB for a 3200-projection dataset at 75 μm resolution). The projections preserve quantitative attenuation values, enabling later processing into 3D volumes. Quality checks are integrated throughout to detect issues such as motion blur, which arises from sample instability and is assessed by reviewing projection sharpness during or after the scan, often mitigated by re-mounting. Beam instability, such as fluctuations in intensity, is monitored via real-time logs or reference images and addressed by source warm-up (e.g., 30–60 minutes) or filament checks; visual inspection of sinogram rows for streaks or inconsistencies ensures data quality before proceeding.
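The sinogram organization described above can be sketched as follows (random arrays stand in for the corrected radiographs of a real scan; dimensions are illustrative):

```python
import numpy as np

# Sketch of sinogram organization; random arrays stand in for the
# flat-/dark-corrected radiographs of a real scan.
n_angles, det_rows, det_cols = 360, 64, 64
projections = np.random.rand(n_angles, det_rows, det_cols) * 0.9 + 0.05

# A sinogram collects one detector row across every projection angle:
# each row of the sinogram is a projection angle, each column a pixel.
sinogram = projections[:, 32, :]           # shape: (360, 64)

# Transmittances are converted to attenuation line integrals before
# reconstruction (clipping guards against taking the log of zero):
line_integrals = -np.log(np.clip(sinogram, 1e-6, None))
```

One such sinogram exists per detector row, which is why the full dataset reconstructs into a stack of slices.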

Reconstruction Algorithms and Software

Reconstruction in X-ray microtomography transforms a series of two-dimensional projection images, acquired over a range of angles, into a three-dimensional volumetric representation of the sample's attenuation density. This process mathematically inverts the Radon transform, which models the forward projection of X-rays through the object, and is essential for generating artifact-free volumes suitable for subsequent analysis. Common approaches include analytical methods such as filtered back-projection (FBP) for speed, and iterative techniques for improved handling of noise and incomplete data, with the choice depending on data quality, geometry, and computational resources. Filtered back-projection serves as the foundational analytical method and is particularly effective for parallel- and fan-beam geometries in microtomography. It applies a ramp filter to each projection to counteract the blurring of simple back-projection, followed by weighted summation across all angles. For the cone-beam setups prevalent in laboratory systems, the Feldkamp–Davis–Kress (FDK) extension of FBP accounts for the diverging beam by incorporating a cosine weighting factor. The core FBP formulation for parallel-beam geometry reconstructs the image f(x, y) as

f(x, y) = \frac{1}{2} \int_0^\pi q(\theta, x \cos \theta + y \sin \theta) \, d\theta,

where q(\theta, s) is the filtered projection obtained by convolving the raw projection p(\theta, s) with the ramp filter h(s), whose Fourier transform is |\omega|, amplifying high frequencies in proportion to spatial frequency. This ramp filter, often implemented with apodization to suppress noise, enables rapid computation but can amplify inconsistencies in projections from polychromatic sources.
Iterative reconstruction methods address limitations of FBP by solving the discretized linear system Ax = b, where A represents the projection operator, x the voxel densities, and b the measured projections, through repeated forward and backward projections. The algebraic reconstruction technique (ART), pioneered by Gordon, Bender, and Herman in 1970, performs sequential updates by projecting the current estimate onto hyperplanes defined by individual ray equations, promoting fast initial convergence ideal for the sparse angular sampling common in microtomography. In contrast, the simultaneous iterative reconstruction technique (SIRT), introduced by Gilbert in 1972, averages corrections across all rays in each iteration, yielding smoother results with reduced noise propagation and better artifact suppression, though at higher computational cost. These methods excel on noisy microtomography datasets by incorporating regularization, such as total-variation penalties, to stabilize solutions. Dedicated software streamlines these algorithms for practical use. Bruker's proprietary NRecon implements FDK-based FBP with GPU acceleration, achieving up to 100-fold speedups through optimized interpolation, and supports region-of-interest reconstruction for large datasets. Open-source tools like TomoPy, a Python library, provide FBP, gridrec, and iterative solvers including ART and SIRT, tailored for synchrotron and lab-based data with built-in normalization and alignment utilities. Complementing this, the ASTRA Toolbox offers GPU-accelerated primitives for FBP, SIRT, and conjugate gradient least squares (CGLS) in 2D/3D cone-beam geometries, enabling custom algorithm development and efficient handling of high-resolution microtomography volumes. Micro-scale challenges like beam hardening and ring artifacts necessitate pre-reconstruction corrections applied as filters to the sinogram data. 
Beam hardening, resulting from preferential absorption of low-energy photons in polychromatic beams, distorts attenuation profiles and is mitigated in FBP and iterative pipelines via polynomial or linearization corrections calibrated against known reference materials such as aluminum, restoring quantitative accuracy without altering the core algorithms. Ring artifacts, caused by detector pixel variations, manifest as concentric rings in reconstructed slices and are removed using sinogram-domain pre-filters such as median filtering or combined wavelet–Fourier transforms to suppress radial stripes, preserving edge details in high-contrast microtomography samples.
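The SIRT scheme described above can be illustrated on a toy linear system. Here a small dense matrix A stands in for the projection operator; real reconstruction codes (e.g., the ASTRA Toolbox) apply the projector on the fly rather than storing A explicitly, and the ray geometry below is hypothetical:

```python
import numpy as np

def sirt(A: np.ndarray, b: np.ndarray, n_iter: int = 500) -> np.ndarray:
    """Simultaneous iterative reconstruction technique for Ax = b.

    Each iteration corrects the estimate using the residual of ALL rays at
    once, scaled by inverse row sums (ray lengths) and column sums (voxel
    coverage) — the averaging that gives SIRT its smoothing behaviour.
    """
    row_sums = A.sum(axis=1)
    row_sums[row_sums == 0] = 1
    col_sums = A.sum(axis=0)
    col_sums[col_sums == 0] = 1
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = b - A @ x                     # forward project, compare
        x += (A.T @ (residual / row_sums)) / col_sums  # back-project correction
    return x
```

For a consistent system (measurements generated from a true image), the forward projection of the SIRT estimate converges to the measured data; components in the null space of A remain undetermined, which is why regularization matters for sparse-view data.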

Data Processing and Visualization

Image Segmentation

Image segmentation in X-ray microtomography involves partitioning the reconstructed 3D grayscale volumes into distinct regions corresponding to materials, phases, or features of interest, enabling subsequent quantitative analysis. This process is essential for isolating structures such as pores, particles, or tissues from the surrounding matrix, often starting from the output of reconstruction algorithms that produce voxel-based images whose intensity levels reflect X-ray attenuation differences. Thresholding methods form the foundation of binary segmentation by classifying voxels based on their grayscale values relative to a selected threshold, separating foreground objects from the background. Global thresholding applies a single uniform value across the entire volume, with Otsu's method being a widely adopted automatic technique that determines the optimal threshold by maximizing the between-class variance in the grayscale histogram, assuming a bimodal distribution of intensities. Adaptive thresholding, in contrast, computes local thresholds for different regions to account for illumination variations or inhomogeneous attenuation, improving accuracy in samples with spatially varying densities. Advanced techniques address limitations of simple thresholding, such as over-segmentation or failure in complex textures. The watershed transform treats the image as a topographic surface, flooding from minima to delineate boundaries and separate touching objects, often enhanced with markers to control over-segmentation in noisy data. Region-growing methods initiate from seed points and iteratively expand regions by incorporating neighboring voxels that satisfy similarity criteria, such as intensity gradients, making them effective for delineating connected structures like aggregates or trabecular bone. 
Machine learning-based approaches, particularly deep convolutional neural networks such as U-Net, enable semantic segmentation by learning hierarchical features from annotated training data, achieving high precision in multi-class labeling of microtomography volumes for applications in materials science and biomedical imaging. From segmented volumes, quantitative metrics provide insights into microstructural properties. Porosity is calculated as the ratio of void volume to total volume, revealing open or closed pore networks critical for fluid flow studies. Connectivity assesses the degree of interconnection among segmented features, often using skeletonization or Euler-characteristic analysis to quantify pore networks and percolation pathways. Particle size distribution derives from watershed-separated or labeled objects, yielding statistics such as mean diameter or volume fractions that inform heterogeneity and mechanical behavior. At micro-scale resolutions, segmentation faces challenges from partial volume effects, where voxels at interfaces average multiple material densities, blurring boundaries and leading to inaccurate feature delineation. Noise in low-contrast regions, arising from photon scatter or beam hardening artifacts, further complicates thresholding and region-growing algorithms, necessitating preprocessing such as Gaussian filtering or more advanced denoising to preserve edges without introducing bias.
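Otsu's histogram-based threshold and the porosity metric discussed above can be sketched in NumPy. The bimodal synthetic data below is a stand-in for a reconstructed grayscale volume (here flattened to 1D for brevity):

```python
import numpy as np

def otsu_threshold(volume: np.ndarray, nbins: int = 256) -> float:
    """Otsu's method: pick the threshold maximizing between-class variance
    of the grayscale histogram, assuming a roughly bimodal distribution."""
    hist, edges = np.histogram(volume, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                  # cumulative background weight
    w1 = 1 - w0                        # foreground weight
    mu = np.cumsum(p * centers)        # cumulative mean
    mu_t = mu[-1]                      # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0
    return centers[np.argmax(var_between)]

# Synthetic bimodal intensities: 70% "pore" voxels, 30% "solid" voxels.
rng = np.random.default_rng(0)
vol = np.concatenate([rng.normal(50, 5, 7000), rng.normal(150, 5, 3000)])

t = otsu_threshold(vol)
solid = vol > t
porosity = 1 - solid.mean()   # void voxels / total voxels
```

On this synthetic volume the threshold falls between the two intensity modes and the recovered porosity is close to the 70% void fraction built into the data.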

Volume Rendering and Analysis

Volume rendering and analysis in X-ray microtomography involve techniques to visualize and quantify three-dimensional (3D) datasets reconstructed from tomographic projections, enabling the exploration of internal structures at micrometer resolutions. These methods process the voxel-based volume data to generate interactive 3D representations, facilitating the interpretation of complex architectures such as porous materials or biological tissues. Unlike two-dimensional slice views, volume rendering preserves the full volumetric information, allowing users to adjust visualizations dynamically for enhanced insight into density variations and spatial relationships. Direct volume rendering employs ray-casting algorithms to create photorealistic images by simulating light propagation through the volume. In this approach, rays are cast from the viewpoint through each pixel of the image plane, sampling scalar values along the path and accumulating color and opacity contributions until the ray exits the volume or reaches maximum opacity. Transfer functions map scalar intensities—representing X-ray attenuation—to optical properties such as color, opacity, and emission, enabling selective highlighting of features based on thresholds; for instance, higher opacities can be assigned to regions of interest such as bone or voids in micro-CT data. This technique, foundational since its introduction for medical volume visualization, is particularly suited to microtomography for non-destructive inspection of heterogeneous samples without prior segmentation. Isosurface rendering extracts and displays continuous surfaces defined by a constant scalar value, such as an attenuation threshold, to represent boundaries between materials. The marching cubes algorithm processes the volume by dividing it into cubic cells and determining polygon configurations at each cell the isosurface intersects, generating a triangulated mesh for smooth surface rendering. 
In microtomography applications, this method isolates structures like trabecular surfaces or material interfaces, providing clear geometric models for further analysis while reducing visual clutter from internal densities. Originally developed for medical CT visualization, marching cubes remains a standard method for efficient surface extraction from high-resolution volumetric data. Analysis tools in visualization workflows support interactive exploration and quantitative assessment of microstructure. Virtual slicing enables the extraction of arbitrary 2D planes through the 3D volume, revealing cross-sections at any orientation for detailed inspection of internal features. Orthogonal views display simultaneous slices along the principal axes, aiding spatial orientation and alignment verification. Morphometric measurements, such as trabecular thickness—which quantifies the local thickness of struts via parallel-plate or sphere-fitting models—provide standardized metrics for structural characterization, often applied to assess osteoporosis or connectivity in cancellous bone. These tools, when applied to segmented volumes, enhance precision in evaluating morphological parameters. Specialized software facilitates these rendering and analysis tasks for microtomography volumes. Avizo offers advanced direct volume rendering with customizable transfer functions and isosurface extraction, supporting large datasets through GPU acceleration for real-time interaction in materials and biological imaging. Other commercial packages provide AI-assisted tools for volume visualization and morphometric quantification, including ray-casting-based rendering optimized for X-ray CT data to streamline 3D exploration and reporting. Open-source platforms enable scalable rendering via ray-casting and isosurface implementations, with plugins for morphometric analysis suited to collaborative micro-CT workflows.
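Orthogonal-view extraction and one elementary rendering mode — maximum-intensity projection — can be sketched in NumPy on a synthetic volume; full ray-casting with transfer functions and marching-cubes surface extraction are normally delegated to the visualization packages mentioned above:

```python
import numpy as np

# Synthetic 3D volume indexed (z, y, x); real data would come from reconstruction.
rng = np.random.default_rng(1)
vol = rng.random((64, 64, 64))

def orthogonal_views(v: np.ndarray, z: int, y: int, x: int):
    """Return the three axis-aligned slices through voxel (z, y, x):
    axial (fixed z), coronal (fixed y), and sagittal (fixed x)."""
    return v[z, :, :], v[:, y, :], v[:, :, x]

def mip(v: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum-intensity projection along one axis — a simple rendering
    mode that highlights the densest voxel along each viewing ray."""
    return v.max(axis=axis)

axial, coronal, sagittal = orthogonal_views(vol, 32, 32, 32)
proj = mip(vol, axis=0)
```

Arbitrary oblique virtual slices follow the same idea but require resampling the volume along a rotated plane (e.g., with interpolation), which dedicated tools handle interactively.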

Applications

Biomedical and Biological Imaging

X-ray microtomography, also known as micro-CT, has become a cornerstone of biomedical imaging for its ability to provide non-destructive, three-dimensional visualization of biological tissues at micrometer resolutions. In medical and life sciences, it enables detailed analysis of both hard and soft tissues without the need for physical sectioning, facilitating studies of disease progression, tissue morphology, and developmental processes. This technique is particularly valuable for small animal models, where high-resolution datasets inform preclinical research on conditions such as osteoporosis and cancer. In bone and dental analysis, micro-CT excels at quantifying trabecular architecture and identifying defects associated with osteoporosis. For instance, in aging mouse models, it measures parameters such as bone volume fraction (BV/TV) and trabecular separation (Tb.Sp) in the femur, tibia, and vertebrae, revealing significant age-related losses—such as a 63.8% decrease in BV/TV in male mice from 6 to 22 months. Scans at 15 µm resolution using systems like the SCANCO vivaCT-40 allow precise morphometric analysis of craniofacial bones, where age-related changes are more pronounced in males. This non-destructive approach supports longitudinal studies of bone quality and density, outperforming traditional histomorphometry in volumetric detail. Soft tissue imaging benefits from contrast agents and phase-contrast techniques to overcome inherently low density differences. Nanoparticle-based agents, such as liposomal iodine or gold nanoparticles, enhance visualization of vasculature and tumors in small animals like mice, enabling assessment of vascular morphology and tumor permeability via the enhanced permeability and retention effect. For example, in mouse tumor models, 88 µm scans delineate tumor blood pools and branching patterns over hours to days. Phase-contrast micro-CT further improves sensitivity to soft-tissue interfaces, such as muscle fibers, achieving resolutions below 10 µm without exogenous agents, as demonstrated in cardiac studies. 
These methods are crucial for preclinical oncology and vascular research. In developmental biology, micro-CT supports 4D imaging (three spatial dimensions plus time) of embryonic growth in models such as zebrafish and mice, capturing dynamic processes such as organogenesis. Synchrotron-based histotomography provides cellular-resolution datasets of whole zebrafish larvae, segmenting nuclei and measuring cell volumes at ~1 µm isotropic resolution, allowing tracking of developmental stages without sectioning artifacts. In mouse embryos, contrast-enhanced protocols yield high-resolution phenotyping of entire specimens, with voxel sizes around 10 µm³, facilitating anatomical mapping and phenotype screening. This approach surpasses conventional histology by offering complete volumetric data for time-series analysis. Micro-CT routinely achieves resolutions of 1–5 µm voxels for imaging cellular structures, enabling direct comparisons to histological methods. Cytoplasm-specific contrast staining allows virtual slicing at ~400 nm thickness, visualizing renal structures like glomeruli with fidelity matching 7–10 µm histological sections, while preserving specimen integrity for multi-modal studies. Such capabilities highlight micro-CT's role in bridging macroscopic and microscopic scales in biological research.

Materials Science and Industrial Uses

X-ray microtomography plays a pivotal role in materials science by enabling non-destructive three-dimensional imaging of internal microstructures, which is essential for quality assurance and performance optimization in engineered materials. In composites and foams, it facilitates the quantification of void fractions and fiber orientations, critical factors influencing mechanical integrity in polymers and metals. For example, high-resolution scans reveal the spatial distribution of voids in high-fiber-volume-fraction composites processed via resin transfer molding, allowing researchers to correlate defect morphology with reduced tensile strength. Similarly, segmentation algorithms applied to CT data quantify fiber misalignment in unidirectional carbon-fiber-reinforced polymers, aiding the design of components with enhanced load-bearing capacity. In metallic and polymeric foams, microtomography characterizes cell topology, strut thickness, and defect-induced heterogeneity, providing insights into energy absorption and lightweighting applications. Scans of aluminum foams, for instance, have mapped pore interconnectivity and deformation mechanisms under compression, informing material development for automotive structures. For biobased foams reinforced with porous fillers, it visualizes heterogeneous density distributions and void clustering, which affect mechanical properties. Within electronics and microdevices, X-ray microtomography excels at inspecting solder joints and internal wiring in semiconductors, detecting subsurface voids and cracks that compromise reliability. Advanced techniques track electromigration-induced void growth in flip-chip solder bumps, enabling predictive modeling of failure modes in high-density packaging. Combined with laminography, it improves visualization of hidden interconnections in ball-grid arrays, supporting 100% non-destructive inspection in production lines. In additive manufacturing, microtomography assesses porosity and layer bonding in 3D-printed parts, identifying process-induced defects that degrade fatigue life. 
For powder bed fusion of alloys such as Ti-6Al-4V, it quantifies spherical and irregular pores in large components, linking their volume fractions—often below 1%—to tensile property variations after heat treatment. This technique also elucidates pore formation mechanisms, such as keyhole collapse during laser melting, guiding parameter optimization to minimize porosity in metal lattices. For non-destructive testing of food and wood products, X-ray microtomography evaluates density variations and internal defects without sample alteration. In foods, it images microstructural heterogeneity in baked goods and extruded products, detecting air pockets or inclusions that affect texture and shelf life. In wood, it reveals knot distributions, cracks, and density gradients in timber beams, facilitating automated grading for structural applications.

Cultural Heritage and Geological Studies

X-ray microtomography enables non-destructive internal imaging of cultural artifacts, allowing archaeologists to study ancient ceramics without physical alteration. By revealing manufacturing techniques such as coil construction and temper distribution, this method provides insights into ancient production processes and material sourcing for ceramics dating back thousands of years. For instance, scans of 18th-century Egyptian sealed vessels have disclosed internal construction details and contents, combining CT with other imaging modalities to preserve the fragile containers. In paleography, X-ray microtomography facilitates the virtual unrolling of sealed or carbonized scrolls, exposing hidden texts and inks without damage. Applied to the Herculaneum scrolls—over 1,800 carbonized rolls buried by the 79 CE eruption of Mount Vesuvius—phase-contrast microtomography has detected carbon-based inks and internal structures, enabling non-invasive reading of otherwise inaccessible writings. Similarly, for later sealed documents, it identifies iron-based inks and reveals concealed content, supporting historical analysis while avoiding destructive opening. For fossils and microfossils, microtomography captures three-dimensional morphology of ancient specimens, uncovering hidden internal features that optical methods cannot resolve. In microfossils from the 1.88-billion-year-old Gunflint Formation, ptychographic computed tomography achieved nanoscale resolution (down to 52 nm), revealing thin, irregular cell walls in kerogenous structures like Gunflintia and distinguishing biogenic features from taphonomic elements such as internal cracks and mineral infills. This non-destructive approach has also quantified densities (e.g., 1.39–1.50 g/cm³) and identified distinctive mineral phases in poorly preserved samples, enhancing assessments of biogenicity and preservation history. In geological studies, X-ray microtomography characterizes pore networks in rocks and sediments, informing permeability and flow models essential for hydrocarbon exploration. 
Multi-resolution scans (0.42–190 µm voxel sizes) of core plugs have measured porosities and permeabilities (up to 20.4 D), highlighting multi-scale vugs and micropores in reefal lithofacies that influence petrophysical properties. This technique links microstructure to macroscopic behavior, such as non-unique porosity–permeability relationships in heterogeneous sediments, without requiring sample destruction. For gemstones such as rubies, X-ray microtomography maps internal inclusions and detects cracks, aiding authenticity verification and treatment assessment. Scans of colored gems at 5–10 µm resolution have quantified fracture networks and filler volumes (e.g., 7–13% in treated rubies), creating 3D digital twins for forensic purposes while minimizing radiation-induced color changes. In building materials, it visualizes crack propagation in concrete at submicron scales (down to 50 nm), capturing 3D crack networks formed under mechanical or environmental stress in non-destructive 4D studies of 50 mm samples. This reveals crack initiation sites and their evolution, supporting durability analyses without sectioning.

Limitations and Future Directions

Technical Challenges

X-ray microtomography faces fundamental resolution limits primarily due to physical constraints such as diffraction and penumbral blurring from the finite source size. Diffraction effects impose a theoretical limit set by the wavelength and numerical aperture of the optics, typically yielding resolutions of 0.5–2.5 μm for visible-light scintillator-coupled detection systems. Penumbral blurring, arising from the geometric shadow of the source spot (often 0.5–5 μm in microfocus setups), further degrades edge sharpness, effectively capping achievable resolutions at around 0.5 μm in advanced lab-based nano-CT systems, though most commercial instruments achieve 1–5 μm for practical samples. These limits prevent routine sub-micron imaging without synchrotron sources, whose smaller effective source sizes enable finer detail. Common artifacts in X-ray microtomography stem from the use of polychromatic laboratory sources, leading to distortions that compromise image fidelity. Beam hardening occurs as lower-energy photons are preferentially absorbed, shifting the spectrum and causing cupping artifacts or streaks, with errors up to 32.6% in density measurements for mineralized samples thicker than 1 mm. Streak artifacts similarly arise from high-attenuation regions, such as dense inclusions, propagating radial lines across projections. Windmill effects, often seen in cone-beam geometries with undersampling, manifest as swirling distortions around high-contrast features due to interpolation errors in helical or multi-slice acquisitions. These artifacts can be partially mitigated through post-acquisition corrections in reconstruction software, though they remain a persistent challenge in polychromatic systems. Radiation dose accumulation poses significant risks to sensitive samples, particularly organic materials like polymers, where cumulative exposure induces structural degradation. 
In micro-CT scans, doses can reach 10^4–10^5 Gy over typical acquisition times, leading to chain scission, cross-linking, and mass loss in polymers such as PMMA, with complete nanoscale destruction observed after 10–20 minutes at synchrotron-equivalent intensities. For beam-sensitive polymers, this results in altered mechanical properties and chemical bonding, exacerbating damage in repeated or prolonged scans; biological tissues and soft materials exhibit similar shrinkage and denaturation effects. Dose minimization strategies, such as lower tube currents or faster rotations, are essential but often trade off against image quality. Handling the massive data volumes generated by X-ray microtomography presents substantial computational challenges, as datasets routinely scale to terabytes for high-resolution scans. A single reconstructed volume from 1800 projections at 1–5 μm voxel size can occupy 100 GB to over 1 TB, especially in time-resolved or large-field studies at synchrotron facilities, where daily collections exceed several terabytes. Reconstruction algorithms, such as filtered back-projection, demand high-memory GPUs or clusters for iterative processing, with times ranging from hours to days per dataset, limiting throughput in resource-constrained lab environments. Efficient data management, including compression and distributed processing, is critical to address storage and analysis bottlenecks. Recent advances in X-ray microtomography since 2020 have been driven by integration of artificial intelligence and machine learning, enabling faster and more accurate data processing. Deep learning algorithms, such as convolutional neural networks (CNNs) integrated with model-based iterative reconstruction (MBIR), have accelerated reconstruction times more than 9-fold for additive manufacturing parts by using sparse projections (193 views instead of 580) while maintaining detection of flaws larger than 50 μm at 78% accuracy. 
Similarly, self-supervised CNNs like Noise2Inverse have improved denoising in low-dose scenarios, achieving higher structural similarity index (SSIM) scores than traditional iterative methods without requiring paired training data. For automated segmentation, U-Net-based models trained on ash microtomography data have reached Dice coefficients of 0.998, automating quantitative analysis of particle and pore distributions that previously demanded extensive manual corrections. Phase-contrast and multi-energy imaging techniques have advanced through grating-based interferometry, providing enhanced soft-tissue contrast without contrast agents. The Talbot–Lau interferometer configuration detects phase shifts in low-absorbing materials like cartilage and lung tissue, enabling visualization of tissue subtypes and joint details with noise levels reduced via high-contrast phase gratings (visibility over 50% at 40–45 kVp). Post-2020 innovations, including dual-phase grating setups, have expanded the field of view and sensitivity by eliminating absorption gratings, as demonstrated in inverse Talbot–Lau systems that improve X-ray utilization for clinical applications such as emphysema detection in COPD patients. These methods offer superior detail for weakly absorbing structures compared to conventional absorption-based microtomography. In-situ and 4D imaging capabilities have evolved to capture dynamic processes under environmental stresses, particularly in materials science. Time-lapse X-ray microtomography during tensile loading of 316L stainless steel revealed a 48% increase in void populations at 2000 N load, with pore interconnections forming via cleavage when local stresses exceeded 531 MPa, allowing non-destructive tracking of damage evolution up to failure. The 4D-ONIX framework reconstructs 3D movies from just 2–3 sparse projections per timestamp, enabling kHz to MHz frame rates for processes like droplet collisions and laser-based additive manufacturing without sample rotation, thus reducing dose and acquisition time. 
Efforts toward higher resolutions and low-dose imaging include nano-CT extensions achieving sub-500 nm voxels, as demonstrated at the ESRF's ID16A beamline using nano-holotomography for detail comparable to cryo-EM in battery materials. AI denoising techniques, such as transformer-based residual networks, have enabled effective reconstruction at 5% dose levels, outperforming CNNs in preserving high-frequency details for nanoscale imaging. These developments, highlighted in 2024 reviews, prioritize reduced exposure while extending applications to sensitive biological samples.

References
