
Types of mesh

from Wikipedia

A mesh is a representation of a larger geometric domain by smaller discrete cells. Meshes are commonly used to compute solutions of partial differential equations and render computer graphics, and to analyze geographical and cartographic data. A mesh partitions space into elements (or cells or zones) over which the equations can be solved, which then approximates the solution over the larger domain. Element boundaries may be constrained to lie on internal or external boundaries within a model. Higher-quality (better-shaped) elements have better numerical properties, where what constitutes a "better" element depends on the general governing equations and the particular solution to the model instance.

Common cell shapes

Two-dimensional

Basic two-dimensional cell shapes

Two two-dimensional cell shapes are in common use: the triangle and the quadrilateral.

Computationally poor elements will have sharp internal angles or short edges or both.

Triangle

This cell shape consists of 3 sides and is one of the simplest types of mesh. A triangular surface mesh is always quick and easy to create. It is most common in unstructured grids.

Quadrilateral

This cell shape is a basic 4 sided one as shown in the figure. It is most common in structured grids.

Quadrilateral elements are usually excluded from being or becoming concave.

Three-dimensional

Basic three-dimensional cell shapes

The basic three-dimensional elements are the tetrahedron, quadrilateral pyramid, triangular prism, and hexahedron. Their faces are triangles, quadrilaterals, or both.

Extruded 2-dimensional models may be represented entirely by the prisms and hexahedra as extruded triangles and quadrilaterals.

In general, quadrilateral faces in 3-dimensions may not be perfectly planar. A nonplanar quadrilateral face can be considered a thin tetrahedral volume that is shared by two neighboring elements.
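Since a nonplanar quadrilateral face encloses a thin tetrahedral volume, that volume is easy to compute directly. The following sketch (function name and sample coordinates are illustrative, not from the article) evaluates the scalar triple product of the three edge vectors leaving one corner; the result is zero exactly when the face is planar.

```python
def quad_nonplanarity_volume(p0, p1, p2, p3):
    """Volume of the tetrahedron spanned by the four corners of a quad face.

    Returns 0.0 for a perfectly planar face; a positive value measures the
    'thin tetrahedral volume' shared by the two neighbouring elements.
    """
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    u, v, w = sub(p1, p0), sub(p2, p0), sub(p3, p0)
    # Scalar triple product u . (v x w)
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

# A planar unit square gives zero volume; lifting one corner does not.
print(quad_nonplanarity_volume((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)))  # 0.0
print(quad_nonplanarity_volume((0, 0, 0), (1, 0, 0), (1, 1, 0.3), (0, 1, 0)))
```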

Tetrahedron

A tetrahedron has 4 vertices, 6 edges, and is bounded by 4 triangular faces. In most cases a tetrahedral volume mesh can be generated automatically.

Pyramid

A quadrilaterally-based pyramid has 5 vertices and 8 edges, and is bounded by 4 triangular faces and 1 quadrilateral face. Pyramids are effectively used as transition elements between quadrilateral-faced and triangular-faced elements in hybrid meshes and grids.

Triangular prism

A triangular prism has 6 vertices and 9 edges, and is bounded by 2 triangular and 3 quadrilateral faces. The advantage of this type of layer is that it resolves the boundary layer efficiently.

Hexahedron

A cuboid, a topological cube, has 8 vertices, 12 edges, and 6 quadrilateral faces, making it a type of hexahedron. In the context of meshes, a cuboid is often called a hexahedron, hex, or brick.[1] For the same cell count, hexahedral meshes yield the most accurate solutions.

The pyramid and triangular prism zones can be treated computationally as degenerate hexahedra, in which some edges have been collapsed to zero length. Other degenerate forms of a hexahedron may also be represented.

Advanced Cells (Polyhedron)

A polyhedral (dual) element can have any number of vertices, edges, and faces. It usually requires more computing operations per cell due to the larger number of neighbours (typically 10),[2] though this is offset by the accuracy of the calculation.

Classification of grids

Structured grid
Unstructured grid

Structured grids

Structured grids are identified by regular connectivity. The possible element choices are quadrilaterals in 2D and hexahedra in 3D. This model is highly space efficient, since the neighbourhood relationships are defined by the storage arrangement. Other advantages of structured grids over unstructured grids include better convergence and higher resolution.[3][4][5]

Unstructured grids

An unstructured grid is identified by irregular connectivity. It cannot easily be expressed as a two-dimensional or three-dimensional array in computer memory. This allows for any possible element that a solver might be able to use. Compared to structured meshes, for which the neighborhood relationships are implicit, this model can be highly space inefficient since it calls for explicit storage of neighborhood relationships. The storage requirements of a structured grid and of an unstructured grid are within a constant factor of each other. These grids typically employ triangles in 2D and tetrahedra in 3D.[6]
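The storage trade-off described above can be illustrated with a small sketch. In a structured grid the neighbours of a cell follow from index arithmetic alone, while an unstructured grid must carry an explicit adjacency table; all names and the toy adjacency data below are hypothetical.

```python
# Structured grid: connectivity is implicit in the (i, j) index arithmetic;
# no neighbour table is stored at all.
def structured_neighbors(i, j, nx, ny):
    candidates = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) for a, b in candidates if 0 <= a < nx and 0 <= b < ny]

# Unstructured grid: connectivity must be stored explicitly, e.g. as an
# adjacency list mapping each cell id to its neighbouring cell ids.
unstructured_adjacency = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1, 3],
    3: [2],
}

def unstructured_neighbors(cell, adjacency):
    return adjacency[cell]

print(structured_neighbors(0, 0, 4, 4))                   # [(1, 0), (0, 1)]
print(unstructured_neighbors(2, unstructured_adjacency))  # [0, 1, 3]
```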

Hybrid grids

A hybrid grid contains a mixture of structured portions and unstructured portions. It integrates the structured meshes and the unstructured meshes in an efficient manner. Those parts of the geometry that are regular can have structured grids and those that are complex can have unstructured grids. These grids can be non-conformal which means that grid lines don’t need to match at block boundaries.[7]

Mesh quality

A mesh is considered to have higher quality if it yields a more accurate solution more quickly. Accuracy and speed are in tension: decreasing element size increases accuracy but also increases computational cost.

Accuracy depends on both discretization error and solution error. For discretization error, a given mesh is a discrete approximation of the space, and so can only provide an approximate solution, even when equations are solved exactly. (In computer graphics ray tracing, the number of rays fired is another source of discretization error.) For solution error, for PDEs many iterations over the entire mesh are required. The calculation is terminated early, before the equations are solved exactly. The choice of mesh element type affects both discretization and solution error.

Accuracy depends on both the total number of elements, and the shape of individual elements. The speed of each iteration grows (linearly) with the number of elements, and the number of iterations needed depends on the local solution value and gradient compared to the shape and size of local elements.

Solution precision

A coarse mesh may provide an accurate solution if the solution is a constant, so the precision depends on the particular problem instance. One can selectively refine the mesh in areas where the solution gradients are high, thus increasing fidelity there. Accuracy, including interpolated values within an element, depends on the element type and shape.

Rate of convergence

Each iteration reduces the error between the calculated and true solution. A faster rate of convergence means smaller error with fewer iterations.

A mesh of inferior quality may leave out important features such as the boundary layer for fluid flow. The discretization error will be large and the rate of convergence will be impaired; the solution may not converge at all.

Grid independence

A solution is considered grid-independent if the discretization and solution error are small enough given sufficient iterations. This is essential to know for comparative results. A mesh convergence study consists of refining the elements and comparing the refined solutions to the coarse solutions. If further refinement (or other changes) does not significantly change the solution, the mesh is grid-independent.

Deciding the type of mesh

Skewness based on equilateral volume

If accuracy is the highest concern, a hexahedral mesh is preferable. The mesh density must be high enough to capture all the flow features, but not so high that it captures unnecessary detail of the flow, burdening the CPU and wasting compute time. Wherever a wall is present, the mesh adjacent to the wall must be fine enough to resolve the boundary layer flow; there, quad, hex, and prism cells are generally preferred over triangles, tetrahedra, and pyramids. Quad and hex cells can be stretched where the flow is fully developed and one-dimensional.

Depicts the skewness of a quadrilateral

Based on its skewness, smoothness, and aspect ratio, the suitability of a mesh can be judged.[8]

Skewness

The skewness of a grid is an apt indicator of the mesh quality and suitability. Large skewness compromises the accuracy of the interpolated regions. There are three methods of determining the skewness of a grid.

Based on equilateral volume

This method is applicable to triangles and tetrahedra only and is the default method.

Smooth and large jump change

Based on the deviation from normalized equilateral angle

This method applies to all cell and face shapes and is almost always used for prisms and pyramids.

Equiangular skew

Another common measure of quality is based on equiangular skew:

skew = max( (θmax − θe) / (180° − θe), (θe − θmin) / θe )

where:

  • θmax is the largest angle in a face or cell,
  • θmin is the smallest angle in a face or cell,
  • θe is the angle for an equiangular face or cell, i.e. 60° for a triangle and 90° for a square.

A skewness of 0 is the best possible and a skewness of 1 is almost never acceptable. For hex and quad cells, skewness should not exceed 0.85 to obtain a fairly accurate solution.
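A minimal sketch of the equiangular-skew computation (function name illustrative; the input is assumed to be the interior angles of a single face, in degrees):

```python
def equiangular_skew(angles_deg, theta_e):
    """Equiangular skew of a face, given its interior angles in degrees.

    theta_e is the equiangular value: 60 for a triangle, 90 for a quad.
    0 is a perfect element; values near 1 indicate a badly distorted one.
    """
    t_max, t_min = max(angles_deg), min(angles_deg)
    return max((t_max - theta_e) / (180.0 - theta_e),
               (theta_e - t_min) / theta_e)

print(equiangular_skew([60, 60, 60], 60))      # 0.0 (equilateral triangle)
print(equiangular_skew([90, 90, 90, 90], 90))  # 0.0 (square)
print(equiangular_skew([120, 30, 30], 60))     # 0.5 (distorted triangle)
```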

Depicts the changes in aspect ratio

For triangular cells, skewness should not exceed 0.85 and for quadrilateral cells, skewness should not exceed 0.9.

Smoothness

The change in size should also be smooth. There should not be sudden jumps in the size of the cell because this may cause erroneous results at nearby nodes.

Aspect ratio

The aspect ratio is the ratio of the longest to the shortest side in a cell. Ideally it should equal 1 to ensure the best results; for multidimensional flow it should be near 1. Local variations in cell size should also be minimal: adjacent cells should not differ in size by more than 20%. A large aspect ratio can produce interpolation errors of unacceptable magnitude.
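The two rules of thumb above, an aspect ratio near 1 and adjacent cell sizes within about 20% of each other, can be sketched as simple checks (function names and the 1.2 growth threshold are illustrative):

```python
def aspect_ratio(edge_lengths):
    """Ratio of the longest to the shortest edge; 1.0 is ideal."""
    return max(edge_lengths) / min(edge_lengths)

def smooth_transition(size_a, size_b, max_growth=1.2):
    """True if two adjacent cell sizes differ by no more than ~20%."""
    big, small = max(size_a, size_b), min(size_a, size_b)
    return big / small <= max_growth

print(aspect_ratio([1.0, 1.0, 1.0, 1.0]))  # 1.0 (a square)
print(aspect_ratio([4.0, 1.0, 4.0, 1.0]))  # 4.0 (a stretched quad)
print(smooth_transition(1.0, 1.15))        # True
print(smooth_transition(1.0, 1.5))         # False (sudden size jump)
```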

Mesh generation and improvement

See also mesh generation and principles of grid generation. In two dimensions, flipping and smoothing are powerful tools for adapting a poor mesh into a good mesh. Flipping involves combining two triangles to form a quadrilateral, then splitting the quadrilateral in the other direction to produce two new triangles. Flipping is used to improve quality measures of a triangle such as skewness. Mesh smoothing enhances element shapes and overall mesh quality by adjusting the location of mesh vertices. In mesh smoothing, core features such as non-zero pattern of the linear system are preserved as the topology of the mesh remains invariant. Laplacian smoothing is the most commonly used smoothing technique.
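Laplacian smoothing, mentioned above as the most common smoothing technique, can be sketched in a few lines: each free vertex is repeatedly moved to the centroid of its neighbours while the connectivity stays fixed. The toy mesh and all names below are illustrative.

```python
def laplacian_smooth(points, neighbors, fixed, iterations=10):
    """points: list of (x, y); neighbors: adjacency list of vertex ids;
    fixed: set of boundary vertex ids that must not move."""
    pts = list(points)
    for _ in range(iterations):
        new_pts = list(pts)
        for v, nbrs in enumerate(neighbors):
            if v in fixed or not nbrs:
                continue
            # Move v to the centroid of its neighbours
            x = sum(pts[n][0] for n in nbrs) / len(nbrs)
            y = sum(pts[n][1] for n in nbrs) / len(nbrs)
            new_pts[v] = (x, y)
        pts = new_pts
    return pts

# A badly placed interior vertex 4 inside a fixed unit-square boundary
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.9, 0.1)]
nbrs = [[1, 3, 4], [0, 2, 4], [1, 3, 4], [0, 2, 4], [0, 1, 2, 3]]
smoothed = laplacian_smooth(pts, nbrs, fixed={0, 1, 2, 3})
print(smoothed[4])  # (0.5, 0.5): the centroid of the square
```

Note that only vertex positions change; the element connectivity, and hence the non-zero pattern of the linear system, is untouched, exactly as described above.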

from Grokipedia
In computational fluid dynamics (CFD) and related numerical simulation fields, a mesh refers to the subdivision of a continuous geometric domain into a finite number of discrete cells or elements, enabling the approximation and solution of governing partial differential equations for phenomena like fluid flow, heat transfer, and structural mechanics.[1]

Meshes are broadly classified into structured, unstructured, hybrid, and specialized variants like overset or Cartesian, each suited to different geometric complexities and simulation requirements. Structured meshes arrange elements in a regular, grid-like pattern with predictable connectivity, often using hexahedral (brick-like) cells, which offer high accuracy, efficient solver performance, and rapid convergence but demand significant manual effort for complex shapes.[1] In contrast, unstructured meshes employ irregular connectivity, typically with tetrahedral elements, providing flexibility for intricate geometries through automated generation while potentially requiring more cells for equivalent accuracy.[1]

Hybrid meshes combine element types, such as prismatic layers near boundaries for resolving thin shear layers and tetrahedral fills elsewhere, to balance ease of creation with precision, particularly in boundary layer modeling where anisotropic stretching is essential.[1] Polyhedral meshes, derived by merging simpler elements like tetrahedra, feature multi-faced cells that improve gradient resolution and reduce cell count compared to pure tetrahedral grids, enhancing efficiency in complex flows.[1] Other notable types include overset (or Chimera) meshes for handling moving components via overlapping grids, and higher-order curved elements that incorporate internal nodes for quadratic or cubic approximations, yielding superior resolution for phenomena like acoustics or vortices with fewer total elements.[1]

The choice of mesh type profoundly influences simulation fidelity, computational cost, and reliability, with key considerations including element quality (e.g., aspect ratios, skewness), conformity at interfaces, and adaptability for refinement in high-gradient regions.[1] Advances in meshing algorithms continue to prioritize automation and quality, ensuring robust predictions across engineering applications from aerodynamics to biomedical modeling.[1]

Introduction to Meshes

Definition and Purpose

In computational modeling, a mesh refers to the discretization of a continuous geometric domain into a finite collection of discrete cells or elements interconnected at nodes, enabling the numerical approximation of physical phenomena through methods such as the finite element method (FEM). This process divides complex geometries into manageable subdomains where governing equations can be solved locally and assembled globally.[2][3]

The concept of meshing traces its origins to the finite difference methods developed in the 1930s and 1940s for solving partial differential equations (PDEs) on structured grids, evolving significantly in the 1950s with early FEM applications in structural engineering and further maturing around 1960 into a versatile framework for irregular domains. Pioneering work by researchers like Richard Courant in 1943 and Alexander Hrennikoff in 1941 laid foundational ideas for domain subdivision, transitioning from rigid Cartesian grids to flexible element-based representations that better handle boundary irregularities.[4][5]

Meshes serve primarily to approximate solutions to PDEs in diverse fields, including computational fluid dynamics (CFD) for simulating fluid flows, structural analysis for predicting material deformations, and electromagnetics for modeling field distributions. By enabling the solution of complex, real-world geometries that analytical methods cannot address, meshes facilitate accurate predictions of behaviors such as airflow around airfoils in aerodynamics or stress distributions in engineering components under load.[6][7][8]

The basic workflow of meshing begins with domain decomposition, where the continuous space is partitioned into nodes as primary points, connected by edges to form lines, faces to define surfaces, and volumes to represent the full 3D structure, culminating in a cohesive network suitable for simulation. This hierarchical buildup ensures compatibility across elements, whether structured for regular topologies or unstructured for arbitrary shapes.[3][9]

Key Terminology

In the context of mesh generation for numerical simulations, such as the finite element method, several fundamental terms describe the building blocks and properties of a mesh. These terms provide the foundational vocabulary for understanding how a continuous domain is discretized to approximate solutions to partial differential equations (PDEs).[10] A node is a coordinate point in space that serves as a vertex, connected to one or more elements to define their shape and facilitate the computation of displacements or other field variables at discrete locations.[10] An edge is a one-dimensional line segment connecting two nodes, forming part of the boundary between adjacent elements in two- or three-dimensional meshes.[11] A face is a two-dimensional surface bounded by one or more edges, representing the interface between elements in three-dimensional meshes or the boundary in two-dimensional contexts.[11] A cell, also referred to as an element, is the basic volumetric or areal unit of the mesh, defined as an ordered grouping of nodes that bounds a portion of the spatial domain, such as a tetrahedron in 3D or a triangle in 2D.[10] Connectivity describes the mapping or topological relationship between nodes and elements, typically stored in a connectivity table that specifies which nodes belong to each cell to ensure the integrity of the mesh structure.[12] The Jacobian matrix is the matrix of partial derivatives that transforms coordinates from a reference (parametric) element to the physical space, given by
\mathbf{J} = \begin{pmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial x}{\partial \eta} & \frac{\partial x}{\partial \zeta} \\ \frac{\partial y}{\partial \xi} & \frac{\partial y}{\partial \eta} & \frac{\partial y}{\partial \zeta} \\ \frac{\partial z}{\partial \xi} & \frac{\partial z}{\partial \eta} & \frac{\partial z}{\partial \zeta} \end{pmatrix},
where (x, y, z) are physical coordinates and (\xi, \eta, \zeta) are parametric coordinates; its determinant assesses local distortion or quality of the element mapping.[13] Resolution refers to the density of cells within the mesh, quantified as the number of elements per unit length, area, or volume, which determines the accuracy of the discretization.[14]

Geometric Elements

Triangular Elements

Triangular elements are fundamental two-dimensional finite elements in mesh generation and finite element analysis, characterized by three nodes located at the vertices, connected by three straight edges, and forming a planar triangular shape. These elements discretize planar domains into a network of triangles, enabling the approximation of field variables such as displacement or temperature within each triangle. The geometry is defined solely by the nodal coordinates, ensuring simplicity in formulation and computation.[15] A key advantage of triangular elements is their ability to conform to irregular boundaries and complex geometries, facilitating automated meshing of arbitrary domains. They are particularly amenable to generation via Delaunay triangulation, an algorithm that produces meshes optimizing geometric criteria, such as maximizing the minimum angle of triangles to enhance interpolation accuracy and reduce numerical errors. However, triangular elements have disadvantages relative to quadrilateral elements; linear variants exhibit constant strain across the element, resulting in stiffer behavior and lower accuracy for problems involving bending or stress gradients, often necessitating higher polynomial orders or denser meshes to match the efficiency of quadrilaterals.[16][17][18][19] In applications, triangular elements are widely employed for simulating 2D heat transfer, where they model temperature distributions and heat fluxes in irregular domains, and for planar stress analysis in thin plates or membranes, capturing deformation under in-plane loading. For interpolation, linear triangular elements use shape functions that assume constant strain, suitable for coarse approximations but limited in resolving variations; quadratic triangular elements incorporate three additional mid-side nodes, enabling linear strain interpolation for improved accuracy in capturing gradients without excessive refinement. 
The area $ A $ of a triangular element with vertices at $ (x_1, y_1) $, $ (x_2, y_2) $, and $ (x_3, y_3) $ is computed using the shoelace formula:
A = \frac{1}{2} \left| x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2) \right|
This measure is essential for integrating element matrices and ensuring positive orientation in the mesh. Triangular elements contribute to the flexibility of unstructured meshes by adapting seamlessly to varying domain shapes.[15][16][20]
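The shoelace formula above translates directly into code (a minimal sketch; the function name is illustrative):

```python
def triangle_area(x1, y1, x2, y2, x3, y3):
    """Shoelace formula for the area of a linear triangular element."""
    return 0.5 * abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

print(triangle_area(0, 0, 1, 0, 0, 1))  # 0.5 (right triangle, unit legs)
# Degenerate (collinear) nodes give zero area, flagging an invalid element.
print(triangle_area(0, 0, 1, 1, 2, 2))  # 0.0
```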

Quadrilateral Elements

Quadrilateral elements are two-dimensional finite elements commonly employed in numerical simulations such as the finite element method (FEM), consisting of four nodes connected by four edges to form a closed polygonal shape.[21] These elements typically utilize a bilinear mapping to interpolate both geometry and field variables within the element domain, enabling efficient representation of planar regions.[22] Quadrilateral elements can be classified as convex or concave based on their interior angles and diagonal positions. In convex quadrilaterals, all interior angles are less than 180° and both diagonals lie entirely within the element, ensuring stable numerical behavior in simulations.[23] Concave quadrilaterals feature at least one interior angle exceeding 180°, with one diagonal partially outside the element, which may introduce numerical instabilities and is less common in practice.[23] The isoparametric formulation is a standard approach for these elements, where the same shape functions define both the physical geometry and the approximate solution, facilitating the handling of irregular shapes through a parametric coordinate system (ξ, η) ranging from -1 to 1.[24] The geometric mapping in isoparametric quadrilateral elements is expressed as:
\mathbf{x}(\xi, \eta) = \sum_{i=1}^{4} N_i(\xi, \eta) \mathbf{x}_i
where \mathbf{x}_i are the nodal coordinates, and N_i(\xi, \eta) are the bilinear shape functions given by N_1 = \frac{1}{4}(1 - \xi)(1 - \eta), N_2 = \frac{1}{4}(1 + \xi)(1 - \eta), N_3 = \frac{1}{4}(1 + \xi)(1 + \eta), and N_4 = \frac{1}{4}(1 - \xi)(1 + \eta).[21] This formulation allows for straightforward computation of the Jacobian matrix to transform integrals from the physical to the parametric domain.[22] One key advantage of quadrilateral elements is their potential for higher accuracy per node compared to alternative 2D elements, particularly in regions with smooth variations, due to the bilinear interpolation providing a more uniform stress distribution.[23] They also align naturally with Cartesian coordinate systems, simplifying implementations in structured meshes for uniform domains.[24] However, generating quadrilateral meshes for domains with curved boundaries often leads to element distortion, reducing accuracy and complicating automation, as these elements struggle to conform without introducing parasitic modes or locking effects.[25] In applications, quadrilateral elements are widely used in 2D fluid dynamics simulations, such as solving the Navier-Stokes equations on hybrid unstructured meshes combining quadrilaterals and triangles for efficient flow modeling around obstacles.[26] They are also prevalent in beam bending simulations under plane stress assumptions, where the elements capture transverse deflections and stress gradients effectively in structural analyses.
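The isoparametric mapping above can be sketched directly from the four bilinear shape functions (function names and the example rectangle are illustrative):

```python
def quad_shape_functions(xi, eta):
    """Bilinear shape functions N1..N4 on the reference square [-1,1]^2."""
    return [0.25 * (1 - xi) * (1 - eta),
            0.25 * (1 + xi) * (1 - eta),
            0.25 * (1 + xi) * (1 + eta),
            0.25 * (1 - xi) * (1 + eta)]

def map_to_physical(xi, eta, nodes):
    """nodes: [(x1, y1), ..., (x4, y4)] in counter-clockwise order."""
    N = quad_shape_functions(xi, eta)
    x = sum(Ni * xn for Ni, (xn, _) in zip(N, nodes))
    y = sum(Ni * yn for Ni, (_, yn) in zip(N, nodes))
    return x, y

nodes = [(0, 0), (2, 0), (2, 1), (0, 1)]  # a 2 x 1 rectangle
print(map_to_physical(0, 0, nodes))    # (1.0, 0.5): element centre
print(map_to_physical(-1, -1, nodes))  # (0.0, 0.0): first node
```

At (xi, eta) = (0, 0) every shape function equals 1/4, so the centre of the reference square maps to the average of the four nodes, as expected.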

Tetrahedral Elements

Tetrahedral elements, fundamental building blocks in three-dimensional finite element meshes, consist of four triangular faces, six edges, and four nodes positioned at the vertices, forming a simplex that fills space without gaps or overlaps.[27] These elements derive their faces from two-dimensional triangular elements, providing a natural extension for volumetric discretization.[28] The volume $ V $ of a tetrahedral element is given by the formula
V = \frac{1}{6} \left| \det(M) \right|,
where $ M $ is the 3×3 matrix constructed from the coordinates of three nodes relative to the fourth.[29] Tetrahedral elements offer significant advantages in flexibility, readily conforming to complex three-dimensional geometries such as irregular or organic shapes, which makes them ideal for domains where structured alternatives struggle.[30] Their automatic generation is highly feasible and efficient, often requiring minimal user intervention compared to other element types, enabling rapid meshing of intricate models.[31] However, these elements are susceptible to ill-conditioning, particularly when aspect ratios become unfavorable, leading to numerical instability in stiffness matrices.[32] Additionally, achieving sufficient accuracy demands a larger number of elements than hexahedral counterparts, escalating computational demands and time steps.[30][31] In applications, tetrahedral elements excel in three-dimensional crash simulations, where they model high-impact scenarios on deformable structures like human organs with stable energy absorption and acceptable mechanical fidelity.[31] They are also prevalent in biomedical modeling, such as analyzing plantar pressures and shear stresses in foot and footwear biomechanics, leveraging their adaptability to anatomical complexities.[30] For field interpolation, the linear T4 element employs four corner nodes to approximate linear variations within the volume, suitable for basic analyses but prone to inaccuracies in stress predictions.[33] Higher-order T10 elements, incorporating ten nodes—including midpoints along the six edges—support quadratic interpolation, enhancing accuracy for contact pressures and deformations at the cost of increased complexity.[33]
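The volume formula above can be sketched as follows, building M from the three edge vectors emanating from one node (the function name is illustrative):

```python
def tet_volume(p0, p1, p2, p3):
    """Volume = |det(M)| / 6, with M built from the edges p1-p0, p2-p0, p3-p0."""
    m = [[p[i] - p0[i] for i in range(3)] for p in (p1, p2, p3)]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return abs(det) / 6.0

# Reference tetrahedron with unit legs along the axes: V = 1/6
print(tet_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))
```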

Pyramidal Elements

Pyramidal elements are three-dimensional finite elements featuring a quadrilateral base, typically square or rectangular, connected by four triangular faces that converge to a single apex vertex. This geometry allows them to serve as transitional components in volumetric meshes, effectively linking regions with differing element types. The base lies in one plane, while the apex is offset perpendicularly or at an angle, forming a structure analogous to a geometric pyramid.[34] A key advantage of pyramidal elements is their ability to bridge quad-dominated regions, such as those using hexahedral elements, with tri-dominated areas employing tetrahedral elements within hybrid meshes. This transitional utility enhances mesh flexibility for complex geometries without introducing hanging nodes or discontinuities. However, they present challenges, including potential singularities at the apex that can cause numerical stiffness or inaccurate stress predictions, particularly in lower-order formulations. Additionally, their irregular shape necessitates special numerical integration schemes, as standard Gaussian quadrature may fail due to the element's degeneracy.[35][36][34] In practice, pyramidal elements are employed in transition zones of intricate models, such as turbine blade simulations where they connect structured prismatic layers near surfaces to unstructured tetrahedral fills in the interior, and in layered representations of thin structures like membranes or coatings. Their mapping from a reference domain often employs a degenerate isoparametric approach, where edges collapse to points at the apex to maintain compatibility with adjacent elements. The volume of a pyramidal element is computed as
V = \frac{1}{3} A_b h
where $ A_b $ is the area of the base and $ h $ is the height from the base to the apex. This formula derives from the geometric properties preserved under affine transformations in finite element formulations.[37][38]

Prismatic Elements

Prismatic elements, also known as prism or wedge elements in finite element analysis, are three-dimensional cells formed by extruding a two-dimensional triangular or quadrilateral base along a specified height, resulting in two parallel bases connected by triangular or quadrilateral lateral faces.[39] This extrusion-based structure allows for the creation of anisotropic elements, particularly useful in layered geometries.[1] Prismatic elements are commonly generated by extruding 2D surface meshes, such as those composed of triangles.[40] There are two primary types of prismatic elements: linear and higher-order. Linear prismatic elements feature straight edges and linear shape functions for interpolation within the element, providing a basic approximation suitable for simpler problems.[41] Higher-order prismatic elements, in contrast, include additional nodes along edges or faces to enable quadratic or higher-degree polynomials, improving accuracy for capturing complex field variations, such as in wave propagation or curved geometries.[42] The term "wedge" specifically denotes a triangular prism, which has triangular bases and three quadrilateral sides, often preferred for transitioning between tetrahedral and hexahedral regions in hybrid meshes.[39] The volume $ V $ of a prismatic element is calculated as the product of the base area $ A $ and the height $ h $, expressed as
V = A \times h,
where $ A $ is the area of the triangular or quadrilateral base.[43] This formula underscores the element's dependence on extrusion parameters for volume determination. Prismatic elements offer significant advantages in simulations requiring resolution of directional gradients, such as boundary layers in fluid flows, where their high aspect ratios efficiently capture steep variations normal to surfaces using fewer elements than isotropic alternatives like tetrahedra.[1] They are particularly effective in layered domains, reducing computational cost while maintaining accuracy in viscous regions.[44] However, these elements are less suitable for isotropic filling of complex 3D volumes due to their inherent anisotropy, which can lead to distortions if not aligned properly with the geometry or flow direction.[45] In applications, prismatic elements are widely employed in computational fluid dynamics (CFD) for near-wall meshing to model viscous boundary layers in Reynolds-averaged Navier-Stokes simulations.[40] They also find use in manufacturing simulations involving extrusion processes, where the layered structure mirrors physical material flow.[46]

Hexahedral Elements

Hexahedral elements, also known as brick elements, are three-dimensional finite elements characterized by six quadrilateral faces, twelve edges, and eight nodes.[47] These elements form a closed polyhedron that approximates volumes in computational domains, providing a structured basis for discretizing complex geometries in numerical simulations.[47] One key advantage of hexahedral elements is their high accuracy in approximating solutions to partial differential equations, often requiring fewer elements than alternative types to achieve comparable precision.[47] This efficiency stems from their ability to align well with tensor-product solvers, such as geometric multigrid methods, which exploit the element's regular connectivity to accelerate convergence and reduce computational overhead.[47] Consequently, they enable simulations with lower element counts while maintaining solution quality, particularly in problems benefiting from orthogonal grids.[47] Despite these benefits, generating hexahedral meshes presents significant challenges, especially for domains with irregular or non-rectilinear boundaries, where automated meshing algorithms are less robust and often demand substantial user intervention.[47] This labor-intensive process can limit their applicability in highly curved or intricate geometries, contrasting with more flexible meshing approaches.[47] Hexahedral elements find prominent applications in structural mechanics, where their accuracy supports detailed stress and deformation analyses in engineering components.[47] They are also widely used in reservoir simulation for modeling fluid flow in porous media, leveraging their efficiency in layered subsurface structures to predict hydrocarbon recovery and pressure distributions.[48] The geometric mapping for hexahedral elements typically employs a trilinear isoparametric formulation, which transforms coordinates from a reference cube in parametric space (\xi, \eta, \zeta) \in [-1,1]^3 to physical space (x, y, z).[47] Within this framework, two common interpolation types are serendipity elements, which use 20 nodes to reduce degrees of freedom by omitting certain internal points, and Lagrange elements, featuring 27 nodes for a complete tensor-product basis that enhances polynomial completeness.[47] The Jacobian matrix \mathbf{J} of this mapping, essential for computing integrals and ensuring element validity, is defined as
$$ \mathbf{J} = \frac{\partial (x,y,z)}{\partial (\xi,\eta,\zeta)} = \begin{pmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial y}{\partial \xi} & \frac{\partial z}{\partial \xi} \\ \frac{\partial x}{\partial \eta} & \frac{\partial y}{\partial \eta} & \frac{\partial z}{\partial \eta} \\ \frac{\partial x}{\partial \zeta} & \frac{\partial y}{\partial \zeta} & \frac{\partial z}{\partial \zeta} \end{pmatrix}, $$
with its determinant $\det(\mathbf{J})$ determining the local volume scaling and orientation; positive values throughout the element confirm that the mapping is invertible and the element is not inverted or tangled.[49]
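A short sketch can make this mapping concrete. The helper names below are illustrative (not from any particular finite element library); the code assembles the trilinear Jacobian of an 8-node hexahedron and checks that mapping the reference cube onto a unit cube gives the expected constant determinant of (1/2)³:

```python
from itertools import product

# Reference-cube corner signs (xi, eta, zeta in {-1, +1}) for an 8-node hex
CORNERS = [(sx, sy, sz) for sx, sy, sz in product((-1, 1), repeat=3)]

def trilinear_jacobian(nodes, xi, eta, zeta):
    """Jacobian d(x,y,z)/d(xi,eta,zeta) of the trilinear map at one point.

    nodes: 8 physical corner coordinates, ordered to match CORNERS.
    """
    J = [[0.0] * 3 for _ in range(3)]
    for (sx, sy, sz), (x, y, z) in zip(CORNERS, nodes):
        # Derivatives of N = (1/8)(1 + sx*xi)(1 + sy*eta)(1 + sz*zeta)
        dN = (0.125 * sx * (1 + sy * eta) * (1 + sz * zeta),
              0.125 * sy * (1 + sx * xi) * (1 + sz * zeta),
              0.125 * sz * (1 + sx * xi) * (1 + sy * eta))
        for r in range(3):                      # parametric direction
            for c, phys in enumerate((x, y, z)):
                J[r][c] += dN[r] * phys
    return J

def det3(J):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (J[0][0] * (J[1][1] * J[2][2] - J[1][2] * J[2][1])
          - J[0][1] * (J[1][0] * J[2][2] - J[1][2] * J[2][0])
          + J[0][2] * (J[1][0] * J[2][1] - J[1][1] * J[2][0]))

# Unit cube: each reference corner maps to the matching corner of [0,1]^3
cube = [((sx + 1) / 2, (sy + 1) / 2, (sz + 1) / 2) for sx, sy, sz in CORNERS]
print(det3(trilinear_jacobian(cube, 0.0, 0.0, 0.0)))  # 0.125
```

For a general hexahedron, $\det(\mathbf{J})$ varies over the element, so validity checks evaluate it at the quadrature points (or corners) and require every value to be positive.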

Polyhedral Elements

Polyhedral elements, also known as polyhedral cells, are three-dimensional geometric primitives used in finite volume and finite element methods for discretizing complex domains in computational simulations. These elements consist of an arbitrary number of polygonal faces, typically more than six, connected by edges and nodes, enabling the representation of non-standard polyhedra that deviate from regular shapes like hexahedra or tetrahedra.[50] This flexibility allows polyhedral elements to adapt to irregular boundaries without excessive distortion, extending the capabilities of simpler prismatic or hexahedral forms for more intricate three-dimensional meshes.[51] Generation of polyhedral meshes often relies on Voronoi diagrams (the dual of Delaunay triangulations), whose cells naturally form polyhedra that conform to scattered point distributions, providing a robust basis for unstructured grids in complex geometries.[52] Alternatively, cut-cell techniques clip or refine Voronoi-based structures to enforce boundary conformity, avoiding the need for extensive manual adjustments while preserving cell integrity.[53] These methods have gained prominence since the early 2000s, driven by advances in algorithmic efficiency for high-fidelity simulations.[54] A key advantage of polyhedral elements lies in their ability to conform closely to intricate geometries, such as those with curved surfaces or sharp features, thereby minimizing discretization errors in regions of high gradients.[1] In Voronoi- or Delaunay-based approaches, they reduce the total element count required for equivalent resolution by leveraging multiple neighboring faces (often 10 to 20 per cell) for improved gradient approximation and flux balancing.[51] However, these benefits come with challenges: the arbitrary topology complicates numerical integration over faces, necessitating specialized quadrature rules that increase setup time.[54] Additionally, solvers incur higher computational costs due to the expanded
stencil size from numerous neighbors, with convergence times reported to be 50% to 140% longer than for hexahedral meshes of similar scale.[55] Polyhedral elements find applications in simulating porous media flow, where their adaptability captures the heterogeneous void structures essential for accurate transport predictions in subsurface reservoirs.[56] They are also employed in urban airflow modeling, particularly for large-eddy simulations around buildings, enabling efficient resolution of turbulent wakes and boundary-layer interactions in unstable atmospheric conditions, a development accelerated by the growth of computational resources after the 2000s.[57] The volume $V$ of a polyhedral element is computed by applying the divergence theorem to the position vector field, yielding
$$ V = \frac{1}{3} \oint_{\partial V} \mathbf{r} \cdot \mathbf{n} \, dS, $$
where the surface integral is evaluated over all faces of the polyhedron, $\mathbf{r}$ is the position vector from a fixed origin, $\mathbf{n}$ is the outward unit normal, and $dS$ is the surface element; this yields exact volumes for arbitrary shapes without decomposition.[58]
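Applied to a polyhedron with planar faces, the surface integral reduces to a sum of signed tetrahedron volumes, one per face triangle. A minimal sketch (hypothetical function names; faces are assumed to be ordered counter-clockwise when viewed from outside, so normals point outward):

```python
def cross(u, v):
    """3D cross product."""
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def polyhedron_volume(vertices, faces):
    """Volume via V = (1/3) * surface integral of r . n dS.

    Fan-triangulating each planar face reduces the integral to signed
    tetrahedron volumes a . (b x c) / 6 with the origin as apex.
    """
    vol = 0.0
    for face in faces:
        a = vertices[face[0]]
        for i, j in zip(face[1:-1], face[2:]):   # fan triangles (a, i, j)
            vol += dot(a, cross(vertices[i], vertices[j])) / 6.0
    return vol

# Unit cube, each face listed with an outward-pointing normal
verts = [(0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), (0,1,1)]
faces = [
    [0, 3, 2, 1],  # bottom (z = 0)
    [4, 5, 6, 7],  # top    (z = 1)
    [0, 1, 5, 4],  # front  (y = 0)
    [2, 3, 7, 6],  # back   (y = 1)
    [0, 4, 7, 3],  # left   (x = 0)
    [1, 2, 6, 5],  # right  (x = 1)
]
print(round(polyhedron_volume(verts, faces), 12))  # 1.0
```

Because the formula is exact for polyhedra, no tolerance beyond floating-point round-off is needed, regardless of how many faces a cell has.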

Mesh Topologies

Structured Meshes

Structured meshes feature a regular, grid-like connectivity where nodes and elements are organized in a predictable pattern, typically indexed by coordinates such as $i$, $j$, and $k$ in a computational space.[59] This topological mapping transforms the physical domain into a logical, often rectangular, parameter space, enabling straightforward element enumeration and neighbor identification.[60] Curvilinear structured meshes extend this approach by allowing warped grids that conform to curved boundaries while preserving the underlying regularity.[61] Generation of structured meshes commonly employs algebraic techniques, such as transfinite interpolation, which blends boundary data across the domain to produce a smooth grid without solving differential equations.[62] Alternatively, partial differential equation (PDE)-based methods, like elliptic smoothing, solve systems such as Laplace or Poisson equations to distribute points evenly and minimize grid distortion.[59] These approaches often use multiblock strategies to decompose complex geometries into simpler subdomains, each amenable to structured gridding.[60] A key advantage of structured meshes lies in their simple indexing, which facilitates efficient memory usage and rapid access during simulations, often requiring as little as a third of the storage of equivalent unstructured meshes.[59] This regularity supports fast numerical solvers by allowing direct array-based operations and implicit treatments of derivatives.
The coordinate transformation underpinning these meshes is expressed as $\mathbf{x} = \mathbf{x}(\xi, \eta, \zeta)$, where physical coordinates $\mathbf{x}$ map from curvilinear computational coordinates $(\xi, \eta, \zeta)$.[62] However, structured meshes are disadvantaged by their restriction to topologically simple domains, struggling with intricate geometries that demand non-uniform connectivity or holes.[59] Boundary-fitted generation in such cases can lead to high distortion or excessive element counts to maintain orthogonality.[60] In applications, structured meshes excel in rectangular or mildly curved domains, such as uniform flow simulations over flat plates.[63] They are particularly prevalent in airfoil grid generation for computational fluid dynamics, where O- or C-type grids wrap efficiently around the profile to resolve boundary layers.[64] These meshes typically utilize quadrilateral elements in two dimensions or hexahedral elements in three dimensions to leverage their structured nature.[59]
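As an illustration of the algebraic generation step mentioned above, the following sketch (illustrative names, 2D case) implements transfinite interpolation from four boundary curves; on a straight-sided square it degenerates to a uniform Cartesian grid:

```python
def transfinite_grid(bottom, top, left, right, ni, nj):
    """Algebraic (transfinite) interpolation of interior grid points.

    bottom/top/left/right: functions of a parameter in [0, 1] returning an
    (x, y) boundary point; the curves must meet at the four corners.
    Returns grid[i][j] for i in 0..ni, j in 0..nj.
    """
    p00, p10 = bottom(0.0), bottom(1.0)
    p01, p11 = top(0.0), top(1.0)
    grid = [[None] * (nj + 1) for _ in range(ni + 1)]
    for i in range(ni + 1):
        s = i / ni
        for j in range(nj + 1):
            t = j / nj
            # Blend opposite boundary curves, then subtract the corner terms
            grid[i][j] = tuple(
                (1 - t) * bottom(s)[k] + t * top(s)[k]
                + (1 - s) * left(t)[k] + s * right(t)[k]
                - ((1 - s) * (1 - t) * p00[k] + s * (1 - t) * p10[k]
                   + (1 - s) * t * p01[k] + s * t * p11[k])
                for k in range(2))
    return grid

# Straight-sided unit square: TFI reduces to a uniform Cartesian grid
g = transfinite_grid(lambda s: (s, 0.0), lambda s: (s, 1.0),
                     lambda t: (0.0, t), lambda t: (1.0, t), 4, 4)
print(g[2][2])  # (0.5, 0.5)
```

Replacing the four lambdas with curved boundary parameterizations yields a boundary-fitted curvilinear grid with the same code.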

Unstructured Meshes

Unstructured meshes consist of nodes connected in an irregular manner without a fixed grid pattern, allowing the mesh to conform closely to complex boundaries and internal features.[https://www.sciencedirect.com/topics/computer-science/unstructured-mesh] The connectivity between nodes is typically stored using data structures such as adjacency lists, which record neighboring elements for each node to facilitate traversal and computation.[https://onscale.com/meshing-in-fea-structured-vs-unstructured-meshes/] These meshes primarily rely on simplicial elements like triangles in two dimensions or tetrahedra in three dimensions to fill the domain.[https://people.eecs.berkeley.edu/~jrs/papers/umg.pdf] Generation of unstructured meshes involves creating point distributions and defining element connectivity, often through methods like advancing front or Delaunay triangulation. The advancing front method starts from the domain boundary and progressively constructs elements inward by selecting optimal points based on local geometry and spacing criteria, ensuring smooth transitions and boundary fidelity.[https://ntrs.nasa.gov/api/citations/19950020607/downloads/19950020607.pdf] Delaunay triangulation generates a mesh by connecting points such that no point lies inside the circumcircle of any triangle (or circumsphere in 3D); in two dimensions this maximizes the minimum angle, giving strong quality guarantees, while in three dimensions additional refinement is typically needed to remove poorly shaped sliver tetrahedra.[https://ui.adsabs.harvard.edu/abs/1993JCoPh.106..125R/abstract] A key advantage of unstructured meshes is their ability to adapt to intricate geometries, such as those with sharp features or varying scales, by allowing flexible node placement without the constraints of a regular grid.[https://www.researchgate.net/publication/318456955_Mesh_Generation_in_CFD] Adaptive refinement is also straightforward, enabling local densification in regions of high gradients, such as near shocks or boundaries, to improve
solution accuracy efficiently.[https://www.researchgate.net/publication/318456955_Mesh_Generation_in_CFD] However, these meshes require more memory to store explicit connectivity information compared to structured grids, increasing overhead for large-scale simulations.[https://onscale.com/meshing-in-fea-structured-vs-unstructured-meshes/] Solvers for unstructured meshes are generally slower due to the irregular data access patterns, which complicate vectorization and parallelization efforts.[https://www.researchgate.net/publication/339285304_Unstructured_Meshing_for_CFD] Unstructured meshes find widespread applications in fields demanding high fidelity to complex shapes, including automotive crash simulations where they model deformable structures and impact dynamics with millions of elements.[https://www.cad-journal.net/files/vol_3/CAD_3%286%29_2006_741-750.pdf] In biomedical engineering, they enable detailed CFD modeling of organs like hearts or blood vessels, capturing irregular anatomies derived from medical imaging.[https://www.researchgate.net/publication/227925824_Robust_generation_of_high-quality_unstructured_meshes_on_realistic_biomedical_geometry] Their adoption surged in the post-1990s era with the CFD boom, driven by advances in generation algorithms that supported industrial-scale computations.[https://ntrs.nasa.gov/api/citations/19950020607/downloads/19950020607.pdf]
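The empty-circumcircle test at the heart of Delaunay methods reduces to the sign of a 3×3 determinant. A minimal 2D sketch (illustrative names; not robust to exactly cocircular points, for which production meshers use exact arithmetic predicates):

```python
def in_circumcircle(a, b, c, d):
    """Empty-circumcircle predicate used by Delaunay algorithms.

    Returns True if d lies strictly inside the circumcircle of triangle
    (a, b, c), which must be given in counter-clockwise order.  Uses the
    standard lifted-paraboloid determinant.
    """
    m = [(p[0] - d[0], p[1] - d[1],
          (p[0] - d[0])**2 + (p[1] - d[1])**2) for p in (a, b, c)]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))   # CCW; circumcircle center (0.5, 0.5)
print(in_circumcircle(*tri, (0.5, 0.5)))      # True  (the center is inside)
print(in_circumcircle(*tri, (2.0, 2.0)))      # False (far outside)
```

Incremental Delaunay algorithms (e.g., Bowyer-Watson) apply this predicate to every candidate triangle when a new point is inserted, deleting those whose circumcircle contains the point and retriangulating the resulting cavity.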

Hybrid Meshes

Hybrid meshes integrate structured and unstructured elements within a single computational domain, typically combining prismatic or hexahedral layers near boundaries with tetrahedral elements in the interior to optimize resolution in regions of high gradients, such as viscous boundary layers. This approach leverages the geometric flexibility of unstructured meshes for complex interior geometries while employing structured-like elements for improved accuracy and efficiency near walls. The concept was advanced in the 1990s through contributions like those of Weatherill, who highlighted the complementary strengths of structured and unstructured methods to address limitations in pure approaches.[65] Generation of hybrid meshes often employs zonal strategies, where the domain is divided into regions: structured prismatic layers are extruded from wall surfaces to capture boundary layer effects, followed by an unstructured tetrahedral core filled via methods like Delaunay triangulation to conform to the geometry. Transitional elements, most commonly pyramids, are inserted at the interfaces between the quadrilateral faces of the prismatic or hexahedral layers and the triangular faces of the tetrahedral core to ensure conforming connectivity without distortion. This process, refined since the early 2000s, allows automated tools to produce meshes with fewer total elements than fully unstructured grids of equivalent resolution in viscous regions.[65][66] Key advantages include a balance between computational efficiency and solution accuracy, as the structured boundary layers reduce the number of elements needed for high-fidelity viscous simulations while the unstructured core handles geometric complexity with minimal manual intervention. Hybrid meshes can achieve convergence with fewer cells than pure tetrahedral meshes in boundary layer-dominated flows, enhancing solver performance without sacrificing precision.
However, disadvantages arise from challenges in matching interfaces between element types, which can introduce truncation errors or require specialized solvers to handle mixed topologies, increasing implementation complexity.[1][66][65] Applications of hybrid meshes are prevalent in turbomachinery and aerodynamics, particularly for simulations involving viscous layers around airfoils, blades, and aircraft components, where they have been widely adopted since the 2000s to model flows in engines and high-lift devices. For instance, they enable efficient resolution of rotor-stator interactions in compressors and detailed boundary layer effects in transonic wings, supporting design optimizations in aerospace engineering.[65][67][68]
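The wall-normal extrusion step is commonly parameterized by a first-layer height and a geometric growth ratio. A small sketch of that bookkeeping (an illustrative recipe with hypothetical names, not any specific tool's API):

```python
def prism_layer_heights(first_height, growth, n_layers):
    """Geometric height progression for wall-normal prism layers.

    A common recipe: layer i has height first_height * growth**i, so
    resolution is finest at the wall and coarsens smoothly toward the
    tetrahedral core.
    """
    return [first_height * growth**i for i in range(n_layers)]

# Illustrative values: 1e-5 first cell, 20% growth, 20 layers
heights = prism_layer_heights(1e-5, 1.2, 20)
total = sum(heights)   # total thickness of the boundary-layer stack
print(f"first={heights[0]:.1e}, last={heights[-1]:.1e}, total={total:.2e}")
```

In practice the first-layer height is chosen from a target wall resolution (e.g., a desired y+ value) and the growth ratio is kept modest (roughly 1.1 to 1.3) so adjacent layers transition smoothly into the isotropic core elements.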

Mesh Quality Metrics

Solution Precision

Solution precision in numerical simulations refers to the accuracy with which a mesh approximates the underlying partial differential equations, directly impacting the reliability of computed results such as stresses, velocities, or temperatures. The choice of mesh type influences this precision through its ability to represent geometric features and apply discretization schemes effectively. Key factors include the order of the elements used (linear elements of polynomial degree 1 typically yield second-order accuracy, p=2, in the L² norm, while quadratic elements of degree 2 provide third-order accuracy, p=3) and the geometric fidelity, which determines how closely the mesh captures curved boundaries without excessive distortion. Higher-order elements reduce approximation errors by better representing solution gradients within each cell, but they demand more computational resources and require meshes that maintain quality to avoid ill-conditioning.[69] A primary source of error in mesh-based methods is discretization error, which scales asymptotically as $O(h^p)$, where $h$ is the characteristic cell size and $p$ is the convergence order in the relevant norm. In finite element methods, this error arises from the projection of the continuous solution onto the discrete mesh space, with coarser meshes or lower-order elements amplifying inaccuracies, particularly in regions of high curvature or steep gradients. For instance, unstructured tetrahedral meshes, while flexible for complex geometries, often introduce larger interpolation errors due to their irregular connectivity compared to structured meshes, leading to reduced precision for equivalent cell counts.
In contrast, structured hexahedral meshes generally achieve higher precision for the same resolution because their orthogonal alignment minimizes numerical diffusion and aliasing in the discretization.[70][71] To verify and estimate solution precision, Richardson extrapolation is commonly employed, leveraging solutions from progressively refined meshes to isolate the discretization error and extrapolate to the continuum limit. This technique assumes the error follows the O(hp)O(h^p) form and computes a higher-order approximation by combining results from at least two grid levels, providing an efficient way to assess mesh adequacy without exhaustive refinements. In practice, it has been shown to yield reliable error bounds in finite element contexts, particularly for elliptic problems where asymptotic assumptions hold.[72][73] Advancements in the 2020s have emphasized high-order methods to enhance precision, with spectral element techniques standing out for their exponential convergence rates on smooth solutions. These methods use high-degree polynomials (often p>5) within elements, enabling superior accuracy on coarser meshes than traditional low-order approaches, and are particularly effective in applications like turbulent flows or wave propagation where low-order meshes struggle with resolution. Spectral elements combine the geometric flexibility of finite elements with the precision of spectral methods, often on hybrid or unstructured topologies, and have been integrated into frameworks for entropy-stable discretizations to maintain robustness. This shift addresses limitations in legacy low-order simulations, offering practical gains in precision for complex 3D domains.[74][75]
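Richardson extrapolation as described above can be written in a few lines. In this sketch (illustrative names), synthetic data with a known O(h²) error term is extrapolated back to the exact value:

```python
def richardson_extrapolate(f_fine, f_coarse, r, p):
    """Richardson extrapolation to the zero-mesh-size limit.

    Assumes the discretization error behaves as C * h**p, with grid
    refinement factor r = h_coarse / h_fine and known (or observed) order p.
    """
    return f_fine + (f_fine - f_coarse) / (r**p - 1)

# Synthetic second-order data: f(h) = 3.0 + 0.5 * h**2, exact value 3.0
f = lambda h: 3.0 + 0.5 * h**2
print(round(richardson_extrapolate(f(0.05), f(0.1), r=2, p=2), 10))  # 3.0
```

Because the synthetic error is exactly of the assumed form, the extrapolation recovers the continuum value; on real simulation data the result is only an estimate, valid when both grids lie in the asymptotic convergence range.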

Convergence Rate

The convergence rate in finite element analysis quantifies how rapidly the numerical solution approaches the exact solution as the mesh is refined, characterized by the order $ p $ in the asymptotic relation $ \|u - u_h\|_{L^2} \propto h^p $, where $ u $ is the exact solution, $ u_h $ is the numerical approximation, and $ h $ denotes the characteristic mesh size. This rate is empirically determined using log-log plots of the error norm against $ h $, with the slope of the linear region yielding $ p $. For smooth solutions to elliptic partial differential equations, theoretical $ p $ values depend on the polynomial degree of the basis functions and the problem's regularity, but practical rates are influenced by mesh and element properties.[76] A common method to estimate $ p $ from two successive uniform refinements involves the formula
$$ p = \frac{\log(E_1 / E_2)}{\log(h_1 / h_2)}, $$
where $ E_1 $ and $ E_2 $ are the computed errors on meshes with sizes $ h_1 > h_2 $. This estimator assumes monotonic error reduction and is widely used to verify the attainment of expected rates in benchmarks. Mesh regularity significantly impacts $ p $; structured meshes, with their orthogonal and uniform elements, typically sustain higher rates compared to unstructured ones, where irregular connectivity can degrade approximation quality. For instance, hexahedral elements in structured grids often achieve p = 2 in the L² norm and p = 1 in the energy norm for linear approximations on elliptic problems, whereas tetrahedral elements in unstructured meshes may exhibit reduced convergence rates below the theoretical p=2 in the L² norm for linear elements due to inherent shape distortions and lower robustness in 3D. Quadratic tetrahedral elements can achieve rates closer to 3 in the L² norm, rivaling or exceeding hexahedral performance.[77][76][30] To realize optimal convergence rates, adaptive refinement methods dynamically adjust mesh density based on local error indicators, prioritizing regions of high gradient or singularity to balance global error reduction. These techniques, such as those employing a posteriori error estimators, ensure quasi-optimal rates of $ O(N^{-p/d}) $ (where $ N $ is the number of degrees of freedom and $ d $ the dimension) by avoiding uniform refinement's inefficiency. Seminal algorithms demonstrate linear convergence in the combined energy error and estimator norm across iterations.[78][79] Challenges arise with poorly constructed meshes, where non-monotonic convergence occurs: the error may stagnate or increase upon refinement due to element distortion, locking effects in linear tetrahedra, or insufficient resolution in critical areas. Such behavior underscores the need for quality controls during meshing to maintain reliable rate progression.[80]
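The observed-order estimator above translates directly into code. A minimal sketch (illustrative names) recovering p = 2 from errors that drop by a factor of four when h is halved:

```python
import math

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    """Observed convergence order p = log(E1/E2) / log(h1/h2).

    Assumes monotonic error reduction on two uniformly refined meshes
    with h_coarse > h_fine.
    """
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Errors that quarter when h halves indicate second-order convergence
print(round(observed_order(4e-3, 1e-3, 0.2, 0.1), 6))  # 2.0
```

In verification studies this is computed from at least three grids so the estimate of p can itself be checked for consistency before it is used in extrapolation or GCI formulas.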

Grid Independence

Grid independence, also known as mesh independence, refers to the verification process in computational simulations, particularly in computational fluid dynamics (CFD), where successive mesh refinements are performed until the numerical solution stabilizes and changes in key outputs fall below a predefined tolerance, ensuring that results are not influenced by discretization errors.[81] This step is essential for establishing the reliability of simulations by confirming that further refinement does not significantly alter the solution, thereby isolating the effects of physical modeling from numerical artifacts.[81] The process involves a systematic refinement of the mesh, typically starting with a coarse grid and progressively increasing resolution by a consistent factor, such as doubling the number of cells in each direction. Simulations are run on at least three successively finer grids to assess convergence, with the refinement continuing until the relative change in monitored quantities is less than a specified tolerance, often around 1% for engineering applications.[81] This iterative approach allows identification of the asymptotic convergence range, where the solution becomes independent of grid spacing.[82] Key criteria for achieving grid independence include monitoring representative outputs such as the drag coefficient or lift in aerodynamic simulations, ensuring that variations between successive grids diminish predictably. A widely adopted metric is the Grid Convergence Index (GCI), which quantifies the discretization uncertainty as a percentage. The GCI is calculated using the formula:
$$ \text{GCI}_{12} = \frac{1.25\,|\epsilon|}{r^p - 1} $$
where $\epsilon$ is the relative error between solutions on finer (1) and coarser (2) grids, $r$ is the grid refinement factor (typically 2), and $p$ is the observed order of convergence, estimated from multiple grids.[82] Grid independence is generally confirmed when the GCI falls below an acceptable threshold, such as 1-5% depending on the application's precision requirements.[81] Best practices emphasize beginning with a coarse mesh to minimize computational cost, followed by uniform refinement across the domain to maintain consistency in grid topology and spacing. This method applies equally to structured and unstructured meshes, though care must be taken to preserve boundary layer resolution in finer grids.[81] In engineering standards, such as those from the AIAA updated in the 2010s, the factor of safety of 1.25 in the GCI formula is recommended for three or more grids to account for potential non-monotonic convergence, providing a conservative estimate suitable for validation in aerospace applications.[83] This builds on prior observations of convergence rates to ensure practical endpoint verification.
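A small sketch of the GCI computation (illustrative function name and input values), using the factor of safety of 1.25 noted above:

```python
def grid_convergence_index(f_fine, f_coarse, r, p, fs=1.25):
    """GCI on the fine grid, as a fraction (multiply by 100 for percent).

    fs = 1.25 is the factor of safety recommended when three or more
    grids support the observed order p; r is the refinement factor.
    """
    eps = (f_coarse - f_fine) / f_fine   # relative difference between grids
    return fs * abs(eps) / (r**p - 1)

# Illustrative drag coefficients on coarse and fine grids
gci = grid_convergence_index(f_fine=0.02800, f_coarse=0.02856, r=2, p=2)
print(f"GCI = {100 * gci:.2f}%")  # GCI = 0.83%
```

A GCI well under 1% on the finest grid, together with a consistent observed order, is typically taken as evidence that the solution lies in the asymptotic range and is effectively grid independent.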

Selection Criteria for Meshes

Skewness Evaluation

Skewness serves as a key metric in mesh evaluation, quantifying the deviation of an element's geometry from ideal equilateral or equiangular shapes, which is essential for ensuring numerical stability and accuracy in simulations such as computational fluid dynamics (CFD). This distortion arises when elements are stretched or twisted away from their optimal configurations, potentially leading to errors in gradient approximations and flux calculations.[84][85] Two primary types of skewness measures are employed: volume-based and angle-based. Volume-based skewness assesses the disparity between the actual volume of an element and that of an ideal equilateral element (such as a tetrahedron) constructed with equivalent edge lengths, often expressed as the maximum volume ratio to highlight volumetric distortion.[84] Angle-based skewness, in contrast, focuses on angular deviations from equiangular ideals, such as 60° for triangles, providing a direct measure of how angles stray from uniformity.[86] A specific formulation for equiangular skew in triangular elements is given by
$$ \text{skew} = \frac{\alpha - 60^\circ}{120^\circ}, $$
where $\alpha$ represents the largest interior angle, normalizing the deviation relative to the possible range up to a degenerate 180° configuration; this approach is generalized to 3D elements by adapting the ideal angles (e.g., approximately 70.53° for tetrahedra) and extending the normalization to account for multiple faces and vertices.[85] High values of skewness, particularly exceeding 0.9, can induce solver instability by amplifying discretization errors and hindering convergence in finite volume or finite element methods.[84] Skewness is typically computed and normalized to a range of 0 (ideal) to 1 (highly distorted) using specialized meshing tools. For instance, ANSYS Gambit and ICEM CFD automate these calculations, generating histograms and statistics to identify problematic elements across the mesh.[87] While skewness primarily addresses angular and volumetric distortion, it complements aspect ratio analysis by emphasizing shape irregularity over mere elongation.[85]
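For a triangle given by vertex coordinates, the formulation above can be evaluated directly. A minimal sketch (illustrative names, 2D, using the law of cosines for the angles):

```python
import math

def triangle_angles(a, b, c):
    """Interior angles in degrees, via the law of cosines."""
    la, lb, lc = math.dist(b, c), math.dist(a, c), math.dist(a, b)  # opposite sides
    ang = lambda opp, s1, s2: math.degrees(
        math.acos((s1**2 + s2**2 - opp**2) / (2 * s1 * s2)))
    return ang(la, lb, lc), ang(lb, la, lc), ang(lc, la, lb)

def equiangular_skew(a, b, c):
    """Skewness per the text's formulation: (largest angle - 60) / 120.

    0 for an equilateral triangle, approaching 1 as the element degenerates.
    """
    return (max(triangle_angles(a, b, c)) - 60.0) / 120.0

right_tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))   # largest angle: 90 degrees
print(round(equiangular_skew(*right_tri), 6))       # 0.25
```

Common variants also penalize angles that fall below the ideal (e.g., adding a (60° − θmin)/60° term), which meshing tools combine into a single 0-to-1 metric.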

Aspect Ratio Analysis

The aspect ratio of a mesh element quantifies its elongation or stretching, defined as the ratio of the longest edge length to the shortest edge length in two-dimensional elements. In three dimensions, it is commonly computed as the maximum aspect ratio among the element's faces or as the ratio of the longest edge to the shortest height between opposite faces. This metric is scale-invariant and helps assess how closely an element approximates an ideal shape, such as a square in 2D or a cube/hexahedron in 3D.[86][88] The formula for aspect ratio is given by
$$ \text{AR} = \frac{\max(l_i)}{\min(l_i)}, $$
where $ l_i $ are the relevant edge lengths or dimensions of the element. An ideal aspect ratio approaches 1, indicating isotropic elements suitable for uniform flow or stress fields, which minimizes numerical errors in finite element or finite volume simulations. However, higher values are tolerable and often necessary in specific regions, such as boundary layers where prismatic or layered elements may exhibit very high ratios, often exceeding 100:1, to resolve thin shear layers without excessive computational cost.[89][90] High aspect ratios degrade solution accuracy, particularly in capturing steep gradients, by amplifying interpolation errors within the element and potentially leading to ill-conditioned matrices in solvers. In explicit time integration methods, they further constrain the time step size to satisfy stability criteria like the Courant-Friedrichs-Lewy (CFL) condition, increasing computational expense. Guidelines recommend keeping aspect ratios below 5 for interior elements to maintain robustness, while allowing higher ratios in boundary layers provided orthogonality remains high. In contemporary mesh quality frameworks, aspect ratio is frequently evaluated alongside orthogonality—measuring angle deviations from 90 degrees—to account for combined distortion effects, as seen in tools like Gmsh.[89][91][92] Aspect ratio analysis complements skewness evaluation by emphasizing linear stretching over angular misalignment.[86]
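The metric is trivial to compute once the element's edge lengths are known. A minimal sketch (illustrative names):

```python
def aspect_ratio(edge_lengths):
    """AR = longest edge / shortest edge; 1 is ideal (isotropic element)."""
    return max(edge_lengths) / min(edge_lengths)

# A stretched boundary-layer quad: long edges 2.0, short edges 0.5
print(aspect_ratio([2.0, 0.5, 2.0, 0.5]))  # 4.0
```

Quality tools typically report the worst aspect ratio over all elements, flagging interior cells above roughly 5 while tolerating far higher values inside well-aligned boundary layers.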

Smoothness Assessment

Smoothness assessment evaluates the variation in cell size and orientation between adjacent elements, ensuring gradual transitions in mesh density and alignment to maintain overall grid uniformity. This inter-element property is crucial for high-quality meshes across structured, unstructured, and hybrid topologies, as abrupt changes can introduce inconsistencies in numerical approximations.[93][94] Common metrics for smoothness include the expansion ratio, defined as the ratio of volumes (or edge lengths) between adjacent cells, with an ideal value below 1.2 to restrict size changes to less than 20% and promote even gradation.[93][84] Another key measure is the warping angle, which quantifies non-planarity in quadrilateral faces by computing the maximum deviation angle $\phi = \max_i \sin^{-1} \left( \frac{\| \mathbf{v}_i - \mathbf{v}_i' \|}{l} \right)$, where $\mathbf{v}_i$ and $\mathbf{v}_i'$ are the actual and projected node positions and $l$ is the edge length; values approaching 0° indicate optimal flatness.[95][94] The importance of smoothness lies in its role in preventing numerical diffusion and solution artifacts, such as spurious oscillations or reduced convergence in finite element or finite volume methods, by minimizing discretization errors from inconsistent element transitions.[93][94] Assessment typically combines visual inspection for qualitative uniformity with automated software checks, such as those in ANSYS Fluent or SALOME, which flag regions exceeding metric thresholds based on volume gradients or angle deviations.[84][96] For curvilinear meshes, Jacobian-based metrics enhance evaluation by analyzing the determinant of the transformation matrix $J = \det(\partial \mathbf{x}/\partial \xi)$ to verify positive volumes and shape preservation, including advancements around 2015 emphasizing high-order validity and boundary smoothness in adaptive generation.[97][98]
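The expansion-ratio check described above amounts to comparing each cell's volume against its face neighbors. A minimal sketch (illustrative names and data layout):

```python
def expansion_ratios(cell_volumes, neighbors):
    """Largest adjacent-cell volume ratio per cell (always >= 1).

    neighbors maps a cell index to the indices of its face neighbors.
    Values above ~1.2 flag abrupt size jumps in the mesh gradation.
    """
    worst = {}
    for c, nbrs in neighbors.items():
        ratios = [max(cell_volumes[c], cell_volumes[n]) /
                  min(cell_volumes[c], cell_volumes[n]) for n in nbrs]
        worst[c] = max(ratios) if ratios else 1.0
    return worst

vols = [1.0, 1.1, 1.9]                # cell 1 -> cell 2 jumps by ~73%
nbrs = {0: [1], 1: [0, 2], 2: [1]}
w = expansion_ratios(vols, nbrs)
print({c: round(r, 2) for c, r in w.items()})  # {0: 1.1, 1: 1.73, 2: 1.73}
```

A mesh tool would report the maximum over all cells and highlight the offending interfaces, since a single abrupt jump can degrade flux accuracy locally even when the rest of the grid is smooth.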

Mesh Generation and Adaptation

Generation Techniques

Mesh generation techniques encompass a variety of algorithms designed to discretize geometric domains into computational meshes suitable for numerical simulations in fields such as computational fluid dynamics and finite element analysis. These methods are broadly categorized into direct and indirect approaches. Direct methods involve the incremental insertion of nodes directly into the domain to form elements, often starting from boundary representations and progressively filling the interior while maintaining element quality. Indirect methods, in contrast, generate meshes by refining or transforming coarser initial grids, such as through octree subdivision or template-based overlay, which can be more efficient for complex geometries but may require additional smoothing to resolve irregularities.[99] For unstructured meshes, prevalent in both two-dimensional (2D) and three-dimensional (3D) applications, Delaunay triangulation serves as a foundational technique due to its ability to produce well-shaped elements, maximizing the minimum angle in 2D triangulations. In 2D, Delaunay methods triangulate polygonal domains by ensuring no point lies inside the circumcircle of any triangle, facilitating robust handling of arbitrary geometries. Extending to 3D, tetrahedralization via Delaunay employs similar principles to create volume-filling tetrahedra, often combined with Delaunay refinement schemes descended from Ruppert's two-dimensional algorithm for size control and quality improvement. Structured meshes, suitable for regular domains, commonly employ multiblock techniques, where the domain is decomposed into topologically rectangular blocks, each meshed with quadrilateral or hexahedral elements using transfinite interpolation or algebraic mapping to ensure grid orthogonality and smoothness.[100][101][102] A range of software tools implements these techniques, distinguishing between commercial and open-source options.
Commercial platforms like ANSYS Meshing and STAR-CCM+ provide integrated environments for automated generation, supporting hybrid unstructured-structured workflows with advanced features such as parallel processing and CAD interoperability. Open-source alternatives, including Gmsh and Salome, offer flexible, scriptable interfaces for custom meshing; Gmsh excels in parametric geometry handling and built-in Delaunay algorithms, while Salome integrates CAD modeling with tetrahedral and boundary layer meshing for multiphysics applications. These tools often leverage libraries like TetGen for robust 3D tetrahedralization.[103][104] The typical workflow for mesh generation begins with geometry import, where CAD models in formats like STEP or IGES are loaded and repaired to ensure a valid boundary representation. Subsequent steps include boundary layer insertion, particularly for viscous flow simulations, where prismatic or hexahedral layers are extruded normal to walls to capture near-surface gradients with high resolution. This is followed by core meshing via tetrahedralization for unstructured regions, employing advancing front or Delaunay methods to fill the volume while conforming to boundaries and size specifications. Representative elements in these processes include tetrahedra for flexible 3D filling.[105][106] Generating meshes presents several challenges, notably in handling hanging nodes during adaptive processes, where mismatched node alignments at element interfaces can degrade solution accuracy and require specialized transition elements for conformity. 
Ensuring watertight domains is another critical issue, as gaps or overlaps in boundary surfaces can lead to invalid elements or leakage in simulations, often necessitating automated healing algorithms prior to meshing.[107][108] Emerging in the 2020s, machine learning-assisted generation has gained traction for topology optimization, where neural networks predict optimal node placements or element distributions to accelerate iterative design processes. For instance, physics-informed models train on simulation data to generate initial meshes that reduce computational cost in structural optimization tasks.[109][110]

Adaptation and Improvement Methods

Mesh adaptation techniques refine existing meshes by adjusting element sizes, orders, or node positions to better capture solution features, enhancing accuracy without uniform global refinement. These methods are particularly valuable in finite element simulations where initial meshes may not adequately resolve local phenomena such as shocks or boundary layers. h-refinement involves subdividing elements to reduce size in regions of high gradients, improving resolution while increasing the number of degrees of freedom. This approach, foundational to adaptive finite element methods, allows for exponential convergence in error reduction for elliptic problems when combined with error estimators. p-refinement elevates the polynomial order of basis functions within elements, effective for smooth solutions as it achieves higher-order accuracy without altering mesh topology. Seminal work on the h-p version demonstrated that adaptive combinations of h- and p-refinement can attain near-optimal convergence rates, with error decreasing as $O(N^{-k})$, where $N$ is the number of unknowns and $k$ approaches the smoothness of the solution. r-refinement repositions nodes without changing element connectivity, suitable for moving interfaces or evolving geometries, preserving mesh quality while adapting to dynamic solution features. Improvement methods focus on repositioning nodes and modifying connectivity to mitigate poor element shapes post-adaptation. Laplacian smoothing iteratively relocates interior nodes to the average position of their neighbors, reducing skewness and aspect ratios in unstructured meshes while preserving volume. This technique, when improved with angle-based constraints, prevents mesh inversion and maintains boundary integrity, yielding up to 20-30% better element quality metrics in tetrahedral meshes.
Edge swapping reconfigures local connectivity by flipping diagonals in quadrilaterals or tetrahedra, optimizing dihedral angles and minimizing the number of obtuse elements. Applied iteratively, it enhances overall mesh orthogonality, with studies showing reductions in maximum skewness by factors of 2-5 in 3D unstructured grids.

Error-driven adaptation uses solution-derived indicators to guide refinement, ensuring targeted improvements. Sensor-based methods, such as those employing the Hessian matrix of the solution, estimate interpolation errors by aligning mesh anisotropy with the second derivatives, effectively capturing gradients in anisotropic flows such as aerodynamics. These Hessian-based estimators enable efficient adaptation, reducing degrees of freedom by 50-70% compared to isotropic refinement while achieving comparable accuracy. Goal-oriented adaptation, based on dual-weighted residuals, prioritizes refinements that minimize errors in specific functionals, such as lift coefficients in CFD, by solving adjoint problems to weight local errors. This approach has demonstrated convergence improvements of one order in goal quantities for nonlinear PDEs, making it well suited to optimization-driven simulations.

Advanced algorithms integrate local remeshing and topology optimization for complex adaptations. Local remeshing replaces subsets of elements with higher-quality ones based on error thresholds, avoiding full regeneration and enabling efficient handling of large-scale deformations. When coupled with topology optimization, adaptive meshes refine regions with material density gradients, accelerating convergence in structural design problems by dynamically coarsening void areas. These integrations have reduced computational costs by up to 80% in 3D topology optimization while maintaining solution fidelity.
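A Hessian-based size field of the kind described above can be sketched as follows. For linear interpolation, the error on an edge of length h scales roughly with h² times the largest absolute eigenvalue of the solution's Hessian, so equidistributing a target error over the mesh yields a local target edge length. This is a minimal isotropic sketch under those assumptions; the error tolerance and clamping bounds are illustrative parameters, and a full anisotropic estimator would use the whole eigendecomposition as a metric tensor rather than only the largest eigenvalue.

```python
import numpy as np

def hessian_size_field(hessians, err_target=1e-2, h_min=1e-3, h_max=1.0):
    """Target edge lengths from per-node Hessians of the solution.

    Linear interpolation error on an edge of length h scales like
    h**2 * |lambda_max|, so equidistributing err_target gives
    h = sqrt(err_target / |lambda_max|), clamped to [h_min, h_max].
    """
    sizes = np.empty(len(hessians))
    for i, H in enumerate(hessians):
        lam = np.max(np.abs(np.linalg.eigvalsh(H)))  # worst curvature direction
        sizes[i] = np.clip(np.sqrt(err_target / max(lam, 1e-12)), h_min, h_max)
    return sizes

# For u(x, y) = x**2 + y**2 the Hessian is 2*I everywhere, so every node
# receives the same isotropic target size, sqrt(err_target / 2).
H = np.array([[2.0, 0.0], [0.0, 2.0]])
sizes = hessian_size_field([H, H])
```

The resulting size field would then drive h-refinement or local remeshing: elements whose edges exceed the local target are split, and regions where the solution is flat (small Hessian) are coarsened toward h_max.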
The primary benefit of these methods is reaching grid independence more efficiently: adaptive refinements converge to asymptotic accuracy with fewer elements than uniform grids, often halving simulation times for turbulent flows. Advances since 2020 also enable real-time adaptation in multiphysics simulations, such as anisotropic remeshing for vascular flows, supporting interactive biomechanical modeling with sub-second updates.

References
