Minkowski–Bouligand dimension

from Wikipedia
Estimating the box-counting dimension of the coast of Great Britain

In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a bounded set in a Euclidean space, or more generally in a metric space. It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.

To calculate this dimension for a fractal , imagine this fractal lying on an evenly spaced grid and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid finer by applying a box-counting algorithm.

Suppose that N(ε) is the number of boxes of side length ε required to cover the set. Then the box-counting dimension is defined as

\dim_{\mathrm{box}}(S) := \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)}.

Roughly speaking, this means that the dimension is the exponent d such that N(\varepsilon) \approx C\varepsilon^{-d}, which is what one would expect in the trivial case where S is a smooth space (a manifold) of integer dimension d.
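To make the definition concrete, here is a minimal Python sketch (not part of the original article; the helper name `box_count` and the sampling density are choices of this example). It counts occupied grid boxes for a densely sampled straight segment in the plane; for a smooth curve the ratio log N(ε)/log(1/ε) should approach 1 as ε shrinks:

```python
import math

def box_count(points, eps):
    """Count the grid boxes of side length eps that contain at least one point."""
    return len({(math.floor(x / eps), math.floor(y / eps)) for x, y in points})

# A densely sampled straight segment in the plane: a smooth 1-dimensional set.
points = [(i / 10000, 0.5) for i in range(10001)]

# N(eps) grows roughly like (1/eps)^1, so the estimate tends to 1.
for k in range(2, 7):
    eps = 2.0 ** -k
    n = box_count(points, eps)
    print(f"eps = 2^-{k}: N = {n}, log N / log(1/eps) = {math.log(n) / math.log(1 / eps):.3f}")
```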

If the above limit does not exist, one may still take the limit superior and limit inferior, which respectively define the upper box dimension and lower box dimension. The upper box dimension is sometimes called the entropy dimension, Kolmogorov dimension, Kolmogorov capacity, limit capacity or upper Minkowski dimension, while the lower box dimension is also called the lower Minkowski dimension.

The upper and lower box dimensions are strongly related to the more popular Hausdorff dimension. Only in very special applications is it important to distinguish between the three (see below). Yet another measure of fractal dimension is the correlation dimension.

Alternative definitions

Examples of ball packing, ball covering, and box covering

It is possible to define the box dimensions using balls, with either the covering number or the packing number. The covering number N_{\mathrm{cov}}(\varepsilon) is the minimal number of open balls of radius ε required to cover the fractal, or in other words, such that their union contains the fractal. We can also consider the intrinsic covering number N'_{\mathrm{cov}}(\varepsilon), which is defined the same way but with the additional requirement that the centers of the open balls lie in the set S. The packing number N_{\mathrm{pack}}(\varepsilon) is the maximal number of disjoint open balls of radius ε one can situate such that their centers would be in the fractal. While N_{\mathrm{cov}}, N'_{\mathrm{cov}}, and N_{\mathrm{pack}} are not exactly identical, they are closely related to each other and give rise to identical definitions of the upper and lower box dimensions. This is easy to show once the following inequalities are proven:

N_{\mathrm{cov}}(2\varepsilon) \le N'_{\mathrm{cov}}(2\varepsilon) \le N_{\mathrm{pack}}(\varepsilon) \le N'_{\mathrm{cov}}(\varepsilon) \le N_{\mathrm{cov}}(\varepsilon/2).
These, in turn, follow either by definition or with little effort from the triangle inequality.

The advantage of using balls rather than squares is that this definition generalizes to any metric space. In other words, the box definition is extrinsic – one assumes the fractal space S is contained in a Euclidean space, and defines boxes according to the external geometry of the containing space. However, the dimension of S should be intrinsic, independent of the environment into which S is placed, and the ball definition can be formulated intrinsically. One defines an internal ball as all points of S within a certain distance of a chosen center, and one counts such balls to get the dimension. (More precisely, the N_{\mathrm{cov}} covering definition is extrinsic, but the other two are intrinsic.)

The advantage of using boxes is that in many cases N(ε) may be easily calculated explicitly, and that for boxes the covering and packing numbers (defined in an equivalent way) are equal.

The logarithm of the packing and covering numbers are sometimes referred to as entropy numbers and are somewhat analogous to the concepts of thermodynamic entropy and information-theoretic entropy, in that they measure the amount of "disorder" in the metric space or fractal at scale ε and also measure how many bits or digits one would need to specify a point of the space to accuracy ε.

Another equivalent (extrinsic) definition for the box-counting dimension is given by the formula

\dim_{\mathrm{box}}(S) = n - \lim_{r \to 0} \frac{\log \mathrm{vol}(S_r)}{\log r},

where for each r > 0, the set S_r is defined to be the r-neighborhood of S, i.e. the set of all points in \mathbb{R}^n that are at distance less than r from S (or equivalently, S_r is the union of all the open balls of radius r which have a center that is a member of S).
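As a sanity check, the formula can be evaluated for a straight segment S of length L in the plane (n = 2); this worked example is a standard illustration, not taken from the article:

```latex
% The r-neighborhood of a segment of length L in R^2 is
% a 2r x L rectangle plus two half-disks of radius r:
\operatorname{vol}(S_r) = 2rL + \pi r^2
\quad\Longrightarrow\quad
\dim_{\mathrm{box}}(S)
  = 2 - \lim_{r \to 0} \frac{\log\left(2rL + \pi r^2\right)}{\log r}
  = 2 - 1 = 1,
```

since \log(2rL + \pi r^2) = \log r + \log(2L + \pi r), whose ratio with \log r tends to 1 as r \to 0, recovering the expected dimension of a smooth curve.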

Properties


The upper box dimension is finitely stable, i.e. if {A1, ..., An} is a finite collection of sets, then

\overline{\dim}_{\mathrm{box}}(A_1 \cup \dots \cup A_n) = \max\{\overline{\dim}_{\mathrm{box}}(A_1), \dots, \overline{\dim}_{\mathrm{box}}(A_n)\}.

However, it is not countably stable, i.e. this equality does not hold for an infinite sequence of sets. For example, the box dimension of a single point is 0, but the box dimension of the set of rational numbers in the interval [0, 1] is 1. The Hausdorff dimension, by comparison, is countably stable. The lower box dimension, on the other hand, is not even finitely stable.

An interesting property of the upper box dimension not shared with either the lower box dimension or the Hausdorff dimension is the connection to set addition. If A and B are two sets in a Euclidean space, then A + B is formed by taking all the pairs of points a, b where a is from A and b is from B and adding them: A + B = \{a + b : a \in A,\ b \in B\}. One has

\overline{\dim}_{\mathrm{box}}(A + B) \le \overline{\dim}_{\mathrm{box}}(A) + \overline{\dim}_{\mathrm{box}}(B).

Relations to the Hausdorff dimension


The box-counting dimension is one of a number of definitions for dimension that can be applied to fractals. For many well-behaved fractals all these dimensions are equal; in particular, these dimensions coincide whenever the fractal satisfies the open set condition (OSC).[1] For example, the Hausdorff dimension, lower box dimension, and upper box dimension of the Cantor set are all equal to log(2)/log(3). However, the definitions are not equivalent.

The box dimensions and the Hausdorff dimension are related by the inequality

\dim_{\mathrm{H}} S \le \underline{\dim}_{\mathrm{box}} S \le \overline{\dim}_{\mathrm{box}} S.

In general, both inequalities may be strict. The upper box dimension may be bigger than the lower box dimension if the fractal has different behaviour at different scales. For example, examine the set of numbers in the interval [0, 1] satisfying the condition

for any n, all the digits between the 2^{2n}-th digit and the (2^{2n+1} − 1)-th digit are zero.

The digits in the "odd place-intervals", i.e. between digits 2^{2n+1} and 2^{2n+2} − 1, are not restricted and may take any value. This fractal has upper box dimension 2/3 and lower box dimension 1/3, a fact which may be easily verified by calculating N(ε) along the sequence ε = 2^{-2^n} and noting that the values behave differently for n even and odd.

Another example: the set of rational numbers \mathbb{Q} \cap [0, 1], a countable set with \dim_{\mathrm{H}} = 0, has \dim_{\mathrm{box}} = 1 because its closure, [0, 1], has dimension 1. In fact, the box dimension of a set always equals that of its closure:

\dim_{\mathrm{box}}(S) = \dim_{\mathrm{box}}(\bar{S}).

These examples show that adding a countable set can change box dimension, demonstrating a kind of instability of this dimension.

from Grokipedia
The Minkowski–Bouligand dimension, also known as the box-counting dimension, is a fractal dimension that quantifies the complexity and scaling properties of a set in a Euclidean space or, more generally, in a metric space. It measures how the "roughness" or detail of the set changes with the scale of observation, typically by examining the minimal number of small cubes (or boxes) of side length ε required to cover the set as ε approaches zero. Formally, the upper Minkowski–Bouligand dimension of a set F \subset \mathbb{R}^n is defined as

\overline{\dim}_{MB} F = \limsup_{\varepsilon \to 0} \frac{\log N(F, \varepsilon)}{-\log \varepsilon},

where N(F, ε) denotes this minimal covering number, while the lower dimension uses \liminf instead; the dimension exists if the upper and lower values coincide. This approach provides an intuitive, computationally accessible way to estimate dimensions for irregular or self-similar structures, such as coastlines or porous materials, where traditional topological dimensions fail.

The origins of the Minkowski–Bouligand dimension trace back to early 20th-century mathematics. Hermann Minkowski introduced related concepts of content and sausage-like neighborhoods around sets in his 1910 work Geometrie der Zahlen, laying the groundwork for measuring non-integer-dimensional "volumes." Georges Bouligand rigorously formalized the dimension in 1928 through his study of "improper sets" and their dimensional numbers, adapting Minkowski's ideas to define a capacity-like measure that scales with non-integer exponents. Unlike the Hausdorff dimension, which relies on infima over countable covers with diameters raised to a power and is more theoretically refined but harder to compute, the Minkowski–Bouligand dimension uses uniform grid covers and always satisfies \dim_H F \leq \underline{\dim}_{MB} F, with equality holding for many strictly self-similar fractals but strict inequality possible for others, such as certain pathological sets.

Key properties include its stability under bi-Lipschitz maps and its utility in bounding the "thickness" of sets, as captured by the Minkowski content, a positive, finite measure at the dimension that indicates measurability in the Minkowski sense. The dimension is particularly valuable in applied fields like signal processing, where it analyzes waveform irregularities, and in image analysis or materials science for quantifying boundary complexity, often via numerical box-counting algorithms on digitized data. Despite its advantages, it can overestimate dimensions for sets with varying local densities, prompting refinements like the Assouad dimension for more controlled estimates.

Introduction

Definition

The Minkowski–Bouligand dimension, also known as the box-counting dimension, serves as a measure of the fractal complexity of a set by quantifying the rate at which the number of small boxes required to cover the set increases as the box size decreases. Intuitively, this dimension captures how the "roughness" or intricacy of the set manifests across different scales, with higher values indicating greater space-filling capacity in a non-integer sense. For a bounded set in Euclidean space, the process involves overlaying a grid of shrinking squares in two dimensions or cubes in higher dimensions and counting only those that intersect the set, revealing the scaling behavior essential to fractal geometry. This box-counting approach estimates the dimension as the exponent in the power-law relationship where the count of occupied boxes grows inversely with the box size, providing an accessible way to assess the set's dimensional content without relying on more intricate measures. Often referred to as a capacity dimension, it emphasizes the set's ability to occupy space at all observable scales, originating from efforts to extend classical notions of content to sets with non-integer dimensions. In practice, as the grid resolution increases, the number of boxes needed reflects the set's inherent irregularity, making this a fundamental tool for characterizing fractal structures.

Historical background

The Minkowski–Bouligand dimension traces its origins to the early 20th century, when Hermann Minkowski developed foundational ideas in the geometry of numbers. In 1910, Minkowski introduced related concepts of content and sausage-like neighborhoods around sets in his work Geometrie der Zahlen, laying the groundwork for measuring non-integer-dimensional "volumes." This concept was significantly generalized by Georges Bouligand in 1928, who extended it beyond convex bodies to arbitrary sets in Euclidean space. In his paper "Ensembles impropres et nombre dimensionnel," Bouligand defined a rigorous notion of dimension based on covering densities akin to capacity, providing a framework for non-integer dimensions applicable to irregular sets. A key milestone occurred in 1961 with Lewis Fry Richardson's empirical investigations into coastline lengths, where he plotted how measured lengths varied with scale, revealing power-law behavior that implicitly highlighted the dimension's role in quantifying irregularity for natural boundaries. The dimension experienced a major revival in the post-1970s era through the emergence of fractal geometry, particularly via Benoit Mandelbrot's seminal works. Mandelbrot reinterpreted and popularized the covering-based approach as the "box dimension" in his 1975 book Les objets fractals: forme, hasard et dimension, integrating Richardson's observations and establishing its centrality in studying self-similar structures across science.

Formal aspects

Box-counting procedure

The box-counting procedure provides a practical algorithmic approach to estimate the Minkowski–Bouligand dimension by systematically covering a given set with grids of boxes at progressively finer scales and recording the minimal number required for coverage. This method is particularly suited for computational implementation, as it translates the abstract notion of dimension into countable operations on discretized data. It assumes the set is compact and can be approximated numerically, making it applicable to images, point clouds, or volumetric data in Euclidean space. To begin, embed the set in \mathbb{R}^n, where n is the ambient dimension, ensuring all points lie within a defined region for gridding. Select a sequence of decreasing scales \epsilon_k > 0, starting from a value comparable to the set's diameter and approaching zero, typically spanning several orders of magnitude for robust estimation. For each \epsilon_k, overlay an n-dimensional grid of hypercubes (boxes) with side length \epsilon_k onto the region containing the set. Count N(\epsilon_k) as the smallest number of such boxes that intersect or contain at least one point of the set, effectively covering it without overlap beyond necessity. Repeat this for all chosen \epsilon_k, often automating the counting via scanning algorithms that check pixel or voxel occupancy. For bounded sets, first normalize the coordinates to fit within a unit cube [0,1]^n by scaling and translating, which standardizes the grid application and avoids bias from arbitrary bounding boxes. Grid alignment typically employs axis-aligned boxes for simplicity and computational efficiency, though slight rotations or offsets of the grid can be tested to minimize N(\epsilon_k) and account for directional biases, as different orientations may yield varying counts depending on the set's geometry.

In practice, choose \epsilon_k using dyadic sequences (e.g., \epsilon_k = 2^{-k}) or logarithmic spacing to distribute scales evenly and capture multi-scale behavior effectively. When dealing with finite approximations, such as discrete point sets or rasterized images, set the smallest \epsilon_k above the minimal inter-point distance or pixel resolution to prevent overcounting noise and ensure stable results. Conceptually, the procedure illustrates scaling behavior: as \epsilon shrinks, N(\epsilon) increases, reflecting how finer resolutions reveal more intricate details of the set's structure in a manner that follows a power-law pattern, highlighting its effective dimensionality across scales. This visual progression, from coarse covers with few boxes to dense fine-grained tilings, underscores the method's utility in revealing hidden complexity in bounded geometric objects.
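The steps above (normalize to the unit cube, sweep dyadic scales, count occupied boxes) can be sketched in Python. This is an illustrative implementation under simplifying assumptions: 2-D points, axis-aligned grids, and no grid-offset optimization; the function name `box_counts` is invented here:

```python
import math

def box_counts(points, ks=range(1, 8)):
    """Count occupied boxes at dyadic scales eps_k = 2**-k after
    normalizing a 2-D point set to the unit square [0, 1]^2.
    (The same idea extends to R^n with n-tuples of cell indices.)"""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    sx = (max(xs) - min(xs)) or 1.0
    sy = (max(ys) - min(ys)) or 1.0
    norm = [((x - min(xs)) / sx, (y - min(ys)) / sy) for x, y in points]
    result = []
    for k in ks:
        eps = 2.0 ** -k
        m = 2 ** k  # grid cells per axis
        # Clamp so a coordinate of exactly 1.0 falls in the last cell
        # rather than opening a spurious extra box.
        occupied = {(min(int(x / eps), m - 1), min(int(y / eps), m - 1))
                    for x, y in norm}
        result.append((eps, len(occupied)))
    return result

# Diagonal of the unit square: expect N(eps) close to 1/eps.
pts = [(i / 5000, i / 5000) for i in range(5001)]
for eps, n in box_counts(pts):
    print(eps, n)
```

Fitting a line to (log 1/ε, log N(ε)) from the returned pairs then yields the dimension estimate described in the text.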

Mathematical formulation

The Minkowski–Bouligand dimension of a bounded set A \subset \mathbb{R}^n is formally defined as

d_B(A) = \lim_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log (1/\epsilon)},

where N(\epsilon) denotes the smallest number of sets of diameter at most \epsilon required to cover A. This limit quantifies the scaling behavior of the minimal covering number as the scale \epsilon approaches zero. The limit exists if the upper and lower variants coincide; otherwise, the upper Minkowski–Bouligand dimension is given by

d_B^*(A) = \limsup_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log (1/\epsilon)},

and the lower Minkowski–Bouligand dimension by

d_{B*}(A) = \liminf_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log (1/\epsilon)}.

Notational variants commonly replace \epsilon with \delta, or specify N(\delta) as the infimum over all covers by \delta-diameter sets, or equivalently as the number of \delta-sided grid boxes intersecting A in a fixed mesh. This formulation captures the asymptotic covering density because, for a set of dimension d, N(\epsilon) scales like \epsilon^{-d} at small \epsilon, so the logarithmic ratio approaches d, reflecting the set's effective "volume" growth in the covering space.

Variants and alternatives

Upper and lower dimensions

When the limit defining the Minkowski–Bouligand dimension does not exist, the upper and lower variants provide refined measures of a set's scaling behavior. The upper Minkowski–Bouligand dimension of a bounded set A \subset \mathbb{R}^n, denoted d_B^*(A), is given by

d_B^*(A) = \limsup_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log (1/\varepsilon)},

where N(\varepsilon) denotes the smallest number of sets of diameter at most \varepsilon required to cover A. The lower Minkowski–Bouligand dimension, denoted d_{B*}(A), is defined analogously using the limit inferior:

d_{B*}(A) = \liminf_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log (1/\varepsilon)}.

These quantities always satisfy d_{B*}(A) \leq d_B^*(A), with equality holding if and only if the original limit exists. The upper and lower dimensions differ for irregular sets that display varying local densities across scales, such as those constructed with oscillating scalings where the covering number N(\varepsilon) fluctuates irregularly as \varepsilon \to 0. In such cases, the limsup captures the most rapid growth in complexity at certain scales, while the liminf reflects slower growth at others, leading to d_{B*}(A) < d_B^*(A). Equality between the upper and lower dimensions occurs under conditions of sufficient regularity that ensure consistent scaling.

For self-similar sets satisfying the open set condition, the upper and lower Minkowski–Bouligand dimensions coincide and equal the similarity dimension. Similarly, Ahlfors–David regular sets, where the s-dimensional Hausdorff measure of \varepsilon-balls around points scales comparably to \varepsilon^s, exhibit d_B^*(A) = d_{B*}(A) = s. The Minkowski–Bouligand dimension is considered well-defined as a unique value only when d_B^*(A) = d_{B*}(A); otherwise, the interval [d_{B*}(A), d_B^*(A)] bounds the set's fractal complexity, providing lower and upper estimates without a single precise exponent. This distinction highlights the sensitivity of box-counting methods to multiscale irregularities in non-regular fractals.

Equivalent definitions

The Minkowski–Bouligand dimension admits an equivalent formulation based on the dilation or "sausage" method, which examines the scaling behavior of the Lebesgue measure of the ε-neighborhood of a bounded set A \subset \mathbb{R}^n. Specifically, let \mu(\varepsilon) denote the n-dimensional Lebesgue measure of the ε-parallel set B(A, \varepsilon) = \{ x \in \mathbb{R}^n : \operatorname{dist}(x, A) \leq \varepsilon \}. The dimension is then given by

d_B(A) = n - \lim_{\varepsilon \to 0^+} \frac{\log \mu(\varepsilon)}{\log \varepsilon},

provided the limit exists; since \mu(\varepsilon) scales asymptotically as \varepsilon^{n - d_B(A)}, the ratio \log \mu(\varepsilon) / \log \varepsilon approaches n - d_B(A). Another equivalent definition employs the Minkowski content, which quantifies the "s-dimensional measure" of A via the volumes of these parallel sets.

The upper s-dimensional Minkowski content is defined as

\overline{M}^s(A) = \limsup_{\varepsilon \to 0^+} \frac{\mu(\varepsilon)}{\varepsilon^{n-s}},

and the lower content \underline{M}^s(A) is defined analogously with the limit inferior. The upper (respectively lower) Minkowski–Bouligand dimension is the critical value of s at which the corresponding content transitions from infinite to zero: \overline{M}^s(A) = \infty for s < d_B^*(A) and \overline{M}^s(A) = 0 for s > d_B^*(A). These formulations are equivalent to the standard box-counting definition for bounded sets in \mathbb{R}^n. The proof relies on the fact that the volume of the ε-neighborhood \mu(\varepsilon) is comparable to N(\varepsilon) \varepsilon^n, where N(\varepsilon) is the box-counting number, up to constants depending on the ambient dimension; specifically, c_1 \varepsilon^n N(2\varepsilon) \leq \mu(\varepsilon) \leq c_2 \varepsilon^n N(\varepsilon/2) for suitable c_1, c_2 > 0, ensuring both capture the same logarithmic scaling exponent as \varepsilon \to 0^+. The content-based approach traces its origins to Georges Bouligand's work, where he introduced the notion of "nombre dimensionnel" through improper ensembles and boundary measures, predating the box-counting method and laying foundational ideas for scaling exponents in non-integer dimensions.

Properties

Monotonicity and bounds

The Minkowski–Bouligand dimension, also known as the box-counting dimension and denoted d_B(E) for a set E, exhibits monotonicity with respect to set inclusion. Specifically, if A \subseteq B \subseteq \mathbb{R}^n, then d_B(A) \leq d_B(B). This inequality is strict in many cases where A is a proper subset of B, particularly when the subset has a sparser structure, such as a finite subset of an infinite fractal set. The dimension is invariant under changes in the ambient space. For a set E \subseteq \mathbb{R}^n embedded into \mathbb{R}^m with m \geq n via a bi-Lipschitz map f, d_B(E) = d_B(f(E)), ensuring the value remains unchanged regardless of the higher-dimensional embedding. This property arises from the dimension's stability under bi-Lipschitz transformations, which preserve the scaling behavior central to the box-counting procedure. For any bounded set E \subseteq \mathbb{R}^n, the Minkowski–Bouligand dimension satisfies 0 \leq d_B(E) \leq n. The lower bound of 0 holds for empty or finite sets, while the upper bound of n is achieved by sets with non-empty interior, such as open balls in \mathbb{R}^n. Additionally, d_B(E) \geq d_T(E), where d_T(E) is the topological dimension of E, with equality for smooth manifolds but often strict inequality for fractals. These bounds highlight how the box-counting dimension captures geometric complexity beyond topological structure. For instance, rectifiable curves in \mathbb{R}^n, which have topological dimension 1, satisfy d_B = 1. In contrast, curves like the Koch curve, also topologically 1-dimensional, have d_B = \log 4 / \log 3 \approx 1.2619 > 1, illustrating how the Minkowski–Bouligand dimension can exceed the topological dimension to quantify fine-scale irregularity.

Multiplicative properties

The Minkowski–Bouligand dimension, also known as the box-counting dimension, exhibits additivity under Cartesian products of bounded sets. For bounded subsets A \subset \mathbb{R}^m and B \subset \mathbb{R}^n such that the dimension exists (i.e., the upper and lower box dimensions coincide), the dimension of the product set satisfies

d_B(A \times B) = d_B(A) + d_B(B).

This result holds because the number of boxes required to cover the product scales multiplicatively with the counts in each factor, leading to the logarithmic additivity in the dimension formula. More generally, even when the upper and lower variants differ, the upper box dimension of the product is bounded above by the sum of the upper dimensions, \overline{d}_B(A \times B) \leq \overline{d}_B(A) + \overline{d}_B(B), while the lower box dimension satisfies \underline{d}_B(A \times B) \geq \underline{d}_B(A) + \underline{d}_B(B). Equality in the additive case requires the existence of the box dimension for each factor, which ensures the covering estimates at corresponding scales match up.

For unions, the box dimension is finitely stable but not countably stable. For a finite collection of subsets \{A_i\}_{i=1}^k \subset \mathbb{R}^n, the upper box dimension of the union satisfies

\overline{d}_B\left( \bigcup_{i=1}^k A_i \right) = \max_{1 \leq i \leq k} \overline{d}_B(A_i).

The same holds for the lower box dimension. For countable unions \{A_i\}_{i=1}^\infty, only the inequality \overline{d}_B\left( \bigcup_{i=1}^\infty A_i \right) \geq \sup_{i} \overline{d}_B(A_i) holds, with strict inequality possible; for example, the rational numbers in [0,1] are a countable union of singletons (each with upper box dimension 0) but have upper box dimension 1. This reflects the subadditive nature of box coverings for finite unions, where the total number of boxes is at most the sum, but for countable unions the covering number can grow faster asymptotically.

Regarding intersections, the dimension provides lower bounds analogous to those for the Hausdorff dimension. In \mathbb{R}^n, for subsets A, B \subset \mathbb{R}^n in generic relative position (the bound fails for special alignments, e.g., disjoint sets), one has

d_B(A \cap B) \geq d_B(A) + d_B(B) - n,

assuming the dimensions exist; this follows from slicing or projection arguments that control the covering numbers of the intersection via the ambient dimension. This bound is sharp in many cases, such as transversal intersections of sets with complementary dimensions.

The dimension is also stable under bi-Lipschitz mappings, which preserve the scaling behavior of coverings: if f : \mathbb{R}^m \to \mathbb{R}^n is bi-Lipschitz, then for any A \subset \mathbb{R}^m, d_B(f(A)) = d_B(A). Under merely Lipschitz mappings the dimension can only decrease or stay the same, d_B(f(A)) \leq d_B(A), since Lipschitz maps do not expand distances by more than a constant factor, so the image can be covered with essentially no more boxes than the original set.
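As a concrete instance of the product rule, take the standard middle-thirds Cantor set C, for which the box dimension exists and equals \log 2 / \log 3:

```latex
\underline{d}_B(C) = \overline{d}_B(C) = \frac{\log 2}{\log 3}
\quad\Longrightarrow\quad
d_B(C \times C) = d_B(C) + d_B(C) = \frac{2\log 2}{\log 3} \approx 1.26,
```

consistent with covering C \times C directly: at scale 3^{-k} the product is covered by N(3^{-k})^2 = 4^k grid squares, and \log 4^k / \log 3^k = 2\log 2/\log 3.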

Comparisons

Relation to Hausdorff dimension

The Minkowski–Bouligand dimension d_B(A), also known as the box-counting dimension, provides an upper bound for the Hausdorff dimension \dim_H(A) of a set A \subset \mathbb{R}^n. Specifically, for any such set, \dim_H(A) \leq d_B(A). This inequality reflects the differing covering strategies: the Hausdorff dimension is computed using the infimum over all countable covers by sets of arbitrary diameters, enabling highly optimized approximations of the set's "size" at fine scales, whereas the Minkowski–Bouligand dimension relies on uniform grid-based covers (such as boxes of equal side length \delta), which are inherently less flexible and thus yield a potentially larger dimension value. The inequality can be strict, as demonstrated by certain pathological sets where the box-counting covers capture the geometry inefficiently compared to tailored Hausdorff covers. However, equality \dim_H(A) = d_B(A) holds under specific regularity conditions. For instance, compact self-similar sets satisfying the open set condition (where the images of the generating similitudes are contained in disjoint open sets) exhibit this equality, with both dimensions equaling the unique solution s to the equation \sum_i r_i^s = 1, where the r_i are the similarity ratios. This result follows from the self-similarity ensuring consistent scaling behavior across levels, allowing mass distribution principles to bound the Hausdorff dimension from below while the open set condition prevents excessive overlap that could inflate the box count. Ahlfors-regular sets, characterized by the existence of a measure \mu such that c^{-1} r^s \leq \mu(B(x,r)) \leq c r^s for a constant c \geq 1 and all small balls B(x,r) centered at points of the set, also satisfy \dim_H(A) = d_B(A) = s.

The regularity enforces uniform density, bridging the gap between variable and uniform coverings by guaranteeing that local scales behave predictably, thereby equating the two dimensions. For some special classes of sets, refined bounds such as d_B(A) \leq 2 \dim_H(A) have been proposed, though the general relation is mediated by the packing dimension, which satisfies \dim_H(A) \leq \dim_P(A) \leq \overline{d}_B(A). These connections highlight the Minkowski–Bouligand dimension's role as a computationally accessible proxy that upper-bounds the more geometrically refined Hausdorff dimension.

Relation to packing dimension

The packing dimension of a set A, denoted \dim_P A, is closely related to the maximal number M(\delta, A) of pairwise disjoint balls of radius \delta centered at points of A. The exponent

\limsup_{\delta \to 0} \frac{\log M(\delta, A)}{-\log \delta}

coincides with the upper box-counting dimension \overline{d}_B(A): a maximal \delta-packing yields a cover by balls of radius 2\delta, and conversely an optimal cover by N(\delta, A) sets of diameter \delta admits at most N(\delta, A) disjoint \delta-balls with centers in A. The packing dimension proper is obtained by stabilizing this exponent over countable covers:

\dim_P A = \inf\left\{ \sup_i \overline{d}_B(A_i) : A \subseteq \bigcup_{i=1}^\infty A_i \right\}.

It captures the "packing efficiency" of A at small scales by maximizing the number of separated subsets without overlap. By construction, \dim_P A \leq \overline{d}_B(A), and together with the Hausdorff dimension this forms the sandwich inequality \dim_H A \leq \dim_P A \leq \overline{d}_B(A). Equality \dim_P A = \overline{d}_B(A) holds for many regular fractal sets, such as the Sierpiński gasket, where all three dimensions coincide at \log 3 / \log 2 \approx 1.585, due to the uniform scaling and the open set condition ensuring efficient packings match coverings. In general, equality occurs when the set admits packings that are nearly as efficient as minimal covers, often under self-similarity assumptions. The upper box-counting dimension strictly exceeds the packing dimension for sparse or irregular sets whose complexity concentrates at certain scales; for example, the compact countable set \{0\} \cup \{1/n : n \geq 1\} has packing dimension 0 (the packing dimension is countably stable) but upper box dimension 1/2.

Examples

Classic fractals

The Minkowski–Bouligand dimension, also known as the box-counting dimension, provides a measure of complexity for self-similar sets through their scaling properties. For classic fractals constructed via iterated function systems, this dimension is computed using the formula d_B = \frac{\log N}{\log (1/r)}, where N is the number of self-similar copies and r is the scaling factor. In these cases, the Minkowski–Bouligand dimension equals the similarity dimension due to the satisfaction of the open set condition. The middle-third Cantor set is generated by starting with the interval [0,1] and iteratively removing the open middle third of each remaining interval, yielding 2^n intervals of length 3^{-n} at stage n. This self-similar structure has N = 2 copies scaled by r = 1/3, resulting in d_B = \frac{\log 2}{\log 3} \approx 0.631. The Koch snowflake begins with an equilateral triangle and iteratively replaces each side with four segments of length one-third, forming a closed fractal curve. The boundary exhibits self-similarity with N = 4 segments scaled by r = 1/3, giving d_B = \frac{\log 4}{\log 3} \approx 1.262. The Sierpinski triangle, or gasket, is constructed by starting with an equilateral triangle and recursively removing the central inverted triangle from each subtriangle, leaving three smaller triangles at each step. With N = 3 copies scaled by r = 1/2, the dimension is d_B = \frac{\log 3}{\log 2} \approx 1.585.
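The similarity-dimension values quoted above follow from the single formula d_B = log N / log(1/r); a short Python illustration (the function name `similarity_dimension` is ours, not standard library API):

```python
import math

def similarity_dimension(n_copies, scale):
    """d_B = log N / log(1/r) for a self-similar set built from n_copies
    pieces, each scaled by factor r (valid under the open set condition)."""
    return math.log(n_copies) / math.log(1.0 / scale)

print(round(similarity_dimension(2, 1 / 3), 3))  # Cantor set: 0.631
print(round(similarity_dimension(4, 1 / 3), 3))  # Koch curve: 1.262
print(round(similarity_dimension(3, 1 / 2), 3))  # Sierpinski gasket: 1.585
```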

Computational estimation

Computational estimation of the Minkowski–Bouligand dimension typically involves numerical approximations for datasets that cannot be analyzed analytically, such as digitized images or sampled signals, relying on iterative covering procedures to approximate the scaling behavior of the set. One common algorithm is the sliding window box-counting method, which scans the dataset with overlapping boxes of varying sizes to count occupied regions, reducing boundary effects and improving accuracy for irregular shapes like textures in images. For point-cloud data, the sandbox method places cubic or spherical "sandboxes" centered on each point and measures the mass (number of points) within them as a function of radius, providing a localized estimate suitable for sparse or three-dimensional distributions. Software tools facilitate these computations; for instance, the Fractal Dimension Calculator implements box-counting for binary images by systematically overlaying grids and tallying non-empty boxes across scales. To derive the dimension value, log-log regression is applied to the collected data: points consisting of \log(1/\epsilon_i) on the horizontal axis and \log N(\epsilon_i) on the vertical axis, where \epsilon_i are the box sizes and N(\epsilon_i) the corresponding counts, are fitted with a straight line, and the slope of this line yields the estimated dimension. This least-squares fit assumes power-law scaling over the chosen scales, with the regression's reliability depending on the linearity of the plot. Several challenges arise in these estimations, including finite resolution limits that cause deviations at small scales due to noise or sampling density, potentially inflating or deflating the estimate. The selection of the \epsilon range is critical, as including non-scaling regimes (e.g., very large or tiny boxes) can lead to biased slopes, often addressed by automated sliding-window optimization to identify the optimal linear segment.

Additionally, noise in the data increases the apparent dimension, as random perturbations fill more boxes, while discretization artifacts from grid alignment introduce systematic errors, necessitating preprocessing like smoothing or denoising techniques. These issues are particularly pronounced in real-world datasets, where the box-counting procedure must be adapted to handle irregularities beyond ideal theoretical sets. In applications, such estimations are widely used in image analysis to quantify surface roughness or texture complexity, such as in material science for evaluating fuel nozzle morphologies via 3D point clouds. For time series data, including EEG signals, the method helps detect pathological patterns by computing the dimension of signal embeddings, aiding in seizure classification through features like generalized fractal dimensions.
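The log-log regression step described above can be sketched as follows; this is an illustrative least-squares fit (the function name `fit_dimension` is ours), here applied to idealized Cantor-set counts N(3^{-k}) = 2^k rather than measured data, so the slope recovers log 2 / log 3 exactly:

```python
import math

def fit_dimension(scale_counts):
    """Least-squares slope of log N(eps) against log(1/eps), the standard
    way to read a box-counting dimension off a log-log plot."""
    xs = [math.log(1.0 / eps) for eps, _ in scale_counts]
    ys = [math.log(n) for _, n in scale_counts]
    m = len(xs)
    mx = sum(xs) / m
    my = sum(ys) / m
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Idealized Cantor-set data: N(3**-k) = 2**k, so the slope is log 2 / log 3.
data = [(3.0 ** -k, 2 ** k) for k in range(1, 9)]
print(fit_dimension(data))
```

With real data the points deviate from a line at the smallest and largest scales, which is why restricting the fit to the linear regime (as discussed above) matters.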
