Discretization
from Wikipedia
A solution to a discretized partial differential equation, obtained with the finite element method.

In applied mathematics, discretization is the process of transferring continuous functions, models, variables, and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Dichotomization is the special case of discretization in which the number of discrete classes is 2, which can approximate a continuous variable as a binary variable (creating a dichotomy for modeling purposes, as in binary classification).

Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, discretization may also refer to modification of variable or category granularity, as when multiple discrete variables are aggregated or multiple discrete categories fused.

Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand.

The terms discretization and quantization often have the same denotation but not always identical connotations. (Specifically, the two terms share a semantic field.) The same is true of discretization error and quantization error.

Mathematical methods relating to discretization include the Euler–Maruyama method and the zero-order hold.

Discretization of linear state space models


Discretization is also concerned with the transformation of continuous differential equations into discrete difference equations, suitable for numerical computing.

The following continuous-time state space model

$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}\mathbf{u}(t) + \mathbf{w}(t)$$
$$\mathbf{y}(t) = \mathbf{C}\mathbf{x}(t) + \mathbf{D}\mathbf{u}(t) + \mathbf{v}(t)$$

where v and w are continuous zero-mean white noise sources with power spectral densities

$$\mathbf{w}(t) \sim N(0, \mathbf{Q})$$
$$\mathbf{v}(t) \sim N(0, \mathbf{R})$$

can be discretized, assuming zero-order hold for the input u and continuous integration for the noise v, to

$$\mathbf{x}[k+1] = \mathbf{A}_d \mathbf{x}[k] + \mathbf{B}_d \mathbf{u}[k] + \mathbf{w}[k]$$
$$\mathbf{y}[k] = \mathbf{C}_d \mathbf{x}[k] + \mathbf{D}_d \mathbf{u}[k] + \mathbf{v}[k]$$

with covariances

$$\mathbf{w}[k] \sim N(0, \mathbf{Q}_d)$$
$$\mathbf{v}[k] \sim N(0, \mathbf{R}_d)$$

where

$$\mathbf{A}_d = e^{\mathbf{A}T}, \qquad \mathbf{B}_d = \left(\int_0^T e^{\mathbf{A}\tau}\, d\tau\right)\mathbf{B}, \qquad \mathbf{C}_d = \mathbf{C}, \qquad \mathbf{D}_d = \mathbf{D}$$
$$\mathbf{Q}_d = \int_0^T e^{\mathbf{A}\tau}\,\mathbf{Q}\, e^{\mathbf{A}^\top \tau}\, d\tau, \qquad \mathbf{R}_d = \mathbf{R}\,\frac{1}{T}$$

and T is the sample time. If A is nonsingular,

$$\mathbf{B}_d = \mathbf{A}^{-1}\left(\mathbf{A}_d - \mathbf{I}\right)\mathbf{B}.$$
The equation for the discretized measurement noise is a consequence of the continuous measurement noise being defined with a power spectral density.[1]
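As a concrete illustration, the minimal sketch below (assuming NumPy and SciPy, with made-up matrices) computes the ZOH matrices numerically via scipy.signal.cont2discrete and checks B_d against the closed-form expression for nonsingular A:

```python
# A sketch of ZOH discretization for an illustrative two-state system;
# the matrices and sample time are assumptions, not from a specific plant.
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # continuous-time state matrix
B = np.array([[0.0], [1.0]])               # input matrix
C = np.array([[1.0, 0.0]])                 # output matrix
D = np.array([[0.0]])
T = 0.1                                    # sample time

Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), T, method='zoh')

# For nonsingular A, B_d = A^{-1} (A_d - I) B should match:
Bd_check = np.linalg.inv(A) @ (Ad - np.eye(2)) @ B
assert np.allclose(Bd, Bd_check)
```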

A clever trick to compute Ad and Bd in one step is by utilizing the following property:[2]: p. 215 

$$e^{\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} T} = \begin{bmatrix} \mathbf{A}_d & \mathbf{B}_d \\ \mathbf{0} & \mathbf{I} \end{bmatrix}$$

where Ad and Bd are the discretized state-space matrices.
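A minimal sketch of this trick (illustrative matrices, assuming SciPy's expm):

```python
# Exponentiate the augmented matrix [[A, B], [0, 0]] * T and read A_d and
# B_d off the partitions; A, B, T are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
T = 0.1
n, m = B.shape

M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B

E = expm(M * T)
Ad = E[:n, :n]      # upper-left partition
Bd = E[:n, n:]      # upper-right partition
```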

Discretization of process noise


Numerical evaluation of $\mathbf{Q}_d$ is a bit trickier due to the matrix exponential integral. It can, however, be computed by first constructing a matrix

$$\mathbf{F} = \begin{bmatrix} -\mathbf{A} & \mathbf{Q} \\ \mathbf{0} & \mathbf{A}^\top \end{bmatrix} T$$

and computing its exponential[3]

$$\mathbf{G} = e^{\mathbf{F}} = \begin{bmatrix} \dots & \mathbf{A}_d^{-1}\mathbf{Q}_d \\ \mathbf{0} & \mathbf{A}_d^\top \end{bmatrix}.$$

The discretized process noise is then evaluated by multiplying the transpose of the lower-right partition of G with the upper-right partition of G:

$$\mathbf{Q}_d = \left(\mathbf{A}_d^\top\right)^\top \left(\mathbf{A}_d^{-1}\mathbf{Q}_d\right).$$
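A sketch of this construction (illustrative A, Q, and T; assuming SciPy's expm):

```python
# Build F = [[-A, Q], [0, A^T]] * T, exponentiate, and combine the
# partitions as described above; the matrices are assumptions.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.diag([0.0, 0.5])        # continuous process-noise spectral density
T = 0.1
n = A.shape[0]

F = np.zeros((2 * n, 2 * n))
F[:n, :n] = -A
F[:n, n:] = Q
F[n:, n:] = A.T

G = expm(F * T)
Ad = G[n:, n:].T               # transpose of the lower-right partition
Qd = Ad @ G[:n, n:]            # ... times the upper-right partition
```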

Derivation


Starting with the continuous model

$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}\mathbf{u}(t)$$

we know that the matrix exponential is

$$\frac{d}{dt} e^{\mathbf{A}t} = \mathbf{A} e^{\mathbf{A}t} = e^{\mathbf{A}t} \mathbf{A}$$

and by premultiplying the model we get

$$e^{-\mathbf{A}t} \dot{\mathbf{x}}(t) = e^{-\mathbf{A}t} \mathbf{A}\mathbf{x}(t) + e^{-\mathbf{A}t} \mathbf{B}\mathbf{u}(t)$$

which we recognize as

$$\frac{d}{dt}\left(e^{-\mathbf{A}t}\mathbf{x}(t)\right) = e^{-\mathbf{A}t} \mathbf{B}\mathbf{u}(t)$$

and by integrating,

$$e^{-\mathbf{A}t}\mathbf{x}(t) - e^{0}\mathbf{x}(0) = \int_0^t e^{-\mathbf{A}\tau}\mathbf{B}\mathbf{u}(\tau)\, d\tau$$
$$\mathbf{x}(t) = e^{\mathbf{A}t}\mathbf{x}(0) + \int_0^t e^{\mathbf{A}(t-\tau)}\mathbf{B}\mathbf{u}(\tau)\, d\tau$$

which is an analytical solution to the continuous model.

Now we want to discretise the above expression. We assume that u is constant during each timestep.

$$\mathbf{x}[k] \equiv \mathbf{x}(kT)$$
$$\mathbf{x}[k] = e^{\mathbf{A}kT}\mathbf{x}(0) + \int_0^{kT} e^{\mathbf{A}(kT-\tau)}\mathbf{B}\mathbf{u}(\tau)\, d\tau$$
$$\mathbf{x}[k+1] = e^{\mathbf{A}(k+1)T}\mathbf{x}(0) + \int_0^{(k+1)T} e^{\mathbf{A}((k+1)T-\tau)}\mathbf{B}\mathbf{u}(\tau)\, d\tau$$
$$\mathbf{x}[k+1] = e^{\mathbf{A}T}\left[ e^{\mathbf{A}kT}\mathbf{x}(0) + \int_0^{kT} e^{\mathbf{A}(kT-\tau)}\mathbf{B}\mathbf{u}(\tau)\, d\tau \right] + \int_{kT}^{(k+1)T} e^{\mathbf{A}(kT+T-\tau)}\mathbf{B}\mathbf{u}(\tau)\, d\tau$$

We recognize the bracketed expression as $\mathbf{x}[k]$, and the second term can be simplified by substituting with the function $v(\tau) = kT + T - \tau$. Note that $dv = -d\tau$. We also assume that $\mathbf{u}$ is constant during the integral, which in turn yields

$$\mathbf{x}[k+1] = e^{\mathbf{A}T}\mathbf{x}[k] + \left(\int_0^T e^{\mathbf{A}v}\, dv\right)\mathbf{B}\mathbf{u}[k]$$

which is an exact solution to the discretization problem.

When $\mathbf{A}$ is singular, the expression $\mathbf{B}_d = \mathbf{A}^{-1}\left(e^{\mathbf{A}T} - \mathbf{I}\right)\mathbf{B}$ given above can still be used by replacing $e^{\mathbf{A}T}$ by its Taylor expansion,

$$e^{\mathbf{A}T} = \mathbf{I} + \mathbf{A}T + \frac{1}{2!}(\mathbf{A}T)^2 + \cdots$$

This yields

$$\mathbf{x}[k+1] = e^{\mathbf{A}T}\mathbf{x}[k] + \left(\sum_{i=1}^{\infty} \frac{1}{i!}\mathbf{A}^{i-1}T^{i}\right)\mathbf{B}\mathbf{u}[k],$$

which is the form used in practice.
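A short sketch of this series form for a singular A (a double integrator, chosen for illustration; the series is truncated after 20 terms, far more than needed here since A is nilpotent):

```python
# Compute B_d = (sum_{i>=1} A^(i-1) T^i / i!) B for a singular A.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: det(A) = 0
B = np.array([[0.0], [1.0]])
T = 0.1

S = np.zeros((2, 2))                     # accumulates the truncated series
term = np.eye(2) * T                     # i = 1 term: A^0 T / 1!
for i in range(1, 20):
    S += term
    term = term @ A * T / (i + 1)        # next term: A^i T^(i+1) / (i+1)!

Bd = S @ B
print(Bd)                                # -> [[T**2 / 2], [T]]
```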

Approximations


Exact discretization may sometimes be intractable due to the heavy matrix exponential and integral operations involved. It is much easier to calculate an approximate discrete model, based on the fact that for small timesteps $e^{\mathbf{A}T} \approx \mathbf{I} + \mathbf{A}T$. The approximate solution then becomes:

$$\mathbf{x}[k+1] \approx \left(\mathbf{I} + \mathbf{A}T\right)\mathbf{x}[k] + T\mathbf{B}\mathbf{u}[k]$$

This is known as the Euler method, or the forward Euler method. Other possible approximations are $e^{\mathbf{A}T} \approx \left(\mathbf{I} - \mathbf{A}T\right)^{-1}$, otherwise known as the backward Euler method, and $e^{\mathbf{A}T} \approx \left(\mathbf{I} + \tfrac{1}{2}\mathbf{A}T\right)\left(\mathbf{I} - \tfrac{1}{2}\mathbf{A}T\right)^{-1}$, which is known as the bilinear transform, or Tustin transform. Each of these approximations has different stability properties. The bilinear transform preserves the instability of the continuous-time system.
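The sketch below (illustrative A and T, assuming SciPy's expm) compares the three approximations against the exact matrix exponential:

```python
# Compare forward Euler, backward Euler, and Tustin against exact e^{AT}.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
T = 0.1
I = np.eye(2)

exact    = expm(A * T)
forward  = I + A * T                                       # forward Euler
backward = np.linalg.inv(I - A * T)                        # backward Euler
tustin   = (I + A * T / 2) @ np.linalg.inv(I - A * T / 2)  # bilinear/Tustin

for name, M in [("forward", forward), ("backward", backward), ("tustin", tustin)]:
    print(name, np.max(np.abs(M - exact)))                 # max elementwise error
```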

Discretization of continuous features


In statistics and machine learning, discretization refers to the process of converting continuous features or variables to discretized or nominal features. This can be useful when creating probability mass functions.
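For example, a minimal sketch (assuming NumPy, with randomly generated data) that discretizes a continuous feature into five equal-width bins and forms an empirical probability mass function over them:

```python
# Equal-width discretization of a continuous feature into nominal bins,
# followed by an empirical PMF; the data are synthetic for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=1000)   # continuous feature

edges = np.linspace(x.min(), x.max(), num=6)    # 5 equal-width bins
labels = np.digitize(x, edges[1:-1])            # bin index 0..4 per sample

pmf = np.bincount(labels, minlength=5) / x.size # empirical PMF over bins
print(pmf, pmf.sum())                           # probabilities sum to 1.0
```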

Discretization of smooth functions


In generalized functions theory, discretization arises as a particular case of the Convolution Theorem on tempered distributions

$$\mathcal{F}\{f \cdot \operatorname{III}\} = \mathcal{F}\{f\} * \operatorname{III}$$
$$\mathcal{F}\{\alpha * \operatorname{III}\} = \mathcal{F}\{\alpha\} \cdot \operatorname{III}$$

where $\operatorname{III}$ is the Dirac comb, $\cdot \operatorname{III}$ is discretization, $* \operatorname{III}$ is periodization, $f$ is a rapidly decreasing tempered distribution (e.g. a Dirac delta function $\delta$ or any other compactly supported function), $\alpha$ is a smooth, slowly growing ordinary function (e.g. the function that is constantly $1$ or any other band-limited function) and $\mathcal{F}$ is the (unitary, ordinary frequency) Fourier transform. Functions which are not smooth can be made smooth using a mollifier prior to discretization.

As an example, discretization of the function that is constantly $1$ yields the sequence $[\dots, 1, 1, 1, \dots]$ which, interpreted as the coefficients of a linear combination of Dirac delta functions, forms a Dirac comb. If additionally truncation is applied, one obtains finite sequences, e.g. $[1, 1, 1, 1]$. They are discrete in both time and frequency.
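A small numerical illustration of the truncated case (assuming NumPy): the discrete Fourier transform of the finite sequence [1, 1, 1, 1] is concentrated on a single bin, so the pair is discrete in both domains:

```python
# DFT of a truncated constant sequence: all energy lands in one frequency bin.
import numpy as np
print(np.fft.fft([1, 1, 1, 1]))   # -> [4.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]
```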

from Grokipedia
Discretization is the process of converting continuous mathematical objects—such as functions, variables, domains, or equations—into discrete approximations, allowing for numerical analysis and computation where exact continuous solutions are infeasible. This technique bridges the gap between theoretical continuous models and practical discrete implementations on computers, minimizing information loss while simplifying problem-solving.

In numerical analysis, discretization plays a central role in solving differential equations by replacing derivatives with finite differences or integrals over discrete elements, transforming partial differential equations (PDEs) into systems of algebraic equations. Key methods include the finite difference method (FDM), which approximates derivatives using point-wise differences on a grid; the finite element method (FEM), which divides the domain into subdomains (elements) for variational approximations, particularly suited for complex geometries; and the finite volume method (FVM), which ensures conservation properties by integrating over control volumes, widely used in computational fluid dynamics. These approaches are critical for modeling physical phenomena such as fluid flow and heat transfer, with accuracy depending on grid resolution and error analysis techniques such as local truncation error estimation. Stability, consistency, and convergence are fundamental properties evaluated to ensure reliable solutions, as per the Lax equivalence theorem in numerical PDE theory.

In data mining and machine learning, discretization preprocesses continuous attributes by partitioning them into finite intervals or bins, converting numerical data into categorical forms to enhance algorithm efficiency and interpretability. Common unsupervised techniques include equal-width binning, which divides the range into uniform intervals, and equal-frequency binning, which ensures equal numbers of instances per bin; supervised approaches, such as entropy-based minimum description length (MDLP) discretization, leverage class labels to optimize cuts for predictive accuracy. This process reduces data complexity, accelerates learning in algorithms like decision trees, and uncovers patterns in datasets, though it risks information loss if bins are poorly chosen. Applications range from time-series analysis, where it simplifies trend detection, to feature preprocessing in classification tasks.

Overall, discretization's versatility underscores its foundational role across disciplines, balancing computational tractability with fidelity to continuous realities, and it continues to evolve with advances in adaptive gridding and hybrid methods.

General Concepts

Definition and Purpose

Discretization refers to the process of approximating continuous mathematical models, functions, or domains—such as real numbers representing time, space, or other variables—by mapping them onto finite or countable discrete sets, while aiming to preserve key properties like structural integrity or statistical characteristics. This transformation converts infinite-dimensional continuous problems into manageable finite representations suitable for computational handling. In essence, it bridges the gap between theoretical continuous frameworks and practical discrete implementations in fields like numerical analysis and signal processing.

Historically, discretization traces its roots to early numerical methods for solving differential equations, with Leonhard Euler introducing a foundational approach in 1768 through his work on integral calculus, where he approximated solutions by stepping through discrete increments. This method, now known as Euler's method, marked an initial shift toward discretizing continuous dynamics for practical computation, predating widespread digital tools but laying groundwork for modern simulations driven by the need for efficiency in processing complex systems.

The primary purpose of discretization is to enable the solution of continuous problems on digital computers, which inherently operate with discrete representations, thereby simplifying computation and reducing infinite problems to finite, solvable ones. For instance, it facilitates converting analog signals into digital formats for signal processing, or approximating continuous domains via grid-based models in simulations, making intractable continuous computations feasible. In control theory and dynamical systems, it supports simulation and stability assessments by transforming raw continuous inputs into discrete structures. A key trade-off in discretization involves balancing loss of precision—manifested as approximation errors that deviate from the exact continuous solution—with gains in tractability, allowing efficient numerical computation and storage.
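As an illustration of the stepping idea behind Euler's method mentioned above, a minimal sketch (the test equation dx/dt = -x and the step size are arbitrary choices):

```python
# Forward Euler: march a continuous ODE forward in discrete increments h.
import math

def euler(f, x0, t_end, h):
    """Step from t = 0 to t_end in increments of h using x <- x + h * f(x)."""
    x, t = x0, 0.0
    while t < t_end:
        x += h * f(x)
        t += h
    return x

approx = euler(lambda x: -x, x0=1.0, t_end=1.0, h=0.01)
print(approx, math.exp(-1.0))   # discrete approximation vs. exact e^{-1}
```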

Fundamental Methods

Sampling represents a primary technique for discretizing continuous-time signals by capturing their values at discrete instants. In uniform sampling, samples are taken at fixed time intervals $T$, resulting in a sampling rate $f_s = 1/T$. To faithfully represent the signal without aliasing, the Nyquist-Shannon sampling theorem requires that $f_s$ exceed twice the highest frequency component $f_{\max}$ in the signal's spectrum, i.e. $f_s > 2 f_{\max}$. Non-uniform sampling, by contrast, employs irregular intervals, which can reduce the total number of samples for bandlimited signals while preserving reconstructibility under certain conditions, such as when samples are sufficiently dense on average.

Quantization discretizes the amplitude range of a continuous or sampled signal by mapping values to a finite set of representation levels. Uniform quantization divides the range into equally spaced intervals of width $\Delta$, rounding each value to the nearest level; the resulting distortion is commonly quantified by the mean squared error (MSE), approximated as $\frac{\Delta^2}{12}$ for signals uniformly distributed over the interval, assuming overload is negligible. Non-uniform schemes, such as logarithmic quantization, use smaller intervals for low-amplitude values and larger ones for high amplitudes to better match perceptual or statistical signal characteristics, thereby reducing MSE for non-uniform distributions at the same bit rate.

Partitioning extends discretization to data domains by segmenting a continuous range into discrete bins or cells. Equal-width partitioning divides the domain into intervals of identical length, promoting simplicity and uniformity in coverage regardless of the data distribution. Equal-frequency partitioning, conversely, adjusts interval boundaries so each bin contains roughly the same number of observations, which helps balance representation in datasets with varying densities but may produce uneven interval widths.

Interpolation serves as the inverse operation to discretization, reconstructing an approximate continuous signal from discrete points. Nearest-neighbor interpolation assigns to any point the value of its closest sample, offering computational efficiency but introducing discontinuities. Linear interpolation, a first-order method, estimates values along straight lines connecting adjacent samples, providing smoother results with minimal overhead. These techniques approximate the ideal reconstruction process, which for bandlimited signals involves sinc interpolation to achieve perfect recovery when the Nyquist criterion is met. Such methods also play a role in discretizing continuous state-space models by selecting time steps for the approximation.
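A brief sketch of uniform quantization (assuming NumPy; uniformly distributed random input, matching the assumption behind the $\Delta^2/12$ approximation):

```python
# Round samples to a grid of step delta and compare the empirical
# distortion with the delta^2 / 12 rule of thumb.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=100_000)   # continuous-amplitude samples

delta = 0.05                                # quantization step
xq = delta * np.round(x / delta)            # uniform (mid-tread) quantizer

mse = np.mean((x - xq) ** 2)
print(mse, delta**2 / 12)                   # empirically close
```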

Discretization in Dynamical Systems

Continuous to Discrete Time Conversion

The conversion of continuous-time dynamical systems to discrete-time equivalents is a fundamental step in digital control and simulation, enabling the implementation of algorithms on sampled-data platforms. This typically assumes a zero-order hold (ZOH) on the input, where the control signal remains constant between sampling instants, transforming differential equations into difference equations. Under the ZOH assumption, the input $u(t)$ is held fixed at $u_k$ for $kT \leq t < (k+1)T$, where $T$ is the sampling period, allowing exact derivation of the discrete model for linear systems and approximate methods for nonlinear ones.

For linear systems described by $\dot{x} = Ax + Bu$, the exact discrete-time equivalent under ZOH is obtained by solving the differential equation over one sampling period. The state evolution becomes

$$x_{k+1} = e^{A \Delta t} x_k + \int_0^{\Delta t} e^{A(\Delta t - \tau)} B u(\tau)\, d\tau,$$

where $\Delta t = T$ is the sampling interval and the integral accounts for the input contribution. Since ZOH holds $u(\tau) = u_k$ constant, the input term simplifies to $\left( \int_0^{\Delta t} e^{A(\Delta t - \tau)} B\, d\tau \right) u_k$, yielding the discrete matrices $A_d = e^{A \Delta t}$ and $B_d = \int_0^{\Delta t} e^{A \sigma} B\, d\sigma$ (via the substitution $\sigma = \Delta t - \tau$). This formulation provides an exact sampled equivalent without approximation errors in the state updates at sampling instants, though inter-sample behavior is not captured.

The choice of sampling period $\Delta t$ critically influences the accuracy and stability of the discrete model. It is typically selected based on the system's bandwidth or rise time, with a common guideline of $\Delta t \approx 1/10$ of the rise time (from 10% to 90% of the steady-state response) to ensure sufficient resolution of the dynamics. Smaller $\Delta t$ enhances approximation accuracy by closely mimicking continuous behavior and preserving stability (as discrete poles $z_i = e^{s_i \Delta t}$ remain inside the unit circle if continuous poles $s_i$ have negative real parts), but increases computational demands. Conversely, larger $\Delta t$ may introduce aliasing, degrade accuracy, or induce instability if it causes discrete poles to exceed modulus 1, particularly in systems with fast modes.

Discretizing nonlinear systems presents greater challenges, as closed-form solutions like the matrix exponential do not exist, necessitating numerical integration over each $\Delta t$. Methods such as Runge-Kutta schemes approximate the state transition by evaluating the vector field at multiple points within the interval; for instance, a second-order Runge-Kutta discretization converts a continuous nonlinear model $\dot{x} = f(x, u)$ into a discrete form

$$x_{k+1} = x_k + \frac{\Delta t}{2} \left[ f(x_k, u_k) + f\big(x_k + \Delta t\, f(x_k, u_k),\, u_k\big) \right],$$

assuming ZOH on $u$. Higher-order variants, like fourth-order Runge-Kutta, offer improved accuracy for stiff or highly nonlinear dynamics but require careful tuning of $\Delta t$ to balance error accumulation and stability, often verified through Lyapunov analysis.
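A sketch of the second-order Runge-Kutta step above, applied to a pendulum model chosen purely for illustration (assuming NumPy; ZOH input u):

```python
# Second-order Runge-Kutta (Heun) discretization of a nonlinear model.
import numpy as np

def f(x, u):
    """Continuous dynamics: pendulum angle/rate with torque input u."""
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) + u])

def rk2_step(x, u, dt):
    """x_{k+1} = x_k + dt/2 * (f(x_k, u_k) + f(x_k + dt*f(x_k, u_k), u_k))."""
    k1 = f(x, u)
    k2 = f(x + dt * k1, u)
    return x + dt / 2.0 * (k1 + k2)

x = np.array([0.1, 0.0])
for _ in range(100):                 # simulate 1 s with dt = 0.01 and u = 0
    x = rk2_step(x, u=0.0, dt=0.01)
print(x)
```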

Linear State Space Models

Linear state space models provide a framework for representing dynamical systems in continuous time, where the state evolution and output are described by differential equations. The continuous-time linear time-invariant (LTI) model is given by

$$\dot{x}(t) = A x(t) + B u(t) + w(t),$$
$$y(t) = C x(t) + v(t),$$

where $x(t) \in \mathbb{R}^n$ is the state vector, $u(t) \in \mathbb{R}^m$ is the input, $y(t) \in \mathbb{R}^p$ is the output, $w(t)$ represents process noise, $v(t)$ is measurement noise, and $A$, $B$, $C$ are constant matrices of appropriate dimensions. The solution to the state equation, starting from an initial state $x(0)$, is

$$x(t) = e^{A t} x(0) + \int_0^t e^{A(t-\tau)} \left( B u(\tau) + w(\tau) \right) d\tau,$$

which captures the propagation of the state through the matrix exponential and the integrated effect of inputs and noise.

Discretization arises when implementing control on digital computers, converting the continuous model to a discrete-time equivalent sampled at intervals $\Delta t = T$. Assuming a zero-order hold (ZOH) on the input, where $u(t)$ remains constant between samples ($u(t) = u_k$ for $kT \leq t < (k+1)T$), the discrete state update becomes

$$x_{k+1} = \Phi x_k + \Gamma u_k + w_k,$$

with $\Phi = e^{A T}$ as the state transition matrix, $\Gamma = \int_0^T e^{A \tau} B\, d\tau$ as the input matrix, and $w_k$ the integrated process noise over the interval. The output at sample times is $y_k = C x_k + v_k$. This ZOH assumption yields an exact discrete equivalent for the deterministic part of the LTI system at the sampling instants, under piecewise constant inputs.

Computing $\Phi$ and $\Gamma$ involves evaluating the matrix exponential, which can be done via the power series expansion $e^{A T} = I + A T + \frac{(A T)^2}{2!} + \cdots$, suitable for small $T$ or when $A$ has favorable structure. For general cases, diagonalization (if $A$ is diagonalizable) yields $e^{A T} = V e^{D T} V^{-1}$, where $D$ is diagonal with the eigenvalues of $A$; alternatively, the Cayley-Hamilton theorem allows expressing $e^{A T}$ as a finite polynomial in $A$ using its characteristic equation. These methods ensure numerical stability for moderate-sized systems.

The discretization maps continuous-time stability to discrete-time stability: if all eigenvalues $\lambda_i$ of $A$ satisfy $\operatorname{Re}(\lambda_i) < 0$, then the eigenvalues $\mu_i = e^{\lambda_i T}$ of $\Phi$ satisfy $|\mu_i| < 1$, preserving asymptotic stability for sufficiently small $T > 0$. Pathological sampling periods where $e^{\lambda_i T} = 1$ for some unstable $\lambda_i$ must be avoided, but such cases are isolated.

Discretization also affects system properties like controllability and observability. The discrete pair $(\Phi, \Gamma)$ is controllable if the continuous pair $(A, B)$ is controllable and the sampling period avoids values where the rank of the discrete controllability matrix drops, specifically requiring

$$\operatorname{rank} \begin{bmatrix} \Gamma & \Phi \Gamma & \cdots & \Phi^{n-1} \Gamma \end{bmatrix} = n.$$

Similarly, observability of $(\Phi, C)$ holds if

$$\operatorname{rank} \begin{bmatrix} C \\ C \Phi \\ \vdots \\ C \Phi^{n-1} \end{bmatrix} = n.$$
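A sketch (illustrative matrices, assuming NumPy and SciPy) that checks the eigenvalue mapping $\mu_i = e^{\lambda_i T}$ and the rank of the discrete controllability matrix:

```python
# Verify that a stable continuous pair stays stable and controllable
# after ZOH discretization; A, B, T are assumptions for illustration.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # stable: eigenvalues -1, -2
B = np.array([[0.0], [1.0]])
T = 0.1
n = A.shape[0]

Phi = expm(A * T)                            # state transition matrix
mu = np.linalg.eigvals(Phi)
print(np.abs(mu))                            # all < 1: stability preserved

Gamma = np.linalg.inv(A) @ (Phi - np.eye(n)) @ B   # valid since A nonsingular
ctrb = np.hstack([np.linalg.matrix_power(Phi, k) @ Gamma for k in range(n)])
print(np.linalg.matrix_rank(ctrb) == n)      # discrete pair controllable
```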