from Wikipedia
Signum function

In mathematics, the sign function or signum function (from signum, Latin for "sign") is a function that has the value −1, +1 or 0 according to whether the sign of a given real number is positive or negative, or the given number is itself zero. In mathematical notation the sign function is often represented as sgn x or sgn(x).[1]

Definition

The signum function of a real number x is a piecewise function which is defined as follows:[1]

sgn x = −1 if x < 0, 0 if x = 0, +1 if x > 0.

The law of trichotomy states that every real number must be positive, negative or zero. The signum function denotes which unique category a number falls into by mapping it to one of the values −1, +1 or 0, which can then be used in mathematical expressions or further calculations.

For example: sgn(2) = +1, sgn(0) = 0, and sgn(−5) = −1.
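As a minimal sketch, the piecewise definition translates directly into code (the helper name `sgn` is our own choice; Python has no built-in sign function for floats):

```python
def sgn(x: float) -> int:
    """Return -1, 0, or +1 according to the trichotomy of x."""
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0

# The three trichotomy cases:
print(sgn(2), sgn(0), sgn(-5))  # 1 0 -1
```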

Basic properties

Any real number x can be expressed as the product of its absolute value and its sign:

x = |x| sgn x.

It follows that whenever x is not equal to 0 we have

sgn x = x / |x|.

Similarly, for any real number x, |x| = x sgn x. We can also be certain that sgn(xy) = (sgn x)(sgn y), and so

sgn(x^n) = (sgn x)^n.
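These basic identities can be spot-checked numerically; this sketch uses the common Python idiom `(x > 0) - (x < 0)` for the sign:

```python
def sgn(x):
    return (x > 0) - (x < 0)  # booleans subtract to -1, 0, or +1

for x in [-7.5, -1, 2, 42.0]:
    assert x == abs(x) * sgn(x)      # x = |x| sgn x
    assert sgn(x) == x / abs(x)      # valid whenever x != 0
    assert abs(x) == x * sgn(x)      # |x| = x sgn x
    assert sgn(x * x) == sgn(x) ** 2 # sgn(x^2) = (sgn x)^2

assert 0 == abs(0) * sgn(0)          # the decomposition also holds at x = 0
```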

Some algebraic identities

The signum can also be written using the Iverson bracket notation:

sgn x = −[x < 0] + [x > 0].

The signum can also be written using the floor and the absolute value functions:

sgn x = ⌊x / (|x| + 1)⌋ − ⌊−x / (|x| + 1)⌋.

If 0^0 is accepted to be equal to 1, the signum can also be written for all real numbers as

sgn x = 0^(|x| − x) − 0^(|x| + x).
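A small sketch, assuming the Iverson, floor, and 0^0 = 1 conventions stated above, confirming that all three alternative formulas agree with the piecewise definition:

```python
import math

def sgn_iverson(x):
    # sgn x = -[x < 0] + [x > 0], with Iverson brackets as Python booleans
    return -(x < 0) + (x > 0)

def sgn_floor(x):
    # sgn x = floor(x/(|x|+1)) - floor(-x/(|x|+1))
    return math.floor(x / (abs(x) + 1)) - math.floor(-x / (abs(x) + 1))

def sgn_pow(x):
    # sgn x = 0**(|x|-x) - 0**(|x|+x); Python evaluates 0**0 as 1
    return 0 ** (abs(x) - x) - 0 ** (abs(x) + x)

for x in [-3.5, -1, 0, 0.25, 9]:
    expected = (x > 0) - (x < 0)
    assert sgn_iverson(x) == sgn_floor(x) == sgn_pow(x) == expected
```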

Properties in mathematical analysis

Discontinuity at zero

The sign function is not continuous at x = 0.

Although the sign function takes the value −1 when x is negative, the ringed point (0, −1) in the plot of sgn x indicates that this is not the case when x = 0. Instead, the value jumps abruptly to the solid point at (0, 0) where sgn(0) = 0. There is then a similar jump to sgn(x) = +1 when x is positive. Either jump demonstrates visually that the sign function is discontinuous at zero, even though it is continuous at any point where x is either positive or negative.

These observations are confirmed by any of the various equivalent formal definitions of continuity in mathematical analysis. A function f, such as the sign function, is continuous at a point x = a if the value f(a) can be approximated arbitrarily closely by the sequence of values f(x_1), f(x_2), f(x_3), …, where the x_n make up any infinite sequence which becomes arbitrarily close to a as n becomes sufficiently large. In the notation of mathematical limits, continuity of f at a requires that f(x_n) → f(a) as n → ∞, for any sequence (x_n) for which x_n → a. The arrow symbol can be read to mean approaches, or tends to, and it applies to the sequence as a whole.

This criterion fails for the sign function at a = 0. For example, we can choose x_n to be the sequence x_n = −1/n, which tends towards zero as n increases towards infinity. In this case, x_n → a as required, but sgn(a) = sgn(0) = 0 and sgn(x_n) = −1 for each n, so that sgn(x_n) → −1 ≠ sgn(0). This counterexample confirms more formally the discontinuity of sgn x at zero that is visible in the plot.
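The counterexample sequence can be illustrated directly; here `sgn` is our own helper implementing the piecewise definition, applied to x_n = −1/n:

```python
def sgn(x):
    return (x > 0) - (x < 0)

# The sequence x_n = -1/n tends to 0, but sgn(x_n) stays at -1 throughout.
seq = [-1 / n for n in range(1, 8)]
print([sgn(x) for x in seq])   # [-1, -1, -1, -1, -1, -1, -1]
print(sgn(0))                  # 0, not -1: continuity fails at 0
```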

Despite the sign function having a very simple form, the step change at zero causes difficulties for traditional calculus techniques, which are quite stringent in their requirements. Continuity is a frequent constraint. One solution can be to approximate the sign function by a smooth continuous function; others might involve less stringent approaches that build on classical methods to accommodate larger classes of function.

Smooth approximations and limits

The signum function can be given as a number of different (pointwise) limits:

sgn x = lim_{n→∞} tanh(nx) = lim_{n→∞} (2/π) arctan(nx) = lim_{ε→0} x / √(x² + ε²).

Here, tanh is the hyperbolic tangent, and arctan is the inverse tangent. The last of these is the derivative of √(x² + ε²). This is inspired by the fact that the above is exactly equal to sgn x for all nonzero x if ε = 0, and has the advantage of simple generalization to higher-dimensional analogues of the sign function (for example, the partial derivatives of √(x² + y²)).

See Heaviside step function § Analytic approximations.
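As a rough numerical illustration (not a formal proof of the limits), each approximating family approaches sgn x for a fixed nonzero x as n grows and ε shrinks:

```python
import math

def sgn(x):
    return (x > 0) - (x < 0)

def tanh_approx(x, n):
    return math.tanh(n * x)

def arctan_approx(x, n):
    return (2 / math.pi) * math.atan(n * x)

def sqrt_approx(x, eps):
    return x / math.sqrt(x * x + eps * eps)

x = -0.3
for n in (1, 10, 1000):
    # All three tend toward sgn(-0.3) = -1 as n increases.
    print(n, tanh_approx(x, n), arctan_approx(x, n), sqrt_approx(x, 1 / n))

assert abs(tanh_approx(x, 1000) - sgn(x)) < 1e-9
```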

Differentiation

The signum function is differentiable everywhere except when x = 0. Its derivative is zero when x is non-zero:

d(sgn x)/dx = 0 for x ≠ 0.

This follows from the differentiability of any constant function, for which the derivative is always zero on its domain of definition. The signum acts as a constant function when it is restricted to the negative open region where it equals −1. It can similarly be regarded as a constant function within the positive open region where the corresponding constant is +1. Although these are two different constant functions, their derivative is equal to zero in each case.

It is not possible to define a classical derivative at x = 0, because there is a discontinuity there.

Although it is not differentiable at x = 0 in the ordinary sense, under the generalized notion of differentiation in distribution theory, the derivative of the signum function is two times the Dirac delta function. This can be demonstrated using the identity sgn x = 2H(x) − 1,[2] where H is the Heaviside step function using the standard H(0) = 1/2 formalism. Using this identity, it is easy to derive the distributional derivative:[3]

d(sgn x)/dx = 2 dH(x)/dx = 2δ(x).

Integration

The signum function has a definite integral between any pair of finite values a and b, even when the interval of integration includes zero. The resulting integral for a and b is then equal to the difference between their absolute values:

∫_a^b sgn(x) dx = |b| − |a|.

In fact, the signum function is the derivative of the absolute value function, except where there is an abrupt change in gradient at zero:

d|x|/dx = sgn x for x ≠ 0.

We can understand this as before by considering the definition of the absolute value on the separate regions x < 0 and x > 0. For example, the absolute value function is identical to x in the region x > 0, whose derivative is the constant value +1, which equals the value of sgn x there.
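A numerical sketch of the integral identity ∫_a^b sgn(x) dx = |b| − |a|, using a simple midpoint rule (the step count is an arbitrary choice; the only approximation error comes from the jump at zero):

```python
def sgn(x):
    return (x > 0) - (x < 0)

def integral_sgn(a, b, steps=20_000):
    # Midpoint-rule approximation of the definite integral of sgn over [a, b].
    h = (b - a) / steps
    return sum(sgn(a + (i + 0.5) * h) for i in range(steps)) * h

# The integral equals |b| - |a|, even when the interval crosses zero:
for a, b in [(-2.0, 3.0), (-5.0, -1.0), (1.0, 4.0)]:
    assert abs(integral_sgn(a, b) - (abs(b) - abs(a))) < 1e-3
```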

Because the absolute value is a convex function, there is at least one subderivative at every point, including at the origin. Everywhere except zero, the resulting subdifferential consists of a single value, equal to the value of the sign function. In contrast, there are many subderivatives at zero, with just one of them taking the value sgn(0) = 0. A subderivative value 0 occurs here because the absolute value function is at a minimum. The full family of valid subderivatives at zero constitutes the subdifferential interval [−1, 1], which might be thought of informally as "filling in" the graph of the sign function with a vertical line through the origin, making it continuous as a two dimensional curve.

In integration theory, the signum function is a weak derivative of the absolute value function. Weak derivatives are equivalent if they are equal almost everywhere, making them impervious to isolated anomalies at a single point. This includes the change in gradient of the absolute value function at zero, which prohibits there being a classical derivative.

Fourier transform

The Fourier transform of the signum function is[4]

∫_{−∞}^{∞} sgn(x) e^{−ikx} dx = p.v. 2/(ik),

where p.v. means taking the Cauchy principal value.

Generalizations

Complex signum

The signum function can be generalized to complex numbers as

sgn z = z / |z|

for any complex number z except z = 0. The signum of a given complex number z is the point on the unit circle of the complex plane that is nearest to z. Then, for z ≠ 0,

sgn z = e^{i arg z},

where arg is the complex argument function.

For reasons of symmetry, and to keep this a proper generalization of the signum function on the reals, also in the complex domain one usually defines, for z = 0: sgn(0) = 0.

Another generalization of the sign function for real and complex expressions is csgn(z),[5] which is defined as: csgn(z) = 1 if Re(z) > 0, csgn(z) = −1 if Re(z) < 0, and csgn(z) = sgn(Im(z)) if Re(z) = 0, where Re(z) is the real part of z and Im(z) is the imaginary part of z.

We then have (for z ≠ 0): csgn(z) = z / √(z²) = √(z²) / z.
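A sketch of the two complex generalizations; `csign` and `csgn` are our own helper names implementing the definitions above:

```python
import cmath

def csign(z: complex) -> complex:
    """Complex signum: the nearest point on the unit circle, with sgn 0 = 0."""
    return 0 if z == 0 else z / abs(z)

def csgn(z: complex) -> int:
    """csgn: sign of the real part, falling back to the imaginary part on the axis."""
    if z.real != 0:
        return 1 if z.real > 0 else -1
    if z.imag != 0:
        return 1 if z.imag > 0 else -1
    return 0

z = 3 + 4j
assert abs(abs(csign(z)) - 1) < 1e-12                           # lies on the unit circle
assert abs(csign(z) - cmath.exp(1j * cmath.phase(z))) < 1e-12   # equals e^{i arg z}
assert csgn(z) == 1 and csgn(-2 + 5j) == -1 and csgn(1j) == 1
```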

Polar decomposition of matrices

Thanks to the polar decomposition theorem, a matrix A ∈ K^(n×n) (K = R or K = C) can be decomposed as a product QP, where Q is a unitary matrix and P is a self-adjoint, or Hermitian, positive definite matrix, both in K^(n×n). If A is invertible then such a decomposition is unique and Q plays the role of A's signum. A dual construction is given by the decomposition A = P′Q′, where Q′ is unitary but generally different than Q. This leads to each invertible matrix having a unique left-signum Q and right-signum Q′.

In the special case where K = R, n = 2, and the (invertible) matrix A = [[a, −b], [b, a]], which identifies with the (nonzero) complex number a + ib = z, the signum matrices satisfy Q = Q′ = A/|z| and identify with the complex signum of z, sgn z = z/|z|. In this sense, polar decomposition generalizes to matrices the signum-modulus decomposition of complex numbers.
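The 2×2 special case can be verified with plain arithmetic; the entries a = 3, b = 4 are an arbitrary example identifying with z = 3 + 4i:

```python
import math

# The 2x2 real matrix [[a, -b], [b, a]] represents the complex number z = a + ib.
# Its polar decomposition A = Q P has Q = A/|z| (a rotation, the "signum")
# and P = |z| I (the "modulus"), mirroring z = sgn(z) |z|.
a, b = 3.0, 4.0
r = math.hypot(a, b)                      # |z| = 5
Q = [[a / r, -b / r], [b / r, a / r]]     # unitary (rotation) factor
P = [[r, 0.0], [0.0, r]]                  # positive definite factor

# Check A = Q P entrywise:
A = [[a, -b], [b, a]]
QP = [[sum(Q[i][k] * P[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert all(abs(A[i][j] - QP[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Q is orthogonal: Q^T Q = I
QtQ = [[sum(Q[k][i] * Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert abs(QtQ[0][0] - 1) < 1e-12 and abs(QtQ[0][1]) < 1e-12
```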

Signum as a generalized function

At real values of x, it is possible to define a generalized function–version of the signum function, ε(x), such that ε(x)² = 1 everywhere, including at the point x = 0, unlike sgn, for which (sgn 0)² = 0. This generalized signum allows construction of the algebra of generalized functions, but the price of such generalization is the loss of commutativity. In particular, the generalized signum anticommutes with the Dirac delta function:[6]

ε(x)δ(x) + δ(x)ε(x) = 0;

in addition, ε(x) cannot be evaluated at x = 0; and the special name, ε, is necessary to distinguish it from the function sgn. (ε(0) is not defined, but sgn 0 = 0.)

See also

Notes

from Grokipedia
The sign function, also known as the signum function and denoted sgn(x), is a fundamental piecewise-defined function in mathematics that determines the sign of a real number x, returning 1 if x > 0, −1 if x < 0, and 0 if x = 0. This function can also be expressed as sgn(x) = x/|x| for x ≠ 0, with the value at zero specified separately to ensure completeness. It serves as a basic building block in real analysis, providing a simple way to categorize numbers based on their polarity relative to zero.

Key properties of the signum function include its odd symmetry, where sgn(−x) = −sgn(x) for all real x, reflecting its antisymmetric behavior around the origin. The function is discontinuous at x = 0, as the left-hand limit lim_{x→0⁻} sgn(x) = −1 and the right-hand limit lim_{x→0⁺} sgn(x) = 1 differ, while sgn(0) = 0. Additionally, it relates directly to other elementary functions, such as the absolute value, via |x| = x · sgn(x), and the Heaviside step function, often defined as H(x) = (1 + sgn(x))/2 for x ≠ 0 (yielding 0 for x < 0 and 1 for x > 0), with H(0) = 1/2 or 1 depending on convention. These relations highlight its role in constructing more complex expressions in analysis and beyond.

The signum function finds extensive applications across mathematics and engineering, including in the definition and integration of piecewise continuous functions, where it helps handle discontinuities in integrands. In signal processing and communications, it is used to extract the polarity of signals, facilitating tasks like rectification and phase detection. It also appears in control systems for modeling switching behaviors, and in Fourier analysis, where the Hilbert transform acts in the frequency domain as multiplication by −i sgn(ω). Furthermore, approximations of sgn(x) by polynomials or entire functions are studied for numerical methods and optimization problems.

Definition

Real-Valued Case

The sign function for real numbers, commonly denoted sgn(x), is defined piecewise as follows: sgn(x) = 1 if x > 0, sgn(x) = −1 if x < 0, and sgn(x) = 0 if x = 0. This definition captures the essential sign of x, assigning discrete values that reflect whether the number is positive, negative, or zero.

An equivalent formulation expresses the sign function in terms of the absolute value: sgn(x) = x/|x| for x ≠ 0, with sgn(0) = 0. This expression highlights its dependence on the magnitude-normalized direction of x, excluding the origin where it is conventionally zero to maintain consistency.

The term "signum" is derived from the Latin word for "sign." The modern notation and use of the sign function were introduced by the German mathematician Leopold Kronecker in the late 19th century.

In one-dimensional mathematical contexts, the sign function acts as an indicator of direction or orientation, distinguishing the positive and negative halves of the real line while neutralizing zero. This property makes it fundamental for analyzing symmetry, polarity, and stepwise behaviors in real analysis.

Notation and Conventions

The sign function is most commonly denoted using the operator name sgn(x) or the spelled-out form sign(x) in mathematical texts, where x is a real number. This notation emphasizes the function's role in extracting the sign of its argument, with sgn being the more compact and widely adopted variant in analysis and related fields.

Alternative notations and expressions arise in specific contexts, such as the relation to the Heaviside step function H(x), where sgn(x) = 2H(x) − 1 for x ≠ 0, providing a piecewise construction that aligns the sign function with indicator-like behaviors in integration and distribution theory.

Conventions regarding the value at zero vary across fields and historical periods; in contemporary real analysis, sgn(0) = 0 is the standard definition to ensure consistency with properties like multiplicativity and to facilitate applications in limits and derivatives. However, some older mathematical treatments leave sgn(0) undefined, focusing solely on the behavior for nonzero x to highlight the function's discontinuity at the origin. This distinction influences usage in rigorous proofs versus computational implementations.

In typesetting, the notation is rendered using mathematical operator commands, such as \operatorname{sgn} in LaTeX, to distinguish it as a function symbol rather than plain text; no dedicated Unicode character exists for "sgn" itself, relying instead on standard Latin letters within mathematical delimiters for clarity in digital and print media.

Fundamental Properties

Algebraic Characteristics

The sign function demonstrates multiplicativity as a key algebraic property. For all real numbers x and y, it satisfies the relation sgn(xy) = sgn(x) sgn(y). This holds because multiplying two positive numbers yields a positive product, two negatives yield a positive product, and one positive and one negative yield a negative product, mirroring the multiplication of their respective signs ±1.

In contrast, the sign function lacks additivity. Generally, sgn(x + y) ≠ sgn(x) + sgn(y). For instance, taking x = 1 and y = −2 gives sgn(1 + (−2)) = sgn(−1) = −1, whereas sgn(1) + sgn(−2) = 1 + (−1) = 0. Such discrepancies occur because addition can cause sign changes or cancellations that the simple sum of signs fails to capture, highlighting the function's nonlinear nature under addition.

The sign function has the property that [sgn(x)]² = 1 for all x ≠ 0, and [sgn(0)]² = 0. This arises directly from the range of the function, where non-zero inputs map to ±1, and the square of either is 1, while zero maps to itself under squaring.

Furthermore, the sign function inherently preserves sign information for expressions. The value of sgn(f(x)) directly indicates the sign of f(x): positive if f(x) > 0, negative if f(x) < 0, and zero if f(x) = 0.

Relation to Absolute Value

The sign function, denoted sgn(x), provides a fundamental decomposition of any real number x into its sign and magnitude components. Specifically, for all real x, the identity x = sgn(x) · |x| holds, where |x| denotes the absolute value of x. This decomposition separates the directional aspect (captured by sgn(x), which is −1, 0, or 1) from the non-negative magnitude |x|.

Conversely, the absolute value can be reconstructed from x and its sign for x ≠ 0 via |x| = x / sgn(x). This relation follows directly from the definition of sgn(x) = x/|x| for x ≠ 0, ensuring the operations are inverses in this context.

To verify the equivalence, consider that sgn(x) · |x| yields x for x > 0 (since sgn(x) = 1), −|x| = x for x < 0 (since sgn(x) = −1), and 0 for x = 0. This uniquely recovers x because |x| preserves the magnitude while sgn(x) restores the original sign, with no other pair of sign and magnitude values producing the same result.

This decomposition extends naturally to vectors in R^n, where the sign function applies component-wise: for x = (x₁, …, xₙ), sgn(x) = (sgn(x₁), …, sgn(xₙ)) and |x| = (|x₁|, …, |xₙ|), satisfying x = sgn(x) ⊙ |x| with ⊙ denoting element-wise multiplication. In this setting, the relation connects to vector norms, such as the ℓ₁-norm ‖x‖₁ = Σᵢ |xᵢ|, which aggregates the magnitudes while the sign vector encodes directional information across components. The uniqueness follows analogously, as component-wise recovery ensures no ambiguity in reconstructing x.
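A small sketch of the component-wise decomposition for a vector, with `sgn` as our own helper:

```python
def sgn(x):
    return (x > 0) - (x < 0)

# Component-wise decomposition x = sgn(x) ⊙ |x| for a vector in R^n.
x = [-3.0, 0.0, 2.5, -1.0]
sign_vec = [sgn(v) for v in x]      # directional information
mag_vec = [abs(v) for v in x]       # magnitudes
recovered = [s * m for s, m in zip(sign_vec, mag_vec)]
assert recovered == x

# The l1 norm aggregates the magnitudes:
assert sum(mag_vec) == 6.5
```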

Algebraic Identities

Identities Involving Products

One fundamental identity involving the product of sign functions is the multiplicative property: for all real numbers x and y, sgn(xy) = sgn(x) sgn(y). This holds even when one or both arguments are zero, as sgn(0) = 0 ensures the product is zero in those cases.

To prove this identity, consider cases based on the signs of x and y. If x > 0 and y > 0, then xy > 0, so sgn(xy) = 1 = (1)(1) = sgn(x) sgn(y). If x > 0 and y < 0, then xy < 0, so sgn(xy) = −1 = (1)(−1) = sgn(x) sgn(y). If x > 0 and y = 0, then xy = 0, so sgn(xy) = 0 = (1)(0) = sgn(x) sgn(y). The cases x < 0 and y > 0; x < 0 and y < 0; x < 0 and y = 0; x = 0 and y > 0; x = 0 and y < 0; and x = 0 and y = 0 follow similarly, yielding sgn(xy) = −1, 1, 0, 0, 0, and 0, respectively, matching sgn(x) sgn(y) in each instance.

A related identity concerns quotients: for real numbers x and y with y ≠ 0, sgn(x/y) = sgn(x)/sgn(y) = sgn(x) sgn(y). The second equality follows because sgn(y)² = 1 for y ≠ 0, so 1/sgn(y) = sgn(y). This can be derived from the product identity by noting that x/y = x · (1/y) and sgn(1/y) = sgn(y) for y ≠ 0, since the reciprocal preserves the sign of a nonzero real number; thus, sgn(x/y) = sgn(x) sgn(1/y) = sgn(x) sgn(y).

For compositions of functions, under the assumption that the outer function f is differentiable and strictly monotonic with f(0) = 0 (so f′ has constant sign and f preserves the sign of its argument up to orientation), the sign of the composition satisfies sgn(f(g(x))) = sgn(f′(g(x))) sgn(g(x)) for points where g(x) ≠ 0 and the expressions are defined. Here, sgn(f′(g(x))) is constant (+1 if f is strictly increasing, −1 if strictly decreasing), reflecting whether the composition preserves or reverses the sign of g(x). This algebraic relation leverages the product identity applied to the local behavior determined by the derivative's sign.

These product and quotient identities extend naturally to determining the sign of rational functions. For a rational function r(x) = p(x)/q(x) where p and q are polynomials with q(x) ≠ 0, the sign is given by sgn(r(x)) = sgn(p(x)) sgn(q(x)), since sgn(1/q(x)) = sgn(q(x)). Each polynomial can be factored into linear terms, and repeated application of the product rule yields the overall sign as the product of the signs of the factors, adjusted for multiplicities (noting that even powers contribute positively via sgn(z^(2k)) = 1 for z ≠ 0, though power-specific details are addressed elsewhere). This approach is essential for sign charts in algebraic analysis, enabling determination of intervals where r(x) is positive or negative without evaluating the full expression.
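A sketch of a sign chart built from the product identity; the rational function r(x) = (x − 1)(x + 2)/(x − 3) is our own example:

```python
def sgn(x):
    return (x > 0) - (x < 0)

def r(x):
    # Our example rational function, defined for x != 3.
    return (x - 1) * (x + 2) / (x - 3)

def sign_r(x):
    # Sign of r(x) from the signs of its factors, without dividing:
    # sgn(p/q) = sgn(p) sgn(q), since sgn(1/q) = sgn(q).
    return sgn(x - 1) * sgn(x + 2) * sgn(x - 3)

# The factor-based sign agrees with evaluating r directly:
for x in [-5, -2, 0, 1, 2, 4]:
    assert sign_r(x) == sgn(r(x))
```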

Identities with Powers and Exponents

The sign function interacts with powers in a manner determined by the parity of the exponent. For a positive even integer exponent 2k where k is a positive integer, sgn(x^(2k)) = 1 for all real x ≠ 0. This holds because x^(2k) = (x^k)² > 0 when x ≠ 0, and sgn(y) = 1 for any positive real y. At x = 0, sgn(0^(2k)) = sgn(0) = 0.

In contrast, for a positive odd integer exponent 2k+1, sgn(x^(2k+1)) = sgn(x) for all real x ≠ 0. This identity follows from expressing the power as a product: x^(2k+1) = x^(2k) · x, so sgn(x^(2k+1)) = sgn(x^(2k)) · sgn(x) = 1 · sgn(x) = sgn(x), leveraging the multiplicative property of the sign function. At x = 0, sgn(0^(2k+1)) = 0 = sgn(0).

A general form for positive integer exponents n is sgn(x^n) = [sgn(x)]^n for x ≠ 0, derived from sgn(x^n) = x^n/|x^n| = x^n/|x|^n = (x/|x|)^n = [sgn(x)]^n. For even n, [sgn(x)]^n = (±1)^n = 1; for odd n, it equals sgn(x). This aligns with the even and odd cases above.

Regarding roots, the sign of the principal real nth root x^(1/n)