Function of a real variable
from Wikipedia

In mathematical analysis, and applications in geometry, applied mathematics, engineering, and natural sciences, a function of a real variable is a function whose domain is the real numbers ℝ, or a subset of ℝ that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval. The most widely considered such functions are the real functions, which are the real-valued functions of a real variable, that is, the functions of a real variable whose codomain is the set of real numbers.

Nevertheless, the codomain of a function of a real variable may be any set. However, it is often assumed to have a structure of ℝ-vector space over the reals. That is, the codomain may be a Euclidean space, a coordinate vector, the set of matrices of real numbers of a given size, or an ℝ-algebra, such as the complex numbers or the quaternions. The ℝ-vector space structure of the codomain induces a structure of ℝ-vector space on the functions. If the codomain has a structure of ℝ-algebra, the same is true for the functions.

The image of a function of a real variable is a curve in the codomain. In this context, a function that defines a curve is called a parametric equation of the curve.

When the codomain of a function of a real variable is a finite-dimensional vector space, the function may be viewed as a sequence of real functions. This is often used in applications.

Real function

The graph of a real function

A real function is a function from a subset of ℝ to ℝ, where ℝ denotes as usual the set of real numbers. That is, the domain of a real function is a subset of ℝ, and its codomain is ℝ. It is generally assumed that the domain contains an interval of positive length.

Basic examples


For many commonly used real functions, the domain is the whole set of real numbers, and the function is continuous and differentiable at every point of the domain. One says that these functions are defined, continuous and differentiable everywhere. This is the case of:

  • Polynomial functions, including constant functions and linear functions
  • Sine and cosine functions
  • The exponential function

Some functions are defined everywhere, but are not continuous at some points. For example, the Heaviside step function is defined everywhere, but is not continuous at zero.

Some functions are defined and continuous everywhere, but not everywhere differentiable. For example:

  • The absolute value is defined and continuous everywhere, and is differentiable everywhere except at zero.
  • The cubic root is defined and continuous everywhere, and is differentiable everywhere except at zero.

Many common functions are not defined everywhere, but are continuous and differentiable everywhere where they are defined. For example:

  • A rational function is a quotient of two polynomial functions, and is not defined at the zeros of the denominator.
  • The tangent function is not defined for x = π/2 + kπ, where k is any integer.
  • The logarithm function is defined only for positive values of the variable.

Some functions are continuous in their whole domain, and not differentiable at some points. This is the case of:

  • The square root is defined only for nonnegative values of the variable, and is not differentiable at 0 (it is differentiable for all positive values of the variable).

General definition


A real-valued function of a real variable is a function that takes as input a real number, commonly represented by the variable x, and produces another real number, the value of the function, commonly denoted f(x). For simplicity, in this article a real-valued function of a real variable will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified.

Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the value of the variable is taken in a subset X of ℝ, the domain of the function, which is always supposed to contain an interval of positive length. In other words, a real-valued function of a real variable is a function

f : X → ℝ

such that its domain X is a subset of ℝ that contains an interval of positive length.

A simple example of a function in one variable could be:

f : X → ℝ,  X = {x ∈ ℝ : x ≥ 0},  f(x) = √x,

which is the square root of x.

Image


The image of a function is the set of all values of f when the variable x runs in the whole domain of f. For a continuous (see below for a definition) real-valued function with a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function.

The preimage of a given real number y is the set of the solutions of the equation y = f(x).

Domain


The domain of a function of a real variable is a subset of ℝ that is sometimes, but not always, explicitly defined. In fact, if one restricts the domain X of a function f to a subset Y ⊆ X, one gets formally a different function, the restriction of f to Y, which is denoted f|Y. In practice, it is often not harmful to identify f and f|Y, and to omit the subscript |Y.

Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation. This means that it is often not worthwhile to explicitly define the domain of a function of a real variable.

Algebraic structure


The arithmetic operations may be applied to the functions in the following way:

  • For every real number r, the constant function x ↦ r is everywhere defined.
  • For every real number r and every function f, the function rf : x ↦ r f(x) has the same domain as f (or is everywhere defined if r = 0).
  • If f and g are two functions of respective domains X and Y such that X ∩ Y contains an open subset of ℝ, then f + g and f g are functions that have a domain containing X ∩ Y.

It follows that the functions that are everywhere defined and the functions that are defined in some neighbourhood of a given point both form commutative algebras over the reals (ℝ-algebras).

One may similarly define 1/f, which is a function only if the set of points x in the domain of f such that f(x) ≠ 0 contains an open subset of ℝ. This constraint implies that the above two algebras are not fields.
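
As a concrete illustration of these pointwise operations, here is a minimal Python sketch (the class name PartialFunction and its interface are illustrative, not from any standard library) that pairs a formula with a domain predicate and combines two functions on the intersection of their domains:

```python
import math

class PartialFunction:
    """A real function given by a formula and a domain predicate (illustrative sketch)."""
    def __init__(self, formula, in_domain=lambda x: True):
        self.formula = formula        # x -> f(x), assumed valid when in_domain(x) is True
        self.in_domain = in_domain    # x -> bool

    def __call__(self, x):
        if not self.in_domain(x):
            raise ValueError(f"{x} is outside the domain")
        return self.formula(x)

    def __add__(self, other):
        # (f + g)(x) is defined on the intersection of the two domains
        return PartialFunction(lambda x: self(x) + other(x),
                               lambda x: self.in_domain(x) and other.in_domain(x))

    def __mul__(self, other):
        # (f * g)(x), same domain rule as addition
        return PartialFunction(lambda x: self(x) * other(x),
                               lambda x: self.in_domain(x) and other.in_domain(x))

log = PartialFunction(math.log, lambda x: x > 0)      # defined only for x > 0
sqrt = PartialFunction(math.sqrt, lambda x: x >= 0)   # defined only for x >= 0
h = log + sqrt                                        # domain contains (0, +inf)
print(h(4.0))                                         # log(4) + 2.0
```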

Continuity and limit

Limit of a real function of a real variable.

Until the second part of the 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for the functions of one or several real variables, a rather long time before the formal definition of a topological space and a continuous map between topological spaces. As continuous functions of a real variable are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces.

For defining the continuity, it is useful to consider the distance function of ℝ, which is an everywhere defined function of two real variables:

d(x, y) = |x − y|

A function f is continuous at a point a which is interior to its domain if, for every positive real number ε, there is a positive real number δ such that |f(x) − f(a)| < ε for all x such that |x − a| < δ. In other words, δ may be chosen small enough for having the image by f of the interval of radius δ centered at a contained in the interval of length 2ε centered at f(a). A function is continuous if it is continuous at every point of its domain.

The limit of a real-valued function of a real variable is as follows.[1] Let a be a point in the topological closure of the domain X of the function f. The function f has a limit L when x tends toward a, denoted

L = lim x→a f(x),

if the following condition is satisfied: for every positive real number ε > 0, there is a positive real number δ > 0 such that

|f(x) − L| < ε

for all x in the domain such that

|x − a| < δ.

If the limit exists, it is unique. If a is in the interior of the domain, the limit exists if and only if the function is continuous at a. In this case, we have

f(a) = lim x→a f(x).
When a is in the boundary of the domain of f, and if f has a limit at a, the latter formula allows one to "extend by continuity" the domain of f to a.
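
The ε–δ condition can be probed numerically. The following Python sketch (a sanity check on a finite grid, not a proof) tests a candidate δ for the continuity of f(x) = x² at a = 1, using the estimate |x² − 1| ≤ 3|x − 1| when |x − 1| < 1:

```python
import math

def continuous_at(f, a, eps, delta):
    """Numerically probe the epsilon-delta condition at a on a grid (not a proof)."""
    xs = [a - delta + 2 * delta * k / 1000 for k in range(1001)]
    return all(abs(f(x) - f(a)) < eps for x in xs if abs(x - a) < delta)

f = lambda x: x * x
# For f(x) = x^2 at a = 1, delta = eps / 3 works for small eps,
# since |x^2 - 1| = |x - 1| * |x + 1| <= 3 * |x - 1| when |x - 1| < 1.
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, continuous_at(f, 1.0, eps, delta=eps / 3))
```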

Calculus


One can collect a number of functions each of a real variable, say

y1 = f1(x), y2 = f2(x), ..., yn = fn(x),

into a vector parametrized by x:

y(x) = (f1(x), f2(x), ..., fn(x)).

The derivative of the vector y is the vector of the derivatives of fi(x) for i = 1, 2, ..., n:

dy(x)/dx = (df1(x)/dx, df2(x)/dx, ..., dfn(x)/dx).

One can also perform line integrals along a space curve parametrized by x, with position vector r = r(x), by integrating with respect to the variable x:

∫_a^b y(x) · dr = ∫_a^b y(x) · (dr(x)/dx) dx,

where · is the dot product, and x = a and x = b are the start and endpoints of the curve.
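
The line-integral formula above can be approximated numerically. Here is a minimal NumPy sketch (function names are illustrative) that integrates a vector field along a curve by sampling r(x) and forming the dot product with dr/dx:

```python
import numpy as np

def line_integral(field, r, a, b, n=50_000):
    """Approximate the line integral of a vector field along r(x) for x in [a, b].

    field : maps a point in R^3 to a vector in R^3
    r     : parametrization x -> position vector r(x)
    Uses the identity  integral of field . dr = integral of field(r(x)) . (dr/dx) dx,
    with dr/dx approximated by finite differences (illustrative sketch).
    """
    xs = np.linspace(a, b, n)
    pts = np.array([r(x) for x in xs])                          # sampled positions
    drdx = np.gradient(pts, xs, axis=0)                         # numerical derivative of r
    values = np.array([field(p) for p in pts])
    integrand = np.einsum("ij,ij->i", values, drdx)             # pointwise dot products
    return np.trapz(integrand, xs)

# Example: field F(r) = r along one turn of the helix r(x) = (cos x, sin x, x).
helix = lambda x: np.array([np.cos(x), np.sin(x), x])
print(line_integral(lambda p: p, helix, 0.0, 2 * np.pi))        # exact value is 2*pi^2 ~ 19.74
```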

Theorems


With the definitions of integration and derivatives, key theorems can be formulated, including the fundamental theorem of calculus, integration by parts, and Taylor's theorem. Evaluating a mixture of integrals and derivatives can be done by using the theorem on differentiation under the integral sign (the Leibniz integral rule).

Implicit functions


A real-valued implicit function of a real variable is not written in the form "y = f(x)". Instead, the mapping is from the space ℝ² to the zero element in ℝ (just the ordinary zero 0):

φ : ℝ² → {0}

and

φ(x, y) = 0

is an equation in the variables. Implicit functions are a more general way to represent functions, since if:

y = f(x)

then we can always define:

φ(x, y) = y − f(x) = 0,

but the converse is not always possible, i.e. not all implicit functions have the form of this equation.

One-dimensional space curves in ℝⁿ

Space curve in 3d. The position vector r is parametrized by a scalar t. At r = a the red line is the tangent to the curve, and the blue plane is normal to the curve.

Formulation


Given the functions r1 = r1(t), r2 = r2(t), ..., rn = rn(t) all of a common variable t, so that:

r1 : ℝ → ℝ,  r2 : ℝ → ℝ,  ...,  rn : ℝ → ℝ,

or taken together:

r : ℝ → ℝⁿ,  r = r(t),

then the parametrized n-tuple,

r(t) = (r1(t), r2(t), ..., rn(t)),

describes a one-dimensional space curve.

Tangent line to curve


At a point r(t = c) = a = (a1, a2, ..., an) for some constant t = c, the equations of the one-dimensional tangent line to the curve at that point are given in terms of the ordinary derivatives of r1(t), r2(t), ..., rn(t), and r with respect to t:

(r1 − a1)/(dr1/dt) = (r2 − a2)/(dr2/dt) = ... = (rn − an)/(drn/dt),

where the derivatives are evaluated at t = c.

Normal plane to curve


The equation of the n-dimensional hyperplane normal to the tangent line at r = a is:

(p1 − a1) dr1/dt + (p2 − a2) dr2/dt + ... + (pn − an) drn/dt = 0,

or in terms of the dot product:

(p − a) · (dr/dt) = 0,

where p = (p1, p2, ..., pn) are points in the plane, not on the space curve.

Relation to kinematics

Kinematic quantities of a classical particle: mass m, position r, velocity v, acceleration a.

The physical and geometric interpretation of dr(t)/dt is the "velocity" of a point-like particle moving along the path r(t), treating r as the spatial position vector coordinates parametrized by time t, and is a vector tangent to the space curve for all t in the instantaneous direction of motion. At t = c, the space curve has a tangent vector dr(t)/dt|t = c, and the hyperplane normal to the space curve at t = c is also normal to the tangent at t = c. Any vector in this plane (p − a) must be normal to dr(t)/dt|t = c.

Similarly, d2r(t)/dt2 is the "acceleration" of the particle; for motion at constant speed it is a vector normal to the curve, directed along the radius of curvature.

Matrix valued functions


A matrix can also be a function of a single variable. For example, the rotation matrix in 2d:

R(θ) = [ cos θ   −sin θ ]
       [ sin θ    cos θ ]

is a matrix valued function of the rotation angle θ about the origin. Similarly, in special relativity, the Lorentz transformation matrix for a pure boost (without rotations):

Λ(β) = [  γ    −γβ   0   0 ]
       [ −γβ    γ    0   0 ]
       [  0     0    1   0 ]
       [  0     0    0   1 ]

with γ = 1/√(1 − β²), is a function of the boost parameter β = v/c, in which v is the relative velocity between the frames of reference (a continuous variable), and c is the speed of light, a constant.
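
Both matrices can be realized directly as matrix-valued functions of a single real parameter. A brief NumPy sketch (the helper names are illustrative) that also checks the defining properties, orthogonality and determinant 1 for the rotation and determinant 1 for the boost:

```python
import numpy as np

def rotation(theta):
    """2D rotation matrix as a matrix-valued function of the angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def boost(beta):
    """Lorentz pure-boost matrix along x as a function of beta = v/c (illustrative)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

R = rotation(0.3)
print(np.allclose(R.T @ R, np.eye(2)), np.isclose(np.linalg.det(R), 1.0))  # orthogonal, det 1
print(np.isclose(np.linalg.det(boost(0.5)), 1.0))                          # boosts also have det 1
```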

Banach and Hilbert spaces and quantum mechanics


Generalizing the previous section, the output of a function of a real variable can also lie in a Banach space or a Hilbert space. In these spaces, addition, scalar multiplication and limits are all defined, so notions such as derivative and integral still apply. This occurs especially often in quantum mechanics, where one takes the derivative of a ket or an operator. This occurs, for instance, in the general time-dependent Schrödinger equation:

iħ ∂Ψ/∂t = ĤΨ,

where one takes the derivative of a wave function, which can be an element of several different Hilbert spaces.

Complex-valued function of a real variable


A complex-valued function of a real variable may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values.

If f(x) is such a complex valued function, it may be decomposed as

f(x) = g(x) + ih(x),

where g and h are real-valued functions. In other words, the study of the complex valued functions reduces easily to the study of the pairs of real valued functions.

Cardinality of sets of functions of a real variable


The cardinality of the set of real-valued functions of a real variable, ℝ^ℝ = {f : ℝ → ℝ}, is ℶ₂ = 2^𝔠, which is strictly larger than the cardinality of the continuum (i.e., the cardinality of the set of all real numbers). This fact is easily verified by cardinal arithmetic:

|ℝ^ℝ| = 𝔠^𝔠 = (2^ℵ₀)^𝔠 = 2^(ℵ₀·𝔠) = 2^𝔠.

Furthermore, if X is a set such that 2 ≤ |X| ≤ 𝔠, then the cardinality of the set X^ℝ of functions from ℝ to X is also 2^𝔠, since

2^𝔠 ≤ |X^ℝ| ≤ 𝔠^𝔠 = 2^𝔠.

However, the set of continuous functions C(ℝ, ℝ) has a strictly smaller cardinality, the cardinality of the continuum, 𝔠 = 2^ℵ₀. This follows from the fact that a continuous function is completely determined by its values on a dense subset of its domain.[2] Thus, the cardinality of the set of continuous real-valued functions on the reals is no greater than the cardinality of the set of real-valued functions of a rational variable. By cardinal arithmetic:

|C(ℝ, ℝ)| ≤ |ℝ^ℚ| = 𝔠^ℵ₀ = (2^ℵ₀)^ℵ₀ = 2^(ℵ₀·ℵ₀) = 2^ℵ₀ = 𝔠.

On the other hand, since there is a clear bijection between ℝ and the set of constant functions, which forms a subset of C(ℝ, ℝ), |C(ℝ, ℝ)| ≥ 𝔠 must also hold. Hence, |C(ℝ, ℝ)| = 𝔠.

References

from Grokipedia
In mathematics, a function of a real variable is a rule that assigns to each element in a subset of the real numbers (the domain) a unique real number (the output), formally denoted as $f: D \to \mathbb{R}$ where $D \subseteq \mathbb{R}$.[1] This concept forms the cornerstone of real analysis and single-variable calculus, enabling the study of properties such as limits, continuity, differentiability, and integrability.[2]

The historical development of functions of a real variable traces back to early dependencies in ancient computations, such as Babylonian tables of squares around 2000 BCE, but the modern notion emerged in the 17th century amid the invention of calculus.[3] Gottfried Wilhelm Leibniz introduced the term "function" in 1673 to describe quantities formed from variables and constants, while Johann Bernoulli formalized it in 1694 as "a quantity somehow formed from indeterminate and constant quantities."[3] Leonhard Euler advanced the idea significantly in his 1748 work Introductio in analysin infinitorum, initially defining a function as an analytic expression involving the variable, and later in 1755 broadening it to any dependent quantity: "If some quantities so depend on other quantities that if the latter are changed the former undergoes change, then the former quantities are called functions of the latter."[3] Key 19th-century refinements addressed pathological behaviors, challenging earlier assumptions of smoothness. Joseph Fourier's 1805 introduction of Fourier series demonstrated that discontinuous functions could be represented analytically, expanding the scope beyond Euler's analytic expressions.[3] Augustin-Louis Cauchy provided a dependence-based definition in 1821: "If variable quantities are so joined between themselves that, the value of one of these being given, one can conclude the values of all the others."[3] Peter Gustav Lejeune Dirichlet's 1837 example of a function discontinuous everywhere (equal to 0 at rationals and 1 at irrationals) and Karl Weierstrass's 1872 construction of a continuous but nowhere differentiable function solidified the rigorous study of arbitrary real functions, laying the groundwork for modern real analysis.[3]

These functions are indispensable in modeling continuous phenomena in physics, engineering, and economics, where real-valued inputs and outputs approximate measurable quantities.[4] Their theory underpins advanced topics like measure theory, Lebesgue integration, and functional analysis, facilitating the handling of broader classes of functions essential for applications such as signal processing and probability.[5]

Introduction to Real Functions

Definition and Basic Concepts

A function $ f: \mathbb{R} \to \mathbb{R} $ is a relation that assigns to each element $ x $ in the domain $ \mathbb{R} $ a unique element $ f(x) $ in the codomain $ \mathbb{R} $, formally defined as a subset of $ \mathbb{R} \times \mathbb{R} $ such that no two distinct elements in the subset share the same first component.[6] More generally, for a function $ f: D \to \mathbb{R} $ where $ D \subseteq \mathbb{R} $ is the domain, the assignment ensures that for every $ x \in D $, there exists exactly one $ y = f(x) \in \mathbb{R} $.[7] This set-theoretic perspective emphasizes the uniqueness of the output for each input, distinguishing functions from mere relations.[8] The notation $ f(x) $ denotes the value of the function at $ x $, with the domain $ D $ specifying the set of allowable inputs and the codomain $ \mathbb{R} $ the set containing all possible outputs.[7] In practice, the domain is often a subset of $ \mathbb{R} $ to exclude points where the function is undefined, such as divisions by zero.[9]

The concept of a function originated in the 17th century amid the development of calculus, where Gottfried Wilhelm Leibniz introduced the term "function" in 1673 to describe quantities related to curves, and Isaac Newton employed similar ideas in his fluxional calculus without using the specific term.[3] It was formalized in the 19th century by Peter Gustav Lejeune Dirichlet, who in 1837 defined a function as an arbitrary correspondence between $ x $ and $ f(x) $ without requiring continuity or analytic form, and by Karl Weierstrass, whose work on uniform convergence and pathological functions made the notion fully rigorous in real analysis.[3]

The graph of a function $ f: D \to \mathbb{R} $ is the set $ \{(x, f(x)) \mid x \in D\} \subseteq \mathbb{R}^2 $, representing the function visually as a collection of points in the Cartesian plane.[9] This geometric interpretation aids in understanding the function's behavior, with the vertical line test confirming that no vertical line intersects the graph more than once, ensuring the uniqueness property.[10]

Fundamental Examples

Linear functions provide a fundamental example of real-valued functions, defined by the equation $ f(x) = ax + b $, where $ a $ and $ b $ are real constants, and their graphs form straight lines in the plane.[11] For instance, if $ a = 2 $ and $ b = -1 $, then $ f(x) = 2x - 1 $, which increases across the entire real line.[11] Polynomial functions generalize linear functions and are expressed in the form $ f(x) = a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0 $, where $ n $ is a non-negative integer known as the degree, and the $ a_i $ are real coefficients with $ a_n \neq 0 $.[11] Constant functions, such as $ f(x) = 5 $, correspond to degree 0, while quadratic functions like $ f(x) = x^2 - 3x + 2 $ (degree 2) illustrate higher-degree cases.[11] Exponential functions, such as $ f(x) = e^x $, where $ e $ is the base of the natural logarithm (approximately 2.718), map real inputs to positive real outputs and exhibit rapid growth for positive $ x $.[12] Logarithmic functions serve as their inverses, defined for $ x > 0 $ by $ f(x) = \ln(x) $, which extracts the exponent to which $ e $ must be raised to obtain $ x $.[12] Trigonometric functions include $ f(x) = \sin(x) $ and $ f(x) = \cos(x) $, defined via the unit circle where $ \sin(x) $ is the y-coordinate and $ \cos(x) $ is the x-coordinate of the point reached by rotating counterclockwise from the positive x-axis by angle $ x $ radians.[13] These functions are periodic with period $ 2\pi $.[13] Piecewise-defined functions, like the absolute value function $ f(x) = |x| $, are constructed by specifying different expressions on disjoint intervals of the domain, such as $ f(x) = x $ for $ x \geq 0 $ and $ f(x) = -x $ for $ x < 0 $, resulting in a V-shaped graph symmetric about the y-axis.[11] All these examples are continuous on their domains, with further details addressed in the section on continuity properties.

Formal Properties and Structure

Domain and Codomain

In the context of functions from the real numbers to the real numbers, the domain of a function $f$, denoted $D(f)$, is the maximal subset of $\mathbb{R}$ on which $f$ is defined, ensuring that for every $x \in D(f)$, the value $f(x)$ is a well-defined real number.[14] For functions specified by explicit formulas, such as polynomials or rational expressions, the natural domain is the largest such subset where the expression is meaningful over the reals. For instance, the rational function $f(x) = 1/x$ has natural domain $\mathbb{R} \setminus \{0\}$, excluding the point where division by zero occurs.[15] The codomain of $f$ is the target set, conventionally taken as $\mathbb{R}$ for real-valued functions, which contains all possible output values, though the actual outputs may form a proper subset known as the image or range.[16] A function is surjective (or onto) if its image equals the codomain; for example, the cosine function $\cos x$ has codomain $\mathbb{R}$ but image $[-1, 1]$, so it is not surjective onto $\mathbb{R}$.[16] This distinction highlights that the codomain is a structural choice, while the image reflects the function's actual behavior.

Functions may be restricted to subdomains, which are subsets of the natural domain, to focus on specific behaviors or ensure desirable properties like injectivity. For the square root function $\sqrt{x}$, the natural domain is $[0, \infty)$ to yield nonnegative real outputs, but it can be further restricted to a closed interval like $[0, 1]$ for applications requiring bounded inputs.[17] Such restrictions preserve the function's definition on the subdomain while altering its overall scope. Extensions of the domain are possible for certain classes of real functions, particularly real analytic ones, through analytic continuation, which enlarges the domain while maintaining analyticity on connected open subsets of $\mathbb{R}$. However, this process is limited for functions of real variables, as singularities or branch points on the real line can prevent further extension along $\mathbb{R}$.[18]

Image and Range

The image of a function $ f: D \to \mathbb{R} $, where $ D \subseteq \mathbb{R} $ is the domain, is defined as the set $ \operatorname{Im}(f) = \{ f(x) \mid x \in D \} $, which consists of all possible output values attained by $ f $ and forms a subset of $ \mathbb{R} $.[19] This set captures the actual values produced by the function over its domain, independent of any larger target set. The term "range" is often used interchangeably with "image" in mathematical contexts, though "image" is preferred in formal writing to emphasize the set-theoretic construction.[19] For instance, the image of the sine function $ f(x) = \sin x $ with domain $ \mathbb{R} $ is the closed interval $ [-1, 1] $, as the function oscillates between these bounds without exceeding them.[20] In contrast, a constant function $ f(x) = c $ for some real $ c $ and domain $ D $ has an image consisting of the singleton set $ \{ c \} $.[21]

Properties of the image relate to key classifications of functions based on how they map inputs to outputs. A function $ f: D \to \mathbb{R} $ is injective, or one-to-one, if distinct inputs produce distinct outputs, formally $ f(x_1) = f(x_2) $ implies $ x_1 = x_2 $ for all $ x_1, x_2 \in D $.[22] This ensures that the function does not "collapse" multiple domain elements into the same image value, preserving uniqueness in the mapping. A function is surjective, or onto, if its image equals the codomain, meaning every element in the codomain is attained by at least one input in the domain, or $ \operatorname{Im}(f) = \mathbb{R} $ when the codomain is specified as $ \mathbb{R} $.[22] For example, the identity function $ f(x) = x $ on $ \mathbb{R} $ is surjective onto $ \mathbb{R} $, as its image covers the entire codomain.[19]

A function that is both injective and surjective is bijective, establishing a perfect one-to-one correspondence between the domain and codomain.[22] Bijective functions admit an inverse function $ f^{-1} $ such that $ f^{-1}(f(x)) = x $ for all $ x \in D $ and $ f(f^{-1}(y)) = y $ for all $ y \in \operatorname{Im}(f) $, enabling reversible mappings.[22] The exponential function $ f(x) = e^x $ with domain $ \mathbb{R} $ and codomain $ (0, \infty) $ is bijective, with image exactly matching the positive reals.[23]

Algebraic Operations

The algebraic operations on functions of a real variable treat these functions as algebraic objects, enabling the formation of new functions through arithmetic combinations and composition. These operations are defined pointwise for arithmetic and via substitution for composition, with domains adjusted accordingly to ensure well-definedness. Pointwise addition of two functions $f, g: D \to \mathbb{R}$, where $D \subseteq \mathbb{R}$, is given by $(f + g)(x) = f(x) + g(x)$ for all $x \in D$. Similarly, pointwise multiplication is defined as $(f \cdot g)(x) = f(x) g(x)$ for all $x \in D$. Scalar multiplication by a real number $c \in \mathbb{R}$ yields $(c f)(x) = c f(x)$ for all $x \in D$. These operations extend naturally to subtraction via $f - g = f + (-1) g$.

The set of all functions from $\mathbb{R}$ to $\mathbb{R}$, denoted $\mathbb{R}^\mathbb{R}$, forms a commutative ring with identity under pointwise addition and multiplication, where the additive identity is the zero function and the multiplicative identity is the constant function 1.[24] This ring structure arises because addition and multiplication are associative, commutative, and distributive in $\mathbb{R}$, and these properties transfer pointwise to the functions.[24] However, $\mathbb{R}^\mathbb{R}$ is not an integral domain, as it contains zero divisors; for instance, functions that are 1 on disjoint nonempty subsets of $\mathbb{R}$ and 0 elsewhere multiply to the zero function.[25]

Function composition provides another key operation: for functions $f: E \to \mathbb{R}$ and $g: D \to \mathbb{R}$ with $D \subseteq \mathbb{R}$ and $E \subseteq \mathbb{R}$, the composition $f \circ g$ is defined on the domain $\{x \in D \mid g(x) \in E\}$ by $(f \circ g)(x) = f(g(x))$. Composition is associative but not necessarily commutative, and it requires careful domain restriction to avoid undefined values. A representative example of pointwise operations is the sum of the sine and cosine functions: $(\sin + \cos)(x) = \sin x + \cos x$ for all $x \in \mathbb{R}$. For composition, consider $\sin(x^2)$, which is $(\sin \circ q)(x) = \sin(x^2)$ where $q(x) = x^2$, defined on all of $\mathbb{R}$ since the range of $q$ lies in the domain of $\sin$. These operations preserve continuity when applied to continuous functions.

Analysis and Continuity

Limits of Real Functions

The limit of a function $f: D \to \mathbb{R}$, where $D \subseteq \mathbb{R}$, at a point $a$ in the domain or on its boundary, captures the behavior of $f(x)$ as $x$ approaches $a$ without necessarily evaluating $f(a)$. This concept formalizes the intuitive idea that $f(x)$ gets arbitrarily close to a value $L$ when $x$ is sufficiently close to $a$, excluding $x = a$ itself. Limits serve as a foundational tool in real analysis, enabling the study of continuity, where a function is continuous at $a$ if the limit exists and equals $f(a)$.

The precise definition of the limit, known as the epsilon-delta formulation, states that $\lim_{x \to a} f(x) = L$ if for every $\epsilon > 0$, there exists a $\delta > 0$ such that whenever $0 < |x - a| < \delta$ and $x \in D$, it follows that $|f(x) - L| < \epsilon$. This definition, introduced by Karl Weierstrass in his 1861 lecture notes on calculus, ensures the limit is independent of the value at $a$ and quantifies the notion of "arbitrarily close" through positive real numbers $\epsilon$ and $\delta$.[26][27]

One-sided limits extend this idea to approach from a specific direction. The right-hand limit $\lim_{x \to a^+} f(x) = L$ holds if for every $\epsilon > 0$, there exists $\delta > 0$ such that when $a < x < a + \delta$ and $x \in D$, then $|f(x) - L| < \epsilon$. Similarly, the left-hand limit $\lim_{x \to a^-} f(x) = L$ requires the condition for $a - \delta < x < a$. The two-sided limit exists if both one-sided limits exist and are equal.[28]

Infinite limits describe unbounded behavior near a point. The limit $\lim_{x \to a} f(x) = +\infty$ means that for every $M > 0$, there exists $\delta > 0$ such that when $0 < |x - a| < \delta$ and $x \in D$, then $f(x) > M$; an analogous definition applies for $-\infty$. One-sided infinite limits follow the same directional restrictions. Limits at infinity, such as $\lim_{x \to +\infty} f(x) = L$, require that for every $\epsilon > 0$, there exists $N > 0$ such that when $x > N$, $|f(x) - L| < \epsilon$.

A classic example is $\lim_{x \to 0} \frac{\sin x}{x} = 1$, which can be proved using the squeeze theorem: since $\cos x \leq \frac{\sin x}{x} \leq 1$ for $0 < x < \frac{\pi}{2}$, and both bounds approach 1 as $x \to 0^+$, the limit follows by symmetry for the left side. Another is $\lim_{x \to +\infty} \frac{1}{x} = 0$, as for any $\epsilon > 0$, choosing $N = \frac{1}{\epsilon}$ ensures $x > N$ implies $\left| \frac{1}{x} - 0 \right| < \epsilon$.[29][30]
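
A short numerical check of the squeeze bounds for $\lim_{x \to 0} \frac{\sin x}{x} = 1$ (an illustration, not a proof):

```python
import math

# Numerically probe lim_{x -> 0} sin(x)/x = 1 and the squeeze bounds cos(x) <= sin(x)/x <= 1.
for k in range(1, 7):
    x = 10.0 ** (-k)
    ratio = math.sin(x) / x
    print(f"x = {x:.0e}:  cos(x) = {math.cos(x):.10f} <= sin(x)/x = {ratio:.10f} <= 1")
```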

Continuity Properties

A function $ f: D \subseteq \mathbb{R} \to \mathbb{R} $ is continuous at a point $ a \in D $ if the limit $ \lim_{x \to a} f(x) = f(a) $. This condition ensures that the function values approach the function's value at $ a $ as the input approaches $ a $. Equivalently, using the epsilon-delta formulation, $ f $ is continuous at $ a $ if for every $ \epsilon > 0 $, there exists $ \delta > 0 $ such that whenever $ x \in D $ and $ 0 < |x - a| < \delta $, it follows that $ |f(x) - f(a)| < \epsilon $.[31] The function $ f $ is said to be continuous on a subset $ S \subseteq D $ if it is continuous at every point in $ S $. Uniform continuity strengthens the notion of continuity by requiring a uniform choice of $ \delta $ across the entire domain. Specifically, $ f: S \to \mathbb{R} $ is uniformly continuous on a nonempty subset $ S \subseteq \mathbb{R} $ if for every $ \epsilon > 0 $, there exists $ \delta > 0 $ (independent of position in $ S $) such that for all $ x, y \in S $ with $ |x - y| < \delta $, $ |f(x) - f(y)| < \epsilon $.[32] On compact intervals like $ [a, b] $, every continuous function is uniformly continuous, which has significant implications for approximation and extension properties. For instance, constant functions and linear functions like $ f(x) = x $ are uniformly continuous on $ \mathbb{R} $, as the difference $ |f(x) - f(y)| $ directly bounds by $ |x - y| $.[32] One key consequence of continuity is the Intermediate Value Theorem, which guarantees that continuous functions preserve intermediate values on connected domains. If $ f $ is continuous on the closed interval $ [a, b] $ and $ k $ lies between $ f(a) $ and $ f(b) $ (assume without loss of generality $ f(a) < k < f(b) $), then there exists at least one $ c \in (a, b) $ such that $ f(c) = k $.[33] This theorem, first proved by Bernard Bolzano in 1817, underscores the "connectedness" of the image of an interval under a continuous map, ensuring no "gaps" in the range over $ [a, b] $. It applies to functions like polynomials, where continuity ensures the graph crosses any horizontal line between endpoint heights. Functions exhibit varying degrees of continuity across their domains. Polynomials, such as $ f(x) = x^2 + 3x + 1 $, are continuous everywhere on $ \mathbb{R} $, as they are finite sums and products of the identity function $ x $, which is continuous, and continuity operations preserve these properties under addition, subtraction, multiplication, and scalar multiplication.[34] In contrast, the Dirichlet function, defined by $ f(x) = 1 $ if $ x $ is rational and $ f(x) = 0 $ if $ x $ is irrational, is nowhere continuous on $ \mathbb{R} $. At any point $ a $, every neighborhood contains both rationals and irrationals densely, so $ f $ oscillates between 0 and 1, preventing the limit from existing.[35] Discontinuities in real functions are classified based on the behavior of limits at the point of failure. 
A removable discontinuity occurs at $ a $ if $ \lim_{x \to a} f(x) $ exists (finite) but does not equal $ f(a) $, or if $ f(a) $ is undefined while the limit exists; the function can be made continuous by redefining $ f(a) $ to the limit value, as in $ f(x) = \frac{\sin x}{x} $ at $ x = 0 $.[36] A jump discontinuity arises when the one-sided limits exist but differ, such as $ \lim_{x \to a^-} f(x) = L $ and $ \lim_{x \to a^+} f(x) = M $ with $ L \neq M $; the function "jumps" across $ a $, exemplified by the step function $ f(x) = \lfloor x \rfloor $ at integers.[36] Essential discontinuities are more severe, where $ \lim_{x \to a} f(x) $ fails to exist finitely, often due to unbounded oscillation (like $ f(x) = \sin(1/x) $ at $ x = 0 $) or vertical asymptotes (like $ f(x) = 1/x $ at $ x = 0 $); these cannot be removed by simple redefinition.[36]

Calculus of Real Functions

Differentiation Fundamentals

The derivative of a function $f: D \to \mathbb{R}$, where $D \subseteq \mathbb{R}$ is the domain, at a point $a \in D$ is defined as

$$ f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}, $$

provided the limit exists.[37] This definition, formalized by Augustin-Louis Cauchy in 1821, quantifies the instantaneous rate of change of $f$ at $a$.[38] A function is differentiable at $a$ if $f'(a)$ exists, and differentiability at a point implies continuity at that point.[39] The converse does not hold; for example, the absolute value function $f(x) = |x|$ is continuous everywhere but not differentiable at $x = 0$, as the limit $\lim_{h \to 0} \frac{|h|}{h}$ fails to exist.[37]

Basic differentiation rules facilitate computation for composite and algebraic expressions. The power rule states that for $n \in \mathbb{R}$,

$$ \frac{d}{dx} x^n = n x^{n-1}, $$

valid where defined.[40] The product rule for functions $f$ and $g$ gives

$$ (fg)'(x) = f'(x)g(x) + f(x)g'(x). $$

[41] The chain rule, attributed to Gottfried Wilhelm Leibniz, for $f(g(x))$ yields

$$ \frac{d}{dx} f(g(x)) = f'(g(x)) \cdot g'(x). $$

[42] Higher-order derivatives extend this by iterated differentiation; the second derivative $f''(x)$ is the derivative of $f'(x)$, and the $n$th derivative is denoted $f^{(n)}(x)$.[43] This notation supports analysis of curvature and acceleration in applications.
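
The limit definition of the derivative suggests a direct numerical approximation by difference quotients; a minimal Python sketch comparing it with the known derivative of $\sin$:

```python
import math

def forward_difference(f, a, h=1e-6):
    """Approximate f'(a) by the difference quotient from the limit definition."""
    return (f(a + h) - f(a)) / h

# Compare against the known derivative of sin, which is cos.
a = 0.7
print(forward_difference(math.sin, a))   # ~ 0.7648421...
print(math.cos(a))                       # 0.7648421872844885
```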

Key Theorems in Calculus

The Mean Value Theorem states that if a real-valued function $f$ is continuous on the closed interval $[a, b]$ and differentiable on the open interval $(a, b)$, then there exists at least one $c \in (a, b)$ such that

$$ f'(c) = \frac{f(b) - f(a)}{b - a}. $$

[44] This theorem, first proved by Joseph-Louis Lagrange for analytic functions in 1797, establishes that the instantaneous rate of change equals the average rate of change at some interior point, providing a bridge between local and global behavior of differentiable functions.[45] In its modern rigorous form for general continuous and differentiable functions, it underpins many proofs in analysis, such as those for monotonicity and convexity.[44]

Rolle's Theorem is a special case of the Mean Value Theorem, applicable when $f(a) = f(b)$. It asserts that if $f$ is continuous on $[a, b]$, differentiable on $(a, b)$, and $f(a) = f(b)$, then there exists $c \in (a, b)$ such that $f'(c) = 0$.[44] Originally stated and proved by Michel Rolle in 1691 using algebraic methods for polynomial roots, it highlights the existence of critical points for functions returning to the same value, serving as a foundational tool for proving the Mean Value Theorem and analyzing extrema.[46]

The Fundamental Theorem of Calculus comprises two parts that link differentiation and integration. The first part states that if $f$ is continuous on $[a, b]$ and $F(x) = \int_a^x f(t) \, dt$, then $F$ is differentiable on $(a, b)$ and $F'(x) = f(x)$.[44] The second part asserts that if $F$ is differentiable on $[a, b]$ with $F'$ integrable, then $\int_a^b F'(x) \, dx = F(b) - F(a)$.[44] Independently discovered by Isaac Newton around 1666 and Gottfried Wilhelm Leibniz, who published a proof in 1686, this theorem demonstrates that integration is the inverse operation of differentiation, enabling practical computation of areas and antiderivatives.[47]

Taylor's Theorem approximates a function near a point using its derivatives. It states that if $f$ is $n+1$ times differentiable on an interval containing $a$ and $x$, then

$$ f(x) = \sum_{k=0}^n \frac{f^{(k)}(a)}{k!} (x - a)^k + R_n(x), $$

where the remainder $R_n(x)$ satisfies, in Lagrange form,

$$ R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x - a)^{n+1} $$

for some $\xi$ between $a$ and $x$.[44] First introduced by Brook Taylor in 1715 for series expansions, this theorem facilitates local approximations of functions like exponentials and sines, with applications in numerical analysis and error estimation.
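
A small Python sketch illustrating Taylor's theorem for $e^x$ about $a = 0$: the degree-$n$ polynomial is compared with the exact value, and the error is checked against the Lagrange remainder bound $|R_n(x)| \le e^{|x|} |x|^{n+1}/(n+1)!$, which holds here since $|f^{(n+1)}(\xi)| \le e^{|x|}$ for $\xi$ between $0$ and $x$:

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of exp about a = 0."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x, n = 1.0, 6
approx = taylor_exp(x, n)
# Lagrange remainder bound: |R_n(x)| <= e^{|x|} * |x|^{n+1} / (n+1)!
bound = math.exp(abs(x)) * abs(x)**(n + 1) / math.factorial(n + 1)
print(approx, math.exp(x), abs(math.exp(x) - approx) <= bound)
```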

Advanced Representations

Implicit Functions

In mathematics, an implicit function defines a relationship between variables where one variable, typically $ y $, is not expressed explicitly as a function of the other, $ x $, but rather through an equation of the form $ F(x, y) = 0 $, assuming $ F $ is a sufficiently smooth function.[48] This approach is useful when solving for $ y $ in terms of $ x $ explicitly is impractical or impossible, allowing analysis of the relationship without algebraic isolation.[49] The cornerstone for studying such functions is the implicit function theorem, which guarantees the local existence and differentiability of an explicit function under certain conditions. Specifically, suppose $ F: \mathbb{R}^2 \to \mathbb{R} $ is continuously differentiable, and there exists a point $ (x_0, y_0) $ such that $ F(x_0, y_0) = 0 $ and $ \frac{\partial F}{\partial y}(x_0, y_0) \neq 0 $. Then, there exist open neighborhoods $ U $ around $ x_0 $ and $ V $ around $ y_0 $ such that for every $ x \in U $, there is a unique $ y = f(x) \in V $ satisfying $ F(x, f(x)) = 0 $, and $ f $ is continuously differentiable on $ U $.[50] Moreover, the derivative is given by
$$ \frac{dy}{dx} = -\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}}, $$
evaluated at $ (x, y) $, which follows from applying the chain rule to the equation $ F(x, y(x)) = 0 $.[51] This formula enables differentiation without explicit solving, as long as the partial derivative condition holds.[50]
A classic example is the unit circle defined by $ x^2 + y^2 = 1 $, where $ F(x, y) = x^2 + y^2 - 1 = 0 $. Here, $ \frac{\partial F}{\partial y} = 2y \neq 0 $ except at points where $ y = 0 $, so locally near points with $ y \neq 0 $, $ y $ can be expressed as a differentiable function of $ x $, yielding the upper and lower semicircles $ y = \pm \sqrt{1 - x^2} $.[50] This implicit form is particularly valuable in applications like solving polynomial equations or modeling physical constraints, where explicit solutions may involve complex radicals or be unavailable.[49] Regarding uniqueness, the theorem ensures a unique local solution $ y = f(x) $ in the specified neighborhoods when the partial derivative condition is met, preventing multiple values for the same $ x $ in that region.[51] However, globally or near points where $ \frac{\partial F}{\partial y} = 0 $, multiple branches may exist; for instance, in the circle example, the full relation consists of two distinct branches meeting at $ (\pm 1, 0) $, each uniquely solvable locally away from those points.[48] This branching behavior highlights the theorem's local nature and the need for careful domain selection in applications.[52]
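
A quick numerical check of the implicit-derivative formula on the unit circle, comparing it with the derivative of the explicit upper branch $y = \sqrt{1 - x^2}$ (a sketch, not part of any cited source):

```python
import math

# Check dy/dx = -(dF/dx)/(dF/dy) on the unit circle F(x, y) = x^2 + y^2 - 1 = 0.
# On the upper branch y = sqrt(1 - x^2), the explicit derivative is -x / sqrt(1 - x^2).
x = 0.6
y = math.sqrt(1 - x**2)                 # the point (0.6, 0.8) lies on the circle
dF_dx, dF_dy = 2 * x, 2 * y             # partial derivatives of F
implicit = -dF_dx / dF_dy               # implicit function theorem formula
explicit = -x / math.sqrt(1 - x**2)     # derivative of the explicit branch
print(implicit, explicit)               # both -0.75
```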

Parametric Curves in R^n

Parametric curves in $\mathbb{R}^n$ provide a means to describe one-dimensional paths in higher-dimensional Euclidean space through explicit parameterization by a real variable. Formally, such a curve is defined by a continuous function $\gamma: I \to \mathbb{R}^n$, where $I \subseteq \mathbb{R}$ is an interval, and $\gamma(t) = (x_1(t), x_2(t), \dots, x_n(t))$ for $t \in I$, with each component function $x_i: I \to \mathbb{R}$ being real-valued and continuous.[53] This representation allows for the study of curves that may not be easily expressed via implicit equations, enabling analysis of their geometric and analytic properties in multi-dimensional settings.[54]

A key geometric property of parametric curves is their arc length, which measures the total length of the path traced by $\gamma$ over a subinterval $[a, b] \subseteq I$. Assuming $\gamma$ is differentiable on $[a, b]$, the arc length $L$ is given by the integral

$$ L = \int_a^b \|\gamma'(t)\| \, dt, $$

where $\gamma'(t) = (x_1'(t), x_2'(t), \dots, x_n'(t))$ is the derivative vector and $\|\cdot\|$ denotes the Euclidean norm $\sqrt{\sum_{i=1}^n (x_i'(t))^2}$. This formula arises from approximating the curve by small line segments and taking the limit, providing an intrinsic measure independent of the specific parameterization.[53][55]

For curves without singularities, the notion of regularity ensures well-behaved local geometry. A parametric curve $\gamma$ is regular on $I$ if $\gamma'(t) \neq 0$ for all $t \in I$, meaning the derivative vector is nowhere zero, which implies that the curve has a well-defined tangent direction at every point and avoids cusps or stops. Points where $\gamma'(t) = 0$ are singular, potentially leading to self-intersections or undefined tangents, but regular curves admit reparameterizations, such as by arc length, that preserve their smoothness.[54] The components' derivatives, as covered in the fundamentals of differentiation, form the basis for $\gamma'(t)$.[55]

A classic example of a parametric curve in $\mathbb{R}^3$ is the helix, parameterized by $\gamma(t) = (\cos t, \sin t, t)$ for $t \in \mathbb{R}$. This curve spirals uniformly around the z-axis with radius 1 and constant speed along the axis, illustrating a regular curve since $\|\gamma'(t)\| = \sqrt{(-\sin t)^2 + (\cos t)^2 + 1^2} = \sqrt{2} \neq 0$. The arc length of one full turn from $t = 0$ to $t = 2\pi$ is $2\pi \sqrt{2}$, highlighting its helical geometry in three dimensions.[56][53]
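
The arc-length integral can be evaluated numerically; a brief NumPy sketch (illustrative function names) that recovers $2\pi\sqrt{2}$ for one turn of the helix, given its derivative $\gamma'(t) = (-\sin t, \cos t, 1)$:

```python
import numpy as np

def arc_length(dgamma, a, b, n=10_000):
    """Approximate arc length = integral of ||gamma'(t)|| dt on [a, b], given gamma'."""
    t = np.linspace(a, b, n)
    speed = np.linalg.norm(np.array([dgamma(s) for s in t]), axis=1)
    return np.trapz(speed, t)

# Helix gamma(t) = (cos t, sin t, t), so gamma'(t) = (-sin t, cos t, 1) and ||gamma'|| = sqrt(2).
d_helix = lambda t: np.array([-np.sin(t), np.cos(t), 1.0])
print(arc_length(d_helix, 0.0, 2 * np.pi))   # ~ 8.8858
print(2 * np.pi * np.sqrt(2))                # exact value for one full turn
```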

Specialized Function Types

Matrix-Valued Functions

A matrix-valued function is a mapping $f: \mathbb{R} \to M_{m \times n}(\mathbb{R})$, where $M_{m \times n}(\mathbb{R})$ denotes the vector space of all $m \times n$ matrices with real entries. Such a function assigns to each real number $t$ an $m \times n$ matrix $f(t) = [f_{ij}(t)]_{1 \le i \le m,\, 1 \le j \le n}$, with each entry $f_{ij}: \mathbb{R} \to \mathbb{R}$ being a real-valued function of the scalar variable $t$. This structure allows matrix-valued functions to extend scalar real functions to higher-dimensional linear algebraic contexts while preserving entrywise properties like continuity and differentiability.[57]

Basic operations on matrix-valued functions follow the standard rules of matrix algebra applied pointwise. Addition and scalar multiplication are defined entrywise: for compatible functions $f$ and $g$, $(f + g)(t) = f(t) + g(t)$ and $(\alpha f)(t) = \alpha f(t)$ for $\alpha \in \mathbb{R}$. If $f: \mathbb{R} \to M_{m \times p}(\mathbb{R})$ and $g: \mathbb{R} \to M_{p \times n}(\mathbb{R})$, their product is $(f g)(t) = f(t) g(t)$, satisfying the product rule for differentiation: $\frac{d}{dt} [f(t) g(t)] = f'(t) g(t) + f(t) g'(t)$, where the derivative $f'(t) = [f_{ij}'(t)]$ is computed entrywise, assuming each $f_{ij}$ is differentiable. These operations ensure that the set of matrix-valued functions forms a module over the real scalars, with differentiation behaving linearly.[57][58]

A canonical example is the 2D rotation matrix function $f(t) = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$, which parameterizes rotations in the plane by angle $t$ radians and is orthogonal for all $t$ with determinant 1. Each entry is a trigonometric real function, and the derivative $f'(t) = \begin{pmatrix} -\sin t & -\cos t \\ \cos t & -\sin t \end{pmatrix}$ corresponds entrywise to the velocities of rotation. This function illustrates how matrix-valued functions encode geometric transformations depending on a real parameter.[59]

Matrix-valued functions play a central role in solving systems of linear ordinary differential equations (ODEs) of the form $\mathbf{x}'(t) = A(t) \mathbf{x}(t)$, where $A(t)$ is an $n \times n$ matrix-valued function and $\mathbf{x}(t)$ is vector-valued. For constant $A$, the fundamental solution is given by the matrix exponential $e^{A t} = \sum_{k=0}^{\infty} \frac{(A t)^k}{k!}$, a matrix-valued function satisfying $\frac{d}{dt} e^{A t} = A e^{A t}$ with $e^{A \cdot 0} = I$, yielding the general solution $\mathbf{x}(t) = e^{A t} \mathbf{x}(0)$. More broadly, they arise in time-dependent linear systems, control theory, and stability analysis, where properties like the spectrum of $A(t)$ determine solution behavior.[60][57]
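
As an illustration of the matrix exponential as a matrix-valued function of $t$, here is a minimal NumPy sketch (a truncated power series, not a production algorithm) that solves $\mathbf{x}' = A\mathbf{x}$ for a rotation generator:

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Matrix exponential e^{A t} via a truncated power series (illustrative only)."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ (A * t) / k          # accumulates (A t)^k / k!
        result = result + term
    return result

# x'(t) = A x(t) with A generating rotations; e^{A t} x(0) rotates x(0) by angle t.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
x0 = np.array([1.0, 0.0])
t = np.pi / 2
print(expm_series(A, t) @ x0)              # ~ [0, 1]: a quarter-turn of the initial vector
```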

Functions in Banach and Hilbert Spaces

Functions from the real line to a Banach space $B$ are defined as mappings $f: \mathbb{R} \to B$, where $B$ is a complete normed vector space equipped with a norm $\|\cdot\|_B$. Such functions extend the notion of real-valued functions to infinite-dimensional settings, preserving vector space operations componentwise. Continuity of $f$ at a point $a \in \mathbb{R}$ is characterized by the condition that for every $\epsilon > 0$, there exists $\delta > 0$ such that $\|f(x) - f(a)\|_B < \epsilon$ whenever $|x - a| < \delta$, mirroring the metric-induced topology of the domain $\mathbb{R}$.[61]

Hilbert spaces represent a special class of Banach spaces where the norm arises from an inner product $\langle \cdot, \cdot \rangle$, inducing orthogonality and enabling expansions via orthonormal bases. A prototypical example is $L^2(\mathbb{R})$, the space of square-integrable real-valued functions on $\mathbb{R}$ with inner product $\langle f, g \rangle = \int_{-\infty}^{\infty} f(t) g(t) \, dt$ and norm $\|f\|_{L^2} = \sqrt{\langle f, f \rangle}$. Every separable Hilbert space admits a countable orthonormal basis $\{e_n\}_{n=1}^{\infty}$, allowing elements $f \in H$ to be represented as $f = \sum_{n=1}^{\infty} \langle f, e_n \rangle e_n$ with convergence in the norm, which facilitates analysis of functions valued in such spaces.[62][63]

In quantum mechanics, functions valued in Hilbert spaces model the state of quantum systems, where the wave function $\psi(t)$ evolves in a complex Hilbert space $H$ (such as $L^2(\mathbb{R}^3)$ for position space) parameterized by real time $t \in \mathbb{R}$. The time-dependent Schrödinger equation governs this evolution: $i \hbar \frac{\partial \psi}{\partial t} = H \psi$, with $H$ a self-adjoint Hamiltonian operator and $\hbar$ the reduced Planck constant, ensuring unitary dynamics that preserve the inner product and probabilities. This framework ties the real-variable domain to the infinite-dimensional structure, enabling predictions of observables via expectation values $\langle \psi(t), A \psi(t) \rangle$ for self-adjoint operators $A$.[64]

A concrete example is the Gaussian function $f(t) = e^{-t^2}$ in $L^2(\mathbb{R})$, which belongs to the space since $\|f\|_{L^2}^2 = \int_{-\infty}^{\infty} e^{-2t^2} \, dt = \sqrt{\pi/2} < \infty$,[65] serving as a basis element or approximant in orthonormal expansions for signal processing or quantum ground states. Continuity in the $L^2$-norm aligns with the finite-dimensional case, as $\|f(x) - f(y)\|_{L^2} \to 0$ implies pointwise convergence under suitable conditions.[63]
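
A short numerical confirmation that the Gaussian $f(t) = e^{-t^2}$ has finite $L^2$ norm, evaluating $\int e^{-2t^2}\,dt$ on a truncated interval (a sketch; the tails beyond $|t| = 10$ are negligible):

```python
import numpy as np

# The squared L^2 norm of f(t) = exp(-t^2): the integral of exp(-2 t^2) over the real line.
t = np.linspace(-10.0, 10.0, 400_001)     # the integrand is negligible beyond |t| = 10
numeric = np.trapz(np.exp(-2.0 * t**2), t)
exact = np.sqrt(np.pi / 2.0)
print(numeric, exact)                      # both ~ 1.2533, so f lies in L^2(R)
```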

Extensions and Applications

Complex-Valued Functions of Real Variables

A complex-valued function of a real variable is a mapping $f: \mathbb{R} \to \mathbb{C}$ that assigns to each real number $x$ a complex number $f(x)$. Such a function can be decomposed into its real and imaginary parts as $f(x) = u(x) + i v(x)$, where $u: \mathbb{R} \to \mathbb{R}$ and $v: \mathbb{R} \to \mathbb{R}$ are real-valued functions.[66] Continuity of $f$ at a point $x_0$ requires both $u$ and $v$ to be continuous at $x_0$, while differentiability follows componentwise: if the limits exist, then $f'(x) = u'(x) + i v'(x)$.[66] Integration of $f$ over an interval is similarly defined by integrating $u$ and $v$ separately.[66]

In complex analysis, analyticity for a complex-valued function of a real variable typically refers to it being the restriction to the real line of a holomorphic function defined on an open set in the complex plane containing that line segment. For $f(x) = u(x) + i v(x)$ to admit such an extension to a holomorphic $F(z) = U(x,y) + i V(x,y)$ with $U(x,0) = u(x)$ and $V(x,0) = v(x)$, the extended real and imaginary parts $U$ and $V$ must satisfy the Cauchy-Riemann equations $\frac{\partial U}{\partial x} = \frac{\partial V}{\partial y}$ and $\frac{\partial U}{\partial y} = -\frac{\partial V}{\partial x}$ in the domain, along with the necessary continuity conditions on the partial derivatives. These equations ensure complex differentiability of the extension, distinguishing such functions from merely differentiable ones on $\mathbb{R}$, as the latter may not extend holomorphically.

Holomorphic functions restricted to the real line are somewhat restrictive, as the extension imposes rigid constraints via the Cauchy-Riemann equations, but notable examples exist. For instance, $f(x) = e^{ix} = \cos x + i \sin x$ is the restriction of the entire holomorphic function $F(z) = e^{iz}$, where the extended parts are $U(x,y) = e^{-y} \cos x$ and $V(x,y) = e^{-y} \sin x$. These satisfy the Cauchy-Riemann equations, as $\frac{\partial U}{\partial x} = -e^{-y} \sin x = \frac{\partial V}{\partial y}$ and $\frac{\partial U}{\partial y} = -e^{-y} \cos x = -\frac{\partial V}{\partial x}$.[66] Such restrictions inherit properties like infinite differentiability and power series representations from their holomorphic extensions.

Applications of complex-valued functions of real variables abound in analysis and engineering, particularly where phase and amplitude information is crucial. A key example is the Fourier transform, which takes a complex-valued function $f: \mathbb{R} \to \mathbb{C}$ (often arising from real signals via Euler's formula) and produces another such function $\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x) e^{-i 2\pi x \xi} \, dx$, decomposing the input into frequency components.[67] In signal processing, this enables efficient filtering, compression, and analysis of waveforms, as the complex representation captures both magnitude and phase shifts essential for reconstructing signals without distortion.[67] The convolution theorem further amplifies its utility, equating convolution in the time domain to multiplication in the frequency domain, streamlining operations like system response modeling.[67]
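
A small NumPy sketch of a complex-valued function of a real variable: it checks the decomposition $e^{ix} = \cos x + i\sin x$ and shows a discrete Fourier transform concentrating the energy of a sampled cosine at its frequency (illustrative, using NumPy's FFT as a stand-in for the continuous transform):

```python
import numpy as np

# e^{ix} = cos x + i sin x: a complex-valued function of the real variable x.
x = np.linspace(0.0, 2 * np.pi, 7)
f = np.exp(1j * x)
print(np.allclose(f.real, np.cos(x)), np.allclose(f.imag, np.sin(x)))   # True True

# A discrete analogue of the Fourier transform: the FFT of samples of cos(2*pi*3*t)
# concentrates its energy at frequency index 3 (and its mirror image).
t = np.arange(64) / 64.0
signal = np.cos(2 * np.pi * 3 * t)
spectrum = np.fft.fft(signal)
print(np.argmax(np.abs(spectrum[:32])))   # 3
```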

Cardinality of Function Sets

The set of all functions from the real numbers to the real numbers, denoted $\mathbb{R}^\mathbb{R}$, has cardinality $2^{\mathfrak{c}}$, where $\mathfrak{c} = |\mathbb{R}|$ denotes the cardinality of the continuum. This follows from cardinal arithmetic in set theory: since $|\mathbb{R}| = 2^{\aleph_0}$, it holds that $|\mathbb{R}^\mathbb{R}| = (2^{\aleph_0})^{2^{\aleph_0}} = 2^{\aleph_0 \cdot 2^{\aleph_0}} = 2^{2^{\aleph_0}} = 2^{\mathfrak{c}}$.[68][69]

In contrast, the set of continuous functions from $\mathbb{R}$ to $\mathbb{R}$, denoted $C(\mathbb{R})$, has cardinality $\mathfrak{c}$. Continuous functions are uniquely determined by their restriction to the rational numbers $\mathbb{Q}$, which is a countable dense subset of $\mathbb{R}$; the restriction map $C(\mathbb{R}) \to \mathbb{R}^\mathbb{Q}$ is injective, and $|\mathbb{R}^\mathbb{Q}| = \mathfrak{c}^{\aleph_0} = (2^{\aleph_0})^{\aleph_0} = 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0} = \mathfrak{c}$. Moreover, there are at least $\mathfrak{c}$ many continuous functions, as the constant functions provide an injection from $\mathbb{R}$ to $C(\mathbb{R})$. By the Schröder–Bernstein theorem, $|C(\mathbb{R})| = \mathfrak{c}$.[70]

The set of polynomial functions with rational coefficients is countable. Such polynomials are finite sums $\sum_{k=0}^n a_k x^k$ where each $a_k \in \mathbb{Q}$ and $n \in \mathbb{N}$; the set of all finite sequences of rational numbers is a countable union of countable sets (one for each degree $n$), hence countable.[70]

The cardinality of $\mathbb{R}^\mathbb{R}$ strictly exceeds that of $\mathbb{R}$, as there is an obvious injection $\mathbb{R} \hookrightarrow \mathbb{R}^\mathbb{R}$ (via constant functions) but no surjection $\mathbb{R} \twoheadrightarrow \mathbb{R}^\mathbb{R}$, by Cantor's theorem on the strict inequality $|\mathbb{R}| < |\mathcal{P}(\mathbb{R})|$ and the identification $|\mathbb{R}^\mathbb{R}| = |\mathcal{P}(\mathbb{R} \times \mathbb{R})| = 2^{\mathfrak{c}}$.[68]

References
