Mathematics

from Wikipedia

Mathematics is a field of study that discovers and organizes methods, theories, and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. There are many areas of mathematics, which include number theory (the study of numbers), algebra (the study of formulas and related structures), geometry (the study of shapes and spaces that contain them), analysis (the study of continuous changes), and set theory (presently used as a foundation for all mathematics).

Mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or—in modern mathematics—purely abstract entities that are stipulated to have certain properties, called axioms. Mathematics uses pure reason to prove properties of objects, a proof consisting of a succession of applications of deductive rules to already established results. These results, called theorems, include previously proved theorems, axioms, and—in case of abstraction from nature—some basic properties that are considered true starting points of the theory under consideration.[1]

Mathematics is essential in the natural sciences, engineering, medicine, finance, computer science, and the social sciences. Although mathematics is extensively used for modeling phenomena, the fundamental truths of mathematics are independent of any scientific experimentation. Some areas of mathematics, such as statistics and game theory, are developed in close correlation with their applications and are often grouped under applied mathematics. Other areas are developed independently from any application (and are therefore called pure mathematics) but often later find practical applications.[2][3]

Historically, the concept of a proof and its associated mathematical rigour first appeared in Greek mathematics, most notably in Euclid's Elements.[4] Since its beginning, mathematics was primarily divided into geometry and arithmetic (the manipulation of natural numbers and fractions), until the 16th and 17th centuries, when algebra[a] and infinitesimal calculus were introduced as new fields. Since then, the interaction between mathematical innovations and scientific discoveries has led to a correlated increase in the development of both.[5] At the end of the 19th century, the foundational crisis of mathematics led to the systematization of the axiomatic method,[6] which heralded a dramatic increase in the number of mathematical areas and their fields of application. The contemporary Mathematics Subject Classification lists more than sixty first-level areas of mathematics.

Areas of mathematics

Before the Renaissance, mathematics was divided into two main areas: arithmetic, regarding the manipulation of numbers, and geometry, regarding the study of shapes.[7] Some types of pseudoscience, such as numerology and astrology, were not then clearly distinguished from mathematics.[8]

During the Renaissance, two more areas appeared. Mathematical notation led to algebra which, roughly speaking, consists of the study and the manipulation of formulas. Calculus, consisting of the two subfields differential calculus and integral calculus, is the study of continuous functions, which model the typically nonlinear relationships between varying quantities, as represented by variables. This division into four main areas—arithmetic, geometry, algebra, and calculus[9]—endured until the end of the 19th century. Areas such as celestial mechanics and solid mechanics were then studied by mathematicians, but now are considered as belonging to physics.[10] The subject of combinatorics has been studied for much of recorded history, yet did not become a separate branch of mathematics until the seventeenth century.[11]

At the end of the 19th century, the foundational crisis in mathematics and the resulting systematization of the axiomatic method led to an explosion of new areas of mathematics.[12][6] The 2020 Mathematics Subject Classification contains no less than sixty-three first-level areas.[13] Some of these areas correspond to the older division, as is true regarding number theory (the modern name for higher arithmetic) and geometry. Several other first-level areas have "geometry" in their names or are otherwise commonly considered part of geometry. Algebra and calculus do not appear as first-level areas but are respectively split into several first-level areas. Other first-level areas emerged during the 20th century or had not previously been considered as mathematics, such as mathematical logic and foundations.[14]

Number theory

This is the Ulam spiral, which illustrates the distribution of prime numbers. The dark diagonal lines in the spiral hint at the hypothesized approximate independence between being prime and being a value of a quadratic polynomial, a conjecture now known as Hardy and Littlewood's Conjecture F.

Number theory began with the manipulation of numbers, that is, natural numbers, and was later expanded to integers and rational numbers. Number theory was once called arithmetic, but nowadays this term is mostly used for numerical calculations.[15] Number theory dates back to ancient Babylon and probably China. Two prominent early number theorists were Euclid of ancient Greece and Diophantus of Alexandria.[16] The modern study of number theory in its abstract form is largely attributed to Pierre de Fermat and Leonhard Euler. The field came to full fruition with the contributions of Adrien-Marie Legendre and Carl Friedrich Gauss.[17]

Many easily stated number problems have solutions that require sophisticated methods, often from across mathematics. A prominent example is Fermat's Last Theorem. This conjecture was stated in 1637 by Pierre de Fermat, but it was proved only in 1994 by Andrew Wiles, who used tools including scheme theory from algebraic geometry, category theory, and homological algebra.[18] Another example is Goldbach's conjecture, which asserts that every even integer greater than 2 is the sum of two prime numbers. Stated in 1742 by Christian Goldbach, it remains unproven despite considerable effort.[19]
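
Goldbach's claim is easy to test mechanically for small cases. A minimal Python sketch (illustrative only, using naive trial division):

```python
def is_prime(n):
    """Naive trial division; adequate for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return primes (p, q) with p + q = n for an even n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# The conjecture predicts a decomposition for every even number > 2.
for n in range(4, 1001, 2):
    assert goldbach_pair(n) is not None
print(goldbach_pair(100))  # (3, 97)
```

Checks like this give evidence but no proof; the conjecture's difficulty lies in covering all even numbers at once.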

Number theory includes several subareas, including analytic number theory, algebraic number theory, geometry of numbers (method oriented), Diophantine analysis, and transcendence theory (problem oriented).[14]

Geometry

On the surface of a sphere, Euclidean geometry only applies as a local approximation. For larger scales the sum of the angles of a triangle is not equal to 180°.

Geometry is one of the oldest branches of mathematics. It started with empirical recipes concerning shapes, such as lines, angles and circles, which were developed mainly for the needs of surveying and architecture, but has since blossomed out into many other subfields.[20]

A fundamental innovation was the ancient Greeks' introduction of the concept of proofs, which require that every assertion must be proved. For example, it is not sufficient to verify by measurement that, say, two lengths are equal; their equality must be proven via reasoning from previously accepted results (theorems) and a few basic statements. The basic statements are not subject to proof because they are self-evident (postulates), or are part of the definition of the subject of study (axioms). This principle, foundational for all mathematics, was first elaborated for geometry, and was systematized by Euclid around 300 BC in his book Elements.[21][22]

The resulting Euclidean geometry is the study of shapes and their arrangements constructed from lines, planes and circles in the Euclidean plane (plane geometry) and the three-dimensional Euclidean space.[b][20]

Euclidean geometry was developed without change of methods or scope until the 17th century, when René Descartes introduced what is now called Cartesian coordinates. This constituted a major change of paradigm: Instead of defining real numbers as lengths of line segments (see number line), it allowed the representation of points using their coordinates, which are numbers. Algebra (and later, calculus) can thus be used to solve geometrical problems. Geometry was split into two new subfields: synthetic geometry, which uses purely geometrical methods, and analytic geometry, which uses coordinates systemically.[23]
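
For example, once points are identified with coordinate pairs, a geometric quantity such as the distance between two points becomes an algebraic formula:

```latex
d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}
```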

Analytic geometry allows the study of curves unrelated to circles and lines. Such curves can be defined as the graph of functions, the study of which led to differential geometry. They can also be defined as implicit equations, often polynomial equations (which spawned algebraic geometry). Analytic geometry also makes it possible to consider Euclidean spaces of higher than three dimensions.[20]

In the 19th century, mathematicians discovered non-Euclidean geometries, which do not follow the parallel postulate. By questioning that postulate's truth, this discovery has been viewed as joining Russell's paradox in revealing the foundational crisis of mathematics. This aspect of the crisis was solved by systematizing the axiomatic method, and adopting that the truth of the chosen axioms is not a mathematical problem.[24][6] In turn, the axiomatic method allows for the study of various geometries obtained either by changing the axioms or by considering properties that do not change under specific transformations of the space.[25]

Today's subareas of geometry include:[14]

Algebra

The quadratic formula, which concisely expresses the solutions of all quadratic equations
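
The formula itself, restated here since the image is not reproduced: for ax² + bx + c = 0 with a ≠ 0,

```latex
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```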
A shuffled 3×3 Rubik's Cube
The Rubik's Cube group is a concrete application of group theory.[26]

Algebra is the art of manipulating equations and formulas. Diophantus (3rd century) and al-Khwarizmi (9th century) were the two main precursors of algebra.[27][28] Diophantus solved some equations involving unknown natural numbers by deducing new relations until he obtained the solution.[29] Al-Khwarizmi introduced systematic methods for transforming equations, such as moving a term from one side of an equation into the other side.[30] The term algebra is derived from the Arabic word al-jabr meaning 'the reunion of broken parts' that he used for naming one of these methods in the title of his main treatise.[31][32]

Algebra became an area in its own right only with François Viète (1540–1603), who introduced the use of variables for representing unknown or unspecified numbers.[33] Variables allow mathematicians to describe the operations that have to be done on the numbers represented using mathematical formulas.[34]

Until the 19th century, algebra consisted mainly of the study of linear equations (presently linear algebra), and polynomial equations in a single unknown, which were called algebraic equations (a term still in use, although it may be ambiguous). During the 19th century, mathematicians began to use variables to represent things other than numbers (such as matrices, modular integers, and geometric transformations), on which generalizations of arithmetic operations are often valid.[35] The concept of algebraic structure addresses this, consisting of a set whose elements are unspecified, of operations acting on the elements of the set, and rules that these operations must follow. The scope of algebra thus grew to include the study of algebraic structures. This object of algebra was called modern algebra or abstract algebra, as established by the influence and works of Emmy Noether,[36] and popularized by Van der Waerden's book Moderne Algebra.
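
As a minimal illustration of an algebraic structure, the integers modulo 5 under addition form a group. The Python sketch below (illustrative; the modulus 5 is chosen arbitrarily) checks the defining rules exhaustively:

```python
n = 5  # modulus chosen arbitrarily for illustration
elements = range(n)

def add(a, b):
    """Addition modulo n: the group operation on {0, ..., n-1}."""
    return (a + b) % n

# Associativity: (a + b) + c == a + (b + c) for all elements.
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in elements for b in elements for c in elements)

# Identity: 0 leaves every element unchanged.
assert all(add(a, 0) == a for a in elements)

# Inverses: every a has some b with a + b == 0 (mod n).
assert all(any(add(a, b) == 0 for b in elements) for a in elements)

print("Z/5Z under addition satisfies the group axioms")
```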

Some types of algebraic structures have useful and often fundamental properties, in many areas of mathematics. Their study became autonomous parts of algebra, and include:[14]

The study of types of algebraic structures as mathematical objects is the purpose of universal algebra and category theory.[37] The latter applies to every mathematical structure (not only algebraic ones). At its origin, it was introduced, together with homological algebra, to allow the algebraic study of non-algebraic objects such as topological spaces; this particular area of application is called algebraic topology.[38]

Calculus and analysis

A Cauchy sequence consists of elements such that all subsequent terms of a term become arbitrarily close to each other as the sequence progresses (from left to right).
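
Formally, a sequence (a₁, a₂, ...) is a Cauchy sequence when its terms eventually remain arbitrarily close to one another:

```latex
\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall m, n \ge N : \quad |a_m - a_n| < \varepsilon
```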

Calculus, formerly called infinitesimal calculus, was introduced independently and simultaneously by 17th-century mathematicians Newton and Leibniz.[39] It is fundamentally the study of the relationship between variables that depend continuously on each other. Calculus was expanded in the 18th century by Euler with the introduction of the concept of a function and many other results.[40] Presently, "calculus" refers mainly to the elementary part of this theory, and "analysis" is commonly used for advanced parts.[41]

Analysis is further subdivided into real analysis, where variables represent real numbers, and complex analysis, where variables represent complex numbers. Analysis includes many subareas shared with other areas of mathematics, including:[14]

Discrete mathematics

A diagram representing a two-state Markov chain. The states are represented by 'A' and 'E'. The numbers are the probability of flipping the state.
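
A two-state chain like the one in the diagram can be simulated in a few lines of Python; the transition probabilities below are placeholders, since the diagram's actual values are not given in the text:

```python
import random

# Hypothetical flip probabilities (the diagram's actual numbers are not
# reproduced here): from each state, the chance of switching to the other.
flip_prob = {"A": 0.4, "E": 0.7}

def step(state):
    """Advance the chain one step: flip with the state's flip probability."""
    if random.random() < flip_prob[state]:
        return "E" if state == "A" else "A"
    return state

state, visits = "A", {"A": 0, "E": 0}
for _ in range(100_000):
    visits[state] += 1
    state = step(state)

print(visits)  # long-run visit counts approach the chain's stationary distribution
```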

Discrete mathematics, broadly speaking, is the study of individual, countable mathematical objects. An example is the set of all integers.[42] Because the objects of study here are discrete, the methods of calculus and mathematical analysis do not directly apply.[c] Algorithms—especially their implementation and computational complexity—play a major role in discrete mathematics.[43]

The four color theorem and optimal sphere packing were two major problems of discrete mathematics solved in the second half of the 20th century.[44] The P versus NP problem, which remains open to this day, is also important for discrete mathematics, since its solution would potentially impact a large number of computationally difficult problems.[45]

Discrete mathematics includes:[14]

Mathematical logic and set theory

The Venn diagram is a commonly used method to illustrate the relations between sets.

The two subjects of mathematical logic and set theory have belonged to mathematics since the end of the 19th century.[46][47] Before this period, sets were not considered to be mathematical objects, and logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians.[48]

Before Cantor's study of infinite sets, mathematicians were reluctant to consider actually infinite collections, and considered infinity to be the result of endless enumeration. Cantor's work offended many mathematicians not only by considering actually infinite sets[49] but by showing that this implies different sizes of infinity, per Cantor's diagonal argument. This led to the controversy over Cantor's set theory.[50] In the same period, various areas of mathematics concluded the former intuitive definitions of the basic mathematical objects were insufficient for ensuring mathematical rigour.[51]

This became the foundational crisis of mathematics.[52] It was eventually solved in mainstream mathematics by systematizing the axiomatic method inside a formalized set theory. Roughly speaking, each mathematical object is defined by the set of all similar objects and the properties that these objects must have.[12] For example, in Peano arithmetic, the natural numbers are defined by "zero is a number", "each number has a unique successor", "each number but zero has a unique predecessor", and some rules of reasoning.[53] This mathematical abstraction from reality is embodied in the modern philosophy of formalism, as founded by David Hilbert around 1910.[54]
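
A minimal sketch (illustrative only) of how this successor-based definition can be mirrored in code, representing each natural number by the number of successor applications to zero:

```python
class Zero:
    """The number zero: the starting point of the naturals."""

class Succ:
    """The successor of a number: Succ(n) represents n + 1."""
    def __init__(self, pred):
        self.pred = pred  # the unique predecessor

def add(a, b):
    """Addition defined by recursion on the second argument."""
    if isinstance(b, Zero):
        return a
    return Succ(add(a, b.pred))

def to_int(n):
    """Count successor applications to recover an ordinary integer."""
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

two = Succ(Succ(Zero()))
three = Succ(two)
print(to_int(add(two, three)))  # 5
```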

The "nature" of the objects defined this way is a philosophical problem that mathematicians leave to philosophers, even if many mathematicians have opinions on this nature, and use their opinion—sometimes called "intuition"—to guide their study and proofs. The approach allows considering "logics" (that is, sets of allowed deducing rules), theorems, proofs, etc. as mathematical objects, and to prove theorems about them. For example, Gödel's incompleteness theorems assert, roughly speaking that, in every consistent formal system that contains the natural numbers, there are theorems that are true (that is provable in a stronger system), but not provable inside the system.[55] This approach to the foundations of mathematics was challenged during the first half of the 20th century by mathematicians led by Brouwer, who promoted intuitionistic logic, which explicitly lacks the law of excluded middle.[56][57]

These problems and debates led to a wide expansion of mathematical logic, with subareas such as model theory (modeling some logical theories inside other theories), proof theory, type theory, computability theory and computational complexity theory.[14] Although these aspects of mathematical logic were introduced before the rise of computers, their use in compiler design, formal verification, program analysis, proof assistants and other aspects of computer science, contributed in turn to the expansion of these logical theories.[58]

Statistics and other decision sciences

Whatever the form of a random population distribution (μ), the sampling mean (x̄) tends to a Gaussian distribution and its variance (σ) is given by the central limit theorem of probability theory.[59]
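
A short simulation (illustrative; it draws from a uniform population, which is decidedly non-Gaussian) shows sample means clustering around the population mean with the spread the theorem predicts:

```python
import random
import statistics

# Population: uniform on [0, 1], with mean 0.5 and standard deviation 1/sqrt(12).
def sample_mean(n):
    return statistics.mean(random.random() for _ in range(n))

means = [sample_mean(100) for _ in range(10_000)]
print(round(statistics.mean(means), 3))   # close to the population mean 0.5
print(round(statistics.stdev(means), 3))  # close to (1/sqrt(12))/10 ≈ 0.029
```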

The field of statistics is a mathematical application that is employed for the collection and processing of data samples, using procedures based on mathematical methods, especially probability theory. Statisticians generate data with random sampling or randomized experiments.[60]

Statistical theory studies decision problems such as minimizing the risk (expected loss) of a statistical action, such as using a procedure in, for example, parameter estimation, hypothesis testing, and selecting the best. In these traditional areas of mathematical statistics, a statistical-decision problem is formulated by minimizing an objective function, like expected loss or cost, under specific constraints. For example, designing a survey often involves minimizing the cost of estimating a population mean with a given level of confidence.[61] Because of its use of optimization, the mathematical theory of statistics overlaps with other decision sciences, such as operations research, control theory, and mathematical economics.[62]

Computational mathematics

Computational mathematics is the study of mathematical problems that are typically too large for human numerical capacity.[63][64] Numerical analysis studies methods for problems in analysis using functional analysis and approximation theory; numerical analysis broadly includes the study of approximation and discretization with special focus on rounding errors.[65] Numerical analysis and, more broadly, scientific computing also study non-analytic topics of mathematical science, especially algorithmic matrix and graph theory. Other areas of computational mathematics include computer algebra and symbolic computation.
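
As a small example of the kind of method numerical analysis studies, here is a sketch of Newton's iteration for square roots, with a stopping tolerance that acknowledges rounding error (assumes a positive input):

```python
def newton_sqrt(a, tol=1e-12):
    """Approximate sqrt(a), a > 0, by Newton's method on f(x) = x^2 - a."""
    x = a if a > 1 else 1.0  # any positive starting guess converges here
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)  # Newton update: x - f(x)/f'(x)
    return x

print(newton_sqrt(2.0))  # 1.4142135623730951, accurate to machine precision
```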

History

Etymology

The word mathematics comes from the Ancient Greek word máthēma (μάθημα), meaning 'something learned, knowledge, mathematics', and the derived expression mathēmatikḗ tékhnē (μαθηματικὴ τέχνη), meaning 'mathematical science'. It entered the English language during the Late Middle English period through French and Latin.[66]

Similarly, one of the two main schools of thought in Pythagoreanism was known as the mathēmatikoi (μαθηματικοί)—which at the time meant "learners" rather than "mathematicians" in the modern sense. The Pythagoreans were likely the first to constrain the use of the word to just the study of arithmetic and geometry. By the time of Aristotle (384–322 BC) this meaning was fully established.[67]

In Latin and English, until around 1700, the term mathematics more commonly meant "astrology" (or sometimes "astronomy") rather than "mathematics"; the meaning gradually changed to its present one from about 1500 to 1800. This change has resulted in several mistranslations: For example, Saint Augustine's warning that Christians should beware of mathematici, meaning "astrologers", is sometimes mistranslated as a condemnation of mathematicians.[68]

The apparent plural form in English goes back to the Latin neuter plural mathematica (Cicero), based on the Greek plural ta mathēmatiká (τὰ μαθηματικά) and means roughly "all things mathematical", although it is plausible that English borrowed only the adjective mathematic(al) and formed the noun mathematics anew, after the pattern of physics and metaphysics, inherited from Greek.[69] In English, the noun mathematics takes a singular verb. It is often shortened to maths[70] or, in North America, math.[71]

Ancient

The Babylonian mathematical tablet Plimpton 322, dated to 1800 BC

In addition to recognizing how to count physical objects, prehistoric peoples may have also known how to count abstract quantities, like time—days, seasons, or years.[72][73] Evidence for more complex mathematics does not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic, algebra, and geometry for taxation and other financial calculations, for building and construction, and for astronomy.[74] The oldest mathematical texts from Mesopotamia and Egypt are from 2000 to 1800 BC.[75] Many early texts mention Pythagorean triples and so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical concept after basic arithmetic and geometry. It is in Babylonian mathematics that elementary arithmetic (addition, subtraction, multiplication, and division) first appears in the archaeological record. The Babylonians also possessed a place-value system and used a sexagesimal numeral system which is still in use today for measuring angles and time.[76]

In the 6th century BC, Greek mathematics began to emerge as a distinct discipline and some Ancient Greeks such as the Pythagoreans appeared to have considered it a subject in its own right.[77] Around 300 BC, Euclid organized mathematical knowledge by way of postulates and first principles, which evolved into the axiomatic method that is used in mathematics today, consisting of definition, axiom, theorem, and proof.[78] His book, Elements, is widely considered the most successful and influential textbook of all time.[79] The greatest mathematician of antiquity is often held to be Archimedes (c. 287 – c. 212 BC) of Syracuse.[80] He developed formulas for calculating the surface area and volume of solids of revolution and used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus.[81] Other notable achievements of Greek mathematics are conic sections (Apollonius of Perga, 3rd century BC),[82] trigonometry (Hipparchus of Nicaea, 2nd century BC),[83] and the beginnings of algebra (Diophantus, 3rd century AD).[84]

The numerals used in the Bakhshali manuscript, dated between the 2nd century BC and the 2nd century AD

The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics.[85] Other notable developments of Indian mathematics include the modern definition and approximation of sine and cosine, and an early form of infinite series.[86][87]

Medieval and later

A page from al-Khwarizmi's Al-Jabr

During the Golden Age of Islam, especially during the 9th and 10th centuries, mathematics saw many important innovations building on Greek mathematics. The most notable achievement of Islamic mathematics was the development of algebra. Other achievements of the Islamic period include advances in spherical trigonometry and the addition of the decimal point to the Arabic numeral system.[88] Many notable mathematicians from this period were Persian, such as Al-Khwarizmi, Omar Khayyam and Sharaf al-Dīn al-Ṭūsī.[89] The Greek and Arabic mathematical texts were in turn translated to Latin during the Middle Ages and made available in Europe.[90]

During the early modern period, mathematics began to develop at an accelerating pace in Western Europe, with innovations that revolutionized mathematics, such as the introduction of variables and symbolic notation by François Viète (1540–1603), the introduction of logarithms by John Napier in 1614, which greatly simplified numerical calculations, especially for astronomy and marine navigation, the introduction of coordinates by René Descartes (1596–1650) for reducing geometry to algebra, and the development of calculus by Isaac Newton (1643–1727) and Gottfried Leibniz (1646–1716). Leonhard Euler (1707–1783), the most notable mathematician of the 18th century, unified these innovations into a single corpus with a standardized terminology, and completed them with the discovery and the proof of numerous theorems.[91]

Carl Friedrich Gauss

Perhaps the foremost mathematician of the 19th century was the German mathematician Carl Gauss, who made numerous contributions to fields such as algebra, analysis, differential geometry, matrix theory, number theory, and statistics.[92] In the early 20th century, Kurt Gödel transformed mathematics by publishing his incompleteness theorems, which show in part that any consistent axiomatic system—if powerful enough to describe arithmetic—will contain true propositions that cannot be proved.[55]

Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made to this very day. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews (MR) database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs."[93]

Symbolic notation and terminology

An explanation of the sigma (Σ) summation notation

Mathematical notation is widely used in science and engineering for representing complex concepts and properties in a concise, unambiguous, and accurate way. This notation consists of symbols used for representing operations, unspecified numbers, relations and any other mathematical objects, and then assembling them into expressions and formulas.[94] More precisely, numbers and other mathematical objects are represented by symbols called variables, which are generally Latin or Greek letters, and often include subscripts. Operations and relations are generally represented by specific symbols or glyphs,[95] such as + (plus), × (multiplication), ∫ (integral), = (equal), and < (less than).[96] All these symbols are generally grouped according to specific rules to form expressions and formulas.[97] Normally, expressions and formulas do not appear alone, but are included in sentences of the current language, where expressions play the role of noun phrases and formulas play the role of clauses.
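
For example, the summation notation mentioned in the caption above compresses a repeated addition into a single expression:

```latex
\sum_{i=1}^{n} i = 1 + 2 + \cdots + n = \frac{n(n+1)}{2}
```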

Mathematics has developed a rich terminology covering a broad range of fields that study the properties of various abstract, idealized objects and how they interact. It is based on rigorous definitions that provide a standard foundation for communication. An axiom or postulate is a mathematical statement that is taken to be true without need of proof. If a mathematical statement has yet to be proven (or disproven), it is termed a conjecture. Through a series of rigorous arguments employing deductive reasoning, a statement that is proven to be true becomes a theorem. A specialized theorem that is mainly used to prove another theorem is called a lemma. A proven instance that forms part of a more general finding is termed a corollary.[98]

Numerous technical terms used in mathematics are neologisms, such as polynomial and homeomorphism.[99] Other technical terms are words of the common language that are used with a precise meaning that may differ slightly from their common meaning. For example, in mathematics, "or" means "one, the other, or both", while, in common language, it is either ambiguous or means "one or the other but not both" (in mathematics, the latter is called "exclusive or"). Finally, many mathematical terms are common words that are used with a completely different meaning.[100] This may lead to sentences that are correct and true mathematical assertions, but appear to be nonsense to people who do not have the required background. For example, "every free module is flat" and "a field is always a ring".
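
The inclusive/exclusive distinction can be made concrete in one loop; in this illustrative Python sketch, `or` is the inclusive reading and `^` on booleans is the exclusive one:

```python
for a in (False, True):
    for b in (False, True):
        # 'or' is inclusive: true when one, the other, or both hold.
        # '^' (xor) is exclusive: true when exactly one holds.
        print(a, b, a or b, a ^ b)
```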

Relationship with sciences

Mathematics is used in most sciences for modeling phenomena, which then allows predictions to be made from experimental laws.[101] The independence of mathematical truth from any experimentation implies that the accuracy of such predictions depends only on the adequacy of the model.[102] Inaccurate predictions, rather than being caused by invalid mathematical concepts, imply the need to change the mathematical model used.[103] For example, the perihelion precession of Mercury could only be explained after the emergence of Einstein's general relativity, which replaced Newton's law of gravitation as a better mathematical model.[104]

There is still a philosophical debate whether mathematics is a science. However, in practice, mathematicians are typically grouped with scientists, and mathematics shares much in common with the physical sciences. Like them, it is falsifiable, which means in mathematics that, if a result or a theory is wrong, this can be proved by providing a counterexample. As in science, theories and results (theorems) are often obtained from experimentation.[105] In mathematics, the experimentation may consist of computation on selected examples or of the study of figures or other representations of mathematical objects (often mind representations without physical support). For example, when asked how he came about his theorems, Gauss once replied "durch planmässiges Tattonieren" (through systematic experimentation).[106] However, some authors emphasize that mathematics differs from the modern notion of science by not relying on empirical evidence.[107][108][109][110]

Pure and applied mathematics

Isaac Newton (left) and Gottfried Wilhelm Leibniz developed infinitesimal calculus.

Until the 19th century, the development of mathematics in the West was mainly motivated by the needs of technology and science, and there was no clear distinction between pure and applied mathematics.[111] For example, the natural numbers and arithmetic were introduced for the need of counting, and geometry was motivated by surveying, architecture, and astronomy. Later, Isaac Newton introduced infinitesimal calculus for explaining the movement of the planets with his law of gravitation. Moreover, most mathematicians were also scientists, and many scientists were also mathematicians.[112] However, a notable exception occurred with the tradition of pure mathematics in Ancient Greece.[113] The problem of integer factorization, for example, which goes back to Euclid in 300 BC, had no practical application before its use in the RSA cryptosystem, now widely used for the security of computer networks.[114]
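
A toy sketch of the idea behind RSA (the primes here are far too small for real security and are chosen purely for illustration; requires Python 3.8+ for the modular inverse via pow):

```python
# Toy RSA: breaking it only requires factoring n = p*q, which is easy at
# this size but infeasible for the ~2048-bit moduli used in practice.
p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120, computable only if p and q are known
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e

message = 1234
ciphertext = pow(message, e, n)    # encryption uses only the public n and e
recovered = pow(ciphertext, d, n)  # decryption needs d, hence the factors
assert recovered == message
print(ciphertext, recovered)
```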

In the 19th century, mathematicians such as Karl Weierstrass and Richard Dedekind increasingly focused their research on internal problems, that is, pure mathematics.[111][115] This led to a split of mathematics into pure mathematics and applied mathematics, the latter often being considered of lower value among mathematical purists. However, the lines between the two are frequently blurred.[116]

The aftermath of World War II led to a surge in the development of applied mathematics in the US and elsewhere.[117][118] Many of the theories developed for applications were found interesting from the point of view of pure mathematics, and many results of pure mathematics were shown to have applications outside mathematics; in turn, the study of these applications may give new insights on the "pure theory".[119][120]

An example of the first case is the theory of distributions, introduced by Laurent Schwartz for validating computations done in quantum mechanics, which immediately became an important tool of (pure) mathematical analysis.[121] An example of the second case is the decidability of the first-order theory of the real numbers, a problem of pure mathematics that Alfred Tarski solved with an algorithm that is impossible to implement because of its excessively high computational complexity.[122] To obtain an algorithm that can be implemented and can solve systems of polynomial equations and inequalities, George Collins introduced the cylindrical algebraic decomposition, which became a fundamental tool in real algebraic geometry.[123]

In the present day, the distinction between pure and applied mathematics is more a question of personal research aim of mathematicians than a division of mathematics into broad areas.[124][125] The Mathematics Subject Classification has a section for "general applied mathematics" but does not mention "pure mathematics".[14] However, these terms are still used in names of some university departments, such as at the Faculty of Mathematics at the University of Cambridge.

Unreasonable effectiveness

The unreasonable effectiveness of mathematics is a phenomenon that was named and first made explicit by physicist Eugene Wigner.[3] It is the fact that many mathematical theories (even the "purest") have applications outside their initial object. These applications may be completely outside their initial area of mathematics, and may concern physical phenomena that were completely unknown when the mathematical theory was introduced.[126] Examples of unexpected applications of mathematical theories can be found in many areas of mathematics.

A notable example is the prime factorization of natural numbers that was discovered more than 2,000 years before its common use for secure internet communications through the RSA cryptosystem.[127] A second historical example is the theory of ellipses. They were studied by the ancient Greek mathematicians as conic sections (that is, intersections of cones with planes). It was almost 2,000 years later that Johannes Kepler discovered that the trajectories of the planets are ellipses.[128]

In the 19th century, the internal development of geometry (pure mathematics) led to the definition and study of non-Euclidean geometries, spaces of dimension higher than three, and manifolds. At this time, these concepts seemed totally disconnected from physical reality, but at the beginning of the 20th century, Albert Einstein developed the theory of relativity, which fundamentally uses these concepts. In particular, the spacetime of special relativity is a non-Euclidean space of dimension four, and the spacetime of general relativity is a (curved) manifold of dimension four.[129][130]

A striking aspect of the interaction between mathematics and physics is when mathematics drives research in physics. This is illustrated by the discoveries of the positron and the baryon Ω⁻. In both cases, the equations of the theories had unexplained solutions, which led to the conjecture of the existence of an unknown particle and to the search for these particles. In both cases, these particles were discovered a few years later by specific experiments.[131][132][133]

Specific sciences

Physics

Diagram of a pendulum

Mathematics and physics have influenced each other over their modern history. Modern physics uses mathematics abundantly,[134] and is also considered to be the motivation of major mathematical developments.[135]

Computing

Computing is closely related to mathematics in several ways.[136] Theoretical computer science is considered to be mathematical in nature.[137] Communication technologies apply branches of mathematics that may be very old (e.g., arithmetic), especially with respect to transmission security, in cryptography and coding theory. Discrete mathematics is useful in many areas of computer science, such as complexity theory, information theory, and graph theory.[138] In 1998, the Kepler conjecture on sphere packing seemed to also be partially proven by computer.[139]

Biology and chemistry

The skin of this giant pufferfish exhibits a Turing pattern, which can be modeled by reaction–diffusion systems.

Biology uses probability extensively in fields such as ecology or neurobiology.[140] Most discussion of probability centers on the concept of evolutionary fitness.[140] Ecology heavily uses modeling to simulate population dynamics,[140][141] study ecosystems such as the predator–prey model, measure pollution diffusion,[142] or assess climate change.[143] The dynamics of a population can be modeled by coupled differential equations, such as the Lotka–Volterra equations.[144]
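
A minimal numerical sketch of such coupled equations, using simple Euler integration of the Lotka–Volterra system; the parameter values and initial populations are placeholders, not taken from the article:

```python
# Lotka-Volterra predator-prey model: dx/dt = a*x - b*x*y, dy/dt = d*x*y - g*y
a, b, d, g = 1.1, 0.4, 0.1, 0.4  # illustrative placeholder parameters
x, y = 10.0, 5.0                  # initial prey and predator populations
dt = 0.001                        # Euler time step

for step in range(int(50 / dt)):
    dx = (a * x - b * x * y) * dt  # prey grow, and are eaten
    dy = (d * x * y - g * y) * dt  # predators feed, and die off
    x, y = x + dx, y + dy
    if step % 10_000 == 0:
        print(f"t={step * dt:5.1f}  prey={x:7.2f}  predators={y:6.2f}")
```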

Statistical hypothesis testing is run on data from clinical trials to determine whether a new treatment works.[145] Since the start of the 20th century, chemistry has used computing to model molecules in three dimensions.[146]

Earth sciences

Structural geology and climatology use probabilistic models to predict the risk of natural catastrophes.[147] Similarly, meteorology, oceanography, and planetology also use mathematics due to their heavy use of models.[148][149][150]

Social sciences

Areas of mathematics used in the social sciences include probability/statistics and differential equations. These are used in linguistics, economics, sociology,[151] and psychology.[152]

Supply and demand curves, like this one, are a staple of mathematical economics.

Often the fundamental postulate of mathematical economics is that of the rational individual actor – Homo economicus (lit. 'economic man').[153] In this model, the individual seeks to maximize their self-interest,[153] and always makes optimal choices using perfect information.[154] This atomistic view of economics makes it relatively easy to mathematize economic thinking, because individual calculations are transposed into mathematical calculations. Such mathematical modeling allows one to probe economic mechanisms. Some reject or criticise the concept of Homo economicus. Economists note that real people have limited information, make poor choices, and care about fairness and altruism, not just personal gain.[155]

Without mathematical modeling, it is hard to go beyond statistical observations or untestable speculation. Mathematical modeling allows economists to create structured frameworks to test hypotheses and analyze complex interactions. Models provide clarity and precision, enabling the translation of theoretical concepts into quantifiable predictions that can be tested against real-world data.[156]

At the start of the 20th century, there was a movement to express historical developments in formulas. In 1922, Nikolai Kondratiev discerned the roughly 50-year-long Kondratiev cycle, which explains phases of economic growth or crisis.[157] Towards the end of the 19th century, mathematicians had already extended their analysis into geopolitics.[158] Peter Turchin developed cliodynamics in the 1990s.[159]

Mathematization of the social sciences is not without risk. In the controversial book Fashionable Nonsense (1997), Sokal and Bricmont denounced the unfounded or abusive use of scientific terminology, particularly from mathematics or physics, in the social sciences.[160] The study of complex systems (evolution of unemployment, business capital, demographic evolution of a population, etc.) uses mathematical knowledge. However, the choice of counting criteria, particularly for unemployment, or of models, can be subject to controversy.[161][162]

Philosophy

Reality

The connection between mathematics and material reality has led to philosophical debates since at least the time of Pythagoras. The ancient philosopher Plato argued that abstractions that reflect material reality have themselves a reality that exists outside space and time. As a result, the philosophical view that mathematical objects somehow exist on their own in abstraction is often referred to as Platonism. Independently of their possible philosophical opinions, modern mathematicians may be generally considered as Platonists, since they think of and talk of their objects of study as real objects.[163]

Armand Borel summarized this view of mathematical reality as follows, and provided quotations of G. H. Hardy, Charles Hermite, Henri Poincaré and Albert Einstein that support his views.[131]

Something becomes objective (as opposed to "subjective") as soon as we are convinced that it exists in the minds of others in the same form as it does in ours and that we can think about it and discuss it together.[164] Because the language of mathematics is so precise, it is ideally suited to defining concepts for which such a consensus exists. In my opinion, that is sufficient to provide us with a feeling of an objective existence, of a reality of mathematics ...

Nevertheless, Platonism and the concurrent views on abstraction do not explain the unreasonable effectiveness of mathematics (as Platonism assumes mathematics exists independently, but does not explain why it matches reality).[165]

Proposed definitions

There is no general consensus about the definition of mathematics or its epistemological status—that is, its place inside knowledge. A great many professional mathematicians take no interest in a definition of mathematics, or consider it undefinable. There is not even consensus on whether mathematics is an art or a science. Some just say, "mathematics is what mathematicians do".[166][167] A common approach is to define mathematics by its object of study.[168][169][170][171]

Aristotle defined mathematics as "the science of quantity" and this definition prevailed until the 18th century. However, Aristotle also noted a focus on quantity alone may not distinguish mathematics from sciences like physics; in his view, abstraction and studying quantity as a property "separable in thought" from real instances set mathematics apart.[172] In the 19th century, when mathematicians began to address topics—such as infinite sets—which have no clear-cut relation to physical reality, a variety of new definitions were given.[173] With the large number of new areas of mathematics that have appeared since the beginning of the 20th century, defining mathematics by its object of study has become increasingly difficult.[174] For example, in lieu of a definition, Saunders Mac Lane in Mathematics, form and function summarizes the basics of several areas of mathematics, emphasizing their inter-connectedness, and observes:[175]

the development of Mathematics provides a tightly connected network of formal rules, concepts, and systems. Nodes of this network are closely bound to procedures useful in human activities and to questions arising in science. The transition from activities to the formal Mathematical systems is guided by a variety of general insights and ideas.

Another approach for defining mathematics is to use its methods. For example, an area of study is often qualified as mathematics as soon as one can prove theorems—assertions whose validity relies on a proof, that is, a purely logical deduction.[d][176]

Rigor

Mathematical reasoning requires rigor. This means that the definitions must be absolutely unambiguous and the proofs must be reducible to a succession of applications of inference rules,[e] without any use of empirical evidence and intuition.[f][177] Rigorous reasoning is not specific to mathematics, but, in mathematics, the standard of rigor is much higher than elsewhere. Despite mathematics' concision, rigorous proofs can require hundreds of pages to express, such as the 255-page Feit–Thompson theorem.[g] The emergence of computer-assisted proofs has allowed proof lengths to further expand.[h][178] The result of this trend is a philosophy of the quasi-empiricist proof, which cannot be considered infallible but has a probability attached to it.[6]

The concept of rigor in mathematics dates back to ancient Greece, where their society encouraged logical, deductive reasoning. However, this rigorous approach would tend to discourage exploration of new approaches, such as irrational numbers and concepts of infinity. The method of demonstrating rigorous proof was enhanced in the sixteenth century through the use of symbolic notation. In the 18th century, social transition led to mathematicians earning their keep through teaching, which led to more careful thinking about the underlying concepts of mathematics. This produced more rigorous approaches, while transitioning from geometric methods to algebraic and then arithmetic proofs.[6]

At the end of the 19th century, it appeared that the definitions of the basic concepts of mathematics were not accurate enough to avoid paradoxes (non-Euclidean geometries and the Weierstrass function) and contradictions (Russell's paradox). This was solved by the inclusion of axioms alongside the apodictic inference rules of mathematical theories, a re-introduction of the axiomatic method pioneered by the ancient Greeks.[6] It follows that "rigor" is no longer a relevant concept within mathematics itself, as a proof is either correct or erroneous, and a "rigorous proof" is simply a pleonasm. Where a special concept of rigor does come into play is in the social aspects of a proof, wherein it may be demonstrably refuted by other mathematicians. After a proof has been accepted for many years or even decades, it can then be considered as reliable.[179]

Nevertheless, the concept of "rigor" may remain useful for teaching beginners what a mathematical proof is.[180]

Training and practice

Education

Mathematics has a remarkable ability to cross cultural boundaries and time periods. As a human activity, the practice of mathematics has a social side, which includes education, careers, recognition, popularization, and so on. In education, mathematics is a core part of the curriculum and forms an important element of the STEM academic disciplines. Prominent careers for professional mathematicians include mathematics teacher or professor, statistician, actuary, financial analyst, economist, accountant, commodity trader, or computer consultant.[181]

Archaeological evidence shows that instruction in mathematics occurred as early as the second millennium BCE in ancient Babylonia.[182] Comparable evidence has been unearthed for scribal mathematics training in the ancient Near East and then for the Greco-Roman world starting around 300 BCE.[183] The oldest known mathematics textbook is the Rhind papyrus, dated from c. 1650 BCE in Egypt.[184] Due to a scarcity of books, mathematical teachings in ancient India were communicated using memorized oral tradition since the Vedic period (c. 1500 – c. 500 BCE).[185] In Imperial China during the Tang dynasty (618–907 CE), a mathematics curriculum was adopted for the civil service exam to join the state bureaucracy.[186]

Following the Dark Ages, mathematics education in Europe was provided by religious schools as part of the Quadrivium. Formal instruction in pedagogy began with Jesuit schools in the 16th and 17th century. Most mathematical curricula remained at a basic and practical level until the nineteenth century, when it began to flourish in France and Germany. The oldest journal addressing instruction in mathematics was L'Enseignement Mathématique, which began publication in 1899.[187] The Western advancements in science and technology led to the establishment of centralized education systems in many nation-states, with mathematics as a core component—initially for its military applications.[188] While the content of courses varies, in the present day nearly all countries teach mathematics to students for significant amounts of time.[189]

During school, mathematical capabilities and positive expectations have a strong association with career interest in the field. Extrinsic factors such as feedback motivation by teachers, parents, and peer groups can influence the level of interest in mathematics.[190] Some students studying mathematics may develop an apprehension or fear about their performance in the subject. This is known as mathematical anxiety, and is considered the most prominent of the disorders impacting academic performance. Mathematical anxiety can develop due to various factors such as parental and teacher attitudes, social stereotypes, and personal traits. Help to counteract the anxiety can come from changes in instructional approaches, by interactions with parents and teachers, and by tailored treatments for the individual.[191]

Psychology (aesthetic, creativity and intuition)

The validity of a mathematical theorem relies only on the rigor of its proof, which could theoretically be done automatically by a computer program. This does not mean that there is no place for creativity in mathematical work. On the contrary, many important mathematical results (theorems) are solutions of problems that other mathematicians failed to solve, and inventing a way to solve them may be a fundamental part of the solving process.[192][193] An extreme example is Apéry's theorem: Roger Apéry provided only the ideas for a proof, and the formal proof was given only several months later by three other mathematicians.[194]

Creativity and rigor are not the only psychological aspects of the activity of mathematicians. Some mathematicians can see their activity as a game, more specifically as solving puzzles.[195] This aspect of mathematical activity is emphasized in recreational mathematics.

Mathematicians can find an aesthetic value in mathematics. Like beauty, it is hard to define; it is commonly related to elegance, which involves qualities like simplicity, symmetry, completeness, and generality. G. H. Hardy in A Mathematician's Apology expressed the belief that the aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. He also identified other criteria such as significance, unexpectedness, and inevitability, which contribute to mathematical aesthetics.[196] Paul Erdős expressed this sentiment more ironically by speaking of "The Book", a supposed divine collection of the most beautiful proofs. The 1998 book Proofs from THE BOOK, inspired by Erdős, is a collection of particularly succinct and revelatory mathematical arguments. Some examples of particularly elegant results included are Euclid's proof that there are infinitely many prime numbers and the fast Fourier transform for harmonic analysis.[197]

Some feel that to consider mathematics a science is to downplay its artistry and history in the seven traditional liberal arts.[198] One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematical results are created (as in art) or discovered (as in science).[131] The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions.

Cultural impact

Artistic expression

Notes that sound well together to a Western ear are sounds whose fundamental frequencies of vibration are in simple ratios. For example, an octave doubles the frequency and a perfect fifth multiplies it by 3/2.[199][200]
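
A small arithmetic illustration, assuming the conventional A4 = 440 Hz reference pitch (an assumption not made in the text):

```python
base = 440.0  # A4 reference pitch in Hz (a conventional choice, not from the text)
print(base * 2)      # 880.0 Hz: one octave up doubles the frequency
print(base * 3 / 2)  # 660.0 Hz: a perfect fifth multiplies it by 3/2
```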

Fractal with a scaling symmetry and a central symmetry

Humans, as well as some other animals, find symmetric patterns to be more beautiful.[201] Mathematically, the symmetries of an object form a group known as the symmetry group.[202] For example, the group underlying mirror symmetry is the cyclic group of two elements, Z/2Z. A Rorschach test is a figure invariant by this symmetry,[203] as are butterfly and animal bodies more generally (at least on the surface).[204] Waves on the sea surface possess translation symmetry: moving one's viewpoint by the distance between wave crests does not change one's view of the sea.[205] Fractals possess self-similarity.[206][207]

Popularization

Popular mathematics is the act of presenting mathematics without technical terms.[208] Presenting mathematics may be hard since the general public suffers from mathematical anxiety and mathematical objects are highly abstract.[209] However, popular mathematics writing can overcome this by using applications or cultural links.[210] Despite this, mathematics is rarely the topic of popularization in printed or televised media.

Awards and prize problems

The front side of the Fields Medal with an illustration of the Greek polymath Archimedes

The most prestigious award in mathematics is the Fields Medal,[211][212] established by Canadian John Charles Fields in 1936 and awarded every four years (except around World War II) to up to four individuals.[213][214] It is considered the mathematical equivalent of the Nobel Prize.[214]

Other prestigious mathematics awards include:[215]

A famous list of 23 open problems, called "Hilbert's problems", was compiled in 1900 by German mathematician David Hilbert.[223] This list has achieved great celebrity among mathematicians,[224] and at least thirteen of the problems (depending how some are interpreted) have been solved.[223]

A new list of seven important problems, titled the "Millennium Prize Problems", was published in 2000. Only one of them, the Riemann hypothesis, duplicates one of Hilbert's problems. A solution to any of these problems carries a 1 million dollar reward.[225] To date, only one of these problems, the Poincaré conjecture, has been solved, by the Russian mathematician Grigori Perelman.[226]

from Grokipedia
Mathematics (/ˌmæθ.əˈmæt.ɪks/ (British English) or /ˌmæθˈmæt.ɪks/ (American English); sounding like "math-uh-MAT-iks" with primary stress on the third syllable ("mat")) is the study of quantity, structure, space, and change. It is often described as an abstract science of numbers, quantity, and space, where mathematicians seek out patterns, formulate conjectures, and establish truths through rigorous reasoning. The discipline is traditionally divided into pure mathematics and applied mathematics. Pure mathematics focuses on abstract concepts, theoretical possibilities, and proof-based exploration independent of immediate practical applications. In contrast, applied mathematics develops models, techniques, and methods for practical use in fields such as science, engineering, and other disciplines. This distinction highlights mathematics' dual nature as both a quest for fundamental understanding and a powerful tool for addressing real-world problems.

Etymology and definitions

Etymology

The word mathematics derives from the Ancient Greek μάθημα (máthēma), meaning "that which is learned," "learning," "study," or "science." In ancient Greek usage, the plural τὰ μαθήματα (ta mathēmata) referred to "the things learned" or "the mathematical sciences," encompassing arithmetic, geometry, astronomy, and harmonics, as these were grouped among the sciences concerned with number, magnitude, and proportion. The term passed into Latin as mathematica (or mathematicae in plural), retaining a similar broad sense. From Latin it entered French as mathematique and then English, appearing as mathematics in the late 16th century, initially and predominantly in plural form to reflect the Greek and Latin plural construction. The related adjective mathematical derives from Latin mathematicus, itself from Greek mathematikos ("pertaining to learning" or "mathematical"). The modern narrow sense of mathematics as the study of numbers, quantities, structures, space, and change developed gradually, distinguishing it from the wider ancient inclusion of astronomy and music among the "mathematical" disciplines.

Definitions

Mathematics is commonly defined as the study of quantity, structure, space, and change, developed through abstraction from practices such as counting, measuring, and describing shapes. This characterization encompasses core areas ranging from arithmetic and algebra to geometry and analysis. Alternative descriptions emphasize mathematics as the study of abstract patterns, structures, and relationships, highlighting its focus on logical connections and quantitative aspects. Prominent mathematicians have offered distinct perspectives on its essential nature. G. H. Hardy stressed the aesthetic dimension of the subject, asserting that mathematical patterns must possess beauty comparable to those created by painters or poets, and that "beauty is the first test: there is no permanent place in this world for ugly mathematics." In the early 20th century, foundational debates crystallized around three major schools of thought in the philosophy of mathematics. Logicism sought to reduce all mathematical concepts and truths to pure logic. Intuitionism viewed mathematics as fundamentally a mental activity of construction, rejecting non-constructive proofs and the independent existence of mathematical objects. Formalism, advanced by David Hilbert, treated mathematics as the manipulation of meaningless symbols according to explicit rules, with consistency serving as the key criterion for validity—Hilbert held that consistency suffices for existence within formal systems.

Pure versus applied mathematics

Pure mathematics is the study of mathematical concepts, structures, and theories for their own sake, driven by intellectual curiosity, aesthetic appeal, and the pursuit of abstract truth rather than immediate practical utility. It emphasizes rigorous proof, logical deduction, and the exploration of general properties of mathematical objects, often without concern for real-world applications at the time of discovery. Examples include areas like number theory, topology, and abstract algebra, where the primary goal is to understand fundamental mathematical principles. Applied mathematics, in contrast, develops and employs mathematical methods to solve problems arising in other disciplines, such as physics, engineering, economics, computer science, biology, and finance. It focuses on constructing models, deriving solutions, and using computational or analytical techniques to address concrete, often real-world questions. Applied mathematics frequently involves differential equations, numerical methods, optimization, and data analysis tailored to practical needs. The boundary between pure and applied mathematics is fluid and not always sharply defined. Advances in pure mathematics frequently find unexpected and profound applications in other fields, often long after their initial development. Conversely, applied problems can inspire new pure mathematical research and lead to the creation of novel abstract theories. This interplay demonstrates the deep unity of mathematics, where abstract ideas motivated by internal logic can ultimately prove essential to understanding and advancing knowledge in the natural and social sciences.

History

Prehistoric and ancient mathematics

The earliest traces of mathematical activity appear in prehistory through artifacts demonstrating basic counting and recording of quantities, such as notched bones used for tallying, though systematic mathematics developed with the rise of ancient civilizations. In ancient Egypt, mathematics served practical purposes like land measurement, construction, and taxation. The Rhind Mathematical Papyrus, copied around 1650 BCE by the scribe Ahmes from an older text, is a primary source documenting these techniques. It contains 84 problems involving arithmetic operations, unit fractions (expressing fractions as sums of distinct unit fractions), geometry for areas and volumes, and linear equations. Examples include calculating the area of circles using an approximation of π as (16/9)^2 and solving problems related to pyramid volumes. The Egyptians employed a decimal system with hieroglyphic numerals and methods for multiplication and division based on repeated doubling and addition. Babylonian mathematics, flourishing from the third millennium BCE, utilized a sexagesimal (base-60) positional numeral system that supported fractional and large-number calculations more flexibly than many contemporaries. Clay tablets preserve multiplication tables, reciprocal tables, and algorithms for square and cube roots. A remarkable example is Plimpton 322, a tablet dated to approximately 1800 BCE, listing Pythagorean triples—triples of numbers satisfying the relation a^2 + b^2 = c^2 for the side lengths of right triangles—demonstrating practical knowledge of such relations in surveying or construction. The sexagesimal system's legacy persists in modern time (60 seconds per minute, 60 minutes per hour) and angular measurement. In ancient India, the Sulba Sutras (c. 800–200 BCE), part of Vedic literature, focused on geometric constructions for ritual altars. These texts include rules equivalent to the Pythagorean theorem for right triangles and accurate approximations for √2 (such as 1 + 1/3 + 1/(3×4) − 1/(3×4×34)), as well as constructions transforming rectangles to squares. Ancient Chinese mathematics developed an early decimal positional system using counting rods on boards, enabling addition, subtraction, multiplication, and division. Evidence includes sophisticated decimal multiplication tables from around 310 BCE, and the system supported practical computations in commerce and administration.

Classical antiquity

Classical antiquity saw the emergence of mathematics as a deductive discipline in ancient Greece and the Hellenistic world, roughly from the 6th century BCE to the 6th century CE. Greek mathematicians shifted from practical calculation and empirical observation to rigorous proof based on axioms and logical deduction, establishing a foundation that distinguished mathematics as an abstract science. Thales of Miletus (c. 624–546 BCE) is credited with the earliest known deductive proofs in geometry, demonstrating theorems such as the fact that the diameter divides a circle into two equal semicircles and that vertical angles are equal. The Pythagorean school, led by Pythagoras (c. 570–495 BCE), further developed geometry and number theory, with the famous theorem stating that in a right triangle the square of the hypotenuse equals the sum of the squares of the other two sides. Around 300 BCE, Euclid compiled the Elements, a systematic treatise in 13 books that organized plane and solid geometry, number theory, and proportions into a deductive structure beginning from five postulates, five common notions, and definitions. The work's axiomatic approach, with every proposition proved from prior results, became the paradigm for mathematical reasoning. Archimedes (c. 287–212 BCE) employed the method of exhaustion to rigorously determine areas and volumes, such as the area of the circle and the surface area and volume of the sphere, by approximating them with inscribed and circumscribed polygons and passing to the limit. Apollonius of Perga (c. 262–190 BCE) produced the definitive study of conic sections in his eight-book Conics, defining the ellipse, parabola, and hyperbola geometrically and deriving their properties through deduction. Greek number theory focused on properties of integers, including perfect numbers, amicable pairs, and primes, with Euclid proving that there are infinitely many primes and developing the algorithm for the greatest common divisor. Early algebraic techniques emerged in the Hellenistic period, particularly in solving determinate and indeterminate equations, as exemplified later in the work of Diophantus (c. 250 CE).

Medieval and Islamic mathematics

During the Islamic Golden Age, roughly spanning the 8th to the 13th centuries, mathematics flourished in the Islamic world, with scholars translating and preserving ancient Greek mathematical texts into Arabic, thereby safeguarding works by Euclid, Archimedes, Apollonius, and others for later generations. These translation efforts, centered in institutions such as the House of Wisdom in Baghdad, enabled Islamic mathematicians to build upon Greek geometry and number theory while integrating concepts from Indian mathematics. A major contribution was the adoption and dissemination of the Hindu-Arabic numeral system, including the use of zero and positional notation, which replaced Roman numerals in many calculations and facilitated advanced arithmetic. Muhammad ibn Musa al-Khwarizmi (c. 780–850) played a central role in this process through his treatise on Indian calculation methods, which introduced these numerals to the Islamic world and, later, to Europe. The Latinized form of his name, together with the systematic procedures he described, gave rise to the term "algorithm." Al-Khwarizmi's most influential work was Al-Kitab al-mukhtasar fi hisab al-jabr wa'l-muqabala (The Compendious Book on Calculation by Completion and Balancing), which established algebra as a distinct discipline focused on solving linear and quadratic equations through systematic methods of balancing and completion. This text classified quadratic equations and provided both arithmetic and geometric solutions, laying foundational principles for the field. Trigonometric advances progressed significantly, with scholars refining sine tables and applying trigonometry to astronomy and spherical geometry. Abu al-Wafa al-Buzjani (940–998), a polymath, advanced trigonometry as an independent mathematical subject, producing accurate sine tables and developing spherical trigonometric identities that surpassed earlier work. Omar Khayyam (1048–1131) made notable contributions to algebra by solving cubic equations geometrically through intersections of conic sections, combining algebraic techniques with Euclidean geometry and approximation methods. He also explored the parallel postulate in ways that anticipated later non-Euclidean geometry. Nasir al-Din al-Tusi further contributed with the Tusi couple, a geometric construction that produced straight-line motion from circular components, influencing planetary models and mathematical mechanics. These achievements in algebra, arithmetic, and trigonometry, alongside the preservation of classical knowledge, formed a bridge between ancient mathematics and later European developments.

Renaissance to 18th century

The Renaissance in Europe, beginning in the 14th century and peaking in the 15th and 16th centuries, marked a revival of classical Greek mathematical knowledge through the translation of Arabic and Byzantine texts, combined with new innovations driven by practical needs in art, commerce, navigation, and astronomy. One major development was the invention of linear perspective in art and architecture. Filippo Brunelleschi demonstrated its principles around 1415 through experiments with mirrors and drawings, enabling realistic representation of three-dimensional space on a two-dimensional surface. Leon Battista Alberti systematized these ideas in his 1435 treatise Della Pittura (On Painting), providing geometric rules for perspective construction. In the 16th century, algebra advanced significantly with the solution of cubic equations by Scipione del Ferro, Niccolò Tartaglia, and Gerolamo Cardano, culminating in Cardano's publication of Ars Magna (1545). Rafael Bombelli extended this work in his Algebra (1572) by introducing rules for operations involving square roots of negative numbers—early steps toward complex numbers—to resolve irreducible cubics. John Napier introduced logarithms in his 1614 work Mirifici Logarithmorum Canonis Descriptio, transforming tedious multiplications and divisions into simpler additions and subtractions, which proved invaluable for astronomical and navigational computations. René Descartes established analytic geometry in La Géométrie (1637), an appendix to his Discourse on Method. By assigning coordinates to points and expressing geometric curves through algebraic equations, he unified algebra and geometry, enabling the algebraic solution of geometric problems. Blaise Pascal and Pierre de Fermat founded probability theory through their 1654 correspondence on the "problem of points" in games of chance, developing concepts of expected value and combinatorial methods that formed the basis of modern probability. In the late 17th century, Isaac Newton and Gottfried Wilhelm Leibniz independently developed calculus (the basic ideas and notation are detailed in the Analysis section). Newton formulated his method of fluxions in the 1660s and applied it in the Philosophiæ Naturalis Principia Mathematica (1687), while Leibniz published his differential and integral calculus in papers from 1684 to 1686. The 18th century was dominated by Leonhard Euler, whose vast output touched nearly every area of mathematics. He introduced influential notations including e for the base of natural logarithms, i for the square root of −1, f(x) for function notation, and Σ for summation. Euler advanced infinite series, complex analysis (notably Euler's identity e^{i\pi} + 1 = 0), number theory, differential equations, and graph theory with his 1736 solution to the Seven Bridges of Königsberg problem.

19th and 20th centuries

The 19th century witnessed a profound shift toward rigor and abstraction in mathematics. The foundations of calculus, long accepted on intuitive grounds, were placed on a firmer footing through the work of Augustin-Louis Cauchy and Karl Weierstrass. Cauchy, in his 1821 Cours d'analyse, provided rigorous definitions of limits and continuity, reducing reliance on infinitesimals. Weierstrass, in the 1860s and 1870s, introduced the epsilon-delta definition of limits and continuity, precise definitions of the real numbers, and uniform convergence, resolving issues in earlier analysis. Simultaneously, the discovery of non-Euclidean geometries fundamentally altered understanding of space. Independently, Nikolai Lobachevsky (1829) and János Bolyai (1832) constructed hyperbolic geometries that satisfied all of Euclid's axioms except the parallel postulate, showing that the postulate was independent of the others. Carl Friedrich Gauss had developed similar ideas privately earlier, while Bernhard Riemann's 1854 habilitation lecture introduced elliptic geometry and the concept of manifolds with intrinsic curvature, laying groundwork for later general relativity. These developments demonstrated that geometry was not uniquely determined by a priori intuition, opening the door to abstract structural approaches. Algebra saw the emergence of group theory, stemming from Évariste Galois's work in the 1830s on the solvability of polynomial equations by radicals, which introduced the idea of groups of permutations. Arthur Cayley formalized the abstract notion of a group in 1854, and the subject expanded rapidly, with contributions from Camille Jordan and others, establishing groups as fundamental structures. Georg Cantor founded set theory in the 1870s, developing the theory of infinite sets, transfinite cardinals, and ordinals. His demonstration that the real numbers are uncountable (1874) and his formulation of the continuum hypothesis marked a radical expansion of mathematical objects, though it also generated paradoxes that prompted foundational crises. In the 20th century, these trends intensified. Kurt Gödel's incompleteness theorems (1931) proved that any consistent formal system powerful enough to describe arithmetic is incomplete, containing true but unprovable statements, shattering David Hilbert's program for a complete and consistent axiomatization of mathematics. The Bourbaki group, active from 1939, sought to rebuild mathematics on rigorous set-theoretic foundations, producing the multi-volume Éléments de mathématique that emphasized abstract structures (such as algebra, topology, and integration) over specific examples, influencing mathematical presentation and education for decades. The advent of electronic computers enabled new forms of proof. The four color theorem, stating that four colors suffice to color any planar map so that no adjacent regions share a color, was proved in 1976 by Kenneth Appel and Wolfgang Haken using computer verification of an extensive case analysis, marking the first major theorem established with essential machine assistance. Such methods expanded further in the late 20th century, though they raised philosophical questions about the nature of proof.

21st-century developments

The 21st century has seen major breakthroughs in pure mathematics, alongside the rapid emergence of computational and machine-learning tools that are reshaping research. One landmark achievement was Grigori Perelman's proof of the Poincaré conjecture, posted in three preprints in 2002–2003, which used Ricci flow with surgery to classify three-dimensional manifolds. This resolved one of the seven Millennium Prize Problems posed by the Clay Mathematics Institute. The Langlands program, a far-reaching framework connecting number theory, geometry, and representation theory, has advanced significantly. Notable developments include the 2024 proof of the geometric Langlands conjecture by a team of nine mathematicians, establishing a key pillar of the program. New areas have gained prominence. Tropical geometry, which replaces classical algebraic operations with min and plus operations to yield combinatorial analogs of geometric objects, has undergone rapid development since the early 2000s and found applications in enumerative geometry and beyond. Homotopy type theory, pioneered by Vladimir Voevodsky, integrates homotopy theory with dependent type theory to provide a foundation for mathematics that supports computational formalization and machine-checked proof. Machine learning has increasingly contributed to mathematical discovery. Collaborations involving DeepMind have used neural networks to identify new patterns and conjectures in areas such as knot theory and representation theory, demonstrating AI's potential as a research partner. These developments reflect a broader trend toward hybrid approaches combining rigorous proof with computational tools and novel perspectives.

Foundations

Mathematical logic

Mathematical logic is the study of formal systems of logic, their expressive power, decidability, and limitations, providing the rigorous foundation for reasoning in mathematics and computer science. Propositional logic, also known as sentential logic, formalizes reasoning using propositions combined with logical connectives including negation (¬), conjunction (∧), disjunction (∨), implication (→), and equivalence (↔). Its semantics are defined via truth tables, which assign truth values to compound formulas based on the values of their atomic components. Propositional logic is decidable, as any formula can be mechanically checked for validity or satisfiability using truth tables or efficient algorithms like the Davis–Putnam–Logemann–Loveland procedure. Predicate logic, or first-order logic, extends propositional logic by including variables, quantifiers (universal ∀ and existential ∃), and predicates that express properties and relations among objects. It allows statements such as ∀x (P(x) → Q(x)), expressing "for all x, if P holds for x then Q holds for x." The syntax includes terms, atomic formulas, and recursive formation of complex formulas using connectives and quantifiers. First-order logic is the standard logic for most mathematical theories due to its balance of expressive power and manageability. Kurt Gödel's completeness theorem (1930) establishes that every logically valid sentence in first-order logic has a proof in the standard Hilbert-style axiomatization, meaning that semantic entailment coincides with syntactic provability. In contrast, Gödel's first incompleteness theorem (1931) shows that any consistent formal system capable of expressing basic arithmetic (such as Peano arithmetic) is incomplete: there exist true statements in the language of the system that cannot be proved within it. The second incompleteness theorem states that such a system cannot prove its own consistency, assuming it is consistent. These results demonstrated fundamental limitations of formal axiomatic systems. Computability theory, pioneered by Alan Turing, addresses which functions can be mechanically computed. Turing introduced the Turing machine in 1936 as an abstract model of computation consisting of an infinite tape, a read/write head, and a finite set of states with transition rules. A function is computable if there exists a Turing machine that computes it. The Church-Turing thesis posits that any effectively calculable function can be computed by a Turing machine (or equivalent models such as lambda calculus or recursive functions). Undecidable problems, such as the halting problem, show that no algorithm exists to determine whether an arbitrary Turing machine halts on a given input. Decidability concerns whether a problem has an algorithmic solution: a decision problem is decidable if there is a Turing machine that halts on all inputs and correctly outputs yes or no. Some fundamental questions are undecidable, such as the Entscheidungsproblem posed by David Hilbert, which Turing and Church independently proved undecidable in 1936. Recursion theory, closely related to computability, classifies sets and functions as recursive (computable) or recursively enumerable, and studies Turing degrees, which measure relative computability. Model theory investigates the relationships between formal languages and their interpretations in structures. A model of a theory is a structure (a domain with interpretations of constants, functions, and relations) that satisfies all sentences of the theory.
Key results include the compactness theorem, which states that a set of first-order sentences has a model if every finite subset does, and the Löwenheim–Skolem theorems, which guarantee models of various infinite cardinalities. Model theory provides tools for studying algebraic structures, definability, and the classification of theories. Mathematical logic also underpins set theory by providing the formal language in which its axioms are expressed, though the detailed set-theoretic axioms are treated separately.
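The decidability of propositional logic described above can be made concrete with a brute-force truth-table check. The sketch below is illustrative only: encoding formulas as Python boolean functions is an assumption of this example, not a standard API.

```python
from itertools import product

def is_valid(formula, n_vars):
    """Tautology check: the formula is true under every assignment."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

def is_satisfiable(formula, n_vars):
    """Satisfiability check: the formula is true under some assignment."""
    return any(formula(*vals) for vals in product([False, True], repeat=n_vars))

implies = lambda a, b: (not a) or b                           # material implication
peirce = lambda p, q: implies(implies(implies(p, q), p), p)   # Peirce's law
print(is_valid(peirce, 2))                                    # True: a tautology
print(is_satisfiable(lambda p, q: p and not p, 2))            # False: a contradiction
```

Enumerating all 2^n assignments is exponential in the number of variables, which is why practical solvers use refinements such as the DPLL procedure mentioned above.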

Set theory

Set theory is the branch of mathematics that studies sets, abstract collections of distinct objects considered independently of their internal structure or order. It provides a foundational framework for most of modern mathematics by allowing the definition of mathematical objects—such as numbers, functions, and spaces—as sets and by formalizing proofs in terms of set membership and inclusion. The standard axiomatic system for set theory is Zermelo-Fraenkel set theory with the axiom of choice (ZFC), which includes axioms for extensionality, the empty set, pairing, union, power set, infinity, separation, replacement, regularity (foundation), and the axiom of choice. ZFC avoids paradoxes like Russell's by restricting set formation and enables the rigorous construction of mathematical structures. Within ZFC, ordinal numbers are well-ordered sets used to measure the order types of well-orderings, while cardinal numbers measure the size of sets. Ordinals are well-ordered by membership, and cardinals are initial ordinals (ordinals not equinumerous to any smaller ordinal). Infinite cardinals are denoted by \aleph_\alpha, with \aleph_0 being the cardinality of the natural numbers and subsequent alephs indexing larger infinite sizes. A central question in set theory is the continuum hypothesis (CH), which asserts that there is no set whose cardinality is strictly between that of the natural numbers and that of the real numbers, i.e., 2^{\aleph_0} = \aleph_1. In 1938, Kurt Gödel proved the consistency of CH (and the generalized continuum hypothesis, GCH) relative to ZFC by constructing the constructible universe L, the smallest transitive inner model of ZFC containing all ordinals, in which GCH holds and every set is definable from ordinals in a hierarchical manner. In 1963, Paul Cohen introduced the technique of forcing to prove the independence of CH from ZFC, showing that the negation of CH is also consistent with ZFC. Forcing constructs extensions of models of ZFC by adding generic sets, allowing control over cardinal arithmetic and the truth of statements like CH. Together, Gödel's and Cohen's results established that CH is undecidable in ZFC: neither CH nor its negation can be proved from ZFC alone. The constructible universe L remains a key object in set theory, serving as the canonical inner model for many consistency results and forming the foundation for inner model theory. Forcing has since become a primary method for establishing independence and consistency results in set theory.

Axiomatic method

The axiomatic method is a cornerstone of modern mathematics, characterized by the development of a mathematical theory from a small set of precisely stated axioms (or postulates) and definitions, with all subsequent propositions derived through strict logical deduction. This approach ensures rigor, eliminates ambiguity, and establishes a clear hierarchy of truths within a given domain. By starting from minimal, self-evident or agreed-upon assumptions, it allows mathematicians to build complex structures while maintaining deductive certainty. The earliest and most influential exemplar of the axiomatic method is Euclid's Elements (circa 300 BCE), which presented plane and solid geometry in a systematic deductive framework. Euclid began with a series of definitions (e.g., point, line, straight line), five postulates (including the parallel postulate), and five common notions (such as "things equal to the same thing are equal to each other"), from which he derived hundreds of propositions through logical inference. This structure not only organized existing geometric knowledge but also established the axiomatic method as a paradigm for deductive reasoning in mathematics, influencing subsequent developments across the discipline. In the late 19th and early 20th centuries, David Hilbert revitalized and extended the axiomatic method as part of his formalist program. Hilbert advocated for the complete formalization of all branches of mathematics through axiomatic systems, emphasizing the independence of axioms from intuitive content and focusing on their consistency and completeness. His approach sought to prove the consistency of such systems using finitary (finite, constructive) methods, thereby securing mathematics against contradictions. Although Kurt Gödel's incompleteness theorems (1931) demonstrated inherent limitations—showing that sufficiently powerful consistent axiomatic systems cannot prove their own consistency—Hilbert's program profoundly shaped modern mathematical logic, proof theory, and foundational studies. More recently, category theory has provided a structural extension of the axiomatic method. Introduced by Samuel Eilenberg and Saunders Mac Lane in the 1940s, category theory shifts emphasis from the internal composition of mathematical objects to the morphisms (structure-preserving maps) between them and their compositions. It axiomatizes mathematical structures in terms of universal properties, functors, and natural transformations, enabling a highly abstract and unified treatment of relationships across diverse branches of mathematics (such as algebra, topology, and logic). This categorical perspective offers a "structural" form of axiomatization, where the focus lies on patterns of mappings and transformations rather than on the elements of the objects themselves. The axiomatic method, in its various forms, remains fundamental to contemporary mathematics, underpinning the clarity, generality, and logical strength of the discipline.

Philosophy of mathematics

The philosophy of mathematics examines fundamental questions about the nature of mathematical objects, the source of mathematical truth, and the meaning of mathematical statements. Central to the field is the debate over whether mathematical entities—such as numbers, functions, and sets—exist independently of human minds and physical reality, or whether they are human constructs, conventions, or fictions. Platonism, one of the most influential positions, asserts that mathematical objects exist as abstract entities in their own right, independent of space, time, and human cognition. Proponents hold that mathematicians discover rather than invent these objects, and that mathematical truths are objective and necessary. A key argument for this view stems from Gottlob Frege, who contended that the apparent reference to abstract objects in mathematical statements provides evidence for their existence. Major alternative positions include formalism, which regards mathematics as a formal game of symbols manipulated according to syntactic rules, without any commitment to the existence of mathematical objects or meaning beyond consistency within the system. Intuitionism, developed by L. E. J. Brouwer, maintains that mathematical objects are mental constructions that must be explicitly constructed to be legitimate, rejecting non-constructive existence proofs and certain classical logical principles. Logicism, advocated by Gottlob Frege and Bertrand Russell, seeks to reduce all of mathematics to pure logic, thereby grounding mathematical truth in logical truth. More recent views include structuralism, which emphasizes that mathematics studies relational structures rather than individual objects, and fictionalism, which treats mathematical assertions as useful fictions that are not literally true but serve practical purposes in science and reasoning. The ongoing debate over the existence of mathematical objects reflects deep divisions between realist positions like platonism, which affirm an independent mathematical reality, and various anti-realist alternatives, which seek to avoid ontological commitment to such entities. Gödel's incompleteness theorems have influenced this discussion by highlighting the limits of formal systems, though they do not resolve the core ontological questions.

Major branches of pure mathematics

Number theory

Number theory is the branch of pure mathematics primarily concerned with the properties and relationships of integers, particularly the positive integers, and related concepts such as divisibility, congruences, and prime factorization. A central object of study is the prime numbers, the integers greater than 1 with no positive divisors other than 1 and themselves. The distribution and properties of primes have driven much of the development in the field. The Fundamental Theorem of Arithmetic states that every positive integer greater than 1 can be expressed uniquely as a product of prime numbers, up to the order of the factors. This unique factorization property underpins many results in number theory and related areas. Number theory also investigates Diophantine equations, which seek integer solutions to polynomial equations, and modular arithmetic, the arithmetic of integers modulo a fixed integer, which provides tools for solving such equations and analyzing divisibility. Analytic number theory uses tools from complex analysis to study the distribution of primes and other arithmetic functions. The Prime Number Theorem, proved independently by Jacques Hadamard and Charles-Jean de la Vallée Poussin in 1896, states that the number of primes less than or equal to a large number x, denoted \pi(x), satisfies \pi(x) \sim \frac{x}{\ln x}. This theorem is deeply connected to the Riemann zeta function, initially defined for complex numbers s with real part greater than 1 as \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}. The zeta function can be analytically continued to the entire complex plane (except for a simple pole at s = 1), and its non-trivial zeros are linked to the precise distribution of the primes.
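The asymptotic statement of the Prime Number Theorem can be checked empirically. The minimal sketch below counts primes with a sieve of Eratosthenes and compares \pi(x) against x / ln x; the function names and cutoffs are illustrative choices.

```python
import math

def prime_count(n):
    """Count primes up to n with a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"                     # 0 and 1 are not prime
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

for x in (10**3, 10**5, 10**6):
    pi_x = prime_count(x)
    print(x, pi_x, round(pi_x / (x / math.log(x)), 3))   # ratio drifts toward 1
```

The printed ratios (about 1.16, 1.10, and 1.08) decrease slowly, consistent with the theorem's asymptotic, rather than pointwise, character.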

Algebra

Algebra is the branch of pure mathematics concerned with the study of algebraic structures such as groups, rings, and fields, emphasizing their abstract properties and relationships rather than specific numerical computations. These structures generalize familiar number systems like the integers and rational numbers by focusing on operations that satisfy certain axioms, enabling the unification of diverse mathematical phenomena. A group consists of a set equipped with a single binary operation that is associative, has an identity element, and for which every element has an inverse. Groups capture the essence of symmetry and are foundational to many areas of mathematics. Rings extend this idea by incorporating two operations—typically addition and multiplication—where the set forms an abelian group under addition, multiplication is associative and distributive over addition, and there is usually a multiplicative identity. Fields are commutative rings in which every nonzero element has a multiplicative inverse, providing a setting analogous to the rational, real, or complex numbers. Galois theory links field extensions with group theory, associating to each extension a Galois group of automorphisms that fix the base field. This correspondence, particularly the fundamental theorem of Galois theory, determines whether polynomial equations are solvable by radicals through the solvability of the associated group. Emil Artin contributed significantly to its modern formulation by providing an elegant, axiomatic presentation that avoided certain arbitrary choices in earlier approaches. Linear algebra studies vector spaces, which are abelian groups equipped with scalar multiplication from a field, and linear transformations between them. Matrices represent these transformations relative to bases, facilitating computations such as solving systems of linear equations, finding eigenvalues, and analyzing linear independence. In abstract terms, vector spaces generalize geometric notions of direction and magnitude while serving as modules over rings in broader contexts. In the early 20th century, abstract algebra underwent profound development. Emmy Noether revolutionized the subject through her work on ring theory, particularly by introducing the concept of Noetherian rings (satisfying the ascending chain condition on ideals), which unified and generalized earlier results in commutative and noncommutative algebra. Emil Artin advanced the theory of semisimple rings and Artinian rings (satisfying the descending chain condition on right ideals), while also influencing the presentation of Galois theory. Together, Noether and Artin are regarded as founders of modern abstract algebra.
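For a finite structure, the group axioms can be verified mechanically. The sketch below checks associativity, identity, and inverses for the integers modulo 6 under addition; the encoding is an illustrative assumption, not a standard library facility.

```python
from itertools import product

# Check the group axioms for (Z_6, + mod 6).
n = 6
G = range(n)
op = lambda a, b: (a + b) % n

associative = all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(G, repeat=3))
has_identity = all(op(0, a) == a == op(a, 0) for a in G)          # 0 is the identity
has_inverses = all(any(op(a, b) == 0 for b in G) for a in G)      # every a has -a mod n
print(associative, has_identity, has_inverses)                    # True True True
```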

Geometry

Geometry is the branch of mathematics that studies the properties, measurements, and relationships of points, lines, curves, surfaces, and solids in space. It explores shapes and their configurations in various dimensions and types of spaces. Euclidean geometry, the classical form based on the axioms presented by Euclid in his Elements (circa 300 BCE), assumes flat space where the parallel postulate holds: through a point not on a given line, exactly one parallel line can be drawn. This framework underpins much of elementary geometry and includes fundamental results such as the Pythagorean theorem, which asserts that in a right triangle, the square of the hypotenuse equals the sum of the squares of the other two sides: a^2 + b^2 = c^2. In the early 19th century, the discovery of consistent non-Euclidean geometries challenged the universality of Euclid's parallel postulate. Hyperbolic geometry (also known as Lobachevsky-Bolyai-Gauss geometry) allows infinitely many lines through a point parallel to a given line, while elliptic geometry (Riemannian geometry) allows none. These geometries have constant curvature: negative for hyperbolic and positive for elliptic spaces. Differential geometry applies calculus to the study of curves, surfaces, and manifolds. It introduces intrinsic properties such as Gaussian curvature, independent of embedding, and Riemannian metrics that enable the measurement of distances and angles on curved spaces. Bernhard Riemann's work on n-dimensional manifolds with variable curvature provided the foundation for modern differential geometry and later applications in physics. Algebraic geometry investigates geometric objects defined as solutions to systems of polynomial equations. In classical algebraic geometry, these objects are algebraic varieties—zero sets of polynomials in affine or projective space—studied using tools from commutative algebra, particularly polynomial rings. Geometry differs from topology in its emphasis on metric and rigid structures rather than merely continuous properties.

Topology

Topology is the branch of mathematics that studies properties of spaces that are preserved under continuous deformations, such as stretching, crumpling, and bending, but not tearing or gluing. These properties are qualitative rather than quantitative, distinguishing topology from geometry, which focuses on rigid measurements like distances and angles. Point-set topology, also known as general topology, provides the foundational framework for the subject. It begins with the concept of a topological space, which consists of a set X equipped with a collection of subsets called open sets that satisfy three axioms: the empty set and X itself are open; arbitrary unions of open sets are open; and finite intersections of open sets are open. Closed sets are defined as complements of open sets. Continuity between topological spaces is defined topologically: a function f: X → Y is continuous if the preimage of every open set in Y is open in X. Key properties include compactness, where every open cover has a finite subcover, and connectedness, where the space cannot be expressed as the union of two disjoint nonempty open sets. Other important concepts in point-set topology include Hausdorff spaces (where distinct points have disjoint neighborhoods), bases and subbases for topologies, and separation axioms that classify spaces by how well open sets can distinguish points. Algebraic topology employs tools from abstract algebra to distinguish topological spaces and study their properties more effectively. It associates algebraic invariants to spaces that remain unchanged under homeomorphisms. Homotopy theory examines continuous deformations of maps, leading to homotopy groups that classify loops and higher-dimensional analogs up to deformation. Homology theory assigns abelian groups to spaces by considering simplicial or singular chains, with homology groups detecting "holes" of various dimensions; for example, the first homology group relates to loops that cannot be contracted. Topology intersects with other areas, such as in the study of manifolds, which are topological spaces locally homeomorphic to Euclidean space (though detailed treatment of manifolds appears in differential geometry).
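On a finite set, the open-set axioms can be checked directly. The sketch below tests candidate topologies on a three-point set; for finitely many sets, closure under pairwise unions and intersections suffices for the arbitrary-union and finite-intersection axioms. All names here are illustrative.

```python
def is_topology(X, opens):
    """Check the open-set axioms for a candidate topology on a finite set X."""
    opens = {frozenset(s) for s in opens}
    if frozenset() not in opens or frozenset(X) not in opens:
        return False                                  # empty set and X must be open
    return all(a | b in opens and a & b in opens      # closure under union and
               for a in opens for b in opens)         # intersection

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))   # True: a valid (non-Hausdorff) topology
print(is_topology(X, [set(), {1}, {2}, X]))      # False: {1} ∪ {2} = {1, 2} is not open
```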

Analysis

Mathematical analysis is the branch of pure mathematics devoted to the rigorous study of limits and the concepts that arise from them, including continuity, differentiation, integration, and infinite series. It extends the intuitive ideas of calculus to provide a firm foundation for understanding change and accumulation in continuous settings. The field encompasses real analysis, which focuses on functions of real variables; complex analysis, which deals with functions of complex variables; and advanced extensions such as functional analysis and measure theory. In real analysis, the concept of a limit is fundamental, describing the behavior of a function or sequence as its input approaches a particular value or infinity. A function f is continuous at a point a if the limit of f(x) as x approaches a equals f(a). Differentiation formalizes the notion of instantaneous rate of change through the derivative, defined as the limit of the difference quotient f'(a) = lim_{h→0} [f(a+h) - f(a)] / h, when this limit exists. Integration, particularly the Riemann integral, defines the area under a curve as the limit of sums of areas of approximating rectangles. The Fundamental Theorem of Calculus establishes the deep connection between differentiation and integration, consisting of two main parts. One part states that if a function f is continuous on [a, b] and F(x) is defined as the integral from a to x of f(t) dt, then F is differentiable and F'(x) = f(x). The other part asserts that the integral from a to b of f'(x) dx equals f(b) - f(a), provided f' is integrable. These results show that integration and differentiation are inverse operations under suitable conditions. Complex analysis extends these ideas to functions of a complex variable, where the requirement of complex differentiability (analyticity) imposes strong conditions, leading to powerful results such as Cauchy's integral theorem and the residue theorem, which facilitate the evaluation of real integrals and have applications in many areas of mathematics and physics. Functional analysis studies vector spaces endowed with topological structures, particularly infinite-dimensional spaces such as Banach spaces (complete normed vector spaces) and Hilbert spaces (complete inner product spaces). It generalizes concepts from linear algebra and analysis to handle operators on these spaces, with key applications in differential equations, quantum mechanics, and optimization. Measure theory provides a rigorous framework for generalizing the notions of length, area, and volume to abstract sets, culminating in the Lebesgue measure and Lebesgue integral. Unlike the Riemann integral, the Lebesgue integral handles a broader class of functions, including those with discontinuities on sets of measure zero, and supports powerful convergence theorems such as the dominated convergence theorem. This foundation underpins modern probability theory and advanced real analysis.
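Both parts of the Fundamental Theorem of Calculus can be illustrated numerically for a concrete function. The minimal sketch below uses f(x) = cos x on [0, π] with NumPy; the grid size is an arbitrary choice.

```python
import numpy as np

# f(x) = cos x on [0, pi]; its antiderivative with F(0) = 0 is F(x) = sin x.
x = np.linspace(0.0, np.pi, 10_001)
f = np.cos(x)

# Part 1: cumulative trapezoidal sums approximate F, and differentiating F recovers f.
F = np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(x))])
print(np.max(np.abs(np.gradient(F, x) - f)))     # small; error peaks at the endpoints

# Part 2: the integral of f' over [a, b] equals f(b) - f(a).
f_prime = -np.sin(x)
print(np.trapz(f_prime, x), np.cos(np.pi) - np.cos(0.0))   # both ≈ -2
```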

Discrete mathematics

Discrete mathematics is the branch of mathematics concerned with mathematical structures that are fundamentally discrete rather than continuous, dealing with countable or otherwise distinct objects such as integers, graphs, and finite sets. In contrast to continuous mathematics, which addresses phenomena varying smoothly, discrete mathematics focuses on objects that appear in separate bundles or isolated values. This field has grown in importance with the rise of computer science, serving as a foundational tool for algorithm design, computational complexity analysis, and modeling problems involving finite or countable structures. Combinatorics forms a central pillar of discrete mathematics, addressing problems of counting, arrangement, and combination of discrete objects. It encompasses techniques such as permutations and combinations, the pigeonhole principle, inclusion-exclusion, and generating functions, which provide systematic ways to enumerate possibilities and solve counting problems in diverse contexts. Graph theory studies graphs—structures consisting of vertices (nodes) connected by edges—modeling pairwise relationships among discrete entities. Key concepts include paths, cycles, trees, connectivity, graph coloring, and planarity, with applications ranging from network design to scheduling problems. Algorithms in discrete mathematics involve the design and analysis of procedures for solving problems on discrete structures, often relying on recurrence relations to express the running time or space requirements of recursive processes. Recurrence relations, such as those arising in divide-and-conquer algorithms, are solved using methods like substitution, characteristic equations, or the master theorem, enabling precise performance analysis of algorithms. Discrete mathematics also provides the foundational concepts for cryptography, particularly through number-theoretic structures that support secure key exchange and encryption schemes resistant to classical attacks, including those based on discrete logarithms and integer factorization.
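A divide-and-conquer recurrence of the kind mentioned above can be explored directly. The sketch below evaluates T(n) = 2T(n/2) + n with T(1) = 1, whose master-theorem solution is Θ(n log n); the exact values and printed ratio are illustrative.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n: int) -> int:
    """The mergesort-style recurrence T(n) = 2 T(n // 2) + n, with T(1) = 1."""
    return 1 if n <= 1 else 2 * T(n // 2) + n

for n in (2**4, 2**8, 2**12, 2**16):
    print(n, T(n), round(T(n) / (n * math.log2(n)), 4))   # ratio tends toward 1
```

For powers of two the recurrence solves exactly to T(n) = n log2 n + n, so the ratio approaches 1 at rate 1/log2 n, matching the Θ(n log n) bound.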

Probability and statistics

Probability theory

Probability theory is the branch of mathematics that develops models for randomness and uncertainty using rigorous deduction. It provides the formal framework to quantify the likelihood of events, analyze random phenomena, and derive probabilistic conclusions from axioms. Probability theory serves as the theoretical foundation for statistics, which focuses on inference from data. The contemporary foundations of probability theory were established by Andrey Kolmogorov in his 1933 monograph Grundbegriffe der Wahrscheinlichkeitsrechnung, where he axiomatized probability in a manner parallel to measure theory. A probability space consists of a sample space Ω, a σ-algebra \mathcal{F} of events, and a probability measure P satisfying the Kolmogorov axioms:
  • For every event A ∈ \mathcal{F}, P(A) ≥ 0.
  • P(Ω) = 1.
  • For any countable collection of pairwise disjoint events {A_i}_{i=1}^∞, P(\bigcup_{i=1}^∞ A_i) = \sum_{i=1}^∞ P(A_i).
These axioms ensure probability behaves consistently under countable operations and provide the basis for all subsequent developments. A random variable is a measurable function from the sample space Ω to the real numbers \mathbb{R}, mapping outcomes to numerical values whose probabilistic behavior can be studied. The expectation (or expected value) of a random variable X, denoted E[X], represents its long-run average value over repeated trials. For a discrete random variable with possible values x_i and probabilities p_i = P(X = x_i), the expectation is E[X] = \sum x_i p_i; for continuous random variables with density f(x), it is E[X] = \int_{-\infty}^{\infty} x f(x) \, dx. A fundamental result is the central limit theorem, which states that the distribution of the standardized sum (or mean) of a large number of independent and identically distributed random variables with finite mean μ and variance σ² converges to the standard normal distribution as the number of terms increases to infinity. Formally, if X_1, X_2, \dots are i.i.d. with E[X_i] = μ and Var(X_i) = σ² < ∞, then \frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}} \xrightarrow{d} \mathcal{N}(0,1), where \bar{X}_n is the sample mean. This theorem explains the prevalence of normal distributions in aggregated data and underpins many approximation techniques. Probability theory extends naturally to stochastic processes, collections of random variables {X_t} indexed by a parameter t (often time) that model systems evolving randomly over time. A key subclass is Markov chains, stochastic processes where the future state depends only on the current state, not on prior history (the Markov property). Markov chains provide models for systems with limited memory, such as certain random walks or queueing systems.
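The central limit theorem can be visualized by simulation. The minimal sketch below standardizes means of i.i.d. Uniform(0, 1) draws (μ = 1/2, σ² = 1/12) and compares an empirical tail probability with the standard normal value; sample sizes and the seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 400, 10_000
means = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)   # sample means
z = (means - 0.5) / (np.sqrt(1.0 / 12.0) / np.sqrt(n))         # standardize
print((z > 1.96).mean())   # ≈ 0.025, the standard normal tail beyond 1.96
```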

Statistics

Statistics is the branch of mathematics devoted to the collection, analysis, interpretation, presentation, and organization of data. It provides tools to extract meaningful insights from data subject to variation and uncertainty, enabling evidence-based conclusions in science, engineering, business, and public policy. Statistics is often divided into descriptive and inferential branches, with the former summarizing data and the latter allowing generalizations from samples to populations. Descriptive statistics focuses on summarizing and describing the main features of a collected data set without attempting to generalize beyond it. It uses numerical summary measures such as the mean (arithmetic average), median (middle value), and mode (most frequent value) for central tendency; the variance and standard deviation for dispersion; and measures of shape like skewness and kurtosis. Graphical tools, including histograms, box plots, scatter plots, and bar charts, help visualize distributions, identify outliers, and reveal patterns or relationships in the data. Descriptive methods form the foundation for initial data exploration and reporting. Inferential statistics uses sample data to draw conclusions about a larger population or process from which the sample was drawn. It relies on probability concepts to quantify uncertainty in estimates and decisions. Key activities include point estimation (e.g., the sample mean as an estimate of the population mean), interval estimation (confidence intervals that contain the true parameter with a specified probability), and hypothesis testing. These methods allow researchers to assess whether observed effects are likely due to chance or represent real phenomena in the population. Hypothesis testing is a formal procedure in inferential statistics for evaluating claims about population parameters. It involves formulating a null hypothesis (typically no effect or no difference) and an alternative hypothesis, selecting a significance level, computing a test statistic from the sample, and calculating a p-value to determine whether to reject the null. Common tests include t-tests, chi-square tests, and analysis of variance (ANOVA), chosen based on data type and assumptions. The approach helps control the risk of false positives while providing a framework for decision-making under uncertainty. Regression analysis estimates relationships between a dependent (response) variable and one or more independent (predictor) variables. It models how changes in predictors are associated with changes in the response, often for prediction, control, or understanding causal links. Linear regression assumes a linear relationship and is widely used due to its interpretability; extensions include multiple regression, logistic regression for categorical outcomes, and nonlinear models. Assumptions such as linearity, independence, and homoscedasticity must be checked to ensure valid inferences. Bayesian statistics offers an alternative framework that treats parameters as random variables with probability distributions. It uses Bayes' theorem to update prior beliefs with observed data, producing posterior distributions that reflect both prior knowledge and new evidence. This approach is particularly useful for incorporating expert opinion, handling small samples, and providing direct probability statements about parameters. Bayesian methods have grown in popularity with advances in computational techniques like Markov chain Monte Carlo. Many foundations of machine learning draw directly from statistical principles, including statistical decision theory, estimation, and model selection. Techniques such as regression, classification, and clustering in machine learning extend statistical methods to large, complex data sets for prediction and pattern discovery.
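Linear regression by ordinary least squares can be demonstrated in a few lines. The sketch below fits a line to synthetic data generated with hypothetical true parameters (intercept 2, slope 3); the noise level and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 100)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=x.size)   # noisy observations

X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None) # least-squares estimate of [b0, b1]
print(beta)                                  # close to [2.0, 3.0]
```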

Applied mathematics

Mathematical physics and mechanics

Mathematical physics develops and applies advanced mathematical structures to formulate, analyze, and solve problems in physics, with particularly deep connections to mechanics across classical, quantum, and statistical regimes. In classical mechanics, mathematical descriptions rely on differential equations to model the motion of particles and systems. Newton's second law yields second-order ordinary differential equations relating acceleration to applied forces. Equivalent reformulations include Lagrangian mechanics, based on variational principles and the Euler-Lagrange equations, and Hamiltonian mechanics, which employs first-order equations in phase space to describe evolution and conserved quantities. These tools enable precise treatment of systems ranging from simple oscillators to celestial dynamics. Quantum mechanics adopts the Hilbert space formalism, rigorously established by John von Neumann in the 1920s and 1930s. Quantum states are represented as normalized vectors in a complex separable Hilbert space, while observables correspond to self-adjoint operators on that space. The time evolution of states follows the Schrödinger equation, an operator differential equation, and measurement probabilities arise from inner products and projection operators. This framework unified earlier approaches by Dirac, Heisenberg, and Schrödinger and provided a mathematically consistent operator-algebra interpretation of quantum theory. General relativity incorporates Riemannian geometry (more precisely, pseudo-Riemannian geometry) to describe spacetime as a curved four-dimensional manifold. The metric tensor defines distances, causal structure, and geodesics for free-fall motion, while curvature—quantified by the Riemann tensor—arises from matter and energy distributions. This geometric approach replaces the force concept of Newtonian gravity with intrinsic spacetime curvature. Statistical mechanics supplies the microscopic probabilistic foundation for thermodynamics and macroscopic behavior. Boltzmann's approach uses the Liouville equation and the ergodic hypothesis to derive irreversible evolution via the H-theorem, explaining the second law from time-reversible microscopic laws. Gibbs's ensemble method introduces probability distributions over phase space, such as the canonical ensemble with its partition function, to compute equilibrium averages and thermodynamic quantities directly. These complementary perspectives link microscopic dynamics to macroscopic laws. Such applications demonstrate the indispensable role of mathematical rigor in advancing physical understanding across mechanics and related domains.
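The differential-equation viewpoint of classical mechanics can be illustrated by integrating Newton's second law for a unit-mass harmonic oscillator, x'' = -x. The sketch below uses the symplectic Euler method, a simple structure-preserving integrator often used for Hamiltonian systems; the step size and duration are arbitrary choices.

```python
import math

# x'' = -x with x(0) = 1, v(0) = 0 has exact solution x(t) = cos t.
dt, steps = 1.0e-3, 10_000
x, v = 1.0, 0.0
for _ in range(steps):
    v -= x * dt      # kick: update velocity from the force -x
    x += v * dt      # drift: update position with the new velocity
print(x, math.cos(dt * steps))   # numerical vs exact position at t = 10
```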

Numerical analysis and computation

Numerical analysis is the branch of applied mathematics concerned with the development, analysis, and implementation of algorithms that use numerical approximation to solve problems formulated in continuous mathematics, where exact analytical solutions are often infeasible or impractical. These algorithms address problems involving real or complex variables, with a focus on controlling approximation errors while ensuring computational efficiency and stability. A fundamental task in numerical analysis is root-finding, which seeks approximate solutions to equations of the form f(x) = 0. Widely used methods include the bisection method, which guarantees convergence by repeatedly halving an interval containing a root; Newton's method, which offers rapid (quadratic) convergence near a simple root using the derivative; and the secant method, which approximates the derivative to achieve superlinear convergence without requiring explicit derivatives. The choice of method depends on factors such as the function's smoothness, the availability of derivatives, and the need for guaranteed convergence. Interpolation constructs a function that passes exactly through a set of given data points, enabling approximation of intermediate values. Common approaches include Lagrange interpolation, which produces a polynomial directly from the points; Newton's divided-difference form, which allows efficient addition of new points; and piecewise polynomial interpolation, such as cubic splines, which reduce the oscillation issues associated with high-degree polynomials. These techniques are essential for data fitting, function approximation, and as building blocks for more advanced methods. Numerical integration, or quadrature, approximates definite integrals when analytical antiderivatives are unavailable or difficult to compute. Basic rules include the trapezoidal rule, which approximates the integrand with linear segments; Simpson's rule, which uses parabolic segments for higher accuracy; and Gaussian quadrature, which selects optimal nodes and weights to achieve high precision for polynomials up to certain degrees. Adaptive methods adjust the step size based on local error estimates to improve efficiency. Matrix computation forms a cornerstone of numerical linear algebra, encompassing the solution of linear systems Ax = b, eigenvalue problems, and related tasks. Direct methods, such as Gaussian elimination with partial pivoting and its LU factorization variant, solve small to medium-sized dense systems reliably. Iterative methods, including Jacobi, Gauss-Seidel, and conjugate gradient, are preferred for large sparse systems arising in applications. Eigenvalue algorithms, such as the QR algorithm, compute spectra efficiently. Stability and conditioning of matrices are critical considerations, as small perturbations in input data can lead to large errors in output for ill-conditioned problems. Finite element methods approximate solutions to partial differential equations by subdividing the domain into simple geometric elements, typically triangles or tetrahedra, and representing the solution with piecewise polynomial basis functions. These methods are particularly powerful for complex geometries and boundary conditions, with error estimates derived from approximation theory and stability analysis. Error analysis is central to numerical analysis, distinguishing between truncation error (arising from approximating continuous problems with discrete ones) and round-off error (due to finite-precision arithmetic). Concepts such as numerical stability, backward error analysis, and condition numbers quantify how errors propagate. Rigorous error bounds and convergence analysis guide the selection and design of reliable numerical methods.
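The root-finding methods above can be compared on a simple example. The sketch below applies bisection and Newton's method to f(x) = x² − 2, whose positive root is √2; the tolerance and iteration count are illustrative choices.

```python
import math

f = lambda x: x * x - 2.0

def bisect(a, b, tol=1e-12):
    """Repeatedly halve an interval [a, b] that brackets a sign change of f."""
    while b - a > tol:
        m = (a + b) / 2.0
        a, b = (a, m) if f(a) * f(m) <= 0 else (m, b)
    return (a + b) / 2.0

def newton(x, steps=6):
    """Newton iteration x <- x - f(x) / f'(x), using f'(x) = 2x."""
    for _ in range(steps):
        x -= f(x) / (2.0 * x)
    return x

print(bisect(1.0, 2.0), newton(1.5), math.sqrt(2.0))   # all ≈ 1.41421356...
```

Bisection gains one binary digit per iteration, while Newton's method roughly doubles the number of correct digits per step near the root, illustrating the linear-versus-quadratic convergence trade-off described above.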

Operations research and optimization

Operations research (OR), also known as operational research, employs mathematical and scientific methods to analyze the efficiency and performance of complex systems involving manpower, machinery, equipment, and policies, with the goal of supporting optimal decision-making. The discipline originated in Britain during the late 1930s, where the term "operational research" was coined for scientific studies aimed at integrating new technologies like radar into military operations during World War II. Following the war, OR techniques expanded to civilian applications in industry, logistics, transportation, and management, becoming a cornerstone of applied mathematics for addressing resource allocation and optimization problems. A central pillar of operations research is optimization, which seeks the best possible solution from feasible alternatives according to defined criteria. Linear programming (LP) stands out as a foundational optimization technique, involving the maximization or minimization of a linear objective function subject to linear equality and inequality constraints. Early formulations of linear programming problems appeared in the late 1930s, notably in the 1939 work of Leonid Kantorovich on production planning and resource allocation. The breakthrough came with the development of the simplex method by George B. Dantzig in 1947, an iterative algorithm that efficiently solves linear programs by moving from one basic feasible solution to another along the edges of the feasible region, enabling practical computation of large-scale problems. Extensions of linear programming include integer programming, where decision variables are restricted to integers to model discrete choices such as scheduling or facility location, and nonlinear programming, which handles problems where the objective function or constraints are nonlinear. Game theory contributes to operations research by providing a mathematical framework for analyzing strategic interactions among rational decision-makers with conflicting or aligned interests, formalizing competitive situations and aiding decisions in multi-agent environments. Queueing theory, another key component, examines the behavior of systems involving waiting lines or service queues, enabling analysis and optimization of service processes, resource utilization, and waiting times in applications such as telecommunications, traffic flow, and service operations.
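A linear program can be stated and solved directly with standard tools. The sketch below uses scipy.optimize.linprog on a hypothetical two-variable toy problem; since linprog minimizes, the objective coefficients are negated to maximize.

```python
from scipy.optimize import linprog

# Toy problem: maximize 3x + 2y subject to
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
res = linprog(c=[-3.0, -2.0],               # negated for maximization
              A_ub=[[1.0, 1.0], [1.0, 3.0]],
              b_ub=[4.0, 6.0],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimum at (4, 0) with objective value 12
```

As the simplex picture suggests, the optimum sits at a vertex of the feasible region; checking the vertices (0, 0), (4, 0), (3, 1), and (0, 2) by hand confirms the value 12.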

Mathematical biology and finance

Mathematical biology and mathematical finance are prominent fields of applied mathematics that use differential equations and stochastic processes to model complex real-world systems in biology and economics. In mathematical biology, population dynamics are classically described by the Lotka-Volterra equations, independently developed by Alfred Lotka and Vito Volterra in the 1920s. These ordinary differential equations model predator-prey interactions, showing oscillatory behavior in species populations due to predation and reproduction rates. The equations take the form \frac{dx}{dt} = \alpha x - \beta x y and \frac{dy}{dt} = \delta x y - \gamma y, where x and y represent the prey and predator populations, respectively, and the parameters govern growth, predation, and mortality. Reaction-diffusion systems, proposed by Alan Turing in his 1952 paper "The Chemical Basis of Morphogenesis", provide a mechanism for biological pattern formation. These partial differential equations combine local chemical reactions with diffusion, leading to spatial patterns such as stripes or spots from initially uniform conditions. In epidemiology, the SIR model, introduced by William Kermack and Anderson McKendrick in 1927, compartmentalizes a population into susceptible (S), infected (I), and recovered (R) groups to describe infectious disease dynamics. The model is governed by \frac{dS}{dt} = -\beta S I, \frac{dI}{dt} = \beta S I - \gamma I, and \frac{dR}{dt} = \gamma I, where \beta is the transmission rate and \gamma the recovery rate. This framework predicts epidemic thresholds and final outbreak sizes in closed populations. In mathematical finance, stochastic differential equations (SDEs) model asset prices as influenced by drift and volatility. A common assumption is that stock prices follow geometric Brownian motion: dS_t = \mu S_t \, dt + \sigma S_t \, dW_t, where W_t is a Wiener process. The Black-Scholes model, developed by Fischer Black and Myron Scholes in 1973, uses this assumption to derive a partial differential equation for the fair price of European options, enabling precise valuation and hedging in financial markets. The Black-Scholes equation is \frac{\partial V}{\partial t} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0, where V is the option price, S the underlying asset price, r the risk-free rate, and \sigma the volatility. This model revolutionized derivative pricing by linking stochastic calculus to practical finance.
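The SIR dynamics above can be integrated numerically with a simple Euler scheme. In the sketch below the parameters β = 0.3 and γ = 0.1 (giving a basic reproduction number of 3) are hypothetical; with these values the final recovered fraction approaches the theoretical final size of roughly 0.94.

```python
# Euler integration of the SIR model on population fractions (closed population).
beta, gamma, dt = 0.3, 0.1, 0.1
S, I, R = 0.99, 0.01, 0.0
for _ in range(int(200 / dt)):             # simulate 200 time units
    new_infections = beta * S * I * dt
    new_recoveries = gamma * I * dt
    S, I, R = S - new_infections, I + new_infections - new_recoveries, R + new_recoveries
print(round(S, 3), round(I, 6), round(R, 3))   # epidemic burns out: S ≈ 0.06, R ≈ 0.94
```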

Mathematics in society

Mathematics education

Mathematics education encompasses the teaching and learning of mathematics, along with research into how individuals acquire mathematical knowledge and skills across all levels of schooling and beyond. It draws on educational theory, psychology, and the discipline of mathematics itself to design curricula, instructional methods, and assessment strategies that promote understanding, fluency, and application.

Historically, mathematics education has experienced significant shifts in approach. In the United States, the "New Math" movement emerged in the late 1950s and 1960s, largely in response to Cold War-era concerns about scientific and technological competitiveness. This reform introduced abstract concepts such as set theory, modular arithmetic, and symbolic logic into elementary and secondary curricula, aiming to foster deeper conceptual understanding rather than rote computation. The movement faced widespread criticism for being too abstract and disconnected from practical needs, leading to its decline by the early 1970s and a subsequent "back-to-basics" emphasis on computational skills.

In the late 20th century, efforts to reform mathematics education coalesced around standards-based approaches. The National Council of Teachers of Mathematics (NCTM) published influential documents, including Curriculum and Evaluation Standards for School Mathematics in 1989 and Principles and Standards for School Mathematics in 2000, which emphasized conceptual understanding, problem solving, reasoning, communication, and connections across mathematical ideas and to other disciplines. These standards promoted a balanced approach integrating skills, concepts, and applications, influencing curriculum development and policy in many countries, though implementation varied and sparked debates over rigor and equity.

Contemporary debates in mathematics education often center on constructivist versus traditional instructional approaches. Traditional methods typically involve direct instruction, teacher-led demonstrations, and extensive practice to build procedural fluency and automaticity. Constructivist approaches, drawing on theories of learning that view knowledge as actively constructed by the learner, prioritize student exploration, collaborative problem-solving, and conceptual development through guided discovery. Research has shown mixed results: constructivist methods can enhance conceptual understanding and motivation in some settings, while traditional methods often produce stronger performance on procedural tasks and standardized tests, particularly in resource-constrained environments.

International assessments provide comparative data on mathematics education outcomes across countries. The Programme for International Student Assessment (PISA), administered by the OECD every three years to 15-year-olds, evaluates mathematical literacy in real-world contexts. The Trends in International Mathematics and Science Study (TIMSS), conducted every four years for fourth- and eighth-graders by the International Association for the Evaluation of Educational Achievement, measures curriculum-based achievement. These studies consistently highlight performance variations, with East Asian systems frequently scoring highest, and inform national policy discussions on curriculum, teacher preparation, and instructional practices.

Challenges in mathematics education include addressing achievement gaps, supporting diverse learners, integrating technology, and balancing conceptual depth with skill mastery amid evolving societal needs for quantitative literacy.

Mathematical awards and prizes

Mathematics recognizes outstanding contributions through a variety of prestigious awards and prizes. The most highly regarded include the Fields Medal, the Abel Prize, the Wolf Prize in Mathematics, and the Breakthrough Prize in Mathematics; these are frequently cited as the top international honors in the field.

The Abel Prize is awarded annually by the Norwegian Academy of Science and Letters to a mathematician for outstanding scientific work in the field. Established in 2002 and named after Niels Henrik Abel, it carries a monetary award of approximately $750,000 and is often regarded as one of the highest honors in mathematics. The Fields Medal is awarded every four years by the International Mathematical Union to up to four mathematicians under the age of 40 for exceptional achievement and promise of future contributions; it is widely considered the premier award for younger mathematicians. The Wolf Prize in Mathematics, awarded annually since 1978 by the Wolf Foundation, honors exceptional contributions to the field and is often ranked alongside the Abel Prize and the Fields Medal among the discipline's highest honors. The Breakthrough Prize in Mathematics, established more recently by the Breakthrough Prize Foundation, recognizes major advances in the field with a significant monetary award and is included among the most prestigious contemporary prizes in mathematics. Other notable prizes include historical ones such as the Bolyai Prize, which recognize specific contributions or monographs in mathematics.

Impact on technology and culture

Mathematics has profoundly shaped modern technology and culture through its foundational role in computational systems, secure communication, data analysis, artificial intelligence, and artistic expression.

In technology, mathematics underpins the theoretical foundations of computer science. Alan Turing's formulation of the Turing machine in 1936 provided a mathematical model of algorithms and computation, establishing the basis for modern programmable computers and the limits of what can be computed. This work also contributed to early concepts in artificial intelligence and automated reasoning.

Mathematics is central to cryptography, enabling secure communication. Public-key systems such as the RSA cryptosystem rely on number theory, particularly the computational difficulty of factoring large composite numbers into their prime factors, combined with modular arithmetic and Euler's theorem. The RSA algorithm generates a public key for encryption and a private key for decryption, forming the basis for widely adopted cryptographic standards used in internet security protocols; a toy numerical example appears at the end of this section.

Mathematics also drives advances in artificial intelligence and data science. Core techniques in machine learning and data analysis depend on linear algebra for vector and matrix operations, calculus for optimization in training models, and statistics for inference and uncertainty quantification. These tools enable pattern recognition, prediction, and the handling of large datasets across applications from recommendation systems to scientific discovery.

In culture, mathematics has influenced artistic and musical expression. Ancient discoveries attributed to Pythagoras revealed simple integer ratios underlying consonant musical intervals, such as the octave (2:1) and the perfect fifth (3:2), laying the groundwork for Western music theory. Later developments, including Fourier analysis, provided mathematical tools to decompose complex sounds into sinusoidal components, informing modern audio processing and composition.
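As a sketch of the modular arithmetic behind RSA, the toy round trip below uses deliberately tiny primes; real keys use primes hundreds of digits long, and this example omits padding and the other essentials of a secure implementation.

# Toy RSA with deliberately tiny primes; insecure, for illustration only.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: d*e = 1 (mod phi)

message = 42
cipher = pow(message, e, n)    # encryption: m^e mod n
plain = pow(cipher, d, n)      # decryption: c^d mod n
assert plain == message
print(n, cipher, plain)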
