Functional programming
In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program.
In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner.
Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming that treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification.[1][2]
Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme,[3][4][5][6] Clojure, Wolfram Language,[7][8] Racket,[9] Erlang,[10][11][12] Elixir,[13] OCaml,[14][15] Haskell,[16][17] and F#.[18][19] Lean is a functional programming language commonly used for verifying mathematical theorems.[20] Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web,[21] R in statistics,[22][23] J, K and Q in financial analysis, and XQuery/XSLT for XML.[24][25] Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values.[26] In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++ (since C++11), C#,[27] Kotlin,[28] Perl,[29] PHP,[30] Python,[31] Go,[32] Rust,[33] Raku,[34] Scala,[35] and Java (since Java 8).[36]
History
The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation,[37] showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s.[38]
Church later developed a weaker system, the simply typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms.[39] This forms the basis for statically typed functional programming.
The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT).[40] Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions.[41] Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced.[42]
Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language.[43] It is an assembly-style language for manipulating lists of symbols. It does have a notion of generator, which amounts to a function that accepts a function as an argument, and, since it is a low-level programming language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features.
Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q.
In the mid-1960s, Peter Landin invented the SECD machine,[44] the first abstract machine for a functional programming language,[45] described a correspondence between ALGOL 60 and the lambda calculus,[46][47] and proposed the ISWIM programming language.[48]
John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs".[49] He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality.[citation needed] Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming.
The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL.[50] NPL was based on Kleene Recursion Equations and was first introduced in their work on program transformation.[51] Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope.[52] ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML.
In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming.
In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages.[citation needed]
The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. With Miranda being proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing since 1990.
More recently, functional programming has found use in niches such as parametric CAD in the OpenSCAD language built on the CGAL framework, although its restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept.[53]
Functional programming continues to be used in commercial settings.[54][55][56]
Concepts
A number of concepts[57] and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts.[58]
First-class and higher-order functions
Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d/dx, which returns the derivative of a function f.
Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values).
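For illustration, the following minimal Haskell sketch (the function name twice and the values are illustrative) defines a higher-order function that takes another function as an argument and applies it two times:

-- 'twice' is higher-order: it takes a function f as an argument
twice :: (a -> a) -> a -> a
twice f x = f (f x)

main :: IO ()
main = print (twice (+ 3) 10)  -- prints 16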
Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one.
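In Haskell, for example, every function is curried by default, so the successor function can be written as the addition operator partially applied to one (a minimal sketch; the name successor is illustrative):

-- Applying (+) to 1 yields a new one-argument function
successor :: Integer -> Integer
successor = (+) 1

main :: IO ()
main = print (successor 41)  -- prints 42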
Pure functions
Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code:
- If the result of a pure expression is not used, it can be removed without affecting other expressions.
- If a pure function is called with arguments that cause no side-effects, the result is constant with respect to that argument list (sometimes called referential transparency or idempotence), i.e., calling the pure function again with the same arguments returns the same result. (This can enable caching optimizations such as memoization.)
- If there is no data dependency between two pure expressions, their order can be reversed, or they can be performed in parallel and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe).
- If the entire language does not allow side-effects, then any evaluation strategy can be used; this gives the compiler freedom to reorder or combine the evaluation of expressions in a program (for example, using deforestation).
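These properties can be seen in a minimal Haskell sketch (the function square is illustrative): because the function is pure, a compiler is free to merge, reorder, or cache its calls.

-- square is pure: its result depends only on its argument and it has no side effects
square :: Int -> Int
square x = x * x

main :: IO ()
main = print (square 3 + square 3)  -- the two identical calls may be evaluated once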
While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure.[59] C++11 added the constexpr keyword with similar semantics.
Recursion
Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space linear in the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compiling, among other approaches.
The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls.[60][61] Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop and doing so would be safe-for-space.[62] Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, Chicken intentionally maintains a stack and lets the stack overflow. However, when this happens, its garbage collector will reclaim the space,[63] allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop.
Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages.
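For example, summing a list can be expressed with a fold rather than explicit recursion, as in this minimal Haskell sketch (the name sumList is illustrative):

-- foldr abstracts the recursion pattern of collapsing a list to a single value
sumList :: [Integer] -> Integer
sumList = foldr (+) 0

main :: IO ()
main = print (sumList [1 .. 100])  -- prints 5050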
Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Rocq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming.[64]
Strict versus non-strict evaluation
Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the Python statement:
print(len([2 + 1, 3 * 2, 1 / 0, 5 - 4]))
fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself.
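The equivalent expression in Haskell, which is lazy by default, illustrates the non-strict behavior (a minimal sketch):

-- length counts the list's cells without evaluating its elements,
-- so the division by zero is never performed
main :: IO ()
main = print (length [2 + 1, 3 * 2, 1 `div` 0, 5 - 4])  -- prints 4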
The usual implementation strategy for lazy evaluation in functional languages is graph reduction.[65] Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell.
Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams.[2] Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis.[66] Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them.[67]
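Hughes's producer–consumer argument can be sketched in Haskell: the consumer take 5 demands only five elements, so the conceptually infinite producer [1 ..] is computed only as far as needed, even though producer and consumer are written independently.

main :: IO ()
main = print (take 5 (filter even [1 ..]))  -- prints [2,4,6,8,10]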
Type systems
Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, which rejects all invalid programs at compilation time at the risk of rejecting some valid programs (false positive errors). This contrasts with the untyped lambda calculus used in Lisp and its variants (such as Scheme), which accepts all valid programs at compilation time at the risk of accepting invalid ones (false negative errors), since invalid programs are rejected only at runtime, when there is enough information to avoid rejecting valid programs. The use of algebraic data types makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in the absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases.
Some research-oriented functional languages such as Rocq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with.[68][69][70][71] But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. CompCert is a compiler for a subset of the language C that is written in Rocq and formally verified.[72]
A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience.[73] GADTs are available in the Glasgow Haskell Compiler, in OCaml[74] and in Scala,[75] and have been proposed as additions to other languages including Java and C#.[76]
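A minimal Haskell sketch of a GADT (the expression language below is illustrative): indexing the Expr type by its result type lets the compiler reject ill-typed expression trees and give eval a precise type.

{-# LANGUAGE GADTs #-}

data Expr a where
  IntLit  :: Int -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- The return type varies with the constructor's index
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolLit True) (IntLit 1) (Add (IntLit 2) (IntLit 3))))  -- prints 1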
Referential transparency
Functional programs do not have assignment statements; that is, the value of a variable in a functional program never changes once defined. This eliminates any chance of side effects because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent.[77]
Consider the C assignment statement x = x * 10: it changes the value assigned to the variable x. Say the initial value of x is 1; then two consecutive evaluations of x = x * 10 yield 10 and 100 respectively. Clearly, replacing x = x * 10 with either 10 or 100 gives the program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent.
By contrast, a function such as int plusOne(int x) { return x + 1; } is referentially transparent, as it does not implicitly change the input x and thus has no such side effects.
Functional programs exclusively use this type of function and are therefore referentially transparent.
Data structures
Purely functional data structures are often represented differently from their imperative counterparts.[78] For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating. Calling the insert method will result in some but not all nodes being created.[79]
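Persistence can be sketched in Haskell with ordinary lists: prepending to a list builds a new version that shares all of the old one, which remains unmodified.

main :: IO ()
main = do
  let xs = [1, 2, 3]
      ys = 0 : xs  -- ys shares the cells of xs; nothing is copied or mutated
  print xs         -- [1,2,3], the old version is intact
  print ys         -- [0,1,2,3]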
Comparison to imperative programming
Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side-effects and provides referential transparency.
Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item.
Imperative vs. functional programming
The following two examples (written in Java) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable result.
Traditional imperative loop:
int[] numList = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
int result = 0;
for (int i : numList) {
    if (i % 2 == 0) {
        result += i * 10;
    }
}
Functional programming with higher-order functions:
import java.util.Arrays;

int[] numList = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
int result = Arrays.stream(numList)
        .filter(n -> n % 2 == 0)
        .map(n -> n * 10)
        .reduce(0, Integer::sum);
Sometimes the abstractions offered by functional programming might lead to the development of more robust code that avoids certain issues that might arise when building upon a large amount of complex, imperative code, such as off-by-one errors (see Greenspun's tenth rule).
Simulating state
There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way.
The pure functional programming language Haskell implements them using monads, derived from category theory.[80] Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries).[81]
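A minimal Haskell sketch of the monadic style, using the built-in Maybe monad to thread a possible-failure effect rather than mutable state (the function safeDiv is illustrative):

-- (>>=) sequences the computations and short-circuits on Nothing
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print (safeDiv 10 2 >>= safeDiv 100)  -- Just 20
  print (safeDiv 10 0 >>= safeDiv 100)  -- Nothing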
Functional languages can also simulate state by passing around immutable state values. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged.[82]
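A minimal Haskell sketch of explicit state passing (the deposit function and the bank-balance scenario are illustrative):

-- deposit takes the current balance and returns a result together with
-- the new balance; the old balance is left unchanged
deposit :: Int -> Int -> (String, Int)
deposit amount balance = ("deposited " ++ show amount, balance + amount)

main :: IO ()
main = do
  let balance0 = 100
      (msg, balance1) = deposit 50 balance0
  putStrLn msg
  print (balance0, balance1)  -- (100,150): both versions of the state exist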
Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations.[citation needed]
Alternative methods such as Hoare logic and uniqueness typing have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit.[83]
Efficiency issues
Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal.[84] This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree).[85] However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game.[86] For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations.
Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion.[87] Although the copying implied by persistent immutable data structures might seem computationally costly, some functional programming languages, like Clojure, solve this issue by implementing mechanisms for safe memory sharing between formally immutable data.[88] Rust distinguishes itself by its approach to data immutability, which involves immutable references[89] and a concept called lifetimes.[90]
Immutable data with separation of identity and state, together with shared-nothing schemes, can also be better suited for concurrent and parallel programming, by virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations on immutable data are usually atomic and this allows eliminating the need for locks. This is how, for example, the java.util.concurrent classes are implemented, where some of them are immutable variants of corresponding classes that are not suitable for concurrent use.[91] Functional programming languages often have a concurrency model that, instead of shared state and synchronization, leverages message-passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue).[92][93] This approach is common in Erlang/Elixir and Akka.
Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993[66] discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008[94] give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation making extensive use of dereferenced code and data perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles) [citation needed].
Abstraction cost
Some functional programming languages might not optimize abstractions such as higher-order functions like "map" or "filter" as efficiently as the underlying imperative operations. Consider, as an example, the following two ways to check if 5 is an even number in Clojure:
(even? 5)
(.equals (mod 5 2) 0)
When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation, where even? is defined as:
(defn even?
  "Returns true if n is even, throws an exception if n is not an integer"
  {:added "1.0"
   :static true}
  [n] (if (integer? n)
        (zero? (bit-and (clojure.lang.RT/uncheckedLongCast n) 1))
        (throw (IllegalArgumentException. (str "Argument must be an integer: " n)))))
has a mean execution time of 4.76 ms, while the second one, in which .equals is a direct invocation of the underlying Java method, has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of that difference can be attributed to the type checking and exception handling involved in the implementation of even?. By contrast, the lo library for Go implements various higher-order functions common in functional programming languages using generics. In a benchmark provided by the library's author, calling map is 4% slower than an equivalent for loop and has the same allocation profile,[95] which can be attributed to various compiler optimizations, such as inlining.[96]
One distinguishing feature of Rust is its zero-cost abstractions, meaning that using them imposes no additional runtime overhead. This is achieved by compiler optimizations such as loop unrolling, where each iteration of a loop, be it imperative or using iterators, is converted into standalone assembly instructions without the overhead of the loop-controlling code. If an iterative operation writes to an array, the resulting array's elements may be stored in specific CPU registers, allowing for constant-time access at runtime.[97]
Functional programming in non-functional languages
It is possible to use a functional style of programming in languages that are not traditionally considered functional languages.[98] For example, both D[99] and Fortran 95[59] explicitly support pure functions.
JavaScript, Lua,[100] Python and Go[101] have had first-class functions from their inception.[102] Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2,[103] though Python 3 relegated "reduce" to the functools standard library module.[104] First-class functions have been introduced into other mainstream languages such as Perl 5.0 in 1994, PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin.[28]
In Perl, lambda, map, reduce, filter, and closures are fully supported and frequently used. The book Higher-Order Perl, released in 2005, was written to provide an expansive guide on using Perl for functional programming.
In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style.
In Java, anonymous classes can sometimes be used to simulate closures;[105] however, anonymous classes are not always proper replacements to closures because they have more limited capabilities.[106] Java 8 supports lambda expressions as a replacement for some anonymous classes.[107]
In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#.
Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold.
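For instance, a minimal Haskell sketch of the strategy pattern: the "strategy" is simply a comparison function passed to the standard sortBy.

import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

main :: IO ()
main = do
  print (sortBy (comparing id) [3, 1, 2])    -- ascending: [1,2,3]
  print (sortBy (comparing Down) [3, 1, 2])  -- descending: [3,2,1]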
Similarly, the idea of immutable data from functional programming is often included in imperative programming languages,[108] for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript.[109]
Comparison to logic programming
Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations.[110] For example, the function mother(X) = Y (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. Consider the following logic program:
mother(charles, elizabeth).
mother(harry, diana).
The program can be queried, like a functional program, to generate mothers from children:
?- mother(harry, X).
X = diana.
?- mother(charles, X).
X = elizabeth.
But it can also be queried backwards, to generate children:
?- mother(X, elizabeth).
X = charles.
?- mother(X, diana).
X = harry.
It can even be used to generate all instances of the mother relation:
?- mother(X, Y).
X = charles,
Y = elizabeth.
X = harry,
Y = diana.
Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in the nested form:
maternal_grandmother(X) = mother(mother(X)).
The same definition in relational notation needs to be written in the unnested form:
maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).
Here :- means if, and the comma (,) means and.
However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming:[111]
grandparent(X) := parent(parent(X)).
parent(X) := mother(X).
parent(X) := father(X).
mother(charles) := elizabeth.
father(charles) := phillip.
mother(harry) := diana.
father(harry) := charles.
?- grandparent(X,Y).
X = harry,
Y = elizabeth.
X = harry,
Y = phillip.
Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy.
Applications
Text editors
Emacs, a highly extensible text editor family, uses its own Lisp dialect for writing plugins. Richard Stallman, the original author of the most popular Emacs implementation, GNU Emacs, and of Emacs Lisp, considers Lisp one of his favorite programming languages.[112]
Helix, since version 24.03, supports previewing the AST as S-expressions, which are also the core feature of the Lisp programming language family.[113]
Spreadsheets
Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system.[114] However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far remain primarily academic in nature.[115]
Microservices
Due to their composability, functional programming paradigms can be suitable for microservices-based architectures.[116]
Academia
Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming.
Industry
Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems,[11] but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp.[10][12][117][118][119] Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers[3][4] and has been applied to problems such as training-simulation software[5] and telescope control.[6] OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis,[14] driver verification, industrial robot programming and static analysis of embedded software.[15] Haskell, though initially intended as a research language,[17] has also been applied in areas such as aerospace systems, hardware design and web programming.[16][17]
Other functional programming languages that have seen use in industry include Scala,[120] F#,[18][19] Wolfram Language,[7] Lisp,[121] Standard ML[122][123] and Clojure.[124] Scala has been widely used in data science,[125] while ClojureScript,[126] Elm[127] and PureScript[128] are some of the functional frontend programming languages used in production. Elixir's Phoenix framework is also used by some relatively popular commercial projects, such as Font Awesome and Allegro Lokalnie, the classified ads platform of Allegro, one of the biggest e-commerce platforms in Poland.[129][130]
Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory.[citation needed]
Education
Many universities teach functional programming.[131][132][133][134] Some treat it as an introductory programming concept[134] while others first teach imperative programming methods.[133][135]
Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts.[136] It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics.
In particular, Scheme has been a relatively popular choice for teaching programming for years.[137][138]
Notes and references
- ^ Hudak, Paul (September 1989). "Conception, evolution, and application of functional programming languages" (PDF). ACM Computing Surveys. 21 (3): 359–411. doi:10.1145/72551.72554. S2CID 207637854. Archived from the original (PDF) on 2016-01-31. Retrieved 2013-08-10.
- ^ a b Hughes, John (1984). "Why Functional Programming Matters".
- ^ a b Clinger, Will (1987). "MultiTasking and MacScheme". MacTech. 3 (12). Retrieved 2008-08-28.
- ^ a b Hartheimer, Anne (1987). "Programming a Text Editor in MacScheme+Toolsmith". MacTech. 3 (1). Archived from the original on 2011-06-29. Retrieved 2008-08-28.
- ^ a b Kidd, Eric. Terrorism Response Training in Scheme. CUFP 2007. Archived from the original on 2010-12-21. Retrieved 2009-08-26.
- ^ a b Cleis, Richard. Scheme in Space. CUFP 2006. Archived from the original on 2010-05-27. Retrieved 2009-08-26.
- ^ a b "Wolfram Language Guide: Functional Programming". 2015. Retrieved 2015-08-24.
- ^ "Functional vs. Procedural Programming Language". Department of Applied Math. University of Colorado. Archived from the original on 2007-11-13. Retrieved 2006-08-28.
- ^ "State-Based Scripting in Uncharted 2" (PDF). Archived from the original (PDF) on 2012-12-15. Retrieved 2011-08-08.
- ^ a b "Who uses Erlang for product development?". Frequently asked questions about Erlang. Retrieved 2018-04-27.
- ^ a b Armstrong, Joe (June 2007). "A history of Erlang". Proceedings of the third ACM SIGPLAN conference on History of programming languages. Third ACM SIGPLAN Conference on History of Programming Languages. San Diego, California. doi:10.1145/1238844.1238850. ISBN 9781595937667.
- ^ a b Larson, Jim (March 2009). "Erlang for concurrent programming". Communications of the ACM. 52 (3): 48. doi:10.1145/1467247.1467263. S2CID 524392.
- ^ "The Elixir Programming Language". Retrieved 2021-02-14.
- ^ a b Minsky, Yaron; Weeks, Stephen (July 2008). "Caml Trading — experiences with functional programming on Wall Street". Journal of Functional Programming. 18 (4): 553–564. doi:10.1017/S095679680800676X. S2CID 30955392.
- ^ a b Leroy, Xavier. Some uses of Caml in Industry (PDF). CUFP 2007. Archived from the original (PDF) on 2011-10-08. Retrieved 2009-08-26.
- ^ a b "Haskell in industry". Haskell Wiki. Retrieved 2009-08-26.
Haskell has a diverse range of use commercially, from aerospace and defense, to finance, to web startups, hardware design firms and lawnmower manufacturers.
- ^ a b c Hudak, Paul; Hughes, J.; Jones, S. P.; Wadler, P. (June 2007). A history of Haskell: being lazy with class. Third ACM SIGPLAN Conference on History of Programming Languages. San Diego, California. doi:10.1145/1238844.1238856. Retrieved 2013-09-26.
- ^ a b Mansell, Howard (2008). Quantitative Finance in F#. CUFP 2008. Archived from the original on 2015-07-08. Retrieved 2009-08-29.
- ^ a b Peake, Alex (2009). The First Substantial Line of Business Application in F#. CUFP 2009. Archived from the original on 2009-10-17. Retrieved 2009-08-29.
- ^ de Moura, Leonardo; Ullrich, Sebastian (July 2021). "The Lean 4 Theorem Prover and Programming Language". Lecture Notes in Artificial Intelligence. Conference on Automated Deduction. Vol. 12699. pp. 625–635. doi:10.1007/978-3-030-79876-5_37. ISSN 1611-3349.
- ^ Banz, Matt (2017-06-27). "An introduction to functional programming in JavaScript". Opensource.com. Retrieved 2021-01-09.
- ^ "The useR! 2006 conference schedule includes papers on the commercial use of R". R-project.org. 2006-06-08. Retrieved 2011-06-20.
- ^ Chambers, John M. (1998). Programming with Data: A Guide to the S Language. Springer Verlag. pp. 67–70. ISBN 978-0-387-98503-9.
- ^ Novatchev, Dimitre. "The Functional Programming Language XSLT — A proof through examples". Retrieved May 27, 2006.
- ^ Mertz, David. "XML Programming Paradigms (part four): Functional Programming approached to XML processing". IBM developerWorks. Retrieved May 27, 2006.
- ^ Chamberlin, Donald D.; Boyce, Raymond F. (1974). "SEQUEL: A structured English query language". Proceedings of the 1974 ACM SIGFIDET: 249–264.
- ^ Functional Programming with C# - Simon Painter - NDC Oslo 2020, 8 August 2021, archived from the original on 2021-10-30, retrieved 2021-10-23
- ^ a b "Functional programming - Kotlin Programming Language". Kotlin. Retrieved 2019-05-01.
- ^ Dominus, Mark J. (2005). Higher-Order Perl. Morgan Kaufmann. ISBN 978-1-55860-701-9.
- ^ Holywell, Simon (2014). Functional Programming in PHP. php[architect]. ISBN 9781940111056.
- ^ The Cain Gang Ltd. "Python Metaclasses: Who? Why? When?" (PDF). Archived from the original (PDF) on 30 May 2009. Retrieved 27 June 2009.
- ^ "GopherCon 2020: Dylan Meeus - Functional Programming with Go". YouTube. 22 December 2020.
- ^ "Functional Language Features: Iterators and Closures - The Rust Programming Language". doc.rust-lang.org. Retrieved 2021-01-09.
- ^ Vanderbauwhede, Wim (18 July 2020). "Cleaner code with functional programming". Archived from the original on 28 July 2020. Retrieved 6 October 2020.
- ^ "Effective Scala". Scala Wiki. Archived from the original on 2012-06-19. Retrieved 2012-02-21.
- ^ "Documentation for package java.util.function since Java 8 (also known as Java 1.8)". Retrieved 2021-06-16.
- ^ Turing, A. M. (1937). "Computability and λ-definability". The Journal of Symbolic Logic. 2 (4). Cambridge University Press: 153–163. doi:10.2307/2268280. JSTOR 2268280. S2CID 2317046.
- ^ Haskell Brooks Curry; Robert Feys (1958). Combinatory Logic. North-Holland Publishing Company. Retrieved 10 February 2013.
- ^ Church, A. (1940). "A Formulation of the Simple Theory of Types". Journal of Symbolic Logic. 5 (2): 56–68. doi:10.2307/2266170. JSTOR 2266170. S2CID 15889861.
- ^ McCarthy, John (June 1978). "History of LISP". The first ACM SIGPLAN conference on History of programming languages - HOPL-1 (PDF). Los Angeles, CA. pp. 173–185. doi:10.1145/800025.808387.
- ^ John McCarthy (1960). "Recursive functions of symbolic expressions and their computation by machine, Part I." (PDF). Communications of the ACM. 3 (4): 184–195. doi:10.1145/367177.367199. S2CID 1489409.
- ^ Guy L. Steele; Richard P. Gabriel (February 1996). "The evolution of Lisp". History of programming languages---II (PDF). pp. 233–330. doi:10.1145/234286.1057818. ISBN 978-0-201-89502-5. S2CID 47047140.
- ^ The memoir of Herbert A. Simon (1991), Models of My Life pp.189-190 ISBN 0-465-04640-1 claims that he, Al Newell, and Cliff Shaw are "...commonly adjudged to be the parents of [the] artificial intelligence [field]," for writing Logic Theorist, a program that proved theorems from Principia Mathematica automatically. To accomplish this, they had to invent a language and a paradigm that, viewed retrospectively, embeds functional programming.
- ^ Landin, Peter J. (1964). "The mechanical evaluation of expressions". The Computer Journal. 6 (4). British Computer Society: 308–320. doi:10.1093/comjnl/6.4.308.
- ^ Diehl, Stephan; Hartel, Pieter; Sestoft, Peter (2000). "Abstract machines for programming language implementation". Future Generation Computer Systems. Vol. 16. pp. 739–751.
- ^ Landin, Peter J. (February 1965a). "Correspondence between ALGOL 60 and Church's Lambda-notation: part I". Communications of the ACM. 8 (2). Association for Computing Machinery: 89–101. doi:10.1145/363744.363749. S2CID 6505810.
- ^ Landin, Peter J. (March 1965b). "A correspondence between ALGOL 60 and Church's Lambda-notation: part II". Communications of the ACM. 8 (3). Association for Computing Machinery: 158–165. doi:10.1145/363791.363804. S2CID 15781851.
- ^ Landin, Peter J. (March 1966b). "The next 700 programming languages". Communications of the ACM. 9 (3). Association for Computing Machinery: 157–166. doi:10.1145/365230.365257. S2CID 13409665.
- ^ Backus, J. (1978). "Can programming be liberated from the von Neumann style?: A functional style and its algebra of programs". Communications of the ACM. 21 (8): 613–641. doi:10.1145/359576.359579.
- ^ R.M. Burstall. Design considerations for a functional programming language. Invited paper, Proc. Infotech State of the Art Conf. "The Software Revolution", Copenhagen, 45–57 (1977)
- ^ R.M. Burstall and J. Darlington. A transformation system for developing recursive programs. Journal of the Association for Computing Machinery 24(1):44–67 (1977)
- ^ R.M. Burstall, D.B. MacQueen and D.T. Sannella. HOPE: an experimental applicative language. Proceedings 1980 LISP Conference, Stanford, 136–143 (1980).
- ^ "Make discovering assign() easier!". OpenSCAD. Archived from the original on 2023-04-19.
- ^ Peter Bright (March 13, 2018). "Developers love trendy new languages but earn more with functional programming". Ars Technica.
- ^ John Leonard (January 24, 2017). "The stealthy rise of functional programming". Computing.
- ^ Leo Cheung (May 9, 2017). "Is functional programming better for your startup?". InfoWorld.
- ^ Sean Tull - Monoidal Categories for Formal Concept Analysis.
- ^ Pountain, Dick. "Functional Programming Comes of Age". Byte (August 1994). Archived from the original on 2006-08-27. Retrieved August 31, 2006.
- ^ a b "ISO/IEC JTC 1/SC 22/WG5/N2137 – Fortran 2015 Committee Draft (J3/17-007r2)" (PDF). International Organization for Standardization. July 6, 2017. pp. 336–338.
- ^ "Revised^6 Report on the Algorithmic Language Scheme". R6rs.org. Retrieved 2013-03-21.
- ^ "Revised^6 Report on the Algorithmic Language Scheme - Rationale". R6rs.org. Retrieved 2013-03-21.
- ^ Clinger, William (1998). "Proper tail recursion and space efficiency". Proceedings of the ACM SIGPLAN 1998 conference on Programming language design and implementation - PLDI '98. pp. 174–185. doi:10.1145/277650.277719. ISBN 0897919874. S2CID 16812984.
- ^ Baker, Henry (1994). "CONS Should Not CONS Its Arguments, Part II: Cheney on the M.T.A." Archived from the original on 2006-03-03. Retrieved 2020-04-29.
- ^ Turner, D.A. (2004-07-28). "Total Functional Programming". Journal of Universal Computer Science. 10 (7): 751–768. doi:10.3217/jucs-010-07-0751.
- ^ The Implementation of Functional Programming Languages. Simon Peyton Jones, published by Prentice Hall, 1987
- ^ a b Launchbury, John (March 1993). A Natural Semantics for Lazy Evaluation. Symposium on Principles of Programming Languages. Charleston, South Carolina: ACM. pp. 144–154. doi:10.1145/158511.158618.
- ^ Robert W. Harper (2009). Practical Foundations for Programming Languages (PDF). Archived from the original (PDF) on 2016-04-07.
- ^ Huet, Gérard P. (1973). "The Undecidability of Unification in Third Order Logic". Information and Control. 22 (3): 257–267. doi:10.1016/s0019-9958(73)90301-x.
- ^ Huet, Gérard (Sep 1976). Resolution d'Equations dans des Langages d'Ordre 1,2,...ω (Ph.D.) (in French). Universite de Paris VII.
- ^ Huet, Gérard (2002). "Higher Order Unification 30 years later" (PDF). In Carreño, V.; Muñoz, C.; Tahar, S. (eds.). Proceedings, 15th International Conference TPHOL. LNCS. Vol. 2410. Springer. pp. 3–12.
- ^ Wells, J. B. (1993). "Typability and type checking in the second-order lambda-calculus are equivalent and undecidable". Tech. Rep. 93-011: 176–185. CiteSeerX 10.1.1.31.3590.
- ^ Leroy, Xavier (17 September 2018). "The Compcert verified compiler".
- ^ Peyton Jones, Simon; Vytiniotis, Dimitrios; Weirich, Stephanie; Geoffrey Washburn (April 2006). "Simple unification-based type inference for GADTs". Icfp 2006: 50–61.
- ^ "OCaml Manual". caml.inria.fr. Retrieved 2021-03-08.
- ^ "Algebraic Data Types". Scala Documentation. Retrieved 2021-03-08.
- ^ Kennedy, Andrew; Russo, Claudio V. (October 2005). Generalized Algebraic Data Types and Object-Oriented Programming (PDF). OOPSLA. San Diego, California: ACM. doi:10.1145/1094811.1094814. ISBN 9781595930316. Archived from the original on 2006-12-29.
- ^ Hughes, John. "Why Functional Programming Matters" (PDF). Chalmers University of Technology.
- ^ Purely functional data structures by Chris Okasaki, Cambridge University Press, 1998, ISBN 0-521-66350-4
- ^ L’orange, Jean Niklas. "polymatheia - Understanding Clojure's Persistent Vector, pt. 1". Polymatheia. Retrieved 2018-11-13.
- ^ Michael Barr, Charles Wells - Category Theory for Computer Science.
- ^ Newbern, J. "All About Monads: A comprehensive guide to the theory and practice of monadic programming in Haskell". Retrieved 2008-02-14.
- ^ "Thirteen ways of looking at a turtle". fF# for fun and profit. Retrieved 2018-11-13.
- ^ Hartmanis, Juris; Hemachandra, Lane (1986). "Complexity classes without machines: On complete languages for UP". Automata, Languages and Programming. Lecture Notes in Computer Science. Vol. 226. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 123–135. doi:10.1007/3-540-16761-7_62. ISBN 978-3-540-16761-7. Retrieved 2024-12-12.
- ^ Paulson, Larry C. (28 June 1996). ML for the Working Programmer. Cambridge University Press. ISBN 978-0-521-56543-1. Retrieved 10 February 2013.
- ^ Spiewak, Daniel (26 August 2008). "Implementing Persistent Vectors in Scala". Code Commit. Archived from the original on 23 September 2015. Retrieved 17 April 2012.
- ^ "Which programs are fastest? | Computer Language Benchmarks Game". benchmarksgame.alioth.debian.org. Archived from the original on 2013-05-20. Retrieved 2011-06-20.
- ^ Igor Pechtchanski; Vivek Sarkar (2005). "Immutability specification and its applications". Concurrency and Computation: Practice and Experience. 17 (5–6): 639–662. doi:10.1002/cpe.853. S2CID 34527406.
- ^ "An In-Depth Look at Clojure Collections". InfoQ. Retrieved 2024-04-29.
- ^ "References and Borrowing - The Rust Programming Language". doc.rust-lang.org. Retrieved 2024-04-29.
- ^ "Validating References with Lifetimes - The Rust Programming Language". doc.rust-lang.org. Retrieved 2024-04-29.
- ^ "Concurrent Collections (The Java™ Tutorials > Essential Java Classes > Concurrency)". docs.oracle.com. Retrieved 2024-04-29.
- ^ "Understanding The Actor Model To Build Non-blocking, High-throughput Distributed Systems - Scaleyourapp". scaleyourapp.com. 2023-01-28. Retrieved 2024-04-29.
- ^ Cesarini, Francesco; Thompson, Simon (2009). Erlang programming: a concurrent approach to software development (1st ed.). O'Reilly Media, Inc. (published 2009-06-11). p. 6. ISBN 978-0-596-55585-6.
- ^ "Chapter 25. Profiling and optimization". Book.realworldhaskell.org. Retrieved 2011-06-20.
- ^ Berthe, Samuel (2024-04-29), samber/lo, retrieved 2024-04-29
- ^ "Go Wiki: Compiler And Runtime Optimizations - The Go Programming Language". go.dev. Retrieved 2024-04-29.
- ^ "Comparing Performance: Loops vs. Iterators - The Rust Programming Language". doc.rust-lang.org. Retrieved 2024-04-29.
- ^ Hartel, Pieter; Henk Muller; Hugh Glaser (March 2004). "The Functional C experience" (PDF). Journal of Functional Programming. 14 (2): 129–135. doi:10.1017/S0956796803004817. S2CID 32346900. Archived from the original (PDF) on 2011-07-19. Retrieved 2006-05-28.; David Mertz. "Functional programming in Python, Part 3". IBM developerWorks. Archived from the original on 2007-10-16. Retrieved 2006-09-17.(Part 1, Part 2)
- ^ "Functions — D Programming Language 2.0". Digital Mars. 30 December 2012.
- ^ "Lua Unofficial FAQ (uFAQ)".
- ^ "First-Class Functions in Go - The Go Programming Language". golang.org. Retrieved 2021-01-04.
- ^ Eich, Brendan (3 April 2008). "Popularity".
- ^ van Rossum, Guido (2009-04-21). "Origins of Python's "Functional" Features". Retrieved 2012-09-27.
- ^ "functools — Higher order functions and operations on callable objects". Python Software Foundation. 2011-07-31. Retrieved 2011-07-31.
- ^ Skarsaune, Martin (2008). The SICS Java Port Project Automatic Translation of a Large Object Oriented System from Smalltalk to Java.
- ^ Gosling, James. "Closures". James Gosling: on the Java Road. Oracle. Archived from the original on 2013-04-14. Retrieved 11 May 2013.
- ^ Williams, Michael (8 April 2013). "Java SE 8 Lambda Quick Start".
- ^ Bloch, Joshua (2008). "Item 15: Minimize Mutability". Effective Java (Second ed.). Addison-Wesley. ISBN 978-0321356680.
- ^ "Object.freeze() - JavaScript | MDN". developer.mozilla.org. Retrieved 2021-01-04.
The Object.freeze() method freezes an object. A frozen object can no longer be changed; freezing an object prevents new properties from being added to it, existing properties from being removed, prevents changing the enumerability, configurability, or writability of existing properties, and prevents the values of existing properties from being changed. In addition, freezing an object also prevents its prototype from being changed. freeze() returns the same object that was passed in.
- ^ Daniel Friedman; William Byrd; Oleg Kiselyov; Jason Hemann (2018). The Reasoned Schemer, Second Edition. The MIT Press.
- ^ A. Casas, D. Cabeza, M. V. Hermenegildo. A Syntactic Approach to Combining Functional Notation, Lazy Evaluation and Higher-Order in LP Systems. The 8th International Symposium on Functional and Logic Programming (FLOPS'06), pages 142-162, April 2006.
- ^ "How I do my Computing". stallman.org. Retrieved 2024-04-29.
- ^ "Helix". helix-editor.com. Retrieved 2024-04-29.
- ^ Wakeling, David (2007). "Spreadsheet functional programming" (PDF). Journal of Functional Programming. 17 (1): 131–143. doi:10.1017/S0956796806006186. ISSN 0956-7968. S2CID 29429059.
- ^ Peyton Jones, Simon; Burnett, Margaret; Blackwell, Alan (March 2003). "Improving the world's most popular functional language: user-defined functions in Excel". Archived from the original on 2005-10-16.
- ^ Rodger, Richard (11 December 2017). The Tao of Microservices. Manning. ISBN 9781638351733.
- ^ Piro, Christopher (2009). Functional Programming at Facebook. CUFP 2009. Archived from the original on 2009-10-17. Retrieved 2009-08-29.
- ^ "Sim-Diasca: a large-scale discrete event concurrent simulation engine in Erlang". November 2011. Archived from the original on 2013-09-17. Retrieved 2011-11-08.
- ^ 1 million is so 2011 Archived 2014-02-19 at the Wayback Machine // WhatsApp blog, 2012-01-06: "the last important piece of our infrastructure is Erlang"
- ^ Momtahan, Lee (2009). Scala at EDF Trading: Implementing a Domain-Specific Language for Derivative Pricing with Scala. CUFP 2009. Archived from the original on 2009-10-17. Retrieved 2009-08-29.
- ^ Graham, Paul (2003). "Beating the Averages". Retrieved 2009-08-29.
- ^ Sims, Steve (2006). Building a Startup with Standard ML (PDF). CUFP 2006. Retrieved 2009-08-29.
- ^ Laurikari, Ville (2007). Functional Programming in Communications Security. CUFP 2007. Archived from the original on 2010-12-21. Retrieved 2009-08-29.
- ^ Lorimer, R. J. (19 January 2009). "Live Production Clojure Application Announced". InfoQ.
- ^ Bugnion, Pascal (2016). Scala for Data Science (1st ed.). Packt. ISBN 9781785281372.
- ^ "Why developers like ClojureScript". StackShare. Retrieved 2024-04-29.
- ^ Herrick, Justin (2024-04-29), jah2488/elm-companies, retrieved 2024-04-29
- ^ "Why developers like PureScript". StackShare. Retrieved 2024-04-29.
- ^ Team, Editorial (2019-01-08). "ALLEGRO - all you need to know about the best Polish online marketplace". E-commerce Germany News. Retrieved 2024-04-29.
- ^ "Websites using Phoenix Framework - Wappalyzer". www.wappalyzer.com. Retrieved 2024-04-29.
- ^ "Functional Programming: 2019-2020". University of Oxford Department of Computer Science. Retrieved 28 April 2020.
- ^ "Programming I (Haskell)". Imperial College London Department of Computing. Retrieved 28 April 2020.
- ^ a b "Computer Science BSc - Modules". Retrieved 28 April 2020.
- ^ a b Abelson, Hal; Sussman, Gerald Jay (1985). "Preface to the Second Edition". Structure and Interpretation of Computer Programs (2 ed.). MIT Press. Bibcode:1985sicp.book.....A.
- ^ John DeNero (Fall 2019). "Computer Science 61A, Berkeley". Department of Electrical Engineering and Computer Sciences, Berkeley. Retrieved 2020-08-14.
- ^ Emmanuel Schanzer of Bootstrap interviewed on the TV show Triangulation on the TWiT.tv network
- ^ "Why Scheme for Introductory Programming?". home.adelphi.edu. Retrieved 2024-04-29.
- ^ Staff, IMACS (2011-06-03). "What Is Scheme & Why Is it Beneficial for Students?". IMACS – Making Better Thinkers for Life. Retrieved 2024-04-29.
Further reading
- Abelson, Hal; Sussman, Gerald Jay (1985). Structure and Interpretation of Computer Programs. MIT Press. Bibcode:1985sicp.book.....A.
- Cousineau, Guy and Michel Mauny. The Functional Approach to Programming. Cambridge, UK: Cambridge University Press, 1998.
- Curry, Haskell Brooks and Feys, Robert and Craig, William. Combinatory Logic. Volume I. North-Holland Publishing Company, Amsterdam, 1958.
- Curry, Haskell B.; Hindley, J. Roger; Seldin, Jonathan P. (1972). Combinatory Logic. Vol. II. Amsterdam: North Holland. ISBN 978-0-7204-2208-5.
- Dominus, Mark Jason. Higher-Order Perl. Morgan Kaufmann. 2005.
- Felleisen, Matthias; Findler, Robert; Flatt, Matthew; Krishnamurthi, Shriram (2018). How to Design Programs. MIT Press.
- Graham, Paul. ANSI Common LISP. Englewood Cliffs, New Jersey: Prentice Hall, 1996.
- MacLennan, Bruce J. Functional Programming: Practice and Theory. Addison-Wesley, 1990.
- Michaelson, Greg (10 April 2013). An Introduction to Functional Programming Through Lambda Calculus. Courier Corporation. ISBN 978-0-486-28029-5.
- O'Sullivan, Brian; Stewart, Don; Goerzen, John (2008). Real World Haskell. O'Reilly.
- Pratt, Terrence W. and Marvin Victor Zelkowitz. Programming Languages: Design and Implementation. 3rd ed. Englewood Cliffs, New Jersey: Prentice Hall, 1996.
- Salus, Peter H. Functional and Logic Programming Languages. Vol. 4 of Handbook of Programming Languages. Indianapolis, Indiana: Macmillan Technical Publishing, 1998.
- Thompson, Simon. Haskell: The Craft of Functional Programming. Harlow, England: Addison-Wesley Longman Limited, 1996.
External links
- Ford, Neal. "Functional thinking". Retrieved 2021-11-10.
- Akhmechet, Slava (2006-06-19). "defmacro – Functional Programming For The Rest of Us". Retrieved 2013-02-24. An introduction
- Functional programming in Python (by David Mertz): part 1, part 2, part 3
Functional programming
Overview
Definition
Functional programming is a declarative programming paradigm that treats computation as the evaluation of mathematical functions, emphasizing immutability, expression evaluation, and the avoidance of changing state and mutable data.[6][12] In this approach, programs are built by applying and composing functions rather than relying on imperative instructions that modify variables or external state.[13] This paradigm promotes predictability and modularity by modeling software development after mathematical principles, where functions map inputs to outputs without unintended consequences.[8]

Core characteristics of functional programming include the composition of functions to create complex behaviors from simpler ones, the strict avoidance of side effects such as modifying global variables or performing input/output within functions, and the use of expressions that evaluate to values rather than statements that execute actions.[13][14] These features ensure that functions remain pure, meaning their outputs depend solely on inputs and not on hidden state changes, facilitating easier reasoning, testing, and parallelization.[6] By prioritizing expressions over statements, functional programs express what the computation should achieve rather than how to perform it step by step.[15]

Programs in functional programming are constructed as chains of function transformations applied to input data, transforming it through successive evaluations without persisting changes to the data itself.[6] This stateless nature allows for reusable and composable code blocks that can be combined flexibly. For example, a computation might involve function composition such as (g ∘ f)(x) = g(f(x)), where the result of applying f to x is passed directly to g, yielding a new value without altering x or any surrounding context.[13] This illustrative pattern demonstrates how stateless computation enables clear, mathematical-style problem-solving.[16]
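The composition pattern can be sketched directly in Haskell, whose . operator composes functions; the names f, g, and h below are illustrative only:

f :: Int -> Int
f x = x + 1

g :: Int -> Int
g x = x * 2

-- (g . f) applies f first, then g; no argument or outer state is altered
h :: Int -> Int
h = g . f
-- h 3 evaluates to g (f 3) = 8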
Relation to mathematical foundations
Functional programming draws its foundational principles from lambda calculus, a formal system developed by Alonzo Church in the 1930s to model computation through the application and abstraction of functions.[17] In lambda calculus, all computations are expressed as function definitions and applications, providing a pure framework that directly inspires the treatment of functions as first-class entities in functional programming.[18] This system underpins the paradigm by demonstrating how complex behaviors can emerge from simple functional compositions without mutable state.[17]

A key aspect of this connection is Church's thesis, which posits that the effectively computable functions are precisely those definable in the lambda calculus.[18] Formulated in 1936, the thesis equates lambda-definable functions with recursive functions and those computable by Turing machines, establishing lambda calculus as a universal model of computation equivalent in expressive power to other foundational systems.[18] This assertion supports the computability of pure functional programs, affirming that functional programming can express any algorithm while adhering to mathematical rigor.[17]

Closely related is combinatory logic, a variable-free alternative to lambda calculus introduced by Moses Schönfinkel and Haskell Curry, which achieves Turing completeness through a minimal set of combinators like S and K.[19] In pure functional systems, combinatory logic enables the emulation of lambda terms via bracket abstraction, allowing recursive definitions and the encoding of data structures without explicit variables.[19] Its Turing completeness ensures that functional programming languages based on these logics can compute all partial recursive functions, mirroring the universality of lambda calculus.[19]

Category theory further influences functional programming by providing abstract structures for compositionality, such as functors and monads, which model mappings between types and computations.[20] Originating in the work of Samuel Eilenberg and Saunders Mac Lane in 1945, category theory treats functions as morphisms between objects (types), enabling high-level abstractions that promote reusable patterns in functional code.[20] Functors preserve structure across categories, while monads encapsulate sequential operations, offering a mathematical basis for handling effects in otherwise pure systems.[20]

While inspired by mathematics, functions in programming differ from pure mathematical functions, which are total mappings that deterministically assign a unique output to every input in a defined domain.[21] In functional programming, functions approximate this ideal through purity and referential transparency but can be partial, undefined for certain inputs due to practical constraints like non-termination, and occasionally impure if side effects are introduced for interoperability.[17] This distinction highlights how programming functions balance mathematical determinism with computational realities, such as resource limitations and error handling.[21]
History
Origins in lambda calculus
Functional programming traces its theoretical roots to lambda calculus, a formal system developed by Alonzo Church in the early 1930s as a model for computation and the foundations of mathematics.[22] Church introduced lambda calculus in his papers "A Set of Postulates for the Foundation of Logic", published in 1932 and 1933, aiming to formalize logical systems through function abstraction and application without relying on set theory.[22] This system provided a pure, mathematical framework for expressing computations as transformations via functions, laying the groundwork for later functional paradigms by emphasizing composition and avoidance of mutable state.[23]

At its core, lambda calculus operates through lambda terms that encapsulate three primary concepts: abstraction, which defines functions by binding variables to expressions (e.g., λx.M denoting a function taking x to M); application, which combines terms to invoke functions (e.g., (λx.M)N applying the function to argument N); and beta-reduction, the key computational rule that substitutes arguments into function bodies to evaluate expressions (e.g., reducing (λx.M)N to M with x replaced by N).[23] These mechanisms allow lambda calculus to represent and compute any recursive function, establishing it as a universal model of computation equivalent in expressive power to other foundational systems.[24]

The development of lambda calculus intersected with broader debates on computability in the 1930s, particularly between Church and Alan Turing. Church proposed lambda calculus as a solution to the Entscheidungsproblem (decision problem) in 1936, asserting it could define "effectively calculable" functions, but inconsistencies in early formulations led to refinements.[24] Turing, independently, introduced Turing machines in his 1936 paper "On Computable Numbers", also addressing the decision problem and demonstrating that lambda calculus and Turing machines were computationally equivalent through mutual simulations.[24] This equivalence, formalized in subsequent work by Church's students Stephen Kleene and J. Barkley Rosser, underscored the Church-Turing thesis, positing that these models capture the intuitive notion of mechanical computation.[24]

Early extensions to lambda calculus addressed limitations in expressiveness and consistency, notably through typed variants. Church developed the simply typed lambda calculus in 1940, introducing type assignments to terms to prevent paradoxes like those in the untyped version, such as the self-applicable function leading to inconsistency.[23] Concurrently, Haskell Curry advanced related ideas via combinatory logic, a notationally distinct but equivalent system without explicit variables, and contributed to typed extensions that influenced later type theories.[25] These typed systems provided a safer foundation for logical and computational reasoning, paving the way for applications in proof theory and programming language design.[23]
Early languages and influences
The development of Lisp in 1958 by John McCarthy marked the first practical implementation of functional programming concepts, drawing directly from lambda calculus to support symbolic computation in artificial intelligence research.[26] McCarthy designed Lisp as a list-processing language where functions could treat code as data, enabling recursion and higher-order functions, which facilitated early AI experiments at institutions like MIT and Stanford.[27] This innovation addressed the limitations of imperative languages like Fortran by emphasizing expression evaluation over step-by-step instructions, laying groundwork for functional paradigms in computing.[26]

In 1966, Peter Landin introduced ISWIM (If You See What I Mean), an abstract functional language that formalized many Lisp-inspired ideas without tying them to a specific machine model.[28] ISWIM featured applicative-style programming, lexical scoping, and a denotational semantics based on lambda calculus, influencing subsequent designs by abstracting away implementation details like storage management.[28] Although never fully implemented, ISWIM's emphasis on purity and orthogonality of constructs served as a blueprint for later languages, promoting the separation of algorithmic description from hardware concerns.[28]

The 1970s saw the emergence of Hope, developed by Rod Burstall and colleagues at the University of Edinburgh, which introduced pattern matching as a core mechanism for data decomposition in functional programs.[29] Hope built on ISWIM's abstractions to support nondeterministic computation and abstract data types, making it suitable for theorem proving and symbolic manipulation tasks.[29] Concurrently, ML (Meta Language) originated in 1973 within the LCF theorem-proving system at Edinburgh, where Robin Milner and others created it as an interactive meta-language for defining tactics and proofs.[30] ML's polymorphic type inference and strong typing disciplined functional expressions, enhancing reliability in logical reasoning applications while evolving into a general-purpose language.[30]

By the 1980s, David Turner's Miranda advanced lazy evaluation and polymorphic typing in a purely functional setting, serving as a direct precursor to Haskell.[31] Miranda, implemented starting in 1985, integrated user-defined algebraic data types with automatic type checking, streamlining the development of modular, composable programs for both research and practical use.[31]

These early languages collectively influenced key computing techniques, including garbage collection—first practically realized in Lisp to manage dynamic list structures automatically—list processing for symbolic AI tasks, and recursion as a primary control mechanism over loops.[32] Such advancements shifted programming toward declarative styles, impacting system design in areas like theorem proving and compiler construction.[32]
Modern developments
In the 1990s, Haskell emerged as a pivotal advancement in functional programming, initiated by a committee of researchers aiming to consolidate diverse lazy functional languages into a standardized, purely functional paradigm. The first Haskell Report was published in 1990, defining the language's core features including lazy evaluation, type classes, and monads, which facilitated expressive yet safe programming. Subsequent revisions, such as the Haskell 98 Report in 1999 and the Haskell 2010 Report, refined these elements, promoting widespread adoption and influencing compiler implementations like GHC.[33][34]

The 2000s saw the rise of dependently typed functional languages, enabling formal verification of software by allowing types to depend on program values. Idris, developed by Edwin Brady starting in 2007, introduced practical dependent types with totality checking to ensure termination, making it suitable for verified systems programming beyond theorem proving. Similarly, Agda, evolving from earlier proof assistants and reaching its modern form around 2007 under Ulf Norell's leadership, supports interactive theorem proving and dependently typed programming, with applications in formalizing mathematics and software correctness. These languages extended functional programming's mathematical foundations into verifiable computation.[35][36]

Functional programming's principles of immutability and higher-order functions have significantly influenced parallel and distributed computing since the 1990s. Erlang, released publicly in 1998 after internal development at Ericsson, pioneered lightweight process-based concurrency with message passing, enabling fault-tolerant systems for telecommunications that scale across millions of concurrent tasks without shared state. In the 2010s, these ideas permeated big data processing; Apache Spark, originating as a UC Berkeley research project in 2009 and becoming an Apache project in 2013, incorporated functional APIs via Resilient Distributed Datasets (RDDs), allowing immutable, chainable transformations for efficient parallel data analytics on clusters.[37][38]

Recent trends in functional programming emphasize advanced type systems for handling side effects and flexibility. Effect systems, which explicitly track computational effects like I/O or state in types, gained traction in the 2010s; Koka, developed at Microsoft Research since around 2012, uses row-polymorphic effect types to compose and infer effects modularly, bridging pure functional code with imperative necessities without monads. Concurrently, gradual typing has emerged to integrate dynamic and static typing seamlessly in functional languages, reducing barriers to adoption; for instance, extensions in languages like Racket (via Typed Racket) and research prototypes support progressive type annotations, preserving functional purity while allowing untyped code migration, as explored in sound gradual typing frameworks since the mid-2010s.[39][40]
Core Concepts
Pure functions and immutability
In functional programming, pure functions are central, defined as mappings from inputs to outputs that always produce the same result for the same arguments and exhibit no observable side effects, such as input/output operations, mutable state changes, or external interactions.[41] This strict adherence to mathematical function semantics ensures that function behavior is deterministic and independent of execution context or global state.[41] For instance, in a language like Haskell, a function such as:

square :: Int -> Int
square x = x * x

computes the square of its argument x without accessing or modifying any external variables.[42]
Immutability reinforces the purity of functional programs by prohibiting modifications to data after its creation, requiring the production of new data structures for any updates rather than altering existing ones in place.[43] This approach yields significant benefits, including enhanced thread-safety, as immutable data can be shared concurrently across multiple threads without the need for synchronization mechanisms like locks, thereby reducing the risk of race conditions.[43] Additionally, immutability simplifies program reasoning and debugging, as developers can analyze code without tracking potential state mutations that might occur elsewhere, leading to more modular and verifiable designs.[44] An example is list construction in functional languages, where prepending an element creates a new list rather than modifying the original:

newList = 1 : originalList -- originalList remains unchanged

preserving originalList throughout the program.[43]
A key consequence of combining pure functions with immutability is referential transparency, the property that any expression in the program can be replaced by its computed value without altering the overall program's observable behavior or meaning. This substitutability, rooted in the absence of side effects and mutable state, enables equational reasoning, where programs can be transformed algebraically for optimization or proof, much like mathematical equations.[45] Referential transparency thus underpins the reliability of functional programs, allowing optimizations such as common subexpression elimination without fear of unintended consequences.[45]
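As a minimal sketch of this substitutability, assuming the pure square function shown earlier:

doubleSquare :: Int -> Int
doubleSquare n = square n + square n
-- Referential transparency licenses rewriting without changing behavior:
--   doubleSquare n  =  let y = square n in y + y  =  2 * square n
-- because square n always denotes the same value for the same n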
While pure functional programming avoids side effects to maintain these properties, real-world applications require handling them in controlled ways, such as through monads, which encapsulate effects like I/O or state within a pure computational framework, ensuring they do not leak into the main program.[42] For example, Haskell's IO monad sequences side-effecting actions while isolating them from pure code.[42] Alternatively, delimited continuations offer a mechanism to capture and manipulate portions of the control stack for effectful computations, providing composable control over effects without compromising referential transparency in the broader program.[46] These abstractions, often implemented as external modules or libraries, allow functional programmers to integrate necessary impurities while preserving the core benefits of purity and immutability.[42]
First-class and higher-order functions
In functional programming, first-class functions are treated as values with the same rights as other data types, such as integers or strings. This means functions can be assigned to variables, passed as arguments to other functions, returned as results from functions, and stored in data structures like lists or tables.[47] For instance, a function can be stored in a collection and later retrieved and invoked, enabling dynamic behavior and flexible code organization. This treatment contrasts with languages where functions are second-class citizens, limited to static definitions without such manipulations.[47]

Higher-order functions build upon first-class functions by accepting other functions as inputs or producing functions as outputs, facilitating abstraction and composition. Common examples include map, which applies a given function to each element of a list to produce a new list; filter, which uses a predicate function to select elements meeting a condition; and fold (or reduce), which combines list elements using an accumulating function and an initial value, such as computing a sum or product.[44] These functions promote algorithmic generality, allowing the same implementation to handle diverse operations by varying the supplied function.[44]
A key technique enabled by first-class and higher-order functions is currying, where a function expecting multiple arguments is transformed into a sequence of functions, each accepting a single argument. This allows partial application: for example, a binary addition function add x y = x + y can be partially applied as add x, yielding a new unary function that adds x to its input.[44] Currying enhances expressiveness by turning multi-argument functions into chains of single-argument ones, which can then be composed or passed as higher-order arguments.[44]
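A brief sketch in Haskell, where functions are curried by default:

add :: Int -> Int -> Int
add x y = x + y

-- Partial application: supplying one argument yields a unary function
increment :: Int -> Int
increment = add 1
-- increment 41 evaluates to 42; map (add 10) [1, 2, 3] yields [11, 12, 13]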
The benefits of first-class and higher-order functions include improved code reuse, as generic functions like map or fold can be reused across data types and operations without duplication.[44] They also foster modularity by encapsulating behavior in composable units, reducing coupling and simplifying maintenance.[47] Overall, this paradigm boosts expressiveness in algorithm design, enabling concise implementations of complex logic through function composition rather than imperative loops or conditionals.[44]
Example: Higher-Order Functions in Pseudocode
-- Map: applies f to each element of xs
map f [] = []
map f (x:xs) = f x : map f xs
-- Usage: double all numbers in a list
doubles = map (\x -> x * 2) [1, 2, 3] -- Results in [2, 4, 6]
-- Fold: accumulates using op, starting from init
fold op init [] = init
fold op init (x:xs) = fold op (op init x) xs
-- Usage: sum a list
sum xs = fold (+) 0 xs
Recursion and referential transparency
In functional programming, recursion provides the primary means of controlling program flow and implementing iteration, by defining a function in terms of calls to itself rather than relying on mutable state or imperative loops. This technique allows for concise and declarative expressions of algorithms, such as computing factorials or traversing data structures, and aligns closely with mathematical induction. Early demonstrations of recursion's power appeared in list-processing systems, where it enabled symbolic computation without explicit iteration constructs.[48]

A key variant is tail recursion, in which the recursive call occurs as the final action in the function body, with no subsequent computations required. This structure permits tail call optimization (TCO), a compiler technique that eliminates the need for additional stack frames, converting the recursion into an efficient loop-like iteration and preventing stack overflow in deep calls. Proper tail recursion ensures predictable space efficiency, making it a cornerstone for scalable functional implementations.[49]

Referential transparency enhances recursion by guaranteeing that any function application can be replaced by its computed value without changing the overall program's behavior, promoting reliable equational reasoning even in recursive definitions. This property allows developers to treat recursive functions as algebraic equations, facilitating optimizations like unfolding for termination analysis or equivalence proofs. In recursive contexts, transparency supports modular composition and verification, as the outcome depends solely on inputs, free from hidden dependencies.[50]

Fixed-point combinators, such as the Y combinator, enable recursion in purely applicative settings like untyped lambda calculus, where functions lack direct self-reference. The Y combinator satisfies Y f = f (Y f), yielding a fixed point of any function f and allowing anonymous recursive definitions through higher-order application alone. Formally, this construction underpins general recursion in foundational models of computation.[51]
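A standard sketch of the tail-recursive pattern described above, not drawn from the cited sources, is a factorial with an accumulator:

factorial :: Integer -> Integer
factorial n = go n 1
  where
    -- The recursive call to go is the entire result of each step,
    -- so the frame can be reused instead of growing the stack
    go 0 acc = acc
    go k acc = go (k - 1) (k * acc)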
Evaluation and Computation
Strict versus lazy evaluation
In functional programming, strict evaluation, also known as eager or call-by-value evaluation, requires that function arguments be fully evaluated to their values before the function is applied.[52] This strategy is employed in languages such as Standard ML and Scheme, where applicative-order reduction ensures arguments are reduced to normal form prior to substitution.[53][54] For instance, in Standard ML, the expression f (g () + h ()) first computes g () and h () completely before passing their results to f.[55]
In contrast, lazy evaluation, or non-strict evaluation, defers the computation of arguments until their values are actually required during the program's execution, typically using a call-by-need mechanism to avoid redundant computations through memoization.[56] This approach is the default in Haskell, where it facilitates the definition and manipulation of infinite data structures, such as the infinite list of alternating 1s and 0s: alt = 1 : 0 : alt.[56] Only the elements needed for a specific computation are evaluated, as in the case of filtering or mapping over this list, which would otherwise be impossible under strict evaluation.[57]
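Only the demanded prefix of such a structure is ever computed, as a short Haskell sketch shows:

alt :: [Int]
alt = 1 : 0 : alt              -- infinite alternating list

firstFive :: [Int]
firstFive = take 5 alt         -- [1, 0, 1, 0, 1]; the rest stays unevaluated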
Strict evaluation offers predictability in execution order and efficiency for finite computations, as arguments are evaluated exactly once, aligning well with hardware optimizations and avoiding overhead from thunks (unevaluated expressions).[52] However, it can lead to unnecessary work, such as evaluating unused branches in conditionals like if, and may cause non-termination if an argument diverges.[58] Lazy evaluation promotes conciseness by allowing modular composition of functions without concern for evaluation order, enhances support for higher-order functions, and enables parallelism since independent subcomputations can proceed concurrently without affecting the final result.[57] Its drawbacks include potentially unpredictable memory usage due to thunk accumulation and challenges in debugging due to deferred effects, though these are mitigated in pure functional settings.[58]
The Church-Rosser theorem underpins the independence of these evaluation strategies in lambda calculus, stating that if a term reduces to two different terms via beta-reduction, there exists a common term to which both can further reduce (confluence).[59] This property ensures that, provided a normal form exists, strict and lazy strategies (corresponding to applicative-order and normal-order reduction) will converge to the same result, justifying the use of laziness without loss of correctness in confluent systems like pure functional languages.[60][61]
Reduction strategies
In functional programming, reduction strategies define the order in which expressions, particularly beta-redexes in lambda calculus, are evaluated to compute results. These strategies influence efficiency, termination behavior, and resource usage, with normal-order and applicative-order serving as foundational approaches that correspond to lazy and strict evaluation models, respectively.[62] Advanced variants like call-by-need and optimal reduction address limitations by incorporating memoization and graph-based optimizations to minimize redundant computations.[63]

Normal-order reduction prioritizes the leftmost outermost redex, delaying evaluation of arguments until they are needed, which ensures that if a normal form exists, it will be reached without reducing unused subexpressions.[64] This strategy aligns with call-by-name semantics and is particularly effective in pure functional languages for avoiding unnecessary work, as seen in the evaluation of (λx.y) Ω, where Ω stands for the divergent term (λx.x x)(λx.x x): the outer application reduces first to yield y, bypassing the divergent argument.[62] However, it may perform duplicate reductions if the same argument is used multiple times, leading to higher computational cost in such cases.[65]

In contrast, applicative-order reduction evaluates the rightmost innermost redex first, fully reducing arguments before applying functions, which mirrors eager evaluation but risks both computing unused values and diverging on arguments that are never needed.[64] For the same expression (λx.y) Ω, applicative order would first evaluate the argument Ω, causing non-termination, whereas normal order succeeds.[62] This strategy is common in strict languages like Scheme, where it promotes predictable performance by evaluating all operands upfront.[63]

Call-by-need extends normal-order by introducing sharing through memoization: arguments are evaluated at most once and stored for reuse, reducing redundancy while preserving laziness.[66] Formally defined in the call-by-need λ-calculus, it replaces variables with arguments only when needed and retains the result in a let-like binding, as in the reduction of (λx. x + x) M, where the value of M is computed once and shared between both occurrences of x.[67] This approach, implemented in languages like Haskell, achieves optimal space usage for functional programs with multiple argument occurrences, avoiding the duplication pitfalls of pure call-by-name.[66]

Optimal reduction strategies leverage graph rewriting to perform the minimal number of beta-reductions necessary to reach the normal form, eliminating all duplication and unnecessary steps through pointer-free representations.[68] John Lamping's algorithm, for instance, encodes lambda terms as directed acyclic graphs with control operators, enabling interaction-net reductions that simulate Lévy-optimal behavior without copying subterms.[68] In practice, graph reduction is applied in compilers for functional languages, such as those using SECD machines or G-machine variants, where it replaces tree-based evaluation to handle sharing efficiently, as demonstrated in the TIM (Three Instruction Machine) for lazy evaluation.[69] Such techniques ensure that reductions are both time- and space-optimal for typable terms, though they increase implementation complexity.[68]
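A small sketch of the sharing behavior of call-by-need described above; the function expensive is a hypothetical stand-in for a costly pure computation:

expensive :: Int -> Int
expensive n = n * n            -- imagine a much costlier pure function

-- Under call-by-need, the let-bound expression is evaluated at most once;
-- both occurrences of x share the single computed result
sharedSum :: Int
sharedSum = let x = expensive 42 in x + x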
Type Systems
Typing disciplines
Functional programming languages employ various typing disciplines to ensure program correctness, ranging from basic type checking to advanced mechanisms that enhance expressiveness and safety. Static typing, as seen in languages like Haskell, performs type checks at compile time, catching errors early and enabling optimizations such as type-directed compilation. In contrast, dynamic typing, exemplified by Lisp, defers type checking to runtime, offering greater flexibility for rapid prototyping but potentially leading to errors detected only during execution.[70] This dichotomy balances safety with expressiveness, with static approaches prioritizing prevention of type mismatches before runtime, while dynamic ones allow for more fluid code evolution.[71]

Strong typing in functional programming prevents invalid operations by enforcing strict type rules, reducing the risk of subtle bugs like mixing incompatible data types. For instance, it disallows implicit conversions that could lead to unintended behaviors, promoting safer code construction. Polymorphism extends this by allowing functions to operate uniformly across multiple types; parametric polymorphism enables generic functions that work independently of specific types, such as a map function applicable to lists of any element type, while ad-hoc polymorphism supports type-specific overloads through mechanisms like type classes.[72] These features enhance reusability without sacrificing type safety, as the type system verifies applicability at compile time in statically typed languages.[73]

Types also enforce functional purity by distinguishing pure functions—those without side effects—from impure ones, ensuring referential transparency where expressions can be substituted without altering program behavior. In Haskell, the IO monad encapsulates side-effecting operations, preventing pure functions from inadvertently introducing impurity and allowing the compiler to optimize pure code more aggressively. Monomorphic functions operate on fixed types, limiting generality, whereas polymorphic ones adapt to varying types while maintaining purity guarantees through type constraints. This discipline isolates effects, making it easier to reason about program semantics and compose components reliably.

Dependent types represent an advanced discipline where types can depend on values, enabling the expression of precise invariants and proofs within the type system itself. For example, a function might require a vector of length exactly n, where n is a runtime value, ensuring that length-related errors are caught at compile time rather than runtime. Languages like Idris and Agda use dependent types to verify properties such as totality—guaranteeing that functions terminate—thus providing a foundation for certified software. This approach bridges programming and proof, enhancing safety for critical applications while introducing some complexity in type annotations.[74]
Type inference and polymorphism
Type inference in functional programming languages enables compilers to automatically determine the types of expressions, reducing boilerplate while preserving type safety, particularly in systems supporting polymorphism where functions can operate generically across multiple types. Parametric polymorphism, a core feature, allows quantification over type variables, enabling reusable code without runtime type information. This contrasts with explicit type annotations in other paradigms, as inference algorithms leverage the structure of lambda calculus to derive principal types efficiently.[75]

The Hindley-Milner type system, pioneered by J. Roger Hindley in the early 1970s and formalized by Robin Milner in 1978, provides a decidable framework for inferring polymorphic types in the simply typed lambda calculus extended with let-polymorphism. It introduces the concept of a principal type scheme—the most general type from which all others can be derived through instantiation—ensuring completeness and uniqueness in inference. Algorithm W, the standard implementation, uses unification to match type variables against constraints, allowing local definitions to generalize types implicitly, as seen in languages like ML, where a function like id x = x is inferred to have the type forall a. a -> a. This approach guarantees decidable type checking that runs in near-linear time on typical programs, supporting safe polymorphism without annotations.[75]
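As a small illustration, a Hindley-Milner style checker such as GHC's infers the most general types below without any annotations (the names are illustrative):

-- Inferred: ident :: a -> a
ident x = x

-- Inferred: pairUp :: a -> b -> (a, b)
pairUp x y = (x, y)

-- Inferred: applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)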
System F, independently developed by Jean-Yves Girard around 1970 and John Reynolds in 1974, extends parametric polymorphism to second-order quantification, permitting abstraction over types themselves (e.g., forall a. a -> a for the identity function, applicable to any type). This higher-kinded polymorphism enables encoding complex data abstractions, such as polymorphic lists, directly in the type system via type lambdas (Λa. e) and applications (e [τ]). However, while type checking remains decidable in restricted fragments like prenex form (where quantifiers are outermost), full type inference in System F is undecidable, as proven by reduction to semi-unification problems, complicating its practical use without partial annotations or restrictions.[76][77]
For ad-hoc polymorphism, where operations like equality or arithmetic are overloaded based on type-specific behaviors rather than uniform generics, type classes offer a modular solution integrated with Hindley-Milner inference. Proposed by Philip Wadler and Stephen Blott in 1989 for Haskell, type classes define interfaces (e.g., class Eq a where (==) :: a -> a -> Bool) with instances providing implementations for specific types, such as integers or lists. Inference extends Algorithm W by collecting and resolving class constraints via dictionary translation, ensuring overload resolution without explicit types; for instance, elem :: Eq a => a -> [a] -> Bool is inferred automatically. This preserves decidability while enabling extensible overloading.[78]
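A minimal sketch, assuming a user-defined Color type, of how an instance supplies the type-specific behavior that constraint resolution selects:

data Color = Red | Green | Blue

instance Eq Color where
  Red   == Red   = True
  Green == Green = True
  Blue  == Blue  = True
  _     == _     = False

-- elem Red [Blue, Red] now resolves the Eq constraint to this instance
-- and evaluates to True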
Key challenges in these systems include balancing expressiveness with inference completeness and decidability; for example, impredicative polymorphism in System F leads to undecidability, requiring trade-offs like rank restrictions or user annotations in languages like Haskell to maintain tractable inference. Additionally, extending Hindley-Milner with features like type classes demands careful constraint solving to avoid ambiguity, though practical implementations achieve completeness for typical programs.[77]
Data Abstractions
Immutable data structures
Immutable data structures form a cornerstone of functional programming, where data cannot be modified after creation, ensuring that functions produce outputs dependent solely on their inputs. This immutability aligns with the paradigm's emphasis on pure functions and referential transparency, enabling safe composition and reasoning about code without side effects.[79] Persistent variants of these structures allow multiple versions to coexist, achieved through structural sharing that reuses unchanged portions across updates, rather than full copying.[79]

A primary technique for persistence is path copying in tree-based structures, where modifications create new paths from the root to the affected node while sharing unaffected subtrees, thus maintaining efficiency. For instance, in balanced binary search trees, this approach yields O(log n) time for insertions and deletions, comparable to mutable counterparts but without altering existing versions.[79]

Common immutable types include cons cells for lists, as pioneered in Lisp, where each cell pairs a value with a pointer to the rest of the list, allowing O(1) cons operations and efficient tail sharing for appends.[79] Tree structures extend this to more complex data, such as binary random-access lists that support O(1) head and tail operations alongside O(log n) indexing, leveraging complete binary leaf trees for balance.[79] For associative data, persistent hash maps employ hash array mapped tries (HAMTs), which use hash bits to index into arrays of sub-tries, enabling O(1) amortized lookups and O(log n) updates through structural sharing of nodes. These designs, as in catenable lists for queues, often achieve O(1) amortized operations for enqueue and dequeue via lazy evaluation and suspensions.[79]

Efficiency in persistent structures stems from minimizing recomputation: updates copy only the modified path, typically O(log n) nodes for trees, reducing time and space overhead compared to naive copying.[79] However, this incurs trade-offs, including higher overall memory usage from retaining multiple versions and shared nodes, which can multiply space requirements in long-lived computations.[79] In exchange, immutability provides inherent thread safety, eliminating the need for locks in concurrent settings and facilitating parallelism, as multiple threads can access versions without interference.[79]
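Structural sharing is visible even in plain Haskell lists, where prepending allocates a single new cell:

original :: [Int]
original = [2, 3, 4]

extended :: [Int]
extended = 1 : original
-- O(1) update: extended is one new cons cell pointing at original's cells;
-- original itself remains intact and both versions stay usable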
Algebraic data types and pattern matching
Algebraic data types (ADTs) are composite data types in functional programming that allow programmers to define structured data through algebraic operations, specifically products and sums of simpler types. Product types, such as tuples or records, combine multiple values into a single unit, representing the Cartesian product of their component types; for instance, a pair of integers (Int, Int) holds two integer values. Sum types, also known as variants or discriminated unions, represent choices among alternatives, where a value is exactly one of several possible constructors, each potentially carrying associated data; this enables safe handling of disjoint cases without runtime type checks.[32]

The concept of ADTs traces back to Peter Landin's ISWIM language in 1966, which introduced algebraic type definitions using a sum-of-products structure, laying foundational ideas for typed functional languages. Subsequent developments occurred in NPL (1973–1975) by Rod Burstall and John Darlington, which extended ISWIM with algebraic types and case expressions for analysis, and in HOPE (1980) by Rod Burstall, David MacQueen, and Don Sannella, which incorporated polymorphic algebraic types and pattern matching. Modern implementations appear in languages like ML (from the 1970s, pioneered by Robin Milner), Miranda (1985 by David Turner), and Haskell (1990 by a committee including Paul Hudak and Simon Peyton Jones), where ADTs form a core mechanism for data abstraction while maintaining type safety.[32]

Pattern matching provides a concise way to destructure and inspect values of ADTs, enabling control flow based on the structure and content of data without explicit type tests or conditionals. In functional languages, it is expressed through constructs like case expressions or multi-equation function definitions, where patterns are matched sequentially against a subject value, binding variables to subcomponents upon success and executing the corresponding body. This mechanism originated in early languages like NPL with case expressions and evolved in HOPE and SASL (1973–1983) to support multi-level destructuring, such as let (a, (b, c), d) = expr in body. In Haskell, for example, pattern matching integrates seamlessly with function definitions:

safeHead :: [a] -> Maybe a
safeHead [] = Nothing
safeHead (x:_) = Just x

Here the empty-list pattern [] and the cons pattern (x:_) destructure lists, handling the absence or presence of elements.[32]
Common ADTs include the Maybe (or Option) type for representing optional values or error handling without null pointers, defined in Haskell as data Maybe a = Nothing | Just a, where Nothing indicates absence and Just x wraps a value x. Pattern matching on Maybe allows safe extraction:

handleMaybe :: Maybe Int -> String
handleMaybe Nothing = "No value"
handleMaybe (Just n) = "Value: " ++ show n

Similarly, the Either type models computations that may produce a success or failure, defined as data Either a b = Left a | Right b, with Left typically denoting an error and Right a result; it facilitates error propagation in a typed manner. These types exemplify how ADTs promote composability and reduce boilerplate compared to ad-hoc error mechanisms in other paradigms.
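A short illustrative sketch of Either-based error propagation, a standard example rather than one from the cited sources:

safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

-- Pattern matching obliges callers to handle both outcomes
describe :: Either String Int -> String
describe (Left err) = "Error: " ++ err
describe (Right n)  = "Result: " ++ show n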
Many functional language compilers perform exhaustiveness checking on pattern matches, verifying that all possible constructors of an ADT are covered to prevent runtime errors from unmatched cases. In ML and Haskell, this analysis issues warnings for non-exhaustive matches, ensuring program robustness; for instance, GHC (the Glasgow Haskell Compiler) uses an algorithm to detect uncovered variants during compilation. This feature, refined in works like Luc Maranget's analysis of ML pattern-matching anomalies, detects both non-exhaustiveness and redundant patterns, contributing to safer code.[80]
Comparisons to Other Paradigms
Versus imperative programming
Functional programming and imperative programming represent two fundamental paradigms in software development, differing primarily in their approach to computation and state management. Imperative programming emphasizes explicit control over the program's state through sequential instructions, where developers specify how to achieve results via assignments that modify variables and control structures like loops for repetition.[81] In contrast, functional programming adopts a declarative style, focusing on what the computation should compute by composing pure functions and expressions without direct state mutation, treating programs as mathematical evaluations of functions applied to arguments.[81] This shift avoids the "word-at-a-time" modifications central to imperative languages, enabling higher-level abstractions and algebraic manipulation of entire program structures.[81]

While functional programming eschews mutable state to ensure referential transparency, it can simulate imperative-style state changes through mechanisms like monads and uniqueness types, allowing controlled mutation without compromising purity. Monads encapsulate state transformations by threading an implicit state parameter through computations, as in the state monad, where a function might update a counter during evaluation while returning both the result and the new state.[82] For instance, in Haskell, the state monad enables tracking operations like division counts without global variables, sequencing effects via the bind operator.[82] Similarly, uniqueness types in languages like Clean permit destructive updates on data structures guaranteed to have a single reference, using type annotations to enforce that uniqueness at compile time, thus enabling efficient in-place modifications akin to imperative assignments while preserving functional semantics.[83]

Control flow in functional programming relies on recursion and pattern matching rather than imperative constructs like if-else statements or while loops, promoting composable and equational reasoning. Recursion serves as the primary means for iteration, often optimized via tail-call elimination to avoid stack overflows, as seen in Haskell implementations where recursive list comprehensions replace C++-style for loops for tasks like coordinate transformations.[84] Pattern matching on algebraic data types further handles branching by exhaustively deconstructing values, contrasting with imperative conditional jumps that can lead to non-local control effects.[84]

Error handling in functional programming favors explicit representation of failures through monads or algebraic data types, avoiding the non-local control flow of imperative exceptions. The exception monad, for example, wraps computations to either return a value or raise an error like division by zero, allowing propagation and handling via bind without disrupting purity.[82] Algebraic data types such as Either enable typed error channels, where a function returns success or failure variants that can be pattern-matched, providing compile-time guarantees against unhandled errors, in contrast to imperative try-catch blocks that rely on runtime checks.[85]
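As a minimal sketch of the state-monad technique described above, using Control.Monad.State from the standard mtl/transformers libraries:

import Control.Monad.State

-- tick returns the current counter and increments the threaded state;
-- nothing is mutated, and the new state is passed along implicitly
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

-- Running two ticks from an initial state of 0 yields (0, 1)
twoTicks :: (Int, Int)
twoTicks = evalState ((,) <$> tick <*> tick) 0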
Versus object-oriented programming
Functional programming emphasizes composition of functions to build complex behaviors, where higher-order functions enable modular assembly without altering internal state, contrasting with object-oriented programming's reliance on inheritance hierarchies to extend and specialize classes. In FP, behaviors are combined through function application and currying, promoting reuse via small, composable units that adhere to the principle of referential transparency.[86] In OOP, inheritance allows subclasses to inherit and override methods from superclasses, facilitating polymorphism but often leading to rigid hierarchies prone to the fragile base class problem.[87] This difference underscores FP's preference for "composition over inheritance", as articulated in design principles that favor flexible function pipelines over static class trees.[88]

Objects in OOP can be viewed as stateful closures, encapsulating mutable data and methods within a single entity that maintains internal state across invocations, akin to a closure capturing and modifying variables from its environment. This statefulness enables imperative updates but introduces challenges like aliasing and concurrency issues, diverging from FP's stateless pure functions that produce outputs deterministically from inputs without side effects.[87] In FP, functions act as mathematical mappings, ensuring predictability and easier testing, while OOP objects bundle state and behavior to model real-world entities with evolving properties.[88]

Many object-oriented languages have incorporated FP elements, such as functional interfaces in Java 8, which allow lambda expressions to implement single-abstract-method interfaces for stream processing. The Stream API enables declarative operations like map, filter, and reduce on collections, treating data flows functionally while integrating with Java's class-based ecosystem.[89] This hybrid approach leverages FP's expressiveness for concise data manipulation without abandoning OOP's object model.

Languages like Scala exemplify hybrids blending FP and OOP through traits, which serve as interfaces with implementations that support both mixin composition and inheritance. Traits allow stacking behaviors modularly, combining FP's higher-order functions with OOP's type-safe polymorphism, enabling scalable designs that unify the paradigms.[90][86]
Versus logic programming
Functional programming and logic programming both belong to the declarative paradigm, where programs describe what computations should achieve rather than how to perform them step by step.[91] In logic programming, exemplified by languages like Prolog, computations are based on defining relations between entities using logical predicates, with execution driven by a search process that attempts to satisfy queries through backtracking over possible solutions. This contrasts with functional programming, where computation centers on the evaluation of pure functions applied to arguments, producing deterministic results without side effects or search mechanisms.[92]

A key distinction lies in their core mechanisms for handling structures: logic programming relies on unification and resolution. Unification finds substitutions that make two terms identical, allowing variables on both sides to bind dynamically during query resolution, which supports non-deterministic exploration of relations via backtracking.[93] In functional programming, pattern matching serves a similar but more restricted role, decomposing data structures in a one-way manner against fixed patterns in function definitions, without bidirectional variable binding or inherent search.[93] Resolution in logic programming then uses these unifications to derive facts from a knowledge base, enabling relational queries that can yield multiple answers.[94]

Despite these differences, both paradigms share declarative roots in avoiding explicit control flow, drawing from mathematical foundations like lambda calculus for functions and first-order logic for relations.[95] However, functional programming is inherently deterministic, with evaluation strategies like call-by-value or call-by-need ensuring predictable outcomes, whereas logic programming's non-determinism arises from the order-independent nature of clauses and the need to explore alternative proofs.[91]

Efforts to bridge these paradigms have led to hybrid languages like Curry, which integrates functional evaluation with logic programming features such as non-deterministic search and unification, allowing programmers to define functions that incorporate logical variables and backtracking while maintaining higher-order functional abstractions.[96] In Curry, this combination enables concise expressions of both computational and relational problems, such as using functional patterns alongside free variables for flexible data processing.[96]
Implementations and Languages
Pure functional languages
Pure functional languages are programming languages designed to strictly adhere to functional programming principles, emphasizing immutability, referential transparency, and the absence of side effects, which allows for mathematical-like reasoning about code behavior.[97] These languages typically feature strong type systems, higher-order functions, and mechanisms for abstraction that promote composability and predictability. Examples include Haskell and Idris, as well as languages like Clean and Mercury, each implementing core functional concepts such as pure functions and lazy or strict evaluation strategies.[98]

Haskell stands out as a purely functional language with lazy evaluation by default, where expressions are only computed when needed, enabling efficient composition of functions without unnecessary computations.[97] Its purity ensures that functions have no side effects outside controlled structures like monads, treating them as mathematical mappings from inputs to outputs, as exemplified by type signatures like square :: Int -> Int.[97] Haskell introduces type classes for ad-hoc polymorphism, allowing flexible overloading of operations, and monads to encapsulate effects like input/output in a controlled, composable manner, such as in the IO monad for handling external interactions.[97]
Other notable pure functional languages include Elm, tailored for web frontend development with strict evaluation and enforced immutability to prevent runtime exceptions.[99] Elm's type system provides compile-time error detection, and its Elm Architecture standardizes functional updates via pure functions, contributing to high performance in virtual DOM rendering.[100] Idris advances purity through dependent types, where types can depend on values to express precise specifications, supported by totality checking to guarantee termination of functions.[101] This type-driven approach facilitates formal verification within a purely functional framework.[102] Clean uses uniqueness typing to enable efficient strict evaluation while maintaining purity, and Mercury combines logic programming with functional features in a strictly pure, statically typed environment.[98]
Multi-paradigm languages with FP features
Multi-paradigm languages often incorporate functional programming (FP) features to enhance expressiveness, safety, and composability while retaining compatibility with imperative or object-oriented paradigms. These integrations allow developers to leverage FP concepts like immutability and higher-order functions within established ecosystems, facilitating gradual adoption without requiring a full shift to pure FP languages.[103] Languages from the Lisp and ML families, such as Scheme, Clojure, Standard ML (SML), and OCaml, exemplify this by supporting functional styles alongside imperative elements.

Among Lisp variants, Scheme supports functional programming through a minimalist design featuring lexical scoping for predictable variable binding and first-class procedures that treat functions as values to support higher-order abstractions, though it allows side effects and imperative constructs. Tail-call optimization ensures that recursive functions, a staple of functional style, execute efficiently without stack overflow, aligning with Scheme's small core defined in standards like R5RS and R7RS.[104] Clojure, another Lisp dialect, prioritizes immutability through persistent data structures like vectors and maps, which facilitate functional transformations while running on the JVM for seamless integration with Java libraries.[105] This design promotes functional purity by minimizing mutable state, enabling robust concurrency via immutable data sharing and software transactional memory, but permits mutable state when needed.[105]

The ML family includes Standard ML (SML), a strict functional language with a strong static type system and inference, though it supports imperative features like mutable references.[106] SML's module system supports large-scale structuring of functions and abstractions, as formalized in the Definition of Standard ML '97.[107] OCaml extends this foundation with strict evaluation and pattern matching for concise data handling, supporting a functional core alongside optional object-oriented and imperative features that allow mutable state.[108] Its modules and garbage collection further aid in building reliable programs with functional elements.[108]

Scala is a statically typed language that runs on the Java Virtual Machine (JVM), blending object-oriented and functional paradigms with features such as immutable collections and higher-kinded types. Immutable collections in Scala, like List and Vector, encourage data persistence and thread safety by preventing in-place mutations, aligning with FP principles of referential transparency.[103] Higher-kinded types enable abstraction over type constructors, supporting advanced FP patterns such as monads and functors, which are essential for composable code in libraries like Cats.[109]
JavaScript, primarily an imperative and prototype-based language, gained significant FP capabilities through ECMAScript 6 (ES6) and later standards, including arrow functions and array methods like map and reduce. Arrow functions provide concise syntax for anonymous functions, preserving the lexical this binding and promoting higher-order programming by facilitating callbacks and functional composition.[110] Methods such as map, filter, and reduce enable declarative data transformations without explicit loops; each returns a new value rather than mutating its receiver, so arrays can be processed as FP-style pipelines. For stricter immutability, libraries like Immutable.js offer persistent data structures, such as Map and List, which return new instances on updates to avoid side effects.
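The pipeline style these methods support carries over directly to other languages with first-class functions; here is the same idea rendered in Scala (the language used for all examples in this section), with illustrative data and a hypothetical tax rate:

```scala
// Each step returns a new collection; prices itself is never modified.
val prices = List(19.99, 5.00, 42.50, 3.25)

val total = prices
  .filter(_ > 10.0)       // analogous to JavaScript's Array filter
  .map(p => p * 1.08)     // analogous to map; 1.08 is a hypothetical tax rate
  .foldLeft(0.0)(_ + _)   // analogous to reduce

@main def pipeline(): Unit = println(f"$total%.2f") // 67.49
```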
Python, a dynamically typed multi-paradigm language, supports FP through built-in constructs like lambda functions, list comprehensions, and modules in the standard library. Lambda expressions allow inline anonymous functions for simple operations, often used with higher-order functions like map and filter to process iterables functionally.[111] List comprehensions provide a concise, declarative alternative to loops, generating lists via expressions that iterate over iterables while supporting filtering and transformations. Functional modules such as functools (for partial application and decorators) and itertools (for iterator tools like chain and groupby) extend these capabilities, enabling efficient, iterator-based FP without mutable state.
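For comparison, Scala offers close analogues of these Python constructs; the sketch below pairs a for-comprehension with comprehension-style filtering and a curried function standing in for functools.partial (all names are illustrative):

```scala
// For-comprehension with a guard, analogous to the Python list comprehension
// [n * n for n in range(1, 11) if n % 2 == 0].
val squaresOfEvens = for (n <- 1 to 10 if n % 2 == 0) yield n * n

// Partial application via currying, analogous to functools.partial.
def log(level: String)(msg: String): Unit = println(s"[$level] $msg")
val warn: String => Unit = log("WARN")

@main def pyStyle(): Unit = {
  println(squaresOfEvens.toList) // List(4, 16, 36, 64, 100)
  warn("disk nearly full")       // [WARN] disk nearly full
}
```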
Rust, a systems language emphasizing memory safety, integrates FP elements like pattern matching and functional traits within its ownership model, which enforces immutability by default through borrowing rules. Pattern matching via match expressions allows exhaustive decomposition of algebraic data types, promoting safe and expressive control flow akin to FP languages.[112] Functional traits, such as Iterator and Fn, support higher-order functions and closures, enabling composable algorithms like iterators for lazy evaluation despite the strict ownership constraints that prevent data races.[113][114] This combination allows FP patterns in performance-critical code while upholding Rust's borrow checker for concurrency safety.[115]
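The combination Rust offers, exhaustive matching over algebraic data types plus lazily composed iterators, can be sketched in Scala as follows (Event and its cases are illustrative):

```scala
// A closed (sealed) algebraic data type: the compiler checks match exhaustiveness.
sealed trait Event
final case class KeyPress(c: Char)     extends Event
final case class Click(x: Int, y: Int) extends Event
case object Quit                       extends Event

def describe(e: Event): String = e match {
  case KeyPress(c) => s"key '$c'"
  case Click(x, y) => s"click at ($x, $y)"
  case Quit        => "quit"
}

@main def events(): Unit = {
  println(describe(Click(3, 4)))
  // Lazy iterator chain, comparable to Rust's Iterator adapters:
  // nothing is computed until toList forces the first three squares.
  println(Iterator.from(1).map(n => n * n).take(3).toList) // List(1, 4, 9)
}
```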
Applications and impact
In industry and software engineering
Functional programming has found significant adoption in the financial sector, particularly for building high-reliability trading systems. Jane Street Capital, a major quantitative trading firm, uses OCaml extensively for its core trading infrastructure, research tools, and systems software, leveraging the language's strong static typing and emphasis on immutability to minimize runtime errors and keep systems robust under high-stakes conditions. This approach has enabled the firm to rewrite and maintain large-scale trading applications since 2005, where the immutability and referential transparency of functional constructs reduce the risk of subtle bugs that could lead to financial losses.[116][117][118]

In web development, functional principles are increasingly integrated into scalable server-side and client-side architectures. The Elixir programming language, built on the Erlang VM, powers the Phoenix framework, which is used by companies like Discord and Pinterest to handle millions of concurrent connections in real-time applications such as chat services and content feeds. Phoenix's functional design, emphasizing immutable data and lightweight processes, facilitates fault-tolerant, horizontally scalable web servers that maintain performance during traffic spikes. On the client side, React's shift toward functional components, enabled by Hooks (introduced in 2018), has become the industry standard for building interactive UIs at scale, as seen in applications by Netflix and Facebook, where pure functions and immutability simplify state management and component reuse.[119][120][121]

For big data processing, functional paradigms underpin pipelines in distributed systems like Apache Spark and Apache Flink, enabling efficient, declarative transformations over massive datasets. Spark's API, inspired by functional programming, uses higher-order functions such as map, filter, and reduce to process petabyte-scale data in batch and streaming modes, as deployed by organizations like Uber for real-time analytics and fraud detection. Similarly, Flink employs functional stream-processing operators for low-latency event handling in production environments at companies such as Alibaba, where its stateful computations over unbounded streams support e-commerce recommendation systems. These tools promote composable, side-effect-free workflows that streamline data engineering tasks.[122][123]
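As a concrete example of this style, the canonical Spark word count is a chain of higher-order functions over a distributed collection. The sketch below uses Spark's Scala RDD API with a local master and a placeholder input path, and assumes a spark-core/spark-sql dependency on the classpath:

```scala
import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; a real deployment would target a cluster.
    val spark = SparkSession.builder().appName("WordCount").master("local[*]").getOrCreate()

    val counts = spark.sparkContext
      .textFile("input.txt")        // placeholder path
      .flatMap(_.split("\\s+"))     // lines -> words
      .map(word => (word, 1))       // word -> (word, 1)
      .reduceByKey(_ + _)           // functional aggregation per key

    counts.collect().foreach(println)
    spark.stop()
  }
}
```

Because every step is a side-effect-free transformation, Spark is free to partition, reorder, and re-execute the work across machines without changing the result.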
A key benefit driving this industry adoption is the reduction in bugs through immutability and pure functions, which eliminate shared mutable state and side effects that often cause concurrency issues and heisenbugs in imperative codebases. For instance, studies and practitioner reports indicate that functional languages like OCaml can catch certain error classes at compile time via type checking, significantly lowering defect rates in safety-critical systems. Additionally, pure functions are inherently easier to test, as they produce deterministic outputs for given inputs without external dependencies, allowing for rapid unit testing and property-based verification that accelerates development cycles and improves code maintainability in large teams.[10][118]
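A small Scala illustration of that testability: the function below is pure, so a test needs no fixtures or mocks, and simple properties can be checked by brute enumeration (libraries such as ScalaCheck automate this with random generation); applyVat and its inputs are illustrative:

```scala
// Pure: the result depends only on the arguments, with no external state.
def applyVat(net: BigDecimal, rate: BigDecimal): BigDecimal =
  net * (BigDecimal(1) + rate)

@main def testVat(): Unit = {
  // Deterministic example-based test: same inputs always give the same output.
  assert(applyVat(BigDecimal(100), BigDecimal("0.20")) == BigDecimal(120))

  // Hand-rolled property test: a zero rate is the identity for any amount.
  for (n <- 0 to 10000) {
    val net = BigDecimal(n)
    assert(applyVat(net, BigDecimal(0)) == net)
  }
  println("all checks passed")
}
```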
