List of numerical libraries
This is a list of numerical libraries, which are libraries used in software development for performing numerical calculations. It is not a complete listing but is instead a list of numerical libraries with articles on Wikipedia, with few exceptions.
The choice of library depends on a range of requirements, such as desired features (e.g. large-dimensional linear algebra, parallel computation, partial differential equations), licensing, readability of the API, portability or platform/compiler dependence (e.g. Linux, Windows, Visual C++, GCC), performance, ease of use, continued support from developers, standards compliance, specialized optimization in code for specific application scenarios, or even the size of the code-base to be installed.
Multi-language
- ALGLIB is an open source numerical analysis library which may be used from C++, C#, FreePascal, Delphi, and VBA.
- ArrayFire is a high performance open source software library for parallel computing with an easy-to-use API.
- IMSL Numerical Libraries are libraries of numerical analysis functionality implemented in standard programming languages like C, Java, C# .NET, Fortran, and Python.
- The NAG Library is a collection of mathematical and statistical routines for multiple programming languages (C, C++, Fortran, Visual Basic, Java, Python and C#) and packages (MATLAB, Excel, R, LabVIEW).
- GNU Octave is an open source high-level programming language and library, including a command-line interface and GUI, analogous to commercial alternatives such as Maple, MATLAB, and Mathematica. Its APIs, functions and libraries can be called from many platforms, including high-level engineering programs, where functions are, in many cases, seamlessly interpreted and integrated in a similar fashion to MATLAB. It can also be used in batch mode.
- librsb is an open source library for high performance sparse matrix computations providing multi-threaded primitives to build iterative solvers (implements also the Sparse BLAS standard). It can be used from C, C++, Fortran, and a dedicated GNU Octave package.
C
- BLOPEX (Block Locally Optimal Preconditioned Eigenvalue Xolvers) is an open-source library for the scalable (parallel) solution of eigenvalue problems.
- Fastest Fourier Transform in the West (FFTW) is a software library for computing Fourier and related transforms.
- GNU Scientific Library, a popular, free numerical analysis library implemented in C.
- GNU Multi-Precision Library is a library for doing arbitrary-precision arithmetic.
- hypre (High Performance Preconditioners) is an open-source library of routines for scalable (parallel) solution of linear systems and preconditioning.
- LabWindows/CVI is an ANSI C IDE that includes built-in libraries for analysis of raw measurement data, signal generation, windowing, filter functions, signal processing, linear algebra, array and complex operations, curve fitting and statistics.
- Lis is a scalable parallel library for solving systems of linear equations and eigenvalue problems using iterative methods.
- Intel MKL (Math Kernel Library) contains optimized math routines for science, engineering, and financial applications, and is written in C/C++ and Fortran. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math.
- Intel IPP is a multi-threaded software library of functions for multimedia and data processing applications.
- OpenBLAS is an open source implementation of the BLAS API with many hand-crafted optimizations for specific processor types. Its performance is similar to Intel MKL's on Intel processors and higher on various other processors.
- Portable, Extensible Toolkit for Scientific Computation (PETSc), is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.
- SLEPc (Scalable Library for Eigenvalue Problem Computations) is a PETSc-based open-source library for the scalable (parallel) solution of eigenvalue problems.
- UMFPACK is a library for solving sparse linear systems, written in ANSI C. It is the backend for sparse matrices in MATLAB and SciPy.
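Several of the libraries above (Lis, hypre, PETSc) exist to solve large linear systems with iterative methods. As a language-neutral illustration of what such a solver does, here is a minimal conjugate-gradient iteration in pure Python for a small symmetric positive-definite system; production libraries add preconditioning, sparse storage, and parallelism on top of this core loop.

```python
# Minimal conjugate-gradient solver for a symmetric positive-definite
# system A x = b, illustrating the kind of iterative method that
# libraries such as Lis, PETSc, and hypre implement at scale.
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x (x = 0 initially)
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # symmetric positive definite
b = [1.0, 2.0]
x = conjugate_gradient(A, b)      # exact solution is (1/11, 7/11)
```

In exact arithmetic conjugate gradient converges in at most n iterations for an n-by-n system, which is why it scales to the very large sparse problems these libraries target.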
C++
- Adept is a combined automatic differentiation and array library.
- Advanced Simulation Library is free and open source hardware accelerated multiphysics simulation software with an OpenCL-based internal computational engine.
- ALGLIB is an open source / commercial numerical analysis library with a C++ version.
- Armadillo is a C++ linear algebra library (matrix and vector maths), aiming towards a good balance between speed and ease of use.[1] It employs template classes, and has optional links to BLAS and LAPACK. The syntax (API) is similar to MATLAB.
- Blitz++ is a high-performance vector mathematics library written in C++.
- Boost.uBLAS is a set of C++ libraries for numerical computation.
- deal.II is a library supporting the finite element solution of partial differential equations.
- Dlib is a modern C++ library with easy to use linear algebra and optimization tools which benefit from optimized BLAS and LAPACK libraries.
- Eigen is a vector mathematics library with performance comparable to Intel's Math Kernel Library.
- Hermes Project: C++/Python library for rapid prototyping of space- and space-time adaptive hp-FEM solvers.
- IML++ is a C++ library for solving linear systems of equations, capable of dealing with dense, sparse, and distributed matrices.
- IT++ is a C++ library for linear algebra (matrices and vectors), signal processing and communications. Functionality similar to MATLAB and Octave.
- LAPACK++, a C++ wrapper library for LAPACK and BLAS
- MFEM is a free, lightweight, scalable C++ library for finite element methods.
- Intel MKL, Intel Math Kernel Library (in C and C++), a library of optimized math routines for science, engineering, and financial applications, written in C/C++ and Fortran. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math.
- mlpack is an open-source library for machine learning, exploiting C++ language features to provide maximum performance and flexibility while providing a simple and consistent API
- MTL4 is a generic C++ template library providing sparse and dense BLAS functionality. MTL4 establishes an intuitive interface (similar to MATLAB) and broad applicability thanks to Generic programming.
- The NAG Library has a C++ API.
- NTL is a C++ library for number theory.
- OpenFOAM is an open-source C++ library for solving partial differential equations in computational fluid dynamics (CFD).
- SU2 code is an open-source library for solving partial differential equations with the finite volume or finite element method.
- Trilinos is an effort to develop algorithms and enabling technologies for the solution of large-scale, complex multi-physics engineering and scientific problems. It is a collection of packages.
- Template Numerical Toolkit (TNT) linear algebra software in the public domain and entirely in the form of headers, from NIST. TNT was originally presented as a successor to Lapack++, Sparselib++, and IML++.[2]
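Many of the C++ libraries above (Armadillo, Eigen, IML++, LAPACK++) expose dense eigenvalue routines. As a language-neutral sketch of the simplest such algorithm, the following pure-Python power iteration estimates the dominant eigenvalue of a matrix; the libraries themselves use far more robust QR-based and Krylov methods internally.

```python
# Power iteration: the simplest eigenvalue algorithm, shown in pure
# Python to illustrate what dense eigensolvers compute. Repeatedly
# multiplying by A and normalizing drives v toward the dominant
# eigenvector; the Rayleigh quotient then estimates the eigenvalue.
def power_iteration(A, iters=200):
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * Av[i] for i in range(n))   # Rayleigh quotient

A = [[2.0, 0.0], [0.0, 1.0]]
lam = power_iteration(A)          # dominant eigenvalue of A is 2
```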
Delphi
- ALGLIB is an open source numerical analysis library with a Delphi version.
.NET Framework languages C#, F#, VB.NET and PowerShell
- Accord.NET is a collection of libraries for scientific computing, including numerical linear algebra, optimization, statistics, artificial neural networks, machine learning, signal processing and computer vision. LGPLv3, partly GPLv3.
- AForge.NET is a computer vision and artificial intelligence library. It implements a number of genetic, fuzzy logic and machine learning algorithms with several architectures of artificial neural networks with corresponding training algorithms. LGPLv3 and partly GPLv3.
- ALGLIB is an open source numerical analysis library with C# version. Dual licensed: GPLv2+, commercial license.
- ILNumerics.Net provides commercial high-performance, typesafe numerical array classes and functions for general math, FFT and linear algebra. It targets .NET/Mono, 32- and 64-bit, with script-like syntax in C#, 2D and 3D plot controls, and efficient memory management.
- IMSL Numerical Libraries have a C# version (commercially licensed). The IMSL .NET library reached end of life at the end of 2020.
- Math.NET Numerics aims to provide methods and algorithms for numerical computations in science, engineering and everyday use. Covered topics include special functions, linear algebra, probability models, random numbers, interpolation, integral transforms and more. Free software under MIT/X11 license.
- Measurement Studio is a commercial integrated suite of UI controls and class libraries for use in developing test and measurement applications. The analysis class libraries provide various digital signal processing, signal filtering, signal generation, peak detection, and other general mathematical functionality.
- ML.NET is a free software machine learning library for the C# programming language.[3][4]
- The NAG Library has C# API. Commercially licensed.
- NMath by CenterSpace Software: Commercial numerical component libraries for the .NET platform, including signal processing (FFT) classes, a linear algebra (LAPACK & BLAS) framework, and a statistics package.
Fortran
- BLAS (Basic Linear Algebra Subprograms) is a de facto application programming interface standard for publishing libraries to perform basic linear algebra operations such as vector and matrix multiplication.
- CERNLIB is a collection of FORTRAN 77 libraries and modules.
- EISPACK is a software library for numerical computation of eigenvalues and eigenvectors of matrices,[5] written in FORTRAN. It contains subroutines for calculating the eigenvalues of nine classes of matrices: complex general, complex Hermitian, real general, real symmetric, real symmetric banded, real symmetric tridiagonal, special real tridiagonal, generalized real, and generalized real symmetric matrices.
- IMSL Numerical Libraries are cross-platform libraries containing a comprehensive set of mathematical and statistical functions that can be embedded in a user's application.
- Harwell Subroutine Library is a collection of Fortran 77 and 95 codes that address core problems in numerical analysis.
- LAPACK,[6][7] the Linear Algebra PACKage, is a software library for numerical computing originally written in FORTRAN 77 and now written in Fortran 90.
- LINPACK is a software library for performing numerical linear algebra on digital computers.[8][9][10] It was written in Fortran by Jack Dongarra, Jim Bunch, Cleve Moler, and Pete Stewart, and was intended for use on supercomputers in the 1970s and early 1980s. It has been largely superseded by LAPACK, which will run more efficiently on modern architectures.
- Lis is a scalable parallel library for solving systems of linear equations and eigenvalue problems using iterative methods.
- MINPACK is a library of FORTRAN subroutines for solving systems of nonlinear equations, or the least squares minimization of the residual of a set of linear or nonlinear equations.
- The NAG Fortran Library is a collection of mathematical and statistical routines for Fortran.
- NOVAS is a software library for astrometry-related numerical computations. Both Fortran and C versions are available.
- Netlib is a repository of scientific computing software which contains a large number of separate programs and libraries including BLAS, EISPACK, LAPACK and others.
- PAW is a free data analysis package developed at CERN.
- Portable, Extensible Toolkit for Scientific Computation (PETSc), is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.
- QUADPACK is a FORTRAN 77 library for numerical integration of one-dimensional functions
- SLATEC is a FORTRAN 77 library of over 1400 general purpose mathematical and statistical routines.
- SOFA is a collection of subroutines that implement official IAU algorithms for astronomical computations. Both Fortran and C versions are available.
- ARPACK is a collection of Fortran 77 subroutines designed to solve large-scale eigenvalue problems.
- BLIS is an open-source framework for implementing a superset of BLAS functionality for specific processor types that was awarded the J. H. Wilkinson Prize for Numerical Software.
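LAPACK's dense linear-system drivers are built on triangular factorization followed by substitution. As a compact illustration of that pipeline, the following pure-Python sketch performs a Doolittle LU factorization and then forward/backward substitution; real LAPACK routines add partial pivoting and blocked, cache-friendly kernels.

```python
# Sketch of the LU-based dense solve that LAPACK routines perform:
# factor A into unit-lower-triangular L and upper-triangular U with
# A = L U, then solve L y = b (forward) and U x = y (backward).
# No pivoting here for brevity; LAPACK pivots for numerical stability.
def lu_solve(A, b):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):                     # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                 # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    y = [0.0] * n                                 # forward: L y = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n                                 # backward: U x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

x = lu_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])   # solution (0.8, 1.4)
```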
Java
- Apache Commons is an open-source project for creating reusable Java components. It includes numerical packages for linear algebra and non-linear optimization.
- Colt provides a set of Open Source Libraries for High Performance Scientific and Technical Computing.
- Efficient Java Matrix Library (EJML) is an open-source linear algebra library for manipulating dense matrices.
- JAMA, a numerical linear algebra toolkit for the Java programming language. No active development has taken place since 2005, but it is still one of the more popular linear algebra packages in Java.
- Jblas: Linear Algebra for Java, a linear algebra library which is an easy to use wrapper around BLAS and LAPACK.
- Parallel Colt is an open source library for scientific computing. A parallel extension of Colt.
- Matrix Toolkit Java is a linear algebra library based on BLAS and LAPACK.
- ojAlgo is an open source Java library for mathematics, linear algebra and optimisation.
- exp4j is a small Java library for evaluation of mathematical expressions.
- SuanShu is an open-source Java math library. It supports numerical analysis, statistics and optimization.
- Maja is an open-source Java library focusing primarily on correct implementations of various special functions.
OCaml
- The OCaml programming language has support for array programming in the standard library, including the Bigarray module for multi-dimensional numerical arrays with both C and Fortran layout options. Comprehensive support for numerical computation is provided by the Owl Scientific Computing library, which provides methods for statistics, linear algebra (using OpenBLAS), differential equations, algorithmic differentiation, the fast Fourier transform, and deep neural networks.[11] Other numerical libraries in OCaml include Lacaml, which interfaces the BLAS and LAPACK Fortran/C libraries, and L-BFGS-ocaml (OCaml bindings for L-BFGS). For visualization there are libraries for plotting using PLplot, gnuplot or matplotlib.
Perl
- Perl Data Language gives standard Perl the ability to compactly store and speedily manipulate large N-dimensional data arrays.[12] It can perform complex-number and matrix maths, and has interfaces for the GNU Scientific Library, LINPACK, PROJ, and plotting with PGPLOT. There are libraries on CPAN adding support for the linear algebra library LAPACK,[13] the Fourier transform library FFTW,[14] and plotting with gnuplot[15] and PLplot.[16]
Python
- NumPy, a BSD-licensed library that adds support for the manipulation of large, multi-dimensional arrays and matrices; it also includes a large collection of high-level mathematical functions. NumPy serves as the backbone for a number of other numerical libraries, notably SciPy, and is the de facto standard for matrix/tensor operations in Python.
- Pandas, a library for data manipulation and analysis.
- SageMath is a large mathematical software application which integrates the work of nearly 100 free software projects and supports linear algebra, combinatorics, numerical mathematics, calculus, and more.[17]
- SciPy,[18][19][20] a large BSD-licensed library of scientific tools. De facto standard for scientific computations in Python.
- ScientificPython, a library with a different set of scientific tools
- SymPy, a New BSD-licensed library for symbolic computation. Features of SymPy range from basic symbolic arithmetic to calculus, algebra, discrete mathematics and quantum physics.
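To make the Python entries concrete, here is a short example of the vectorized array math and LAPACK-backed linear solve that NumPy provides; it assumes NumPy is installed (`pip install numpy`).

```python
# NumPy in a few lines: build arrays, solve a dense linear system via
# the LAPACK-backed solver, and check the residual with vectorized math.
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)          # exact solution is (2, 3)
residual = np.linalg.norm(A @ x - b)
```

SciPy builds on exactly these arrays, adding sparse matrices, optimization, integration, and signal processing on top.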
Others
- XNUMBERS – multi-precision floating-point computing and numerical methods for Microsoft Excel.
- INTLAB – interval arithmetic library for MATLAB.[21][22][23][24]
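INTLAB's core idea, interval arithmetic, is easy to sketch: every quantity is an [lo, hi] pair, and each operation returns an interval guaranteed to contain the exact result. The minimal Python class below (an illustrative sketch, not INTLAB's API) shows addition and multiplication; a rigorous implementation such as INTLAB additionally controls the floating-point rounding direction, which this sketch omits.

```python
# Minimal interval-arithmetic sketch: operations on [lo, hi] pairs
# that enclose every possible exact result of the operation.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's range is bounded by the four endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

a = Interval(1.0, 2.0)
b = Interval(-3.0, 0.5)
c = a * b      # contains every product x*y with x in a, y in b: [-6, 1]
```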
References
[edit]- ^ Sanderson, C., & Curtin, R. (2016). Armadillo: a template-based C++ library for linear algebra. Journal of Open Source Software, 1(2), 26.
- ^ Pozo, Roldan (1997). "Template Numerical Toolkit for Linear Algebra: High Performance Programming With C++ and the Standard Template Library". The International Journal of Supercomputer Applications and High Performance Computing. 11 (3). Association for Computing Machinery: 251–263. doi:10.1177/109434209701100307. Retrieved 15 October 2024.
- ^ David Ramel (2018-05-08). "Open Source, Cross-Platform ML.NET Simplifies Machine Learning -- Visual Studio Magazine". Visual Studio Magazine. Retrieved 2018-05-10.
- ^ Kareem Anderson (2017-05-09). "Microsoft debuts ML.NET cross-platform machine learning framework". On MSFT. Retrieved 2018-05-10.
- ^ Smith, B. T., Boyle, J. M., Garbow, B. S., Ikebe, Y., Klema, V. C., & Moler, C. B. (2013). Matrix eigensystem routines-EISPACK guide (Vol. 6). Springer.
- ^ Anderson, E., Bai, Z., Bischof, C., Blackford, S., Dongarra, J., Du Croz, J., ... & Sorensen, D. (1999). LAPACK Users' guide (Vol. 9). SIAM.
- ^ Demmel, J. (1989, December). LAPACK: A portable linear algebra library for supercomputers. In IEEE Control Systems Society Workshop on Computer-Aided Control System Design (pp. 1-7). IEEE.
- ^ Dongarra, J. J., Moler, C. B., Bunch, J. R., & Stewart, G. W. (1979). LINPACK users' guide. Society for Industrial and Applied Mathematics.
- ^ Dongarra, J. J., Luszczek, P., & Petitet, A. (2003). The LINPACK benchmark: past, present and future. Concurrency and Computation: practice and experience, 15(9), 803-820.
- ^ Dongarra, J. J. (1987, June). The LINPACK benchmark: An explanation. In International Conference on Supercomputing (pp. 456-474). Springer, Berlin, Heidelberg.
- ^ "Owl Online Tutorial". Owl Online Tutorial. Retrieved 2025-02-09.
- ^ "Perl Data Language - metacpan.org". July 26, 2021.
- ^ "PDL::LinearAlgebra - Linear Algebra utils for PDL - metacpan.org". July 26, 2021.
- ^ "PDL::FFTW3 - PDL interface to the Fastest Fourier Transform in the West - metacpan.org". July 26, 2021.
- ^ "PDL::Graphics::Gnuplot - Gnuplot-based plotting for PDL - metacpan.org". July 26, 2021.
- ^ "PDL::Graphics::PLplot - Object-oriented interface from perl/PDL to the PLPLOT plotting library - metacpan.org". July 26, 2021.
- ^ Zimmermann, P., Casamayou, A., Cohen, N., Connan, G., Dumont, T., Fousse, L., ... & Bray, E. (2018). Computational Mathematics with SageMath. SIAM.
- ^ Jones, E., Oliphant, T., & Peterson, P. (2001). SciPy: Open source scientific tools for Python.
- ^ Bressert, E. (2012). SciPy and NumPy: an overview for developers. " O'Reilly Media, Inc.".
- ^ Blanco-Silva, F. J. (2013). Learning SciPy for numerical and scientific computing. Packt Publishing Ltd.
- ^ S.M. Rump: INTLAB – INTerval LABoratory. In Tibor Csendes, editor, Developments in Reliable Computing, pages 77–104. Kluwer Academic Publishers, Dordrecht, 1999.
- ^ Moore, R. E., Kearfott, R. B., & Cloud, M. J. (2009). Introduction to Interval Analysis. Society for Industrial and Applied Mathematics.
- ^ Rump, S. M. (2010). Verification methods: Rigorous results using floating-point arithmetic. Acta Numerica, 19, 287–449.
- ^ Hargreaves, G. I. (2002). Interval analysis in MATLAB. Numerical Algorithms, (2009.1).
External links
- The Math Forum – Math Libraries, an extensive list of mathematical libraries with short descriptions
Multi-language Libraries
Core Mathematical and Linear Algebra Libraries
Core mathematical and linear algebra libraries form the foundational layer for numerical computations in multi-language environments, providing essential routines for matrix operations, vector manipulations, and linear system solving that can be accessed across various programming paradigms. These libraries emphasize portability and efficiency, often integrating with established standards like BLAS and LAPACK to ensure compatibility with high-performance computing setups. By offering interfaces in multiple languages, they enable developers to perform core operations without reinventing basic algorithms, supporting applications from scientific simulations to data processing.

ALGLIB is an open-source numerical analysis library that includes comprehensive support for linear algebra, such as dense and sparse matrix solvers, decompositions (LU, Cholesky, QR, SVD), and basic mathematical routines.[8] Developed since 1999, it supports C++, C#, Java, Python, and Delphi, with a dual licensing model featuring a free GPL edition and a commercial version for industrial use.[8] ALGLIB achieves efficiency through native implementations and compatibility with BLAS/LAPACK for dense and sparse matrices, allowing seamless integration with optimized vendor libraries.[8]

IMSL Numerical Libraries provide a commercial suite for advanced numerical analysis, encompassing over 1,000 routines for linear systems, eigenvalue computations, interpolation, and approximation.[9] Available in C, C++, Fortran, Java, .NET (including C#), and Python, these libraries are designed for robust performance in enterprise applications like machine learning and engineering simulations.[9] Key features include solvers for symmetric and general eigenproblems, as well as spline and polynomial interpolation methods, with built-in support for parallel processing via OpenMP.[10]

The NAG Library, developed by the Numerical Algorithms Group since 1976, offers a comprehensive collection of over 1,600 routines for linear algebra, fast Fourier transforms (FFT), and special functions, catering to high-precision scientific computing needs.[11] It supports interfaces for C, C++, Fortran, Java, Python, .NET, and others, ensuring broad accessibility across platforms.[11] NAG includes BLAS/LAPACK-compatible implementations for dense and sparse matrix operations, along with advanced eigensystem analysis and quadrature routines, validated through rigorous testing for reliability in critical applications.[11]

Armadillo is an open-source C++ library focused on linear algebra, featuring a high-level syntax reminiscent of MATLAB for operations on dense and sparse matrices, vectors, and cubes.[12] It provides bindings for Python via PyArmadillo and for Julia through mlpack interfaces, extending its usability beyond native C++.[13][14] Efficiency is enhanced by template meta-programming for compile-time optimizations and delayed evaluation to minimize temporary allocations, with optional integration to BLAS/LAPACK backends like OpenBLAS for accelerated decompositions and solvers.[12]

These libraries often integrate with high-level wrappers, such as NumPy in Python, to facilitate rapid prototyping while leveraging underlying multi-language cores.[12]

Optimization and Statistical Libraries
Optimization and statistical libraries provide essential tools for solving complex optimization problems and performing advanced statistical analyses, often leveraging parallel computing and specialized algorithms accessible across multiple programming languages. These libraries enable researchers and practitioners to model and solve large-scale optimization tasks, such as mixed-integer linear programming (MILP) and nonlinear programming, as well as Bayesian inference through probabilistic programming frameworks. By offering interfaces in languages like C++, Python, R, and others, they facilitate integration into diverse computational workflows without language-specific silos.

ArrayFire is a high-performance, GPU-accelerated library designed for parallel computing, particularly suited for optimization routines and image processing applications that benefit from vectorized operations on arrays.[15] It supports backends for CUDA, OpenCL, and CPU, allowing seamless execution on various hardware architectures, and has been available as open-source software with optional commercial extensions since 2014.[16] ArrayFire provides bindings for C++, Python (via afnumpy), Julia, and R, enabling developers to implement parallel algorithms for tasks like gradient-based optimization and stochastic simulations with minimal code changes across languages.[17] Its just-in-time (JIT) compilation optimizes kernel launches for efficiency in high-dimensional data processing.[17]

Gurobi Optimizer is a commercial solver renowned for tackling mixed-integer linear programming (MILP), quadratic programming, and convex optimization problems at scale.[18] It employs advanced barrier methods for continuous problems and branch-and-cut algorithms for mixed-integer cases, enabling the solution of models with millions of variables and constraints. Gurobi offers native interfaces for Python (gurobipy), Java, C++, MATLAB, R, and .NET, supporting rapid prototyping and deployment in mixed-language environments.[18] These interfaces allow users to define models using high-level APIs while accessing low-level tuning parameters for performance-critical applications.[19]

IPOPT (Interior Point OPTimizer) is an open-source software package for large-scale nonlinear optimization, implementing a primal-dual interior-point algorithm to find local solutions to constrained problems.[20] Developed under the COIN-OR project since 2001, it is written in C++ and includes bindings for Python (via cyipopt), Java, Julia (Ipopt.jl), and Fortran, promoting its use in multi-language scientific computing pipelines.[21] IPOPT supports filter line-search methods for handling nonlinear constraints and objectives, with options for warm starts and Hessian approximations to accelerate convergence in iterative solvers.[22] It often integrates with core linear algebra libraries like BLAS for efficient matrix operations during factorization steps.[23]

Stan is a probabilistic programming library for Bayesian statistical inference, allowing users to specify complex hierarchical models and perform posterior sampling using Markov chain Monte Carlo (MCMC) methods.[24] Built with a C++ core since its initial release in 2012, Stan provides interfaces for the Stan modeling language, Python (PyStan or CmdStanPy), R (rstan), and Julia, enabling seamless model development and inference across ecosystems.[25] It employs Hamiltonian Monte Carlo sampling, enhanced by the No-U-Turn Sampler (NUTS), for efficient exploration of posterior distributions without manual tuning of hyperparameters. Stan's automatic differentiation capabilities compute gradients for the log-probability function, supporting scalable inference in models with thousands of parameters.

Low-level Programming Languages
C Libraries
C libraries form the foundation for many numerical computing tasks due to their efficiency, portability across platforms, and straightforward integration with other languages through foreign function interfaces. These libraries typically expose procedural APIs without relying on object-oriented constructs, making them ideal for performance-critical applications in scientific simulations, data analysis, and engineering. Key examples include comprehensive toolkits for general-purpose computations and specialized solvers for optimization problems.

The GNU Scientific Library (GSL) is a free, open-source numerical library implemented in C, designed for scientific and mathematical computations. First released in 1996, it is distributed under the GNU General Public License version 3 or later.[26] GSL encompasses over 1,000 functions spanning diverse domains, including random number generation with multiple algorithms like the Mersenne Twister, numerical integration via adaptive quadrature methods, and statistical tools for distributions and hypothesis testing.[27] For linear algebra, it provides structures such as gsl_matrix for dense matrix operations, supporting basic arithmetic, decompositions, and eigenvalue computations. Version 2.8, released on May 25, 2024, includes enhancements such as improved support for C11 complex types and matrix operations, along with numerous bug fixes.[26]
FFTPACK is a public-domain library offering efficient routines for computing fast Fourier transforms (FFTs), originally developed by Paul N. Swarztrauber at the National Center for Atmospheric Research (NCAR) in the 1980s.[28] It supports one-dimensional and multidimensional FFTs for complex, real, sine, cosine, and quarter-wave symmetric data, making it suitable for signal processing and geophysical modeling.[28] While primarily written in Fortran, C wrappers and ports exist to enable direct use in C environments, preserving its high performance on periodic and symmetric sequences.[29] The library's routines, such as cfft for complex FFTs and sint for sine transforms, have been widely adopted in legacy scientific codes due to their reliability and lack of licensing restrictions.[28]
FFTW (Fastest Fourier Transform in the West) is a comprehensive, open-source C library for computing discrete Fourier transforms (DFTs) in one or multiple dimensions, for both real and complex data, and of arbitrary input size. Developed starting in 1997 by Matteo Frigo and Steven G. Johnson, it is distributed under the GNU General Public License (GPL) or a proprietary license for commercial use.[30] FFTW features advanced performance optimizations, including SIMD instructions, cache-friendly blocking, and multithreading support, making it suitable for high-performance computing applications in signal processing, image analysis, and physics simulations.[31] Its planner mechanism automatically selects the fastest algorithm based on input characteristics, achieving near-optimal performance across various hardware platforms.[32]
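The transform that FFTPACK and FFTW compute can be stated compactly. The sketch below writes the discrete Fourier transform as its naive O(n²) definition in Python for clarity; the libraries produce the same result with O(n log n) FFT algorithms and heavy hardware-specific tuning.

```python
# Naive O(n^2) discrete Fourier transform, for illustration only:
# X[j] = sum_k x[k] * exp(-2*pi*i*j*k/n). FFT libraries compute the
# same values in O(n log n) time.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

X = dft([1.0, 0.0, 0.0, 0.0])   # a unit impulse has a flat spectrum
```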
GSL and similar libraries can be interfaced with higher-level languages, such as Python, using mechanisms like ctypes for seamless integration in scripting environments.[33]
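The ctypes pattern mentioned above is shown below with the C standard math library rather than GSL, so the snippet runs without GSL installed; calling a GSL routine works the same way once the GSL shared library is loaded and the argument and return types of the target function are declared.

```python
# Foreign-function call from Python into a C library via ctypes.
# libm's cos() stands in for a GSL routine; the loading and signature
# declaration steps are identical for any C shared library.
import ctypes
import ctypes.util
import math

# Fall back to the main process's symbols if libm is not found by name
# (on glibc systems CDLL(None) also exposes the C library).
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)
libm.cos.restype = ctypes.c_double        # declare the C signature
libm.cos.argtypes = [ctypes.c_double]

value = libm.cos(0.0)                     # calls the C function directly
```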
Fortran Libraries
Fortran has long been a cornerstone language for numerical computing due to its efficient array handling and performance in high-performance computing (HPC) environments, making it ideal for scientific simulations and large-scale data processing.[34] Numerical libraries in Fortran emphasize low-level optimizations for vector and matrix operations, often prioritizing cache efficiency to minimize data movement in memory hierarchies, which is crucial for HPC applications.[35]

The Basic Linear Algebra Subprograms (BLAS) provide a standard application programming interface (API) for fundamental vector and matrix operations in Fortran.[36] BLAS is divided into three levels: Level 1 for vector-vector operations like dot products, Level 2 for matrix-vector operations, and Level 3 for matrix-matrix operations such as general matrix multiplications.[37] Originally developed in the late 1970s as a set of 38 Fortran subroutines for basic numerical linear algebra tasks, BLAS has evolved through multiple implementations optimized for modern architectures, including OpenBLAS, a high-performance fork from the 2010s that supports multithreading and SIMD instructions.[38] These implementations enhance cache efficiency by blocking operations to improve data locality, reducing memory access latency in HPC workloads.[39] Modern BLAS routines leverage enhancements from the Fortran 2018 standard, such as improved array operations and parallelism features, to achieve better scalability on multicore systems.[40]

Building upon BLAS, the Linear Algebra Package (LAPACK) is an open-source Fortran library for advanced numerical linear algebra, serving as a successor to the earlier LINPACK and EISPACK libraries.[41] Developed starting in 1992 and written primarily in Fortran 90, LAPACK offers routines for solving systems of linear equations, eigenvalue problems, singular value decompositions, and least-squares solutions, all under a BSD-3-Clause-like license that promotes widespread adoption.[42] A key feature is its use of LU decomposition to solve linear systems of the form Ax = b, where A is factorized into lower and upper triangular matrices L and U such that A = LU, allowing efficient forward and backward substitution to find x.[43]

The Netlib Repository, hosted since the 1980s, serves as a foundational public-domain collection of Fortran routines for various numerical tasks, including optimization and interpolation.[44] Notable packages include MINPACK, a set of subroutines for solving systems of nonlinear equations and nonlinear least-squares problems using algorithms like Levenberg-Marquardt, which are essential for optimization in scientific modeling.[45] Similarly, FITPACK provides Fortran routines for curve and surface fitting with splines, supporting automatic tension determination, derivative computations, and three-dimensional curve handling to facilitate data interpolation in simulations.[46]

C wrappers like cblas.h enable BLAS and LAPACK integration with C code, extending their utility in mixed-language environments.[47]

Object-oriented Programming Languages
C++ Libraries
C++ numerical libraries leverage the language's template metaprogramming and Standard Template Library (STL) integration to provide efficient, type-safe implementations of mathematical operations, enabling expressive and performant code in object-oriented designs. These libraries often emphasize compile-time optimizations and resource acquisition is initialization (RAII) for memory management, distinguishing them from lower-level C alternatives by reducing boilerplate while maintaining high performance through native compilation. Eigen is a header-only C++ template library for linear algebra, offering support for dense and sparse matrices, vectors, numerical solvers, and geometric transformations.[48] It originated in 2006 and is licensed under the Mozilla Public License 2.0 (MPL2).[49] A key feature of Eigen is its use of expression templates, which enable lazy evaluation to avoid unnecessary temporary objects during computations, thereby improving performance by fusing operations at compile time. Eigen also provides compatibility with external standards like BLAS and LAPACK for accelerated linear algebra routines. Version 5.0.0, released on September 30, 2025, includes enhanced support for SIMD instructions, building on prior versions' features like AVX-512 and ARM NEON, to boost vectorized performance on modern hardware.[50] Boost.Math, a component of the broader Boost C++ Libraries collection, provides implementations of special mathematical functions, quaternions, octonions, and statistical distributions for numerical computations.[51] Developed since 2001 under the Boost Software License, it emphasizes precision and portability across floating-point types. The library's policy-based design allows users to customize error handling, precision, and domain policies at compile time, facilitating tailored behavior for specific applications without runtime overhead. 
IT++ is a C++ library focused on signal processing and communication system simulations, including classes for filters, modulation schemes, channel models, and error-correcting codes. Released under the GNU General Public License (GPL) and developed in the early 2000s, with the last release (version 4.3.1) in July 2013, it supports efficient simulations through optimized linear algebra and random number generation routines.[52] IT++ integrates STL containers for data handling and provides extensible frameworks for custom signal processing blocks.
Java Libraries
Java numerical libraries provide robust support for mathematical computations within the Java Virtual Machine (JVM), emphasizing portability, threading, and integration with enterprise environments. These libraries leverage Java's object-oriented features and garbage collection to handle complex numerical tasks, such as linear algebra and optimization, while maintaining compatibility across platforms. Key examples include Apache Commons Math, ND4J, and ojAlgo, each offering specialized tools for scientific and data-intensive applications.[53][54][55] Apache Commons Math is an open-source library that delivers lightweight components for algebra, statistics, and optimization, addressing common mathematical problems not natively available in the Java standard library.[53] It supports arbitrary-precision arithmetic through integration with BigDecimal, enabling high-accuracy computations for financial and scientific use cases. Released under the Apache License 2.0, the library originated in 2003 as part of the Apache Commons project and has evolved into a foundational tool for JVM-based numerical analysis.[56] A notable feature is the RealMatrix interface, which facilitates dense matrix operations like multiplication and inversion, providing efficient handling of real-valued matrices in linear algebra tasks.[57] As of November 2025, version 3.6.1 (released in 2016) is the latest stable release, with version 4.0 in beta since 2024; enhancements in 3.6.1 included bug fixes and refinements to convergence criteria in optimization algorithms, improving reliability for iterative solvers.[58] ND4J serves as a Java and Scala library for numerical computing, supporting both CPU and GPU execution on the JVM through NumPy-like n-dimensional arrays (NDArrays).[59] It is licensed under Apache 2.0 and forms a core component of the Deeplearning4j ecosystem, which emerged in the mid-2010s to enable deep learning workflows in Java environments; the latest version, 1.0.0-M2.1, was 
released on August 14, 2022.[60] ND4J's CUDA backend allows seamless acceleration of tensor operations on NVIDIA GPUs, making it suitable for handling multi-dimensional data in machine learning applications.[61] This backend supports tensor manipulations akin to those in deep learning frameworks, such as reshaping and broadcasting, while ensuring JVM compatibility.[62] ojAlgo is an open-source Java library focused on linear algebra and optimization, designed as a high-performance engine that can underpin other computational tools.[55] It provides multi-threaded implementations for matrix decompositions and solvers, emphasizing resource efficiency in pure Java environments. Released under the MIT License, with version 56.1.1 published on November 9, 2025, the library traces its development to the mid-2000s, when early versions offered dual GPL and Apache compatibility to broaden adoption.[63] The library excels in optimization tasks, including linear and quadratic programming, serving as a modular "engine for engines" in data science pipelines.[64] These libraries often integrate with big data frameworks like Hadoop for scalable numerical processing in distributed systems.[53]
.NET Framework Libraries
The .NET Framework provides a robust ecosystem for numerical computing, leveraging languages like C#, F#, and VB.NET to enable high-performance, type-safe implementations suitable for enterprise Windows development. Numerical libraries in this domain emphasize integration with the Common Language Runtime (CLR), supporting generics for reusable algorithms and asynchronous operations for scalable computations. These libraries facilitate tasks ranging from linear algebra to machine learning, often incorporating native interop for enhanced performance. Math.NET Numerics is an open-source numerical library initiated in 2009 through the merger of the dnAnalytics and Math.NET Iridium projects, providing comprehensive methods for linear algebra, statistics, optimization, and signal processing such as Fast Fourier Transforms (FFT).[65] It supports generic types for flexible matrix and vector operations, allowing developers to work with various numeric types like double and complex numbers. The DenseMatrix class implements efficient linear solvers, including LU decomposition and iterative methods like conjugate gradient, enabling solutions to systems of linear equations in scientific simulations. Licensed under the MIT License, it has been actively maintained on GitHub; version 5.0.0, released on April 3, 2022 and still the latest as of November 2025, introduced improved native interop for providers like Intel MKL and OpenBLAS to boost computational speed without managed overhead.[66][67] Accord.NET, started in 2009 as an extension to the AForge.NET framework for computer vision, has evolved into a full-featured open-source framework for scientific computing, machine learning, and multimedia processing.[68] It incorporates numerical primitives for statistics, including distributions, hypothesis testing, and kernel methods, alongside tools for audio and video analysis such as Fourier transforms and filtering.
The framework, distributed under the GNU Lesser General Public License version 2.1, supports integration with .NET's imaging and audio APIs for real-time applications.[69] Its predecessor, AForge.NET, provided foundational numerics for image processing, which Accord.NET expanded with advanced machine learning components like support vector machines and neural networks.[70] The latest stable release, version 3.8.0 from October 22, 2017 (with no further releases as of November 2025), includes optimized algorithms for pattern recognition, making it suitable for research in signal processing and computer audition. Microsoft.ML, part of the .NET ecosystem since its initial preview release in May 2018, is an open-source machine learning framework that includes numerical primitives for data transformation, feature engineering, and model training. It offers built-in support for linear algebra operations, such as vectorization and matrix manipulations, essential for algorithms like regression and clustering, and integrates with ONNX for model interoperability. Licensed under the MIT License, it enables end-to-end workflows from data loading to prediction serving, with extensibility for custom trainers.[71] Version 3.0, released in November 2023, added deep learning capabilities via TorchSharp; subsequent releases as of 2025 include additional enhancements like object detection support.[72] These libraries can reference C++ numerical routines through P/Invoke for specialized high-performance needs.
High-level and Scripting Languages
Python Libraries
Python's numerical libraries form a rich ecosystem that prioritizes ease of use, rapid prototyping, and seamless integration with other tools in data science and scientific computing, making it a preferred choice for researchers and developers seeking interpretable code without sacrificing performance.[73] NumPy serves as the foundational package for numerical computing in Python, enabling efficient handling of large, multi-dimensional arrays and matrices through its core ndarray data structure, which supports advanced indexing, broadcasting for operations on arrays of differing shapes, and a suite of mathematical functions including those for linear algebra via the linalg submodule. Released under the BSD license, NumPy originated from the merger of earlier array libraries and achieved its first stable version, 1.0, in 2006, establishing it as the bedrock for subsequent scientific Python tools.[73][74] In June 2024, NumPy 2.0 marked the first major release since 2006, delivering enhanced C API stability, relaxed type promotion rules, and improved performance for array operations to better support modern workflows. Subsequent releases, such as NumPy 2.1 in August 2024 and 2.3 in 2025, introduced further optimizations including better support for string dtypes and relaxed strides.[75][76][77]
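The broadcasting behavior and linalg routines described above can be illustrated with a short sketch (the array values here are arbitrary examples):

```python
import numpy as np

# Broadcasting: a (3, 1) column and a (4,) row combine into a (3, 4) grid
# without explicit loops or copies of the inputs.
col = np.arange(3).reshape(3, 1)   # [[0], [1], [2]]
row = np.arange(4)                 # [0, 1, 2, 3]
grid = col * 10 + row              # shapes (3, 1) and (4,) broadcast to (3, 4)

# The linalg submodule: solve a small linear system A @ x = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)          # x = [2.0, 3.0]
```

Broadcasting aligns trailing dimensions and virtually stretches length-1 axes, which is why the (3, 1) and (4,) operands above produce a (3, 4) result.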
SciPy extends NumPy's capabilities into a comprehensive framework for scientific and technical computing, incorporating modules for optimization, numerical integration, interpolation, signal processing, eigenvalue computations, ordinary differential equation solvers, and sparse linear algebra through its sparse.linalg submodule, which provides iterative methods for solving large-scale systems on sparse matrices.[78] Distributed under the BSD license, SciPy's development began in 2001, evolving into a de facto standard for applying advanced algorithms in Python with over 600 unique contributors by 2019.[79][80] As of 2025, SciPy 1.14 includes enhancements in sparse matrix handling and integration routines.[81]
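A minimal sketch of the iterative solving that sparse.linalg provides, using the conjugate gradient method on a small symmetric positive-definite system (the tridiagonal 1-D Laplacian here is an arbitrary test matrix):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Sparse tridiagonal matrix (1-D Laplacian): symmetric positive-definite,
# so conjugate gradient is guaranteed to converge.
n = 50
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# cg returns the solution and a convergence flag (0 means converged).
x, info = cg(A, b)
```

Because the matrix is stored in compressed sparse row format, each iteration costs O(n) here rather than O(n^2), which is the point of the sparse iterative solvers for large systems.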
SymPy offers a pure-Python library for symbolic mathematics, acting as a lightweight computer algebra system capable of manipulating algebraic expressions, solving equations symbolically, performing calculus operations like differentiation and integration, and simplifying mathematical constructs without numerical approximation. Initiated in 2005 and reaching its first public release in 2007, SymPy operates under the BSD license, emphasizing extensibility and integration with numerical libraries like NumPy for hybrid symbolic-numeric computations. In April 2025, SymPy 1.14 added improvements to series expansions and solving capabilities.[82][83][84]
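The symbolic differentiation, integration, and equation solving described above can be sketched as follows (the expressions are arbitrary examples):

```python
import sympy as sp

x = sp.symbols("x")

# Exact symbolic calculus, with no numerical approximation.
expr = sp.sin(x) * sp.exp(x)
deriv = sp.diff(expr, x)             # exp(x)*sin(x) + exp(x)*cos(x)
antideriv = sp.integrate(deriv, x)   # recovers exp(x)*sin(x)

# Solving an equation symbolically yields exact roots, not floats.
roots = sp.solve(sp.Eq(x**2 - 2, 0), x)   # [-sqrt(2), sqrt(2)]
```

Results stay as exact expressions (e.g. sqrt(2) rather than 1.414…), and can later be handed to NumPy for numeric evaluation via lambdify in the hybrid workflows mentioned above.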
R Libraries
R libraries for numerical computing are primarily designed for statistical analysis, data manipulation, and visualization, building on the S language heritage to support the needs of statisticians and data scientists. These packages emphasize integration with R's base functions for seamless workflows in exploratory data analysis, modeling, and simulation. Key contributions include efficient handling of matrices, acceleration of computations via lower-level languages, and practical implementations of numerical algorithms tailored to statistical applications. The Matrix package provides a comprehensive framework for dense and sparse matrices, extending R's base matrix capabilities with S4 classes and methods for linear algebra operations. It interfaces with established libraries such as LAPACK and BLAS for dense matrix computations, enabling efficient storage and manipulation of large datasets common in statistical modeling. First released in 2000, Matrix supports operations like LU decomposition via the lu() function, which factors a square matrix into lower and upper triangular components for solving linear systems.[85]
Rcpp facilitates high-performance numerical computations by providing a seamless interface between R and C++, allowing users to embed C++ code directly within R functions for speed-critical tasks. Licensed under GPL since its initial release in 2008, it maps R data types to C++ equivalents, supporting vectorized operations and custom algorithms without sacrificing R's interpretive flexibility. For linear algebra, the RcppArmadillo package extends Rcpp by integrating the Armadillo templated C++ library, offering high-level syntax for matrix manipulations while leveraging optimized backends like OpenBLAS.[86]
The pracma package delivers practical numerical mathematics routines, focusing on algorithms for root finding, interpolation, and splines that complement R's statistical tools. Available since the early 2010s, it includes implementations such as Newton-Raphson for univariate root solving and cubic spline fitting, aiding in data smoothing and optimization within statistical contexts. These functions prioritize ease of use for applied users, often drawing from MATLAB-like syntax for familiarity.[87]
Updates in R version 4.4.0, released in April 2024, enhanced numerical stability in the base system, including more graceful underflow handling in pnorm() and improved accuracy for Stirling's approximation in stirlerr(), which impacts density functions like dgamma(). Additionally, starting from R 4.2.3 (October 2022), solve.default() better manages NA or NaN values in inputs with certain LAPACK configurations, reducing errors in linear system solving. As of November 2025, R 4.5.2 is the current version, with further refinements in numerical functions such as improved precision in integrate() for certain integrands. R libraries can interface with Python ecosystems via the reticulate package for hybrid workflows.[88][89]
Perl Libraries
Perl numerical libraries primarily facilitate array-oriented computations and linear algebra within the Perl scripting environment, which is particularly advantageous for tasks involving text processing, bioinformatics, and data manipulation where seamless integration with string operations is essential. These modules extend Perl's capabilities beyond its core strengths in text handling, enabling efficient numerical processing through object-oriented interfaces and vectorized operations. Distributed via the Comprehensive Perl Archive Network (CPAN), they support a range of applications from scientific data analysis to image processing. The Perl Data Language (PDL) stands as the cornerstone for numerical computing in Perl, providing an array-oriented extension that handles large N-dimensional data arrays with high performance. Released initially in 1997 under the Artistic License or GPL, PDL introduces the "pdl" object, a versatile data structure for multidimensional arrays that supports vectorized operations known as broadcasting (formerly termed threading), allowing efficient manipulation without explicit loops.[90][91][92] It excels in domains like image processing, where functions for filtering, transformations, and display integrate natively with Perl's ecosystem.[90] Furthermore, PDL integrates with the GNU Scientific Library (GSL) through the PDL::GSL module, offering access to advanced routines for integration, differentiation, random number generation, and more, thereby bridging Perl scripts with robust C-based numerics.[93] Notable updates include version 2.080 in May 2022, which enhanced broadcasting capabilities and included bug fixes for improved stability in threaded operations, with further releases up to 2.100 by 2025 adding support for newer Perl versions and performance tweaks.[94] For linear algebra specifically, the Math::MatrixReal module implements operations on real-valued matrices and vectors in pure Perl, without external dependencies, 
making it suitable for environments where portability is key. Developed in the early 2000s, it supports essential functions such as matrix multiplication, inversion, determinants, eigenvalues for symmetric matrices via Householder transformations and QL decomposition, and LU decomposition for solving systems of equations.[95] The module leverages Perl's operator overloading to treat matrices intuitively, akin to built-in types, facilitating applications in simulations and data modeling within Perl workflows.
Specialized and Emerging Languages
Julia Libraries
Julia, a high-level programming language designed for numerical and scientific computing, leverages just-in-time (JIT) compilation to achieve performance comparable to C, enabling efficient handling of large-scale computations without sacrificing expressiveness. Numerical libraries in Julia are often implemented as packages that integrate seamlessly with its multiple dispatch system, allowing for generic and performant algorithms tailored to scientific workflows. These libraries emphasize high-performance linear algebra, differential equation solving, and optimization, forming the backbone of Julia's ecosystem for simulations and data analysis. The standard library module LinearAlgebra.jl, included in Julia's base distribution since version 0.7 in 2018, provides comprehensive support for matrix and vector operations, including decompositions like eigenvalues, singular value decomposition (SVD), and LU factorization. It utilizes BLAS and LAPACK backends for optimized performance on multi-core systems and GPUs, ensuring numerical stability through features such as pivoting in factorizations. For instance, functions like eigen compute eigenvalues and eigenvectors for symmetric matrices with high accuracy, making it suitable for quantum mechanics and structural analysis applications. As of November 2025, Julia's latest stable release is version 1.12.1 (October 2025), which includes ongoing performance optimizations in base libraries like LinearAlgebra.jl.[96]
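As a point of comparison for readers outside Julia, the symmetric eigendecomposition that eigen performs (backed by the same LAPACK routines) can be sketched with NumPy's analogous eigh function; the 2×2 matrix here is an arbitrary example:

```python
import numpy as np

# A symmetric matrix; eigh exploits symmetry, analogous to calling
# eigen on a Symmetric-wrapped matrix in Julia's LinearAlgebra.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order: [1.0, 3.0]

# The decomposition reconstructs A exactly: A = V diag(w) V^T.
reconstructed = vecs @ np.diag(vals) @ vecs.T
```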
For solving ordinary differential equations (ODEs) and partial differential equations (PDEs), DifferentialEquations.jl serves as a unified interface, developed under the MIT license since its initial release in 2017. This package supports a wide range of solvers, including adaptive time-stepping methods like Runge-Kutta and multistep algorithms, which dynamically adjust step sizes to balance accuracy and efficiency in stiff systems. It integrates backends such as Sundials.jl, which wraps the SUNDIALS suite for robust handling of large-scale, nonlinear problems common in chemical kinetics and epidemiology modeling. The library's composable design allows users to specify equation structures declaratively, facilitating rapid prototyping of complex models like those in neuroscience.
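The adaptive time-stepping workflow DifferentialEquations.jl automates is similar in spirit to SciPy's solve_ivp interface; a minimal cross-language sketch using an adaptive Runge-Kutta method (the decay equation is an arbitrary test problem with known exact solution):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Exponential decay y' = -2y with y(0) = 1; exact solution y(t) = exp(-2t).
# RK45 adjusts its step size automatically to meet the requested tolerances,
# the same accuracy/efficiency trade-off described above.
sol = solve_ivp(lambda t, y: -2.0 * y, (0.0, 1.0), [1.0],
                method="RK45", rtol=1e-8, atol=1e-10)
```

The solver takes larger steps where the solution is smooth and smaller ones where it changes rapidly, which is what makes adaptive methods practical for stiff or multi-scale models.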
Optimization tasks in Julia are addressed by JuMP.jl, a domain-specific modeling language for mathematical programming, licensed under the Mozilla Public License (MPL) and first released in 2012. It provides an intuitive syntax for formulating linear, mixed-integer, and nonlinear programs, interfacing with external solvers such as Gurobi, CPLEX, and Ipopt to leverage their advanced algorithms for global optimality. For example, JuMP enables concise definitions of constraints and objectives, as in supply chain logistics or energy grid scheduling, where it abstracts solver-specific details while maintaining high performance through Julia's type system. This package has been pivotal in operations research, with extensions supporting stochastic and robust optimization variants.
Julia's numerical ecosystem also benefits from interoperability tools like PyCall.jl, which allows seamless integration with Python libraries such as NumPy for hybrid workflows. Overall, these libraries underscore Julia's role in bridging dynamic scripting with compiled efficiency, powering applications from climate modeling to machine learning.
Rust Libraries
Rust's numerical libraries leverage the language's ownership model and borrow checker to ensure memory safety and enable safe concurrency in high-performance computing tasks, making them suitable for systems-level numerics where parallel operations on arrays and matrices are common. These libraries provide foundational tools for array manipulation, linear algebra, and machine learning primitives, often integrating with Rust's standard library for efficient data handling without runtime overhead. The ndarray crate offers an n-dimensional container for general elements and numerical computations, supporting features like lightweight array views, slicing, and broadcasting to facilitate efficient data operations similar to those in scientific computing environments.[97] Released initially in December 2015 and licensed under Apache-2.0/MIT, ndarray includes the Axis type for specifying dimensions in operations, such as summing along a particular axis with .sum_axis(Axis(0)), which collapses the first axis to produce per-column totals.[98][99]
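ndarray's axis reductions map directly onto the axis keyword familiar from NumPy; a small comparison sketch (the values are arbitrary):

```python
import numpy as np

# NumPy equivalent of ndarray's a.sum_axis(Axis(0)): collapsing axis 0
# sums down each column, yielding per-column totals.
a = np.array([[1, 2, 3],
              [4, 5, 6]])
col_totals = a.sum(axis=0)   # [5, 7, 9]
row_totals = a.sum(axis=1)   # [6, 15], the Axis(1) counterpart
```

In both libraries the reduced axis is removed from the result's shape, so a (2, 3) input yields a length-3 vector for axis 0 and a length-2 vector for axis 1.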
Nalgebra serves as a linear algebra library tailored for geometry and physics simulations, supporting entity-component systems through its algebraic structures for vectors, matrices, and transformations. First released in November 2014 under the Apache-2.0/MIT license, it provides types like Matrix3 for 3D affine transformations, enabling operations such as rotation matrices with Matrix3::from_angle_z(theta).[100][101]
Linfa constitutes a machine learning toolkit built on numerical primitives, offering algorithms for tasks including clustering (e.g., K-means) and regression (e.g., linear models).[102] Launched in February 2020 with an Apache-2.0/MIT license, it emphasizes modular design for integrating with crates like ndarray for data preprocessing.[103] Many Rust numerical libraries, including extensions to ndarray, utilize foreign function interfaces (FFI) to bind with established C libraries like BLAS for optimized linear algebra routines.
OCaml Libraries
OCaml, a functional programming language with imperative capabilities, supports numerical computing through libraries that emphasize type safety, immutability, and integration with established numerical backends. These libraries leverage OCaml's pattern matching and garbage collection to enable reliable mathematical software development, distinguishing them from ownership-based systems in languages like Rust.[104] Owl is a comprehensive numerical library for scientific and engineering computing in OCaml, providing support for N-dimensional arrays, dense and sparse matrix operations, linear algebra, statistical functions, optimization algorithms, regressions, and fast Fourier transforms.[105] Developed since 2016 and licensed under the MIT license, Owl originated from a research project at the University of Cambridge and emphasizes a functional style with immutable data structures to ensure correctness and composability in computations.[106] Its core data structure, the Ndarray module, handles both CPU and GPU-accelerated operations, with Dense.Matrix offering CUDA bindings for parallel computing on graphics processing units.[107] In version 0.5.0, released in 2019, Owl introduced dedicated modules for neural networks, including single- and double-precision support for layers, activation functions, and training algorithms, expanding its applicability to machine learning tasks.[108][109]
Lacaml provides OCaml bindings to the BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra Package) libraries, enabling efficient linear algebra operations such as matrix decompositions, eigenvalue computations, and least-squares solvers.[110] Licensed under the LGPL since its initial development in the early 2000s, Lacaml wraps these Fortran-based routines while maintaining OCaml's type system for safe array handling and supports both real and complex number precisions.[111] It is particularly suited for high-performance applications requiring low-level numerical primitives without the overhead of higher-level abstractions.[112]
OCaml's numerical libraries, including Owl, find practical use in finance, as exemplified by Jane Street's extensive adoption of the language for trading systems and infrastructure.[113]
