List of numerical libraries
from Wikipedia

This is a list of numerical libraries, which are libraries used in software development for performing numerical calculations. It is not a complete listing but is instead a list of numerical libraries with articles on Wikipedia, with few exceptions.

The choice of library depends on a range of requirements, such as desired features (e.g. large-scale linear algebra, parallel computation, partial differential equations), licensing, readability of the API, portability or platform/compiler dependence (e.g. Linux, Windows, Visual C++, GCC), performance, ease of use, continued developer support, standards compliance, specialized optimization for specific application scenarios, or even the size of the code base to be installed.

Multi-language

  • ALGLIB is an open source numerical analysis library which may be used from C++, C#, FreePascal, Delphi, and VBA.
  • ArrayFire is a high performance open source software library for parallel computing with an easy-to-use API.
  • IMSL Numerical Libraries are libraries of numerical analysis functionality implemented in standard programming languages like C, Java, C# .NET, Fortran, and Python.
  • The NAG Library is a collection of mathematical and statistical routines for multiple programming languages (C, C++, Fortran, Visual Basic, Java, Python and C#) and packages (MATLAB, Excel, R, LabVIEW).
  • GNU Octave is an open-source high-level programming language and library with a command-line interface and GUI, analogous to commercial alternatives such as Maple, MATLAB, and Mathematica. Its functions and libraries can be called from many platforms, including high-level engineering programs, where they are in many cases interpreted and integrated in similar fashion to MATLAB. It can also be used in batch mode.
  • librsb is an open source library for high performance sparse matrix computations providing multi-threaded primitives to build iterative solvers (implements also the Sparse BLAS standard). It can be used from C, C++, Fortran, and a dedicated GNU Octave package.

C++

  • Adept is a combined automatic differentiation and array library.
  • Advanced Simulation Library is free and open source hardware accelerated multiphysics simulation software with an OpenCL-based internal computational engine.
  • ALGLIB is an open source / commercial numerical analysis library with a C++ version.
  • Armadillo is a C++ linear algebra library (matrix and vector maths), aiming towards a good balance between speed and ease of use.[1] It employs template classes, and has optional links to BLAS and LAPACK. The syntax (API) is similar to MATLAB.
  • Blitz++ is a high-performance vector mathematics library written in C++.
  • Boost.uBLAS C++ libraries for numerical computation
  • deal.II is a library supporting the finite element solution of partial differential equations.
  • Dlib is a modern C++ library with easy to use linear algebra and optimization tools which benefit from optimized BLAS and LAPACK libraries.
  • Eigen is a vector mathematics library with performance comparable to Intel's Math Kernel Library.
  • Hermes Project: C++/Python library for rapid prototyping of space- and space-time adaptive hp-FEM solvers.
  • IML++ is a C++ library for solving linear systems of equations, capable of dealing with dense, sparse, and distributed matrices.
  • IT++ is a C++ library for linear algebra (matrices and vectors), signal processing and communications. Functionality similar to MATLAB and Octave.
  • LAPACK++, a C++ wrapper library for LAPACK and BLAS
  • MFEM is a free, lightweight, scalable C++ library for finite element methods.
  • Intel MKL, Intel Math Kernel Library (in C and C++), a library of optimized math routines for science, engineering, and financial applications, written in C/C++ and Fortran. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math.
  • mlpack is an open-source library for machine learning, exploiting C++ language features to provide maximum performance and flexibility while providing a simple and consistent API
  • MTL4 is a generic C++ template library providing sparse and dense BLAS functionality. MTL4 establishes an intuitive interface (similar to MATLAB) and broad applicability thanks to Generic programming.
  • The NAG Library has a C++ API.
  • NTL is a C++ library for number theory.
  • OpenFOAM is an open-source C++ library for solving partial differential equations in computational fluid dynamics (CFD).
  • SU2 code is an open-source library for solving partial differential equations with the finite volume or finite element method.
  • Trilinos is an effort to develop algorithms and enabling technologies for the solution of large-scale, complex multi-physics engineering and scientific problems. It is a collection of packages.
  • Template Numerical Toolkit (TNT) linear algebra software in the public domain and entirely in the form of headers, from NIST. TNT was originally presented as a successor to Lapack++, Sparselib++, and IML++.[2]

.NET Framework languages C#, F#, VB.NET and PowerShell

  • Accord.NET is a collection of libraries for scientific computing, including numerical linear algebra, optimization, statistics, artificial neural networks, machine learning, signal processing and computer vision. LGPLv3, partly GPLv3.
  • AForge.NET is a computer vision and artificial intelligence library. It implements a number of genetic, fuzzy logic and machine learning algorithms with several architectures of artificial neural networks with corresponding training algorithms. LGPLv3 and partly GPLv3.
  • ALGLIB is an open source numerical analysis library with C# version. Dual licensed: GPLv2+, commercial license.
  • ILNumerics.Net is a commercial library of high-performance, typesafe numerical array classes and functions for general math, FFT, and linear algebra; it targets .NET/Mono, 32 and 64 bit, with script-like syntax in C#, 2D and 3D plot controls, and efficient memory management.
  • IMSL Numerical Libraries have a C# version (commercially licensed). IMSL for .NET was announced to reach end of life at the end of 2020.
  • Math.NET Numerics aims to provide methods and algorithms for numerical computations in science, engineering and everyday use. Covered topics include special functions, linear algebra, probability models, random numbers, interpolation, integral transforms and more. Free software under MIT/X11 license.
  • Measurement Studio is a commercial integrated suite UI controls and class libraries for use in developing test and measurement applications. The analysis class libraries provide various digital signal processing, signal filtering, signal generation, peak detection, and other general mathematical functionality.
  • ML.NET is a free software machine learning library for the C# programming language.[3][4]
  • The NAG Library has a C# API. Commercially licensed.
  • NMath by CenterSpace Software: Commercial numerical component libraries for the .NET platform, including signal processing (FFT) classes, a linear algebra (LAPACK & BLAS) framework, and a statistics package.

Java

  • Apache Commons is an open-source project for creating reusable Java components. It has numerical packages for linear algebra and non-linear optimization.
  • Colt provides a set of Open Source Libraries for High Performance Scientific and Technical Computing.
  • Efficient Java Matrix Library (EJML) is an open-source linear algebra library for manipulating dense matrices.
  • JAMA, a numerical linear algebra toolkit for the Java programming language. No active development has taken place since 2005, but it is still one of the more popular linear algebra packages in Java.
  • Jblas: Linear Algebra for Java, a linear algebra library which is an easy to use wrapper around BLAS and LAPACK.
  • Parallel Colt is an open source library for scientific computing. A parallel extension of Colt.
  • Matrix Toolkit Java is a linear algebra library based on BLAS and LAPACK.
  • ojAlgo is an open source Java library for mathematics, linear algebra and optimisation.
  • exp4j is a small Java library for evaluation of mathematical expressions.
  • SuanShu is an open-source Java math library. It supports numerical analysis, statistics and optimization.
  • Maja is an open-source Java library focusing primarily on correct implementations of various special functions.

OCaml

  • OCaml programming language has support for array programming in the standard library, including a dedicated bigarray module for multi-dimensional numerical arrays with both C and Fortran layout options. Comprehensive support for numerical computation is provided by the library Owl Scientific Computing, which offers methods for statistics, linear algebra (using OpenBLAS), differential equations, algorithmic differentiation, fast Fourier transforms, and deep neural networks.[11] Other numerical libraries in OCaml are Lacaml, which interfaces the BLAS and LAPACK Fortran/C libraries, and L-BFGS-ocaml (OCaml bindings for L-BFGS). For visualization there are libraries for plotting using PLplot, gnuplot, or matplotlib.

Python

  • NumPy, a BSD-licensed library that adds support for the manipulation of large, multi-dimensional arrays and matrices; it also includes a large collection of high-level mathematical functions. NumPy serves as the backbone for a number of other numerical libraries, notably SciPy. De facto standard for matrix/tensor operations in Python.
  • Pandas, a library for data manipulation and analysis.
  • SageMath is a large mathematical software application which integrates the work of nearly 100 free software projects and supports linear algebra, combinatorics, numerical mathematics, calculus, and more.[17]
  • SciPy,[18][19][20] a large BSD-licensed library of scientific tools. De facto standard for scientific computations in Python.
  • ScientificPython, a library with a different set of scientific tools
  • SymPy, a library based on New BSD license for symbolic computation. Features of Sympy range from basic symbolic arithmetic to calculus, algebra, discrete mathematics and quantum physics.
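The array behavior that makes NumPy the de facto standard can be sketched briefly: broadcasting combines arrays of differing shapes, and `numpy.linalg` dispatches dense solves to LAPACK under the hood (a minimal illustration, assuming NumPy is installed):

```python
import numpy as np

# Broadcasting: a (3, 1) column plus a (3,) row expands to a (3, 3) table.
col = np.arange(3).reshape(3, 1)
row = np.arange(3)
table = col + row
print(table[2, 2])  # → 4

# numpy.linalg.solve handles dense linear systems via LAPACK.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)   # solves 3x + y = 9, x + 2y = 8
print(x)  # → [2. 3.]
```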

from Grokipedia
A list of numerical libraries is a catalog of software packages that provide reusable routines, functions, subroutines, or classes for implementing mathematical and numerical algorithms, enabling efficient computations in scientific, engineering, and data-intensive applications. These libraries address core challenges in numerical analysis by offering optimized implementations for operations such as solving linear systems, performing eigenvalue decompositions, optimization, statistical modeling, Fourier transforms, and evaluating special functions like the Bessel or gamma functions. Developed primarily in performance-oriented languages such as Fortran and C for low-level efficiency, they often include interfaces or bindings for higher-level languages such as C++, Python, and Julia to facilitate broader accessibility and integration into modern workflows. By standardizing algorithms with rigorous testing for accuracy, portability, and error handling, often adhering to established standards, numerical libraries reduce development time, minimize bugs, and support execution on distributed systems and grids. Lists of these libraries serve as resources for selecting tools based on specific needs, such as problem domain (e.g., linear algebra versus optimization), licensing (open source versus commercial), and compatibility with hardware accelerators like GPUs.

Multi-language Libraries

Core Mathematical and Linear Algebra Libraries

Core mathematical and linear algebra libraries form the foundational layer for numerical computations in multi-language environments, providing essential routines for matrix operations, vector manipulations, and solving linear systems, accessible across various programming paradigms. These libraries emphasize portability and efficiency, often integrating with established standards like BLAS and LAPACK to ensure compatibility with existing setups. By offering interfaces in multiple languages, they let developers perform core operations without reimplementing basic algorithms, supporting applications from scientific simulations to machine learning.

ALGLIB is an open-source library that includes comprehensive support for linear algebra, such as dense and sparse solvers, decompositions (LU, Cholesky, QR, SVD), and basic mathematical routines. Developed since 1999, it supports C++, C#, Python, and several other languages, with a dual licensing model featuring a free GPL edition and a commercial version for industrial use. ALGLIB achieves efficiency through native implementations and compatibility with BLAS/LAPACK for dense and sparse matrices, allowing integration with optimized vendor libraries.

IMSL Numerical Libraries provide a commercial suite for advanced numerical analysis, encompassing over 1,000 routines for linear systems, eigenvalue computations, interpolation, and approximation. Available in C, C++, Fortran, Java, .NET (including C#), and Python, these libraries are designed for robust performance in enterprise applications like machine learning and engineering simulations. Key features include solvers for symmetric and general eigenproblems, as well as spline and polynomial interpolation methods, with built-in support for parallel processing via OpenMP.

The NAG Library, developed by the Numerical Algorithms Group since 1976, offers a comprehensive collection of over 1,600 routines for linear algebra, fast Fourier transforms (FFT), and special functions, catering to high-precision scientific computing needs. It supports interfaces for C, C++, Fortran, Java, Python, .NET, and others, ensuring broad accessibility across platforms. NAG includes BLAS/LAPACK-compatible implementations for dense and sparse matrix operations, along with advanced eigensystem analysis and quadrature routines, validated through rigorous testing for reliability in critical applications.

Armadillo is an open-source C++ library focused on linear algebra, featuring a high-level syntax reminiscent of MATLAB for operations on dense and sparse matrices, vectors, and cubes. It provides bindings for Python via PyArmadillo and for Julia through mlpack interfaces, extending its usability beyond native C++. Efficiency is enhanced by template meta-programming for compile-time optimizations and delayed evaluation to minimize temporary allocations, with optional BLAS/LAPACK backends such as OpenBLAS for accelerated decompositions and solvers.

These libraries often integrate with high-level wrappers in languages such as Python to facilitate rapid prototyping while leveraging the underlying multi-language cores.
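The decompositions these libraries expose (LU, Cholesky, QR, SVD) can be illustrated with a toy QR factorization via modified Gram-Schmidt; production routines instead use the more numerically stable Householder approach, so this is a sketch of the idea, not library code:

```python
import math

def qr_gram_schmidt(A):
    """QR factorization via modified Gram-Schmidt.

    A is a list of rows with linearly independent columns. Returns (Q, R)
    with Q having orthonormal columns and R upper triangular, so A = Q R.
    """
    n, m = len(A), len(A[0])
    cols = [[A[i][j] for i in range(n)] for j in range(m)]  # column view
    q_cols = []
    R = [[0.0] * m for _ in range(m)]
    for j in range(m):
        v = cols[j]
        for i in range(j):
            # project out the already-orthonormalized directions
            R[i][j] = sum(q_cols[i][k] * v[k] for k in range(n))
            v = [v[k] - R[i][j] * q_cols[i][k] for k in range(n)]
        R[j][j] = math.sqrt(sum(c * c for c in v))
        q_cols.append([c / R[j][j] for c in v])
    Q = [[q_cols[j][i] for j in range(m)] for i in range(n)]  # back to rows
    return Q, R

Q, R = qr_gram_schmidt([[1.0, 1.0], [0.0, 1.0]])
print(R)  # → [[1.0, 1.0], [0.0, 1.0]]
```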

Optimization and Statistical Libraries

Optimization and statistical libraries provide essential tools for solving complex optimization problems and performing advanced statistical analyses, often leveraging specialized algorithms accessible across multiple programming languages. These libraries enable researchers and practitioners to model and solve large-scale optimization tasks, such as mixed-integer linear programming (MILP) and nonlinear programming, as well as Bayesian inference through probabilistic programming frameworks. By offering interfaces in languages like C++, Python, R, and others, they integrate into diverse computational workflows without language-specific silos.

ArrayFire is a high-performance, GPU-accelerated library designed for parallel array computing, suited to optimization routines and image processing applications that benefit from vectorized operations on arrays. It supports CUDA, OpenCL, and CPU backends, allowing execution on various hardware architectures, and has been available as open source with optional commercial support since 2014. ArrayFire provides bindings for C++, Python, Julia, and other languages, enabling developers to implement parallel algorithms for tasks like gradient-based optimization and simulations with minimal code changes across languages. Its just-in-time (JIT) compilation optimizes kernel launches for efficiency in high-dimensional data processing.

Gurobi Optimizer is a commercial solver for mixed-integer linear programming (MILP), quadratic programming, and related problem classes at scale. It employs barrier methods for continuous problems and branch-and-cut algorithms for mixed-integer cases, enabling the solution of models with millions of variables and constraints. Gurobi offers native interfaces for Python (gurobipy), C, C++, Java, R, and .NET, supporting rapid prototyping and deployment in mixed-language environments. These interfaces allow users to define models using high-level APIs while accessing low-level tuning parameters for performance-critical applications.

IPOPT (Interior Point OPTimizer) is an open-source package for large-scale nonlinear optimization, implementing a primal-dual interior-point method to find local solutions to constrained problems. Developed under the COIN-OR project since 2001, it is written in C++ and includes bindings for Python (via cyipopt), Julia (Ipopt.jl), and other languages, promoting its use in multi-language scientific computing pipelines. IPOPT uses a filter line-search method for handling nonlinear constraints and objectives, with options for warm starts and Hessian approximations to accelerate convergence. It integrates with core linear algebra libraries like BLAS for efficient matrix operations during factorization steps.

Stan is a probabilistic programming library for Bayesian statistical inference, allowing users to specify complex hierarchical models and perform posterior sampling using Markov chain Monte Carlo (MCMC) methods. Built on a C++ core since its initial release in 2012, Stan provides interfaces to the Stan modeling language from Python (PyStan or CmdStanPy), R (rstan), and Julia, enabling model development and inference across ecosystems. It employs Hamiltonian Monte Carlo sampling, enhanced by the No-U-Turn Sampler (NUTS), for efficient exploration of posterior distributions without manual tuning of hyperparameters. Stan's automatic differentiation computes gradients of the log-probability function, supporting scalable inference in models with thousands of parameters.
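The solvers above all build on derivative information; the basic Newton step they generalize can be sketched in a few lines of Python (a toy one-dimensional illustration, not the algorithm of any library named here, which handle constraints and high dimensions):

```python
def newton_minimize(f_prime, f_double_prime, x0, tol=1e-10, max_iter=50):
    """Toy Newton iteration for unconstrained 1-D minimization.

    Repeatedly solves the local quadratic model: x <- x - f'(x) / f''(x).
    Interior-point solvers like IPOPT apply the same idea to constrained,
    high-dimensional problems with sparse linear algebra.
    """
    x = x0
    for _ in range(max_iter):
        step = f_prime(x) / f_double_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Minimize f(x) = (x - 3)^2 + 1: f'(x) = 2(x - 3), f''(x) = 2.
x_star = newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=0.0)
print(round(x_star, 6))  # → 3.0
```

On a quadratic objective the Newton step is exact, so the iteration converges in one step; nonlinear objectives require the safeguards (line searches, trust regions) that production solvers add.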

Low-level Programming Languages

C Libraries

C libraries form the foundation for many numerical computing tasks due to their performance, portability across platforms, and straightforward integration with other languages through foreign function interfaces. These libraries typically expose procedural APIs without relying on object-oriented constructs, making them well suited to performance-critical applications in scientific simulations and engineering. Key examples include comprehensive toolkits for general-purpose computations and specialized transform libraries.

The GNU Scientific Library (GSL) is a free, open-source numerical library implemented in C, designed for scientific and mathematical computations. First released in 1996, it is distributed under the GNU General Public License version 3 or later. GSL encompasses over 1,000 functions spanning diverse domains, including random number generation with multiple algorithms such as the Mersenne Twister, numerical integration via adaptive quadrature methods, and statistical tools for distributions and hypothesis testing. For linear algebra, it provides structures such as gsl_matrix for dense matrix operations, supporting basic arithmetic, decomposition, and eigenvalue computations. Version 2.8, released on May 25, 2024, includes enhancements such as improved support for complex types and matrix operations, along with numerous bug fixes.

FFTPACK is a public-domain library offering efficient routines for computing fast Fourier transforms (FFTs), originally developed by Paul N. Swarztrauber at the National Center for Atmospheric Research (NCAR). It supports one-dimensional and multidimensional FFTs for complex, real, sine, cosine, and quarter-wave symmetric data, making it suitable for signal processing and geophysical modeling. While primarily written in Fortran, C wrappers and ports exist to enable direct use in C environments, preserving its high performance on periodic and symmetric sequences. The library's routines, such as cfft for complex FFTs and sint for sine transforms, have been widely adopted in legacy scientific codes due to their reliability and lack of licensing restrictions.

FFTW (Fastest Fourier Transform in the West) is a comprehensive, open-source C library for computing discrete Fourier transforms (DFTs) in one or multiple dimensions, for both real and complex data, and of arbitrary input size. Developed starting in 1997 by Matteo Frigo and Steven G. Johnson, it is distributed under the GNU General Public License (GPL), with a proprietary license available for commercial use. FFTW features performance optimizations including SIMD instructions, cache-friendly blocking, and multithreading support, making it suitable for high-performance computing applications in signal processing, image analysis, and physics simulations. Its planner mechanism automatically selects the fastest algorithm for the given input characteristics, achieving near-optimal performance across hardware platforms.

GSL and similar libraries can be interfaced with higher-level languages such as Python using mechanisms like ctypes for integration in scripting environments.
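The radix-2 divide-and-conquer idea underlying FFTPACK and FFTW can be sketched in plain Python (illustrative only; FFTW's planner selects among far more sophisticated variants and vectorized kernels):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.

    Splits the input into even- and odd-indexed halves, transforms each
    recursively, then combines them with "twiddle factor" rotations.
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# The spectrum of a constant signal concentrates all energy in bin 0.
spectrum = fft([1.0, 1.0, 1.0, 1.0])
print([round(abs(c), 9) for c in spectrum])  # → [4.0, 0.0, 0.0, 0.0]
```

This runs in O(n log n) versus O(n^2) for the direct DFT sum, which is the entire point of the FFT family these libraries implement.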

Fortran Libraries

Fortran has long been a cornerstone language for numerical computing due to its efficient array handling and performance in high-performance computing (HPC) environments, making it well suited to scientific simulations and large-scale data processing. Numerical libraries in Fortran emphasize low-level optimizations for vector and matrix operations, often prioritizing cache efficiency to minimize data movement in memory hierarchies, which is crucial for HPC applications.

The Basic Linear Algebra Subprograms (BLAS) provide a standard application programming interface (API) for fundamental vector and matrix operations in Fortran. BLAS is divided into three levels: Level 1 for vector-vector operations like dot products, Level 2 for matrix-vector operations, and Level 3 for matrix-matrix operations such as general matrix multiplication. Originally developed in the late 1970s as a set of 38 Fortran subroutines for basic tasks, BLAS has evolved through multiple implementations optimized for modern architectures, including OpenBLAS, a high-performance fork of GotoBLAS2 that supports multithreading and SIMD instructions. These implementations enhance cache efficiency by blocking operations to improve data locality, reducing memory access latency in HPC workloads. Modern BLAS implementations also leverage enhancements from the Fortran 2018 standard, such as improved array operations and parallelism features, to achieve better scalability on multicore systems.

Building upon BLAS, the Linear Algebra Package (LAPACK) is an open-source Fortran library for advanced numerical linear algebra, serving as a successor to the earlier LINPACK and EISPACK libraries. First released in 1992 and written primarily in Fortran 90, LAPACK offers routines for solving systems of linear equations, eigenvalue problems, singular value decompositions, and least-squares problems, all under a BSD-3-Clause-like license that promotes widespread adoption. A key technique is LU decomposition for solving linear systems of the form Ax = b, where A is factorized into lower and upper triangular matrices L and U such that A = LU, allowing efficient forward and backward substitution to find x.

The Netlib repository, hosted since the 1980s, serves as a foundational public-domain collection of Fortran routines for various numerical tasks, including optimization and curve fitting. Notable packages include MINPACK, a set of subroutines for solving systems of nonlinear equations and nonlinear least-squares problems using algorithms like Levenberg-Marquardt, which are essential for optimization in scientific modeling. Similarly, FITPACK provides routines for curve and surface fitting with splines, supporting automatic tension determination, derivative computations, and three-dimensional curve handling.

C wrappers such as cblas.h enable BLAS and LAPACK integration with C code, extending their utility in mixed-language environments.
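The LU-based solve described above can be sketched in plain Python (Doolittle factorization without pivoting, for illustration only; LAPACK's production routines add partial pivoting for numerical stability):

```python
def lu_solve(A, b):
    """Solve A x = b by factorizing A = L U, then substituting.

    L is unit lower triangular, U upper triangular. Forward substitution
    solves L y = b; backward substitution solves U x = y.
    Toy version: no pivoting, so it assumes nonzero pivots.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    y = [0.0] * n                      # forward substitution: L y = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n                      # backward substitution: U x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

print(lu_solve([[4.0, 3.0], [6.0, 3.0]], [10.0, 12.0]))  # → [1.0, 2.0]
```

The payoff of the factorization is reuse: once L and U are computed in O(n^3), each additional right-hand side costs only O(n^2) substitution work.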

Object-oriented Programming Languages

C++ Libraries

C++ numerical libraries leverage the language's templates and Standard Template Library (STL) integration to provide efficient, type-safe implementations of mathematical operations, enabling expressive and performant code in object-oriented designs. These libraries often emphasize compile-time optimizations and resource acquisition is initialization (RAII) for memory management, distinguishing them from lower-level C alternatives by reducing boilerplate while maintaining high performance through native compilation.

Eigen is a C++ template library for linear algebra, offering support for dense and sparse matrices, vectors, numerical solvers, and geometric transformations. It originated in 2006 and is licensed under the Mozilla Public License 2.0 (MPL2). A key feature of Eigen is its use of expression templates, which allow the compiler to avoid unnecessary temporary objects during computations by fusing operations at compile time. Eigen also provides compatibility with external standards like BLAS and LAPACK for accelerated linear algebra routines. Version 5.0.0, released on September 30, 2025, includes enhanced SIMD support, building on prior versions' support for instruction sets such as AVX and ARM NEON, to boost vectorized performance on modern hardware.

Boost.Math, a component of the broader Boost C++ Libraries collection, provides implementations of special mathematical functions, quaternions, octonions, and statistical distributions for numerical computations. Developed since 2001 under the Boost Software License, it emphasizes precision and portability across floating-point types. The library's policy-based design allows users to customize error handling, precision, and domain behavior at compile time, facilitating tailored behavior for specific applications without runtime overhead.

IT++ is a C++ library focused on signal processing and communication system simulations, including classes for filters, modulation schemes, channel models, and error-correcting codes. Released under the GNU General Public License (GPL) and developed since the early 2000s, with the last release (version 4.3.1) in July 2013, it supports efficient simulations through optimized linear algebra and signal processing routines. IT++ integrates STL containers for data handling and provides extensible frameworks for custom simulation blocks.
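The expression-template technique Eigen uses can be mimicked in Python to show the idea: operator overloading builds a deferred expression object, and the whole chain is fused into a single loop only when evaluated, avoiding intermediate temporaries (a conceptual analogue with invented class names, not Eigen's API):

```python
class LazySum:
    """Deferred elementwise sum: a Python analogue of expression templates.

    a + b + c builds an expression tree instead of computing eagerly;
    evaluate() walks all operands in one fused loop, so no temporary
    vector is ever materialized for the partial sum a + b.
    """
    def __init__(self, *operands):
        self.operands = operands

    def __add__(self, other):
        # extend the expression instead of evaluating it
        return LazySum(*self.operands, other)

    def evaluate(self):
        n = len(self.operands[0])
        return [sum(op[i] for op in self.operands) for i in range(n)]

class Vec(list):
    def __add__(self, other):
        return LazySum(self, other)

a, b, c = Vec([1, 2]), Vec([3, 4]), Vec([5, 6])
result = (a + b + c).evaluate()   # one fused loop, no temporaries
print(result)  # → [9, 12]
```

In C++, Eigen does this at compile time with templates, so the fused loop is generated with zero runtime dispatch overhead; the Python version only conveys the structure of the technique.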

Java Libraries

Java numerical libraries provide robust support for mathematical computations on the Java Virtual Machine (JVM), emphasizing portability, threading, and integration with enterprise environments. These libraries leverage Java's object-oriented features and garbage collection to handle complex numerical tasks, such as linear algebra and optimization, while maintaining compatibility across platforms. Key examples include Apache Commons Math, ND4J, and ojAlgo, each offering specialized tools for scientific and data-intensive applications.

Apache Commons Math is an open-source library that delivers lightweight components for linear algebra, statistics, and optimization, addressing common mathematical problems not natively available in the Java standard library. It supports arbitrary-precision arithmetic through integration with BigDecimal, enabling high-accuracy computations for financial and scientific use cases. Released under the Apache License 2.0, the library originated in 2003 as part of the Apache Commons project and has evolved into a foundational tool for JVM-based numerical analysis. A notable feature is the RealMatrix interface, which provides dense matrix operations such as multiplication and inversion for real-valued matrices. As of November 2025, version 3.6.1 (released in 2016) is the latest stable release, with version 4.0 in beta since 2024; 3.6.1 included bug fixes and refinements to convergence criteria in optimization algorithms, improving reliability for iterative solvers.

ND4J is a Java and Scala library for numerical computing, supporting both CPU and GPU execution on the JVM through NumPy-like n-dimensional arrays (NDArrays). It is licensed under Apache 2.0 and forms a core component of the Deeplearning4j ecosystem, which emerged in the mid-2010s to enable deep learning workflows in Java environments; the latest version, 1.0.0-M2.1, was released on August 14, 2022. ND4J's CUDA backend allows acceleration of tensor operations on NVIDIA GPUs, making it suitable for handling multi-dimensional data in machine learning applications. This backend supports tensor manipulations akin to those in deep learning frameworks, such as reshaping and broadcasting, while ensuring JVM compatibility.

ojAlgo is an open-source library focused on linear algebra and optimization, designed as a high-performance engine that can underpin other computational tools. It provides multi-threaded implementations for matrix decompositions and solvers, emphasizing resource efficiency in pure Java environments. Version 56.1.1, released November 9, 2025, is distributed under the MIT License; development traces to the mid-2000s. The library excels in optimization tasks, including linear and quadratic programming, serving as a modular "engine for engines" in computational pipelines.

These libraries can also integrate with big data frameworks like Hadoop for scalable numerical processing in distributed systems.

.NET Framework Libraries

The .NET platform provides a robust ecosystem for numerical computing, leveraging languages like C#, F#, and VB.NET to enable high-performance, type-safe implementations suitable for enterprise Windows development. Numerical libraries in this domain emphasize integration with the Common Language Runtime (CLR), supporting generics for reusable algorithms and asynchronous operations for scalable computations. These libraries facilitate tasks ranging from linear algebra to signal processing, often incorporating native interop for enhanced performance.

Math.NET Numerics is an open-source numerical library initiated in 2009 through the merger of the dnAnalytics and Math.NET projects, providing comprehensive methods for linear algebra, statistics, optimization, and signal processing such as Fast Fourier Transforms (FFT). It supports generic types for flexible matrix and vector operations, allowing developers to work with various numeric types like double and complex numbers. The DenseMatrix class implements efficient linear solvers, including direct factorizations and iterative methods like conjugate gradient, enabling solutions to systems of linear equations in scientific simulations. Licensed under the MIT/X11 license, it has been actively maintained on GitHub; version 5.0.0, released on April 3, 2022 and still the latest as of November 2025, introduced improved native interop for providers like MKL and OpenBLAS to boost computational speed without managed overhead.

Accord.NET, started in 2009 as an extension to the AForge.NET framework, has evolved into a full-featured open-source framework for scientific computing, machine learning, and signal and image processing. It incorporates numerical primitives for statistics, including distributions, hypothesis testing, and kernel methods, alongside tools for audio and video analysis such as Fourier transforms and filtering. The framework, distributed under the GNU Lesser General Public License version 2.1, supports integration with .NET's imaging and audio APIs for real-time applications. Its predecessor, AForge.NET, provided foundational numerics for image processing, which Accord.NET expanded with advanced components like support vector machines and neural networks. The latest stable release, version 3.8.0 from October 22, 2017, with no further releases as of November 2025, includes optimized algorithms for image and signal processing, making it suitable for research in computer vision and computer audition.

Microsoft.ML (ML.NET), part of the .NET ecosystem since its initial preview release in May 2018, is an open-source framework that includes numerical primitives for data transformation, feature engineering, and model training. It offers built-in support for linear algebra operations such as vectorization and matrix manipulation, essential for algorithms like regression and clustering, and integrates with ONNX for model interoperability. Licensed under the MIT License, it enables end-to-end workflows from data loading to prediction serving, with extensibility for custom trainers. Version 3.0, released in November 2023, added deep learning capabilities via TorchSharp; subsequent releases as of 2025 include further enhancements.

These libraries can call into C++ numerical routines through P/Invoke for specialized high-performance needs.

High-level and Scripting Languages

Python Libraries

Python's numerical libraries form a rich ecosystem that prioritizes ease of use, rapid prototyping, and seamless integration with other tools in data science and scientific computing, making it a preferred choice for researchers and developers seeking interpretable code without sacrificing performance. NumPy serves as the foundational package for numerical computing in Python, enabling efficient handling of large, multi-dimensional arrays and matrices through its core ndarray data structure, which supports advanced indexing, broadcasting for operations on arrays of differing shapes, and a suite of mathematical functions including those for linear algebra via the linalg submodule. Released under the BSD license, NumPy originated from the merger of earlier array libraries and achieved its first stable version, 1.0, in 2006, establishing it as the bedrock for subsequent scientific Python tools. In June 2024, NumPy 2.0 marked the first major release since 2006, delivering enhanced C API stability, relaxed type promotion rules, and improved performance for array operations to better support modern workflows. Subsequent releases, such as 2.1 in August 2024 and 2.3 in 2025, introduced further optimizations, including better support for string dtypes and relaxed strides. SciPy extends NumPy's capabilities into a comprehensive framework for scientific and technical computing, incorporating modules for optimization, numerical integration, interpolation, signal processing, eigenvalue computations, ordinary differential equation solvers, and sparse linear algebra through its sparse.linalg submodule, which provides iterative methods for solving large-scale systems on sparse matrices. Distributed under the BSD license, SciPy's development began in 2001, and it has evolved into a de facto standard for applying advanced algorithms in Python, with over 600 unique contributors by 2019. Releases such as SciPy 1.14 (2024) include enhancements in sparse matrix handling and integration routines.
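A minimal sketch of the broadcasting and linalg features described above (the array values are invented for the example):

```python
import numpy as np

# Broadcasting: a (3, 1) column and a (4,) row combine into a (3, 4) grid
col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4)                 # shape (4,)
grid = col * 10 + row              # shape (3, 4), no explicit loops

# Solving a small linear system via the linalg submodule
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)          # solves A @ x == b, giving [2.0, 3.0]
```

Broadcasting stretches the smaller operand along size-1 dimensions, so no temporary copies of the full grid are made before the arithmetic runs.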
SymPy offers a pure-Python library for symbolic mathematics, acting as a lightweight computer algebra system capable of manipulating algebraic expressions, solving equations symbolically, performing calculus operations like differentiation and integration, and simplifying mathematical constructs without numerical approximation. Initiated in 2005 and reaching its first public release in 2007, SymPy operates under the BSD license, emphasizing extensibility and integration with numerical libraries like NumPy for hybrid symbolic-numeric computations. In April 2025, SymPy 1.14 added improvements to series expansions and solving capabilities.
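The symbolic workflow above can be sketched in a few lines; the expression chosen here is arbitrary:

```python
import sympy as sp

x = sp.symbols('x')
expr = x**2 + 2*x + 1

deriv = sp.diff(expr, x)        # symbolic differentiation: 2*x + 2
factored = sp.factor(expr)      # exact factorization: (x + 1)**2
roots = sp.solve(sp.Eq(expr, 0), x)   # symbolic root finding: [-1]
```

Every result is an exact symbolic object, not a floating-point approximation; sp.lambdify can later convert such expressions into fast NumPy-backed numeric functions for the hybrid workflows mentioned above.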

R Libraries

R libraries for numerical computing are primarily designed for statistical computing, data manipulation, and visualization, building on the S language heritage to support the needs of statisticians and data scientists. These packages emphasize integration with R's base functions for seamless workflows in data analysis, modeling, and simulation. Key contributions include efficient handling of matrices, acceleration of computations via lower-level languages, and practical implementations of numerical algorithms tailored to statistical applications. The Matrix package provides a comprehensive framework for dense and sparse matrices, extending R's base matrix capabilities with S4 classes and methods for linear algebra operations. It interfaces with established libraries such as LAPACK and BLAS for dense matrix computations, enabling efficient storage and manipulation of large datasets common in statistical modeling. First released in 2000, Matrix supports operations like LU decomposition via the lu() function, which factors a square matrix into lower and upper triangular components for solving linear systems. Rcpp facilitates high-performance numerical computations by providing a seamless interface between R and C++, allowing users to embed C++ code directly within R functions for speed-critical tasks. Licensed under the GPL since its initial release in 2008, it maps R data types to C++ equivalents, supporting vectorized operations and custom algorithms without sacrificing R's interpretive flexibility. For linear algebra, the RcppArmadillo package extends Rcpp by integrating the Armadillo templated C++ library, offering high-level syntax for matrix manipulations while leveraging optimized backends like LAPACK. The pracma package delivers practical numerical mathematics routines, focusing on algorithms for root finding, interpolation, and splines that complement R's statistical tools.
Available since the early 2010s, it includes implementations such as Newton-Raphson for univariate root solving and cubic spline fitting, aiding in data smoothing and optimization within statistical contexts. These functions prioritize ease of use for applied users, often drawing from MATLAB-like syntax for familiarity. Updates in R version 4.4.0, released in April 2024, enhanced numerical accuracy in the base system, including more graceful underflow handling in pnorm() and improved accuracy in stirlerr(), which impacts density functions like dgamma(). Additionally, in the R 4.2 series, solve.default() gained better handling of NA or NaN values in inputs with certain configurations, reducing errors in linear system solving. As of November 2025, R 4.5.2 is the current version, with further refinements in numerical functions such as improved precision in integrate() for certain integrands. R libraries can also interface with Python ecosystems via the reticulate package for hybrid workflows.

Perl Libraries

Perl numerical libraries primarily facilitate array-oriented computations and linear algebra within the Perl scripting environment, which is particularly advantageous for tasks involving text processing, bioinformatics, and data manipulation where seamless integration with string operations is essential. These modules extend Perl's capabilities beyond its core strengths in text handling, enabling efficient numerical processing through object-oriented interfaces and vectorized operations. Distributed via the Comprehensive Perl Archive Network (CPAN), they support a range of applications from scientific computing to image processing. The Perl Data Language (PDL) stands as the cornerstone for numerical computing in Perl, providing an array-oriented extension that handles large N-dimensional data arrays with high performance. Released initially in 1997 under the Artistic License or GPL, PDL introduces the "pdl" object, a versatile container for multidimensional arrays that supports vectorized operations known as broadcasting (formerly termed threading), allowing efficient manipulation without explicit loops. It excels in domains like image processing, where functions for filtering, transformations, and display integrate natively with Perl's ecosystem. Furthermore, PDL integrates with the GNU Scientific Library (GSL) through the PDL::GSL module, offering access to advanced routines for integration, differentiation, and more, thereby bridging Perl scripts with robust C-based numerics. Notable updates include version 2.080 in May 2022, which included bug fixes for improved stability in threaded operations, with further releases up to 2.100 by 2025 adding performance tweaks. For linear algebra specifically, the Math::MatrixReal module implements operations on real-valued matrices and vectors in pure Perl, without external dependencies, making it suitable for environments where portability is key.
Developed in the late 1990s, it supports essential functions such as multiplication, inversion, determinants, eigenvalues of symmetric matrices via Householder transformations and QL decomposition, and LR decomposition for solving systems of equations. The module leverages Perl's operator overloading to treat matrices intuitively, akin to built-in types, facilitating applications in simulations and numerical analysis within Perl workflows.

Specialized and Emerging Languages

Julia Libraries

Julia, a high-level dynamic language designed for numerical and scientific computing, leverages just-in-time (JIT) compilation to achieve performance comparable to C, enabling efficient handling of large-scale computations without sacrificing expressiveness. Numerical libraries in Julia are often implemented as packages that integrate seamlessly with its multiple dispatch system, allowing for generic and performant algorithms tailored to scientific workflows. These libraries emphasize high-performance linear algebra, differential equation solving, and optimization, forming the backbone of Julia's ecosystem for simulations and data analysis. The standard library module LinearAlgebra.jl, included in Julia's base distribution since version 0.7 in 2018, provides comprehensive support for matrix and vector operations, including decompositions like eigendecomposition, singular value decomposition (SVD), and LU factorization. It utilizes BLAS and LAPACK backends for optimized performance on multi-core systems, ensuring numerical stability through features such as pivoting in factorizations. For instance, functions like eigen compute eigenvalues and eigenvectors for symmetric matrices with high accuracy, making the module suitable for scientific and engineering applications. As of November 2025, Julia's latest stable release is version 1.12.1 (October 2025), which includes ongoing performance optimizations in base libraries like LinearAlgebra.jl. For solving ordinary differential equations (ODEs) and partial differential equations (PDEs), DifferentialEquations.jl serves as a unified interface, developed under the MIT License since its initial release in 2017. This package supports a wide range of solvers, including adaptive time-stepping methods like Runge-Kutta and multistep algorithms, which dynamically adjust step sizes to balance accuracy and efficiency in stiff systems. It integrates backends such as Sundials.jl, which wraps the SUNDIALS suite for robust handling of large-scale, nonlinear problems common in physical and biological modeling. The library's composable design allows users to specify equation structures declaratively, facilitating rapid prototyping of complex models.
Optimization tasks in Julia are addressed by JuMP.jl, a domain-specific modeling language for mathematical programming, licensed under the Mozilla Public License (MPL) and first released in 2012. It provides an intuitive syntax for formulating linear, mixed-integer, and nonlinear programs, interfacing with external solvers such as Gurobi, CPLEX, and Ipopt to leverage their advanced algorithms for global optimality. For example, JuMP enables concise definitions of constraints and objectives, as in portfolio optimization or energy grid scheduling, where it abstracts solver-specific details while maintaining high performance through Julia's JIT compilation. This package has been pivotal in operations research, with extensions supporting additional problem classes such as stochastic programs. Julia's numerical ecosystem also benefits from interoperability tools like PyCall.jl, which allows seamless integration with Python libraries such as NumPy for hybrid workflows. Overall, these libraries underscore Julia's role in bridging dynamic scripting with compiled efficiency, powering applications ranging from climate modeling to scientific machine learning.

Rust Libraries

Rust's numerical libraries leverage the language's ownership model and borrow checker to ensure memory safety and enable safe concurrency in numerical tasks, making them suitable for systems-level numerics where parallel operations on arrays and matrices are common. These libraries provide foundational tools for array manipulation, linear algebra, and machine learning primitives, often relying on Rust's zero-cost abstractions for efficient data handling without runtime overhead. The ndarray crate offers an n-dimensional container for general elements and numerical computations, supporting features like lightweight views, slicing, and broadcasting to facilitate efficient data operations similar to those in scientific Python environments. Released initially in December 2015 and licensed under Apache-2.0/MIT, ndarray includes the Axis type for specifying dimensions in operations, such as summing along a particular axis with .sum_axis(Axis(0)) to compute column-wise totals. Nalgebra serves as a linear algebra library tailored for geometry and physics simulations, supporting entity-component systems through its algebraic structures for vectors, matrices, and transformations. First released in November 2014 under the Apache-2.0 license, it provides types like Matrix3 for rotations and other 3D linear transformations, enabling operations such as constructing a rotation matrix from an angle about the z-axis. Linfa constitutes a machine learning toolkit built on numerical primitives, offering algorithms for tasks including clustering (e.g., K-means) and regression (e.g., linear models). Launched in 2020 with an Apache-2.0/MIT license, it emphasizes a modular design for integration with crates like ndarray for data preprocessing. Many Rust numerical libraries, including extensions to ndarray, utilize foreign function interfaces (FFI) to bind with established C libraries like BLAS for optimized linear algebra routines.
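The axis-wise reduction mentioned above can be illustrated without the crate itself; this std-only sketch (the helper name is invented for the example) mirrors what ndarray's .sum_axis(Axis(0)) computes on a 2-D array:

```rust
// Std-only sketch of an axis-0 sum over a row-major 2-D array,
// mirroring the semantics of ndarray's .sum_axis(Axis(0)):
// summing over rows yields one total per column.
fn sum_axis0(data: &[Vec<f64>]) -> Vec<f64> {
    let cols = data.first().map_or(0, |row| row.len());
    let mut totals = vec![0.0; cols];
    for row in data {
        for (total, value) in totals.iter_mut().zip(row) {
            *total += *value;
        }
    }
    totals
}

fn main() {
    let matrix = vec![
        vec![1.0, 2.0, 3.0],
        vec![4.0, 5.0, 6.0],
    ];
    // Column-wise totals, analogous to array.sum_axis(Axis(0)) in ndarray
    println!("{:?}", sum_axis0(&matrix)); // [5.0, 7.0, 9.0]
}
```

With the real crate, the same reduction over an Array2<f64> collapses the chosen dimension and returns a one-dimensional array, with the Axis argument selecting which dimension is summed away.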

OCaml Libraries

OCaml, a functional language with imperative capabilities, supports numerical computing through libraries that emphasize type safety, immutability, and integration with established numerical backends. These libraries leverage OCaml's strong static type system and garbage collection to enable reliable mathematical computation, distinguishing them from ownership-based systems in languages like Rust. Owl is a comprehensive numerical library for scientific and engineering computing in OCaml, providing support for N-dimensional arrays, dense and sparse matrix operations, linear algebra, statistical functions, optimization algorithms, regressions, and fast Fourier transforms. Developed since 2016 and licensed under the MIT License, Owl originated from a research project at the University of Cambridge and emphasizes a functional style with immutable data structures to ensure correctness and composability in computations. Its core data structure, the Ndarray module, handles both CPU and GPU-accelerated operations, with the Dense.Matrix module providing dense matrix routines. In version 0.5.0, released in 2019, Owl introduced dedicated modules for neural networks, including single- and double-precision support for layers, activation functions, and training algorithms, expanding its applicability to deep learning tasks. Lacaml provides OCaml bindings to the BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra Package) libraries, enabling efficient linear algebra operations such as matrix decompositions, eigenvalue computations, and least-squares solvers. Licensed under the LGPL since its initial development in the early 2000s, Lacaml wraps these Fortran-based routines while maintaining OCaml's type system for safe array handling, and it supports both real and complex number precisions. It is particularly suited for high-performance applications requiring low-level numerical primitives without the overhead of higher-level abstractions.
OCaml's numerical libraries, including Owl, find practical use in finance, as exemplified by Jane Street's extensive adoption of the language for trading systems and infrastructure.

Other Languages

Delphi Libraries

Delphi, an object-oriented dialect of Pascal developed by Borland and now maintained by Embarcadero, is widely used for native Windows desktop applications, where numerical libraries provide essential support for scientific computing and data analysis without relying on the .NET runtime. These libraries are typically implemented in Pascal for seamless integration with Delphi's Visual Component Library (VCL), enabling GUI-based tools for numerical tasks. ALGLIB offers a comprehensive library with a dedicated Delphi version, featuring routines for linear algebra, optimization, interpolation, and data processing, all accessible via Pascal syntax. Released initially in 1999, its Delphi edition employs a dual licensing model, with a free GPL-licensed edition and a commercial edition that lifts restrictions on proprietary distribution. The library is delivered as Pascal units compatible with both Delphi and FreePascal compilers, supporting platforms like Windows (32/64-bit) and Linux. Key capabilities include dense and sparse solvers (e.g., LU and QR factorization), fast Fourier transforms, and statistical functions, making it suitable for engineering simulations and data analysis in desktop applications. A specific example of ALGLIB's functionality in Delphi is the minlm unit, which implements the Levenberg-Marquardt algorithm for nonlinear least squares fitting, allowing users to optimize model parameters by minimizing residuals between observed and predicted data. In Pascal syntax, this is invoked through procedures like MinLMCreate, which initializes the solver, followed by MinLMOptimize to iteratively refine solutions using function evaluations or gradients. This approach supports both unconstrained and bound-constrained problems, with built-in numerical differentiation for cases lacking analytical derivatives, and has been applied in curve-fitting tasks across scientific domains. The DMath library, a community-driven open-source project, provides custom Pascal implementations of numerical methods, equation solving, and basic statistical analysis tailored for Delphi and FreePascal environments.
Registered in 2008, it includes routines for numerical integration of definite integrals and Runge-Kutta solvers for ordinary differential equations, alongside descriptive statistical tools such as mean, variance, and correlation coefficients. No active development has occurred since 2012, after which it was considered complete; a fork called LMath continues maintenance with improved integration with Lazarus. As a lightweight alternative to commercial suites, DMath emphasizes portability and ease of integration into Delphi projects, with demo programs illustrating its use in numerical experiments.

Haskell Libraries

Haskell numerical libraries leverage the language's pure functional paradigm to enable verifiable and composable computations, avoiding side effects and mutable state for enhanced reliability in mathematical operations. These libraries support a range of numerical tasks, from basic vector and matrix manipulations to more advanced linear algebra, while benefiting from Haskell's strong static typing and lazy evaluation strategy. One prominent collection is HNumeric, a library providing pure functional tools for vectors and matrices with syntax inspired by R and MATLAB. Released in the 2010s, HNumeric supports operations such as vector addition, dot products, matrix inversion, determinants, and transpositions, along with modules for statistics and CSV handling via DataFrames. It is licensed under BSD-3-Clause and emphasizes immutability for safer numerical programming. Another key package is Numeric.LinearAlgebra, part of the hmatrix library, which offers a comprehensive interface for linear algebra computations, including matrix decompositions and the solving of linear systems. Since 2008, hmatrix has utilized bindings to the BLAS and LAPACK standards for efficient performance, while maintaining a purely functional Haskell API under a BSD-3-Clause license. This allows seamless integration of high-performance numerical routines without exposing low-level mutable structures. Haskell's lazy evaluation enables efficient handling of infinite series in numerical contexts, such as representing real numbers with dynamic precision via infinite data structures that are evaluated only as needed. Additionally, libraries like hmatrix and related type-safe extensions provide compile-time guarantees for matrix operations, preventing dimension mismatches and ensuring operations align with expected types.
Recent advancements in the Glasgow Haskell Compiler (GHC) version 9.6, released in 2023, include significant latency reductions in the non-moving garbage collector, improving overall runtime performance for numerical applications by minimizing pauses during collection. Packages such as HNumeric and hmatrix are typically managed through tools like Cabal or Stack for dependency resolution and building.
