Matrix multiplication algorithm

Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found in many fields, including scientific computing and pattern recognition, and in seemingly unrelated problems such as counting the paths through a graph. Many different algorithms have been designed for multiplying matrices on different types of hardware, including parallel and distributed systems, where the computational work is spread over multiple processors (perhaps over a network).

Directly applying the mathematical definition of matrix multiplication gives an algorithm that takes time on the order of n^3 field operations to multiply two n × n matrices over that field (Θ(n^3) in big O notation). Better asymptotic bounds on the time required to multiply matrices have been known since Strassen's algorithm in the 1960s, but the optimal time (that is, the computational complexity of matrix multiplication) remains unknown. As of September 2025, the best bound on the asymptotic complexity of a matrix multiplication algorithm is O(n^2.371339) time, given by Alman, Duan, Williams, Xu, Xu, and Zhou. However, this algorithm is a galactic algorithm because of the large constant factors involved and cannot be realized practically.

The definition of matrix multiplication is that if C = AB for an n × m matrix A and an m × p matrix B, then C is an n × p matrix with entries

c_ij = Σ_{k=1}^{m} a_ik b_kj,  for i = 1, …, n and j = 1, …, p.

From this, a simple algorithm can be constructed that loops over the indices i from 1 through n and j from 1 through p, computing each entry c_ij with a nested loop over k.
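
A minimal C sketch of this triple loop, assuming flat row-major storage and double-precision entries (the function name matmul_naive is illustrative, not a canonical implementation):

    #include <stddef.h>

    /* Naive triple loop: C = A * B, where A is n x m, B is m x p, and all
     * matrices are stored flat in row-major order (illustrative assumptions). */
    void matmul_naive(const double *A, const double *B, double *C,
                      size_t n, size_t m, size_t p)
    {
        for (size_t i = 0; i < n; i++) {
            for (size_t j = 0; j < p; j++) {
                double sum = 0.0;
                for (size_t k = 0; k < m; k++)
                    sum += A[i * m + k] * B[k * p + j];
                C[i * p + j] = sum;
            }
        }
    }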

This algorithm takes time Θ(nmp) (in asymptotic notation). A common simplification for the purpose of algorithm analysis is to assume that the inputs are all square matrices of size n × n, in which case the running time is Θ(n^3), i.e., cubic in n.

The three loops in iterative matrix multiplication can be arbitrarily swapped with each other without an effect on correctness or asymptotic running time. However, the order can have a considerable impact on practical performance due to the memory access patterns and cache use of the algorithm; which order is best also depends on whether the matrices are stored in row-major order, column-major order, or a mix of both.
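
For example, with both matrices in row-major order, moving the k loop above the j loop makes the innermost loop sweep one row of B and one row of C sequentially. A sketch of this i-k-j ordering, under the same row-major assumptions as above (the name matmul_ikj is illustrative, and the caller must zero-initialize C):

    #include <stddef.h>

    /* Same computation with the loops in i-k-j order: the inner j loop walks
     * a row of B and a row of C sequentially, which is typically far
     * friendlier to the cache than the i-j-k order when both matrices are
     * row-major. The caller must zero-initialize C before the call. */
    void matmul_ikj(const double *A, const double *B, double *C,
                    size_t n, size_t m, size_t p)
    {
        for (size_t i = 0; i < n; i++) {
            for (size_t k = 0; k < m; k++) {
                const double a = A[i * m + k];
                for (size_t j = 0; j < p; j++)
                    C[i * p + j] += a * B[k * p + j];
            }
        }
    }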

In particular, in the idealized case of a fully associative cache consisting of M bytes and b bytes per cache line (i.e. M/b cache lines), the above algorithm is sub-optimal for A and B stored in row-major order. When n > M/b, every iteration of the inner loop (a simultaneous sweep through a row of A and a column of B) incurs a cache miss when accessing an element of B. This means that the algorithm incurs Θ(n^3) cache misses in the worst case. As of 2010, the speed of memories compared to that of processors is such that the cache misses, rather than the actual calculations, dominate the running time for sizable matrices.

The optimal variant of the iterative algorithm for A and B in row-major layout is a tiled version, where the matrices are implicitly divided into square tiles of size √M by √M, chosen so that the working set of one tile computation fits in the cache.
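
A C sketch of this tiling (blocking) scheme follows; the tile width T is a tuning parameter on the order of √(M / element size), and the value 64 below is an illustrative guess rather than a derived constant. As above, storage is row-major and the caller zero-initializes C:

    #include <stddef.h>

    #define T 64  /* illustrative tile width; tune to the cache size */

    static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

    /* Blocked multiplication: accumulate C tile by tile so that one tile
     * each of A, B, and C can stay resident in the cache together.
     * Handles dimensions that are not multiples of T; the caller must
     * zero-initialize C before the call. */
    void matmul_tiled(const double *A, const double *B, double *C,
                      size_t n, size_t m, size_t p)
    {
        for (size_t ii = 0; ii < n; ii += T)
            for (size_t kk = 0; kk < m; kk += T)
                for (size_t jj = 0; jj < p; jj += T)
                    /* Multiply the (ii,kk) tile of A by the (kk,jj) tile
                     * of B and accumulate into the (ii,jj) tile of C. */
                    for (size_t i = ii; i < min_sz(ii + T, n); i++)
                        for (size_t k = kk; k < min_sz(kk + T, m); k++) {
                            const double a = A[i * m + k];
                            for (size_t j = jj; j < min_sz(jj + T, p); j++)
                                C[i * p + j] += a * B[k * p + j];
                        }
    }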
