Matrix-free methods
From Wikipedia, the free encyclopedia
In computational mathematics, a matrix-free method is an algorithm for solving a linear system of equations or an eigenvalue problem that does not store the coefficient matrix explicitly, but accesses the matrix by evaluating matrix-vector products.[1] Such methods can be preferable when the matrix is sufficiently large that storing and manipulating it would be prohibitively expensive with respect to memory or computation time, even with the use of methods for sparse matrices. Many iterative methods allow for a matrix-free implementation, including:
- the power method,
- the Lanczos algorithm,[2]
- Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG),[3]
- Wiedemann's coordinate recurrence algorithm,[4]
- the conjugate gradient method,[5]
- Krylov subspace methods.
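The idea common to these methods can be illustrated with a minimal sketch of a matrix-free conjugate gradient solver: the solver receives only a callback that computes the matrix–vector product, and the coefficient matrix is never stored. The example operator (a 1-D discrete Laplacian applied on the fly) is an illustrative choice, not taken from the article.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A, given only a
    callback matvec(v) returning A @ v; A itself is never formed."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Example operator: 1-D discrete Laplacian (tridiagonal, SPD),
# applied on the fly so no matrix is ever stored.
def laplacian_matvec(v):
    out = 2.0 * v
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

b = np.ones(100)
x = conjugate_gradient(laplacian_matvec, b)
```

For an n × n problem this needs O(n) memory for a handful of vectors, rather than O(n²) for the matrix itself.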
Distributed solutions have also been explored using coarse-grain parallel software systems to achieve homogeneous solutions of linear systems.[6]
Matrix-free methods are commonly used to solve non-linear systems such as the Euler equations in computational fluid dynamics, and a matrix-free conjugate gradient method has been applied in a non-linear elasto-plastic finite element solver.[7] Solving such equations with Newton-type methods requires the Jacobian, which is costly to compute and store. To avoid this expense, matrix-free methods form the Jacobian–vector product directly, which is itself a vector. Computing and manipulating this vector is far cheaper than working with the full Jacobian matrix or the corresponding linear system.
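One standard way to form the Jacobian–vector product without ever assembling the Jacobian is a finite-difference approximation, J(u)v ≈ (F(u + εv) − F(u))/ε, as used in Jacobian-free Newton–Krylov methods. A minimal sketch (the residual function F here is a hypothetical example, not from the article):

```python
import numpy as np

def jacobian_vector_product(F, u, v, eps=1e-7):
    """Approximate J(u) @ v by a forward finite difference of the
    residual F, so the Jacobian matrix is never formed or stored."""
    return (F(u + eps * v) - F(u)) / eps

# Hypothetical nonlinear residual for illustration; its exact
# Jacobian is diag(2u), which lets us check the approximation.
def F(u):
    return u**2 - np.array([1.0, 4.0, 9.0])

u = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 1.0, 1.0])
Jv = jacobian_vector_product(F, u, v)   # exact value is 2 * u
```

Each such product costs one extra residual evaluation, which is typically far cheaper than computing and storing the full Jacobian.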
References