Matrix sign function
Generalization of the signum function to matrices
In mathematics, the matrix sign function is a matrix function on square matrices analogous to the complex sign function.[1]
It was introduced by J.D. Roberts in 1971, in a Cambridge University technical report, as a tool for model reduction and for solving Lyapunov and algebraic Riccati equations; the report was later published in a journal in 1980.[2][3]
Definition
The matrix sign function is a generalization of the complex signum function

$$\operatorname{csgn}(z) = \begin{cases} 1 & \text{if } \operatorname{Re}(z) > 0, \\ -1 & \text{if } \operatorname{Re}(z) < 0, \end{cases}$$

to the matrix-valued analogue $\operatorname{sgn}(A)$. Although the sign function is not analytic, the matrix function is well defined for all matrices that have no eigenvalue on the imaginary axis; see, for example, the Jordan-form-based definition (where the derivatives are all zero).
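For a diagonalizable matrix, the Jordan-form-based definition reduces to applying the scalar sign to each eigenvalue in an eigendecomposition. A minimal Python sketch under that assumption (the function name and example matrix are illustrative, not from the source):

```python
# Minimal sketch: matrix sign via eigendecomposition, assuming A is
# diagonalizable with no eigenvalue on the imaginary axis.
import numpy as np

def sign_via_eig(A):
    lam, V = np.linalg.eig(A)
    s = np.sign(lam.real)                 # scalar sign of each eigenvalue
    return (V * s) @ np.linalg.inv(V)     # V diag(s) V^{-1}

A = np.array([[3.0, 1.0], [0.0, -2.0]])
print(np.round(sign_via_eig(A), 10))      # eigenvalues of the result are +1 and -1
```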
Properties
Theorem: Let $A \in \mathbb{C}^{n \times n}$, then $\operatorname{sgn}(A)^2 = I$.[1]
Theorem: Let $A \in \mathbb{C}^{n \times n}$, then $\operatorname{sgn}(A)$ is diagonalizable and has eigenvalues that are $\pm 1$.[1]
Theorem: Let $A \in \mathbb{C}^{n \times n}$, then $(I + \operatorname{sgn}(A))/2$ is a projector onto the invariant subspace associated with the eigenvalues in the right-half plane, and analogously for $(I - \operatorname{sgn}(A))/2$ and the left-half plane.[1]
Theorem: Let $A \in \mathbb{C}^{n \times n}$, and let $A = P \begin{bmatrix} J_{+} & 0 \\ 0 & J_{-} \end{bmatrix} P^{-1}$ be a Jordan decomposition such that $J_{+}$ corresponds to eigenvalues with positive real part and $J_{-}$ to eigenvalues with negative real part. Then $\operatorname{sgn}(A) = P \begin{bmatrix} I_{+} & 0 \\ 0 & -I_{-} \end{bmatrix} P^{-1}$, where $I_{+}$ and $I_{-}$ are identity matrices of sizes corresponding to $J_{+}$ and $J_{-}$, respectively.[1]
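These properties can be checked numerically. A brief sketch using SciPy's generic routine scipy.linalg.signm (the example matrix is illustrative, not from the source):

```python
# Numerical check of sgn(A)^2 = I, the +/-1 eigenvalues, and the projector
# property, on an arbitrary matrix with no imaginary-axis eigenvalues.
import numpy as np
from scipy.linalg import signm

A = np.array([[ 2.0,  1.0,  0.0],
              [ 0.0, -3.0,  1.0],
              [ 0.0,  0.0,  1.0]])
S = signm(A)

print(np.allclose(S @ S, np.eye(3)))           # sgn(A)^2 = I
print(np.sort(np.linalg.eigvals(S).real))      # eigenvalues are -1 and +1
P_plus = (np.eye(3) + S) / 2                   # projector onto right-half-plane subspace
print(np.allclose(P_plus @ P_plus, P_plus))
```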
Computational methods
The function can be computed with generic methods for matrix functions, but there are also specialized methods.
Newton iteration
The Newton iteration can be derived by observing that $\operatorname{sgn}(x) = x/\sqrt{x^2}$, which in terms of matrices can be written as $\operatorname{sgn}(A) = A\left(A^2\right)^{-1/2}$, where we use the matrix square root. If we apply the Babylonian method to compute the square root of the matrix $A^2$, that is, the iteration $X_{k+1} = \tfrac{1}{2}\left(X_k + A^2 X_k^{-1}\right)$, and define the new iterate $Z_k = A^{-1} X_k$, we arrive at the iteration

$$Z_{k+1} = \tfrac{1}{2}\left(Z_k + Z_k^{-1}\right),$$

where typically $Z_0 = A$. Convergence is global, and locally it is quadratic.[1][2]
The Newton iteration uses the explicit inverse of the iterates $Z_k$.
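A minimal Python sketch of this iteration (the function name, tolerance, and example matrix are illustrative assumptions, not from the source):

```python
# Newton iteration for the matrix sign function: Z_{k+1} = (Z_k + Z_k^{-1}) / 2,
# started at Z_0 = A, stopped on a relative-change tolerance.
import numpy as np

def sign_newton(A, tol=1e-12, max_iter=100):
    Z = A.astype(float)
    for _ in range(max_iter):
        Z_new = 0.5 * (Z + np.linalg.inv(Z))
        if np.linalg.norm(Z_new - Z, 1) <= tol * np.linalg.norm(Z_new, 1):
            return Z_new
        Z = Z_new
    return Z

A = np.array([[4.0, 1.0], [2.0, -5.0]])
S = sign_newton(A)
print(np.allclose(S @ S, np.eye(2)))      # sgn(A)^2 = I
```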
Newton–Schulz iteration
To avoid the need for the explicit inverse used in the Newton iteration, the inverse can be approximated with one step of the Newton iteration for the inverse, $Z_k^{-1} \approx Z_k\left(2I - Z_k^2\right)$, derived by Schulz in 1933.[4] Substituting this approximation into the previous method, the new method becomes

$$Z_{k+1} = \tfrac{1}{2} Z_k \left(3I - Z_k^2\right).$$

Convergence is (still) quadratic, but only local (guaranteed for $\|I - A^2\| < 1$).[1]
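An inverse-free Python sketch of this iteration (the example matrix is chosen, as an assumption, so that $\|I - A^2\| < 1$ and the local convergence condition holds):

```python
# Newton-Schulz iteration: Z_{k+1} = Z_k (3 I - Z_k^2) / 2 with Z_0 = A.
# Only matrix multiplications are used; convergence is local.
import numpy as np

def sign_newton_schulz(A, tol=1e-12, max_iter=100):
    Z = A.astype(float)
    I = np.eye(A.shape[0])
    for _ in range(max_iter):
        Z_new = 0.5 * Z @ (3 * I - Z @ Z)
        if np.linalg.norm(Z_new - Z, 1) <= tol * np.linalg.norm(Z_new, 1):
            return Z_new
        Z = Z_new
    return Z

A = np.array([[1.1, 0.1], [0.0, -0.9]])    # ||I - A^2|| < 1 for this example
print(np.round(sign_newton_schulz(A), 10))
```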
Applications
Solutions of Sylvester equations
Theorem:[2][3] Let $A, B, C \in \mathbb{R}^{n \times n}$ and assume that $A$ and $B$ are stable, then the unique solution to the Sylvester equation, $AX + XB = C$, is given by $X$ such that

$$\operatorname{sgn}\left(\begin{bmatrix} A & -C \\ 0 & -B \end{bmatrix}\right) = \begin{bmatrix} -I & 2X \\ 0 & I \end{bmatrix}.$$
Proof sketch: The result follows from the similarity transform

$$\begin{bmatrix} A & -C \\ 0 & -B \end{bmatrix} = \begin{bmatrix} I & X \\ 0 & I \end{bmatrix} \begin{bmatrix} A & 0 \\ 0 & -B \end{bmatrix} \begin{bmatrix} I & X \\ 0 & I \end{bmatrix}^{-1},$$

since

$$\operatorname{sgn}\left(\begin{bmatrix} A & -C \\ 0 & -B \end{bmatrix}\right) = \begin{bmatrix} I & X \\ 0 & I \end{bmatrix} \begin{bmatrix} -I & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} I & -X \\ 0 & I \end{bmatrix} = \begin{bmatrix} -I & 2X \\ 0 & I \end{bmatrix},$$

due to the stability of $A$ and $B$.
The theorem is, naturally, also applicable to the Lyapunov equation. However, due to the block-triangular structure, the Newton iteration simplifies to only involving inverses of $A$ and $A^{T}$.
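A minimal Python sketch of the theorem, reading $X$ off the upper-right block of the sign of the block matrix (scipy.linalg.signm and the example matrices are illustrative choices, not from the source):

```python
# Solve AX + XB = C for stable A and B via sgn([[A, -C], [0, -B]]) = [[-I, 2X], [0, I]].
import numpy as np
from scipy.linalg import signm

def solve_sylvester_sign(A, B, C):
    n, m = A.shape[0], B.shape[0]
    M = np.block([[A, -C], [np.zeros((m, n)), -B]])
    S = signm(M)
    return S[:n, n:] / 2                       # X is half the (1,2) block

A = np.array([[-2.0, 1.0], [0.0, -3.0]])       # stable
B = np.array([[-1.0, 0.0], [1.0, -4.0]])       # stable
C = np.array([[1.0, 2.0], [3.0, 4.0]])
X = solve_sylvester_sign(A, B, C)
print(np.allclose(A @ X + X @ B, C))           # residual check
```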
Solutions of algebraic Riccati equations
There is a similar result applicable to the algebraic Riccati equation, $A^{H}P + PA - PFP + Q = 0$.[1][2] Define $H \in \mathbb{C}^{2n \times 2n}$ as

$$H = \begin{bmatrix} A & -F \\ -Q & -A^{H} \end{bmatrix}.$$

Under the assumption that $F, Q \in \mathbb{C}^{n \times n}$ are Hermitian and there exists a unique stabilizing solution, in the sense that $A - FP$ is stable, that solution is given by the over-determined, but consistent, linear system

$$\left(\operatorname{sgn}(H) + I_{2n}\right) \begin{bmatrix} I \\ P \end{bmatrix} = 0.$$
Proof sketch: The similarity transform

$$\begin{bmatrix} A & -F \\ -Q & -A^{H} \end{bmatrix} = \begin{bmatrix} I & 0 \\ P & I \end{bmatrix} \begin{bmatrix} A - FP & -F \\ 0 & -(A - FP)^{H} \end{bmatrix} \begin{bmatrix} I & 0 \\ P & I \end{bmatrix}^{-1}$$

and the stability of $A - FP$ imply that

$$\left(\operatorname{sgn}(H) + I_{2n}\right) \begin{bmatrix} I & 0 \\ P & I \end{bmatrix} = \begin{bmatrix} I & 0 \\ P & I \end{bmatrix} \begin{bmatrix} 0 & Y \\ 0 & 2I \end{bmatrix}$$

for some matrix $Y$.
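A minimal Python sketch following this block form (scipy.linalg.signm, the least-squares solve of the over-determined system, and the example data are assumptions made for illustration):

```python
# Stabilizing solution of A^H P + P A - P F P + Q = 0 via the matrix sign of
# H = [[A, -F], [-Q, -A^H]] and the consistent system (sgn(H) + I) [[I], [P]] = 0.
import numpy as np
from scipy.linalg import signm

def solve_riccati_sign(A, F, Q):
    n = A.shape[0]
    H = np.block([[A, -F], [-Q, -A.conj().T]])
    W = signm(H) + np.eye(2 * n)
    # W[:, :n] + W[:, n:] @ P = 0  ->  solve the over-determined, consistent system
    P, *_ = np.linalg.lstsq(W[:, n:], -W[:, :n], rcond=None)
    return P

A = np.array([[0.0, 1.0], [0.0, 0.0]])     # double integrator
F = np.array([[0.0, 0.0], [0.0, 1.0]])     # Hermitian
Q = np.eye(2)                              # Hermitian
P = solve_riccati_sign(A, F, Q)
residual = A.conj().T @ P + P @ A - P @ F @ P + Q
print(np.allclose(residual, 0, atol=1e-6))
```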
Computation of the matrix square root
The Denman–Beavers iteration for the square root of a matrix $A$ can be derived from the Newton iteration for the matrix sign function by noticing that $A - P^2 = 0$ is a degenerate algebraic Riccati equation,[3] and by definition a solution $P$ is the square root of $A$.
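A minimal Python sketch of the resulting coupled iteration, the Denman–Beavers iteration $Y_{k+1} = \tfrac{1}{2}\left(Y_k + Z_k^{-1}\right)$, $Z_{k+1} = \tfrac{1}{2}\left(Z_k + Y_k^{-1}\right)$ with $Y_0 = A$, $Z_0 = I$, for which $Y_k \to A^{1/2}$ and $Z_k \to A^{-1/2}$ (tolerance and example matrix are illustrative):

```python
# Denman-Beavers iteration for the principal matrix square root.
import numpy as np

def sqrtm_denman_beavers(A, tol=1e-12, max_iter=100):
    Y = A.astype(float)
    Z = np.eye(A.shape[0])
    for _ in range(max_iter):
        Y_new = 0.5 * (Y + np.linalg.inv(Z))
        Z_new = 0.5 * (Z + np.linalg.inv(Y))
        if np.linalg.norm(Y_new - Y, 1) <= tol * np.linalg.norm(Y_new, 1):
            return Y_new
        Y, Z = Y_new, Z_new
    return Y

A = np.array([[4.0, 1.0], [0.0, 9.0]])
R = sqrtm_denman_beavers(A)
print(np.allclose(R @ R, A))       # R is the principal square root of A
```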
References