Birkhoff factorization
From Wikipedia, the free encyclopedia
In mathematics, Birkhoff factorization or Birkhoff decomposition, introduced by George David Birkhoff (1909), is a generalization of the LU decomposition (i.e. Gauss elimination) to loop groups.
The factorization of an invertible $n \times n$ matrix $M$ with coefficients that are Laurent polynomials in $z$ is given by a product $M = M^- M^0 M^+$, where $M^-$ has entries that are polynomials in $z^{-1}$, $M^0$ is diagonal with $(M^0)_{ii} = z^{k_i}$ for integers $k_1, \dots, k_n$, and $M^+$ has entries that are polynomials in $z$. For a generic matrix $M$ we have $M^0 = I$.
Birkhoff factorization implies the Birkhoff–Grothendieck theorem of Grothendieck (1957) that vector bundles over the projective line are sums of line bundles.
There are several variations where the general linear group is replaced by some other reductive algebraic group, due to Alexander Grothendieck (1957). Birkhoff factorization follows from the Bruhat decomposition for affine Kac–Moody groups (or loop groups), and conversely the Bruhat decomposition for the affine general linear group follows from Birkhoff factorization together with the Bruhat decomposition for the ordinary general linear group.
Algorithm
There is an effective algorithm to compute the Birkhoff factorization. The following is based on the book by Clancey and Gohberg,[1] where a more general case can also be found.
Note that, by the cofactor matrix formula, a matrix being invertible is equivalent to the determinant being a unit in the base ring. In our case this means that $\det M = c\,z^m$ for some $c \in \mathbb{C}^{\times}$ and $m \in \mathbb{Z}$, as these are the only invertible elements in the ring of Laurent polynomials $\mathbb{C}[z, z^{-1}]$, and $\det M^+$ and $\det M^-$ are just nonzero constants in $\mathbb{C}$, because these are the only units in $\mathbb{C}[z]$ or $\mathbb{C}[z^{-1}]$. This means that $\det M^0 = z^{k_1 + \cdots + k_n}$ and, in particular, $k_1 + \cdots + k_n = m$. This will help us determine when the algorithm is finished.
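This determinant bookkeeping is easy to check in a computer algebra system. The following sympy snippet computes the determinant of a small invertible Laurent-polynomial matrix (the matrix is chosen for illustration, not taken from any source):

```python
import sympy as sp

z = sp.symbols('z')

# The determinant of an invertible Laurent-polynomial matrix is a unit c*z**m.
M = sp.Matrix([[1/z, 0], [1, z]])
print(sp.simplify(M.det()))   # 1, i.e. c = 1 and m = 0, so the k_i must sum to 0
```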
First step: Replace $M$ by $z^s M$ for $s$ large enough to cancel any denominators, i.e. so that $M$ is defined over $\mathbb{C}[z]$. Let $m$ be the exponent of $z$ in $\det M$; note that this is now nonnegative.
Second step: Permute the rows and factor out the highest possible power of $z$ in each row, while staying over $\mathbb{C}[z]$. The permutation has to ensure that the highest powers of $z$ are decreasing. Denote by $P$ the permutation matrix and by $\Delta = \operatorname{diag}(z^{a_1}, \dots, z^{a_n})$, with $a_1 \geq \cdots \geq a_n$, the diagonal matrix of the powers, so that $PM = \Delta \widetilde{M}$ with $\widetilde{M}$ defined over $\mathbb{C}[z]$.
Third step: If the sum $a_1 + \cdots + a_n$ of the powers from step 2 equals $m$, we are done. Otherwise, perform row operations on $\widetilde{M}$ without pivoting such that at least one row becomes zero modulo $z$. Put the factored powers back into our matrix and return to step 2.
By disallowing pivoting, we are asking that the matrix $L$ encoding the row operations is lower triangular.
The matrix to be returned to step 2 is: $\Delta L \widetilde{M} = (\Delta L \Delta^{-1})\, P M$.
Note that as long as the determinant of the matrix $\widetilde{M}$ is not constant, the determinant is zero modulo $z$, hence the rows are linearly dependent modulo $z$. Therefore, this step can be carried out.
Conclusion: Once the $\Delta$ from step 2 contains high enough powers, i.e. $a_1 + \cdots + a_n = m$, we can set $M^+ = \widetilde{M}$, as this will have unit determinant by multiplicativity. In each iteration, the effect of our algorithm was multiplication by $(\Delta L \Delta^{-1})P$. Since the powers in $\Delta$ are descending and $L$ is lower triangular, we find that $\Delta L \Delta^{-1}$ contains no positive powers of $z$, i.e. it is defined over $\mathbb{C}[z^{-1}]$. Furthermore, by multiplicativity of the determinant again, we find that $\det(\Delta L \Delta^{-1}) = \det L$ is a nonzero constant. Thus we may take the product of these matrices $(\Delta L \Delta^{-1})P$ obtained from all the iterations and set $M^-$ to be its inverse.
Finally, recalling step 1, we have now decomposed $z^s M = M^- \Delta M^+$. Dividing through with $z^s$ and setting $M^0 = z^{-s}\Delta$ gives the result $M = M^- M^0 M^+$.
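The steps above can be sketched in code. The following Python snippet, using sympy, is an illustrative implementation of the algorithm for matrices of Laurent polynomials over $\mathbb{C}$; the helper names `valuation` and `birkhoff_factor` are ours, not from any library, and this is a sketch following the three steps literally rather than an optimized implementation:

```python
import sympy as sp

z = sp.symbols('z')

def valuation(expr):
    """Lowest exponent of z occurring in a nonzero Laurent polynomial."""
    terms = sp.expand(expr).as_ordered_terms()
    return min(t.as_powers_dict().get(z, 0) for t in terms)

def birkhoff_factor(M):
    """Return (Mminus, M0, Mplus) with M = Mminus * M0 * Mplus."""
    n = M.rows
    # First step: multiply by z**s to clear denominators, so A is over C[z].
    s = max(0, -min(valuation(e) for e in M if e != 0))
    A = (z**s * M).applyfunc(sp.expand)
    m = valuation(A.det())            # target: the row powers must sum to m
    C = sp.eye(n)                     # accumulated left factor, over C[1/z]
    while True:
        # Second step: factor the highest power of z out of each row and
        # permute so the powers are decreasing: P*A = Delta*Mt.
        pows = [min(valuation(e) for e in A.row(i) if e != 0)
                for i in range(n)]
        order = sorted(range(n), key=lambda i: -pows[i])
        P = sp.zeros(n)
        for i, j in enumerate(order):
            P[i, j] = 1
        a = [pows[j] for j in order]
        Delta = sp.diag(*[z**ai for ai in a])
        Mt = (Delta.inv() * P * A).applyfunc(sp.expand)
        if sum(a) == m:
            # Conclusion: Mt has unit determinant; fold P into the C[1/z] part.
            return C * P.T, z**(-s) * Delta, Mt
        # Third step: the rows of Mt are dependent mod z; eliminate the first
        # dependent row using earlier rows only, so L is lower triangular.
        Mt0 = Mt.subs(z, 0)
        for j in range(1, n):
            top = Mt0[:j + 1, :]
            if top.rank() <= j:       # guaranteed to happen: det(Mt) = 0 mod z
                v = top.T.nullspace()[0]
                v = v / v[j]          # rows 0..j-1 independent, so v[j] != 0
                break
        L = sp.eye(n)
        for i in range(j):
            L[j, i] = v[i]
        # Put the factored powers back and record the C[1/z]-factor we used.
        A = (Delta * L * Mt).applyfunc(sp.expand)
        C = (C * P.T * Delta * L.inv() * Delta.inv()).applyfunc(sp.expand)

# Try it on a small matrix (chosen for illustration):
Mm, M0, Mp = birkhoff_factor(sp.Matrix([[1/z, 0], [1, z]]))
assert sp.simplify(Mm * M0 * Mp - sp.Matrix([[1/z, 0], [1, z]])) == sp.zeros(2, 2)
```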
Example: Consider $M = \begin{pmatrix} z^{-1} & 0 \\ 1 & z \end{pmatrix}$. The determinant is 1. The first step is done by replacing $M$ by $zM = \begin{pmatrix} 1 & 0 \\ z & z^2 \end{pmatrix}$, which has determinant $z^2$ and so $m = 2$.
The second step is $P(zM) = \begin{pmatrix} z & z^2 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} z & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & z \\ 1 & 0 \end{pmatrix} = \Delta\widetilde{M}$, with $P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. The third step gives $L\widetilde{M} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & z \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & z \\ 0 & -z \end{pmatrix}$, whose second row is zero modulo $z$.
Returning the factored-out powers, we want to repeat step 2 on the matrix $\Delta L \widetilde{M} = \begin{pmatrix} z & z^2 \\ 0 & -z \end{pmatrix}$. Here, we can factor as $\begin{pmatrix} z & z^2 \\ 0 & -z \end{pmatrix} = \begin{pmatrix} z & 0 \\ 0 & z \end{pmatrix}\begin{pmatrix} 1 & z \\ 0 & -1 \end{pmatrix}$, meeting our goal of $a_1 + a_2 = 2 = m$. Compiling all these operations: $(\Delta L \Delta^{-1})P(zM) = \begin{pmatrix} 1 & 0 \\ -z^{-1} & 1 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}(zM) = \begin{pmatrix} z & 0 \\ 0 & z \end{pmatrix}\begin{pmatrix} 1 & z \\ 0 & -1 \end{pmatrix}$, so that $M^- = \left((\Delta L \Delta^{-1})P\right)^{-1} = \begin{pmatrix} z^{-1} & 1 \\ 1 & 0 \end{pmatrix}$.
Therefore, dividing by $z$, $M = M^- M^0 M^+ = \begin{pmatrix} z^{-1} & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & z \\ 0 & -1 \end{pmatrix}$, with $M^0 = I$ as expected for a generic matrix.
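A factorization of this shape can be double-checked mechanically. The snippet below (Python with sympy) verifies one concrete instance of such a 2×2 Birkhoff factorization; the matrices are chosen for illustration and multiply back to the original matrix:

```python
import sympy as sp

z = sp.symbols('z')

# One concrete instance of a Birkhoff factorization M = Mminus * M0 * Mplus:
M = sp.Matrix([[1/z, 0], [1, z]])           # det M = 1, a unit
Mminus = sp.Matrix([[1/z, 1], [1, 0]])      # entries are polynomials in 1/z
M0 = sp.eye(2)                              # trivial diagonal part: generic case
Mplus = sp.Matrix([[1, z], [0, -1]])        # entries are polynomials in z

assert sp.simplify(Mminus * M0 * Mplus - M) == sp.zeros(2, 2)
```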
Notes
1. Clancey, Kevin F.; Gohberg, Israel (1981). Factorization of Matrix Functions and Singular Integral Operators. Operator Theory: Advances and Applications. Vol. 3. Basel: Birkhäuser.
References
Birkhoff, George David (1909). "Singular points of ordinary linear differential equations". Transactions of the American Mathematical Society. 10 (4): 436–470.
Grothendieck, Alexander (1957). "Sur la classification des fibrés holomorphes sur la sphère de Riemann". American Journal of Mathematics. 79 (1): 121–138.