Random matrix
Matrix-valued random variable
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all elements are random variables. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice.
Engineering
Random matrix theory can be applied in electrical and communications engineering research to study, model, and develop massive multiple-input multiple-output (MIMO) radio systems.
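As a minimal illustration (not from the article), the ergodic capacity of an i.i.d. Rayleigh-fading MIMO channel can be estimated by averaging log-determinants over random complex Gaussian channel matrices; the antenna counts and SNR below are arbitrary assumptions:

```python
# Monte Carlo estimate of the ergodic capacity of an n_r x n_t MIMO link with
# i.i.d. Rayleigh fading: C = E[log2 det(I + (snr/n_t) H H^H)]. The channel
# model, SNR, and antenna counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_r, n_t, snr, trials = 4, 4, 10.0, 2000

caps = []
for _ in range(trials):
    # H has i.i.d. CN(0, 1) entries (Rayleigh-fading channel gains).
    H = (rng.standard_normal((n_r, n_t))
         + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
    _, logdet = np.linalg.slogdet(np.eye(n_r) + (snr / n_t) * H @ H.conj().T)
    caps.append(logdet / np.log(2))

print(f"ergodic capacity ≈ {np.mean(caps):.2f} bit/s/Hz")
```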
Physics
In nuclear physics, random matrices were introduced by Eugene Wigner to model the nuclei of heavy atoms.[1] Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution.[2] In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean-field approximation.
In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory.[3]
In quantum optics, transformations described by random unitary matrices are crucial for demonstrating the advantage of quantum over classical computation (see, e.g., the boson sampling model).[4] Moreover, such random unitary transformations can be directly implemented in an optical circuit, by mapping their parameters to optical circuit components (that is, beam splitters and phase shifters).[5]
Random matrix theory has also found applications to the chiral Dirac operator in quantum chromodynamics,[6] quantum gravity in two dimensions,[7] mesoscopic physics,[8] spin-transfer torque,[9] the fractional quantum Hall effect,[10] Anderson localization,[11] quantum dots,[12] and superconductors.[13]
Mathematical statistics and numerical analysis
In multivariate statistics, random matrices were introduced by John Wishart, who sought to estimate covariance matrices of large samples.[14] Chernoff-, Bernstein-, and Hoeffding-type inequalities can typically be strengthened when applied to the maximal eigenvalue (i.e. the eigenvalue of largest magnitude) of a finite sum of random Hermitian matrices.[15] Random matrix theory is used to study the spectral properties of random matrices—such as sample covariance matrices—which is of particular interest in high-dimensional statistics. Random matrix theory has also found applications in neuronal networks[16] and deep learning, with recent work using random matrices to show that hyperparameter tunings can be cheaply transferred between large neural networks without the need for retraining.[17]
In numerical analysis, random matrices have been used since the work of John von Neumann and Herman Goldstine[18] to describe computation errors in operations such as matrix multiplication. Although random entries are traditional "generic" inputs to an algorithm, the concentration of measure associated with random matrix distributions implies that random matrices will not test large portions of an algorithm's input space.[19]
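A small sketch of random matrices as generic test inputs (illustrative sizes; Gaussian entries assumed): it measures the rounding error of single-precision matrix multiplication against a double-precision reference:

```python
# Using random matrices as "generic" test inputs: measure the rounding error
# of float32 matrix multiplication against a float64 reference. The sizes
# and the Gaussian input distribution are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
for n in (100, 400, 800):
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    exact = A @ B                                   # float64 reference
    approx = (A.astype(np.float32) @ B.astype(np.float32)).astype(np.float64)
    rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"n={n:4d}  relative error ≈ {rel_err:.2e}")
```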
Number theory
In number theory, the distribution of zeros of the Riemann zeta function (and other L-functions) is modeled by the distribution of eigenvalues of certain random matrices.[20] The connection was first discovered by Hugh Montgomery and Freeman Dyson. It is connected to the Hilbert–Pólya conjecture.
Free probability
The relation of free probability with random matrices[21] is a key reason for the wide use of free probability in other subjects. Voiculescu introduced the concept of freeness around 1983 in an operator algebraic context; at the beginning there was no relation at all with random matrices. This connection was only revealed later in 1991 by Voiculescu;[22] he was motivated by the fact that the limit distribution which he found in his free central limit theorem had appeared before in Wigner's semi-circle law in the random matrix context.
Computational neuroscience
In the field of computational neuroscience, random matrices are increasingly used to model the network of synaptic connections between neurons in the brain. Dynamical models of neuronal networks with a random connectivity matrix were shown to exhibit a phase transition to chaos[23] when the variance of the synaptic weights crosses a critical value, in the limit of infinite system size. Results on random matrices have also shown that the dynamics of random-matrix models are insensitive to the mean connection strength. Instead, the stability of fluctuations depends on the variation in connection strength,[24][25] and the time to synchrony depends on the network topology.[26][27]
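The stability statement can be illustrated with the circular law: for an N × N connectivity matrix with i.i.d. entries of variance g²/N, the spectral radius approaches g, so a linearization of the dynamics around the origin loses stability as g crosses 1. The sketch below, with illustrative parameters, is a deliberately simplified version of the models cited above:

```python
# Circular-law illustration of the chaos transition in random networks:
# for an N x N connectivity matrix J with i.i.d. entries of variance g^2/N,
# the spectral radius tends to g, so the linearization dx/dt = -x + J x
# loses stability when g crosses 1. Gain values and size are illustrative.
import numpy as np

rng = np.random.default_rng(2)
N = 1000
for g in (0.5, 1.0, 1.5):
    J = rng.standard_normal((N, N)) * g / np.sqrt(N)
    radius = np.abs(np.linalg.eigvals(J)).max()
    print(f"g={g:.1f}  spectral radius ≈ {radius:.3f}  "
          f"({'stable' if radius < 1 else 'unstable'} around x = 0)")
```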
In the analysis of massive data such as fMRI, random matrix theory has been applied to perform dimension reduction. When applying an algorithm such as PCA, it is important to be able to select the number of significant components. The criteria for selecting components can be multiple (based on explained variance, Kaiser's criterion, eigenvalue thresholds, etc.). In this context, random matrix theory contributes the Marchenko–Pastur distribution, which gives the theoretical upper and lower limits of the eigenvalues of a covariance matrix computed from a purely random data matrix. The covariance matrix calculated in this way serves as a null hypothesis, allowing one to find the eigenvalues (and their eigenvectors) that deviate from the theoretical random range. The components retained this way define the reduced dimensional space (see examples in fMRI[28][29]).
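A minimal sketch of this selection procedure (synthetic data; the planted signal column is an illustrative assumption): eigenvalues of the sample covariance exceeding the upper Marchenko–Pastur edge are kept as significant components:

```python
# Marchenko–Pastur edge as a null model for PCA component selection:
# eigenvalues of the sample covariance of pure unit-variance noise fall
# (asymptotically) inside [lam_minus, lam_plus]; empirical eigenvalues
# above lam_plus are retained as signal. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(3)
T, p = 2000, 400                         # samples, variables
q = p / T
lam_plus = (1 + np.sqrt(q)) ** 2          # upper Marchenko–Pastur edge
lam_minus = (1 - np.sqrt(q)) ** 2         # lower edge

X = rng.standard_normal((T, p))           # pure noise
X[:, 0] += 3.0 * rng.standard_normal(T)   # plant one strong "signal" direction
X -= X.mean(axis=0)
C = X.T @ X / T                           # sample covariance
eigs = np.linalg.eigvalsh(C)

print(f"MP bulk: [{lam_minus:.3f}, {lam_plus:.3f}]")
print("eigenvalues above the MP edge:", eigs[eigs > lam_plus])
```

With these parameters the bulk edge is about 2.09, so only the planted direction should exceed it.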
Optimal control
In optimal control theory, the evolution of n state variables through time depends at any time on their own values and on the values of k control variables. With linear evolution, matrices of coefficients appear in the state equation (equation of evolution). In some problems the values of the parameters in these matrices are not known with certainty, in which case there are random matrices in the state equation and the problem is known as one of stochastic control.[30]: ch. 13 [31][32] A key result in the case of linear-quadratic control with stochastic matrices is that the certainty equivalence principle does not apply: while in the absence of multiplier uncertainty (that is, with only additive uncertainty) the optimal policy with a quadratic loss function coincides with what would be decided if the uncertainty were ignored, the optimal policy may differ if the state equation contains random coefficients.
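A one-period scalar example makes the failure of certainty equivalence concrete. Assuming a quadratic loss E[(ax + bu)² + ru²] with a random control coefficient b (all numbers below are illustrative), the optimal control uses E[b²] rather than (E[b])², so it differs from the certainty-equivalent policy:

```python
# One-period stochastic LQ example showing failure of certainty equivalence:
# minimize E[(a*x + b*u)^2 + r*u^2] where the control coefficient b is random
# with mean bbar and variance var_b. Setting the derivative to zero gives
# u* = -a*x*bbar / (bbar^2 + var_b + r), while the certainty-equivalent
# policy plugs in bbar and ignores var_b. Numbers are illustrative.
a, x, r = 1.0, 1.0, 0.1
bbar, var_b = 1.0, 0.5

u_ce = -a * x * bbar / (bbar**2 + r)            # ignores randomness in b
u_opt = -a * x * bbar / (bbar**2 + var_b + r)   # accounts for multiplicative noise

print(f"certainty-equivalent control: {u_ce:.4f}")
print(f"optimal control:              {u_opt:.4f}")
```

The multiplicative noise shrinks the optimal gain, which is exactly the departure from certainty equivalence described above.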
Computational mechanics
In computational mechanics, epistemic uncertainties arising from a lack of knowledge about the physics of the modeled system give rise to mathematical operators associated with the computational model that are deficient in a certain sense. Such operators lack certain properties linked to unmodeled physics. When such operators are discretized to perform computational simulations, their accuracy is limited by the missing physics. To compensate for this deficiency of the mathematical operator, it is not enough to make the model parameters random; it is necessary to consider a mathematical operator that is itself random and can thus generate families of computational models, in the hope that one of these captures the missing physics. Random matrices have been used in this sense,[33][34] with applications in vibroacoustics, wave propagation, materials science, fluid mechanics, heat transfer, etc.
Gaussian ensembles
The most commonly studied random matrix distributions are the Gaussian ensembles: GOE, GUE and GSE. They are often denoted by their Dyson index: β = 1 for GOE, β = 2 for GUE, and β = 4 for GSE. This index counts the number of real components per matrix element.
Definitions
The Gaussian unitary ensemble $\mathrm{GUE}(n)$ is described by the Gaussian measure with density

$$\frac{1}{Z_{\mathrm{GUE}(n)}} e^{-\frac{n}{2} \operatorname{tr} H^2}$$

on the space of $n \times n$ Hermitian matrices $H = (H_{ij})_{i,j=1}^{n}$. Here $Z_{\mathrm{GUE}(n)}$ is a normalization constant, chosen so that the integral of the density is equal to one. The term unitary refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models Hamiltonians lacking time-reversal symmetry.
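A standard way to sample from this ensemble, consistent with the normalization above (a sketch, not part of the article): symmetrize a matrix of i.i.d. standard complex Gaussians and rescale by √(2n):

```python
# Sampling from GUE(n) with density proportional to exp(-n/2 * tr(H^2)):
# H = (A + A^H)/sqrt(2n), with A an n x n matrix of i.i.d. standard complex
# Gaussians, gives off-diagonal entry variance 1/n and diagonal variance 1/n,
# matching the density above.
import numpy as np

def sample_gue(n, rng):
    A = (rng.standard_normal((n, n))
         + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (A + A.conj().T) / np.sqrt(2 * n)

rng = np.random.default_rng(4)
eigs = np.linalg.eigvalsh(sample_gue(500, rng))  # real eigenvalues
print(f"spectrum in [{eigs.min():.2f}, {eigs.max():.2f}]")  # ≈ [-2, 2]
```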
The Gaussian orthogonal ensemble $\mathrm{GOE}(n)$ is described by the Gaussian measure with density

$$\frac{1}{Z_{\mathrm{GOE}(n)}} e^{-\frac{n}{4} \operatorname{tr} H^2}$$

on the space of $n \times n$ real symmetric matrices $H = (H_{ij})_{i,j=1}^{n}$. Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry. Equivalently, it is generated by $H = (G + G^{\mathsf{T}})/\sqrt{2n}$, where $G$ is an $n \times n$ matrix with IID samples from the standard normal distribution.
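The generating formula above translates directly into code (a sketch under the same normalization):

```python
# Sampling from GOE(n) via the construction in the text:
# H = (G + G^T)/sqrt(2n), with G an n x n matrix of i.i.d. standard normals.
import numpy as np

def sample_goe(n, rng):
    G = rng.standard_normal((n, n))
    return (G + G.T) / np.sqrt(2 * n)

rng = np.random.default_rng(5)
eigs = np.linalg.eigvalsh(sample_goe(500, rng))
print(f"spectrum in [{eigs.min():.2f}, {eigs.max():.2f}]")  # ≈ [-2, 2]
```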
The Gaussian symplectic ensemble $\mathrm{GSE}(n)$ is described by the Gaussian measure with density

$$\frac{1}{Z_{\mathrm{GSE}(n)}} e^{-n \operatorname{tr} H^2}$$

on the space of $n \times n$ Hermitian quaternionic matrices, i.e. self-adjoint square matrices with quaternion entries, $H = (H_{ij})_{i,j=1}^{n}$. Its distribution is invariant under conjugation by the symplectic group, and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.
Point correlation functions
The ensembles as defined here have Gaussian distributed matrix elements with mean $\langle H_{ij} \rangle = 0$, and two-point correlations given by

$$\langle H_{ij} H^{*}_{mn} \rangle = \langle H_{ij} H_{nm} \rangle = \frac{1}{n} \delta_{im} \delta_{jn} + \frac{2 - \beta}{\beta n} \delta_{in} \delta_{jm},$$

from which all higher correlations follow by Isserlis' theorem.
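These correlations can be checked by Monte Carlo, here for the GOE (β = 1), where the formula predicts ⟨H₀₁²⟩ = 1/n off the diagonal and ⟨H₀₀²⟩ = 2/n on it (a sketch; the sample count is arbitrary):

```python
# Monte Carlo check of the two-point correlation formula for the GOE (β = 1):
# <H_ij H_mn> = δ_im δ_jn / n + δ_in δ_jm / n.
import numpy as np

rng = np.random.default_rng(6)
n, trials = 8, 100_000

def sample_goe(n, rng):
    G = rng.standard_normal((n, n))
    return (G + G.T) / np.sqrt(2 * n)

acc_offdiag = acc_diag = 0.0
for _ in range(trials):
    H = sample_goe(n, rng)
    acc_offdiag += H[0, 1] ** 2
    acc_diag += H[0, 0] ** 2

print(f"<H_01^2> ≈ {acc_offdiag / trials:.4f}  (formula: {1 / n:.4f})")
print(f"<H_00^2> ≈ {acc_diag / trials:.4f}  (formula: {2 / n:.4f})")
```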
Moment generating functions
The moment generating function for the GOE is

$$\mathbb{E}\left[ e^{\operatorname{tr}(HX)} \right] = e^{\frac{1}{n} \| X_s \|_F^2},$$

where $X_s = \tfrac{1}{2}(X + X^{\mathsf{T}})$ and $\| \cdot \|_F$ is the Frobenius norm.
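A quick Monte Carlo sanity check of this identity, using the GOE sampler from above (a sketch; the small test matrix X keeps the estimator's variance manageable):

```python
# Monte Carlo check of the GOE moment generating function
# E[exp(tr(H X))] = exp(||X_s||_F^2 / n), with X_s = (X + X^T)/2.
import numpy as np

rng = np.random.default_rng(7)
n, trials = 4, 200_000

def sample_goe(n, rng):
    G = rng.standard_normal((n, n))
    return (G + G.T) / np.sqrt(2 * n)

X = rng.standard_normal((n, n)) * 0.3      # small X keeps the estimator stable
Xs = (X + X.T) / 2
mc = np.mean([np.exp(np.trace(sample_goe(n, rng) @ X)) for _ in range(trials)])
exact = np.exp(np.linalg.norm(Xs, "fro") ** 2 / n)
print(f"Monte Carlo: {mc:.4f}   formula: {exact:.4f}")
```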
Spectral density
The joint probability density for the eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$ of GUE/GOE/GSE is given by

$$\frac{1}{Z_{\beta,n}} \prod_{k=1}^{n} e^{-\frac{\beta n}{4} \lambda_k^2} \prod_{i<j} \left| \lambda_j - \lambda_i \right|^{\beta} \qquad (1)$$

where $Z_{\beta,n}$ is a normalization constant which can be explicitly computed, see Selberg integral. In the case of GUE (β = 2), the formula (1) describes a determinantal point process. Eigenvalues repel, as the joint probability density has a zero (of $\beta$th order) for coinciding eigenvalues $\lambda_i = \lambda_j$.
The distributions of the largest eigenvalue for the GOE and GUE are explicitly solvable.[35] They converge to the Tracy–Widom distribution after appropriate shifting and scaling.
Convergence to Wigner semicircular distribution
The spectrum, divided by $\sqrt{n \sigma^2}$, converges in distribution to the semicircular distribution on the interval $[-2, 2]$:

$$\rho(x) = \frac{1}{2\pi} \sqrt{4 - x^2}.$$

Here $\sigma^2$ is the variance of off-diagonal entries.
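An empirical check (with an illustrative matrix size): histogram the eigenvalues of one large GOE sample against ρ(x) = √(4 − x²)/(2π):

```python
# Empirical check of the semicircle law: compare a histogram of GOE
# eigenvalues with ρ(x) = sqrt(4 - x^2) / (2π) on [-2, 2].
import numpy as np

rng = np.random.default_rng(8)
n = 2000
G = rng.standard_normal((n, n))
H = (G + G.T) / np.sqrt(2 * n)            # GOE(n); off-diagonal variance 1/n
eigs = np.linalg.eigvalsh(H)

hist, edges = np.histogram(eigs, bins=20, range=(-2, 2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
rho = np.sqrt(4 - centers**2) / (2 * np.pi)
for c, h, r in zip(centers, hist, rho):
    print(f"x={c:+.1f}  empirical {h:.3f}  semicircle {r:.3f}")
```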
Distribution of level spacings
From the ordered sequence of eigenvalues $\lambda_1 < \cdots < \lambda_n < \lambda_{n+1} < \cdots$, one defines the normalized spacings $s = (\lambda_{n+1} - \lambda_n)/\langle s \rangle$, where $\langle s \rangle = \langle \lambda_{n+1} - \lambda_n \rangle$ is the mean spacing. The probability distribution of spacings is approximately given by

$$p_1(s) = \frac{\pi}{2} s\, e^{-\frac{\pi}{4} s^2}$$

for the orthogonal ensemble GOE ($\beta = 1$),

$$p_2(s) = \frac{32}{\pi^2} s^2 e^{-\frac{4}{\pi} s^2}$$

for the unitary ensemble GUE ($\beta = 2$), and

$$p_4(s) = \frac{2^{18}}{3^6 \pi^3} s^4 e^{-\frac{64}{9\pi} s^2}$$

for the symplectic ensemble GSE ($\beta = 4$).

The numerical constants are such that $p_\beta(s)$ is normalized:

$$\int_0^\infty ds\, p_\beta(s) = 1$$

and the mean spacing is

$$\int_0^\infty ds\, s\, p_\beta(s) = 1,$$

for $\beta = 1, 2, 4$.
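The GOE surmise can be checked empirically by unfolding bulk spacings with their mean (a sketch; the central half of the spectrum is used because the density there is roughly flat, and all sizes are illustrative):

```python
# Empirical check of the Wigner surmise for GOE level spacings: take bulk
# eigenvalues, normalize consecutive spacings by their mean, and compare
# with p_1(s) = (π/2) s exp(-π s^2 / 4).
import numpy as np

rng = np.random.default_rng(9)
n, trials = 400, 200
spacings = []
for _ in range(trials):
    G = rng.standard_normal((n, n))
    H = (G + G.T) / np.sqrt(2 * n)
    eigs = np.sort(np.linalg.eigvalsh(H))
    bulk = eigs[n // 4 : 3 * n // 4]       # central half of the spectrum
    s = np.diff(bulk)
    spacings.extend(s / s.mean())          # unfold by the local mean spacing

spacings = np.array(spacings)
hist, edges = np.histogram(spacings, bins=12, range=(0, 3), density=True)
centers = (edges[:-1] + edges[1:]) / 2
surmise = (np.pi / 2) * centers * np.exp(-np.pi * centers**2 / 4)
for c, h, p in zip(centers, hist, surmise):
    print(f"s={c:.2f}  empirical {h:.3f}  surmise {p:.3f}")
```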
Generalisations
Wigner matrices are random Hermitian matrices $H_n = (H_n(i,j))_{i,j=1}^{n}$ such that the entries $\{ H_n(i,j),\ 1 \le i \le j \le n \}$ above the main diagonal are independent random variables with zero mean and identical second moments.
Invariant matrix ensembles are random Hermitian matrices with density on the space of real symmetric/Hermitian/quaternionic Hermitian matrices of the form

$$\frac{1}{Z_n} e^{-n \operatorname{tr} V(H)},$$

where the function $V$ is called the potential.
The Gaussian ensembles are the only common special cases of these two classes of random matrices. This is a consequence of a theorem by Porter and Rosenzweig.[36][37]