# Multivariate normal distribution

## Generalization of the one-dimensional normal distribution to higher dimensions


In probability theory and statistics, the **multivariate normal distribution**, **multivariate Gaussian distribution**, or **joint normal distribution** is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be *k*-variate normally distributed if every linear combination of its *k* components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables, each of which clusters around a mean value.
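The defining property above — every linear combination of the components is univariate normal — can be checked empirically. The sketch below (with illustrative `mu`, `Sigma`, and `a` chosen here, not taken from the article) draws samples as $\mathbf{x} = \boldsymbol{\mu} + L\mathbf{z}$ with $LL^{\mathrm{T}} = \boldsymbol{\Sigma}$ and $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, I)$, then verifies that $a^{\mathrm{T}}\mathbf{x}$ has mean $a^{\mathrm{T}}\boldsymbol{\mu}$ and variance $a^{\mathrm{T}}\boldsymbol{\Sigma}a$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (chosen for this sketch):
mu = np.array([1.0, -2.0])                 # location vector
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])             # positive-definite covariance

# Draw samples x = mu + L z, where L L^T = Sigma and z ~ N(0, I).
L = np.linalg.cholesky(Sigma)
z = rng.standard_normal((100_000, 2))
x = mu + z @ L.T

# Any linear combination a^T x is univariate normal with
# mean a^T mu and variance a^T Sigma a.
a = np.array([0.5, -1.5])
y = x @ a
print(np.mean(y), a @ mu)                  # sample mean vs. a^T mu
print(np.var(y), a @ Sigma @ a)            # sample variance vs. a^T Sigma a
```

The Cholesky factor is the standard way to turn i.i.d. standard normals into correlated ones; any matrix $L$ with $LL^{\mathrm{T}} = \boldsymbol{\Sigma}$ would serve equally well.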


*(Figure: probability density function)*

| | |
|---|---|
| Notation | ${\mathcal {N}}({\boldsymbol {\mu }},\,{\boldsymbol {\Sigma }})$ |
| Parameters | ${\boldsymbol {\mu }} \in \mathbb {R} ^{k}$ — location; ${\boldsymbol {\Sigma }} \in \mathbb {R} ^{k\times k}$ — covariance (positive semi-definite matrix) |
| Support | $\mathbf {x} \in {\boldsymbol {\mu }}+\operatorname {span} ({\boldsymbol {\Sigma }})\subseteq \mathbb {R} ^{k}$ |
| PDF | $(2\pi )^{-k/2}\det({\boldsymbol {\Sigma }})^{-1/2}\,\exp \left(-{\frac {1}{2}}(\mathbf {x} -{\boldsymbol {\mu }})^{\mathrm {T} }{\boldsymbol {\Sigma }}^{-1}(\mathbf {x} -{\boldsymbol {\mu }})\right)$; exists only when ${\boldsymbol {\Sigma }}$ is positive-definite |
| Mean | ${\boldsymbol {\mu }}$ |
| Mode | ${\boldsymbol {\mu }}$ |
| Variance | ${\boldsymbol {\Sigma }}$ |
| Entropy | ${\frac {k}{2}}\log {\mathord {\left(2\pi \mathrm {e} \right)}}+{\frac {1}{2}}\log \det {\mathord {\left({\boldsymbol {\Sigma }}\right)}}$ |
| MGF | $\exp \!{\Big (}{\boldsymbol {\mu }}^{\mathrm {T} }\mathbf {t} +{\tfrac {1}{2}}\mathbf {t} ^{\mathrm {T} }{\boldsymbol {\Sigma }}\mathbf {t} {\Big )}$ |
| CF | $\exp \!{\Big (}i{\boldsymbol {\mu }}^{\mathrm {T} }\mathbf {t} -{\tfrac {1}{2}}\mathbf {t} ^{\mathrm {T} }{\boldsymbol {\Sigma }}\mathbf {t} {\Big )}$ |
| Kullback–Leibler divergence | See § Kullback–Leibler divergence |
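The density and entropy rows of the table can be tied together numerically: the differential entropy is $\mathbb{E}[-\log f(\mathbf{X})]$, so the closed form $\frac{k}{2}\log(2\pi\mathrm{e}) + \frac{1}{2}\log\det(\boldsymbol{\Sigma})$ should match a Monte Carlo average of the negative log-density. A sketch, with an illustrative positive-definite `Sigma` chosen here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (chosen for this sketch):
mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[1.5, 0.3, 0.0],
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 0.8]])
k = mu.size

def logpdf(x):
    """Log of the table's PDF:
    -k/2 log(2π) - 1/2 log det Σ - 1/2 (x-μ)^T Σ^{-1} (x-μ)."""
    d = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (k * np.log(2 * np.pi) + logdet + d @ np.linalg.solve(Sigma, d))

# Closed-form differential entropy from the table...
entropy = 0.5 * k * np.log(2 * np.pi * np.e) + 0.5 * np.linalg.slogdet(Sigma)[1]

# ...agrees with the Monte Carlo estimate of E[-log f(X)]:
L = np.linalg.cholesky(Sigma)
xs = mu + rng.standard_normal((50_000, k)) @ L.T
mc_entropy = -np.mean([logpdf(x) for x in xs])
print(entropy, mc_entropy)   # the two values are close
```

Using `slogdet` and `solve` rather than an explicit determinant and inverse is the numerically stable way to evaluate this density.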