# Matrix (mathematics)

*From Wikipedia, the free encyclopedia*


In mathematics, a **matrix** (pl.: **matrices**) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.

For example,

- ${\begin{bmatrix}1&9&-13\\20&5&-6\end{bmatrix}}$

is a matrix with two rows and three columns. This is often referred to as a "two by three matrix", a "$2\times 3$ matrix", or a matrix of dimension $2\times 3$.

Matrices are used to represent linear maps and allow explicit computations in linear algebra. Therefore, the study of matrices is a large part of linear algebra, and most properties and operations of abstract linear algebra can be expressed in terms of matrices. For example, matrix multiplication represents the composition of linear maps.

Not all matrices are related to linear algebra. This is, in particular, the case in graph theory, of incidence matrices, and adjacency matrices.^{[1]} This article focuses on matrices related to linear algebra, and, unless otherwise specified, all matrices represent linear maps or may be viewed as such.

Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory. Square matrices of a given dimension form one of the most common examples of a noncommutative ring. The determinant of a square matrix is a number associated to the matrix, which is fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, a polynomial defined in terms of a determinant.

In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes. In numerical analysis, many computational problems are solved by reducing them to a matrix computation, and this often involves computing with matrices of huge dimension. Matrices are used in most areas of mathematics and most scientific fields, either directly, or through their use in geometry and numerical analysis.

**Matrix theory** is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics.

A *matrix* is a rectangular array of numbers (or other mathematical objects), called the *entries* of the matrix. Matrices are subject to standard operations such as addition and multiplication.^{[2]} Most commonly, a matrix over a field *F* is a rectangular array of elements of *F*.^{[3]}^{[4]} A **real matrix** and a **complex matrix** are matrices whose entries are respectively real numbers or complex numbers. More general types of entries are discussed below. For instance, this is a real matrix:

- $\mathbf {A} ={\begin{bmatrix}-1.3&0.6\\20.4&5.5\\9.7&-6.2\end{bmatrix}}.$

The numbers, symbols, or expressions in the matrix are called its *entries* or its *elements*. The horizontal and vertical lines of entries in a matrix are called *rows* and *columns*, respectively.

### Size

The size of a matrix is defined by the number of rows and columns it contains. There is no limit to the number of rows and columns a matrix (in the usual sense) can have, as long as they are positive integers. A matrix with ${m}$ rows and ${n}$ columns is called an ${m\times n}$ matrix, or ${m}$-by-${n}$ matrix, where ${m}$ and ${n}$ are called its *dimensions*. For example, matrix ${\mathbf {A} }$ above is a ${3\times 2}$ matrix.

Matrices with a single row are called *row vectors*, and those with a single column are called *column vectors*. A matrix with the same number of rows and columns is called a *square matrix*.^{[5]} A matrix with an infinite number of rows or columns (or both) is called an *infinite matrix*. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an *empty matrix*.


Name | Size | Example | Description | Notation
---|---|---|---|---
Row vector | $1\times n$ | ${\begin{bmatrix}3&7&2\end{bmatrix}}$ | A matrix with one row, sometimes used to represent a vector | ${a_{i}}$
Column vector | $n\times 1$ | ${\begin{bmatrix}4\\1\\8\end{bmatrix}}$ | A matrix with one column, sometimes used to represent a vector | ${a_{j}}$
Square matrix | $n\times n$ | ${\begin{bmatrix}9&13&5\\1&11&7\\2&6&3\end{bmatrix}}$ | A matrix with the same number of rows and columns, sometimes used to represent a linear transformation from a vector space to itself, such as reflection, rotation, or shearing | ${\mathbf {A} }$

The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are commonly written in square brackets or parentheses, so that an $m\times n$ matrix $\mathbf {A}$ is represented as

- $\mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}}.$

This may be abbreviated by writing only a single generic term, possibly along with indices, as in

- $\mathbf {A} =\left(a_{ij}\right)_{1\leq i\leq m,\;1\leq j\leq n}$

or $\mathbf {A} =(a_{i,j})_{1\leq i,j\leq n}$ in the case that $n=m$.

Matrices are usually symbolized using upper-case letters (such as ${\mathbf {A} }$ in the examples above), while the corresponding lower-case letters, with two subscript indices (e.g., ${a_{11}}$, or ${a_{1,1}}$), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface roman (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style, as in ${\underline {\underline {A}}}$.

The entry in the *i*-th row and *j*-th column of a matrix **A** is sometimes referred to as the ${i,j}$ or ${(i,j)}$ entry of the matrix, and commonly denoted by ${a_{i,j}}$ or ${a_{ij}}$. Alternative notations for that entry are ${\mathbf {A} [i,j]}$ and ${\mathbf {A} _{i,j}}$. For example, the $(1,3)$ entry of the following matrix $\mathbf {A}$ is 5 (also denoted ${a_{13}}$, ${a_{1,3}}$, $\mathbf {A} [1,3]$ or ${{\mathbf {A} }_{1,3}}$):

- $\mathbf {A} ={\begin{bmatrix}4&-7&\color {red}{5}&0\\-2&0&11&8\\19&1&-3&12\end{bmatrix}}$

Sometimes, the entries of a matrix can be defined by a formula such as $a_{i,j}=f(i,j)$. For example, each of the entries of the following matrix $\mathbf {A}$ is determined by the formula $a_{ij}=i-j$.

- $\mathbf {A} ={\begin{bmatrix}0&-1&-2&-3\\1&0&-1&-2\\2&1&0&-1\end{bmatrix}}$

In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses. For example, the matrix above is defined as ${\mathbf {A} }=[i-j]$ or ${\mathbf {A} }=((i-j))$. If matrix size is $m\times n$, the above-mentioned formula $f(i,j)$ is valid for any $i=1,\dots ,m$ and any $j=1,\dots ,n$. This can be either specified separately, or indicated using $m\times n$ as a subscript. For instance, the matrix $\mathbf {A}$ above is $3\times 4$, and can be defined as ${\mathbf {A} }=[i-j](i=1,2,3;j=1,\dots ,4)$ or ${\mathbf {A} }=[i-j]_{3\times 4}$.

Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an *m*-by-*n* matrix. Some programming languages start the numbering of array indexes at zero, in which case the entries of an *m*-by-*n* matrix are indexed by $0\leq i\leq m-1$ and $0\leq j\leq n-1$.^{[6]} This article follows the more common convention in mathematical writing where enumeration starts from 1.
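To make the index conventions concrete, here is a minimal sketch using plain Python lists of rows (the helper name `entry` is illustrative, not a standard API):

```python
# The matrix A from the example above, stored as a list of rows.
A = [[ 4, -7,  5,  0],
     [-2,  0, 11,  8],
     [19,  1, -3, 12]]

def entry(matrix, i, j):
    """Return the (i, j) entry using the 1-based mathematical convention.

    Python indexes from 0, so the mathematical (i, j) entry is matrix[i-1][j-1].
    """
    return matrix[i - 1][j - 1]

print(entry(A, 1, 3))  # the (1, 3) entry: 5

# A matrix defined by a formula, here a_ij = i - j for the 3-by-4 example:
B = [[i - j for j in range(1, 5)] for i in range(1, 4)]
print(B)  # [[0, -1, -2, -3], [1, 0, -1, -2], [2, 1, 0, -1]]
```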

An asterisk is occasionally used to refer to whole rows or columns in a matrix. For example, $a_{i,\ast }$ refers to the *i*-th row of **A**, while $a_{\ast ,j}$ refers to the *j*-th column.

The set of all *m*-by-*n* real matrices is often denoted ${\mathcal {M}}(m,n),$ or ${\mathcal {M}}_{m\times n}(\mathbb {R} ).$ The set of all *m*-by-*n* matrices over another field, or over a ring R, is similarly denoted ${\mathcal {M}}(m,n,R),$ or ${\mathcal {M}}_{m\times n}(R).$ If *m* = *n*, such as in the case of square matrices, one does not repeat the dimension: ${\mathcal {M}}(n,R),$ or ${\mathcal {M}}_{n}(R).$^{[7]} Often, $M$ is used in place of ${\mathcal {M}}.$

There are a number of basic operations that can be applied to matrices. Some, such as *transposition* and forming a *submatrix*, do not depend on the nature of the entries. Others, such as *matrix addition*, *scalar multiplication*, *matrix multiplication*, and *row operations*, involve operations on matrix entries and therefore require that the entries be numbers or belong to a field or a ring.^{[8]}

In this section, it is supposed that matrix entries belong to a fixed ring, which is typically a field of numbers.

### Addition, scalar multiplication, subtraction and transposition

The *sum* **A**+**B** of two *m*-by-*n* matrices **A** and **B** is calculated entrywise:

- $(\mathbf {A} +\mathbf {B} )_{i,j}=\mathbf {A} _{i,j}+\mathbf {B} _{i,j}$, where $1\leq i\leq m$ and $1\leq j\leq n$.

For example,

- ${\begin{bmatrix}1&3&1\\1&0&0\end{bmatrix}}+{\begin{bmatrix}0&0&5\\7&5&0\end{bmatrix}}={\begin{bmatrix}1+0&3+0&1+5\\1+7&0+5&0+0\end{bmatrix}}={\begin{bmatrix}1&3&6\\8&5&0\end{bmatrix}}$

#### Scalar multiplication

The product *c***A** of a number *c* (also called a scalar in this context) and a matrix **A** is computed by multiplying every entry of **A** by *c*:

- $(c\mathbf {A} )_{i,j}=c\cdot \mathbf {A} _{i,j}.$

This operation is called *scalar multiplication*, but its result is not named "scalar product" to avoid confusion, since "scalar product" is often used as a synonym for "inner product". For example:

- $2\cdot {\begin{bmatrix}1&8&-3\\4&-2&5\end{bmatrix}}={\begin{bmatrix}2\cdot 1&2\cdot 8&2\cdot (-3)\\2\cdot 4&2\cdot (-2)&2\cdot 5\end{bmatrix}}={\begin{bmatrix}2&16&-6\\8&-4&10\end{bmatrix}}$

#### Subtraction

The subtraction of two *m*×*n* matrices is defined by composing matrix addition with scalar multiplication by −1:

- $\mathbf {A} -\mathbf {B} =\mathbf {A} +(-1)\cdot \mathbf {B}$

#### Transposition

The *transpose* of an *m*-by-*n* matrix **A** is the *n*-by-*m* matrix **A**^{T} (also denoted **A**^{tr} or ^{t}**A**) formed by turning rows into columns and vice versa:

- $(\mathbf {A} ^{\mathrm {T} })_{i,j}=\mathbf {A} _{j,i}.$

For example:

- ${\begin{bmatrix}1&2&3\\0&-6&7\end{bmatrix}}^{\mathrm {T} }={\begin{bmatrix}1&0\\2&-6\\3&7\end{bmatrix}}$

Familiar properties of numbers extend to these operations on matrices: for example, addition is commutative, that is, the matrix sum does not depend on the order of the summands: **A** + **B** = **B** + **A**.^{[9]}
The transpose is compatible with addition and scalar multiplication, as expressed by (*c***A**)^{T} = *c*(**A**^{T}) and (**A** + **B**)^{T} = **A**^{T} + **B**^{T}. Finally, (**A**^{T})^{T} = **A**.
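These entrywise operations translate directly into code. A minimal sketch with plain Python lists of rows (the function names are illustrative, not a standard library API):

```python
def add(A, B):
    # Entrywise sum; A and B must have the same size.
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scale(c, A):
    # Scalar multiplication: multiply every entry by c.
    return [[c * x for x in row] for row in A]

def sub(A, B):
    # Subtraction, defined as addition composed with scaling by -1.
    return add(A, scale(-1, B))

def transpose(A):
    # Turn rows into columns and vice versa.
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 3, 1], [1, 0, 0]]
B = [[0, 0, 5], [7, 5, 0]]
print(add(A, B))  # [[1, 3, 6], [8, 5, 0]], matching the example above
print(transpose([[1, 2, 3], [0, -6, 7]]))  # [[1, 0], [2, -6], [3, 7]]
# Transpose is compatible with addition: (A + B)^T = A^T + B^T
print(transpose(add(A, B)) == add(transpose(A), transpose(B)))  # True
```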

### Matrix multiplication

*Multiplication* of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If **A** is an *m*-by-*n* matrix and **B** is an *n*-by-*p* matrix, then their *matrix product* **AB** is the *m*-by-*p* matrix whose entries are given by the dot product of the corresponding row of **A** and the corresponding column of **B**:^{[10]}

- $[\mathbf {AB} ]_{i,j}=a_{i,1}b_{1,j}+a_{i,2}b_{2,j}+\cdots +a_{i,n}b_{n,j}=\sum _{r=1}^{n}a_{i,r}b_{r,j},$

where 1 ≤ *i* ≤ *m* and 1 ≤ *j* ≤ *p*.^{[11]} For example, the underlined entry 2340 in the product is calculated as (2 × 1000) + (3 × 100) + (4 × 10) = 2340:

- ${\begin{aligned}{\begin{bmatrix}{\underline {2}}&{\underline {3}}&{\underline {4}}\\1&0&0\\\end{bmatrix}}{\begin{bmatrix}0&{\underline {1000}}\\1&{\underline {100}}\\0&{\underline {10}}\\\end{bmatrix}}&={\begin{bmatrix}3&{\underline {2340}}\\0&1000\\\end{bmatrix}}.\end{aligned}}$

Matrix multiplication satisfies the rules (**AB**)**C** = **A**(**BC**) (associativity), and (**A** + **B**)**C** = **AC** + **BC** as well as **C**(**A** + **B**) = **CA** + **CB** (left and right distributivity), whenever the size of the matrices is such that the various products are defined.^{[12]} The product **AB** may be defined without **BA** being defined, namely if **A** and **B** are *m*-by-*n* and *n*-by-*k* matrices, respectively, and *m* ≠ *k*. Even if both products are defined, they generally need not be equal, that is:

- $\mathbf {AB} \neq \mathbf {BA} ,$

In other words, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors.^{[10]} An example of two matrices not commuting with each other is:

- ${\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}{\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}={\begin{bmatrix}0&1\\0&3\\\end{bmatrix}},$

whereas

- ${\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}{\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}={\begin{bmatrix}3&4\\0&0\\\end{bmatrix}}.$
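The entry formula above translates into a triple loop over rows, columns, and the summation index. A minimal sketch with plain Python lists (the name `matmul` is illustrative):

```python
def matmul(A, B):
    # (AB)_ij = sum over r of A[i][r] * B[r][j]; requires cols(A) == rows(B).
    assert len(A[0]) == len(B), "inner dimensions must match"
    return [[sum(A[i][r] * B[r][j] for r in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# The worked example from the text: the (1, 2) entry is 2*1000 + 3*100 + 4*10.
X = [[2, 3, 4], [1, 0, 0]]
Y = [[0, 1000], [1, 100], [0, 10]]
print(matmul(X, Y))  # [[3, 2340], [0, 1000]]

# The pair of matrices above that do not commute:
P = [[1, 2], [3, 4]]
Q = [[0, 1], [0, 0]]
print(matmul(P, Q))  # [[0, 1], [0, 3]]
print(matmul(Q, P))  # [[3, 4], [0, 0]]
```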

Besides the ordinary matrix multiplication just described, other less frequently used operations on matrices that can be considered forms of multiplication also exist, such as the Hadamard product and the Kronecker product.^{[13]} They arise in solving matrix equations such as the Sylvester equation.

### Row operations

There are three types of row operations:

- row addition, that is, adding a row to another;
- row multiplication, that is, multiplying all entries of a row by a nonzero constant;
- row switching, that is, interchanging two rows of a matrix.

These operations are used in several ways, including solving linear equations and finding matrix inverses.
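A sketch of the three row operations acting in place on a list-of-rows matrix, ending with one elimination step of the kind used when solving linear systems (the function names are illustrative):

```python
def swap_rows(A, i, j):
    # Row switching: interchange rows i and j (zero-based).
    A[i], A[j] = A[j], A[i]

def scale_row(A, i, c):
    # Row multiplication: multiply every entry of row i by a nonzero constant c.
    A[i] = [c * x for x in A[i]]

def add_row(A, src, dst, c=1):
    # Row addition: add c times row src to row dst.
    A[dst] = [a + c * b for a, b in zip(A[dst], A[src])]

# One step of Gaussian elimination: clear the entry below the pivot.
M = [[1, 2, 3],
     [2, 5, 8]]
add_row(M, 0, 1, -2)   # subtract 2 x row 0 from row 1
print(M)  # [[1, 2, 3], [0, 1, 2]]
```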

### Submatrix

A **submatrix** of a matrix is a matrix obtained by deleting any collection of rows and/or columns.^{[14]}^{[15]}^{[16]} For example, from the following 3-by-4 matrix, we can construct a 2-by-3 submatrix by removing row 3 and column 2:

- $\mathbf {A} ={\begin{bmatrix}1&\color {red}{2}&3&4\\5&\color {red}{6}&7&8\\\color {red}{9}&\color {red}{10}&\color {red}{11}&\color {red}{12}\end{bmatrix}}\rightarrow {\begin{bmatrix}1&3&4\\5&7&8\end{bmatrix}}.$
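The deletion above can be sketched in plain Python (the helper `submatrix` is illustrative; the row and column index sets here are zero-based, so "row 3, column 2" becomes indices 2 and 1):

```python
def submatrix(A, del_rows, del_cols):
    # Keep every entry whose row index and column index are not deleted.
    return [[x for j, x in enumerate(row) if j not in del_cols]
            for i, row in enumerate(A) if i not in del_rows]

A = [[1,  2,  3,  4],
     [5,  6,  7,  8],
     [9, 10, 11, 12]]

# Remove row 3 and column 2 (1-based), i.e. zero-based indices 2 and 1:
print(submatrix(A, {2}, {1}))  # [[1, 3, 4], [5, 7, 8]]
```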

The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.^{[16]}^{[17]}

A **principal submatrix** is a square submatrix obtained by removing certain rows and columns. The definition varies from author to author. According to some authors, a principal submatrix is a submatrix in which the set of row indices that remain is the same as the set of column indices that remain.^{[18]}^{[19]} Other authors define a principal submatrix as one in which the first *k* rows and columns, for some number *k*, are the ones that remain;^{[20]} this type of submatrix has also been called a **leading principal submatrix**.^{[21]}