Tensor decomposition
In multilinear algebra, a tensor decomposition is any scheme for expressing a "data tensor" (M-way array) as a sequence of elementary operations acting on other, often simpler tensors.[1][2][3] Many tensor decompositions generalize some matrix decompositions.[4]
Tensors are generalizations of matrices to higher orders, i.e. to a higher number of indices, and can consequently be treated as multidimensional arrays.[1][5] The main tensor decompositions are:
- Tensor rank decomposition;[6]
- Higher-order singular value decomposition;[7]
- Tucker decomposition;
- matrix product states and operators, also known as tensor trains;
- online tensor decompositions;[8][9][10]
- hierarchical Tucker decomposition;[11]
- block term decomposition[12][13][11][14]
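As a concrete illustration of the first item, the tensor rank (CP, canonical polyadic) decomposition of a 3-way tensor can be computed with alternating least squares. The sketch below is a minimal toy implementation in NumPy/SciPy, assuming the standard mode-m unfolding and Khatri–Rao identities; the function and variable names are illustrative, not from any particular library.

```python
import numpy as np
from scipy.linalg import khatri_rao

def unfold(T, mode):
    """Mode-m matricization: move the given axis first, flatten the rest (C order)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(X, rank, n_iter=100, seed=0):
    """Rank-`rank` CP approximation of a 3-way tensor X via alternating least
    squares, so that X[i,j,k] ~= sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in X.shape)
    for _ in range(n_iter):
        # Solve for each factor in turn, holding the other two fixed:
        # the unfolded tensor equals the factor times a Khatri-Rao product.
        A = unfold(X, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(X, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(X, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# An exactly rank-2 tensor should be recovered up to scaling/permutation.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(err)  # relative reconstruction error
```

On exact low-rank input, ALS typically converges quickly; for noisy data one would add a stopping criterion and normalization of the factor columns.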
Notation
This section introduces basic notations and operations that are widely used in the field.
Symbols | Definition
---|---
$a$, $\mathbf{a}$, $\mathbf{a}^{\mathsf{T}}$, $\mathbf{A}$, $\mathcal{A}$ | scalar, vector, row vector, matrix, tensor
$\operatorname{vec}(\cdot)$ | vectorizing either a matrix or a tensor
$\mathbf{A}_{(m)}$ | matricized tensor (mode-$m$ unfolding)
$\times_m$ | mode-$m$ product
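These operations map directly onto array manipulations. The snippet below sketches vectorization, mode-m matricization, and the mode-m product in NumPy; the unfolding convention (C order, moved axis first) is one common choice among several, and the names are illustrative.

```python
import numpy as np

X = np.arange(24.0).reshape(2, 3, 4)  # a 2 x 3 x 4 tensor
U = np.ones((5, 3))                   # matrix acting along the second mode

vec_x = X.reshape(-1)                         # vectorization: length-24 vector
X2 = np.moveaxis(X, 1, 0).reshape(3, -1)      # mode-2 matricization: 3 x 8

# Mode-2 product X x_2 U contracts U with the second index of X.
Y = np.einsum('ar,irk->iak', U, X)
print(Y.shape)  # (2, 5, 4)

# Equivalently, in unfolded form: (X x_2 U)_(2) = U @ X_(2).
Y2 = (U @ X2).reshape(5, 2, 4)
assert np.allclose(np.moveaxis(Y2, 0, 1), Y)
```

The equivalence between the einsum contraction and the unfolded matrix product is exactly the defining identity of the mode-m product.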
Introduction
A multi-way graph with K perspectives is a collection of K matrices with dimensions I × J (where I, J are the numbers of nodes). This collection of matrices is naturally represented as a tensor X of size I × J × K. In order to avoid overloading the term “dimension”, we call an I × J × K tensor a three-“mode” tensor, where the number of modes is the number of indices used to index the tensor.
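A small hypothetical example of this construction: K adjacency matrices for the same node set, stacked along a third axis into an I × J × K tensor. The sizes and data below are made up for illustration.

```python
import numpy as np

# Hypothetical multi-way graph: K = 3 views of a graph on I = J = 4 nodes,
# each view given as a 4 x 4 adjacency matrix.
rng = np.random.default_rng(0)
views = [(rng.random((4, 4)) < 0.5).astype(float) for _ in range(3)]

# Stack the K matrices along a third axis to form the I x J x K tensor X:
# the first two modes index node pairs, the third mode indexes the view.
X = np.stack(views, axis=-1)
print(X.shape)  # (4, 4, 3)

# X[i, j, k] is the (i, j) edge weight in view k.
assert X[1, 2, 0] == views[0][1, 2]
```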