Non-negative least squares
In mathematical optimization, the problem of non-negative least squares (NNLS) is a type of constrained least squares problem where the coefficients are not allowed to become negative. That is, given a matrix A and a (column) vector of response variables y, the goal is to find[1]
arg minₓ ‖Ax − y‖2,  subject to x ≥ 0.
Here x ≥ 0 means that each component of the vector x should be non-negative, and ‖·‖2 denotes the Euclidean norm.
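As a concrete illustration, the short script below solves a tiny NNLS instance with SciPy's optimize.nnls routine (discussed further below) and compares it with unconstrained least squares. The matrix and vector are arbitrary example data, chosen only so that the unconstrained solution has a negative coefficient.

```python
import numpy as np
from scipy.optimize import nnls

# Arbitrary example data: 4 observations, 2 coefficients.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [1.0, 2.0]])
y = np.array([-0.5, 1.0, 2.0, 0.5])

# Unconstrained least squares: here the first coefficient comes out negative.
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# NNLS: nnls returns the minimizer with x >= 0 and the residual norm ||Ax - y||2.
x_nnls, rnorm = nnls(A, y)

print("least squares:", x_ls)    # approximately [-0.67, 1.0]
print("NNLS:         ", x_nnls)  # all entries >= 0
print("residual norm:", rnorm)
```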
Non-negative least squares problems turn up as subproblems in matrix decomposition, e.g. in algorithms for PARAFAC[2] and non-negative matrix/tensor factorization.[3][4] The latter can be considered a generalization of NNLS.[1]
Another generalization of NNLS is bounded-variable least squares (BVLS), with simultaneous upper and lower bounds αᵢ ≤ xᵢ ≤ βᵢ.[5]: 291 [6]
Quadratic programming version
The NNLS problem is equivalent to a quadratic programming problem

arg minₓ ½ xᵀQx + cᵀx,  subject to x ≥ 0,

where Q = AᵀA and c = −Aᵀy. This problem is convex, as Q is positive semidefinite and the non-negativity constraints form a convex feasible set.[7]
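The equivalence is easy to check numerically: for any x, ½‖Ax − y‖2² equals ½xᵀQx + cᵀx plus the constant ½‖y‖2², so the two objectives have the same minimizers over x ≥ 0. A small sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
y = rng.standard_normal(6)

Q = A.T @ A      # positive semidefinite by construction
c = -A.T @ y

# At an arbitrary feasible point the two objectives differ only by ||y||^2 / 2.
x = rng.random(3)                                   # nonnegative test point
lsq_obj = 0.5 * np.sum((A @ x - y) ** 2)
qp_obj = 0.5 * x @ Q @ x + c @ x
print(np.isclose(lsq_obj, qp_obj + 0.5 * y @ y))    # True
```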
Algorithms
The first widely used algorithm for solving this problem is an active-set method published by Lawson and Hanson in their 1974 book Solving Least Squares Problems.[5]: 291 In pseudocode, this algorithm looks as follows (a NumPy sketch of the same steps is given after the pseudocode):[1][2]
- Inputs:
- a real-valued matrix A of dimension m × n,
- a real-valued vector y of dimension m,
- a real value ε, the tolerance for the stopping criterion.
- Initialize:
- Set P = ∅.
- Set R = {1, ..., n}.
- Set x to an all-zero vector of dimension n.
- Set w = Aᵀ(y − Ax).
- Let wᴿ denote the sub-vector of w with indexes from R.
- Main loop: while R ≠ ∅ and max(wᴿ) > ε:
- Let j in R be the index of max(wᴿ) in w.
- Add j to P.
- Remove j from R.
- Let Aᴾ be A restricted to the variables included in P.
- Let s be a vector of the same length as x. Let sᴾ denote the sub-vector with indexes from P, and let sᴿ denote the sub-vector with indexes from R.
- Set sᴾ = ((Aᴾ)ᵀAᴾ)⁻¹ (Aᴾ)ᵀy.
- Set sᴿ to zero.
- While min(sᴾ) ≤ 0:
- Let α = min { xᵢ / (xᵢ − sᵢ) : i in P, sᵢ ≤ 0 }.
- Set x to x + α(s − x).
- Move to R all indices j in P such that xⱼ ≤ 0.
- Set sᴾ = ((Aᴾ)ᵀAᴾ)⁻¹ (Aᴾ)ᵀy.
- Set sᴿ to zero.
- Set x to s.
- Set w to Aᵀ(y − Ax).
- Output: x
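The pseudocode above translates almost line for line into NumPy. The following is a sketch of such a transcription, not a reference implementation: the function name, the iteration cap, the reuse of ε as the threshold for returning variables to R, and the use of numpy.linalg.lstsq in place of the explicit inverse ((Aᴾ)ᵀAᴾ)⁻¹(Aᴾ)ᵀy are choices made here, not part of the published algorithm.

```python
import numpy as np

def lawson_hanson_nnls(A, y, eps=1e-10, max_iter=None):
    """Sketch of the Lawson-Hanson active-set method described above.

    Minimizes ||A x - y||2 subject to x >= 0.
    """
    n = A.shape[1]
    if max_iter is None:
        max_iter = 3 * n                    # heuristic safeguard, not in the original

    P = np.zeros(n, dtype=bool)             # True = index is in the passive set P
    x = np.zeros(n)
    w = A.T @ (y - A @ x)

    iters = 0
    while (~P).any() and w[~P].max() > eps and iters < max_iter:
        iters += 1
        j = np.argmax(np.where(~P, w, -np.inf))   # index of max(w_R)
        P[j] = True

        # Least-squares solution restricted to the passive variables,
        # computed via lstsq rather than the explicit normal-equations inverse.
        s = np.zeros(n)
        s[P] = np.linalg.lstsq(A[:, P], y, rcond=None)[0]

        # Inner loop: step from x towards s only as far as feasibility allows.
        while P.any() and s[P].min() <= 0:
            mask = P & (s <= 0)
            alpha = np.min(x[mask] / (x[mask] - s[mask]))
            x = x + alpha * (s - x)
            P[x <= eps] = False              # variables that hit zero return to R
            s = np.zeros(n)
            if P.any():
                s[P] = np.linalg.lstsq(A[:, P], y, rcond=None)[0]

        x = s
        w = A.T @ (y - A @ x)
    return x
```

Solving the passive-set subproblem with numpy.linalg.lstsq gives the same result as ((Aᴾ)ᵀAᴾ)⁻¹(Aᴾ)ᵀy whenever Aᴾ has full column rank, while avoiding the explicit formation of (Aᴾ)ᵀAᴾ and its inverse.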
This algorithm takes a finite number of steps to reach a solution and smoothly improves its candidate solution as it goes (so it can find good approximate solutions when cut off at a reasonable number of iterations), but is very slow in practice, owing largely to the computation of the pseudoinverse ((Aᴾ)ᵀAᴾ)⁻¹.[1] Variants of this algorithm are available in MATLAB as the routine lsqnonneg[8][1] and in SciPy as optimize.nnls.[9]
Many improved algorithms have been suggested since 1974.[1] Fast NNLS (FNNLS) is an optimized version of the Lawson–Hanson algorithm.[2] Other algorithms include variants of Landweber's gradient descent method,[10] coordinate-wise optimization based on the quadratic programming problem above,[7] and an active set method called TNT-NN.[11]
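To illustrate how the quadratic programming form lends itself to coordinate-wise optimization, the sketch below performs projected coordinate descent on ½xᵀQx + cᵀx over x ≥ 0. It is a generic illustration in the spirit of such methods, not a transcription of any particular published variant, and the fixed number of sweeps stands in for a proper convergence test.

```python
import numpy as np

def nnls_coordinate_descent(A, y, n_sweeps=200):
    """Illustrative projected coordinate descent on the QP form of NNLS."""
    Q = A.T @ A
    c = -A.T @ y
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(n):
            if Q[i, i] > 0:
                # Exact minimizer of the objective in coordinate i, clipped at zero.
                x[i] = max(0.0, x[i] - (Q[i] @ x + c[i]) / Q[i, i])
    return x
```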