HHL algorithm
Quantum linear algebra algorithm offering exponential speedup under certain conditions
The Harrow–Hassidim–Lloyd (HHL) algorithm is a quantum algorithm for obtaining certain limited information about the solution to a system of linear equations, introduced by Aram Harrow, Avinatan Hassidim, and Seth Lloyd. Specifically, the algorithm estimates quadratic functions of the solution vector to a given system.[1]
The algorithm is one of the main fundamental algorithms expected to provide a speedup over their classical counterparts, along with Shor's factoring algorithm and Grover's search algorithm. Assuming the system is sparse,[2] has a low condition number $\kappa$, and that the user is interested only in certain information about the solution vector rather than the entire vector itself, the algorithm has a runtime of $O(\log(N)\kappa^2)$, where $N$ is the number of variables. This offers an exponential speedup over the fastest classical algorithm, which runs in $O(N\kappa)$ (or $O(N\sqrt{\kappa})$ for positive semidefinite matrices).
An implementation of the HHL algorithm was first demonstrated in 2013 by three independent publications, consisting of simple systems on specially designed devices.[3][4][5] The first demonstration of a general-purpose version of the algorithm appeared in 2018.[6]
Remove ads
Overview
Given a Hermitian $N \times N$ matrix $A$ and a unit vector $\vec{b}$, the HHL algorithm prepares the quantum state $|x\rangle$ whose amplitudes are the entries of the solution $\vec{x}$ to the linear system $A\vec{x} = \vec{b}$. The algorithm cannot efficiently output the solution $\vec{x}$ itself, but allows one to efficiently estimate $\langle x|M|x\rangle$ for a Hermitian matrix $M$.[1]
The algorithm first prepares the quantum state $|b\rangle$ whose amplitudes are equal to the entries of $\vec{b}$. Using Hamiltonian simulation, the unitary operator $e^{iAt}$ is applied to $|b\rangle$ for a superposition of different times $t$. The algorithm then uses quantum phase estimation to decompose $|b\rangle$ in the eigenbasis of $A$ and find the corresponding eigenvalues $\lambda_j$. The state of the system after this step is approximately
$\sum_{j=1}^{N} \beta_j |u_j\rangle |\lambda_j\rangle,$
where $u_j$ are the eigenvectors of $A$ and $\beta_j$ is the $j$-th coefficient of $|b\rangle$ in the eigenbasis of $A$.
We would then like to apply the linear map taking $|\lambda_j\rangle$ to $C\lambda_j^{-1}|\lambda_j\rangle$ for some normalizing constant $C$. This map is not unitary and must be implemented using a quantum measurement with a nonzero probability of failure. After it succeeds, the $|\lambda_j\rangle$ register is uncomputed and the remaining state is proportional to
$\sum_{j=1}^{N} \beta_j \lambda_j^{-1} |u_j\rangle = A^{-1}|b\rangle = |x\rangle.$
By performing the quantum measurement corresponding to $M$, we obtain an estimate of $\langle x|M|x\rangle$. One could use quantum state tomography to retrieve all components of $\vec{x}$, but this would require repeating the algorithm roughly $N$ times.
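The following NumPy sketch mirrors the linear algebra described above on a small made-up system; it is a classical illustration of the steps (eigendecomposition, eigenvalue inversion, expectation value), not a quantum implementation, and the matrices $A$, $M$ and vector $\vec{b}$ are purely illustrative.

```python
import numpy as np

# Illustrative 2x2 Hermitian system (not from the original paper).
A = np.array([[1.0, -1/3],
              [-1/3, 1.0]])          # Hermitian, well conditioned
b = np.array([1.0, 0.0])             # unit vector |b>
M = np.array([[1.0, 0.0],
              [0.0, -1.0]])          # Hermitian observable

# Step 1: decompose |b> in the eigenbasis of A (the role of phase estimation).
lam, U = np.linalg.eigh(A)           # eigenvalues lambda_j, eigenvectors u_j
beta = U.conj().T @ b                # beta_j = <u_j|b>

# Step 2: invert the eigenvalues (the non-unitary step, done via measurement
# in the quantum algorithm).
x = U @ (beta / lam)                 # x = sum_j beta_j / lambda_j |u_j> = A^{-1} b

# Step 3: the algorithm only returns summary statistics such as <x|M|x>
# for the normalized state |x>, not the vector x itself.
x_state = x / np.linalg.norm(x)
print(x_state.conj() @ M @ x_state)
```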
Detailed description
Assumptions and initialization
The algorithm requires the following assumptions to hold:
- The algorithm requires $A$ to be Hermitian so that it can be exponentiated into a unitary operator. If $A$ is not Hermitian, one can instead define the Hermitian matrix $\mathbf{C} = \begin{bmatrix} 0 & A \\ A^\dagger & 0 \end{bmatrix}$ and solve $\mathbf{C}\vec{y} = \begin{bmatrix} \vec{b} \\ 0 \end{bmatrix}$ to obtain $\vec{y} = \begin{bmatrix} 0 \\ \vec{x} \end{bmatrix}$ (see the sketch after this list).
- The algorithm requires an efficient procedure to prepare $|b\rangle$, the quantum representation of $\vec{b}$. It is assumed that either $|b\rangle$ has already been prepared or that there exists some unitary $B$ which takes some arbitrary initial quantum state to $|b\rangle$ efficiently. Any error in the preparation of $|b\rangle$ is ignored.
- The algorithm assumes that the state $|\Psi_0\rangle$ can be prepared efficiently, where $|\Psi_0\rangle = \sqrt{\tfrac{2}{T}} \sum_{\tau=0}^{T-1} \sin\!\frac{\pi(\tau + \tfrac{1}{2})}{T}\, |\tau\rangle$ for some large $T$. The coefficients of $|\Psi_0\rangle$ are chosen to minimize a certain quadratic loss function which induces error in the $U_{\mathrm{invert}}$ subroutine described below.
- The algorithm assumes that the unitary operator $e^{iAt}$ can be applied efficiently. This is possible using Hamiltonian simulation if $A$ is $s$-sparse and efficiently row computable, meaning it has at most $s$ nonzero entries per row which can be computed in time $O(s)$ given a row index. Under these assumptions, $e^{iAt}$ can be applied in time $\tilde{O}(\log(N)\, s^2\, t)$.
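As an aside on the first assumption, here is a minimal NumPy sketch (with hypothetical values) of the Hermitian embedding mentioned above, checking that the lower block of $\vec{y}$ solves the original non-Hermitian system.

```python
import numpy as np

# Hypothetical non-Hermitian matrix and right-hand side, for illustration only.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([1.0, 1.0])

n = A.shape[0]
# Hermitian embedding C = [[0, A], [A^dagger, 0]]; solving C y = (b, 0)
# yields y = (0, x) with A x = b.
C = np.block([[np.zeros((n, n)), A],
              [A.conj().T, np.zeros((n, n))]])
rhs = np.concatenate([b, np.zeros(n)])

y = np.linalg.solve(C, rhs)
x = y[n:]                            # the last n entries carry the solution
print(np.allclose(A @ x, b))         # True
```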
Uinvert subroutine
The key subroutine of the algorithm, denoted $U_{\mathrm{invert}}$, is defined as follows using phase estimation:
- Prepare $|\Psi_0\rangle$ on register $C$.
- Apply the conditional Hamiltonian evolution $\sum_{\tau=0}^{T-1} |\tau\rangle\langle\tau|^{C} \otimes e^{iA\tau t_0/T}$.
- Apply the Fourier transform to the register $C$. Denote the resulting basis states with $|k\rangle$ for $k = 0, \ldots, T-1$. Define $\tilde{\lambda}_k := 2\pi k / t_0$.
- Adjoin a three-dimensional register $S$ in the state $|h(\tilde{\lambda}_k)\rangle^{S} := \sqrt{1 - f(\tilde{\lambda}_k)^2 - g(\tilde{\lambda}_k)^2}\,|\mathrm{nothing}\rangle^{S} + f(\tilde{\lambda}_k)\,|\mathrm{well}\rangle^{S} + g(\tilde{\lambda}_k)\,|\mathrm{ill}\rangle^{S}$, where $f$ and $g$ are filter functions with $f(\tilde{\lambda}_k)$ proportional to $\tilde{\lambda}_k^{-1}$ on the well-conditioned part of the spectrum.
- Reverse steps 1–3, uncomputing any garbage produced along the way.
The phase estimation procedure in steps 1–3 estimates the eigenvalues of $A$ up to an error $\epsilon$.
The ancilla register in step 4 is needed to construct a state with inverted eigenvalues corresponding to the diagonalized inverse of $A$. The states 'nothing', 'well' and 'ill' are used to direct the loop body; 'nothing' indicates that the matrix inversion has not yet taken place, 'well' indicates that it has and the loop should halt, and 'ill' indicates that part of $|b\rangle$ is in the ill-conditioned subspace of $A$ and the algorithm cannot produce the desired inversion. Producing a state proportional to the inverse of $A$ requires 'well' to be measured, after which the overall state collapses to the desired output.
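As a concreteness check, the following NumPy sketch imitates steps 1–3 classically on a toy $2\times 2$ matrix (all values and register sizes here are illustrative, not from the original paper): it prepares the clock register in $|\Psi_0\rangle$, applies the conditional phases $e^{i\lambda_j \tau t_0/T}$ in the eigenbasis, and Fourier transforms the clock register, after which the clock distribution peaks at $\tilde{\lambda}_k = 2\pi k/t_0 \approx \lambda_j$.

```python
import numpy as np

# Toy 2x2 Hermitian matrix and input state (illustrative values only).
A = np.array([[1.0, -1/3],
              [-1/3, 1.0]])
b = np.array([1.0, 0.0])

lam, U = np.linalg.eigh(A)            # exact eigenvalues (2/3 and 4/3), for comparison
T = 256                               # clock-register dimension
t0 = 2 * np.pi * T / 4.0              # chosen so lambda_tilde_k = 2*pi*k/t0 covers the spectrum

# Step 1: clock register in |Psi_0>, amplitudes sqrt(2/T) * sin(pi*(tau + 1/2)/T).
tau = np.arange(T)
psi0 = np.sqrt(2.0 / T) * np.sin(np.pi * (tau + 0.5) / T)

# Step 2: conditional Hamiltonian evolution sum_tau |tau><tau| (x) e^{i A tau t0 / T},
# written in the eigenbasis of A, where it only multiplies by phases.
beta = U.conj().T @ b                              # coefficients of b in the eigenbasis
phases = np.exp(1j * np.outer(tau, lam) * t0 / T)  # e^{i lambda_j tau t0 / T}
joint = psi0[:, None] * phases * beta[None, :]     # amplitude on |tau>|u_j>

# Step 3: Fourier transform on the clock register, using the sign convention
# (kernel e^{-2 pi i tau k / T}) that places the peak at lambda_tilde_k = 2*pi*k/t0.
joint_k = np.fft.fft(joint, axis=0) / np.sqrt(T)   # unitary normalization

# Conditioned on eigenvector u_j, the clock distribution peaks near lambda_j.
for j in range(len(lam)):
    k_peak = np.argmax(np.abs(joint_k[:, j]) ** 2)
    print(f"lambda_{j} = {lam[j]:.4f},  estimate = {2 * np.pi * k_peak / t0:.4f}")

# Step 4 (not simulated here) would rotate the ancilla by ~C / lambda_tilde_k and
# postselect 'well', leaving weights proportional to beta_j / lambda_j.
```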
Main loop
The main loop follows amplitude amplification: starting with $U_{\mathrm{invert}} B |\mathrm{initial}\rangle$, the algorithm repeatedly applies the amplitude-amplification reflections about this initial state and about the 'well' subspace of register $S$.
After each iteration, $S$ is measured and will produce a value of 'nothing', 'well', or 'ill'. The loop is repeated until 'well' is measured, which occurs with some probability $p$. Amplitude amplification reaches a 'well' outcome using $O(1/\sqrt{p})$ iterations, as opposed to $O(1/p)$ using naive repetition.
After successfully measuring 'well' on $S$, the system will be in a state proportional to
$\sum_{j=1}^{N} \beta_j \lambda_j^{-1} |u_j\rangle.$
The quantum measurement corresponding to $M$ then gives an estimate of $\langle x|M|x\rangle$.
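As a rough numerical illustration of the postselection step (using the same illustrative system as above and idealized phase estimation), the probability of measuring 'well' and the corresponding repetition counts can be estimated classically.

```python
import numpy as np

# Toy values (same illustrative system as above); idealized phase estimation.
A = np.array([[1.0, -1/3],
              [-1/3, 1.0]])
b = np.array([1.0, 0.0])

lam, U = np.linalg.eigh(A)
beta = U.conj().T @ b                  # coefficients of |b> in the eigenbasis

C = np.min(np.abs(lam))                # C is O(1/kappa) once A is rescaled; lambda_min here
p_well = np.sum(np.abs(C * beta / lam) ** 2)   # probability of measuring 'well'

print(p_well)                          # success probability p
print(1 / p_well)                      # expected trials with naive repetition
print(1 / np.sqrt(p_well))             # iterations with amplitude amplification
```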
Analysis
Classical efficiency
The best classical algorithm which produces the actual solution vector $\vec{x}$ is Gaussian elimination, which runs in $O(N^3)$ time.
If $A$ is $s$-sparse and positive semi-definite, then the conjugate gradient method can be used to find the solution vector $\vec{x}$ in time $O(N s \kappa \log(1/\epsilon))$ by minimizing the quadratic function $|A\vec{x} - \vec{b}|^2$.
When only a summary statistic of the solution vector $\vec{x}$ is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of $\vec{x}^\dagger M \vec{x}$ in $O(N\sqrt{\kappa})$.
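For concreteness, here is a minimal dense sketch of the conjugate gradient iteration mentioned above (the system values are illustrative); each iteration needs only a matrix–vector product, which is what makes the method attractive for sparse $A$.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Plain conjugate gradient for a symmetric positive-definite A (dense here,
    but each iteration only needs matrix-vector products, so it suits sparse A)."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative positive-definite system.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))        # approx [0.0909, 0.6364]
```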
Quantum efficiency
The runtime of the quantum algorithm for solving systems of linear equations originally proposed by Harrow et al. was shown to be $O(\kappa^2 \log(N)/\epsilon)$, where $\epsilon$ is the error parameter and $\kappa$ is the condition number of $A$. This was subsequently improved to $O(\kappa \log^3(\kappa) \log(N)/\epsilon^3)$ by Andris Ambainis,[7] improved further for large condition number cases by Peniel Tsemo et al.,[8] and a quantum algorithm with runtime polynomial in $\log(1/\epsilon)$ was developed by Childs et al.[9] Since the HHL algorithm maintains its logarithmic scaling in $N$ only for sparse or low-rank matrices, Wossnig et al.[10] extended the HHL algorithm based on a quantum singular value estimation technique and provided a linear system algorithm for dense matrices which runs in $O(\sqrt{N} \log(N) \kappa^2)$ time compared to the $O(N \log(N) \kappa^2)$ of the standard HHL algorithm.
Optimality
An important factor in the performance of the matrix inversion algorithm is the condition number $\kappa$, which represents the ratio of $A$'s largest and smallest eigenvalues. As the condition number grows, $A$ becomes closer to a matrix which cannot be inverted and the solution vector becomes less stable, so the solution becomes harder to find with gradient-based methods such as the conjugate gradient method. This algorithm assumes that all singular values of the matrix $A$ lie between $\tfrac{1}{\kappa}$ and 1, in which case the claimed run-time proportional to $\kappa^2$ will be achieved. Therefore, the speedup over classical algorithms is increased further when $\kappa$ is $\mathrm{poly}\log(N)$.[1]
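As a small illustration (with a made-up matrix), rescaling $A$ by its largest singular value places its spectrum in the assumed interval $[1/\kappa, 1]$.

```python
import numpy as np

# Illustrative matrix; in practice A would be the (sparse) system matrix.
A = np.array([[1.0, -1/3],
              [-1/3, 1.0]])

sv = np.linalg.svd(A, compute_uv=False)   # singular values
kappa = sv.max() / sv.min()               # condition number

A_scaled = A / sv.max()                   # singular values now lie in [1/kappa, 1]
print(kappa, np.linalg.svd(A_scaled, compute_uv=False))
```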
If the run-time of the algorithm were made poly-logarithmic in $\kappa$, then problems solvable on $n$ qubits could be solved in $\mathrm{poly}(n)$ time, causing the complexity class BQP to be equal to PSPACE.[1]
Error analysis
Performing the Hamiltonian simulation, which is the dominant source of error, is done by simulating $e^{iAt}$. Assuming that $A$ is $s$-sparse, this can be done with an error bounded by a constant $\epsilon$, which will translate into additive error in the output state $|x\rangle$.
The phase estimation step errs by $O(1/t_0)$ when estimating $\lambda$, which translates into a relative error of $O(1/(t_0\lambda))$ in $\lambda^{-1}$. If $\lambda \geq 1/\kappa$, taking $t_0 = O(\kappa/\epsilon)$ induces a final error of $\epsilon$. This requires that the overall run-time be increased proportionally to $\kappa/\epsilon$ to keep the error small.
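As a first-order sanity check of how the phase-estimation error propagates into the inversion (a standard perturbation estimate, not the paper's full error analysis):
$\left|\tilde{\lambda}^{-1} - \lambda^{-1}\right| = \frac{|\tilde{\lambda} - \lambda|}{\lambda\,\tilde{\lambda}} \approx \frac{O(1/t_0)}{\lambda^2},$
so the relative error in $\lambda^{-1}$ is $O(1/(t_0\lambda)) \leq O(\kappa/t_0)$, which is at most $\epsilon$ once $t_0 = O(\kappa/\epsilon)$.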
Experimental realization
While a general-purpose quantum computer does not yet exist, one can still attempt a proof-of-concept implementation of the HHL algorithm. This remained a challenge for years, until three groups independently did so in 2013.
On February 5, 2013, a group led by Stefanie Barz reported an implementation of the HHL algorithm on a photonic quantum computer. The implementation used two consecutive entangling gates on the same pair of polarization-encoded qubits. Two separately controlled NOT gates were realized where the successful operation of the first was heralded by a measurement of two ancillary photons. Experimental measurements of the fidelity in the obtained output state ranged from 64.7% to 98.1% due to the influence of higher-order emissions from spontaneous parametric down-conversion.[4]
On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit NMR quantum computer. The implementation was tested using linear systems of 2 variables. Across three experiments, the solution vector was obtained with over 96% fidelity.[5]
On February 18, 2013, Cai et al. reported an experimental demonstration solving 2-by-2 linear systems. The quantum circuit was optimized and compiled into a linear optical network with four photonic qubits and four controlled logic gates, which were used to coherently implement the subroutines of the HHL algorithm. For various input vectors, the realization gave solutions with fidelities ranging from 0.825 to 0.993.[11]
Another experimental demonstration, using NMR to solve an 8×8 system, was reported by Wen et al.[12] in 2018 with the algorithm developed by Subaşı et al.[13]
Proposed applications
Several concrete applications of the HHL algorithm have been proposed, which analyze the algorithm's input assumptions and output guarantees for particular problems.
- Electromagnetic scattering
- Clader et al. gave a version of the HHL algorithm which allows a preconditioner to be included, which can be used to improve the dependence on the condition number. The algorithm was applied to solving for the radar cross-section of a complex shape, which was one of the first examples of an application of the HHL algorithm to a concrete problem.[14]
- Solving linear differential equations
- Berry proposed an algorithm for solving linear, time-dependent initial value problems using the HHL algorithm.[15]
- Solving nonlinear differential equations
- Two groups proposed[16] efficient algorithms for numerically integrating dissipative nonlinear ordinary differential equations. Liu et al.[17] utilized Carleman linearization for second order equations and Lloyd et al.[18] used a mean field linearization method inspired by the nonlinear Schrödinger equation for general order nonlinearities. The resulting linear equations are solved using quantum algorithms for linear differential equations.
- Finite element method
- The finite element method approximates linear partial differential equations using large systems of linear equations. Montanaro and Pallister demonstrate that the HHL algorithm can achieve a polynomial quantum speedup for the resulting linear systems. Exponential speedups are not expected for problems in a fixed dimension or for which the solution meets certain smoothness conditions, such as certain high-order problems in many-body dynamics, or some problems in computational finance.[19]
- Least-squares fitting
- Wiebe et al. gave a quantum algorithm to determine the quality of a least-squares fit. The optimal coefficients cannot be calculated directly from the output of the quantum algorithm, but the algorithm still outputs the optimal least-squares error.[20]
- Machine learning
- Many quantum machine learning algorithms have been developed, a large number of which use the HHL algorithm as a subroutine. The runtime of certain classical algorithms is often polynomial in the size and dimension of a dataset, while the HHL algorithm can give an exponential speedup in some cases. However, a line of work initiated by Ewin Tang has found that, for many of these quantum machine learning algorithms, there are classical algorithms with comparable performance under similar input assumptions, removing the claimed exponential speedups.
- Finance
- Proposals for using HHL in finance include solving partial differential equations for the Black–Scholes equation and determining portfolio optimization via a Markowitz solution.[21]
- Quantum chemistry
- The linearized coupled cluster method in quantum chemistry can be recast as a system of linear equations. In 2023, Baskaran et al. proposed the use of the HHL algorithm to solve the resulting linear systems.[22] The number of state register qubits in the quantum algorithm is the logarithm of the number of excitations, offering an exponential reduction in the number of required qubits when compared to using the variational quantum eigensolver or quantum phase estimation.
Implementation difficulties
Recognizing the importance of the HHL algorithm in the field of quantum machine learning, Scott Aaronson[23] analyzes the caveats and factors that could limit the actual quantum advantage of the algorithm.
- the vector $\vec{b}$ has to be efficiently prepared as a quantum state $|b\rangle$. If the vector is not close to uniform, the state preparation is likely to be costly, and if it requires a number of steps polynomial in $N$, the exponential advantage of HHL would vanish.
- the quantum phase estimation (QPE) step calls for the generation of the unitary $e^{iAt}$, and its controlled application. The efficiency of this step depends on the matrix $A$ being sparse and 'well conditioned' (low $\kappa$). Otherwise, the application of $e^{iAt}$ would scale polynomially with $N$ and, once again, the algorithm's quantum advantage would vanish.
- lastly, the vector $|x\rangle$ is not readily accessible. The HHL algorithm enables learning only a 'summary' of the vector, namely the expectation value $\langle x|M|x\rangle$ of an operator $M$. If the actual values of $\vec{x}$ are needed, HHL would have to be repeated roughly $N$ times, killing the exponential speed-up. However, three ways of avoiding reading out the actual values have been proposed: first, if only some properties of the solution are needed;[24] second, if the results are needed only to feed downstream matrix operations; third, if only a sample of the solution is needed.[25]
See also
References