Classical capacity
In quantum information theory, the classical capacity of a quantum channel is the maximum rate at which classical data can be sent over it error-free in the limit of many uses of the channel.
Background
Mixed states and quantum channels
A mixed quantum state is a unit trace, positive operator known as a density operator, often denoted by $\rho$, $\sigma$, $\omega$, etc. The simplest model for a quantum channel is a classical-quantum channel (cq channel)
$$x \mapsto \rho_x,$$
which sends the classical letter $x$ at the transmitting end to a quantum state $\rho_x$ at the receiving end, with noise possibly introduced in between. The receiver's task is to perform a measurement to determine the input of the sender. If the states $\rho_x$ are perfectly distinguishable from one another (i.e., if they have orthogonal supports, so that $\operatorname{Tr}\{\rho_x \rho_{x'}\} = 0$ for $x \neq x'$) and the channel is noiseless, then perfect decoding is trivially possible. If the states $\rho_x$ all commute with each other, then the channel is effectively classical. The situation becomes nontrivial only when the states have overlapping supports and do not necessarily commute.
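As a concrete illustration, the following Python sketch builds a toy cq channel from two non-orthogonal qubit states and checks the density-operator conditions; the specific states and labels are illustrative choices, not taken from the article.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ket_plus = (ket0 + ket1) / np.sqrt(2)

def dm(psi):
    """Density operator |psi><psi| of a pure state."""
    return np.outer(psi, psi.conj())

# A toy cq channel x -> rho_x mapping the letters 0 and 1 to two
# non-orthogonal pure states (illustrative choice, not from the article).
cq_channel = {0: dm(ket0), 1: dm(ket_plus)}

for rho in cq_channel.values():
    assert np.isclose(np.trace(rho).real, 1.0)         # unit trace
    assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)   # positive semidefinite

# Orthogonal supports would give Tr(rho_0 rho_1) = 0; here the states overlap.
print("Tr(rho_0 rho_1) =", np.trace(cq_channel[0] @ cq_channel[1]).real)  # 0.5
```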
Quantum measurements
The most general way to describe a quantum measurement is with a positive operator-valued measure (POVM), whose elements are typically denoted $\{\Lambda_m\}$. These operators should satisfy positivity and completeness in order to form a valid POVM:
$$\Lambda_m \geq 0 \quad \text{for all } m, \qquad \sum_m \Lambda_m = I.$$
The probabilistic interpretation of quantum mechanics states that if someone measures a quantum state $\rho$ using a measurement device corresponding to the POVM $\{\Lambda_m\}$, then the probability for obtaining outcome $m$ is equal to
$$p(m) = \operatorname{Tr}\{\Lambda_m \rho\},$$
and the post-measurement state is
$$\rho_m' = \frac{\sqrt{\Lambda_m}\,\rho\,\sqrt{\Lambda_m}}{\operatorname{Tr}\{\Lambda_m \rho\}}$$
if the person measuring obtains outcome $m$.
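A minimal numerical sketch of these rules, assuming an arbitrary two-outcome qubit POVM (the state and operators below are illustrative values, and SciPy is used only for the matrix square root):

```python
import numpy as np
from scipy.linalg import sqrtm

# A qubit state and a two-outcome POVM (illustrative values, not from the article).
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)

L0 = np.array([[0.8, 0.1], [0.1, 0.3]], dtype=complex)
L1 = np.eye(2) - L0          # completeness: L0 + L1 = I by construction

for L in (L0, L1):
    assert np.all(np.linalg.eigvalsh(L) >= -1e-12)   # positivity

# Born rule: p(m) = Tr(Lambda_m rho)
probs = [np.trace(L @ rho).real for L in (L0, L1)]
assert np.isclose(sum(probs), 1.0)

# Post-measurement state for outcome 0:
#   sqrt(Lambda_0) rho sqrt(Lambda_0) / Tr(Lambda_0 rho)
sqrt_L0 = sqrtm(L0)
post0 = sqrt_L0 @ rho @ sqrt_L0 / probs[0]
assert np.isclose(np.trace(post0).real, 1.0)
print("p(0), p(1) =", [round(p, 3) for p in probs])
```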
Classical communication over quantum channels
The above is sufficient to consider a classical communication scheme over a cq channel. The sender uses the cq channel to map a classical letter $x$ to a quantum state $\rho_x$, which is then sent through some noisy quantum channel and finally measured using some POVM by the receiver, who obtains another classical letter.
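The whole pipeline can be sketched as follows; the intermediate quantum channel is assumed noiseless here, and the receiver's POVM is a simple (and deliberately suboptimal) computational-basis measurement, both illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ket_plus = (ket0 + ket1) / np.sqrt(2)

# cq encoding: classical letter -> quantum state (illustrative choice of states)
encode = {0: np.outer(ket0, ket0.conj()), 1: np.outer(ket_plus, ket_plus.conj())}

# Receiver's POVM: a projective measurement in the computational basis.
# This is just an illustration of the pipeline, not an optimal decoder.
povm = [np.outer(ket0, ket0.conj()), np.outer(ket1, ket1.conj())]

def transmit(letter):
    """Encode a letter, send it through a (here noiseless) channel, and measure."""
    rho = encode[letter]
    probs = np.array([np.trace(E @ rho).real for E in povm])
    return rng.choice(len(povm), p=probs / probs.sum())

# Estimate how often the letter 1 is decoded incorrectly as 0:
decoded = [transmit(1) for _ in range(10_000)]
print("empirical Pr[decode 0 | sent 1] =", np.mean([d == 0 for d in decoded]))  # about 0.5
```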
Precise definition
The classical capacity can be defined as the maximum rate achievable by a coding scheme for classical information transmission, which can be defined as follows.[1]
Definition. (Coding scheme) An $(n, |M|, \varepsilon)$-coding scheme for classical information transmission using a quantum channel $\mathcal{N}$ is given by a pair of an encoding map $\mathcal{E}$, which assigns to each message $m \in M$ an input state $\rho_m$ of $\mathcal{N}^{\otimes n}$, and a decoding POVM $\{\Lambda_m\}_{m \in M}$ such that
$$\langle \Lambda_m, \mathcal{N}^{\otimes n}(\rho_m) \rangle = \operatorname{Tr}\left\{ \Lambda_m\, \mathcal{N}^{\otimes n}(\rho_m) \right\} \geq 1 - \varepsilon$$
with respect to the Hilbert–Schmidt inner product, for all $m \in M$.
Definition. (Achievable rate) A rate $R \geq 0$ is achievable for the channel $\mathcal{N}$ if either $R = 0$, or $R > 0$ and for every $\varepsilon \in (0,1)$ and $\delta > 0$ there exists, for all sufficiently large $n$, an $(n, |M|, \varepsilon)$-coding scheme such that
$$\frac{1}{n} \log_2 |M| \geq R - \delta$$
and the decoding condition above both hold.
Holevo-Schumacher-Westmoreland theorem
The Holevo information (also called the Holevo quantity) of a quantum channel $\mathcal{N}$ can be defined as
$$\chi(\mathcal{N}) \equiv \max_{\{p_X(x),\, \rho_x\}} I(X;B)_{\rho},$$
where the mutual information is evaluated on a classical-quantum state $\rho_{XB}$ of the form
$$\rho_{XB} \equiv \sum_x p_X(x)\, |x\rangle\langle x|_X \otimes \mathcal{N}(\rho_x)$$
for some probability distribution $p_X(x)$ and density operators $\rho_x$ which can be input to the given channel.
Schumacher and Westmoreland in 1997,[2] and Holevo independently in 1998,[3] proved that the classical capacity of a quantum channel $\mathcal{N}$ can be equivalently defined as
$$C(\mathcal{N}) = \lim_{n \to \infty} \frac{1}{n}\, \chi\!\left( \mathcal{N}^{\otimes n} \right).$$
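For a fixed input ensemble $\{p_X(x), \rho_x\}$, the Holevo quantity of the corresponding cq state equals $I(X;B) = S\!\left(\sum_x p_X(x)\,\mathcal{N}(\rho_x)\right) - \sum_x p_X(x)\, S(\mathcal{N}(\rho_x))$. The sketch below evaluates this for one illustrative ensemble, with the channel taken to be the identity; it does not perform the maximization over ensembles or the regularization.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def holevo_quantity(probs, states):
    """chi = S(sum_x p_x rho_x) - sum_x p_x S(rho_x) = I(X;B) of the cq state."""
    avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(avg) - sum(p * von_neumann_entropy(rho)
                                          for p, rho in zip(probs, states))

# Illustrative ensemble: channel outputs are the pure states |0> and |+>,
# each used with probability 1/2 (not a statement about any particular
# channel's capacity).
ket0 = np.array([1, 0], dtype=complex)
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
states = [np.outer(k, k.conj()) for k in (ket0, ket_plus)]

chi = holevo_quantity([0.5, 0.5], states)
print(f"Holevo quantity of this ensemble: {chi:.4f} bits")  # about 0.6009
```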
Gentle measurement lemma
The gentle measurement lemma states that a measurement succeeding with high probability does not disturb the state too much on average.
Lemma. (Winter) Given an ensemble $\{p_X(x), \rho_x\}$ with expected density operator $\rho \equiv \sum_x p_X(x)\, \rho_x$, suppose that an operator $\Lambda$ with $0 \leq \Lambda \leq I$ succeeds with probability $1 - \varepsilon$ on the state $\rho$:
$$\operatorname{Tr}\{\Lambda \rho\} \geq 1 - \varepsilon.$$
Then the subnormalized state $\sqrt{\Lambda}\, \rho_x \sqrt{\Lambda}$ is close in expected trace distance to the original state $\rho_x$:
$$\mathbb{E}_X \left\{ \left\| \sqrt{\Lambda}\, \rho_X \sqrt{\Lambda} - \rho_X \right\|_1 \right\} \leq 2\sqrt{\varepsilon}.$$
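A numerical sanity check of the lemma on a randomly generated ensemble (the dimension, ensemble size, and choice of $\Lambda$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 4, 3   # dimension and ensemble size (arbitrary)

def random_density_matrix(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def trace_norm(A):
    """||A||_1 = sum of absolute eigenvalues (A is Hermitian here)."""
    return float(np.sum(np.abs(np.linalg.eigvalsh(A))))

probs = rng.dirichlet(np.ones(k))
ensemble = [random_density_matrix(d) for _ in range(k)]
rho_avg = sum(p * r for p, r in zip(probs, ensemble))

# Choose Lambda as the projector onto the 3 dominant eigenvectors of rho_avg,
# so that Tr(Lambda rho_avg) >= 1 - eps for a small eps. Since Lambda is a
# projector, sqrt(Lambda) = Lambda.
evals, evecs = np.linalg.eigh(rho_avg)
Lam = evecs[:, -3:] @ evecs[:, -3:].conj().T
eps = 1 - np.trace(Lam @ rho_avg).real

lhs = sum(p * trace_norm(Lam @ r @ Lam - r) for p, r in zip(probs, ensemble))
print(f"E ||sqrt(L) rho_x sqrt(L) - rho_x||_1 = {lhs:.4f}")
print(f"lemma's bound 2 sqrt(eps)            = {2 * np.sqrt(eps):.4f}")
```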
The gentle measurement lemma has the following analog, which holds for any operators $\rho$, $\sigma$, $\Lambda$ such that $\rho, \sigma \geq 0$ and $0 \leq \Lambda \leq I$:
$$\operatorname{Tr}\{\Lambda \rho\} \leq \operatorname{Tr}\{\Lambda \sigma\} + \left\| \rho - \sigma \right\|_1. \qquad (1)$$
The quantum information-theoretic interpretation of this inequality is that the probability of obtaining outcome $\Lambda$ from a quantum measurement acting on the state $\rho$ is bounded from above by the probability of obtaining outcome $\Lambda$ on the state $\sigma$ summed with the distinguishability of the two states $\rho$ and $\sigma$, as measured by the trace distance.
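Inequality (1) is easy to test numerically on random operators; the dimension and sampling below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3  # arbitrary dimension

def random_density_matrix(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    M = A @ A.conj().T
    return M / np.trace(M)

def random_effect(d):
    """A random operator with 0 <= Lambda <= I."""
    B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = B @ B.conj().T
    return H / (np.linalg.eigvalsh(H)[-1] + 1e-9)   # rescale so largest eigenvalue <= 1

for _ in range(1000):
    rho, sigma, Lam = random_density_matrix(d), random_density_matrix(d), random_effect(d)
    lhs = np.trace(Lam @ rho).real
    rhs = np.trace(Lam @ sigma).real + np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))
    assert lhs <= rhs + 1e-9   # Tr(L rho) <= Tr(L sigma) + ||rho - sigma||_1
print("inequality (1) held on 1000 random samples")
```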
Non-commutative union bound
Lemma. (Sen's bound)[4] For a subnormalized state $\sigma$ such that $\sigma \geq 0$ and $\operatorname{Tr}\{\sigma\} \leq 1$, and for projectors $\Pi_1$, ..., $\Pi_N$, we have
$$\operatorname{Tr}\{\sigma\} - \operatorname{Tr}\left\{ \Pi_N \cdots \Pi_1\, \sigma\, \Pi_1 \cdots \Pi_N \right\} \leq 2\sqrt{\sum_{i=1}^{N} \operatorname{Tr}\left\{ (I - \Pi_i)\, \sigma \right\}}.$$
Intuitively, Sen's bound is a sort of "non-commutative union bound" because it is analogous to the union bound from classical probability theory:
$$\Pr\left\{ (A_1 \cap \cdots \cap A_N)^c \right\} = \Pr\left\{ A_1^c \cup \cdots \cup A_N^c \right\} \leq \sum_{i=1}^{N} \Pr\left\{ A_i^c \right\},$$
where $A_1, \ldots, A_N$ are events. The analogous quantum bound would be
$$\operatorname{Tr}\{\sigma\} - \operatorname{Tr}\left\{ \Pi_N \cdots \Pi_1\, \sigma\, \Pi_1 \cdots \Pi_N \right\} \leq \sum_{i=1}^{N} \operatorname{Tr}\left\{ (I - \Pi_i)\, \sigma \right\},$$
if we think of $\Pi_N \cdots \Pi_1$ as a projector onto the intersection of subspaces. However, this bound only holds if the projectors $\Pi_1$, ..., $\Pi_N$ commute (choosing $\Pi_1 = |+\rangle\langle+|$, $\Pi_2 = |0\rangle\langle 0|$, and $\sigma = |0\rangle\langle 0|$ gives a counterexample). If the projectors are non-commuting, then one must use a non-commutative or quantum union bound such as Sen's.
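The failure of the commutative bound, and the validity of Sen's bound, can be checked directly for this counterexample:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

P1 = np.outer(ket_plus, ket_plus.conj())   # Pi_1 = |+><+|
P2 = np.outer(ket0, ket0.conj())           # Pi_2 = |0><0|
sigma = np.outer(ket0, ket0.conj())        # sigma = |0><0|
I = np.eye(2)

lhs = np.trace(sigma).real - np.trace(P2 @ P1 @ sigma @ P1 @ P2).real
union_bound_rhs = sum(np.trace((I - P) @ sigma).real for P in (P1, P2))
sens_bound_rhs = 2 * np.sqrt(union_bound_rhs)

print(f"LHS                      = {lhs:.3f}")              # 0.750
print(f"commutative union bound  = {union_bound_rhs:.3f}")  # 0.500  (violated)
print(f"Sen's bound              = {sens_bound_rhs:.3f}")   # 1.414  (holds)
```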
Proof
We now prove the HSW theorem with Sen's non-commutative union bound. We first describe how the code is chosen, then give the construction of Bob's POVM, and finally analyze the error of the protocol.
Encoding map
We first describe how Alice and Bob agree on a random choice of code. They have the channel $x \to \rho_x$ and a distribution $p_X(x)$. They choose $M$ classical sequences $x^n$ according to the IID distribution $p_{X^n}(x^n) \equiv p_X(x_1) \cdots p_X(x_n)$. After selecting them, they label them with indices as $\{x^n(m)\}_{m \in [M]}$, where $[M] \equiv \{1, \ldots, M\}$. This leads to the following quantum codewords:
$$\rho_{x^n(m)} = \rho_{x_1(m)} \otimes \cdots \otimes \rho_{x_n(m)}.$$
The quantum codebook is then $\{\rho_{x^n(m)}\}_{m \in [M]}$. The average state of the codebook is then
$$\mathbb{E}_{X^n}\left\{ \rho_{X^n} \right\} = \sum_{x^n} p_{X^n}(x^n)\, \rho_{x^n} = \rho^{\otimes n}, \qquad (2)$$
where $\rho \equiv \sum_x p_X(x)\, \rho_x$.
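A small sketch of the random codebook construction, estimating the average state (2) by sampling codewords; the input alphabet, states, block length, and sample count are illustrative choices:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(3)

ket0 = np.array([1, 0], dtype=complex)
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
states = {0: np.outer(ket0, ket0.conj()), 1: np.outer(ket_plus, ket_plus.conj())}
p_X = np.array([0.5, 0.5])          # illustrative input distribution
n = 3                               # block length (illustrative)

rho = sum(p * states[x] for x, p in enumerate(p_X))
rho_tensor_n = reduce(np.kron, [rho] * n)   # rho^{\otimes n}

def random_codeword():
    """Sample x^n i.i.d. from p_X and form rho_{x^n} = rho_{x_1} (x) ... (x) rho_{x_n}."""
    xn = rng.choice(len(p_X), size=n, p=p_X)
    return reduce(np.kron, [states[int(x)] for x in xn])

# The empirical average over many sampled codewords approaches rho^{\otimes n},
# in line with equation (2).
samples = [random_codeword() for _ in range(20_000)]
empirical_avg = sum(samples) / len(samples)
print("max deviation from rho^{otimes n}:",
      np.max(np.abs(empirical_avg - rho_tensor_n)))   # small, shrinks with more samples
```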
Decoding POVM construction
Sen's bound from the above lemma suggests a method for Bob to decode a state that Alice transmits. Bob should first ask "Is the received state in the average typical subspace?" He can do this operationally by performing a typical subspace measurement corresponding to the typical projector $\Pi^n_{\rho,\delta}$. Next, he asks in sequential order, "Is the received codeword in the $m$-th conditionally typical subspace?" This is in some sense equivalent to the question, "Is the received codeword the $m$-th transmitted codeword?" He can ask these questions operationally by performing the measurements corresponding to the conditionally typical projectors $\Pi_{\rho_{x^n(m)},\delta}$.
Why should this sequential decoding scheme work well? The reason is that the transmitted codeword lies in the typical subspace on average:
$$\mathbb{E}_{X^n}\left\{ \operatorname{Tr}\left\{ \Pi^n_{\rho,\delta}\, \rho_{X^n} \right\} \right\} = \operatorname{Tr}\left\{ \Pi^n_{\rho,\delta}\, \mathbb{E}_{X^n}\left\{ \rho_{X^n} \right\} \right\} = \operatorname{Tr}\left\{ \Pi^n_{\rho,\delta}\, \rho^{\otimes n} \right\} \geq 1 - \varepsilon,$$
where the inequality follows from the unit probability property of the typical subspace (for sufficiently large $n$). Also, the projectors $\Pi_{\rho_{x^n(m)},\delta}$ are "good detectors" for the states $\rho_{x^n(m)}$ (on average) because the following condition holds from conditional quantum typicality:
$$\mathbb{E}_{X^n}\left\{ \operatorname{Tr}\left\{ \Pi_{\rho_{X^n},\delta}\, \rho_{X^n} \right\} \right\} \geq 1 - \varepsilon.$$
Error analysis
The probability of detecting the $m$-th codeword correctly under our sequential decoding scheme is equal to
$$\operatorname{Tr}\left\{ \Pi_m\, \hat{\Pi}_{m-1} \cdots \hat{\Pi}_1\, \Pi\, \rho_{x^n(m)}\, \Pi\, \hat{\Pi}_1 \cdots \hat{\Pi}_{m-1}\, \Pi_m \right\},$$
where we make the abbreviations $\Pi \equiv \Pi^n_{\rho,\delta}$, $\Pi_m \equiv \Pi_{\rho_{x^n(m)},\delta}$, and $\hat{\Pi}_i \equiv I - \Pi_i$. (Observe that we project into the average typical subspace just once.) Thus, the probability of an incorrect detection for the $m$-th codeword is given by
$$1 - \operatorname{Tr}\left\{ \Pi_m\, \hat{\Pi}_{m-1} \cdots \hat{\Pi}_1\, \Pi\, \rho_{x^n(m)}\, \Pi\, \hat{\Pi}_1 \cdots \hat{\Pi}_{m-1}\, \Pi_m \right\},$$
and the average error probability of this scheme is equal to
$$\bar{p}_e \equiv \frac{1}{M} \sum_{m} \left( 1 - \operatorname{Tr}\left\{ \Pi_m\, \hat{\Pi}_{m-1} \cdots \hat{\Pi}_1\, \Pi\, \rho_{x^n(m)}\, \Pi\, \hat{\Pi}_1 \cdots \hat{\Pi}_{m-1}\, \Pi_m \right\} \right).$$
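The success probability above is an explicit matrix expression. The toy computation below evaluates it for $m = 2$, with small random projectors standing in for the typical and conditionally typical projectors, so it illustrates only the algebraic form of the decoder, not actual typicality.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4  # toy dimension

def random_projector(d, r):
    """Rank-r projector onto a random subspace (stand-in for a typical projector)."""
    A = rng.normal(size=(d, r)) + 1j * rng.normal(size=(d, r))
    Q, _ = np.linalg.qr(A)
    return Q @ Q.conj().T

def random_density_matrix(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    M = A @ A.conj().T
    return M / np.trace(M)

Pi   = random_projector(d, 3)     # stand-in for the average typical projector
Pi_1 = random_projector(d, 2)     # stand-in for codeword 1's conditionally typical projector
Pi_2 = random_projector(d, 2)     # stand-in for codeword 2's conditionally typical projector
rho_2 = random_density_matrix(d)  # stand-in for the transmitted codeword state (m = 2)

Pi_1_hat = np.eye(d) - Pi_1       # "codeword 1 was NOT detected"

# Probability of correctly detecting codeword m = 2 under sequential decoding:
#   Tr{ Pi_2  Pi_1_hat  Pi  rho_2  Pi  Pi_1_hat  Pi_2 }
A = Pi_2 @ Pi_1_hat @ Pi
p_correct = np.trace(A @ rho_2 @ A.conj().T).real
print(f"sequential-decoding success probability (toy numbers): {p_correct:.3f}")
```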
Instead of analyzing the average error probability, we analyze the expectation of the average error probability, where the expectation is with respect to the random choice of code:
$$\mathbb{E}_{X^n}\left\{ \bar{p}_e \right\} = 1 - \frac{1}{M} \sum_{m} \mathbb{E}_{X^n}\left\{ \operatorname{Tr}\left\{ \Pi_m\, \hat{\Pi}_{m-1} \cdots \hat{\Pi}_1\, \Pi\, \rho_{X^n(m)}\, \Pi\, \hat{\Pi}_1 \cdots \hat{\Pi}_{m-1}\, \Pi_m \right\} \right\}. \qquad (3)$$
Our first step is to apply Sen's bound to the above quantity. But before doing so, we should rewrite the above expression just slightly, by observing that
$$1 - \operatorname{Tr}\left\{ \Pi_m \hat{\Pi}_{m-1} \cdots \hat{\Pi}_1\, \Pi\, \rho_{X^n(m)}\, \Pi\, \hat{\Pi}_1 \cdots \hat{\Pi}_{m-1} \Pi_m \right\} = \left( 1 - \operatorname{Tr}\left\{ \Pi\, \rho_{X^n(m)}\, \Pi \right\} \right) + \left( \operatorname{Tr}\left\{ \Pi\, \rho_{X^n(m)}\, \Pi \right\} - \operatorname{Tr}\left\{ \Pi_m \hat{\Pi}_{m-1} \cdots \hat{\Pi}_1\, \Pi\, \rho_{X^n(m)}\, \Pi\, \hat{\Pi}_1 \cdots \hat{\Pi}_{m-1} \Pi_m \right\} \right).$$
Substituting into (3) (and forgetting about the small term $1 - \operatorname{Tr}\{\Pi\, \rho_{X^n(m)}\, \Pi\}$ for now) gives an upper bound of
$$\mathbb{E}_{X^n}\left\{ \operatorname{Tr}\left\{ \Pi\, \rho_{X^n(m)}\, \Pi \right\} - \operatorname{Tr}\left\{ \Pi_m \hat{\Pi}_{m-1} \cdots \hat{\Pi}_1\, \Pi\, \rho_{X^n(m)}\, \Pi\, \hat{\Pi}_1 \cdots \hat{\Pi}_{m-1} \Pi_m \right\} \right\}.$$
We then apply Sen's bound to this expression with $\sigma = \Pi\, \rho_{X^n(m)}\, \Pi$ and the sequential projectors as $\Pi_m$, $\hat{\Pi}_{m-1}$, ..., $\hat{\Pi}_1$. This gives the upper bound
$$\mathbb{E}_{X^n}\left\{ 2\sqrt{ \operatorname{Tr}\left\{ \left( I - \Pi_m \right) \Pi\, \rho_{X^n(m)}\, \Pi \right\} + \sum_{i=1}^{m-1} \operatorname{Tr}\left\{ \Pi_i\, \Pi\, \rho_{X^n(m)}\, \Pi \right\} } \right\}.$$
Due to concavity of the square root, we can bound this expression from above by
$$2\sqrt{ \mathbb{E}_{X^n}\left\{ \operatorname{Tr}\left\{ \left( I - \Pi_m \right) \Pi\, \rho_{X^n(m)}\, \Pi \right\} + \sum_{i=1}^{m-1} \operatorname{Tr}\left\{ \Pi_i\, \Pi\, \rho_{X^n(m)}\, \Pi \right\} \right\} } \leq 2\sqrt{ \mathbb{E}_{X^n}\left\{ \operatorname{Tr}\left\{ \left( I - \Pi_m \right) \Pi\, \rho_{X^n(m)}\, \Pi \right\} \right\} + \sum_{i \neq m} \mathbb{E}_{X^n}\left\{ \operatorname{Tr}\left\{ \Pi_i\, \Pi\, \rho_{X^n(m)}\, \Pi \right\} \right\} },$$
where the second bound follows by summing over all of the codewords not equal to the $m$-th codeword (this sum can only be larger).
We now focus exclusively on showing that the term inside the square root can be made small. Consider the first term:
$$\mathbb{E}_{X^n}\left\{ \operatorname{Tr}\left\{ \left( I - \Pi_{X^n(m)} \right) \Pi\, \rho_{X^n(m)}\, \Pi \right\} \right\} \leq \mathbb{E}_{X^n}\left\{ \operatorname{Tr}\left\{ \left( I - \Pi_{X^n(m)} \right) \rho_{X^n(m)} \right\} \right\} + \mathbb{E}_{X^n}\left\{ \left\| \rho_{X^n(m)} - \Pi\, \rho_{X^n(m)}\, \Pi \right\|_1 \right\} \leq \varepsilon + 2\sqrt{\varepsilon},$$
where the first inequality follows from (1) and the second inequality follows from the gentle operator lemma and the properties of unconditional and conditional typicality. Consider now the second term and the following chain of inequalities:
$$\mathbb{E}_{X^n}\left\{ \operatorname{Tr}\left\{ \Pi_{X^n(i)}\, \Pi\, \rho_{X^n(m)}\, \Pi \right\} \right\} = \operatorname{Tr}\left\{ \mathbb{E}_{X^n}\left\{ \Pi_{X^n(i)} \right\} \Pi\, \mathbb{E}_{X^n}\left\{ \rho_{X^n(m)} \right\} \Pi \right\} = \operatorname{Tr}\left\{ \mathbb{E}_{X^n}\left\{ \Pi_{X^n(i)} \right\} \Pi\, \rho^{\otimes n}\, \Pi \right\} \leq 2^{-n\left[ H(B) - \delta \right]} \operatorname{Tr}\left\{ \mathbb{E}_{X^n}\left\{ \Pi_{X^n(i)} \right\} \Pi \right\}.$$
The first equality follows because the codewords $X^n(i)$ and $X^n(m)$ are independent since they are different. The second equality follows from (2). The first inequality follows from the equipartition property of the typical subspace, $\Pi\, \rho^{\otimes n}\, \Pi \leq 2^{-n[H(B)-\delta]}\, \Pi$. Continuing, we have
$$\leq 2^{-n\left[ H(B) - \delta \right]}\ \mathbb{E}_{X^n}\left\{ \operatorname{Tr}\left\{ \Pi_{X^n(i)} \right\} \right\} \leq 2^{-n\left[ H(B) - \delta \right]}\ 2^{n\left[ H(B|X) + \delta \right]} = 2^{-n\left[ H(B) - H(B|X) - 2\delta \right]} = 2^{-n\left[ I(X;B) - 2\delta \right]}.$$
The first inequality follows from $\Pi \leq I$ and exchanging the trace with the expectation. The second inequality follows from the bound on the dimension of a conditionally typical subspace, $\operatorname{Tr}\left\{ \Pi_{X^n(i)} \right\} \leq 2^{n[H(B|X)+\delta]}$. The next two are straightforward.
Putting everything together (and reinstating the small term dropped earlier, whose expectation is at most $\varepsilon$), we get our final bound on the expectation of the average error probability:
$$\mathbb{E}_{X^n}\left\{ \bar{p}_e \right\} \leq \varepsilon + 2\sqrt{ \varepsilon + 2\sqrt{\varepsilon} + (M-1)\, 2^{-n\left[ I(X;B) - 2\delta \right]} }.$$
Thus, as long as we choose $M = 2^{n\left[ I(X;B) - 3\delta \right]}$, i.e. a rate just below the Holevo information, the last term inside the square root vanishes with $n$, and there exists a code with vanishing error probability.
Non-additivity of the classical capacity
The HSW theorem can be seen as expressing the classical capacity of a channel $\mathcal{N}$ in terms of a regularization of the Holevo $\chi$-quantity over multiple uses of $\mathcal{N}$. An open problem in quantum information theory was to determine whether the $\chi$-quantity is additive, which would imply that the classical capacity could be expressed using a single use of the channel, i.e. $C(\mathcal{N}) = \chi(\mathcal{N})$.[5] However, channels giving counterexamples to this statement were eventually given by Matthew Hastings in 2009.[6] Follow-up work showed that this is a generic phenomenon, in the sense that a channel chosen randomly from a natural probability distribution will give a counterexample with high probability. (This stands in contrast to proofs using the probabilistic method, where random sampling is shown to give a counterexample only with nonzero probability.) A proof of this can be given using Dvoretzky's theorem.[5]
References
- Wilde, Mark M. (2017), Quantum Information Theory, Cambridge University Press, arXiv:1106.1445, Bibcode:2011arXiv1106.1445W, doi:10.1017/9781316809976.001, S2CID 2515538
- Guha, Saikat; Tan, Si-Hui; Wilde, Mark M. (2012), "Explicit capacity-achieving receivers for optical communication and quantum reading", IEEE International Symposium on Information Theory Proceedings (ISIT 2012), pp. 551–555, arXiv:1202.0518, doi:10.1109/ISIT.2012.6284251, ISBN 978-1-4673-2579-0, S2CID 8786400.