Eaton's inequality
From Wikipedia, the free encyclopedia
In probability theory, Eaton's inequality is a bound on the largest values of a linear combination of bounded random variables. This inequality was described in 1974 by Morris L. Eaton.[1]
Statement of the inequality
Let {Xi} be a set of n real independent random variables, each with an expected value of zero and bounded in absolute value by 1 (|Xi| ≤ 1, for 1 ≤ i ≤ n). The variates do not have to be identically or symmetrically distributed. Let {ai} be a set of n fixed real numbers with

$$ \sum_{i=1}^{n} a_i^2 = 1. $$
Eaton showed that

$$ P\left( \left| \sum_{i=1}^{n} a_i X_i \right| \ge k \right) \le 2 \inf_{0 \le c \le k} \int_c^{\infty} \left( \frac{z - c}{k - c} \right)^{3} \phi(z) \, dz = 2 B_E(k), $$

where φ(z) is the probability density function of the standard normal distribution.
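A short numerical sketch of this bound, assuming the integral form of B_E(k) displayed above; the routine name eaton_bound is illustrative and SciPy is used for the integration and the minimization over c:

```python
# Numerical evaluation of the two-sided Eaton bound 2*B_E(k).
# Assumes the reconstructed form B_E(k) = inf_{0<=c<k} ∫_c^∞ ((z-c)/(k-c))^3 φ(z) dz.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def eaton_bound(k: float) -> float:
    """Return 2*B_E(k), the two-sided Eaton tail bound at level k."""
    def inner(c: float) -> float:
        # ∫_c^∞ ((z - c)/(k - c))^3 φ(z) dz
        integrand = lambda z: ((z - c) / (k - c)) ** 3 * norm.pdf(z)
        value, _ = quad(integrand, c, np.inf)
        return value

    # Minimize over c in [0, k); stay a small margin away from c = k.
    res = minimize_scalar(inner, bounds=(0.0, k - 1e-6), method="bounded")
    return 2.0 * res.fun

if __name__ == "__main__":
    for k in (1.5, 2.0, 3.0):
        print(k, eaton_bound(k))
```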
A related bound is Edelman's[citation needed]

$$ P\left( \left| \sum_{i=1}^{n} a_i X_i \right| \ge k \right) \le 2 \left( 1 - \Phi\left( k - \frac{1.5}{k} \right) \right) = 2 B_{Ed}(k), $$

where Φ(x) is the cumulative distribution function of the standard normal distribution.
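Assuming the displayed form above, Edelman's bound reduces to a one-line evaluation of the normal survival function:

```python
# Edelman-type bound 2*(1 - Φ(k - 1.5/k)), as displayed above.
from scipy.stats import norm

def edelman_bound(k: float) -> float:
    return 2.0 * norm.sf(k - 1.5 / k)   # sf(x) = 1 - Φ(x)

print(edelman_bound(2.0))  # two-sided tail bound at k = 2
```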
Pinelis has shown that Eaton's bound can be sharpened:[2]

$$ B_{EP}(k) = \min\left\{ 1, \, k^{-2}, \, 2 B_E(k) \right\}. $$
A set of critical values for Eaton's bound has been determined.[3]
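A sketch of Pinelis's sharpened form above, assuming a helper that evaluates 2B_E(k), such as the eaton_bound routine sketched earlier (both names are illustrative):

```python
# Pinelis's sharpened bound: min(1, k^-2, 2*B_E(k)).
# `two_B_E` is whatever routine evaluates 2*B_E(k), e.g. the eaton_bound
# sketch shown earlier (a hypothetical helper, not a library function).
from typing import Callable

def pinelis_bound(k: float, two_B_E: Callable[[float], float]) -> float:
    return min(1.0, k ** -2, two_B_E(k))
```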
Related inequalities
Let {ai} be a set of n independent Rademacher random variables – P(ai = 1) = P(ai = −1) = 1/2. Let Z be a normally distributed variate with mean 0 and variance 1. Let {bi} be a set of n fixed real numbers such that

$$ \sum_{i=1}^{n} b_i^2 = 1. $$
This last condition is required by the Riesz–Fischer theorem, which states that

$$ a_1 b_1 + a_2 b_2 + \cdots $$

will converge if and only if

$$ b_1^2 + b_2^2 + \cdots $$

is finite.
Then

$$ E\left( f(a_1 b_1 + \cdots + a_n b_n) \right) \le E\left( f(Z) \right) $$

for f(x) = |x|^p. The case for p ≥ 3 was proved by Whittle[4] and p ≥ 2 was proved by Haagerup.[5]
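The moment comparison can be illustrated with a Monte Carlo check; the weights below are arbitrary and are normalized so that the bi² sum to 1, and the sample size is illustrative:

```python
# Monte Carlo sanity check of E|Σ a_i b_i|^p <= E|Z|^p for Rademacher signs a_i,
# fixed weights b_i with sum(b_i^2) = 1, and p >= 2 (Haagerup's range).
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)

b = np.array([3.0, 1.0, 2.0, 0.5])
b = b / np.linalg.norm(b)            # enforce sum(b_i^2) = 1

p = 2.5
n_samples = 200_000

a = rng.choice([-1.0, 1.0], size=(n_samples, b.size))   # Rademacher signs
S = a @ b                                                # weighted sums

lhs = np.mean(np.abs(S) ** p)
# E|Z|^p for standard normal Z: 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)
rhs = 2 ** (p / 2) * gamma((p + 1) / 2) / np.sqrt(np.pi)

print(f"E|S|^p ~ {lhs:.4f}  <=  E|Z|^p = {rhs:.4f}")
```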
If f(x) = e^{λx} with λ ≥ 0 then

$$ E\left( f(a_1 b_1 + \cdots + a_n b_n) \right) \le E\left( f(Z) \right) = e^{\lambda^2 / 2}. $$
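Because E(e^{λ(a1b1 + ⋯ + anbn)}) factors into a product of cosh(λbi) terms for independent Rademacher signs, this exponential-moment comparison can be checked exactly; the weights and λ below are arbitrary:

```python
# Exact check of E[exp(λ S)] = Π cosh(λ b_i) <= exp(λ²/2) when sum(b_i²) = 1,
# which is the exponential-moment comparison displayed above.
import math

b = [0.6, 0.8]                      # satisfies 0.36 + 0.64 = 1
lam = 1.7

mgf_S = math.prod(math.cosh(lam * bi) for bi in b)
mgf_Z = math.exp(lam ** 2 / 2)      # E[exp(λ Z)] for standard normal Z

print(mgf_S, "<=", mgf_Z)
```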
Let

$$ S_n = a_1 b_1 + \cdots + a_n b_n. $$

Then[7]

$$ P(S_n \ge x) \le \frac{2 e^3}{9} \, P(Z \ge x). $$

The constant 2e³/9 in the last inequality is approximately 4.4634.
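The constant can be verified directly:

```python
# The constant 2*e^3/9 in the inequality above.
import math
print(2 * math.e ** 3 / 9)   # ≈ 4.4634
```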
An alternative bound is also known:[8]

$$ P(S_n \ge x) \le e^{-x^2 / 2}. $$

This last bound is related to Hoeffding's inequality.
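A short numerical comparison of the two tail bounds displayed above (the exact displayed forms are reconstructions) shows that the Gaussian-comparison bound is tighter for large x while e^{−x²/2} is tighter near the origin:

```python
# Compare the two tail bounds: (2e^3/9)*P(Z >= x) versus exp(-x^2/2).
import math
from scipy.stats import norm

C = 2 * math.e ** 3 / 9                      # constant from the bound above
for x in (0.5, 1.0, 2.0, 3.0):
    gaussian_comparison = C * norm.sf(x)     # (2e^3/9) * P(Z >= x)
    hoeffding_style = math.exp(-x ** 2 / 2)  # exp(-x^2/2)
    print(x, gaussian_comparison, hoeffding_style)
```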
In the uniform case where all the bi = n^{−1/2}, the maximum value of Sn is n^{1/2}. In this case van Zuijlen has shown that[9]

$$ P\left( \mu - \sigma \le S_n \le \mu + \sigma \right) \ge 0.5, $$

where μ is the mean and σ is the standard deviation of the sum.
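A Monte Carlo check of this uniform case; here the sum S_n has mean 0 and standard deviation 1, so the event is |S_n| ≤ 1, and the sample size and seed are arbitrary:

```python
# Monte Carlo check of the uniform case b_i = n^{-1/2}: the probability that
# S_n lies within one standard deviation of its mean (here μ = 0, σ = 1)
# should be at least 0.5 by van Zuijlen's result as stated above.
import numpy as np

rng = np.random.default_rng(1)
n, n_samples = 7, 100_000

signs = rng.choice([-1.0, 1.0], size=(n_samples, n))
S = signs.sum(axis=1) / np.sqrt(n)          # S_n with b_i = n^{-1/2}

print(np.mean(np.abs(S) <= 1.0))            # empirically >= 0.5
```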
References