McDiarmid's inequality
In probability theory and theoretical computer science, McDiarmid's inequality (named after Colin McDiarmid [1]) is a concentration inequality which bounds the deviation between the sampled value and the expected value of certain functions when they are evaluated on independent random variables. McDiarmid's inequality applies to functions that satisfy a bounded differences property, meaning that replacing a single argument to the function while leaving all other arguments unchanged cannot cause too large a change in the value of the function.
Statement
A function $f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ satisfies the bounded differences property if substituting the value of the $i$th coordinate $x_i$ changes the value of $f$ by at most $c_i$. More formally, if there are constants $c_1, c_2, \ldots, c_n$ such that for all $i \in [n]$, and all $x_1 \in \mathcal{X}_1, x_2 \in \mathcal{X}_2, \ldots, x_n \in \mathcal{X}_n$,

$$\sup_{x_i' \in \mathcal{X}_i} \left| f(x_1, \ldots, x_{i-1}, x_i, x_{i+1}, \ldots, x_n) - f(x_1, \ldots, x_{i-1}, x_i', x_{i+1}, \ldots, x_n) \right| \leq c_i.$$
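For example, the sample mean $f(x_1, \ldots, x_n) = \tfrac{1}{n}\sum_{i=1}^n x_i$ on $[0,1]^n$ satisfies the property with $c_i = 1/n$ for every coordinate. As an illustrative sketch (not part of the original article; the helper name `bounded_difference_constants` is a hypothetical choice), the constants of a function over a small finite domain can be recovered by brute force:

```python
import itertools

def bounded_difference_constants(f, domains):
    """Brute-force the constants c_i: the largest change in f when only
    coordinate i is replaced, over a finite product domain."""
    n = len(domains)
    cs = [0.0] * n
    for x in itertools.product(*domains):
        for i in range(n):
            for xi_new in domains[i]:
                y = x[:i] + (xi_new,) + x[i + 1:]
                cs[i] = max(cs[i], abs(f(x) - f(y)))
    return cs

# The mean of three {0,1}-valued coordinates: each c_i should be 1/3.
mean = lambda x: sum(x) / len(x)
print(bounded_difference_constants(mean, [(0, 1)] * 3))
```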
McDiarmid's Inequality[2] — Let $f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ satisfy the bounded differences property with bounds $c_1, c_2, \ldots, c_n$.
Consider independent random variables $X_1, X_2, \ldots, X_n$ where $X_i \in \mathcal{X}_i$ for all $i$. Then, for any $\varepsilon > 0$,

$$\operatorname{P}\left(f(X_1, X_2, \ldots, X_n) - \mathbb{E}[f(X_1, X_2, \ldots, X_n)] \geq \varepsilon\right) \leq \exp\left(-\frac{2\varepsilon^2}{\sum_{i=1}^{n} c_i^2}\right),$$

$$\operatorname{P}\left(f(X_1, X_2, \ldots, X_n) - \mathbb{E}[f(X_1, X_2, \ldots, X_n)] \leq -\varepsilon\right) \leq \exp\left(-\frac{2\varepsilon^2}{\sum_{i=1}^{n} c_i^2}\right),$$

and as an immediate consequence,

$$\operatorname{P}\left(\left|f(X_1, X_2, \ldots, X_n) - \mathbb{E}[f(X_1, X_2, \ldots, X_n)]\right| \geq \varepsilon\right) \leq 2\exp\left(-\frac{2\varepsilon^2}{\sum_{i=1}^{n} c_i^2}\right).$$
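As a quick sanity check (an illustrative simulation, not from the article; parameter choices are arbitrary), one can compare the empirical deviation probability of the sample mean of $n$ fair coins, for which each $c_i = 1/n$ and the two-sided bound evaluates to $2\exp(-2\varepsilon^2 n)$, against the bound itself:

```python
import math, random

# f = sample mean of n fair coin flips; each c_i = 1/n, so the two-sided
# McDiarmid bound is 2*exp(-2*eps^2*n).
n, eps, trials = 100, 0.1, 100_000
hits = sum(
    abs(sum(random.random() < 0.5 for _ in range(n)) / n - 0.5) >= eps
    for _ in range(trials)
)
print("empirical deviation probability:", hits / trials)  # ~0.057
print("McDiarmid bound:", 2 * math.exp(-2 * eps**2 * n))  # ~0.271
```

The empirical frequency stays below the bound, as the theorem guarantees.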
Extensions
Unbalanced distributions
A stronger bound may be given when the arguments to the function are sampled from unbalanced distributions, such that resampling a single argument rarely causes a large change to the function value.
McDiarmid's Inequality (unbalanced)[3][4] — Let $f: \mathcal{X}^n \rightarrow \mathbb{R}$ satisfy the bounded differences property with bounds $c, c, \ldots, c$.
Consider independent random variables $X_1, X_2, \ldots, X_n \in \mathcal{X}$ drawn from a distribution where there is a particular value $\chi_0 \in \mathcal{X}$ which occurs with probability $1 - p$. Then, for any $\varepsilon > 0$,

$$\operatorname{P}\left(\left|f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)]\right| \geq \varepsilon\right) \leq 2\exp\left(-\frac{\varepsilon^2}{8 c^2 p n}\right) \quad \text{for} \quad \varepsilon \leq 2cpn.$$
This may be used, for example, to characterize the value of a function evaluated on sparse random graphs and hypergraphs, since in a sparse random graph it is much more likely for any particular edge to be missing than to be present.
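To illustrate why such a refinement helps (an illustrative simulation with arbitrary parameter choices, not from the article), take $f$ to be the sum of $n$ indicators that are each 1 with small probability $p$, so each $c_i = 1$; the baseline bound is nearly vacuous while the empirical concentration is far tighter, which is the slack the unbalanced form recovers:

```python
import math, random

# f = sum of n rare indicators (each 1 with probability p, else 0), so c_i = 1.
n, p, eps, trials = 1000, 0.01, 15, 20_000
mean = n * p
hits = sum(
    abs(sum(random.random() < p for _ in range(n)) - mean) >= eps
    for _ in range(trials)
)
print("empirical deviation probability:", hits / trials)  # ~3e-5, often prints 0.0
print("baseline McDiarmid bound:", 2 * math.exp(-2 * eps**2 / n))  # ~1.3, vacuous
```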
Differences bounded with high probability
McDiarmid's inequality may be extended to the case where the function being analyzed does not strictly satisfy the bounded differences property, but large differences remain very rare.
McDiarmid's Inequality (Differences bounded with high probability)[5] — Let $f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ be a function and $\mathcal{Y} \subseteq \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n$ be a subset of its domain and let $c_1, c_2, \ldots, c_n \geq 0$ be constants such that for all pairs $(x_1, \ldots, x_n) \in \mathcal{Y}$ and $(x_1', \ldots, x_n') \in \mathcal{Y}$,

$$\left|f(x_1, \ldots, x_n) - f(x_1', \ldots, x_n')\right| \leq \sum_{i:\, x_i \neq x_i'} c_i.$$

Consider independent random variables $X_1, X_2, \ldots, X_n$ where $X_i \in \mathcal{X}_i$ for all $i$. Let $p = 1 - \operatorname{P}\left((X_1, \ldots, X_n) \in \mathcal{Y}\right)$ and let $m = \mathbb{E}\left[f(X_1, \ldots, X_n) \mid (X_1, \ldots, X_n) \in \mathcal{Y}\right]$. Then, for any $\varepsilon > 0$,

$$\operatorname{P}\left(f(X_1, \ldots, X_n) - m \geq \varepsilon\right) \leq p + \exp\left(-\frac{2 \max\left(0,\ \varepsilon - p \sum_{i=1}^{n} c_i\right)^2}{\sum_{i=1}^{n} c_i^2}\right),$$

and as an immediate consequence,

$$\operatorname{P}\left(\left|f(X_1, \ldots, X_n) - m\right| \geq \varepsilon\right) \leq 2p + 2\exp\left(-\frac{2 \max\left(0,\ \varepsilon - p \sum_{i=1}^{n} c_i\right)^2}{\sum_{i=1}^{n} c_i^2}\right).$$
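Since this bound is a closed-form expression, it is cheap to evaluate. The sketch below (a hypothetical helper, simply mirroring the two-sided form stated above) computes it and reduces to the basic two-sided bound when $p = 0$:

```python
import math

def mcdiarmid_high_prob_bound(eps, cs, p):
    """Two-sided bound of the high-probability variant stated above:
    2p + 2*exp(-2*max(0, eps - p*sum(cs))^2 / sum(c_i^2))."""
    slack = max(0.0, eps - p * sum(cs))
    return 2 * p + 2 * math.exp(-2 * slack**2 / sum(c * c for c in cs))

cs = [0.1] * 100                                 # sum of squares = 1.0
print(mcdiarmid_high_prob_bound(1.0, cs, 0.0))   # 2*exp(-2) ~ 0.271, the basic case
print(mcdiarmid_high_prob_bound(1.0, cs, 1e-4))  # slightly larger
```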
There exist stronger refinements to this analysis in some distribution-dependent scenarios,[6] such as those that arise in learning theory.
Sub-Gaussian and sub-exponential norms
Let the $k$th centered conditional version of a function $f$ be

$$f_k(X)(x) := f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n) - \mathbb{E}_{X_k'}\left[f(x_1, \ldots, x_{k-1}, X_k', x_{k+1}, \ldots, x_n)\right],$$

so that $f_k(X)$ is a random variable depending on random values of $x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n$.
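To make the definition concrete, here is a small numerical sketch (illustrative only; the choice of $f$ and the Monte Carlo sizes are arbitrary): with the other coordinates held fixed, the $k$th centered conditional version subtracts the conditional mean over a fresh draw of the $k$th coordinate, so it averages to roughly zero when $X_k$ is resampled:

```python
import random

n, k, samples = 5, 2, 200_000
f = lambda xs: max(xs)                        # any function of n arguments
rest = [random.random() for _ in range(n)]    # fixed values of the other coordinates

def with_slot(x_k):                           # f with coordinate k replaced
    return f(rest[:k] + [x_k] + rest[k + 1:])

cond_mean = sum(with_slot(random.random()) for _ in range(samples)) / samples
f_k = lambda x_k: with_slot(x_k) - cond_mean  # k-th centered conditional version

print(round(sum(f_k(random.random()) for _ in range(samples)) / samples, 3))  # ~0.0
```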
McDiarmid's Inequality (Sub-Gaussian norm)[7][8] — Let $f: \mathcal{X}_1 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ be a function. Consider independent random variables $X = (X_1, X_2, \ldots, X_n)$ where $X_i \in \mathcal{X}_i$ for all $i$.
Let $f_k(X)$ refer to the $k$th centered conditional version of $f$. Let $\|\cdot\|_{\psi_2}$ denote the sub-Gaussian norm of a random variable.
Then, for any $\varepsilon > 0$,

$$\operatorname{P}\left(\left|f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)]\right| \geq \varepsilon\right) \leq 2\exp\left(-\frac{\varepsilon^2}{32 e \left\|\sum_{k \in [n]} \|f_k(X)\|_{\psi_2}^2\right\|_\infty}\right).$$
McDiarmid's Inequality (Sub-exponential norm)[8] — Let $f: \mathcal{X}_1 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ be a function. Consider independent random variables $X = (X_1, X_2, \ldots, X_n)$ where $X_i \in \mathcal{X}_i$ for all $i$.
Let $f_k(X)$ refer to the $k$th centered conditional version of $f$. Let $\|\cdot\|_{\psi_1}$ denote the sub-exponential norm of a random variable.
Then, for any $\varepsilon > 0$,

$$\operatorname{P}\left(\left|f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)]\right| \geq \varepsilon\right) \leq 2\exp\left(-\min\left(\frac{\varepsilon^2}{32 e \left\|\sum_{k \in [n]} \|f_k(X)\|_{\psi_1}^2\right\|_\infty},\ \frac{\varepsilon}{4 e \max_{k \in [n]} \left\|\|f_k(X)\|_{\psi_1}\right\|_\infty}\right)\right).$$
Bennett and Bernstein forms
Refinements to McDiarmid's inequality in the style of Bennett's inequality and Bernstein inequalities are made possible by defining a variance term for each function argument. Let

$$B := \max_{k \in [n]} \sup_{x_1, \ldots, x_n} \left|f(x_1, \ldots, x_{k-1}, x_k, x_{k+1}, \ldots, x_n) - \mathbb{E}_{X_k}\left[f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n)\right]\right|,$$

$$\sigma_k^2 := \sup_{x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n} \operatorname{Var}_{X_k}\left[f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n)\right], \qquad \tilde{\sigma}^2 := \sum_{k=1}^{n} \sigma_k^2.$$
McDiarmid's Inequality (Bennett form)[4] — Let $f: \mathcal{X}_1 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ satisfy the bounded differences property with bounds $c_1, c_2, \ldots, c_n$. Consider independent random variables $X_1, X_2, \ldots, X_n$ where $X_i \in \mathcal{X}_i$ for all $i$. Let $B$ and $\tilde{\sigma}^2$ be defined as at the beginning of this section.
Then, for any $\varepsilon > 0$,

$$\operatorname{P}\left(\left|f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)]\right| \geq \varepsilon\right) \leq 2\exp\left(-\frac{\tilde{\sigma}^2}{B^2}\, h\!\left(\frac{B\varepsilon}{\tilde{\sigma}^2}\right)\right), \quad \text{where } h(t) := (1+t)\log(1+t) - t.$$
McDiarmid's Inequality (Bernstein form)[4] — Let $f: \mathcal{X}_1 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ satisfy the bounded differences property with bounds $c_1, c_2, \ldots, c_n$. Let $B$ and $\tilde{\sigma}^2$ be defined as at the beginning of this section.
Then, for any $\varepsilon > 0$,

$$\operatorname{P}\left(\left|f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)]\right| \geq \varepsilon\right) \leq 2\exp\left(-\frac{\varepsilon^2}{2\left(\tilde{\sigma}^2 + \frac{B\varepsilon}{3}\right)}\right).$$
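To see what the variance term buys, the following sketch (illustrative numbers only, plugging into the forms stated above) compares the basic bound with the Bernstein form when every coordinate has range $c$, so that $c_i = c$ and $B \leq c$, but a much smaller per-coordinate variance $\sigma_k^2$:

```python
import math

# n coordinates, each with range c (so c_i = c, B <= c) but variance sigma2 << c^2.
n, c, sigma2, eps = 1000, 1.0, 0.01, 20.0
basic = 2 * math.exp(-2 * eps**2 / (n * c**2))
bernstein = 2 * math.exp(-(eps**2) / (2 * (n * sigma2 + c * eps / 3)))
print("basic:    ", basic)      # ~0.90
print("bernstein:", bernstein)  # ~1.2e-5, far tighter for low-variance arguments
```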
Proof
The following proof of McDiarmid's inequality[2] constructs the Doob martingale tracking the conditional expected value of the function as more and more of its arguments are sampled and conditioned on, and then applies a martingale concentration inequality (Azuma's inequality). An alternate argument avoiding the use of martingales also exists, taking advantage of the independence of the function arguments to provide a Chernoff-bound-like argument.[4]
For better readability, we will introduce a notational shorthand: $z_{i:j}$ will denote $z_i, \ldots, z_j$ for any $z \in \mathcal{X}_1 \times \cdots \times \mathcal{X}_n$ and integers $1 \leq i \leq j \leq n$, so that, for example,

$$f(X_{1:(i-1)}, y, x_{(i+1):n}) := f(X_1, \ldots, X_{i-1}, y, x_{i+1}, \ldots, x_n).$$
Pick any $x_1', x_2', \ldots, x_n'$. Then, for any $x_1, x_2, \ldots, x_n$, by the triangle inequality,

$$
\begin{aligned}
\left|f(x_{1:n}) - f(x'_{1:n})\right|
&\leq \left|f(x_{1:n}) - f(x_{1:(n-1)}, x_n')\right| + \left|f(x_{1:(n-1)}, x_n') - f(x'_{1:n})\right| \\
&\leq c_n + \left|f(x_{1:(n-1)}, x_n') - f(x'_{1:n})\right| \\
&\;\;\vdots \\
&\leq \sum_{i=1}^{n} c_i,
\end{aligned}
$$

and thus $f$ is bounded.
Since $f$ is bounded, define the Doob martingale $\{Z_i\}$ (each $Z_i$ being a random variable depending on the random values of $X_1, \ldots, X_i$) as

$$Z_i := \mathbb{E}\left[f(X_{1:n}) \mid X_{1:i}\right]$$

for all $i \geq 1$ and $Z_0 := \mathbb{E}\left[f(X_{1:n})\right]$, so that $Z_n = f(X_{1:n})$.
Now define the random variables for each $i \in [n]$,

$$U_i := \sup_{x \in \mathcal{X}_i} \mathbb{E}\left[f(X_{1:n}) \mid X_{1:(i-1)}, X_i = x\right] - \mathbb{E}\left[f(X_{1:n}) \mid X_{1:(i-1)}\right],$$

$$L_i := \inf_{x \in \mathcal{X}_i} \mathbb{E}\left[f(X_{1:n}) \mid X_{1:(i-1)}, X_i = x\right] - \mathbb{E}\left[f(X_{1:n}) \mid X_{1:(i-1)}\right].$$

Since $X_1, \ldots, X_n$ are independent of each other, conditioning on $X_i = x$ does not affect the probabilities of the other variables, so these are equal to the expressions

$$U_i = \sup_{x \in \mathcal{X}_i} \mathbb{E}_{X_{(i+1):n}}\left[f(X_{1:(i-1)}, x, X_{(i+1):n})\right] - \mathbb{E}_{X_{i:n}}\left[f(X_{1:(i-1)}, X_{i:n})\right],$$

$$L_i = \inf_{x \in \mathcal{X}_i} \mathbb{E}_{X_{(i+1):n}}\left[f(X_{1:(i-1)}, x, X_{(i+1):n})\right] - \mathbb{E}_{X_{i:n}}\left[f(X_{1:(i-1)}, X_{i:n})\right].$$
Note that $L_i \leq Z_i - Z_{i-1} \leq U_i$. In addition,

$$U_i - L_i = \sup_{u \in \mathcal{X}_i,\, \ell \in \mathcal{X}_i} \mathbb{E}_{X_{(i+1):n}}\left[f(X_{1:(i-1)}, u, X_{(i+1):n}) - f(X_{1:(i-1)}, \ell, X_{(i+1):n})\right] \leq \sup_{u \in \mathcal{X}_i,\, \ell \in \mathcal{X}_i} \mathbb{E}_{X_{(i+1):n}}\left[c_i\right] = c_i.$$
Then, applying the general form of Azuma's inequality to $\{Z_i\}$, we have

$$\operatorname{P}\left(f(X_{1:n}) - \mathbb{E}[f(X_{1:n})] \geq \varepsilon\right) = \operatorname{P}\left(Z_n - Z_0 \geq \varepsilon\right) \leq \exp\left(-\frac{2\varepsilon^2}{\sum_{i=1}^{n} c_i^2}\right).$$
The one-sided bound in the other direction is obtained by applying Azuma's inequality to $\{-Z_i\}$, and the two-sided bound follows from a union bound.
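As an illustration of the construction used in this proof (a Monte Carlo sketch with arbitrary choices of $f$ and sample sizes, not part of the original argument), the Doob martingale for the sample mean of fair coins can be estimated along one realized trajectory; consecutive values $Z_{i-1}, Z_i$ differ by at most $c_i = 1/n$, up to simulation noise:

```python
import random

n, samples = 10, 50_000
f = lambda xs: sum(xs) / len(xs)              # sample mean; c_i = 1/n
x = [random.randint(0, 1) for _ in range(n)]  # one realized trajectory of X_1..X_n

for i in range(n + 1):
    # Z_i = E[f(X_1..X_n) | X_1..X_i], estimated by resampling the suffix.
    z_i = sum(
        f(x[:i] + [random.randint(0, 1) for _ in range(n - i)])
        for _ in range(samples)
    ) / samples
    print(f"Z_{i} ~ {z_i:.3f}")  # starts near E[f] = 0.5, ends at f(x)
```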
See also
References