Coupon collector's problem
Problem in probability theory
From Wikipedia, the free encyclopedia
In probability theory, the coupon collector's problem refers to the mathematical analysis of "collect all coupons and win" contests. It asks the following question: if each box of a given product (e.g., breakfast cereals) contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? An alternative statement is: given n coupons, how many coupons do you expect you need to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem reveals that the expected number of trials needed grows as $\Theta(n \log n)$. For example, when n = 50 it takes about 225 trials on average to collect all 50 coupons.
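The figure of about 225 trials for n = 50 is easy to check empirically. The following Python sketch (function names, seed, and trial count are illustrative choices, not part of the article) estimates the average number of draws by direct simulation:

```python
import random

def draws_to_collect(n, rng):
    """Draw coupons uniformly with replacement until all n types are seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

# Average over many runs for n = 50; the mean should be near n*H_n ≈ 224.96.
rng = random.Random(42)
trials = 2000
avg = sum(draws_to_collect(50, rng) for _ in range(trials)) / trials
print(round(avg, 1))
```

With a few thousand trials the sample mean settles within a few draws of 225, matching the claim.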
By definition of Stirling numbers of the second kind, the probability that exactly $t$ draws are needed is
$$\operatorname{P}(T = t) = \frac{n!}{n^t} \begin{Bmatrix} t-1 \\ n-1 \end{Bmatrix}.$$
By manipulating the generating function of the Stirling numbers,
$$\sum_{t \ge k} \begin{Bmatrix} t \\ k \end{Bmatrix} x^t = \frac{x^k}{(1-x)(1-2x)\cdots(1-kx)},$$
we can explicitly calculate all moments of $T$. In general, the $k$-th moment is
$$E[T^k] = n! \,(x D_x)^k \left[ \frac{x^n}{(1-x)(1-2x)\cdots(1-(n-1)x)} \right]_{x = 1/n},$$
where $D_x$ is the derivative operator $\frac{d}{dx}$. For example, the 0th moment is
$$E[T^0] = n! \left[ \frac{x^n}{(1-x)(1-2x)\cdots(1-(n-1)x)} \right]_{x = 1/n} = n! \cdot \frac{n^{-n}}{(n-1)!\, n^{-(n-1)}} = 1,$$
and the 1st moment is $E[T] = n! \,(x D_x) \left[ \frac{x^n}{(1-x)\cdots(1-(n-1)x)} \right]_{x = 1/n}$, which can be explicitly evaluated to $E[T] = n H_n$, etc.
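The Stirling-number expression can be verified numerically for a small n. The sketch below (the helper `stirling2` and the truncation point 400 are choices made here for illustration) checks that the probabilities sum to 1 and that the mean equals $n H_n$ for n = 5, where $5 H_5 = 137/12$:

```python
from fractions import Fraction
from math import factorial

def stirling2(t, k, memo={}):
    """Stirling numbers of the second kind via the recurrence S(t,k) = k*S(t-1,k) + S(t-1,k-1)."""
    if k == 0:
        return int(t == 0)
    if t < k:
        return 0
    if (t, k) not in memo:
        memo[(t, k)] = k * stirling2(t - 1, k, memo) + stirling2(t - 1, k - 1, memo)
    return memo[(t, k)]

n = 5
# P(T = t) = n! * S(t-1, n-1) / n^t; the infinite sum is truncated at t = 400,
# where the remaining tail mass is negligible.
probs = {t: Fraction(factorial(n) * stirling2(t - 1, n - 1), n ** t) for t in range(n, 400)}
total = float(sum(probs.values()))
mean = float(sum(t * p for t, p in probs.items()))
print(total, mean)  # total ≈ 1, mean ≈ 137/12 ≈ 11.4167
```

Exact rational arithmetic via `fractions.Fraction` avoids floating-point error in the partial sums.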
Let time $T$ be the number of draws needed to collect all $n$ coupons, and let $t_i$ be the time to collect the $i$-th coupon after $i - 1$ coupons have been collected. Then $T = t_1 + \cdots + t_n$. Think of $T$ and $t_i$ as random variables. Observe that the probability of collecting a new coupon is $p_i = \frac{n - i + 1}{n}$. Therefore, $t_i$ has geometric distribution with expectation $\frac{1}{p_i} = \frac{n}{n - i + 1}$. By the linearity of expectations we have:
$$E(T) = E(t_1) + E(t_2) + \cdots + E(t_n) = \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} = n \left( \frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n} \right) = n \cdot H_n.$$
Here $H_n$ is the $n$-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain:
$$E(T) = n H_n = n \ln n + \gamma n + \frac{1}{2} + O(1/n),$$
where $\gamma \approx 0.5772156649$ is the Euler–Mascheroni constant.
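The asymptotic form converges quickly. The sketch below compares the exact value $n H_n$ with $n \ln n + \gamma n + \tfrac{1}{2}$ for a few values of n; the gap shrinks like $O(1/n)$ (specifically, about $1/(12n)$):

```python
from math import log

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

rows = []
for n in (10, 100, 1000):
    exact = n * sum(1.0 / k for k in range(1, n + 1))  # n * H_n
    approx = n * log(n) + EULER_GAMMA * n + 0.5        # asymptotic expansion
    rows.append((n, exact, approx))
    print(n, round(exact, 4), round(approx, 4))
```

Already at n = 10 the two values agree to within 0.01.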
Using the Markov inequality to bound the desired probability:
$$\operatorname{P}(T \geq c \, n H_n) \leq \frac{1}{c}.$$
The above can be modified slightly to handle the case when we've already collected some of the coupons. Let $k$ be the number of coupons already collected, then:
$$E(T_k) = \frac{n}{n-k} + \frac{n}{n-k-1} + \cdots + \frac{n}{1} = n \cdot H_{n-k}.$$
And when $k = 0$ we get the original result.
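The partial-collection formula can also be checked by simulation. The sketch below (function names and parameters chosen here for illustration) starts with k = 30 of n = 40 coupon types already held and compares the average number of additional draws to $n \cdot H_{n-k}$:

```python
import random

def draws_remaining(n, k, rng):
    """Draws needed to finish when k of n coupon types are already held."""
    seen, draws = set(range(k)), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(1)
n, k, trials = 40, 30, 4000
avg = sum(draws_remaining(n, k, rng) for _ in range(trials)) / trials
expected = n * sum(1.0 / j for j in range(1, n - k + 1))  # n * H_{n-k} ≈ 117.2
print(round(avg, 1), round(expected, 1))
```

By symmetry it does not matter which k types are held, so seeding the set with the first k labels is harmless.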
Using the independence of random variables $t_i$, we obtain:
$$\operatorname{Var}(T) = \operatorname{Var}(t_1) + \cdots + \operatorname{Var}(t_n) = \frac{1 - p_1}{p_1^2} + \cdots + \frac{1 - p_n}{p_n^2} < \frac{n^2}{n^2} + \frac{n^2}{(n-1)^2} + \cdots + \frac{n^2}{1^2} = n^2 \sum_{k=1}^{n} \frac{1}{k^2} < \frac{\pi^2}{6} n^2,$$
since $\frac{\pi^2}{6} = \sum_{k=1}^{\infty} \frac{1}{k^2}$ (see Basel problem).
Bound the desired probability using the Chebyshev inequality:
$$\operatorname{P}\left( |T - n H_n| \geq c n \right) \leq \frac{\pi^2}{6 c^2}.$$
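The Chebyshev bound is far from tight, which the following sketch illustrates empirically for n = 50 and c = 2 (parameters and seed are choices made here): the simulated deviation probability is well below the bound $\pi^2 / (6 c^2) \approx 0.41$.

```python
import random
from math import pi

def draws_to_collect(n, rng):
    """Draw coupons uniformly with replacement until all n types are seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(7)
n, c, trials = 50, 2.0, 3000
h_n = sum(1.0 / k for k in range(1, n + 1))
samples = [draws_to_collect(n, rng) for _ in range(trials)]
empirical = sum(abs(t - n * h_n) >= c * n for t in samples) / trials
bound = pi ** 2 / (6 * c ** 2)  # Chebyshev bound ≈ 0.411
print(round(empirical, 3), round(bound, 3))
```

In practice the empirical frequency is around 0.1, roughly a quarter of the bound.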
A stronger tail estimate for the upper tail can be obtained as follows. Let $Z_i^r$ denote the event that the $i$-th coupon was not picked in the first $r$ trials. Then
$$\operatorname{P}\left[ Z_i^r \right] = \left( 1 - \frac{1}{n} \right)^r \leq e^{-r/n}.$$
Thus, for $r = \beta n \log n$, we have $\operatorname{P}\left[ Z_i^r \right] \leq e^{-(\beta n \log n)/n} = n^{-\beta}$. Via a union bound over the $n$ coupons, we obtain
$$\operatorname{P}\left[ T > \beta n \log n \right] = \operatorname{P}\left[ \bigcup_i Z_i^{\beta n \log n} \right] \leq n \cdot \operatorname{P}\left[ Z_1^{\beta n \log n} \right] \leq n^{1 - \beta}.$$
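The union-bound estimate can be checked by simulation. The sketch below (n, β, seed, and trial count are illustrative) estimates $\operatorname{P}[T > \beta n \log n]$ for n = 10 and β = 2 and compares it to the bound $n^{1-\beta} = 0.1$:

```python
import random
from math import log

def draws_to_collect(n, rng):
    """Draw coupons uniformly with replacement until all n types are seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(3)
n, beta, trials = 10, 2.0, 10000
threshold = beta * n * log(n)  # ≈ 46.05 draws
empirical = sum(draws_to_collect(n, rng) > threshold for _ in range(trials)) / trials
bound = n ** (1 - beta)  # n^(1-β) = 0.1
print(round(empirical, 4), bound)
```

The empirical tail probability comes out below the bound, as the inequality guarantees.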
This section is based on [3].
Define a discrete random process $N(t)$ by letting $N(t)$ be the number of coupons not yet seen after $t$ draws. The random process is just a sequence generated by a Markov chain with states $n, n-1, \ldots, 1, 0$, and transition probabilities
$$p_{m \to m-1} = \frac{m}{n}, \qquad p_{m \to m} = 1 - \frac{m}{n}.$$
Now define
$$M(t) = N(t) \left( 1 - \frac{1}{n} \right)^{-t};$$
then it is a martingale, since
$$E[M(t+1) \mid N(t)] = \left( 1 - \frac{1}{n} \right)^{-(t+1)} E[N(t+1) \mid N(t)] = \left( 1 - \frac{1}{n} \right)^{-(t+1)} N(t) \left( 1 - \frac{1}{n} \right) = M(t).$$
Consequently, we have $E[N(t)] = n \left( 1 - \frac{1}{n} \right)^t$. In particular, we have a limit law $\lim_{n \to \infty} E[N(n \ln n + cn)] = e^{-c}$ for any $c$. This suggests to us a limit law for $T$.
More generally, each $\binom{N(t)}{k} \left( 1 - \frac{k}{n} \right)^{-t}$ is a martingale process, which allows us to calculate all moments of $N(t)$:
$$E\left[ \binom{N(t)}{k} \right] = \binom{n}{k} \left( 1 - \frac{k}{n} \right)^t.$$
For example,
$$E[N(t)(N(t) - 1)] = n(n-1) \left( 1 - \frac{2}{n} \right)^t,$$
giving another limit law $\lim_{n \to \infty} E\left[ \binom{N(n \ln n + cn)}{2} \right] = \frac{e^{-2c}}{2}$. More generally,
$$\lim_{n \to \infty} E\left[ \binom{N(n \ln n + cn)}{k} \right] = \frac{e^{-kc}}{k!},$$
meaning that $N(n \ln n + cn)$ has all moments converging to constants, so it converges to some probability distribution on $\{0, 1, 2, \ldots\}$.
Let $N_c$ be the random variable with the limit distribution. We have
$$E\left[ \binom{N_c}{k} \right] = \frac{e^{-kc}}{k!}.$$
By introducing a new variable $z$, we can sum up both sides explicitly:
$$E\left[ \sum_k \binom{N_c}{k} z^k \right] = \sum_k \frac{e^{-kc}}{k!} z^k = e^{z e^{-c}},$$
giving $E\left[ (1 + z)^{N_c} \right] = e^{z e^{-c}}$.
At the limit $z \to -1$, we have $\operatorname{P}(N_c = 0) = E\left[ 0^{N_c} \right] = e^{-e^{-c}}$, which is precisely what the limit law states.
By taking the derivative multiple times, we find that
$$\operatorname{P}(N_c = k) = e^{-e^{-c}} \frac{e^{-kc}}{k!},$$
which is a Poisson distribution with rate $e^{-c}$.
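The Poisson limit can be observed directly in simulation. The sketch below (n, c, seed, and trial count are illustrative choices) draws $t = \lfloor n \ln n + cn \rfloor$ coupons for a moderately large n, counts the types still unseen, and compares the empirical distribution with Poisson($e^{-c}$):

```python
import random
from math import exp, log, factorial

def uncollected_after(n, t, rng):
    """Number of coupon types not yet seen after t uniform draws."""
    seen = set()
    for _ in range(t):
        seen.add(rng.randrange(n))
    return n - len(seen)

rng = random.Random(5)
n, c, trials = 500, 1.0, 2000
t = int(n * log(n) + c * n)
counts = [uncollected_after(n, t, rng) for _ in range(trials)]
lam = exp(-c)  # Poisson rate e^{-c} ≈ 0.368
emp = {k: sum(x == k for x in counts) / trials for k in range(3)}
pois = {k: exp(-lam) * lam ** k / factorial(k) for k in range(3)}
print({k: round(v, 3) for k, v in emp.items()})
print({k: round(v, 3) for k, v in pois.items()})
```

For c = 1, the Poisson probabilities are roughly 0.69, 0.25, and 0.05 for k = 0, 1, 2, and the simulated frequencies land close to them even at finite n.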