In statistics, a **consistent estimator** or **asymptotically consistent estimator** is an estimator—a rule for computing estimates of a parameter *θ*_{0}—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to *θ*_{0}. This means that the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to *θ*_{0} converges to one.

In practice one constructs an estimator as a function of an available sample of size *n*, and then imagines being able to keep collecting data and expanding the sample *ad infinitum*. In this way one would obtain a sequence of estimates indexed by *n*, and consistency is a property of what occurs as the sample size “grows to infinity”. If the sequence of estimates can be mathematically shown to converge in probability to the true value *θ*_{0}, it is called a consistent estimator; otherwise the estimator is said to be **inconsistent**.

Consistency as defined here is sometimes referred to as **weak consistency**. When we replace convergence in probability with almost sure convergence, then the estimator is said to be **strongly consistent**. Consistency is related to bias; see bias versus consistency.

Formally speaking, an estimator *T _{n}* of parameter *θ* is said to be **weakly consistent** if it converges in probability to the true value of the parameter:

$$\operatorname*{plim}_{n\to\infty} T_n = \theta,$$

i.e. if, for all *ε* > 0,

$$\lim_{n\to\infty}\Pr\big(|T_n-\theta|>\varepsilon\big)=0.$$

An estimator *T _{n}* of parameter *θ* is said to be **strongly consistent** if it converges almost surely to the true value of the parameter:

$$\Pr\Big(\lim_{n\to\infty} T_n=\theta\Big)=1.$$

A more rigorous definition takes into account the fact that *θ* is actually unknown, and thus, the convergence in probability must take place for every possible value of this parameter. Suppose {*p _{θ}*: *θ* ∈ Θ} is a family of distributions (the parametric model), and *X ^{θ}* = {*X*_{1}, *X*_{2}, ...: *X _{i}* ~ *p _{θ}*} is an infinite sample from the distribution *p _{θ}*. Let {*T _{n}*(*X ^{θ}*)} be a sequence of estimators for some parameter *g*(*θ*). Usually, *T _{n}* will be based on the first *n* observations of a sample. Then this sequence {*T _{n}*} is said to be (weakly) **consistent** if

$$\operatorname*{plim}_{n\to\infty} T_n(X^{\theta}) = g(\theta)\quad\text{for all }\theta\in\Theta.$$

This definition uses *g*(*θ*) instead of simply *θ*, because often one is interested in estimating a certain function or a sub-vector of the underlying parameter. In the next example, we estimate the location parameter of the model, but not the scale:

Suppose one has a sequence of statistically independent observations {*X*_{1}, *X*_{2}, ...} from a normal *N*(*μ*, *σ*^{2}) distribution. To estimate *μ* based on the first *n* observations, one can use the sample mean: *T _{n}* = (*X*_{1} + ... + *X _{n}*)/*n*. This defines a sequence of estimators, indexed by the sample size *n*.

From the properties of the normal distribution, we know the sampling distribution of this statistic: *T _{n}* is itself normally distributed, with mean *μ* and variance *σ*^{2}/*n*. Equivalently, (*T _{n}* − *μ*)/(*σ*/√*n*) has a standard normal distribution:

$$\Pr\big(|T_n-\mu|\geq\varepsilon\big)=\Pr\!\left(\frac{\sqrt{n}\,|T_n-\mu|}{\sigma}\geq\frac{\sqrt{n}\,\varepsilon}{\sigma}\right)=2\left(1-\Phi\!\left(\frac{\sqrt{n}\,\varepsilon}{\sigma}\right)\right)\to 0$$

as *n* tends to infinity, for any fixed *ε* > 0. Therefore, the sequence *T _{n}* of sample means is consistent for the population mean *μ* (recalling that Φ is the cumulative distribution function of the standard normal distribution).
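
This shrinking tail probability can also be checked by simulation. The following is a minimal sketch, not part of the article: the values *μ* = 2, *σ* = 3, *ε* = 0.5 and the number of Monte Carlo trials are illustrative assumptions, and the empirical frequency of |*T _{n}* − *μ*| ≥ *ε* is compared against the theoretical value 2(1 − Φ(√*n* *ε*/*σ*)).

```python
import numpy as np
from math import erf, sqrt

# Monte Carlo sketch: empirical Pr(|T_n - mu| >= eps) for the sample mean of
# N(mu, sigma^2) data, compared with the exact value 2*(1 - Phi(sqrt(n)*eps/sigma)).
# mu, sigma, eps and n_trials are illustrative assumptions.
rng = np.random.default_rng(0)
mu, sigma, eps, n_trials = 2.0, 3.0, 0.5, 10_000

for n in (10, 100, 1_000):
    samples = rng.normal(mu, sigma, size=(n_trials, n))
    t_n = samples.mean(axis=1)                         # sample mean T_n in each trial
    p_emp = np.mean(np.abs(t_n - mu) >= eps)           # empirical tail probability
    phi = 0.5 * (1 + erf(sqrt(n) * eps / (sigma * sqrt(2))))  # Phi(sqrt(n)*eps/sigma)
    print(f"n={n:>5}: empirical {p_emp:.4f}   theoretical {2 * (1 - phi):.4f}")
```

Both columns go to zero as *n* grows, which is exactly the convergence in probability required for consistency.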

The notion of asymptotic consistency is very close to, and almost synonymous with, the notion of convergence in probability. As such, any theorem, lemma, or property which establishes convergence in probability may be used to prove consistency. Many such tools exist:

- In order to demonstrate consistency directly from the definition one can use the inequality^{[3]}

$$\Pr\big[h(T_n-\theta)\geq\varepsilon\big]\leq\frac{\operatorname{E}\big[h(T_n-\theta)\big]}{h(\varepsilon)},$$

the most common choice for the function *h* being either the absolute value (in which case it is known as Markov's inequality), or the quadratic function (respectively Chebyshev's inequality).

- Another useful result is the continuous mapping theorem: if *T _{n}* is consistent for *θ* and *g*(·) is a real-valued function continuous at the point *θ*, then *g*(*T _{n}*) will be consistent for *g*(*θ*):^{[4]}

$$T_n\ \xrightarrow{p}\ \theta\quad\Rightarrow\quad g(T_n)\ \xrightarrow{p}\ g(\theta).$$

- Slutsky's theorem can be used to combine several different estimators, or an estimator with a non-random convergent sequence. If *T _{n}* →^{d} *α* and *S _{n}* →^{p} *β*, then^{[5]}

$$T_n+S_n\ \xrightarrow{d}\ \alpha+\beta,\qquad T_nS_n\ \xrightarrow{d}\ \alpha\beta,\qquad T_n/S_n\ \xrightarrow{d}\ \alpha/\beta,\quad\text{provided that }\beta\neq 0.$$

- If estimator *T _{n}* is given by an explicit formula, then most likely the formula will employ sums of random variables, and then the law of large numbers can be used: for a sequence {*X _{n}*} of random variables and under suitable conditions,

$$\frac{1}{n}\sum_{i=1}^{n} g(X_i)\ \xrightarrow{p}\ \operatorname{E}\big[g(X)\big].$$

A numerical sketch combining this with the continuous mapping theorem is given after this list.

- If estimator *T _{n}* is defined implicitly, for example as a value that maximizes a certain objective function (see extremum estimator), then a more complicated argument involving stochastic equicontinuity has to be used.^{[6]}
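
As a concrete illustration of the law-of-large-numbers and continuous-mapping arguments above, here is a minimal sketch, assuming an exponential model with scale parameter 2 (an illustrative choice, not from the article): the first two sample moments converge in probability to E[*X*] and E[*X*^{2}], and since (m₁, m₂) ↦ √(m₂ − m₁²) is continuous wherever the variance is positive, the plug-in estimator of the standard deviation *σ* is consistent.

```python
import numpy as np

# Sketch of the LLN + continuous mapping route to consistency.
# Model assumption (illustrative): X ~ Exponential with scale 2, so that
# E[X] = 2 and the true standard deviation sigma is also 2.
rng = np.random.default_rng(1)
scale = 2.0
true_sigma = scale

for n in (10, 1_000, 100_000):
    x = rng.exponential(scale, size=n)
    m1 = x.mean()                         # -> E[X]   by the law of large numbers
    m2 = (x ** 2).mean()                  # -> E[X^2] by the law of large numbers
    sigma_hat = np.sqrt(m2 - m1 ** 2)     # continuous mapping of (m1, m2)
    print(f"n={n:>7}: sigma_hat = {sigma_hat:.4f}   (true sigma = {true_sigma})")
```

The plug-in estimate settles near the true value as *n* grows, which is the pattern the tools in this list are used to establish formally.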

An estimator can be unbiased but not consistent. For example, for an iid sample {*x*_{1}, ..., *x _{n}*} one can use *T _{n}*(*X*) = *x _{n}* as the estimator of the mean E[*X*]. Note that here the sampling distribution of *T _{n}* is the same as the underlying distribution (for any *n*, as it ignores all points but the last), so E[*T _{n}*(*X*)] = E[*X*] and it is unbiased, but it does not converge to any value.
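
A minimal simulation sketch of this "keep only the last observation" estimator (the normal model with *μ* = 5, *σ* = 2, the threshold *ε* = 0.5 and the trial count are illustrative assumptions): its average stays at *μ*, so it is unbiased, yet the probability of landing farther than *ε* from *μ* does not shrink as *n* grows.

```python
import numpy as np

# Sketch of an unbiased but inconsistent estimator: T_n = x_n, the last
# observation of an iid N(mu, sigma^2) sample. E[T_n] = mu for every n,
# but the sampling distribution of T_n never concentrates around mu.
# mu, sigma, eps and n_trials are illustrative assumptions.
rng = np.random.default_rng(2)
mu, sigma, eps, n_trials = 5.0, 2.0, 0.5, 10_000

for n in (10, 100, 1_000):
    samples = rng.normal(mu, sigma, size=(n_trials, n))
    t_n = samples[:, -1]                            # keep only the last observation
    bias = t_n.mean() - mu                          # stays near 0: unbiased
    p_far = np.mean(np.abs(t_n - mu) >= eps)        # does not shrink: inconsistent
    print(f"n={n:>5}: bias ≈ {bias:+.3f}   Pr(|T_n - mu| >= {eps}) ≈ {p_far:.3f}")
```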

However, if a sequence of estimators is unbiased *and* converges to a value, then it is consistent, as it must converge to the correct value.

Alternatively, an estimator can be biased but consistent. For example, if the mean is estimated by $\tfrac{1}{n}\sum x_i + \tfrac{1}{n}$ it is biased, but as *n* → ∞, it approaches the correct value, and so it is consistent.
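
A minimal simulation sketch of this shifted sample mean (the *N*(3, 1) model and the trial count are illustrative assumptions): its bias is exactly 1/*n*, which vanishes as *n* grows, and the estimate concentrates around the true mean.

```python
import numpy as np

# Sketch of the biased but consistent estimator (1/n)*sum(x_i) + 1/n:
# its bias is exactly 1/n, so it vanishes as n grows.
# The N(3, 1) model and n_trials are illustrative assumptions.
rng = np.random.default_rng(5)
mu, n_trials = 3.0, 10_000

for n in (10, 100, 1_000):
    x = rng.normal(mu, 1.0, size=(n_trials, n))
    t_n = x.mean(axis=1) + 1.0 / n              # shifted (biased) estimator of the mean
    print(f"n={n:>5}: E[T_n] - mu ≈ {t_n.mean() - mu:+.4f}   "
          f"average |T_n - mu| ≈ {np.abs(t_n - mu).mean():.4f}")
```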

Important examples include the sample variance and sample standard deviation. Without Bessel's correction (that is, when using the sample size *n* instead of the degrees of freedom *n* − 1), these are both negatively biased but consistent estimators. With the correction, the corrected sample variance is unbiased, while the corrected sample standard deviation is still biased, but less so, and both are still consistent: the correction factor converges to 1 as sample size grows.
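
A minimal simulation sketch of this point (the *N*(0, 4) model and the trial count are illustrative assumptions, not from the article): the uncorrected variance estimator divides by *n* and is negatively biased, the Bessel-corrected one divides by *n* − 1 and is unbiased, and both approach the true variance as *n* grows.

```python
import numpy as np

# Sketch comparing the uncorrected sample variance (divide by n) with the
# Bessel-corrected one (divide by n - 1): the first is negatively biased,
# yet both are consistent. The N(0, 4) model and n_trials are illustrative.
rng = np.random.default_rng(3)
true_var, n_trials = 4.0, 20_000

for n in (5, 50, 500):
    x = rng.normal(0.0, np.sqrt(true_var), size=(n_trials, n))
    var_uncorrected = x.var(axis=1, ddof=0)    # divides by n
    var_corrected = x.var(axis=1, ddof=1)      # divides by n - 1
    print(f"n={n:>4}: E[uncorrected] ≈ {var_uncorrected.mean():.3f}   "
          f"E[corrected] ≈ {var_corrected.mean():.3f}   (true variance = {true_var})")
```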

Here is another example. Let {*T _{n}*} be a sequence of estimators for *θ*, defined by

$$\Pr(T_n)=\begin{cases}1-\dfrac{1}{n}, & \text{if } T_n=\theta,\\[4pt] \dfrac{1}{n}, & \text{if } T_n=n\delta+\theta.\end{cases}$$

We can see that $T_n \xrightarrow{p} \theta$, $\operatorname{E}[T_n]=\theta+\delta$, and the bias does not converge to zero.
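
A minimal simulation sketch of this last example (the values *θ* = 1, *δ* = 0.5, *ε* = 0.1 and the trial count are illustrative assumptions): the bias stays near *δ* for every *n*, while the probability of *T _{n}* differing from *θ* is exactly 1/*n* and goes to zero, so the estimator is consistent despite the persistent bias.

```python
import numpy as np

# Sketch of the biased-but-consistent example above: T_n = theta with
# probability 1 - 1/n and T_n = n*delta + theta with probability 1/n.
# theta, delta, eps and n_trials are illustrative assumptions.
rng = np.random.default_rng(4)
theta, delta, eps, n_trials = 1.0, 0.5, 0.1, 200_000

for n in (10, 100, 1_000):
    is_outlier = rng.random(n_trials) < 1.0 / n
    t_n = np.where(is_outlier, n * delta + theta, theta)
    bias = t_n.mean() - theta                       # stays near delta for every n
    p_far = np.mean(np.abs(t_n - theta) >= eps)     # equals about 1/n, so it vanishes
    print(f"n={n:>5}: bias ≈ {bias:.3f}   Pr(|T_n - theta| >= {eps}) ≈ {p_far:.5f}")
```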

- Efficient estimator
- Fisher consistency, an alternative (though rarely used) concept of consistency for estimators
- Regression dilution
- Statistical hypothesis testing
- Instrumental variables estimation
