In probability theory and statistics, the F-distribution or F-ratio, also known as Snedecor's F distribution or the Fisher–Snedecor distribution (after Ronald Fisher and George W. Snedecor), is a continuous probability distribution that arises frequently as the null distribution of a test statistic, most notably in the analysis of variance (ANOVA) and other F-tests.[2][3][4][5]
Plots of the probability density function and the cumulative distribution function (not shown).

| Parameters | $d_1,\ d_2 > 0$ degrees of freedom |
|---|---|
| Support | $x \in (0, +\infty)$ if $d_1 = 1$, otherwise $x \in [0, +\infty)$ |
| CDF | $I_{\frac{d_1 x}{d_1 x + d_2}}\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right)$ |
| Mean | $\dfrac{d_2}{d_2 - 2}$ for $d_2 > 2$ |
| Mode | $\dfrac{d_1 - 2}{d_1} \cdot \dfrac{d_2}{d_2 + 2}$ for $d_1 > 2$ |
| Variance | $\dfrac{2\, d_2^2\, (d_1 + d_2 - 2)}{d_1 (d_2 - 2)^2 (d_2 - 4)}$ for $d_2 > 4$ |
| Skewness | $\dfrac{(2 d_1 + d_2 - 2)\sqrt{8 (d_2 - 4)}}{(d_2 - 6)\sqrt{d_1 (d_1 + d_2 - 2)}}$ for $d_2 > 6$ |
| Excess kurtosis | see text |
| Entropy | see [1] |
| MGF | does not exist; raw moments defined in text and in [2][3] |
| CF | see text |
The F-distribution with $d_1$ and $d_2$ degrees of freedom is the distribution of

$$X = \frac{U_1 / d_1}{U_2 / d_2}$$

where $U_1$ and $U_2$ are independent random variables with chi-square distributions with respective degrees of freedom $d_1$ and $d_2$.
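As an illustration of this definition (a sketch of my own, not part of the article), the following Python snippet draws independent chi-square variables, forms the scaled ratio above, and compares its empirical quantiles with those of `scipy.stats.f`; the values $d_1 = 5$ and $d_2 = 12$ are arbitrary.

```python
# Minimal simulation sketch: the ratio (U1/d1)/(U2/d2) of independent chi-square
# variables should follow F(d1, d2). Parameter values are arbitrary illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d1, d2, n = 5, 12, 100_000

u1 = rng.chisquare(d1, size=n)   # U1 ~ chi-square with d1 degrees of freedom
u2 = rng.chisquare(d2, size=n)   # U2 ~ chi-square with d2 degrees of freedom
x = (u1 / d1) / (u2 / d2)        # X = (U1 / d1) / (U2 / d2)

# Empirical quantiles of the simulated ratio versus F(d1, d2) quantiles.
for q in (0.25, 0.5, 0.75, 0.95):
    print(q, np.quantile(x, q), stats.f.ppf(q, d1, d2))
```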
It can be shown that the probability density function (pdf) of X is given by

$$f(x;\, d_1, d_2) = \frac{\sqrt{\dfrac{(d_1 x)^{d_1}\, d_2^{d_2}}{(d_1 x + d_2)^{d_1 + d_2}}}}{x\, B\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right)} = \frac{1}{B\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right)} \left(\frac{d_1}{d_2}\right)^{d_1/2} x^{d_1/2 - 1} \left(1 + \frac{d_1}{d_2}\, x\right)^{-(d_1 + d_2)/2}$$

for real x > 0. Here $B$ is the beta function. In many applications, the parameters $d_1$ and $d_2$ are positive integers, but the distribution is well defined for positive real values of these parameters.
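The density formula can be checked numerically; in the sketch below (the helper name `f_pdf` and the evaluation points are illustrative choices, not from the article), the second form above is evaluated directly and compared with `scipy.stats.f.pdf`.

```python
# Evaluate the F-density formula directly and compare with scipy's implementation.
import numpy as np
from scipy import stats
from scipy.special import beta  # the beta function B(a, b)

def f_pdf(x, d1, d2):
    # 1/B(d1/2, d2/2) * (d1/d2)^(d1/2) * x^(d1/2 - 1) * (1 + d1*x/d2)^(-(d1+d2)/2)
    return (1.0 / beta(d1 / 2, d2 / 2)
            * (d1 / d2) ** (d1 / 2)
            * x ** (d1 / 2 - 1)
            * (1 + d1 / d2 * x) ** (-(d1 + d2) / 2))

x = np.linspace(0.1, 5.0, 5)
print(f_pdf(x, 5, 12))
print(stats.f.pdf(x, 5, 12))  # should agree to floating-point accuracy
```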
The cumulative distribution function is

$$F(x;\, d_1, d_2) = I_{\frac{d_1 x}{d_1 x + d_2}}\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right),$$

where I is the regularized incomplete beta function.
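SciPy exposes the regularized incomplete beta function as `scipy.special.betainc`, so the CDF above can be evaluated directly; a short sketch with arbitrary parameter values:

```python
# The F cumulative distribution function via the regularized incomplete beta function.
import numpy as np
from scipy import stats
from scipy.special import betainc  # regularized incomplete beta I_z(a, b)

d1, d2 = 5, 12
x = np.array([0.5, 1.0, 2.0, 4.0])

cdf_via_betainc = betainc(d1 / 2, d2 / 2, d1 * x / (d1 * x + d2))
print(cdf_via_betainc)
print(stats.f.cdf(x, d1, d2))  # should match
```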
The expectation, variance, and other details about the F(d1, d2) distribution are given in the sidebox; for $d_2 > 8$, the excess kurtosis is

$$\gamma_2 = 12\,\frac{d_1 (5 d_2 - 22)(d_1 + d_2 - 2) + (d_2 - 4)(d_2 - 2)^2}{d_1 (d_2 - 6)(d_2 - 8)(d_1 + d_2 - 2)}.$$
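A quick numerical cross-check of this expression (a sketch with the arbitrary choice $d_1 = 5$, $d_2 = 12$, which satisfies $d_2 > 8$) against the excess kurtosis reported by `scipy.stats.f.stats`:

```python
# Excess kurtosis from the closed-form expression versus scipy.stats.f.stats.
from scipy import stats

d1, d2 = 5, 12   # requires d2 > 8
gamma2 = (12 * (d1 * (5 * d2 - 22) * (d1 + d2 - 2) + (d2 - 4) * (d2 - 2) ** 2)
          / (d1 * (d2 - 6) * (d2 - 8) * (d1 + d2 - 2)))

mean, var, skew, kurt = stats.f.stats(d1, d2, moments='mvsk')
print(gamma2, kurt)  # the two values should agree
```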
The k-th moment of an F(d1, d2) distribution exists and is finite only when $2k < d_2$, and it is equal to

$$\mu_k = \left(\frac{d_2}{d_1}\right)^k \frac{\Gamma\!\left(\tfrac{d_1}{2} + k\right)\,\Gamma\!\left(\tfrac{d_2}{2} - k\right)}{\Gamma\!\left(\tfrac{d_1}{2}\right)\,\Gamma\!\left(\tfrac{d_2}{2}\right)}.$$
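The moment expression can likewise be checked against `scipy.stats.f.moment`; the helper `f_raw_moment` below and the parameter values are illustrative only.

```python
# k-th raw moment of F(d1, d2) from the gamma-function expression above.
from scipy import stats
from scipy.special import gamma

def f_raw_moment(k, d1, d2):
    return ((d2 / d1) ** k
            * gamma(d1 / 2 + k) * gamma(d2 / 2 - k)
            / (gamma(d1 / 2) * gamma(d2 / 2)))

d1, d2 = 5, 12
for k in (1, 2, 3):  # all satisfy 2k < d2
    print(k, f_raw_moment(k, d1, d2), stats.f.moment(k, d1, d2))
```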
The F-distribution is a particular parametrization of the beta prime distribution, which is also called the beta distribution of the second kind.
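Concretely, if $X \sim F(d_1, d_2)$, then $(d_1/d_2)\,X$ follows a beta prime distribution with parameters $d_1/2$ and $d_2/2$; this explicit parametrization is standard but is my addition rather than a statement from the article. A minimal sketch comparing the two densities after this change of variables:

```python
# Check the F / beta prime correspondence: if X ~ F(d1, d2), then (d1/d2) X
# has the beta prime density with parameters (d1/2, d2/2).
import numpy as np
from scipy import stats

d1, d2 = 5, 12
y = np.array([0.2, 0.5, 1.0])

print(stats.betaprime.pdf(y, d1 / 2, d2 / 2))
print(stats.f.pdf(y * d2 / d1, d1, d2) * d2 / d1)  # density of (d1/d2) X at y
```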
The characteristic function is listed incorrectly in many standard references (e.g.,[3]). The correct expression[7] is

$$\varphi^{F}_{d_1, d_2}(s) = \frac{\Gamma\!\left(\tfrac{d_1 + d_2}{2}\right)}{\Gamma\!\left(\tfrac{d_2}{2}\right)}\, U\!\left(\frac{d_1}{2},\, 1 - \frac{d_2}{2},\, -\frac{d_2}{d_1}\, i s\right)$$

where U(a, b, z) is the confluent hypergeometric function of the second kind.
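Assuming the closed form quoted above, it can be checked numerically, for example with mpmath's implementation of U and a Monte Carlo estimate of $E[e^{i s X}]$; the sketch below uses the arbitrary values $d_1 = 5$, $d_2 = 11$ and $s = 0.7$.

```python
# Monte Carlo estimate of the characteristic function E[exp(i s X)] compared with
# the closed form in terms of the confluent hypergeometric function U (via mpmath).
import numpy as np
import mpmath
from scipy import stats

d1, d2, s = 5, 11, 0.7
rng = np.random.default_rng(0)

samples = stats.f.rvs(d1, d2, size=500_000, random_state=rng)
mc = np.mean(np.exp(1j * s * samples))  # Monte Carlo estimate of the CF at s

closed = (mpmath.gamma((d1 + d2) / 2) / mpmath.gamma(d2 / 2)
          * mpmath.hyperu(d1 / 2, 1 - d2 / 2, -d2 / d1 * 1j * s))
print(mc, complex(closed))  # should agree to roughly two or three decimal places
```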
In instances where the F-distribution is used, for example in the analysis of variance, the independence of $U_1$ and $U_2$ (defined above) might be demonstrated by applying Cochran's theorem.
Equivalently, since the chi-squared distribution is the sum of squares of independent standard normal random variables, the random variable of the F-distribution may also be written

$$X = \frac{s_1^2}{\sigma_1^2} \div \frac{s_2^2}{\sigma_2^2},$$

where $s_1^2 = \frac{S_1^2}{d_1}$ and $s_2^2 = \frac{S_2^2}{d_2}$, $S_1^2$ is the sum of squares of $d_1$ random variables from the normal distribution $N(0, \sigma_1^2)$, and $S_2^2$ is the sum of squares of $d_2$ random variables from the normal distribution $N(0, \sigma_2^2)$.
In a frequentist context, a scaled F-distribution therefore gives the probability $p\!\left(s_1^2 / s_2^2 \mid \sigma_1^2, \sigma_2^2\right)$, with the F-distribution itself, without any scaling, applying where $\sigma_2^2$ is taken equal to $\sigma_1^2$. This is the context in which the F-distribution most generally appears in F-tests: the null hypothesis is that two independent normal variances are equal, and the observed sums of some appropriately selected squares are then examined to see whether their ratio is significantly incompatible with this null hypothesis.
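As a minimal illustration of this use (a sketch with simulated data, not from the article), the snippet below tests equality of two normal variances by referring the ratio of sample variances to an $F(n_1 - 1, n_2 - 1)$ distribution under the null hypothesis.

```python
# Two-sample variance-ratio F-test (sketch with simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=30)   # sample 1
y = rng.normal(0.0, 1.5, size=40)   # sample 2, larger true standard deviation

f_stat = np.var(x, ddof=1) / np.var(y, ddof=1)   # ratio of unbiased sample variances
d1, d2 = len(x) - 1, len(y) - 1

# Two-sided p-value: how extreme is this ratio under the null of equal variances?
p_value = 2 * min(stats.f.cdf(f_stat, d1, d2), stats.f.sf(f_stat, d1, d2))
print(f_stat, p_value)
```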
The quantity $X$ has the same distribution in Bayesian statistics, if an uninformative rescaling-invariant Jeffreys prior is taken for the prior probabilities of $\sigma_1^2$ and $\sigma_2^2$.[8] In this context, a scaled F-distribution thus gives the posterior probability $p\!\left(\sigma_2^2 / \sigma_1^2 \mid s_1^2, s_2^2\right)$, where the observed sums $s_1^2$ and $s_2^2$ are now taken as known.