Partial correlation
In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed. When determining the numerical relationship between two variables of interest, using their correlation coefficient will give misleading results if there is another confounding variable that is numerically related to both variables of interest. This misleading information can be avoided by controlling for the confounding variable, which is done by computing the partial correlation coefficient. This is precisely the motivation for including other right-side variables in a multiple regression; but while multiple regression gives unbiased results for the effect size, it does not provide a numerical measure of the strength of the relationship between the two variables of interest.
For example, given economic data on the consumption, income, and wealth of various individuals, consider the relationship between consumption and income. Failing to control for wealth when computing a correlation coefficient between consumption and income would give a misleading result, since income might be numerically related to wealth which in turn might be numerically related to consumption; a measured correlation between consumption and income might actually be contaminated by these other correlations. The use of a partial correlation avoids this problem.
Like the correlation coefficient, the partial correlation coefficient takes on a value in the range from –1 to 1. The value –1 conveys a perfect negative correlation controlling for some variables (that is, an exact linear relationship in which higher values of one variable are associated with lower values of the other); the value 1 conveys a perfect positive linear relationship, and the value 0 conveys that there is no linear relationship.
The partial correlation coincides with the conditional correlation if the random variables are jointly distributed as the multivariate normal, other elliptical, multivariate hypergeometric, multivariate negative hypergeometric, multinomial, or Dirichlet distribution, but not in general otherwise.[1]
Formally, the partial correlation between X and Y given a set of n controlling variables Z = {Z1, Z2, ..., Zn}, written ρXY·Z, is the correlation between the residuals eX and eY resulting from the linear regression of X with Z and of Y with Z, respectively. The first-order partial correlation (i.e., when n = 1) is the difference between a correlation and the product of the removable correlations divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation with joint variance through correlation are available in Guilford (1973, pp. 344–345).[2]
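Spelled out in symbols (using the notation of the sections below), for a single controlling variable Z this first-order formula reads

$$\rho_{XY\cdot Z} = \frac{\rho_{XY} - \rho_{XZ}\,\rho_{ZY}}{\sqrt{1-\rho_{XZ}^{2}}\,\sqrt{1-\rho_{ZY}^{2}}},$$

where $\sqrt{1-\rho^{2}}$ is the coefficient of alienation of a correlation $\rho$; the same expression reappears below as the single-variable case of the recursive formula.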
Using linear regression
A simple way to compute the sample partial correlation for some data is to solve the two associated linear regression problems and calculate the correlation between the residuals. Let X and Y be random variables taking real values, and let Z be an n-dimensional vector-valued random variable. Let x_i, y_i, and z_i denote the ith of N i.i.d. observations from some joint probability distribution over real random variables X, Y, and Z, with z_i having been augmented with a 1 to allow for a constant term in the regression. Solving the linear regression problem amounts to finding (n+1)-dimensional regression coefficient vectors $\mathbf{w}_X^*$ and $\mathbf{w}_Y^*$ such that

$$\mathbf{w}_X^* = \arg\min_{\mathbf{w}} \sum_{i=1}^{N} \left(x_i - \langle \mathbf{w}, \mathbf{z}_i \rangle\right)^2$$
$$\mathbf{w}_Y^* = \arg\min_{\mathbf{w}} \sum_{i=1}^{N} \left(y_i - \langle \mathbf{w}, \mathbf{z}_i \rangle\right)^2$$

where N is the number of observations, and $\langle \mathbf{w}, \mathbf{z}_i \rangle$ is the scalar product between the vectors $\mathbf{w}$ and $\mathbf{z}_i$.
The residuals are then

$$e_{X,i} = x_i - \langle \mathbf{w}_X^*, \mathbf{z}_i \rangle$$
$$e_{Y,i} = y_i - \langle \mathbf{w}_Y^*, \mathbf{z}_i \rangle$$
and the sample partial correlation is then given by the usual formula for sample correlation, but between these new derived values:

$$\hat{\rho}_{XY\cdot\mathbf{Z}} = \frac{N \sum_{i=1}^{N} e_{X,i}\, e_{Y,i} - \sum_{i=1}^{N} e_{X,i} \sum_{i=1}^{N} e_{Y,i}}{\sqrt{N \sum_{i=1}^{N} e_{X,i}^2 - \left(\sum_{i=1}^{N} e_{X,i}\right)^2}\; \sqrt{N \sum_{i=1}^{N} e_{Y,i}^2 - \left(\sum_{i=1}^{N} e_{Y,i}\right)^2}} = \frac{\sum_{i=1}^{N} e_{X,i}\, e_{Y,i}}{\sqrt{\sum_{i=1}^{N} e_{X,i}^2 \sum_{i=1}^{N} e_{Y,i}^2}}$$
In the first expression the three terms after minus signs all equal 0 since each contains the sum of residuals from an ordinary least squares regression.
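As an illustration of this recipe (the function name pcor_residuals is chosen here for clarity and is not from any package), a minimal R sketch fits the two regressions and correlates their residuals:

# Sample partial correlation of x and y given the controlling variables in z,
# obtained by regressing each of x and y on z and correlating the residuals.
# z may be a single vector or a matrix with one column per controlling variable;
# lm() adds the intercept term automatically.
pcor_residuals <- function(x, y, z) {
  z  <- as.matrix(z)
  ex <- residuals(lm(x ~ z))   # residuals e_X
  ey <- residuals(lm(y ~ z))   # residuals e_Y
  cor(ex, ey)                  # usual sample correlation of the residuals
}

The example in the next section carries out exactly these steps by hand on a small data set.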
Example
Consider the following data on three variables, X, Y, and Z:
| X  | Y | Z |
|----|---|---|
| 2  | 1 | 0 |
| 4  | 2 | 0 |
| 15 | 3 | 1 |
| 20 | 4 | 1 |
Computing the Pearson correlation coefficient between variables X and Y results in approximately 0.970, while computing the partial correlation between X and Y, using the formula given above, gives a partial correlation of 0.919. The computations were done using R with the following code.
> X <- c(2, 4, 15, 20)
> Y <- c(1, 2, 3, 4)
> Z <- c(0, 0, 1, 1)
> mm1 <- lm(X ~ Z)      # regress X on Z
> res1 <- mm1$residuals
> mm2 <- lm(Y ~ Z)      # regress Y on Z
> res2 <- mm2$residuals
> cor(res1, res2)       # partial correlation of X and Y given Z
[1] 0.919145
> cor(X, Y)             # ordinary (zero-order) correlation of X and Y
[1] 0.9695016
> generalCorr::parcorMany(cbind(X,Y,Z))
     nami namj partij   partji rijMrji
[1,] "X"  "Y"  "0.8844" "1"    "-0.1156"
[2,] "X"  "Z"  "0.1581" "1"    "-0.8419"
The lower part of the above code reports the generalized nonlinear partial correlation coefficient between X and Y after removing the nonlinear effect of Z to be 0.8844, and the generalized partial correlation coefficient between X and Z after removing the nonlinear effect of Y to be 0.1581. See the R package generalCorr and its vignettes for details. Simulation and other details are in Vinod (2017), "Generalized correlation and kernel causality with applications in development economics," Communications in Statistics – Simulation and Computation, vol. 46, pp. 4513–4534, https://doi.org/10.1080/03610918.2015.1122048.
Using a recursive formula
It can be computationally expensive to solve the linear regression problems. Actually, the nth-order partial correlation (i.e., with |Z| = n) can be easily computed from three (n - 1)th-order partial correlations. The zeroth-order partial correlation ρXY·Ø is defined to be the regular correlation coefficient ρXY.
It holds, for any $Z_0 \in \mathbf{Z}$, that[3]

$$\rho_{XY\cdot\mathbf{Z}} = \frac{\rho_{XY\cdot\mathbf{Z}\setminus\{Z_0\}} - \rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}\,\rho_{YZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}}{\sqrt{1-\rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}^{2}}\,\sqrt{1-\rho_{YZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}^{2}}}$$
Naïvely implementing this computation as a recursive algorithm yields an exponential time complexity. However, this computation has the overlapping subproblems property, such that using dynamic programming or simply caching the results of the recursive calls yields a complexity of $\mathcal{O}(n^3)$.
Note that in the case where Z is a single variable, this reduces to

$$\rho_{XY\cdot Z} = \frac{\rho_{XY} - \rho_{XZ}\,\rho_{ZY}}{\sqrt{1-\rho_{XZ}^{2}}\,\sqrt{1-\rho_{ZY}^{2}}}$$
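As a sketch of the dynamic-programming idea in R (the function name pcor_recursive and the caching scheme are illustrative choices, not from any package), the recursion can be implemented with memoisation over the set of controlling variables:

# Partial correlation of variables i and j in a correlation matrix R,
# controlling for the variables whose indices are listed in `given`,
# computed with the recursive formula and a cache of subproblem results.
pcor_recursive <- function(R, i, j, given, cache = new.env()) {
  key <- paste(i, j, paste(sort(given), collapse = ","), sep = "|")
  if (exists(key, envir = cache, inherits = FALSE))
    return(get(key, envir = cache))
  if (length(given) == 0) {
    val <- R[i, j]                    # zeroth order: the ordinary correlation
  } else {
    z0   <- given[1]                  # remove one controlling variable Z0
    rest <- given[-1]
    r_ij <- pcor_recursive(R, i, j,  rest, cache)
    r_iz <- pcor_recursive(R, i, z0, rest, cache)
    r_jz <- pcor_recursive(R, j, z0, rest, cache)
    val  <- (r_ij - r_iz * r_jz) / (sqrt(1 - r_iz^2) * sqrt(1 - r_jz^2))
  }
  assign(key, val, envir = cache)
  val
}

With the data from the example above, pcor_recursive(cor(cbind(X, Y, Z)), 1, 2, 3) reproduces the residual-based value 0.919145.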
Using matrix inversion
The partial correlation can also be written in terms of the joint precision matrix. Consider a set of random variables $\mathbf{V} = \{X_1,\dots,X_n\}$ of cardinality n. We want the partial correlation between two variables $X_i$ and $X_j$ given all others, i.e., $\mathbf{V}\setminus\{X_i, X_j\}$. Suppose the (joint/full) covariance matrix $\Sigma = (\sigma_{ij})$ is positive definite and therefore invertible. If the precision matrix is defined as $\Omega = (p_{ij}) = \Sigma^{-1}$, then

$$\rho_{X_i X_j \cdot \mathbf{V}\setminus\{X_i, X_j\}} = -\frac{p_{ij}}{\sqrt{p_{ii}\,p_{jj}}} \qquad (1)$$
Computing this requires $\Sigma^{-1}$, the inverse of the covariance matrix $\Sigma$, which can be computed in $\mathcal{O}(n^3)$ time (using the sample covariance matrix to obtain a sample partial correlation). Note that only a single matrix inversion is required to give all the partial correlations between pairs of variables in $\mathbf{V}$.
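As a sketch in base R (the function name partial_cor_matrix is chosen here for illustration; solve() performs the single matrix inversion), all pairwise sample partial correlations can be read off the inverted sample covariance matrix:

# Matrix of sample partial correlations: entry (i, j) is the partial correlation
# of variables i and j given all other columns of `data`, via equation (1).
partial_cor_matrix <- function(data) {
  omega <- solve(cov(data))                             # precision matrix
  pcor  <- -omega / sqrt(diag(omega) %o% diag(omega))   # -p_ij / sqrt(p_ii * p_jj)
  diag(pcor) <- 1                                       # by convention
  pcor
}

For the three-variable example above, partial_cor_matrix(cbind(X, Y, Z))[1, 2] again gives approximately 0.919, since controlling for "all other variables" there means controlling for Z alone.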
To prove Equation (1), return to the previous notation (i.e., identify X, Y, and Z with $X_i$, $X_j$, and $\mathbf{V}\setminus\{X_i, X_j\}$, respectively) and start with the definition of partial correlation: ρXY·Z is the correlation between the residuals eX and eY resulting from the linear regression of X with Z and of Y with Z, respectively.
First, suppose that $\beta$ and $\gamma$ are the coefficient vectors of the two linear regression fits; that is,

$$\beta = \operatorname*{arg\,min}_{\mathbf{w}}\; \mathbb{E}\!\left[(X - \mathbf{w}^{\mathsf T}\mathbf{Z})^2\right], \qquad \gamma = \operatorname*{arg\,min}_{\mathbf{w}}\; \mathbb{E}\!\left[(Y - \mathbf{w}^{\mathsf T}\mathbf{Z})^2\right]$$
Write the joint covariance matrix for the vector $(X, Y, \mathbf{Z}^{\mathsf T})^{\mathsf T}$ as

$$\Sigma = \begin{bmatrix} \Sigma_{XX} & \Sigma_{XY} & \Sigma_{X\mathbf{Z}} \\ \Sigma_{YX} & \Sigma_{YY} & \Sigma_{Y\mathbf{Z}} \\ \Sigma_{\mathbf{Z}X} & \Sigma_{\mathbf{Z}Y} & \Sigma_{\mathbf{Z}\mathbf{Z}} \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix}$$

where

$$C_{11} = \begin{bmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{bmatrix}, \qquad C_{12} = \begin{bmatrix} \Sigma_{X\mathbf{Z}} \\ \Sigma_{Y\mathbf{Z}} \end{bmatrix}, \qquad C_{21} = \begin{bmatrix} \Sigma_{\mathbf{Z}X} & \Sigma_{\mathbf{Z}Y} \end{bmatrix}, \qquad C_{22} = \Sigma_{\mathbf{Z}\mathbf{Z}}$$
Then the standard formula for linear regression gives

$$\beta = \Sigma_{\mathbf{Z}\mathbf{Z}}^{-1}\,\Sigma_{\mathbf{Z}X}, \qquad \gamma = \Sigma_{\mathbf{Z}\mathbf{Z}}^{-1}\,\Sigma_{\mathbf{Z}Y}$$
Hence, the residuals can be written as

$$e_X = X - \beta^{\mathsf T}\mathbf{Z}, \qquad e_Y = Y - \gamma^{\mathsf T}\mathbf{Z}$$
Note that $e_X$ and $e_Y$ have expectation zero because of the inclusion of an intercept term in $\mathbf{Z}$. Computing the covariance matrix of the residual vector $(e_X, e_Y)$ now gives

$$\begin{bmatrix} \operatorname{Var}(e_X) & \operatorname{Cov}(e_X, e_Y) \\ \operatorname{Cov}(e_X, e_Y) & \operatorname{Var}(e_Y) \end{bmatrix} = C_{11} - C_{12}\, C_{22}^{-1}\, C_{21} \qquad (2)$$
Next, write the precision matrix $\Omega = \Sigma^{-1}$ in a similar block form, partitioned in the same way as $\Sigma$:

$$\Omega = \begin{bmatrix} p_{XX} & p_{XY} & p_{X\mathbf{Z}} \\ p_{YX} & p_{YY} & p_{Y\mathbf{Z}} \\ p_{\mathbf{Z}X} & p_{\mathbf{Z}Y} & p_{\mathbf{Z}\mathbf{Z}} \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}$$
Then, by Schur's formula for block-matrix inversion,

$$P_{11}^{-1} = C_{11} - C_{12}\, C_{22}^{-1}\, C_{21}$$
The entries of the right-hand-side matrix are precisely the covariances previously computed in (2), giving

$$P_{11}^{-1} = \begin{bmatrix} \operatorname{Var}(e_X) & \operatorname{Cov}(e_X, e_Y) \\ \operatorname{Cov}(e_X, e_Y) & \operatorname{Var}(e_Y) \end{bmatrix}$$
Using the formula for the inverse of a 2×2 matrix gives

$$P_{11} = \begin{bmatrix} p_{XX} & p_{XY} \\ p_{YX} & p_{YY} \end{bmatrix} = \frac{1}{\operatorname{Var}(e_X)\operatorname{Var}(e_Y) - \operatorname{Cov}(e_X, e_Y)^2}\begin{bmatrix} \operatorname{Var}(e_Y) & -\operatorname{Cov}(e_X, e_Y) \\ -\operatorname{Cov}(e_X, e_Y) & \operatorname{Var}(e_X) \end{bmatrix}$$
So indeed, the partial correlation is

$$\rho_{XY\cdot\mathbf{Z}} = \frac{\operatorname{Cov}(e_X, e_Y)}{\sqrt{\operatorname{Var}(e_X)\,\operatorname{Var}(e_Y)}} = -\frac{p_{XY}}{\sqrt{p_{XX}\,p_{YY}}}$$
as claimed in (1).
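A quick numerical check of this equivalence in R (on simulated data; the variable names and coefficients below are illustrative) compares the residual-based definition with the precision-matrix formula (1):

set.seed(1)
n  <- 200
z1 <- rnorm(n); z2 <- rnorm(n)
x  <- 2 * z1 - z2 + rnorm(n)
y  <- z1 + 0.5 * z2 + rnorm(n)

# Definition: correlation of the residuals from regressing x and y on (z1, z2)
r_resid <- cor(residuals(lm(x ~ z1 + z2)), residuals(lm(y ~ z1 + z2)))

# Equation (1): from the precision matrix of (x, y, z1, z2)
omega  <- solve(cov(cbind(x, y, z1, z2)))
r_prec <- -omega[1, 2] / sqrt(omega[1, 1] * omega[2, 2])

all.equal(r_resid, r_prec)   # TRUE, up to floating-point error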