Ridge regression

Regularization technique for ill-posed problems

Ridge regression is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated.[1] It has been used in many fields, including econometrics, chemistry, and engineering.[2] Also known as Tikhonov regularization, named for Andrey Tikhonov, it is a method of regularization of ill-posed problems.[a] It is particularly useful for mitigating multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters.[3] In general, the method provides improved efficiency in parameter estimation in exchange for a tolerable amount of bias (see bias–variance tradeoff).[4]
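
Concretely, in the standard linear-model notation y = Xβ + ε (the symbols here are the conventional ones, not defined in this excerpt), ridge regression adds a penalty λ ≥ 0 on the size of the coefficients to the least-squares objective, which yields a closed-form estimator:

```latex
\hat{\beta}_{\text{ridge}}
  = \operatorname*{arg\,min}_{\beta}\, \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_2^2
  = \left( X^{\mathsf{T}} X + \lambda I \right)^{-1} X^{\mathsf{T}} y,
  \qquad \lambda \ge 0 .
```

Adding λI makes the matrix being inverted well-conditioned even when XᵀX is nearly singular, which is exactly the situation multicollinearity creates; setting λ = 0 recovers ordinary least squares.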

The theory was first introduced by Hoerl and Kennard in 1970 in their Technometrics papers “Ridge Regression: Biased Estimation for Nonorthogonal Problems” and “Ridge Regression: Applications to Nonorthogonal Problems”.[5][6][1] This was the result of ten years of research into the field of ridge analysis.[7]

Ridge regression was developed as a remedy for the imprecision of least-squares estimators when linear regression models have multicollinear (highly correlated) independent variables. The ridge regression estimator gives more precise estimates of the regression parameters, as its variance and mean squared error are often smaller than those of the least-squares estimators previously derived.[8][2]
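
As a concrete illustration of this variance reduction, the sketch below (an assumed setup, not taken from the sources cited) simulates a nearly collinear design and compares the sampling spread of the ordinary least-squares and ridge coefficient estimates; the penalty value lam = 1.0 is an arbitrary illustrative choice.

```python
# Minimal sketch: with highly correlated predictors, the ridge estimate
# (X'X + lambda*I)^{-1} X'y typically varies far less across resampled
# datasets than the OLS estimate (X'X)^{-1} X'y.
import numpy as np

rng = np.random.default_rng(0)
n, lam = 100, 1.0                 # sample size and an illustrative penalty
beta_true = np.array([2.0, -1.0])

ols_fits, ridge_fits = [], []
for _ in range(1000):
    x1 = rng.normal(size=n)
    x2 = x1 + 0.01 * rng.normal(size=n)   # nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = X @ beta_true + rng.normal(size=n)

    # OLS: solve (X'X) beta = X'y
    ols_fits.append(np.linalg.solve(X.T @ X, X.T @ y))
    # Ridge: solve (X'X + lambda*I) beta = X'y
    ridge_fits.append(np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y))

print("OLS   coefficient std devs:", np.std(ols_fits, axis=0))
print("Ridge coefficient std devs:", np.std(ridge_fits, axis=0))
# The ridge standard deviations come out much smaller, at the cost of
# some bias toward zero (the bias-variance tradeoff noted above).
```

In this simulation the OLS coefficients swing wildly from sample to sample because the near-collinearity makes XᵀX almost singular, while the ridge coefficients stay tightly clustered, which is the behavior the paragraph above describes.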