One in ten rule

Statistical rule of thumb

In statistics, the one in ten rule is a rule of thumb for how many predictor parameters can be estimated from data when doing regression analysis (in particular proportional hazards models in survival analysis and logistic regression) while keeping the risk of overfitting and of finding spurious correlations low. The rule states that one predictive variable can be studied for every ten events.[1][2][3][4] For logistic regression the number of events is given by the size of the smallest of the outcome categories, and for survival analysis it is given by the number of uncensored events.[3] In other words, roughly ten observations or labels are needed for each predictor variable studied.

For example, if a sample of 200 patients is studied and 20 patients die during the study (so that 180 patients survive), the one in ten rule implies that two pre-specified predictors can reliably be fitted to the total data. Similarly, if 100 patients die during the study (so that 100 patients survive), ten pre-specified predictors can be fitted reliably. If more are fitted, the rule implies that overfitting is likely and the results will not predict well outside the training data. It is not uncommon to see the 1:10 rule violated in fields with many variables (e.g. gene expression studies in cancer), decreasing the confidence in reported findings.[5]
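
A minimal sketch of this calculation in Python (the function name and its default threshold are illustrative, not from any standard library):

    def max_predictors(events: int, events_per_variable: int = 10) -> int:
        """Maximum number of pre-specified predictors allowed by an
        events-per-variable rule of thumb for a given number of events."""
        return events // events_per_variable

    # Examples from the text: 20 deaths among 200 patients, then 100 deaths.
    print(max_predictors(20))   # -> 2
    print(max_predictors(100))  # -> 10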

Improvements

A "one in 20 rule" has been suggested, indicating the need for shrinkage of regression coefficients, and a "one in 50 rule" for stepwise selection with the default p-value of 5%.[4][6] Other studies, however, show that the one in ten rule may be too conservative as a general recommendation and that five to nine events per predictor can be enough, depending on the research question.[7]

More recently, a study has shown that the ratio of events per predictive variable is not a reliable statistic for determining the minimum number of events needed to estimate a logistic prediction model.[8] Instead, the number of predictor variables, the total sample size (events + non-events) and the events fraction (events / total sample size) can be used to calculate the expected prediction error of the model that is to be developed.[9] One can then estimate the required sample size to achieve an expected prediction error smaller than a predetermined allowable value.[9]
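
A sketch of this calculation, assuming the log-linear approximation of the expected mean absolute prediction error (MAPE) attributed to van Smeden et al.; the numeric coefficients below are an assumption to be checked against the original paper before use:

    import math

    def expected_mape(n: int, p: int, phi: float) -> float:
        """Approximate expected mean absolute prediction error of a logistic
        model with n observations, p predictors and events fraction phi
        (assumed approximation from van Smeden et al., 2019)."""
        return math.exp(-0.508 + 0.259 * math.log(phi)
                        + 0.504 * math.log(p) - 0.544 * math.log(n))

    def required_n(p: int, phi: float, target_mape: float) -> int:
        """Smallest n whose expected MAPE falls below target_mape,
        obtained by solving the approximation above for n."""
        ln_n = (-0.508 + 0.259 * math.log(phi) + 0.504 * math.log(p)
                - math.log(target_mape)) / 0.544
        return math.ceil(math.exp(ln_n))

    # Example: 10 predictors, an events fraction of 0.2, allowable MAPE 0.05.
    print(required_n(p=10, phi=0.2, target_mape=0.05))  # -> about 380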

Alternatively, three requirements for prediction model estimation have been suggested: the model should have a global shrinkage factor of ≥ 0.9, an absolute difference of ≤ 0.05 between the model's apparent and adjusted Nagelkerke R², and a precise estimation of the overall risk or rate in the target population.[10] The necessary sample size and number of events for model development are then given by the values that meet these requirements.[10]
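
A hedged sketch of the first requirement (the shrinkage criterion), assuming the closed-form sample-size expression of Riley et al., which is stated in terms of an anticipated Cox–Snell R² rather than the Nagelkerke R² used in the second requirement:

    import math

    def n_for_shrinkage(p: int, r2_cs: float, shrinkage: float = 0.9) -> int:
        """Minimum sample size so that the expected global shrinkage factor
        is at least `shrinkage`, given p predictors and an anticipated
        Cox-Snell R-squared (assumed formula from Riley et al., 2019)."""
        return math.ceil(p / ((shrinkage - 1) * math.log(1 - r2_cs / shrinkage)))

    # Example: 10 predictors and an anticipated Cox-Snell R^2 of 0.2.
    print(n_for_shrinkage(p=10, r2_cs=0.2))  # -> about 400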

Other modalities

For highly correlated input data the one in ten rule (ten observations or labels per feature) may not be directly applicable, because the features are not independent. For images there is a rule of thumb that roughly 1000 examples are needed per class.[11] For a binary classification of images (hypothetically 1000 × 1000 pixels each, i.e. 1 000 000 features per image), this would mean that only 2000 labels / 1 000 000 features = 0.002 labels per feature are required. This is only possible because of the high (spatial) correlation of pixels.
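
The arithmetic behind this example, using the text's hypothetical image size:

    # Hypothetical binary image-classification example from the text.
    classes = 2
    examples_per_class = 1000    # "1000 examples per class" rule of thumb
    features = 1000 * 1000       # pixels in a hypothetical 1000 x 1000 image

    labels = classes * examples_per_class
    print(labels / features)     # -> 0.002 labels per feature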

Literature

  • David A. Freedman (1983) "A Note on Screening Regression Equations," The American Statistician, 37:2, 152–155, doi:10.1080/00031305.1983.10482729
