Deviance information criterion

Diagnostic statistic used in Bayesian model selection

The deviance information criterion (DIC) is a hierarchical modeling generalization of the Akaike information criterion (AIC). It is particularly useful in Bayesian model selection problems where the posterior distributions of the models have been obtained by Markov chain Monte Carlo (MCMC) simulation. Like AIC, DIC is an asymptotic approximation that holds as the sample size becomes large, and it is valid only when the posterior distribution is approximately multivariate normal.

Definition

Define the deviance as $D(\theta) = -2\log\big(p(y\mid\theta)\big) + C$, where $y$ are the data, $\theta$ are the unknown parameters of the model and $p(y\mid\theta)$ is the likelihood function. $C$ is a constant that cancels out in all calculations that compare different models, and which therefore does not need to be known.
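
As a concrete illustration (a minimal sketch, not part of the article itself), the deviance can be coded directly once a likelihood is chosen. Here we assume a normal model $y_i \sim N(\theta, \sigma^2)$ with known $\sigma$; the function name deviance and the default sigma=1.0 are hypothetical choices for this example.

    import numpy as np

    def deviance(theta, y, sigma=1.0):
        # D(theta) = -2 log p(y | theta); the arbitrary constant C is taken as 0.
        # Assumes y_i ~ Normal(theta, sigma^2) with known sigma (illustration only).
        log_lik = (-0.5 * np.sum((y - theta) ** 2) / sigma**2
                   - 0.5 * y.size * np.log(2 * np.pi * sigma**2))
        return -2.0 * log_lik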

There are two calculations in common usage for the effective number of parameters of the model. The first, as described in Spiegelhalter et al. (2002, p. 587), is $p_D = \bar{D} - D(\bar{\theta})$, where $\bar{\theta}$ is the expectation of $\theta$ and $\bar{D} = \mathbb{E}^{\theta}[D(\theta)]$ is the posterior mean of the deviance. The second, as described in Gelman et al. (2004, p. 182), is $p_V = \tfrac{1}{2}\widehat{\operatorname{var}}\big(D(\theta)\big)$, half the posterior variance of the deviance. The larger the effective number of parameters is, the easier it is for the model to fit the data, and so the deviance needs to be penalized.
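
Both versions are easy to estimate from posterior draws. Continuing the hypothetical sketch above, where theta_samples stands for a 1-D array of MCMC draws of $\theta$ and y is the observed data:

    # theta_samples: posterior draws of theta; y: the observed data
    D_samples = np.array([deviance(t, y) for t in theta_samples])
    D_bar = D_samples.mean()                          # posterior mean deviance
    p_d = D_bar - deviance(theta_samples.mean(), y)   # Spiegelhalter et al. (2002)
    p_v = 0.5 * D_samples.var()                       # Gelman et al. (2004)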

The deviance information criterion is calculated as

$\mathrm{DIC} = p_D + \bar{D},$

or equivalently as

$\mathrm{DIC} = D(\bar{\theta}) + 2 p_D.$

From this latter form, the connection with AIC is more evident.
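
The equivalence of the two forms is immediate: substituting $\bar{D} = D(\bar{\theta}) + p_D$ (the Spiegelhalter et al. definition of $p_D$, rearranged) into the first form gives

$\mathrm{DIC} = p_D + \bar{D} = p_D + D(\bar{\theta}) + p_D = D(\bar{\theta}) + 2 p_D,$

which parallels $\mathrm{AIC} = -2\log L(\hat{\theta}) + 2k$, with the deviance at the posterior mean in place of the maximized deviance and $p_D$ in place of the parameter count $k$.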

Motivation

The idea is that models with smaller DIC should be preferred to models with larger DIC. Models are penalized both by the value of $\bar{D}$, which favors a good fit, and (similar to AIC) by the effective number of parameters $p_D$. Since $\bar{D}$ will decrease as the number of parameters in a model increases, the $p_D$ term compensates for this effect by favoring models with a smaller number of parameters.

An advantage of DIC over other criteria in the case of Bayesian model selection is that the DIC is easily calculated from the samples generated by a Markov chain Monte Carlo simulation. AIC requires calculating the likelihood at its maximum over $\theta$, which is not readily available from the MCMC simulation. But to calculate DIC, simply compute $\bar{D}$ as the average of $D(\theta)$ over the samples of $\theta$, and $D(\bar{\theta})$ as the value of $D$ evaluated at the average of the samples of $\theta$. Then the DIC follows directly from these approximations. Claeskens and Hjort (2008, Ch. 3.5) show that the DIC is large-sample equivalent to the natural model-robust version of the AIC.
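
Putting the pieces together, a minimal end-to-end sketch (reusing the hypothetical deviance function above; the direct conjugate-posterior draws stand in for the output of a real MCMC sampler):

    rng = np.random.default_rng(0)
    y = rng.normal(loc=2.0, scale=1.0, size=100)   # simulated data
    # Stand-in for MCMC output: with a flat prior and known sigma = 1,
    # the posterior of theta is Normal(mean(y), 1/n).
    theta_samples = rng.normal(loc=y.mean(), scale=1.0 / np.sqrt(y.size), size=5000)

    D_samples = np.array([deviance(t, y) for t in theta_samples])
    D_bar = D_samples.mean()                       # average of D(theta) over the draws
    D_at_mean = deviance(theta_samples.mean(), y)  # D at the average of the draws
    p_d = D_bar - D_at_mean                        # effective number of parameters (close to 1 here)
    dic = D_at_mean + 2 * p_d                      # equivalently p_d + D_bar
    print(f"p_D = {p_d:.2f}, DIC = {dic:.1f}")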

Assumptions

In the derivation of DIC, it is assumed that the specified parametric family of probability distributions that generate future observations encompasses the true model. This assumption does not always hold, and it is desirable to consider model assessment procedures in that scenario.

Also, the observed data are used both to construct the posterior distribution and to evaluate the estimated models. Therefore, DIC tends to select over-fitted models.

Extensions

A resolution to the issues above was suggested by Ando (2007), with the proposal of the Bayesian predictive information criterion (BPIC). Ando (2010, Ch. 8) provides a discussion of various Bayesian model selection criteria. To avoid the over-fitting problems of DIC, Ando (2011) developed Bayesian model selection criteria from a predictive viewpoint. The criterion is calculated as

$\mathit{IC} = \bar{D} + 2p_D = -2\,\mathbb{E}^{\theta}\big[\log\big(p(y\mid\theta)\big)\big] + 2p_D.$

The first term is a measure of how well the model fits the data, while the second term is a penalty on the model complexity. Note that the $p$ in this expression is the predictive distribution rather than the likelihood above.
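
Comparing this with the forms above shows how BPIC counteracts over-fitting: since $\mathrm{DIC} = \bar{D} + p_D$, the difference is

$\mathit{IC} - \mathrm{DIC} = (\bar{D} + 2p_D) - (\bar{D} + p_D) = p_D,$

so the predictive criterion doubles the complexity penalty, charging each effective parameter twice rather than once.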

Other applications of DIC

DIC was used in multiple S-Plus (and subsequently R) libraries for fitting likelihood-based models in the 1990s, predating the Bayesian methods to the extent the two overlap; it was usually presented as a generalization of AIC. The DIC was defined by Hastie and Tibshirani (1990, p. 160, eqn. 6.32)[1] for the weighted smoothers used in generalized additive models, and the requisite deviance and effective degrees-of-freedom calculations were incorporated into the GAM library (Hastie, 1991).[2]

The aic method in S-Plus and R is credited (initially) to Pinheiro and Bates, who developed it in conjunction with the nlme software,[3] and it was subsequently backported to other libraries (some of which compute a plain AIC, while others require DIC-style approximations).

In the context of local likelihood, a deviance information criterion is defined by Loader (1999, p. 69, definition 4.4),[4] with a derivation based on jackknifed leave-one-out cross-validation and with effective degrees-of-freedom calculations given explicitly in terms of likelihood derivatives. This leads to some small technical differences compared to the Hastie and Tibshirani approach.

Irizarry (2001)[5] also gives an extensive development of information criteria for local likelihood. Unlike the global techniques in the sources above, the criteria developed by Irizarry apply when estimating at a single point in the predictor space, and so are applicable to the locally adaptive smoothers considered by other authors.

See also

References
