Mean signed deviation
From Wikipedia, the free encyclopedia
In statistics, the mean signed difference (MSD),[1] also known as the mean signed deviation, mean signed error, or mean bias error,[2] is a sample statistic that summarizes how well a set of estimates matches the quantities they are supposed to estimate. It is one of a number of statistics that can be used to assess an estimation procedure, and it is often used in conjunction with a sample version of the mean squared error.
For example, suppose a linear regression model has been estimated over a sample of data and is then used to predict values of the dependent variable out of sample, after the out-of-sample data points have become available. Then $\theta_i$ would be the $i$-th out-of-sample value of the dependent variable, and $\hat{\theta}_i$ would be its predicted value. The mean signed deviation is the average value of $\hat{\theta}_i - \theta_i$.
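The out-of-sample scenario above can be sketched as follows; the synthetic data, seed, and the split into in-sample and out-of-sample windows are illustrative, not part of the article.

```python
import numpy as np

# Illustrative data: a noisy linear relationship.
rng = np.random.default_rng(0)
x = np.arange(20, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Fit a simple linear regression on the first 15 points (in sample).
slope, intercept = np.polyfit(x[:15], y[:15], deg=1)

# Predict the remaining 5 points (out of sample).
y_true = y[15:]          # observed out-of-sample values (the theta_i)
y_pred = slope * x[15:] + intercept  # their predictions (the theta-hat_i)

# Mean signed deviation: average of (predicted - observed).
msd = np.mean(y_pred - y_true)
print(msd)
```

Unlike the mean squared error, positive and negative errors can cancel here, so a value near zero indicates little systematic over- or under-prediction rather than small errors overall.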
Definition
The mean signed difference is derived from a set of $n$ pairs $(\hat{\theta}_i, \theta_i)$, where $\hat{\theta}_i$ is an estimate of the parameter $\theta$ in a case where it is known that $\theta = \theta_i$. In many applications, all the quantities $\theta_i$ will share a common value. When applied to forecasting in a time-series analysis context, a forecasting procedure might be evaluated using the mean signed difference, with $\hat{\theta}_i$ being the predicted value of a series at a given lead time and $\theta_i$ being the value of the series eventually observed for that time point. The mean signed difference is defined to be

$$\operatorname{MSD}(\hat{\theta}) = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{\theta}_i - \theta_i \right).$$
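The definition translates directly into code; the function name and example values below are illustrative.

```python
def mean_signed_difference(estimates, true_values):
    """Average of (estimate - true value); the sign indicates
    the direction of any systematic over- or under-estimation."""
    if len(estimates) != len(true_values):
        raise ValueError("inputs must have the same length")
    n = len(estimates)
    return sum(e - t for e, t in zip(estimates, true_values)) / n

# Forecasts versus the values eventually observed:
msd = mean_signed_difference([2.5, 3.0, 4.5], [2.0, 3.5, 4.0])
print(msd)  # 0.5 / 3, i.e. about 0.167: forecasts run slightly high
```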
Use cases
The mean signed difference is often useful when the estimates are biased away from the true values in a consistent direction. If the estimator that produces the $\hat{\theta}_i$ values is unbiased, then $\operatorname{MSD}(\hat{\theta})$ is zero in expectation. If the estimates are produced by a biased estimator, however, the mean signed difference is a useful tool for understanding the direction of the estimator's bias.
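As a sketch of this, consider the population-variance formula (dividing by $n$), which is a biased estimator of the variance when applied to small samples: its mean signed difference against the true variance comes out negative, revealing a downward bias. The simulation parameters below are illustrative.

```python
import random
import statistics

random.seed(42)
true_var = 4.0  # variance of a N(0, 2^2) distribution

# Estimate the variance of many small samples with the
# divide-by-n estimator, which is biased low for samples.
diffs = []
for _ in range(2000):
    sample = [random.gauss(0.0, 2.0) for _ in range(5)]
    biased_est = statistics.pvariance(sample)  # divides by n
    diffs.append(biased_est - true_var)

msd = sum(diffs) / len(diffs)
print(msd)  # negative: the estimator systematically undershoots
```

The sign of the result is the point: a squared-error summary would show that the estimates are off, but only the signed average shows that they are off in one consistent direction.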