# Training, validation, and test data sets

## Tasks in machine learning / From Wikipedia, the free encyclopedia

In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data.^{[1]} Such algorithms function by making data-driven predictions or decisions,^{[2]} building a mathematical model from input data. The input data used to build the model are usually divided into multiple data sets. In particular, three data sets are commonly used in different stages of the creation of the model: training, validation, and test sets.

The model is initially fit on a **training data set**,^{[3]} which is a set of examples used to fit the parameters (e.g. weights of connections between neurons in artificial neural networks) of the model.^{[4]} The model (e.g. a naive Bayes classifier) is trained on the training data set using a supervised learning method, for example using optimization methods such as gradient descent or stochastic gradient descent. In practice, the training data set often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as the *target* (or *label*). The current model is run with the training data set and produces a result, which is then compared with the *target*, for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation.
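The fitting loop described above can be sketched in a few lines. The data, model, and learning rate here are illustrative assumptions (a small linear model trained by stochastic gradient descent), not a specific method prescribed by the article:

```python
import numpy as np

# Hypothetical training data set: pairs of input vectors and scalar targets,
# generated from y = 2*x1 - 3*x2 + 1 plus a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                                     # input vectors
y = 2 * X[:, 0] - 3 * X[:, 1] + 1 + 0.01 * rng.normal(size=200)   # targets (labels)

# Model parameters to be fitted: weights and a bias term.
w = np.zeros(2)
b = 0.0
lr = 0.1  # learning rate (a hyperparameter, tuned later on validation data)

# Stochastic gradient descent: for each (input, target) pair, compare the
# model's prediction with the target and adjust the parameters accordingly.
for epoch in range(50):
    for i in rng.permutation(len(X)):
        pred = X[i] @ w + b      # run the current model on one input vector
        error = pred - y[i]      # compare the result with the target
        w -= lr * error * X[i]   # adjust parameters along the loss gradient
        b -= lr * error
```

After training, `w` and `b` should be close to the generating values `[2, -3]` and `1`.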

Subsequently, the fitted model is used to predict the responses for the observations in a second data set called the **validation data set**.^{[3]} The validation data set provides an unbiased evaluation of a model fit on the training data set while tuning the model's hyperparameters^{[5]} (e.g. the number of hidden units—layers and layer widths—in a neural network^{[4]}). Validation data sets can be used for regularization by early stopping (stopping training when the error on the validation data set increases, as this is a sign of over-fitting to the training data set).^{[6]}
This simple procedure is complicated in practice by the fact that the validation data set's error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad-hoc rules for deciding when over-fitting has truly begun.^{[6]}
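One such ad-hoc rule is a "patience" criterion: keep training while validation error improves, and stop only after it has failed to improve for several consecutive epochs, which tolerates small fluctuations and local minima in the validation curve. A minimal sketch, with the function name and patience value as illustrative assumptions:

```python
def train_with_early_stopping(val_losses, patience=3):
    """val_losses: iterable yielding the validation error after each epoch.
    Returns the epoch index of the best model seen before stopping."""
    best_loss = float("inf")
    best_epoch = 0
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss = loss
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # over-fitting is assumed to have begun
    return best_epoch

# A fluctuating validation curve: the brief rises at epochs 5-7 do not
# trigger a stop by themselves, but three non-improving epochs in a row do,
# and the minimum at epoch 4 is the one kept.
curve = [1.0, 0.8, 0.7, 0.65, 0.6, 0.62, 0.61, 0.63, 0.7, 0.8]
best = train_with_early_stopping(curve, patience=3)
```

Here `best` is `4`, the epoch with the lowest validation error before the patience budget is exhausted.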

Finally, the **test data set** is a data set used to provide an unbiased evaluation of a *final* model fit on the training data set.^{[5]} If the data in the test data set has never been used in training (for example in cross-validation), the test data set is also called a **holdout data set**. The term "validation set" is sometimes used instead of "test set" in some literature (e.g., if the original data set was partitioned into only two subsets, the test set might be referred to as the validation set).^{[5]}
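A holdout evaluation can be sketched as follows. The data and model (an ordinary-least-squares fit) are illustrative assumptions; the point is only that the test rows take no part in fitting, so the test error is an unbiased estimate of performance on new data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = 2 * X[:, 0] - 3 * X[:, 1] + 1

# Hold out the last 60 rows as the test set; they are never used in training.
X_train, y_train = X[:240], y[:240]
X_test, y_test = X[240:], y[240:]

# Fit a final model (least squares with a bias column) on the training data only.
A_train = np.hstack([X_train, np.ones((len(X_train), 1))])
coef, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

# Unbiased evaluation: mean squared error on data never seen during fitting.
A_test = np.hstack([X_test, np.ones((len(X_test), 1))])
test_mse = np.mean((A_test @ coef - y_test) ** 2)
```

Since the toy data here are noiseless, `test_mse` is essentially zero; with real data it estimates the generalization error of the final model.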

Deciding the sizes and strategies for dividing data into training, validation, and test sets depends heavily on the problem and the data available.^{[7]}
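A simple shuffled three-way split can be written as below. The 60/20/20 ratio is only an illustrative default, not a recommendation from the article; as noted above, the right proportions are problem-dependent:

```python
import random

def split_indices(n, train_frac=0.6, val_frac=0.2, seed=0):
    """Shuffle indices 0..n-1 and partition them into
    (train, validation, test) index lists."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # shuffle so the split is random
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])     # remainder becomes the test set

train_idx, val_idx, test_idx = split_indices(100)
```

For `n = 100` this yields 60 training, 20 validation, and 20 test indices, with every index appearing in exactly one subset.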