The initial subset of data is called the training set; the other subset(s) are called validation or testing sets.
The theory of cross-validation was pioneered by Seymour Geisser. It is important in guarding against testing hypotheses suggested by the data (called "Type III error"), especially where further samples are hazardous, costly or impossible to collect (uncomfortable science).
Holdout validation is not cross-validation in the common sense, because the data are never crossed over. Observations are chosen randomly from the initial sample to form the validation data, and the remaining observations are retained as the training data. Normally, less than a third of the initial sample is used for validation data.
This method randomly splits the dataset into training and validation data. For each such split, the classifier is retrained with the training data and validated on the remaining data. The results from each split can then be averaged. The advantage of this method (over k-fold cross validation) is that the proportion of the training/validation split is not dependent on the number of iterations (folds). The disadvantage of this method is that some observations may never be selected in the validation subsample, whereas others may be selected more than once. In other words, validation subsets may overlap.
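Repeated random sub-sampling can be sketched as an index generator. This is a minimal illustration, not a library API; the function name and defaults are chosen for this example:

```python
import random

def random_subsampling_splits(n_samples, n_iterations, val_fraction=0.3, seed=0):
    """Yield (train_idx, val_idx) pairs for repeated random sub-sampling.

    Each iteration draws a fresh random validation subset, so validation
    subsets across iterations may overlap, and some observations may never
    be selected for validation at all.
    """
    rng = random.Random(seed)
    indices = list(range(n_samples))
    n_val = max(1, int(n_samples * val_fraction))
    for _ in range(n_iterations):
        rng.shuffle(indices)
        val_idx = sorted(indices[:n_val])
        train_idx = sorted(indices[n_val:])
        yield train_idx, val_idx
```

Note that the split proportion (`val_fraction`) is fixed independently of the number of iterations, which is exactly the flexibility described above.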
In K-fold cross-validation, the original sample is partitioned into K subsamples. Of the K subsamples, a single subsample is retained as the validation data for testing the model, and the remaining K − 1 subsamples are used as training data. The cross-validation process is then repeated K times (the folds), with each of the K subsamples used exactly once as the validation data. The K results from the folds can then be averaged (or otherwise combined) to produce a single estimate. The advantage of this method over repeated random sub-sampling is that all observations are used for both training and validation, and each observation is used for validation exactly once. 10-fold cross-validation is commonly used.
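The K-fold partitioning scheme can be sketched as follows; this is an illustrative helper, not a particular library's implementation (note that in practice the indices are usually shuffled first):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_idx, val_idx) pairs for K-fold cross-validation.

    The sample is partitioned into k folds; each fold serves as the
    validation set exactly once, so every observation is validated once.
    """
    indices = list(range(n_samples))
    # Distribute any remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, val_idx
        start += size
```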
If there are many more positive instances than negative instances in a dataset, there is a chance that a given fold may not contain any negative instances. To ensure that this does not happen, stratified K-fold cross-validation is used where each fold contains roughly the same proportion of class labels as in the original set of samples.
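One simple way to stratify is to group indices by class label and deal them round-robin into the folds, so each fold's class proportions roughly match the full sample. A minimal sketch (function name is illustrative):

```python
from collections import defaultdict

def stratified_k_fold(labels, k):
    """Yield (train_idx, val_idx) pairs where each validation fold
    preserves the class proportions of the original labels."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    # Deal each class's indices round-robin across the folds.
    for cls_indices in by_class.values():
        for i, idx in enumerate(cls_indices):
            folds[i % k].append(idx)
    all_idx = set(range(len(labels)))
    for fold in folds:
        val_idx = sorted(fold)
        train_idx = sorted(all_idx - set(fold))
        yield train_idx, val_idx
```

With, say, 8 positive and 2 negative instances and k = 2, each fold receives one negative instance, avoiding the all-positive fold described above.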
As the name suggests, leave-one-out cross-validation (LOOCV) involves using a single observation from the original sample as the validation data, and the remaining observations as the training data. This is repeated such that each observation in the sample is used once as the validation data. This is the same as a K-fold cross-validation with K being equal to the number of observations in the original sample, though efficient algorithms exist in some cases, for example with kernel regression and with Tikhonov regularization.
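Since LOOCV is just K-fold with K equal to the sample size, the split generator is especially short (again, an illustrative sketch):

```python
def leave_one_out_splits(n_samples):
    """Yield (train_idx, val_idx) pairs for leave-one-out cross-validation:
    each observation is the validation set exactly once."""
    for i in range(n_samples):
        yield [j for j in range(n_samples) if j != i], [i]
```

Naively this requires fitting the model n times, which is why the closed-form shortcuts mentioned above (e.g. for kernel regression and Tikhonov regularization) matter in practice.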
The validation set can be used, for example, to watch for overfitting, or to choose the best input parameters for a classifier model. In that case the data is split into three parts: training, validation and test. Then:
1) Use the training data in a cross-validation scheme such as 10-fold or a 2/3 - 1/3 split to estimate the average quality of a classifier (e.g. error rate, accuracy, or F1-score)
2) Set aside an additional subset of data (the validation set) to tune these additional parameters or the structure of the model (such as the number of layers or neurons in a neural network, or the number of nodes in a decision tree). For example, use the validation set to decide when to stop growing a decision tree, thus testing for overfitting; or to choose the best parameters, as in Rocchio or SVM classifiers: fit a model with a given set of parameters on the training set, then estimate its quality on the validation set. Repeat this for many parameter/structure choices and select the choice with the best quality on the validation set
3) Finally, take the best choice of parameters and model from step 2 and use it to estimate the quality on the test data
In summary, you use:
- the training set to compute the model,
- the validation set to choose the best parameters of this model (in case there are "additional" parameters that cannot be computed based on training)
- the test data as the final "judge" to estimate the quality on new data that was used neither to train the model nor to determine its parameters, structure, or complexity
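The three-set workflow above can be sketched as a generic selection loop. Everything here is illustrative: `fit` and `score` stand in for whatever training and evaluation routines the model uses, and `param_grid` is a hypothetical list of candidate settings:

```python
def select_and_evaluate(train, validation, test, param_grid, fit, score):
    """Fit one model per parameter choice on `train`, pick the choice with
    the best `validation` score, and report that model's score on `test`.

    `fit(data, params)` returns a model; `score(model, data)` returns a
    number where higher is better. Both are supplied by the caller.
    """
    best_params, best_model, best_val = None, None, float("-inf")
    for params in param_grid:
        model = fit(train, params)            # steps 1-2: train with these parameters
        val_score = score(model, validation)  # step 2: judge on the validation set
        if val_score > best_val:
            best_params, best_model, best_val = params, model, val_score
    # Step 3: the test set is touched exactly once, by the winning model only.
    return best_params, score(best_model, test)
```

The key property is that the test data never influences which parameters win, so the final score remains an honest estimate of quality on new data.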
See related topic: Early stopping