Cross-validation

Cross-validation, sometimes called rotation estimation, is the statistical practice of partitioning a sample of data into subsets such that the analysis is initially performed on a single subset, while the other subset(s) are retained for subsequent use in confirming and validating the initial analysis.

The initial subset of data is called the training set; the other subset(s) are called validation or testing sets.

The theory of cross-validation was pioneered by Seymour Geisser. It is important in guarding against testing hypotheses suggested by the data (called "Type III error"), especially where further samples are hazardous, costly or impossible (uncomfortable science) to collect.

Common types of cross-validation

Holdout validation

Holdout validation is not cross-validation in the common sense, because the data are never crossed over. Observations are chosen randomly from the initial sample to form the validation data, and the remaining observations are retained as the training data. Normally, less than a third of the initial sample is used for validation data.
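
A minimal sketch of a holdout split in Python (not part of the original article); the data are assumed to be a plain list of observations, and the function name holdout_split and the 30% default validation fraction are illustrative.

    import random

    def holdout_split(data, validation_fraction=0.3, seed=0):
        """Randomly hold out a fraction of the observations as validation data."""
        rng = random.Random(seed)
        indices = list(range(len(data)))
        rng.shuffle(indices)
        n_validation = int(len(data) * validation_fraction)
        held_out = set(indices[:n_validation])
        training = [x for i, x in enumerate(data) if i not in held_out]
        validation = [data[i] for i in indices[:n_validation]]
        return training, validation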

Repeated random sub-sampling validation

This method randomly splits the dataset into training and validation data. For each such split, the classifier is retrained with the training data and validated on the remaining data. The results from each split can then be averaged. The advantage of this method (over k-fold cross validation) is that the proportion of the training/validation split is not dependent on the number of iterations (folds). The disadvantage of this method is that some observations may never be selected in the validation subsample, whereas others may be selected more than once. In other words, validation subsets may overlap.
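
A sketch of repeated random sub-sampling validation under the same assumptions; train_and_score is a hypothetical callback that fits the classifier on the training data and returns its quality on the validation data.

    import random

    def repeated_random_subsampling(data, train_and_score, n_splits=10,
                                    validation_fraction=0.3):
        """Average the validation quality over several independent random splits."""
        scores = []
        for split in range(n_splits):
            rng = random.Random(split)              # a fresh random split per iteration
            indices = list(range(len(data)))
            rng.shuffle(indices)
            n_validation = int(len(data) * validation_fraction)
            held_out = set(indices[:n_validation])
            training = [x for i, x in enumerate(data) if i not in held_out]
            validation = [data[i] for i in indices[:n_validation]]
            scores.append(train_and_score(training, validation))
        return sum(scores) / len(scores)            # some observations may never be validated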

K-fold cross-validation

In K-fold cross-validation, the original sample is partitioned into K subsamples. Of the K subsamples, a single subsample is retained as the validation data for testing the model, and the remaining K − 1 subsamples are used as training data. The cross-validation process is then repeated K times (the folds), with each of the K subsamples used exactly once as the validation data. The K results from the folds can then be averaged (or otherwise combined) to produce a single estimate. The advantage of this method over repeated random sub-sampling is that all observations are used for both training and validation, and each observation is used for validation exactly once. 10-fold cross-validation is commonly used.
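
A sketch of K-fold cross-validation under the same assumptions (train_and_score is again a hypothetical fit-and-evaluate callback); indices are dealt round-robin into K folds, so every observation serves as validation data exactly once. In practice the indices are often shuffled before the folds are formed.

    def k_fold_cross_validation(data, train_and_score, k=10):
        """Each of the K folds serves exactly once as the validation data."""
        folds = [[] for _ in range(k)]
        for i in range(len(data)):                  # deal indices round-robin into K folds
            folds[i % k].append(i)
        scores = []
        for fold in folds:
            held_out = set(fold)
            training = [x for i, x in enumerate(data) if i not in held_out]
            validation = [data[i] for i in fold]
            scores.append(train_and_score(training, validation))
        return sum(scores) / len(scores)            # average the K fold results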

If there are many more positive instances than negative instances in a dataset, there is a chance that a given fold may not contain any negative instances. To ensure that this does not happen, stratified K-fold cross-validation is used, in which each fold contains roughly the same proportion of class labels as the original set of samples.
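
A sketch of the stratified fold assignment, assuming a list of class labels parallel to the data; dealing each class's indices round-robin keeps the class proportions roughly equal across folds.

    from collections import defaultdict

    def stratified_folds(labels, k=10):
        """Assign indices to K folds so each fold roughly preserves class proportions."""
        by_class = defaultdict(list)
        for i, label in enumerate(labels):
            by_class[label].append(i)
        folds = [[] for _ in range(k)]
        for class_indices in by_class.values():
            for position, i in enumerate(class_indices):
                folds[position % k].append(i)
        return folds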

Leave-one-out cross-validation

As the name suggests, leave-one-out cross-validation (LOOCV) involves using a single observation from the original sample as the validation data, and the remaining observations as the training data. This is repeated such that each observation in the sample is used once as the validation data. This is the same as a K-fold cross-validation with K being equal to the number of observations in the original sample, though efficient algorithms exist in some cases, for example with kernel regression and with Tikhonov regularization.
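
A sketch of LOOCV with the same hypothetical train_and_score callback; it is simply K-fold cross-validation with K equal to the number of observations.

    def leave_one_out(data, train_and_score):
        """Validate on each single observation in turn, training on all the others."""
        scores = []
        for i in range(len(data)):
            validation = [data[i]]
            training = data[:i] + data[i + 1:]
            scores.append(train_and_score(training, validation))
        return sum(scores) / len(scores)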

Error estimation

The estimation error can be computed. Common error metrics are the mean squared error (MSE) and the root mean squared error (RMSE), respectively the estimated variance and standard deviation of the cross-validation errors.
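
A minimal sketch of the two metrics, computed from the cross-validated predictions and the corresponding true target values (both assumed to be plain numeric sequences of equal length).

    import math

    def mean_squared_error(predictions, targets):
        """MSE: the average squared difference between predictions and targets."""
        return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

    def root_mean_squared_error(predictions, targets):
        """RMSE: the square root of the MSE, in the same units as the target."""
        return math.sqrt(mean_squared_error(predictions, targets))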

Using a validation set

There can be a validation set that is independent of both the training and testing sets. It is often not discussed under the cross-validation topic, because "cross-validation" usually refers to just the training/testing split (10-fold, etc.).

The validation set can be used, for example, to watch for overfitting or to choose the best input parameters for a classifier model. In that case the data is split into three parts: training, test and validation. Then:

1) Use the training data in a cross-validation scheme like 10-fold or a 2/3 - 1/3 split to estimate the average quality of a classifier (e.g. its error rate, accuracy or F1-score).

2) Leave an additional subset of data (the validation set) to adjust these additional parameters or the structure of the model (such as the number of layers or neurons in a neural network, or the number of nodes in a decision tree). For example, you can use the validation set to decide when to stop growing a decision tree, and thus to test for overfitting, or to choose the best parameters of a classifier such as Rocchio or an SVM. In that case you obtain a model with a given set of parameters based on the training set, then estimate its quality using the validation set. Repeat this for many parameter/structure choices and select the choice with the best quality on the validation set.

3) Finally, take the best choice of parameters and model from step 2 and use it to estimate the quality on the test data (a code sketch of this workflow appears after the summary list below).

In summary, you use:

- the training set to compute the model,

- the validation set to choose the best parameters of this model (in case there are "additional" parameters that cannot be estimated from training alone),

- the test data as the final "judge" to get an estimate of the quality on new data that was used neither to train the model nor to determine its parameters, structure or complexity.
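
A sketch of the training/validation/test workflow described above; fit(training, params) and score(model, dataset) are hypothetical placeholders for the model-specific training and evaluation routines, and candidate_params is the list of parameter or structure choices to compare.

    def select_and_evaluate(training, validation, test, candidate_params, fit, score):
        """Pick the parameter choice that scores best on the validation set,
        then report that model's quality on the held-out test data."""
        best_params, best_model, best_score = None, None, float("-inf")
        for params in candidate_params:
            model = fit(training, params)            # train with this parameter choice
            current = score(model, validation)       # judge it on the validation set
            if current > best_score:
                best_params, best_model, best_score = params, model, current
        return best_params, score(best_model, test)  # final quality estimate on the test data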

See related topic: Early stopping

