When a data set is unbalanced (the number of samples in different classes varies greatly), the error rate of a classifier is not representative of its true performance. A simple example makes this clear: if there are 990 samples from class A and only 10 samples from class B, the classifier can easily be biased towards class A. A classifier that labels every sample as class A achieves 99% accuracy, yet this figure says little about its true performance: it has a 100% recognition rate for class A but a 0% recognition rate for class B.
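A minimal sketch of the 990-versus-10 example: a degenerate classifier that always predicts class A reaches 99% accuracy while recognizing no class B samples at all.

```python
# 990 samples of class A, 10 of class B; the classifier always says "A".
y_true = ["A"] * 990 + ["B"] * 10
y_pred = ["A"] * 1000

# Overall accuracy: fraction of samples classified correctly.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(cls):
    """Fraction of actual `cls` samples predicted as `cls` (per-class recognition rate)."""
    preds_for_cls = [p for t, p in zip(y_true, y_pred) if t == cls]
    return sum(p == cls for p in preds_for_cls) / len(preds_for_cls)

print(accuracy)      # 0.99
print(recall("A"))   # 1.0
print(recall("B"))   # 0.0
```

The per-class recall values expose what the single accuracy number hides.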
In the example confusion matrix below, of the 8 actual cats, the system predicted that three were dogs; and of the 6 actual dogs, it predicted that one was a rabbit and two were cats. The matrix shows that the system has trouble distinguishing between cats and dogs, but can distinguish rabbits from other types of animals reasonably well.
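Such a matrix can be tallied directly from paired (actual, predicted) labels. In the sketch below, the cat and dog rows follow the counts described above; the rabbit row (11 of 13 correct, 2 misread as dogs) is an invented placeholder, since those counts are not stated in the text.

```python
from collections import Counter

# (actual, predicted) pairs; cat and dog tallies follow the example,
# the rabbit row is an assumed placeholder for illustration.
pairs = (
    [("cat", "cat")] * 5 + [("cat", "dog")] * 3                              # 8 actual cats
    + [("dog", "dog")] * 3 + [("dog", "cat")] * 2 + [("dog", "rabbit")] * 1  # 6 actual dogs
    + [("rabbit", "rabbit")] * 11 + [("rabbit", "dog")] * 2                  # assumed rabbits
)

counts = Counter(pairs)
labels = ["cat", "dog", "rabbit"]

# Rows are actual classes, columns are predicted classes.
print("actual\\predicted", *labels, sep="\t")
for actual in labels:
    print(actual, *(counts[(actual, pred)] for pred in labels), sep="\t")
```

Reading along a row shows how samples of one actual class were spread across predictions.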
For example, consider a model that predicts, for 10,000 insurance claims, whether each case is fraudulent. The model correctly identifies 9,700 non-fraudulent cases and 100 fraudulent cases. However, it also flags 150 non-fraudulent cases as fraudulent and misses 50 fraudulent cases, predicting them to be non-fraudulent. The resulting table of confusion is shown below.
Table 2: Example Table of Confusion.

                        Predicted fraudulent   Predicted non-fraudulent
Actual fraudulent             100 (TP)                  50 (FN)
Actual non-fraudulent         150 (FP)               9,700 (TN)
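From these four cell counts, the standard summary metrics follow directly. The sketch below computes them; note that despite 98% accuracy, the model catches only two thirds of the frauds, and 60% of its fraud alerts are false alarms.

```python
# Cell counts from the insurance-claim example above.
tp, fn = 100, 50      # actual fraudulent:     caught / missed
fp, tn = 150, 9_700   # actual non-fraudulent: flagged / cleared

accuracy  = (tp + tn) / (tp + tn + fp + fn)  # all correct / all cases
precision = tp / (tp + fp)                   # how many alerts are real frauds
recall    = tp / (tp + fn)                   # how many frauds are caught

print(f"accuracy={accuracy:.1%} precision={precision:.1%} recall={recall:.1%}")
```

This is the same effect as in the unbalanced-classes example above: a high accuracy figure masks much weaker performance on the minority (fraudulent) class.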