Receiver operating characteristic
from Wikipedia
ROC curve of three predictors of peptide cleaving in the proteasome.

A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the performance of a binary classifier model (although it can be generalized to multiple classes) at varying threshold values. ROC analysis is commonly applied in the assessment of diagnostic test performance in clinical epidemiology.

The ROC curve is the plot of the true positive rate (TPR) against the false positive rate (FPR) at each threshold setting.

The ROC can also be thought of as a plot of the statistical power as a function of the Type I error of the decision rule (when the performance is calculated from just a sample of the population, the plotted quantities can be regarded as estimators of these rates). The ROC curve is thus the sensitivity as a function of the false positive rate.[1]

Given that the probability distributions for both true positive and false positive are known, the ROC curve is obtained as the cumulative distribution function (CDF, area under the probability distribution from −∞ to the discrimination threshold) of the detection probability on the y-axis versus the CDF of the false-positive probability on the x-axis.

ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to the cost/benefit analysis of diagnostic decision making.

Terminology


The true-positive rate is also known as sensitivity or probability of detection.[2] The false-positive rate is also known as the probability of false alarm[2] and equals (1 − specificity). The ROC is also known as a relative operating characteristic curve, because it is a comparison of two operating characteristics (TPR and FPR) as the criterion changes.[3]

History


The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields, starting in 1941, which led to its name ("receiver operating characteristic").[4]

It was soon introduced to psychology to account for the perceptual detection of stimuli. ROC analysis has been used in medicine, radiology, biometrics, forecasting of natural hazards,[5] meteorology,[6] model performance assessment,[7] and other areas for many decades and is increasingly used in machine learning and data mining research.

Basic concept


A classification model (classifier or diagnosis[8]) is a mapping of instances between certain classes/groups. Because the classifier or diagnosis result can be an arbitrary real value (continuous output), the classifier boundary between classes must be determined by a threshold value (for instance, to determine whether a person has hypertension based on a blood pressure measure). Or it can be a discrete class label, indicating one of the classes.

Consider a two-class prediction problem (binary classification), in which the outcomes are labeled either as positive (p) or negative (n). There are four possible outcomes from a binary classifier. If the outcome from a prediction is p and the actual value is also p, then it is called a true positive (TP); however if the actual value is n then it is said to be a false positive (FP). Conversely, a true negative (TN) has occurred when both the prediction outcome and the actual value are n, and a false negative (FN) is when the prediction outcome is n while the actual value is p.

To get an appropriate example in a real-world problem, consider a diagnostic test that seeks to determine whether a person has a certain disease. A false positive in this case occurs when the person tests positive, but does not actually have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease.

Consider an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:

Confusion matrix and derived metrics (sources: [9][10][11][12][13][14][15][16])

Total population = P + N

                        Predicted positive                                  Predicted negative
Real positive (P)[a]    True positive (TP), hit[b]                          False negative (FN), miss, underestimation
Real negative (N)[d]    False positive (FP), false alarm, overestimation    True negative (TN), correct rejection[e]

Derived metrics:
  • True positive rate (TPR), recall, sensitivity (SEN), probability of detection, hit rate, power = TP/P = 1 − FNR
  • False negative rate (FNR), miss rate, type II error[c] = FN/P = 1 − TPR
  • False positive rate (FPR), probability of false alarm, fall-out, type I error[f] = FP/N = 1 − TNR
  • True negative rate (TNR), specificity (SPC), selectivity = TN/N = 1 − FPR
  • Prevalence = P/(P + N)
  • Positive predictive value (PPV), precision = TP/(TP + FP) = 1 − FDR
  • False discovery rate (FDR) = FP/(TP + FP) = 1 − PPV
  • False omission rate (FOR) = FN/(TN + FN) = 1 − NPV
  • Negative predictive value (NPV) = TN/(TN + FN) = 1 − FOR
  • Informedness, bookmaker informedness (BM) = TPR + TNR − 1
  • Markedness (MK), deltaP (Δp) = PPV + NPV − 1
  • Prevalence threshold (PT) = (√(TPR × FPR) − FPR)/(TPR − FPR)
  • Positive likelihood ratio (LR+) = TPR/FPR
  • Negative likelihood ratio (LR−) = FNR/TNR
  • Diagnostic odds ratio (DOR) = LR+/LR−
  • Accuracy (ACC) = (TP + TN)/(P + N)
  • Balanced accuracy (BA) = (TPR + TNR)/2
  • F1 score = 2 PPV × TPR/(PPV + TPR) = 2 TP/(2 TP + FP + FN)
  • Fowlkes–Mallows index (FM) = √(PPV × TPR)
  • Phi or Matthews correlation coefficient (MCC) = √(TPR × TNR × PPV × NPV) − √(FNR × FPR × FOR × FDR)
  • Threat score (TS), critical success index (CSI), Jaccard index = TP/(TP + FN + FP)
  1. ^ the number of real positive cases in the data
  2. ^ A test result that correctly indicates the presence of a condition or characteristic
  3. ^ Type II error: A test result which wrongly indicates that a particular condition or attribute is absent
  4. ^ the number of real negative cases in the data
  5. ^ A test result that correctly indicates the absence of a condition or characteristic
  6. ^ Type I error: A test result which wrongly indicates that a particular condition or attribute is present


ROC space

The ROC space and plots of the four prediction examples.
The ROC space for a "better" and "worse" classifier.

Several evaluation metrics can be derived from the contingency table (see infobox). To draw a ROC curve, only the true positive rate (TPR) and false positive rate (FPR) are needed (as functions of some classifier parameter). The TPR defines how many correct positive results occur among all positive samples available during the test. The FPR, on the other hand, defines how many incorrect positive results occur among all negative samples available during the test.

A ROC space is defined by FPR and TPR as x and y axes, respectively, which depicts relative trade-offs between true positive (benefits) and false positive (costs). Since TPR is equivalent to sensitivity and FPR is equal to 1 − specificity, the ROC graph is sometimes called the sensitivity vs (1 − specificity) plot. Each prediction result or instance of a confusion matrix represents one point in the ROC space.

The best possible prediction method would yield a point in the upper left corner or coordinate (0,1) of the ROC space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1) point is also called a perfect classification. A random guess would give a point along a diagonal line (the so-called line of no-discrimination) from the bottom left to the top right corners (regardless of the positive and negative base rates).[17] An intuitive example of random guessing is a decision by flipping coins. As the size of the sample increases, a random classifier's ROC point tends towards the diagonal line. In the case of a balanced coin, it will tend to the point (0.5, 0.5).

The diagonal divides the ROC space. Points above the diagonal represent good classification results (better than random); points below the line represent bad results (worse than random). Note that the output of a consistently bad predictor could simply be inverted to obtain a good predictor.

Consider four prediction results from 100 positive and 100 negative instances:

A:  TP = 63   FN = 37   (100)
    FP = 28   TN = 72   (100)
    column totals: 91, 109; grand total 200
    TPR = 0.63, FPR = 0.28, PPV = 0.69, F1 = 0.66, ACC = 0.68

B:  TP = 77   FN = 23   (100)
    FP = 77   TN = 23   (100)
    column totals: 154, 46; grand total 200
    TPR = 0.77, FPR = 0.77, PPV = 0.50, F1 = 0.61, ACC = 0.50

C:  TP = 24   FN = 76   (100)
    FP = 88   TN = 12   (100)
    column totals: 112, 88; grand total 200
    TPR = 0.24, FPR = 0.88, PPV = 0.21, F1 = 0.23, ACC = 0.18

C′: TP = 76   FN = 24   (100)
    FP = 12   TN = 88   (100)
    column totals: 88, 112; grand total 200
    TPR = 0.76, FPR = 0.12, PPV = 0.86, F1 = 0.81, ACC = 0.82

Plots of the four results above in the ROC space are given in the figure. The result of method A clearly shows the best predictive power among A, B, and C. The result of B lies on the random guess line (the diagonal line), and it can be seen in the table that the accuracy of B is 50%. However, when C is mirrored across the center point (0.5,0.5), the resulting method C′ is even better than A. This mirrored method simply reverses the predictions of whatever method or test produced the C contingency table. Although the original C method has negative predictive power, simply reversing its decisions leads to a new predictive method C′ which has positive predictive power. When the C method predicts p or n, the C′ method would predict n or p, respectively. In this manner, the C′ test would perform the best. The closer a result from a contingency table is to the upper left corner, the better it predicts, but the distance from the random guess line in either direction is the best indicator of how much predictive power a method has. If the result is below the line (i.e. the method is worse than a random guess), all of the method's predictions must be reversed in order to utilize its power, thereby moving the result above the random guess line.
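The arithmetic behind these four points is easy to reproduce. Below is a minimal Python sketch using the counts from the table above; it also shows that mirroring C into C′ is equivalent to inverting C's predicted labels:

```python
# Confusion-matrix counts for the four example methods (100 positives, 100 negatives).
methods = {
    "A": {"TP": 63, "FN": 37, "FP": 28, "TN": 72},
    "B": {"TP": 77, "FN": 23, "FP": 77, "TN": 23},
    "C": {"TP": 24, "FN": 76, "FP": 88, "TN": 12},
}

# Mirroring C across (0.5, 0.5) is the same as inverting its predictions:
# every predicted positive becomes a predicted negative and vice versa.
c = methods["C"]
methods["C'"] = {"TP": c["FN"], "FN": c["TP"], "FP": c["TN"], "TN": c["FP"]}

for name, m in methods.items():
    tp, fn, fp, tn = m["TP"], m["FN"], m["FP"], m["TN"]
    tpr = tp / (tp + fn)                      # sensitivity / recall
    fpr = fp / (fp + tn)                      # 1 - specificity
    ppv = tp / (tp + fp)                      # precision
    f1 = 2 * tp / (2 * tp + fp + fn)
    acc = (tp + tn) / (tp + fn + fp + tn)
    print(f"{name}: TPR={tpr:.2f} FPR={fpr:.2f} PPV={ppv:.2f} F1={f1:.2f} ACC={acc:.2f}")
```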

Curves in ROC space


In binary classification, the class prediction for each instance is often made based on a continuous random variable X, which is a "score" computed for the instance (e.g. the estimated probability in logistic regression). Given a threshold parameter T, the instance is classified as "positive" if X > T, and "negative" otherwise. X follows a probability density f_1(x) if the instance actually belongs to class "positive", and f_0(x) otherwise. Therefore, the true positive rate is given by TPR(T) = \int_T^{\infty} f_1(x)\,dx and the false positive rate is given by FPR(T) = \int_T^{\infty} f_0(x)\,dx. The ROC curve plots TPR(T) parametrically versus FPR(T) with T as the varying parameter.

For example, imagine that the blood protein levels in diseased people and healthy people are normally distributed with means of 2 g/dL and 1 g/dL respectively. A medical test might measure the level of a certain protein in a blood sample and classify any number above a certain threshold as indicating disease. The experimenter can adjust the threshold (green vertical line in the figure), which will in turn change the false positive rate. Increasing the threshold would result in fewer false positives (and more false negatives), corresponding to a leftward movement on the curve. The actual shape of the curve is determined by how much overlap the two distributions have.
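For illustration, the curve for this binormal example can be traced numerically. The sketch below assumes equal standard deviations of 0.5 g/dL for both groups (a value not given in the text) and sweeps the threshold:

```python
import numpy as np
from scipy.stats import norm

mu_disease, mu_healthy, sigma = 2.0, 1.0, 0.5   # means from the text; sigma is an assumed value

# Sweep the decision threshold across the range of protein levels.
thresholds = np.linspace(-1.0, 4.0, 501)
tpr = norm.sf(thresholds, loc=mu_disease, scale=sigma)   # P(level > t | diseased)
fpr = norm.sf(thresholds, loc=mu_healthy, scale=sigma)   # P(level > t | healthy)

# Raising the threshold moves left along the (fpr, tpr) curve:
# fewer false positives, more false negatives.
# For this binormal model the AUC has a closed form: Phi((mu1 - mu0) / sqrt(2 sigma^2)).
auc = norm.cdf((mu_disease - mu_healthy) / (np.sqrt(2) * sigma))
print(f"AUC under the assumed binormal model: {auc:.3f}")
```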

Criticisms

Example of receiver operating characteristic (ROC) curve highlighting the area under the curve (AUC) sub-area with low sensitivity and low specificity in red and the sub-area with high or sufficient sensitivity and specificity in green.[18]

Several studies criticize certain applications of the ROC curve and its area under the curve as measurements for assessing binary classifications when they do not capture the information relevant to the application.[19][18][20][21][22]

The main criticism of the ROC curve described in these studies concerns the incorporation of areas with low sensitivity and low specificity (both lower than 0.5) in the calculation of the total area under the curve (AUC),[20] as shown in the plot on the right.

According to the authors of these studies, that portion of the area under the curve (with low sensitivity and low specificity) corresponds to confusion matrices in which the binary predictions perform poorly, and therefore should not be included in the assessment of overall performance. Moreover, that portion of the AUC corresponds to very high or very low decision thresholds, which are rarely of interest to scientists performing binary classification in any field.[20]

Another criticism of the ROC curve and its area under the curve is that they say nothing about precision and negative predictive value.[18]

A high ROC AUC, such as 0.9 for example, might correspond to low values of precision and negative predictive value, such as 0.2 and 0.1 in the [0, 1] range. If one performed a binary classification, obtained an ROC AUC of 0.9 and decided to focus only on this metric, they might overoptimistically believe their binary test was excellent. However, if this person took a look at the values of precision and negative predictive value, they might discover their values are low.

The ROC AUC summarizes sensitivity and specificity, but does not inform regarding precision and negative predictive value.[18]

Further interpretations


Sometimes, the ROC is used to generate a summary statistic. Common versions are:

  • the intercept of the ROC curve with the line at 45 degrees orthogonal to the no-discrimination line - the balance point where Sensitivity = Specificity
  • the intercept of the ROC curve with the tangent at 45 degrees parallel to the no-discrimination line that is closest to the error-free point (0,1) – also called Youden's J statistic and generalized as Informedness[citation needed]
  • the area between the ROC curve and the no-discrimination line, multiplied by two, is called the Gini coefficient, especially in the context of credit scoring.[23] It should not be confused with the measure of statistical dispersion also called the Gini coefficient.
  • the area between the full ROC curve and the triangular ROC curve including only (0,0), (1,1) and one selected operating point – Consistency[24]
  • the area under the ROC curve, or "AUC" ("area under curve"), or A' (pronounced "a-prime"),[25] or "c-statistic" ("concordance statistic").[26]
  • the sensitivity index d′ (pronounced "d-prime"), the distance between the mean of the distribution of activity in the system under noise-alone conditions and its distribution under signal-alone conditions, divided by their standard deviation, under the assumption that both these distributions are normal with the same standard deviation. Under these assumptions, the shape of the ROC is entirely determined by d′.

However, any attempt to summarize the ROC curve into a single number loses information about the pattern of tradeoffs of the particular discriminator algorithm.

Probabilistic interpretation


The area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').[27] In other words, when given one randomly selected positive instance and one randomly selected negative instance, AUC is the probability that the classifier will be able to tell which one is which.

This can be seen as follows: the area under the curve is given by (the integral boundaries are reversed, as a large threshold T corresponds to a lower value on the x-axis)

\mathrm{AUC} = \int_{x=0}^{1} \mathrm{TPR}(\mathrm{FPR}^{-1}(x))\,dx = \int_{+\infty}^{-\infty} \mathrm{TPR}(T)\,\mathrm{FPR}'(T)\,dT = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} I(T' > T)\,f_1(T')\,f_0(T)\,dT'\,dT = P(X_1 > X_0),

where X_1 is the score for a positive instance, X_0 is the score for a negative instance, and f_1 and f_0 are the probability densities defined in the previous section.

If X_1 and X_0 follow two Gaussian distributions with means \mu_1, \mu_0 and standard deviations \sigma_1, \sigma_0, then \mathrm{AUC} = \Phi\left(\frac{\mu_1 - \mu_0}{\sqrt{\sigma_1^2 + \sigma_0^2}}\right), where \Phi is the standard normal cumulative distribution function.
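This equivalence is easy to check numerically. A small sketch, assuming two Gaussian score distributions with arbitrary (illustrative) parameters, compares the fraction of correctly ordered positive/negative pairs with the closed-form expression above:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu1, sigma1 = 1.0, 1.0      # positive-class score distribution (assumed parameters)
mu0, sigma0 = 0.0, 1.5      # negative-class score distribution (assumed parameters)

x1 = rng.normal(mu1, sigma1, 20_000)   # scores of random positive instances
x0 = rng.normal(mu0, sigma0, 20_000)   # scores of random negative instances

# Monte Carlo estimate of P(X1 > X0): compare independently drawn positive/negative pairs.
p_rank = np.mean(x1 > x0)

# Closed-form binormal AUC.
auc_closed = norm.cdf((mu1 - mu0) / np.hypot(sigma1, sigma0))
print(f"P(X1 > X0) ~ {p_rank:.3f}  vs  closed form {auc_closed:.3f}")
```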


Area under the curve


It can be shown that the AUC is closely related to the Mann–Whitney U,[28][29] which tests whether positives are ranked higher than negatives. For a predictor f, an unbiased estimator of its AUC can be expressed by the following Wilcoxon–Mann–Whitney statistic:[30]

\mathrm{AUC}(f) = \frac{\sum_{t_0 \in \mathcal{D}^0} \sum_{t_1 \in \mathcal{D}^1} \mathbf{1}[f(t_0) < f(t_1)]}{|\mathcal{D}^0| \cdot |\mathcal{D}^1|},

where \mathbf{1}[f(t_0) < f(t_1)] denotes an indicator function which returns 1 if f(t_0) < f(t_1) and 0 otherwise; \mathcal{D}^0 is the set of negative examples, and \mathcal{D}^1 is the set of positive examples.

In the context of credit scoring, a rescaled version of AUC is often used:

G_1 = 2\,\mathrm{AUC} - 1.

G_1 is referred to as the Gini index or Gini coefficient,[31] but it should not be confused with the measure of statistical dispersion that is also called the Gini coefficient. G_1 is a special case of Somers' D.
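A direct implementation of this estimator is sketched below (ties are counted as half a concordant pair, a common convention; for large samples one would use a rank-based formulation or a library routine rather than the explicit pairwise comparison):

```python
import numpy as np

def auc_wmw(scores_neg, scores_pos):
    """Wilcoxon-Mann-Whitney estimate of AUC: the fraction of (negative, positive)
    pairs in which the positive example receives the higher score."""
    scores_neg = np.asarray(scores_neg, dtype=float)
    scores_pos = np.asarray(scores_pos, dtype=float)
    greater = (scores_pos[:, None] > scores_neg[None, :]).sum()
    ties = (scores_pos[:, None] == scores_neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

neg = [0.1, 0.3, 0.35, 0.8]     # hypothetical classifier scores for negative examples
pos = [0.4, 0.6, 0.7, 0.9]      # hypothetical classifier scores for positive examples

auc = auc_wmw(neg, pos)
gini = 2 * auc - 1              # G1 = 2*AUC - 1, the rescaled version used in credit scoring
print(f"AUC = {auc:.3f}, Gini = {gini:.3f}")
```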

It is also common to calculate the Area Under the ROC Convex Hull (ROC AUCH = ROCH AUC) as any point on the line segment between two prediction results can be achieved by randomly using one or the other system with probabilities proportional to the relative length of the opposite component of the segment.[32] It is also possible to invert concavities – just as in the figure the worse solution can be reflected to become a better solution; concavities can be reflected in any line segment, but this more extreme form of fusion is much more likely to overfit the data.[33]

The machine learning community most often uses the ROC AUC statistic for model comparison.[34] This practice has been questioned because AUC estimates are quite noisy and suffer from other problems.[35][36][37] Nonetheless, the coherence of AUC as a measure of aggregated classification performance has been vindicated, in terms of a uniform rate distribution,[38] and AUC has been linked to a number of other performance metrics such as the Brier score.[39]

Another problem with ROC AUC is that reducing the ROC Curve to a single number ignores the fact that it is about the tradeoffs between the different systems or performance points plotted and not the performance of an individual system, as well as ignoring the possibility of concavity repair, so that related alternative measures such as Informedness[citation needed] or DeltaP are recommended.[24][40] These measures are essentially equivalent to the Gini for a single prediction point with DeltaP' = Informedness = 2AUC-1, whilst DeltaP = Markedness represents the dual (viz. predicting the prediction from the real class) and their geometric mean is the Matthews correlation coefficient.[citation needed]

Whereas ROC AUC varies between 0 and 1 — with an uninformative classifier yielding 0.5 — the alternative measures known as Informedness,[citation needed] Certainty [24] and Gini Coefficient (in the single parameterization or single system case)[citation needed] all have the advantage that 0 represents chance performance whilst 1 represents perfect performance, and −1 represents the "perverse" case of full informedness always giving the wrong response.[41] Bringing chance performance to 0 allows these alternative scales to be interpreted as Kappa statistics. Informedness has been shown to have desirable characteristics for Machine Learning versus other common definitions of Kappa such as Cohen Kappa and Fleiss Kappa.[citation needed][42]

Sometimes it can be more useful to look at a specific region of the ROC Curve rather than at the whole curve. It is possible to compute partial AUC.[43] For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests.[44] Another common approach for classification problems in which P ≪ N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.[45]
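A partial AUC over a low-FPR region can be computed directly from the empirical curve points. The sketch below, using hypothetical curve points, truncates the curve at a chosen FPR limit by linear interpolation and applies the trapezoidal rule (the result is left unnormalized; some conventions rescale it to [0, 1]):

```python
import numpy as np

def partial_auc(fpr, tpr, fpr_max=0.1):
    """Area under the ROC curve restricted to FPR <= fpr_max (unnormalized)."""
    fpr = np.asarray(fpr, dtype=float)
    tpr = np.asarray(tpr, dtype=float)
    # Keep points inside the region and add an interpolated point exactly at fpr_max.
    mask = fpr <= fpr_max
    x = np.append(fpr[mask], fpr_max)
    y = np.append(tpr[mask], np.interp(fpr_max, fpr, tpr))
    # Trapezoidal rule over the truncated curve.
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2))

# Hypothetical empirical ROC points, sorted by FPR and including (0, 0) and (1, 1).
fpr = [0.0, 0.02, 0.05, 0.10, 0.30, 1.0]
tpr = [0.0, 0.40, 0.55, 0.70, 0.85, 1.0]
print(f"Partial AUC up to FPR=0.1: {partial_auc(fpr, tpr, 0.1):.4f}")
```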

The ROC area under the curve is also called c-statistic or c statistic.[46]

Other measures

TOC Curve

The Total Operating Characteristic (TOC) also characterizes diagnostic ability while revealing more information than the ROC. For each threshold, ROC reveals two ratios, TP/(TP + FN) and FP/(FP + TN). In other words, ROC reveals hits/(hits + misses) and false alarms/(false alarms + correct rejections). On the other hand, TOC shows the total information in the contingency table for each threshold.[47] The TOC method reveals all of the information that the ROC method provides, plus additional important information that ROC does not reveal, i.e. the size of every entry in the contingency table for each threshold. TOC also provides the popular AUC of the ROC.[48]

ROC Curve

These figures are the TOC and ROC curves using the same data and thresholds. Consider the point that corresponds to a threshold of 74. The TOC curve shows the number of hits, which is 3, and hence the number of misses, which is 7. Additionally, the TOC curve shows that the number of false alarms is 4 and the number of correct rejections is 16. At any given point in the ROC curve, it is possible to glean values for the ratios of false alarms/(false alarms + correct rejections) and hits/(hits + misses). For example, at threshold 74, it is evident that the x coordinate is 0.2 and the y coordinate is 0.3. However, these two values are insufficient to construct all entries of the underlying two-by-two contingency table.

Detection error tradeoff graph

Example DET graph

An alternative to the ROC curve is the detection error tradeoff (DET) graph, which plots the false negative rate (missed detections) vs. the false positive rate (false alarms) on non-linearly transformed x- and y-axes. The transformation function is the quantile function of the normal distribution, i.e., the inverse of the cumulative normal distribution. It is, in fact, the same transformation as zROC, below, except that the complement of the hit rate, the miss rate or false negative rate, is used. This alternative spends more graph area on the region of interest. Most of the ROC area is of little interest; one primarily cares about the region tight against the y-axis and the top left corner – which, because of using miss rate instead of its complement, the hit rate, is the lower left corner in a DET plot. Furthermore, DET graphs have the useful property of linearity and a linear threshold behavior for normal distributions.[49] The DET plot is used extensively in the automatic speaker recognition community, where the name DET was first used. The analysis of the ROC performance in graphs with this warping of the axes was used by psychologists in perception studies halfway through the 20th century,[citation needed] where this was dubbed "double probability paper".[50]
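The axis warping is simply the normal quantile (probit) transform applied to both error rates. A minimal sketch of a DET plot from hypothetical miss and false-alarm rates:

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

# Hypothetical operating points: false positive (false alarm) rate and false negative (miss) rate.
fpr = np.array([0.001, 0.01, 0.05, 0.10, 0.20, 0.40])
fnr = np.array([0.40, 0.20, 0.10, 0.07, 0.04, 0.02])

# DET plot: both axes pass through the inverse of the standard normal CDF (probit).
plt.plot(norm.ppf(fpr), norm.ppf(fnr), marker="o")
plt.xlabel("False alarm rate (probit scale)")
plt.ylabel("Miss rate (probit scale)")
plt.title("DET curve")
plt.show()
```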

Z-score


If a standard score is applied to the ROC curve, the curve will be transformed into a straight line.[51] This z-score is based on a normal distribution with a mean of zero and a standard deviation of one. In memory strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0. The normal distributions of targets (studied objects that the subjects need to recall) and lures (non studied objects that the subjects attempt to recall) is the factor causing the zROC to be linear.

The linearity of the zROC curve depends on the standard deviations of the target and lure strength distributions. If the standard deviations are equal, the slope will be 1.0. If the standard deviation of the target strength distribution is larger than the standard deviation of the lure strength distribution, then the slope will be smaller than 1.0. In most studies, it has been found that the zROC curve slopes constantly fall below 1, usually between 0.5 and 0.9.[52] Many experiments yielded a zROC slope of 0.8. A slope of 0.8 implies that the variability of the target strength distribution is 25% larger than the variability of the lure strength distribution.[53]

Another variable used is d' (d prime) (discussed above in "Other measures"), which can easily be expressed in terms of z-values. Although d' is a commonly used parameter, it must be recognized that it is only relevant when strictly adhering to the very strong assumptions of strength theory made above.[54]
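Under the equal-variance Gaussian assumption, d′ is obtained from a single (hit rate, false-alarm rate) operating point via the inverse normal CDF, for example:

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index under the equal-variance Gaussian model:
    d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Example: 84% hits and 16% false alarms give a d' of about 2.
print(f"d' = {d_prime(0.84, 0.16):.2f}")
```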

The z-score of an ROC curve is always linear, as assumed, except in special situations. The Yonelinas familiarity-recollection model is a two-dimensional account of recognition memory. Instead of the subject simply answering yes or no to a specific input, the subject gives the input a feeling of familiarity, which operates like the original ROC curve. What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1. However, when adding the recollection component, the zROC curve will be concave up, with a decreased slope. This difference in shape and slope result from an added element of variability due to some items being recollected. Patients with anterograde amnesia are unable to recollect, so their Yonelinas zROC curve would have a slope close to 1.0.[55]

History


The ROC curve was first used during World War II for the analysis of radar signals before it was employed in signal detection theory.[56] Following the attack on Pearl Harbor in 1941, the United States military began new research to increase the prediction of correctly detected Japanese aircraft from their radar signals. For these purposes they measured the ability of a radar receiver operator to make these important distinctions, which was called the Receiver Operating Characteristic.[57]

In the 1950s, ROC curves were employed in psychophysics to assess human (and occasionally non-human animal) detection of weak signals.[56] In medicine, ROC analysis has been extensively used in the evaluation of diagnostic tests.[58][59] ROC curves are also used extensively in epidemiology and medical research and are frequently mentioned in conjunction with evidence-based medicine. In radiology, ROC analysis is a common technique to evaluate new radiology techniques.[60] In the social sciences, ROC analysis is often called the ROC Accuracy Ratio, a common technique for judging the accuracy of default probability models. ROC curves are widely used in laboratory medicine to assess the diagnostic accuracy of a test, to choose the optimal cut-off of a test and to compare diagnostic accuracy of several tests.

ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms.[61]

ROC curves are also used in verification of forecasts in meteorology.[62]

Radar in detail


As mentioned above, ROC curves are critical to radar operation and theory. The signals received at a receiver station, as reflected by a target, are often of very low energy in comparison to the noise floor. The ratio of signal to noise is an important metric when determining if a target will be detected. This signal-to-noise ratio is directly correlated with the receiver operating characteristics of the whole radar system, which are used to quantify the ability of the radar system.

Consider the development of a radar system. A specification for the abilities of the system may be provided in terms of the probability of detection, P_D, with a certain tolerance for false alarms, P_FA. A simplified approximation of the required signal-to-noise ratio at the receiver station can be calculated by solving the corresponding detection equation[63] for the signal-to-noise ratio χ. Here, χ is not in decibels, as is common in many radar applications. Conversion to decibels is through χ_dB = 10 log10(χ). From this figure, the common entries in the radar range equation (with noise factors) may be solved, to estimate the required effective radiated power.
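Since the cited detection equation is not reproduced here, the sketch below illustrates the idea with a simplified single-sample model: Gaussian noise, a known signal amplitude, and a threshold set from the false-alarm tolerance. The model choice is an assumption for illustration, not the radar equation referenced above.

```python
import numpy as np
from scipy.stats import norm

def required_snr_db(p_d, p_fa):
    """Required SNR (dB) for a single coherent sample in Gaussian noise.
    Simplified illustrative model, not the cited radar detection equation:
    threshold T = sigma * Q^-1(P_FA); detection requires (T - A)/sigma = Q^-1(P_D)."""
    amplitude_over_sigma = norm.isf(p_fa) - norm.isf(p_d)   # A / sigma
    snr_linear = amplitude_over_sigma ** 2                  # power ratio (not in dB)
    return 10 * np.log10(snr_linear)

# Example specification: 90% probability of detection with a 1e-6 false-alarm tolerance.
print(f"Required SNR ~ {required_snr_db(0.90, 1e-6):.1f} dB")
```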

ROC curves beyond binary classification


The extension of ROC curves for classification problems with more than two classes is cumbersome. Two common approaches for when there are multiple classes are (1) average over all pairwise AUC values[64] and (2) compute the volume under surface (VUS).[65][66] To average over all pairwise classes, one computes the AUC for each pair of classes, using only the examples from those two classes as if there were no other classes, and then averages these AUC values over all possible pairs. When there are c classes there will be c(c − 1) / 2 possible pairs of classes.
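A sketch of approach (1), averaging pairwise AUCs in the style of Hand and Till, is shown below using hand-rolled rank comparisons; the function names and the averaging of both directions within each pair are illustrative choices, and library implementations (for example, scikit-learn's roc_auc_score with multi_class="ovo") compute a similar average from per-class probability scores.

```python
import numpy as np
from itertools import combinations

def pairwise_auc(scores_pos, scores_neg):
    """Probability that a positive example outscores a negative one (ties count half)."""
    greater = (scores_pos[:, None] > scores_neg[None, :]).sum()
    ties = (scores_pos[:, None] == scores_neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

def average_pairwise_auc(y_true, y_score):
    """Average AUC over all c(c-1)/2 class pairs.

    y_true:  (n,) integer class labels 0..c-1
    y_score: (n, c) per-class scores, e.g. predicted probabilities
    """
    classes = np.unique(y_true)
    aucs = []
    for i, j in combinations(classes, 2):
        # Restrict to examples from classes i and j only, as if no other classes existed.
        s_i = y_score[y_true == i]
        s_j = y_score[y_true == j]
        # AUC of class i's score for separating i from j, and vice versa; average both directions.
        a_ij = pairwise_auc(s_i[:, i], s_j[:, i])
        a_ji = pairwise_auc(s_j[:, j], s_i[:, j])
        aucs.append((a_ij + a_ji) / 2)
    return float(np.mean(aucs))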

The volume under surface approach plots a hypersurface rather than a curve and then measures the hypervolume under that hypersurface. Every possible decision rule that one might use for a classifier for c classes can be described in terms of its true positive rates (TPR1, . . . , TPRc). It is this set of rates that defines a point, and the set of all possible decision rules yields a cloud of points that define the hypersurface. With this definition, the VUS is the probability that the classifier will be able to correctly label all c examples when it is given a set that has one randomly selected example from each class. The implementation of a classifier that knows that its input set consists of one example from each class might first compute a goodness-of-fit score for each of the c² possible pairings of an example to a class, and then employ the Hungarian algorithm to maximize the sum of the c selected scores over all c! possible ways to assign exactly one example to each class.

Given the success of ROC curves for the assessment of classification models, the extension of ROC curves for other supervised tasks has also been investigated. Notable proposals for regression problems are the so-called regression error characteristic (REC) Curves [67] and the Regression ROC (RROC) curves.[68] In the latter, RROC curves become extremely similar to ROC curves for classification, with the notions of asymmetry, dominance and convex hull. Also, the area under RROC curves is proportional to the error variance of the regression model.

from Grokipedia
The receiver operating characteristic (ROC) curve is a graphical representation of the trade-off between the true positive rate (sensitivity) and 1 − specificity for a binary classifier as the discrimination threshold varies. It plots sensitivity on the y-axis against 1 − specificity on the x-axis, allowing evaluation of a diagnostic test's or model's performance across different cutoff points without assuming a fixed threshold. The area under the ROC curve (AUC), ranging from 0.5 (random performance) to 1.0 (perfect discrimination), serves as a threshold-independent summary metric of overall accuracy.

Originating from signal detection theory during World War II, where it assessed operators' ability to distinguish signals from noise, the ROC framework was developed to quantify detection performance under varying conditions. It gained prominence in the 1970s through applications in medicine and radiology, evolving into a standard tool for analyzing continuous diagnostic tests by plotting empirical points from multiple thresholds or fitting smooth curves using models like the binormal distribution. Construction involves calculating sensitivity (true positives / (true positives + false negatives)) and specificity (true negatives / (true negatives + false positives)) at each threshold, then connecting the resulting points to form the curve.

In medical diagnostics, ROC curves are essential for comparing imaging modalities, such as evaluating chest radiographs for detecting abnormalities, and for selecting optimal thresholds that balance sensitivity and specificity. Beyond medicine, they are widely applied in machine learning to assess binary classifiers in detection tasks and ecological modeling, where AUC helps compare algorithms under imbalanced datasets. The method's robustness to changes in class prevalence makes it valuable in fields requiring reliable performance evaluation, though extensions like precision-recall curves address limitations in highly skewed data.

Fundamentals

Terminology

In binary classification tasks, instances are categorized into one of two mutually exclusive classes: the positive class (P), representing the event or condition of interest (e.g., presence of a disease), and the negative class (N), representing its absence. The total number of positive instances is P = TP + FN, and the total number of negative instances is N = FP + TN. A binary classifier's outcomes are summarized in a confusion matrix, which cross-tabulates actual versus predicted class labels to count four possible results: true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). A true positive (TP) counts instances that are actually positive and correctly predicted as positive. A false positive (FP) counts instances that are actually negative but incorrectly predicted as positive. A true negative (TN) counts instances that are actually negative and correctly predicted as negative. A false negative (FN) counts instances that are actually positive but incorrectly predicted as negative. The confusion matrix is structured as follows:

                    Predicted positive    Predicted negative
Actual positive     TP                    FN
Actual negative     FP                    TN

The true positive rate (TPR), also known as sensitivity or recall, measures the proportion of actual positive instances correctly identified and is calculated as TPR = TP / (TP + FN) = TP / P. The false positive rate (FPR), equivalent to 1 minus specificity, measures the proportion of actual negative instances incorrectly identified as positive and is calculated as FPR = FP / (FP + TN) = FP / N.

In practice, many classifiers produce a continuous score indicating the likelihood of an instance belonging to the positive class, and a decision threshold is applied to yield binary predictions: instances exceeding the threshold are classified as positive, while those below are classified as negative. For instance, in spam detection, an email with a score above 0.5 might be classified as spam (positive), resulting in a TP if it is spam, an FP if it is not, an FN if it is spam but scores below the threshold, or a TN if it is non-spam and below the threshold. These TPR and FPR values, computed across varying thresholds, form the basis for plotting the ROC curve.

Basic Concept

Receiver operating characteristic (ROC) analysis serves as a fundamental tool for evaluating the performance of binary classifiers by illustrating the trade-offs between sensitivity, or true positive rate (TPR), and specificity, or 1 minus the false positive rate (FPR), across varying discrimination thresholds. In binary classification tasks, where outcomes are divided into positive and negative classes, ROC analysis provides a comprehensive view of how well a model distinguishes between them, independent of a single fixed threshold, allowing for informed decisions based on the relative costs of false positives and false negatives.

At its core, the intuition behind ROC analysis lies in the probabilistic nature of classifier outputs, which are typically continuous scores representing the likelihood of an instance belonging to the positive class. By adjusting the decision threshold applied to these scores, one can shift the balance between correctly identifying true positives (increasing sensitivity) and avoiding false positives (increasing specificity): a higher threshold makes the classifier more conservative, reducing false positives at the expense of missing some true positives, and vice versa. This threshold variation highlights the inherent trade-off: improving one metric often degrades the other, enabling practitioners to select an operating point suited to the application's priorities, such as prioritizing detection over accuracy in high-stakes scenarios.

A practical example is a medical diagnostic test for a disease, like cancer, using a biomarker level as the classifier score. If the threshold is set high (e.g., ≥43.3 units), the test achieves high specificity (correctly identifying all healthy patients, FPR = 0) but moderate sensitivity (detecting 67% of diseased patients, TPR = 0.67), minimizing unnecessary treatments but risking missed diagnoses. Lowering the threshold (e.g., ≥29.0 units) boosts sensitivity to 100% (catching all cases) but drops specificity to 43% (more false alarms among healthy patients), illustrating how ROC analysis visualizes these compromises to guide clinical threshold selection.

In ROC space, where the x-axis represents 1 − specificity (FPR) and the y-axis represents sensitivity (TPR), a random classifier, which has no discriminatory power, produces points along the diagonal line from (0,0) to (1,1), equivalent to flipping a coin for predictions. Conversely, a perfect classifier achieves the ideal point at (0,1), attaining 100% sensitivity with 0% false positives, fully separating the classes without error.

Historical Development

Origins in Signal Detection

The receiver operating characteristic (ROC) was developed during World War II by electrical engineers and radar experts, primarily in the United States, to evaluate the performance of radar systems in detecting signals amid noise. This tool was essential for quantifying how effectively operators could identify genuine signals in the presence of background interference, thereby improving the reliability of detection systems in combat scenarios. Early work in radar detection theory, including contributions from figures like J. I. Marcum, addressed the critical need for accurate target identification amid wartime uncertainties.

The early terminology "receiver operating characteristic" stemmed from engineering concepts used to assess receiver sensitivity and performance under noisy conditions, adapting these to operator decisions and target detection. In the broader context, the ROC was integrated into signal detection research to model human decision-making processes, helping operators set thresholds for calling "target present" or "noise" despite perceptual ambiguities and environmental variability. This approach emphasized the probabilistic nature of detection, balancing the risks of misses and false alarms in high-stakes operations. A seminal publication detailing ROC principles in detection theory was "The Theory of Signal Detectability" by W. W. Peterson, T. G. Birdsall, and W. C. Fox in 1954, which formalized the framework for subsequent advancements.

Adoption and Evolution Across Fields

The receiver operating characteristic (ROC) framework, originating from signal detection theory, saw significant adoption in psychophysics during the 1950s and 1960s as researchers sought to quantify human sensory discrimination beyond traditional threshold models. This period marked a shift toward probabilistic models of perception, where ROC curves enabled the separation of sensitivity from response bias in experiments involving detection tasks, such as identifying faint stimuli amid noise. The seminal work by Green and Swets formalized these applications, demonstrating how ROC analysis could evaluate observer performance across varying decision criteria in auditory and visual detection experiments.

In the 1960s and 1970s, ROC analysis transitioned into medical diagnostics, particularly radiology, where it became a standard for assessing the accuracy of imaging systems and diagnostic tests against gold standards. Pioneering efforts extended ROC methodology to evaluate trade-offs in false positives and negatives for detecting abnormalities in X-rays and scans, addressing limitations of simple accuracy metrics. Key contributions included Metz's elucidation of ROC principles for radiologic applications, which facilitated comparisons of diagnostic modalities and influenced designs for test validation. By the 1980s, works like Swets and Pickett's evaluation framework solidified ROC as essential for minimizing bias in medical decision-making.

From the 1990s onward, ROC gained prominence in machine learning and pattern recognition for comparing classifier performance in binary decision problems, offering a threshold-independent measure superior to error rates in noisy or variable environments. This adoption was driven by the need to benchmark algorithms in tasks like optical character recognition and speech processing, where ROC curves visualized the spectrum of operating points. A landmark contribution was Bradley's 1997 analysis, which advocated the area under the ROC curve (AUC) as a robust summary statistic for evaluating machine learning algorithms, influencing its widespread use in empirical studies. Subsequent milestones included the integration of ROC into bioinformatics around the late 1990s, where it supported the assessment of classification accuracy in high-dimensional genomic data. This era also highlighted ROC's utility for imbalanced datasets, common in biological applications, as demonstrated in early works emphasizing its resilience to class prevalence compared to precision-recall alternatives.

Post-2020 developments have increasingly applied ROC in machine learning for fairness audits, particularly in detecting and mitigating bias across demographic subgroups in AI models for healthcare and credit scoring. Studies from 2021 to 2025 have used subgroup-specific ROC curves to quantify disparate performance, such as varying AUCs for mortality prediction in underrepresented populations, guiding equitable threshold selection to reduce discriminatory outcomes. For instance, analyses of AI tools employed ROC to evaluate bias in multi-group settings, revealing how data imbalances exacerbate inequities and informing mitigation strategies like reweighting. These applications underscore ROC's evolving role in ensuring responsible AI deployment.

ROC Curve Construction

ROC Space

The ROC space is a two-dimensional graphical framework used to evaluate and compare the performance of binary classifiers by plotting their true positive rate (TPR) against the false positive rate (FPR). This space provides a standardized way to visualize trade-offs between correctly identifying positive instances and incorrectly classifying negative ones, independent of specific decision thresholds or class distributions. The space is bounded by the unit square, with both axes ranging from 0 to 1, where the x-axis represents the FPR (the proportion of negative instances incorrectly classified as positive) and the y-axis represents the TPR (the proportion of positive instances correctly classified).

Key points in ROC space illustrate fundamental classifier behaviors. The origin at (0,0) corresponds to a classifier that predicts no positive instances, resulting in zero true positives and zero false positives. The point (1,1) represents a classifier that predicts all instances as positive, yielding complete true positives but also all possible false positives. The diagonal line y = x traces the performance of a random classifier, where TPR equals FPR at every point, indicating no discriminatory power beyond chance. An ideal classifier achieves the point (0,1), detecting all positives without any false positives, while a completely worthless classifier lies at (1,0), generating only false positives and missing all true positives.

ROC curves within this space exhibit monotonicity, meaning that as the FPR increases along the curve, the TPR never decreases, reflecting the sequential adjustment of classification thresholds from strict to lenient. The convex hull of a set of achievable ROC points delineates the boundary of optimal performance, enclosing all potentially superior classifiers while excluding suboptimal ones below it. This hull ensures that only classifiers on or above it are considered viable, as any point inside can be dominated by a convex combination of hull points for any cost or prevalence scenario. Visually, ROC space thus serves as a canvas for plotting these curves, with the upper-left corner approaching perfection and the lower-right indicating failure, facilitating intuitive assessment of classifier efficacy.

Generating ROC Curves

To generate an ROC curve for a binary classifier, begin with a dataset of labeled instances (positive and negative classes) where the classifier assigns a continuous or ordinal score to each instance, representing the estimated probability of belonging to the positive class. Sort the instances in decreasing order of score, and systematically vary a decision threshold θ across the range of possible values, typically placing thresholds midway between consecutive distinct scores to avoid ties. For each θ, classify instances with scores above θ as positive and below as negative, then compute the true positive rate (TPR, or sensitivity) as the fraction of actual positives correctly classified and the false positive rate (FPR, or 1 − specificity) as the fraction of actual negatives incorrectly classified; each pair (FPR(θ), TPR(θ)) forms a point on the curve.

Mathematically, the ROC curve is a parametric plot of TPR(θ) against FPR(θ) as the threshold θ varies from negative infinity (where all instances are classified positive, yielding TPR = 1, FPR = 1) to positive infinity (where all are classified negative, yielding TPR = 0, FPR = 0). This process traces the classifier's performance across all possible trade-offs between true positives and false positives. In the discrete case, where scores take finite values, the ROC curve consists of a finite set of points corresponding to the distinct thresholds, connected by straight line segments to form a step-like function; for visualization or analysis, linear interpolation between points or other smoothing techniques can approximate a continuous curve, though the convex hull of the points represents the achievable performance envelope.

Consider a simple example with a small dataset: suppose there are 5 positive and 5 negative instances scored by a binary classifier as [0.9, 0.8, 0.7, 0.6, 0.5] for positives and [0.4, 0.3, 0.2, 0.1, 0.0] for negatives. Sorting all scores in descending order and varying θ (e.g., θ = 0.65 yields TPR = 0.6, FPR = 0.0; θ = 0.35 yields TPR = 1.0, FPR = 0.2), the resulting points include (FPR = 0.0, TPR = 0.8), demonstrating how lowering θ increases both TPR and FPR; see the sketch after this section.

In the context of signal detection theory, the parametric form of the ROC curve can use the likelihood ratio as the threshold parameter, where the decision rule classifies an observation as signal-present if the likelihood ratio Λ (the ratio of the signal-plus-noise density to the noise-only density at the observation) exceeds a criterion β; the operating point on the curve then corresponds to this β, with the curve's slope at that point equaling β.
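The threshold sweep for the 5-positive/5-negative example can be reproduced with a few lines of Python; a minimal sketch:

```python
import numpy as np

# Scores from the example: 5 positives followed by 5 negatives.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0])
labels = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# Thresholds placed midway between consecutive distinct scores,
# plus endpoints that classify everything negative / everything positive.
midpoints = (scores[:-1] + scores[1:]) / 2
thresholds = np.concatenate(([1.0], midpoints, [-0.1]))

P, N = labels.sum(), (1 - labels).sum()
for t in thresholds:
    predicted_pos = scores > t
    tpr = (predicted_pos & (labels == 1)).sum() / P
    fpr = (predicted_pos & (labels == 0)).sum() / N
    print(f"threshold {t:5.2f}: FPR = {fpr:.1f}, TPR = {tpr:.1f}")
```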

Performance Evaluation

Area Under the Curve

The area under the receiver operating characteristic (ROC) curve, commonly denoted AUC, quantifies the overall performance of a binary classifier by measuring the integral of the true positive rate (TPR, also known as sensitivity) with respect to the false positive rate (FPR, 1 − specificity) from FPR = 0 to FPR = 1. This represents the expected value of TPR for a uniformly random FPR in the range [0, 1]. Mathematically, the AUC is given by

\text{AUC} = \int_{0}^{1} \text{TPR}(\text{FPR}) \, d\text{FPR}.

For empirical ROC curves generated from discrete threshold points (as described in ROC curve generation), the integral is approximated using the trapezoidal rule, which sums the areas of the trapezoids formed by connecting consecutive points (FPR_i, TPR_i) and (FPR_{i+1}, TPR_{i+1}):

\text{AUC} \approx \sum_{i=1}^{n-1} \frac{\text{TPR}_{i+1} + \text{TPR}_i}{2} \times (\text{FPR}_{i+1} - \text{FPR}_i),

where the points include the origin (0, 0) and the endpoint (1, 1).

A key probabilistic interpretation of the AUC is that it equals the probability that a randomly chosen positive instance is ranked higher (i.e., assigned a higher score) than a randomly chosen negative instance by the classifier. This equivalence holds because the ROC curve summarizes the classifier's ranking ability across thresholds, and it is identical to the normalized Mann–Whitney U statistic for comparing scores between positive and negative classes. AUC values range from 0 to 1, where an AUC of 1.0 indicates a perfect classifier with no overlap in scores between classes, an AUC of 0.5 corresponds to random guessing (equivalent to the diagonal line in ROC space), and values below 0.5 suggest a classifier performing worse than random, often implying an inverted decision rule.

To illustrate the computation, consider an empirical ROC curve with points (FPR, TPR): (0, 0), (0.2, 0.6), (0.5, 0.8), (1.0, 1.0). Applying the trapezoidal rule:
  • First segment: (0 + 0.6)/2 × (0.2 − 0) = 0.06
  • Second segment: (0.6 + 0.8)/2 × (0.5 − 0.2) = 0.21
  • Third segment: (0.8 + 1.0)/2 × (1.0 − 0.5) = 0.45

Summing these yields an approximate AUC of 0.72. In practice, libraries like scikit-learn implement this via the roc_auc_score function, which handles the point sorting and trapezoidal integration automatically from prediction scores and true labels.
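The trapezoidal computation for the four points above, written out explicitly (same numbers as the worked example):

```python
# Empirical ROC points (FPR, TPR), including the origin and (1, 1).
points = [(0.0, 0.0), (0.2, 0.6), (0.5, 0.8), (1.0, 1.0)]

auc = 0.0
for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
    auc += (y0 + y1) / 2 * (x1 - x0)     # area of the trapezoid between consecutive points

print(f"AUC = {auc:.2f}")   # 0.06 + 0.21 + 0.45 = 0.72
```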

Other Metrics

In addition to the area under the ROC curve (AUC), several other metrics derived from the ROC curve provide supplementary insights into classifier performance, particularly for threshold selection or domain-specific emphases such as high specificity. These metrics address scenarios where a single summary measure like AUC may not fully capture practical needs, such as identifying an optimal operating point or focusing on regions of clinical or operational relevance.

Youden's J statistic, defined as J = TPR + (1 − FPR) − 1, where TPR is the true positive rate (sensitivity) and FPR is the false positive rate (1 − specificity), quantifies the maximum vertical distance between the ROC curve and the chance line (the diagonal from (0,0) to (1,1)). This metric, introduced by Youden in 1950, reaches its maximum value at the threshold that optimizes the trade-off between sensitivity and specificity, making it particularly useful for diagnostic tests where balanced error rates are desired.

The closest-to-(0,1) distance metric selects the optimal threshold by minimizing the Euclidean distance from points on the ROC curve to the ideal point (0,1) in ROC space, calculated as

d = \sqrt{(\text{FPR} - 0)^2 + (\text{TPR} - 1)^2}.
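Both threshold-selection rules can be applied directly to empirical ROC points; a minimal sketch with hypothetical (FPR, TPR, threshold) triples:

```python
import numpy as np

# Hypothetical empirical ROC points and the thresholds that produced them.
fpr = np.array([0.0, 0.1, 0.2, 0.4, 0.7, 1.0])
tpr = np.array([0.0, 0.5, 0.7, 0.85, 0.95, 1.0])
thresholds = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.0])

# Youden's J: maximize TPR - FPR (the vertical distance to the chance line).
j = tpr - fpr
best_j = np.argmax(j)

# Closest-to-(0,1): minimize the Euclidean distance to the ideal corner.
d = np.sqrt(fpr**2 + (tpr - 1.0)**2)
best_d = np.argmin(d)

print(f"Youden's J: threshold {thresholds[best_j]}, J = {j[best_j]:.2f}")
print(f"Closest to (0,1): threshold {thresholds[best_d]}, d = {d[best_d]:.2f}")
```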