Table 1 Common terms

From: Three myths about risk thresholds for prediction models

AUC

Area under the curve, in this case the receiver operating characteristic curve. A measure of discrimination. For prediction models based on logistic regression, this corresponds to the probability that a randomly selected diseased patient has a higher predicted risk than a randomly selected patient who does not have the disease.
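The pairwise interpretation above can be computed directly: the AUC equals the proportion of (diseased, non-diseased) patient pairs in which the diseased patient has the higher predicted risk, with ties counted as one half. A minimal sketch, using hypothetical risk predictions rather than data from the article:

```python
def auc_pairwise(risks_diseased, risks_healthy):
    """AUC as the probability that a randomly selected diseased patient
    has a higher predicted risk than a randomly selected non-diseased
    patient; tied predictions count as half a concordant pair."""
    concordant = 0.0
    for rd in risks_diseased:
        for rh in risks_healthy:
            if rd > rh:
                concordant += 1.0
            elif rd == rh:
                concordant += 0.5
    return concordant / (len(risks_diseased) * len(risks_healthy))

# Hypothetical predicted risks for diseased and non-diseased patients.
print(auc_pairwise([0.8, 0.6, 0.7], [0.2, 0.6, 0.3]))  # 8.5 of 9 pairs concordant
```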

Calibration

Correspondence between predicted and observed risks, usually assessed with calibration plots or with calibration intercepts and slopes.

Sensitivity

The proportion of true positives among truly diseased patients.

Specificity

The proportion of true negatives among truly non-diseased patients.

Positive predictive value

The proportion of true positives among patients classified as positive.

Negative predictive value

The proportion of true negatives among patients classified as negative.
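The four classification measures above all derive from the same 2x2 cross-classification of true disease status against the classification made at some risk threshold. A minimal sketch with hypothetical data:

```python
def classification_measures(diseased, positive):
    """Compute sensitivity, specificity, PPV and NPV from parallel
    boolean lists: true disease status and classification result."""
    tp = sum(d and p for d, p in zip(diseased, positive))          # true positives
    fp = sum((not d) and p for d, p in zip(diseased, positive))    # false positives
    fn = sum(d and (not p) for d, p in zip(diseased, positive))    # false negatives
    tn = sum((not d) and (not p) for d, p in zip(diseased, positive))  # true negatives
    return {
        "sensitivity": tp / (tp + fn),  # true positives among the diseased
        "specificity": tn / (tn + fp),  # true negatives among the non-diseased
        "ppv": tp / (tp + fp),          # true positives among those classified positive
        "npv": tn / (tn + fn),          # true negatives among those classified negative
    }

# Hypothetical cohort: 3 diseased, 5 non-diseased patients.
diseased = [True, True, True, False, False, False, False, False]
positive = [True, True, False, True, False, False, False, False]
print(classification_measures(diseased, positive))
```

Note that sensitivity and specificity condition on true disease status, while the predictive values condition on the classification; the latter therefore depend on disease prevalence.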

Decision curve analysis

A method to evaluate classifications for a range of possible thresholds, reflecting different costs of false positives and benefits of true positives.
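The weighting of false-positive costs against true-positive benefits is commonly expressed through the net benefit at a given risk threshold, where the odds of the threshold serve as the weight for false positives. A minimal sketch of this standard calculation, with hypothetical counts:

```python
def net_benefit(tp, fp, n, threshold):
    """Net benefit at a given risk threshold: true positives per patient
    minus false positives per patient, weighted by the odds of the
    threshold, which encode the relative harm of a false positive."""
    weight = threshold / (1.0 - threshold)
    return tp / n - (fp / n) * weight

# Hypothetical counts at a 20% risk threshold in a cohort of 1000.
print(net_benefit(tp=30, fp=50, n=1000, threshold=0.20))
```

A decision curve plots this quantity across a range of thresholds, so that readers with different views on the cost of a false positive can each locate the relevant comparison.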

Net reclassification improvement

A measure reflecting reclassifications in the right direction when making decisions based on one prediction model compared with another.

STRATOS

STRengthening Analytical Thinking for Observational Studies