The Use of Information Graphs to Evaluate and Compare Diagnostic Tests
Received: 20 January 2001
Accepted: 11 September 2001
Published online: 07 February 2018
Objectives: The purpose of this communication is to demonstrate the use of “information graphs” as a means of characterizing diagnostic test performance.
Methods: Basic concepts in information theory allow us to quantify diagnostic uncertainty and diagnostic information. Given the probabilities of the diagnoses that can explain a patient’s condition, the entropy of that distribution is a measure of our uncertainty about the diagnosis. The relative entropy of the posttest probabilities with respect to the pretest probabilities quantifies the amount of information gained by diagnostic testing. Mutual information is the expected value of relative entropy and, hence, provides a measure of expected diagnostic information. These concepts are used to derive formulas for calculating diagnostic information as a function of pretest probability for a given pair of test operating characteristics.
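The quantities described above can be sketched in code. The following is a minimal illustration for a binary test (not the authors' implementation; function names are ours): given a pretest probability of disease and the test's sensitivity and specificity, it computes the relative entropy of the posttest distribution with respect to the pretest distribution for each test result, and takes their expectation to obtain the mutual information, i.e., the expected diagnostic information in bits.

```python
from math import log2

def entropy(probs):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

def relative_entropy(post, pre):
    """Kullback-Leibler divergence D(post || pre) in bits."""
    return sum(q * log2(q / p) for q, p in zip(post, pre) if q > 0)

def diagnostic_information(pretest, sens, spec):
    """Expected diagnostic information (mutual information, in bits)
    of a binary test with the given operating characteristics."""
    # Probability of a positive and a negative test result.
    p_pos = pretest * sens + (1 - pretest) * (1 - spec)
    p_neg = 1 - p_pos
    pre = (pretest, 1 - pretest)
    # Posttest distributions over (disease, no disease) by Bayes' theorem.
    post_pos = (pretest * sens / p_pos,
                (1 - pretest) * (1 - spec) / p_pos)
    post_neg = (pretest * (1 - sens) / p_neg,
                (1 - pretest) * spec / p_neg)
    # Mutual information = expected relative entropy over test results.
    return (p_pos * relative_entropy(post_pos, pre)
            + p_neg * relative_entropy(post_neg, pre))

# A perfect test at pretest probability 0.5 resolves all uncertainty:
print(diagnostic_information(0.5, 1.0, 1.0))  # → 1.0 bit
```

Evaluating `diagnostic_information` over a range of pretest probabilities for fixed operating characteristics yields the curve plotted in an information graph; an uninformative test (sensitivity = specificity = 0.5) yields zero bits at every pretest probability.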
Results: Plots of diagnostic information as a function of pretest probability are constructed to evaluate and compare the performance of three tests commonly used in the diagnosis of coronary artery disease. The graphs illustrate the critical role that the pretest probability plays in determining diagnostic test information.
Conclusions: Information graphs summarize diagnostic test performance and offer a way to evaluate and compare diagnostic tests.