Methods Inf Med 2000; 39(04/05): 325-331
DOI: 10.1055/s-0038-1634399
Original Article
Schattauer GmbH

Evaluation of Three Swedish ICD-10 Primary Care Versions: Reliability and Ease of Use in Diagnostic Coding

G. Nilsson 1, H. Petersson 2, H. Åhlfeldt 2, L.-E. Strender 1

Author Affiliations:
1 Family Medicine Stockholm, Karolinska Institute, Sweden
2 Medical Informatics, Linköping University, Sweden

Publication History
Publication Date: 08 February 2018 (online)

Abstract:

If computer-stored information is to be useful for purposes other than patient care, the reliability of the data is of utmost importance. In primary healthcare settings, however, reliability has been found to be poor. This paper presents a study on the influence of coding tools on reliability and user acceptance. Six general practitioners each coded 152 medical problems using three versions of ICD-10, one of which has a compositional structure. At the code level, reliability was poor and nearly identical across the three versions. At the aggregated level, reliability was good and somewhat better with the compositional version. The study also generated ideas for improving user acceptance and explored the need for several different tools for retrieving diagnostic codes.
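
The abstract does not state which agreement statistic was used, although the reference list cites kappa-based methodology (refs. 21-23). As a minimal sketch, assuming pairwise Cohen's kappa and the three-character ICD-10 category as the aggregated level, the Python fragment below illustrates how code-level and aggregated-level reliability could be computed for a panel of coders; the coder labels and toy diagnoses are hypothetical and are not data from the study.

from collections import Counter
from itertools import combinations


def cohens_kappa(codes_a, codes_b):
    # Chance-corrected agreement between two coders over the same problems.
    n = len(codes_a)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Expected agreement from each coder's marginal code frequencies.
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a.keys() & freq_b.keys())
    return 1.0 if p_exp == 1.0 else (p_obs - p_exp) / (1.0 - p_exp)


def mean_pairwise_kappa(assignments, aggregate=None):
    # Average kappa over all coder pairs; `aggregate` maps each code to a
    # coarser class, e.g. its three-character ICD-10 category.
    if aggregate is not None:
        assignments = {coder: [aggregate(c) for c in codes]
                       for coder, codes in assignments.items()}
    kappas = [cohens_kappa(assignments[a], assignments[b])
              for a, b in combinations(assignments, 2)]
    return sum(kappas) / len(kappas)


# Toy data: three hypothetical coders, four problems (the study itself used
# six general practitioners and 152 problems per version).
assignments = {
    "gp1": ["J06.9", "K21.9", "M54.5", "E11.9"],
    "gp2": ["J06.8", "K21.9", "M54.4", "E11.9"],
    "gp3": ["J06.9", "K21.0", "M54.5", "E11.8"],
}

print("code level:      ", round(mean_pairwise_kappa(assignments), 2))
print("aggregated level:", round(mean_pairwise_kappa(assignments, aggregate=lambda c: c[:3]), 2))

In the toy data, aggregating to the three-character category removes fourth-character disagreements, so agreement improves at the aggregated level, mirroring the pattern reported in the abstract.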

 
  REFERENCES

  • 1 Patientjournaler med datorstöd (Computer-supported patient records). Stockholm: SPRI; 1995
  • 2 Klassifikation av sjukdomar 1987. Primärvård (Classification of Diseases 1987. Primary Care). Stockholm: Socialstyrelsen; 1987
  • 3 Klassifikation av sjukdomar och hälsoproblem 1997. Primärvård (Classification of Diseases and Health Problems 1997. Primary Care). Stockholm: Socialstyrelsen; 1997
  • 4 Ridderikhoff J, van Herk E. A diagnostic support system in general practice: is it feasible? Int J Med Inform 1997; 45: 133-43.
  • 5 Gill PW, Leaper DJ, Guillou PJ, Staniland JR, Horrocks JC, de Dombal FT. Observer variation in clinical diagnosis. A computer-aided assessment of its magnitude and importance in 552 patients with abdominal pain. Methods Inf Med 1973; 12: 108-13.
  • 6 Zarling EJ, Sexton H, Milnor P. Failure to diagnose acute myocardial infarction. The clinicopathologic experience at a large community hospital. JAMA 1983; 250: 1177-81.
  • 7 de Dombal FT, Dallos V, McAdam WA. Can computer aided teaching packages improve clinical care in patients with acute abdominal pain? Br Med J 1991; 302: 1495-7.
  • 8 James NK, Reid CD. Plastic surgery audit codes: are the results reproducible? Br J Plast Surg 1991; 44: 62-4.
  • 9 Bentsen BG. The accuracy of recording patient problems in family practice. J Med Educ 1976; 51: 311-6.
  • 10 Bridges-Webb C. Classifying and coding morbidity in general practice: validity and reliability in an international trial. J Fam Pract 1986; 23: 147-50.
  • 11 Britt H, Angelis M, Harris E. The reliability and validity of doctor-recorded morbidity data in active data collection systems. Scand J Prim Health Care 1998; 16: 50-5.
  • 12 Dixon J, Sanderson C, Elliot P, Walls P, Jones J, Petticrew M. Assessment of the reproducibility of clinical coding in routinely collected hospital activity data: a study in two hospitals. J Public Health Med 1998; 20: 63-9.
  • 13 Hohnloser JH, Kadle P, Peurner F. Coding Clinical Information: Analysis of Clinicians Using Computerized Coding. Methods Inf Med 1996; 35: 104-7.
  • 14 De Bruijn LM, Hasman A, Arends JW. Automatic coding of diagnostic reports. Methods Inf Med 1998; 37: 260-5.
  • 15 Rector AL. Faithfulness or comparability [editorial; comment]. Methods Inf Med 1996; 35: 218-9.
  • 16 Bolton P, Mira M, Usher H, Prior G. A model for the evaluation of computerised codes. The Gabrieli Medical Nomenclature as an example. Aust Fam Physician 1997; 26 Suppl 2: S76-8.
  • 17 Petersson H, Nilsson G, Åhlfeldt H, Malmberg B-G, Wigertz O. Design and implementation of a World Wide Web accessible database for the Swedish ICD-10 primary care version using a concept system approach. In: Masys DR, editor. Proceedings of the 1997 AMIA Annual Fall Symposium; 1997 Oct 25-29; Nashville, USA. Philadelphia: Hanley & Belfus; 1997: 885.
  • 18 Weed LL. Medical records, patient care, and medical education. Chicago: Year Book Medical Publishers; 1969
  • 19 Gaston-Johansson F. Measurement of pain: the psychometric properties of the Pain-O-Meter, a simple, inexpensive pain assessment tool that could change health care. J Pain Symptom Manage 1996; 12: 172-81.
  • 20 Altman DG. Practical Statistics for Medical Research. London: Chapman and Hall; 1991
  • 21 Gjorup T. The Kappa coefficient and the prevalence of a diagnosis. Methods Inf Med 1988; 27: 184-6.
  • 22 Brennan P, Silman A. Statistical methods for assessing observer variability in clinical measures. Br Med J 1992; 304: 1491-4.
  • 23 Haas M. Statistical methodology for reliability studies. J Manipulative Physiol Ther 1991; 14: 119-32.
  • 24 Anderson JE. Reliability of morbidity data in family practice. J Fam Pract 1980; 4: 677-83.
  • 25 Nuopponen A. Begreppssystem för terminologisk analys (Concept Systems for Terminological Analysis) [dissertation]. English summary. Vaasa: University of Vaasa; 1994
  • 26 Rossi Mori A, Consorti F, Galeazzi E. Standards to support development of terminological systems for healthcare telematics. Methods Inf Med 1998; 37: 551-63.
  • 27 Van Ginneken AM. The Structure of Data in Medical Records. In: Van Bemmel JH, McCray AT, editors. Yearbook of Medical Informatics 1995. Stuttgart, New York: Schattauer; 1995: 61-70.