Appl Clin Inform 2018; 09(01): 156-162
DOI: 10.1055/s-0038-1627475
Research Article
Schattauer GmbH Stuttgart

The Reliability of Electronic Health Record Data Used for Obstetrical Research

Molly R. Altman, Karen Colorafi, Kenn B. Daratha

Publication History

02 August 2017

01 January 2018

Publication Date:
07 March 2018 (online)

Abstract

Background Hospital electronic health record (EHR) data are increasingly being called upon for research purposes, yet only recently have these data been tested for reliability. Studies examining the reliability of EHR data for research have varied widely in their methods and fields of inquiry, and the current literature reports little on the reliability of perinatal and obstetric variables.

Objective To assess the reliability of data extracted from a commercially available inpatient EHR as compared with manually abstracted data for common attributes used in obstetrical research.

Methods Data extracted through automated EHR reports for 3,250 women who delivered a live infant at a large hospital in the Pacific Northwest were compared with manual chart abstraction for the following perinatal measures: delivery method, labor induction, labor augmentation, cervical ripening, vertex presentation, and postpartum hemorrhage.

Results Almost perfect agreement was observed for all four modes of delivery (vacuum assisted: kappa = 0.92; 95% confidence interval [CI] = 0.88–0.95, forceps assisted: kappa = 0.90; 95% CI = 0.76–1.00, cesarean delivery: kappa = 0.91; 95% CI = 0.90–0.93, and spontaneous vaginal delivery: kappa = 0.91; 95% CI = 0.90–0.93). Cervical ripening demonstrated substantial agreement (kappa = 0.77; 95% CI = 0.73–0.80); labor induction (kappa = 0.65; 95% CI = 0.62–0.68) and augmentation (kappa = 0.54; 95% CI = 0.49–0.58) demonstrated moderate agreement between the two data sources. Vertex presentation (kappa = 0.35; 95% CI = 0.31–0.40) and postpartum hemorrhage (kappa = 0.21; 95% CI = 0.13–0.28) demonstrated fair agreement.
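The kappa values reported above measure agreement beyond what chance alone would produce between the automated EHR extraction and manual chart abstraction. As a minimal sketch of Cohen's kappa for a single binary measure (e.g., cesarean delivery coded 1/0 by each source) — the data here are hypothetical, not from the study:

```python
def cohens_kappa(source_a, source_b):
    """Cohen's kappa for two binary (0/1) codings of the same records."""
    assert len(source_a) == len(source_b) and source_a
    n = len(source_a)
    # observed proportion of records on which the two sources agree
    p_o = sum(a == b for a, b in zip(source_a, source_b)) / n
    # chance agreement, from each source's marginal rate of coding 1
    rate_a = sum(source_a) / n
    rate_b = sum(source_b) / n
    p_e = rate_a * rate_b + (1 - rate_a) * (1 - rate_b)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: automated EHR report vs. manual abstraction
ehr_report = [1, 1, 0, 0, 1, 0, 1, 0]
chart_abstraction = [1, 1, 0, 0, 1, 0, 0, 0]
print(cohens_kappa(ehr_report, chart_abstraction))  # → 0.75
```

Under the conventional interpretation scale used in the study, values above 0.80 indicate almost perfect agreement, 0.61–0.80 substantial, 0.41–0.60 moderate, and 0.21–0.40 fair agreement.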

Conclusion Our study demonstrates variability in the reliability of obstetrical data collected and reported through the EHR. While delivery method was satisfactorily reliable in our sample, other examined perinatal measures were less so when compared with manual chart abstraction. Using multiple modalities offers a more consistent and rigorous approach to assessing the reliability of data from EHR systems and underscores the importance of requiring validation of automated EHR data for research purposes.

Protection of Human and Animal Subjects

This study has been approved by the academic research institution's Institutional Review Board as well as by the study healthcare institution.


 