Summary
Objectives: Evidence-based clinical scores are frequently used in clinical practice, but data
collection and data entry can be time-consuming and hinder their use. We investigated
the programmability of 168 common clinical calculators for automation within electronic
health records.
Methods: We manually reviewed and categorized variables from 168 clinical calculators as being
extractable from structured data, unstructured data, or both. Advanced data retrieval
methods from unstructured data sources were tabulated for diagnoses, non-laboratory
test results, clinical history, and examination findings.
Results: We identified 534 unique variables, of which 203/534 (38.0%) were extractable from
structured data and 269/534 (50.4%) were potentially extractable using advanced
techniques. Nearly half (265/534, 49.6%) of all variables were not retrievable. Only
26/168 (15.5%) scores were completely programmable using structured data alone, and
43/168 (25.6%) were potentially programmable using widely available advanced information
retrieval techniques. Scores relying on clinical examination findings or clinical
judgments were most often not completely programmable.
Conclusion: Complete automation is not possible for most clinical scores because of the high
prevalence of variables requiring clinical examination findings or clinical judgment;
partial automation is the most that can be achieved. The effect of fully or partially automated score
calculation on clinical efficiency and clinical guideline adherence requires further
study.
Citation: Aakre C, Dziadzko M, Keegan MT, Herasevich V. Automating clinical score calculation
within the electronic health record: A feasibility assessment. Appl Clin Inform 2017;
8: 369–380 https://doi.org/10.4338/ACI-2016-09-RA-0149
Keywords
Automation - decision support algorithm - clinical score - knowledge translation -
workflow - clinical practice guideline