CC BY-NC-ND 4.0 · Appl Clin Inform 2022; 13(01): 315-321
DOI: 10.1055/s-0042-1743241
State of the Art/Best Practice Paper

Building a Learning Health System: Creating an Analytical Workflow for Evidence Generation to Inform Institutional Clinical Care Guidelines

Dev Dash*
1   Department of Medicine, Stanford University School of Medicine, Stanford, California, United States
,
Arjun Gokhale*
1   Department of Medicine, Stanford University School of Medicine, Stanford, California, United States
,
Birju S. Patel
2   Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California, United States
,
Alison Callahan
2   Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California, United States
,
Jose Posada
1   Department of Medicine, Stanford University School of Medicine, Stanford, California, United States
,
Gomathi Krishnan
2   Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California, United States
,
William Collins
1   Department of Medicine, Stanford University School of Medicine, Stanford, California, United States
,
Ron Li
1   Department of Medicine, Stanford University School of Medicine, Stanford, California, United States
,
Kevin Schulman
1   Department of Medicine, Stanford University School of Medicine, Stanford, California, United States
,
Lily Ren
1   Department of Medicine, Stanford University School of Medicine, Stanford, California, United States
,
Nigam H. Shah
2   Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California, United States

Abstract

Background One key aspect of a learning health system (LHS) is utilizing data generated during care delivery to inform clinical care. However, institutional guidelines that utilize observational data are rare and take months to create, making current processes impractical for more urgent scenarios such as those posed by the COVID-19 pandemic. There is a need to rapidly analyze institutional data to drive guideline creation where evidence from randomized controlled trials is unavailable.

Objectives This article reviews the current state of observational data use in institutional guideline creation and details our institution's experience in creating a novel workflow to (1) demonstrate the value of such a workflow, (2) present a real-world example, and (3) discuss difficulties encountered and future directions.

Methods Utilizing a multidisciplinary team of database specialists, clinicians, and informaticists, we created a workflow to identify a clinical need, translate it into a queryable format in our clinical data warehouse, create data summaries, and feed this information back into institutional guideline creation.

Results Clinical questions posed by the hospital medicine division were answered in a rapid time frame and informed the creation of institutional guidelines for the care of patients with COVID-19. Setting up the workflow, answering the questions, and producing data summaries required approximately 300 hours of effort and $300,000 USD.

Conclusion A key component of an LHS is the ability to learn from data generated during care delivery. Such examples are rare in the literature; we demonstrate one, along with our thoughts on how an ideal multidisciplinary team may be formed and deployed.



Background and Significance

In 2015, the National Academy of Medicine set a goal that, by 2020, 90% of clinical decisions would be supported by accurate, timely, and up-to-date clinical information reflecting the best available evidence.[1] While randomized controlled trials (RCTs) remain the gold standard for producing clinical evidence, only 65% of clinical decisions, and as few as 14% in some medical disciplines, are supported by RCTs, and the scope of these decisions is limited by cost and external validity.[2] [3] [4] In fact, the percentage of recommendations supported by RCTs has decreased in fields like cardiology over the prior decade, and many of these guidelines are derived from expert opinion.[5] [6] There is a clear need for other evidence sources to support guideline generation.

Learning health systems (LHSs) may bridge this evidence gap by creating the infrastructure and processes to leverage data created through care delivery.[7] The National Academy of Medicine defined an LHS as a system where “science, informatics, incentives, and culture are aligned for continuous improvement and innovation, with best practices seamlessly embedded in the delivery process and new knowledge captured as an integral by-product of the delivery experience.”[8] [9] Though definitions of an LHS vary, a core tenet is the ability to analyze care delivery and learn from institutional data. Given the increase in the number of hospitals with a certified electronic health record (EHR), learning from a health system's own care delivery and visualizing findings in a meaningful manner should be possible.[10] [11] With the assistance of a librarian, a literature search was conducted in PubMed to locate relevant English-language literature published between January 2010 and November 2021, using keywords related to “hospital guideline” and “development.” Of the 89 articles returned regarding how institutional guidelines are created, we found that most relied on expert opinion and review of the literature. Only two articles discussed the use of observational data to inform the creation of institutional guidelines. Both related to antibiotic stewardship and were developed over time frames that would have limited utility in more urgent situations.[12] [13] Therefore, while the concept of an LHS has generated significant interest, there are no examples of wide-scale use of observational data that address a range of disciplines and are intended to persist to continuously inform institutional care guidelines. The ability to learn from observational health data remains challenging across health systems, and analyzing these data to generate usable evidence requires significant effort.[14] [15] We have previously demonstrated the ability to answer questions prompted during individual patient care.[16] In 2019, our institution led the world's first pilot of an informatics consultation service using routinely collected data on millions of individuals to provide on-demand evidence not addressed by primary literature or other data sources.[17] [18] [19] Although this need was first identified over a decade ago,[20] we are aware of only a few health systems and organizations, such as Kaiser Permanente, Geisinger, the Food and Drug Administration Sentinel Initiative, the Mental Health Research Network, and the Vaccine Safety Datalink, that have been able to translate foreground questions into insights.[21] While these efforts were remarkable in their translation of observational data into clinical insights, a novel contribution of our work is the short time frame from identification of a need to delivery of data that guide the design of practice guidelines.[16] [19]

The COVID-19 pandemic presented unprecedented challenges, and while some systems were able to rapidly implement surveillance mechanisms,[22] the pandemic exposed equipment and personnel capacity limitations as well as a shortfall of systems able to perform rapid analyses of observational data.[15] On a local level, in early January 2021 at Stanford Hospital, a 605-bed quaternary care hospital located in Stanford, California, the general medicine service appeared at risk of being overwhelmed by new cases of COVID-19. The hospital medicine division sought to standardize care and to rapidly create and disseminate guidelines for the inpatient management of these patients, leveraging data that had already been generated during the care of COVID-19 patients.

In the current work, we describe the creation of an interdisciplinary team, anchored by clinical informatics fellows, that used existing institutional infrastructure[18] [19] [23] to generate timely and actionable insights from available data to support the design of institutional management guidelines for COVID-19 patients.

We describe the process and infrastructure, as well as the evidence generated to inform hospital medicine division guidelines for the care of COVID-19 patients, as an example of what an LHS can accomplish. We conclude with a discussion of how institutions can leverage their existing fellowship programs, informatics teams, and data science expertise to jump-start LHS efforts.



Methods

The workflow and methodology of our institution's previous informatics consultation service were repurposed to answer questions for near real-time clinical decision support at a population level for continuous guideline evolution, as shown in [Fig. 1].[19] This was accomplished by a multidisciplinary team consisting of practicing clinicians, electronic medical record reporting specialists, data scientists, and clinical informaticians (CIs). Briefly, the workflow consisted of specifying clinical questions, translating the details of those questions into patient cohort definitions that were then executed as queries against our clinical data warehouse (CDW), and conducting statistical analyses appropriate to the question(s) over the retrieved patient data. The question, cohort definitions, analyses, and their results were then summarized as written reports and discussed with the requesting team in an in-person debrief. Details of the workflow, data retrieval, and analysis tools developed for this service are further described in our previous work.[19]

Fig. 1 This figure illustrates the iterative cycles that can be performed to create and continuously update institutional guidelines. The process starts with a clinical need that arises from an institutional concern (e.g., resource constraints at our institution during the COVID-19 pandemic). This concern is then translated into a PICO-T question as described in the Methods section. Next, report writers query the clinical data warehouses and this information is validated by clinical informaticians. Once the data are validated, data scientists create report summaries and the evidence is presented to the initiators of the request to inform clinical guideline creation. As a clinical need evolves over time, this process can undergo further iterative cycles to refine guidelines and adapt to a changing environment.
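To make the workflow concrete, the following minimal sketch (Python, with an in-memory SQLite database standing in for the CDW) traces these steps for a question similar to those in the Results; the table, columns, and data are hypothetical placeholders rather than our actual schema or findings.

```python
# Minimal sketch of the consult workflow: PICO question -> cohort query ->
# summary for the written report. All names and data are placeholders.
import sqlite3
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    population: str
    intervention: str
    comparator: str
    outcome: str

# Step 1: formalize the clinical question in PICO form.
question = PicoQuestion(
    population="COVID-19 inpatients discharged alive",
    intervention="discharged with home oxygen",
    comparator="discharged on room air",
    outcome="ED revisit within 30 days",
)

# Step 2: stand-in for the CDW (a real query would target Clarity/STARR-OMOP).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE discharges (person_id INT, home_oxygen INT, ed_revisit_30d INT);
INSERT INTO discharges VALUES (1,1,0),(2,1,1),(3,0,0),(4,0,0),(5,0,1);
""")

# Step 3: analyze and summarize for the written report and debrief.
print(question.outcome)
for label, flag in (("home oxygen", 1), ("room air", 0)):
    n, events = conn.execute(
        "SELECT COUNT(*), SUM(ed_revisit_30d) FROM discharges WHERE home_oxygen = ?",
        (flag,),
    ).fetchone()
    print(f"  {label}: {events}/{n} ({100 * events / n:.1f}%)")
```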

Analytical Process Need Identification

Practicing clinicians from the hospital medicine division (end-users) identified clinical questions with practice-changing implications to inform the creation of guidelines for the management of patients with COVID-19. This operational request arose from the need to mitigate a predicted inpatient surge of COVID-19 patients, with a requested turnaround of less than 2 weeks for the results. An analytical process was created to streamline this work.



Analytical Team Creation

The aforementioned team was assembled by an associate chief information officer (CIO; N.H.S.), and the databases containing sufficient information for the study were identified. Epic Clarity and STARR-OMOP (Stanford Research Repository-Observational Medical Outcomes Partnership), a separate CDW in which the Epic Clarity schema has been converted to the OMOP common data model (CDM), were deemed to contain the necessary clinical information; this informed the staffing of EHR database specialists with expertise in those databases. Data scientists with an understanding of data integrity issues and experience with clinical research and communication were also required to summarize findings in an actionable manner. CIs with an understanding of clinical workflows and the clinical data model were needed to identify where key data elements were generated and stored, as well as to perform data validation.



Analytical Process in Use

Practicing clinicians posed questions, which were then formalized into Population/Intervention/Comparator/Outcome (PICO) questions by a data science and CI team. EHR-based phenotyping was performed by this team to define the cohorts of interest. Patient cohorts were created by the data scientists using the Observational Health Data Sciences and Informatics (OHDSI) ATLAS Cohort Creation Tool,[24] an open-source, web-based clinical data search tool designed for use with the OMOP CDM and operating over Stanford Medicine's CDW, STARR.[25]
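As an illustration of this step, the sketch below hand-writes the kind of cohort-entry SQL that an ATLAS definition compiles to over the OMOP CDM measurement table; the concept IDs and rows are hypothetical placeholders, not the identifiers used in our phenotypes.

```python
# Sketch of a test-result-based cohort entry query over an OMOP-style
# measurement table; concept IDs and data are hypothetical placeholders.
import sqlite3

COVID_TEST_CONCEPT_IDS = (706170, 706173)  # placeholder SARS-CoV-2 test concepts
DETECTED_CONCEPT_ID = 9191                 # placeholder "detected" result concept

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE measurement (
    person_id INT,
    measurement_concept_id INT,
    value_as_concept_id INT,
    measurement_date TEXT
);
INSERT INTO measurement VALUES
    (1, 706170, 9191, '2021-01-05'),
    (2, 706173, 9190, '2021-01-06'),  -- result not detected
    (3, 706170, 9191, '2021-01-07');
""")

# Cohort entry event: first "detected" SARS-CoV-2 result per person.
placeholders = ",".join("?" * len(COVID_TEST_CONCEPT_IDS))
sql = f"""
    SELECT person_id, MIN(measurement_date) AS cohort_start_date
    FROM measurement
    WHERE measurement_concept_id IN ({placeholders})
      AND value_as_concept_id = ?
    GROUP BY person_id
"""
params = (*COVID_TEST_CONCEPT_IDS, DETECTED_CONCEPT_ID)
print(conn.execute(sql, params).fetchall())  # expect persons 1 and 3
```

In practice, ATLAS generated and executed the equivalent SQL against STARR-OMOP, so analysts rarely wrote such queries by hand.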

With the cohorts defined, the two databases were accessed separately and an iterative process of data validation was performed by the clinical informaticists in conjunction with data scientists and database specialists. During this process, EHR database specialists, data scientists, and CIs focused on creating cohort definitions informed by knowledge of the clinical workflow and of where data were stored in the Clarity database. This was followed by sanity checks on cohort generation and troubleshooting of issues that affected the sensitivity and specificity of the defined phenotypes. As an example, there were multiple ways to define ventilated patients, such as O2 delivery method, documentation in ventilator flowsheet rows, addition of a new airway record, and signing of a new ventilator setting order. Troubleshooting via database queries and verification of the retrieved data revealed that initiation of a new ventilator setting order was the most accurate of these definitions. Other findings from this process included the retirement of an old C-reactive protein (CRP) laboratory record in use prior to COVID-19, redefinition of the “COVID positive” phenotype to use COVID-19 nasal swab test positivity rather than International Classification of Diseases (ICD) codes, exclusion of patients with “do not intubate” orders from the denominator of potential intubation events, and expansion of criteria to find adult patients who had been accidentally registered under pediatric testing departments. The final phenotypes are included in [Supplementary Appendix A] (available in the online version) for reproducibility. Once the cohorts were properly defined, we ascertained the location of salient pieces of information within the EHR through synchronous meetings between CIs and database specialists. In our cohort, defining COVID-19 positivity as a SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) test result of “detected” was more specific than the application of a COVID-19 ICD code. Other features within the EHR, such as D-dimer, CRP, supplemental oxygen information, and ventilator usage, required iterative meetings with the database specialists to ensure accuracy of cohort generation. Data scientists then conducted analyses to answer the questions received and, in conjunction with CIs, summarized the results through written and oral reports.
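A minimal sketch of such a sanity check, comparing candidate phenotype definitions against a chart-reviewed sample, follows; all patient IDs and labels are invented for illustration.

```python
# Sketch of phenotype validation: compare candidate "ventilated" definitions
# against a chart-reviewed sample; all patient IDs here are invented.
chart_positive = {1, 2, 3, 4}          # ventilated per manual chart review
chart_negative = {5, 6, 7, 8, 9, 10}   # not ventilated per chart review

candidates = {
    "O2 delivery method":        {1, 2, 5, 6},
    "ventilator flowsheet rows": {1, 2, 3, 6},
    "new airway record":         {2, 3, 4, 7},
    "new ventilator order":      {1, 2, 3, 4},  # most accurate at our site
}

for name, flagged in candidates.items():
    tp = len(flagged & chart_positive)   # true positives
    fp = len(flagged & chart_negative)   # false positives
    sensitivity = tp / len(chart_positive)
    specificity = 1 - fp / len(chart_negative)
    print(f"{name:27s} sens={sensitivity:.2f} spec={specificity:.2f}")
```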



Result Dissemination

Finally, debrief sessions were scheduled to perform a warm handoff of the reports to end-user clinicians, who then utilized them in creating and refining clinical care guidelines. We used this approach to answer four clinical questions posed by the hospital medicine division pertinent to the management of COVID-19 patients. The results were then disseminated via a presentation at hospital grand rounds and electronically via email.



Results

We were able to successfully answer all four questions, with answers summarized here for brevity and detailed in a full PICO format in [Supplementary Appendix A] (available in the online version).

  • Question 1: How often are COVID-19 patients with high oxygen requirements transferred to the ICU (intensive care unit)? How often are they placed on mechanical ventilation?

  • Answer 1: 46% of COVID-19 patients with high oxygen requirements were transferred to the ICU and of those, 31% were intubated and placed on mechanical ventilation.

  • Question 2: Compared with patients who were not discharged on supplemental home oxygen, are patients discharged with home oxygen orders more likely to be readmitted to the ED (emergency department) within 30 days?

  • Answer 2: 4.4% of patients discharged with supplemental home oxygen compared with 4.8% of patients discharged without supplemental home oxygen were readmitted to the ED within 30 days.

  • Question 3: Do COVID-19 patients with an elevated D-dimer at time of admission have a higher rate of VTE (venous thromboembolism) or transfer to the ICU compared with those with normal D-dimer levels at the time of admission?

  • Answer 3: 32% of patients with an elevated D-dimer test result were transferred to the ICU or had a record of VTE compared with 15.9% of patients with a normal D-dimer test result.

  • Question 4: Do COVID-19 patients with an elevated CRP level at the time of admission have a higher rate of transfer to the ICU compared with those with normal CRP levels at the time of admission?

  • Answer 4: 28% of patients with an abnormal CRP test result were transferred to the ICU compared with 28.2% of patients with a normal CRP test result.

The answers to these questions informed institutional guideline creation, helped alleviate concerns about resource constraints, and informed possible measures in the setting of high levels of ICU utilization. In particular, data demonstrating that patients discharged with home oxygen had 30-day ED readmission rates similar to those of patients discharged on room air alleviated concerns about needing to increase patient length of stay until resolution of an oxygen requirement. Observing that 14.3% of patients with significant oxygen requirements progressed to intubation (46% transferred to the ICU, of whom 31% were intubated) helped address concerns about being able to predict ICU demand and reassured clinicians that a significant proportion of these patients could be monitored on the general medical service if needed. Finally, the potential prognostic value of an admission D-dimer for ICU transfer and VTE helped inform the development of screening guidelines for VTE on the COVID-19 service. The cohorts were not of sufficient size to enable subgroup analyses by age, sex, race, or ethnicity.

Answering the questions using our repurposed informatics consult service required over 300 hours of effort from a multidisciplinary team. These hours included performing an intake of the clinical questions, refining the questions into a PICO format, performing data integrity checks, and creating data summaries. The hours were then converted into a dollar amount using standard salary rates. After accounting for the fixed costs of creating the information technology, informatics, and data science infrastructure, the total approximate cost amounted to $300,000 USD for the four reports.
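To make the accounting concrete, the sketch below shows one hypothetical decomposition of these figures; the per-role hour splits and fully loaded rates are assumptions chosen only so that the totals land near the reported 300 hours, $300,000 USD total, and the roughly $20,000 USD marginal cost per report discussed later.

```python
# Hypothetical decomposition of the reported effort and cost; the hour splits
# and fully loaded rates below are assumptions, not actual payroll figures.
hours = {"clinical informaticist": 120, "data scientist": 90,
         "EHR database specialist": 60, "clinical end-user": 30}
rates = {"clinical informaticist": 250.0, "data scientist": 280.0,
         "EHR database specialist": 250.0, "clinical end-user": 300.0}

labor = sum(hours[role] * rates[role] for role in hours)   # ~$79,200
fixed = 300_000 - labor                                    # infrastructure share
print(f"total hours:          {sum(hours.values())}")      # 300
print(f"labor (4 reports):    ${labor:,.0f}")
print(f"fixed infrastructure: ${fixed:,.0f}")
print(f"marginal cost/report: ${labor / 4:,.0f}")          # ~$19,800
```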



Discussion

Currently, institutional clinical management guidelines rely on a combination of literature review and expert opinion. Health care systems have the requisite data, expertise, and staffing to rapidly generate evidence from local institutional data for clinical questions that are unaddressed in traditional literature sources. The urgent nature of the above questions, posed at a time when it appeared COVID-19 cases might overwhelm our hospital, highlights the importance of creating institutional infrastructure to rapidly generate insights from observational data. Such infrastructure must include data warehouses, a multidisciplinary team, and an analytical process. We utilized such a process to answer questions about oxygen therapy, VTE risk, and ICU transfer risk, demonstrating the feasibility of rapidly generating data summaries from observational data to support institutional guidelines.

The four key questions were identified and answered within a 2-week time frame to help inform guidelines for the care of patients with COVID-19 on the hospital medicine service; these guidelines would have supported triage decisions had the predicted COVID-19 surge occurred. Our efforts to define and allocate the resources needed to operationalize this analytic process help inform future planning. Although we presented one disease process and four associated questions, similar inquiries can be generalized to clinical questions regarding management across all medical specialties. With proper resources and an analytic process in place, and given sufficient patient-level data, it should be possible to individualize recommendations as more homogeneous cohorts are created through subsetting.[26]

This effort also demonstrated the value of an interdisciplinary team in answering clinically oriented questions using EHR data, and the role that board-certified CIs can play on such teams. While each team member had a specific role at different points in the process, continuous collaboration among all was critical in refining questions and adjusting to issues presented by the data. Informaticians played a particularly important role in ensuring that the populations and interventions of interest were appropriately defined so that the EHR specialists and data scientists could accurately build patient cohorts. Broad questions of interest to clinicians, such as “Can patients be discharged with home oxygen safely?”, needed to be translated into detailed population, intervention, and outcome definitions by CIs; accordingly, collaboration was critical in building patient cohorts. For example, consensus and compromise between the data science team and clinicians were needed to define an effective but not overly complicated phenotype for an intubation event. Collaboration between the groups also led to the discovery of key data integrity issues, such as intermittent registration of adult patients who initially appeared under pediatric emergency medicine due to legacy system holdovers, inclusion of surgical patients without COVID-19 who had been given COVID-19 encounter diagnoses as part of surgical prescreening, and exclusion of a D-dimer test that had been retired in the middle of the study period because it was no longer among the currently available laboratory tests. Rectifying these issues led to significant changes in cohort sizes. Additionally, initial reports of the intubation rate for patients with a high oxygen requirement were lower than expected by clinical suspicion, leading the clinical team to review patient charts and find that patients with do-not-intubate orders had not been excluded from the initial cohort. These data validation steps would not have been possible without knowledge of the inner workings of our hospital's clinical workflows, data scientists with a high degree of familiarity with the EHR, and the involvement of clinicians who had cared for the patient cohort of interest. Although the particular application was a singular use case, it expands our previously described informatics consult service from generating personalized real-world evidence to using these data to support clinical care guidelines.[18] While this effort did not focus on iterating within a specific disease, it serves as a proof of concept, and the underlying strategies are fundamental to iterative cycles that continuously provide evidence within our institution and will inform future clinical care guideline creation across diseases. Additionally, in the event of a changing institutional concern related to COVID-19, the created infrastructure can be easily adapted to support evolving clinical care guidelines.

The process, as detailed in the Methods section, relied on resources such as the ATLAS cohort analysis tool and the STARR-OMOP database, as well as human resources. Indeed, database management and assessment have been highlighted as critical in the development of an LHS.[15] To create and deliver these four reports in less than 2 weeks, over 300 hours of effort from a multidisciplinary team comprising 11 members, including clinical end-users, clinical informaticists, data scientists, and EHR data specialists, was required, at a total estimated cost of $300,000 USD. While there were large initial costs in team formation and defining an effective workflow, the marginal cost of producing additional reports is likely close to $20,000 USD. This cost also does not take into account the research and development necessary to create and maintain the tools and databases, which are supported via the office of the senior associate dean of research. The majority of the effort was spent on steps upstream from data analysis, such as population definition, data extraction, and data validation, suggesting that to improve the cost-effectiveness of future projects, effort should be spent on supporting diverse and complex data extraction processes and on ensuring the integrity of the data sources. By comparison, a 2014 estimate put the cost of running an RCT from phase 1 through phase 3 at $285 million USD over 6.4 years.[27]

There were multiple ways in which our process could have been streamlined to help reduce costs and turnaround time. An ideal LHS analytic process starts with high-level institutional support, especially to protect time for personnel such as specific EHR/database professionals and clinicians and to financially support the creation and maintenance of data warehouses. Clinical questions asked by end-users should be presented in a PICO format to provide clarity regarding the specific ask. A clear point of contact, ideally a CI, is helpful for refining these clinical questions. Additionally, a clear turnaround time and the rationale for the PICO question from the clinical team will help minimize scope creep. After this initial intake, the PICO question is further refined by CIs along with data scientists, who then interface with database experts in an iterative manner for optimal phenotyping and real-time data validation. An understanding of which data items are present in which databases is key to knowing what questions can be answered. Having multiple teams query different databases can help with verifying data fidelity as well as containing costs.[28] The data validation steps are laborious and not easy to streamline, but they are critical for uptake of the final results and best done by CIs who have a clear understanding of the workflow and data model. Random sampling of patient charts and comparison of intermediate results pulled from relational databases against operations-facing databases (e.g., ones used by business intelligence) are helpful at this phase, as illustrated in the sketch below. Once the data validation steps are complete, the data science team can generate statistical reports and visualizations for the final report. Deadlines should be set ahead of time for the aforementioned steps, and dissemination of the final report should be tailored to institutional culture, take advantage of existing communication workflows, and be consistent across reports. This step is as important as the others and must be operationalized to overcome the inertia of deploying well-validated clinical knowledge into clinical practice, a process historically thought of in years-long time frames.[29] This should be done by local leaders, in particular CIs. Having been involved throughout the entire workflow, CIs are well suited to this final task; accordingly, this role for practicing physicians has been highlighted at other systems.[30]
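A minimal sketch of these two validation tactics, cross-database count reconciliation and random chart sampling, follows; the database names are ours, but every count and patient ID is an invented placeholder.

```python
# Sketch of two validation tactics: reconciling intermediate cohort counts
# across databases and drawing a random chart-review sample. All counts and
# IDs are invented placeholders.
import random

clarity_counts = {"covid_positive": 1042, "high_o2": 212, "intubated": 30}
omop_counts    = {"covid_positive": 1038, "high_o2": 212, "intubated": 24}

TOLERANCE = 0.02  # flag discrepancies above 2% for informaticist review
for cohort in clarity_counts:
    a, b = clarity_counts[cohort], omop_counts[cohort]
    status = "FLAG" if abs(a - b) / max(a, b) > TOLERANCE else "ok"
    print(f"{status:4s} {cohort}: Clarity={a} OMOP={b}")

# Reproducible random sample of cohort members for manual chart review.
cohort_ids = list(range(1, 1043))
random.seed(42)
print("chart-review sample:", sorted(random.sample(cohort_ids, 10)))
```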

Data generated during care delivery can be used in a variety of ways, including quality metric reporting, clinical guideline creation, and answering questions about specific patients. For health systems interested in leveraging their existing data to rapidly generate evidence, we freely share our tools and lessons learned.[19] [23] In our case, we are fortunate that a surge of COVID-19 patients did not occur, but we were well equipped in the event that it did, in part because of the insights derived by our COVID-19 data science task force.



Conclusion

While the merits of using observational data for evidence generation are often debated, there is a clear argument for utilizing available data to inform questions that are unaddressed in the literature and need to be answered urgently.[15] While efforts exist to answer patient-level questions,[18] [21] an LHS also needs to be able to rapidly answer operational questions. The management questions posed by the COVID-19 pandemic are one such example. Our experience shows that, by utilizing interdisciplinary teams, judicious use of observational data can support the rapid creation of evidence-based guidelines. Through a high degree of collaboration, data scientists and clinicians can validate the utility of a clinical question, define the populations and interventions of interest, validate the integrity of the data, and define analytic methods that protect against misleading results. Given the interdisciplinary nature of this analytic process, CIs are well suited to lead such endeavors.



Clinical Relevance Statement

A learning health system should be able to provide actionable insights through analysis of available observational data. An institution-supported multidisciplinary team can execute the stages of our proposed analytic process to produce evidence that can be incorporated into guidelines. The creation of such a process will likely require significant upfront investment but can provide value to the health care system through insight generation and improved patient outcomes.



Multiple Choice Questions

  1. According to AHRQ (Agency for Healthcare Research and Quality), most LHS activities should be classified as:

    • a. Research requiring an institutional review board (IRB).

    • b. Operations outside IRB oversight.

    • c. Conduits for industry exposure to health data.

    • d. Patient-facing comparison tools.

    Correct Answer: The correct answer is option b. According to AHRQ, most LHS activities should be classified as “normal operations” that do not require the oversight of an IRB. Evidence-generating activities through an LHS should thus not ordinarily be called “research.”[14]

  2. The concept of a learning health system was initially introduced by:

    • a. National Health and Nutrition Examination Survey (NHANES), through the Centers for Disease Control and Prevention (CDC), in 1971.

    • b. Alliance for Health Policy and Systems Research (AHPSR), through the World Health Organization (WHO), in 1999.

    • c. IOM/NAM in 2007.

    • d. Commonwealth Fund, in 2012.

    Correct Answer: The correct answer is option c. The results of a 2-day workshop in 2006 were published as a summary in 2007 by the Institute of Medicine, now known as the National Academy of Medicine.[1] Real-time digital access to health data, empowered patients, aligned incentives, and a proper organizational culture are all necessary components of a functioning LHS. Further reports from the IOM/NAM delineate rising costs, complexity, and economic and quality barriers to the widespread development of LHSs.



Conflict of Interest

N.H.S. is a co-founder of Atropos Health. A.C. is an advisor to Atropos Health. None of the other authors have conflicts of interest to declare.

Acknowledgments

We would like to thank Stanford Research IT for providing material support as well as the Office of the CIO for institutional support of this initiative.

Protection of Human and Animal Subjects

This manuscript discusses the development of an analytic process and reports quality assurance results, which do not constitute human subjects research.


Author Contributions

N.H.S. oversaw the study. R.L., K.S., and W.C. helped formulate clinical questions and update practice guidelines. G.K. and J.P. performed database queries. B.S.P. and A.C. performed data analysis and summarization. L.R. devised the literature search strategy. D.D. and A.G. performed data validation.


* These authors contributed equally to this work.


References

  • 1 Institute of Medicine. Integrating Research and Practice: Health System Leaders Working Toward High-Value Care: Workshop Summary. Washington, DC: The National Academies Press; 2015
  • 2 Van Spall HGC, Toren A, Kiss A, Fowler RA. Eligibility criteria of randomized controlled trials published in high-impact general medical journals: a systematic sampling review. JAMA 2007; 297 (11) 1233-1240
  • 3 Kennedy-Martin T, Curtis S, Faries D, Robinson S, Johnston J. A literature review on the representativeness of randomized controlled trial samples and implications for the external validity of trial results. Trials 2015; 16 (01) 495
  • 4 Ebell MH, Sokol R, Lee A, Simons C, Early J. How good is the evidence to support primary care practice? Evid Based Med 2017; 22 (03) 88-92
  • 5 Tricoci P, Allen JM, Kramer JM, Califf RM, Smith Jr SC. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 2009; 301 (08) 831-841
  • 6 Fanaroff AC, Califf RM, Windecker S, Smith Jr SC, Lopes RD. Levels of evidence supporting American College of Cardiology/American Heart Association and European Society of Cardiology Guidelines, 2008-2018. JAMA 2019; 321 (11) 1069-1080
  • 7 Stewart WF, Shah NR, Selna MJ, Paulus RA, Walker JM. Bridging the inferential gap: the electronic health record and clinical evidence. Health Aff (Millwood) 2007; 26 (02, Suppl 1): w181-w191
  • 8 Friedman C, Rubin J, Brown J, et al. Toward a science of learning systems: a research agenda for the high-functioning learning health system. J Am Med Inform Assoc 2015; 22 (01) 43-50
  • 9 Institute of Medicine. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: The National Academies Press; 2013
  • 10 HealthIT.gov. Non-federal acute care hospital electronic health record adoption. Accessed August 17, 2021 at: https://www.healthit.gov/data/quickstats/non-federal-acute-care-hospital-electronic-health-record-adoption
  • 11 Calzoni L, Clermont G, Cooper GF, Visweswaran S, Hochheiser H. Graphical presentations of clinical data in a learning electronic medical record. Appl Clin Inform 2020; 11 (04) 680-691
  • 12 Nimmich EB, Bookstaver PB, Kohn J, et al. Development of institutional guidelines for management of gram-negative bloodstream infections: incorporating local evidence. Hosp Pharm 2017; 52 (10) 691-697
  • 13 Wong-Beringer A, Nguyen LH, Lee M, Shriner KA, Pallares J. An antimicrobial stewardship program with a focus on reducing fluoroquinolone overuse. Pharmacotherapy 2009; 29 (06) 736-743
  • 14 AHRQ. How learning health systems learn: lessons from the field. Accessed May 10, 2021 at: http://www.ahrq.gov/learning-health-systems/how-lhs-learn.html
  • 15 McGinnis JM, Fineberg HV, Dzau VJ. Advancing the learning health system. N Engl J Med 2021; 385 (01) 1-5
  • 16 Frankovich J, Longhurst CA, Sutherland SM. Evidence-based medicine in the EMR era. N Engl J Med 2011; 365 (19) 1758-1759
  • 17 Longhurst CA, Harrington RA, Shah NH. A ‘green button’ for using aggregate patient data at the point of care. Health Aff (Millwood) 2014; 33 (07) 1229-1235
  • 18 Gombar S, Callahan A, Califf R, Harrington R, Shah NH. It is time to learn from patients like mine. NPJ Digit Med 2019; 2 (01) 16
  • 19 Callahan A, Gombar S, Cahan EM, et al. Using aggregate patient data at the bedside via an on-demand consultation service. NEJM Catal 2021; DOI: 10.1056/CAT.21.0224
  • 20 Institute of Medicine. Digital Infrastructure for the Learning Health System: The Foundation for Continuous Improvement in Health and Health Care: Workshop Series Summary. Washington, DC: The National Academies Press; 2011
  • 21 Ostropolets A, Zachariah P, Ryan P, Chen R, Hripcsak G. Data consult service: can we use observational data to address immediate clinical needs? J Am Med Inform Assoc 2021; 28 (10) 2139-2146
  • 22 Knighton AJ, Ranade-Kharkar P, Brunisholz KD, et al. Rapid implementation of a complex, multimodal technology response to COVID-19 at an integrated community-based health care system. Appl Clin Inform 2020; 11 (05) 825-838
  • 23 Datta S, Posada J, Olson G, et al. A new paradigm for accelerating clinical data science at Stanford Medicine
  • 24 OHDSI. Observational Health Data Sciences and Informatics GitHub Library. Accessed August 18, 2021 at: https://github.com/OHDSI/
  • 25 STARR OMOP | Observational Medical Outcomes Partnership | Stanford Medicine. Accessed August 17, 2021 at: https://med.stanford.edu/starr-omop.html
  • 26 Wongvibulsin S, Zeger SL. Enabling individualised health in learning healthcare systems. BMJ Evid Based Med 2020; 25 (04) 125-129
  • 27 Sertkaya A, Birkenbach A, Berlind A, Eyraud J. Examination of Clinical Trial Costs and Barriers for Drug Development: Report to the Assistant Secretary for Planning and Evaluation (ASPE). Washington, DC: Department of Health and Human Services; 2014
  • 28 Sendak MP, Balu S, Schulman KA. Barriers to achieving economies of scale in analysis of EHR data. A cautionary tale. Appl Clin Inform 2017; 8 (03) 826-831
  • 29 Friedman CP, Wong AK, Blumenthal D. Achieving a nationwide learning health system. Sci Transl Med 2010; 2 (57) 57cm29
  • 30 Gould MK, Sharp AL, Nguyen HQ, et al. Embedded research in the learning healthcare system: ongoing challenges and recommendations for researchers, clinicians, and health system leaders. J Gen Intern Med 2020; 35 (12) 3675-3680

Address for correspondence

Dev Dash, MD, MPH
Department of Medicine, Stanford University School of Medicine
Stanford, CA 94305-5119
United States

Publication History

Received: 27 September 2021

Accepted: 06 January 2022

Article published online:
02 March 2022

© 2022. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

