Appl Clin Inform 2021; 12(03): 629-636
DOI: 10.1055/s-0041-1731679
Research Article

Using Log Data to Measure Provider EHR Activity at a Cancer Center during Rapid Telemedicine Deployment

Colin Moore
1   Department of Clinical Informatics and Clinical Systems, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, United States
2   Department of Oncologic Sciences, Morsani College of Medicine, University of South Florida, Tampa, Florida, United States
Amber Valenti
3   Cerner Corporation, North Kansas City, Missouri, United States
Edmondo Robinson
2   Department of Oncologic Sciences, Morsani College of Medicine, University of South Florida, Tampa, Florida, United States
4   Office of the Chief Digital Innovation Officer, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, United States
Randa Perkins
1   Department of Clinical Informatics and Clinical Systems, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, United States
2   Department of Oncologic Sciences, Morsani College of Medicine, University of South Florida, Tampa, Florida, United States

Abstract

Objectives Accurate metrics of provider activity within the electronic health record (EHR) are critical to understanding workflow efficiency and targeting optimization initiatives. We utilized newly described, log-based core metrics at a tertiary cancer center during the rapid escalation of telemedicine that followed the onset of coronavirus disease-2019 (COVID-19) social distancing restrictions at our medical center (the COVID-19 peak). These metrics evaluate the impact on total EHR time, work outside of work, time on documentation, time on prescriptions, inbox time, teamwork for orders, and the undivided attention patients receive during an encounter. Our study aims were to evaluate the feasibility of implementing these metrics as an efficient tool for optimizing provider workflow and to track the impact on workflow for various provider groups, including physicians, advanced practice providers (APPs), and different medical divisions, during times of significant policy change in the treatment landscape.

Methods Data compilation and analysis were performed retrospectively in Tableau utilizing user and schedule data obtained from Cerner Millennium PowerChart and our internal scheduling software. We analyzed three distinct time periods: the 3 months prior to the initial COVID-19 peak, the 3 months during the peak, and the 3 months immediately post-peak.

Results Application of early COVID-19 restrictions led to a significant increase in telemedicine encounters, from a baseline of <1% up to 29.2% of all patient encounters. During the initial peak period, there was a significant increase in total EHR time, work outside of work, time on documentation, and inbox time for providers. Overall, APPs spent significantly more time in the EHR compared with physicians. All of the metrics returned to near baseline after the initial COVID-19 peak in our area.

Conclusion Our analysis showed that implementation of these core metrics is feasible and can provide an accurate representation of provider EHR workflow adjustments during periods of change, while providing a basis for cross-vendor and cross-institutional analysis.



Background and Significance

When evaluating provider efficiency within the electronic health record (EHR), clinical informaticists face numerous barriers. A significant and well-documented barrier stems from the inherent diversity of medical practice patterns throughout the United States.[1] [2] [3] With practices ranging from small rural clinics to large multiregional health centers, it is often difficult to identify a true peer medical practice to use as a benchmark when setting optimization goals, even within the same organization.[4] [5] Added to this is the complexity of the hundreds of Office of the National Coordinator for Health Information Technology (ONC)-certified health information technology developers offering diverse EHR experiences, which usually supply only vendor-specific metrics for informaticists to analyze and use in optimization efforts.[6] [7] This can be particularly true in larger subspecialty health system settings and in areas with limited peer groups within the vendor-supplied metrics.[8] Several efforts have been undertaken in recent years to standardize metrics so that they are vendor-neutral, with broad implications for research data.[9] [10]

The H. Lee Moffitt Cancer Center and Research Institute is an NCI-Designated Cancer Center whose clinical services include one primary campus, providing both ambulatory and inpatient services, as well as two satellite locations providing ambulatory, infusion, and surgical oncology services. The Clinical Informatics Department provides operational clinical informatics support to all clinical sites and helps manage the operational informatics needs of over 550 physicians and advanced practice providers (APPs). The primary EHR is Cerner Millennium PowerChart, and end-user workflow optimization has relied on analytics provided by the vendor through their proprietary platforms. These vendor-based analytics are useful for internal reference and for comparison of pre- and postimplementation workflow data, but they are restricted to the time analyses set by the vendor, which often do not match the workflow intervals the organization is interested in tracking or optimizing. At best, many of the available metrics are used as proxies for what stakeholders really want to measure. In an effort to utilize more targeted, practice-based, and standardized analytics, we created an analytics tool applying the seven core metrics proposed by Sinsky et al, the product of a collaboration of researchers and experts in working with EHR log data.[9] These metrics are outlined in [Table 1] as described in the original publication and seek to give a true picture of time spent in the EHR based on analyzed log data. The collaboration proposed these metrics to ultimately improve the patient experience by achieving insight into the practice environment, the effectiveness of teams, and the influence of policies and regulations on physician workflows.[9]

Table 1

Core measures adopted from Sinsky et al[9]

| Measure | Abbreviation | Definition and example |
| --- | --- | --- |
| Total EHR time | EHR-Time8 | Total time on EHR (during and outside of clinic sessions) per 8 hours of patient scheduled time |
| Work outside of work | WOW8 | Time on EHR outside of scheduled patient hours per 8 hours of patient scheduled time |
| Time on encounter note documentation | Note-Time8 | Hours on documentation (note writing) per 8 hours of scheduled patient time |
| Time on prescriptions | Script-Time8 | Total time on prescriptions per 8 hours of patient scheduled time |
| Time on inbox | IB-Time8 | Total time on inbox per 8 hours of patient scheduled time |
| Teamwork for orders | TWORD | The percentage of orders with team contribution |
| Undivided attention | ATTN | The amount of undivided attention patients receive from their physician, approximated by [(total time per session) − (EHR time per session)] / (total time per session) |
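
To make the normalization concrete, the following is a minimal sketch of how the per-8-hour scores above could be computed from aggregated log totals. It is illustrative only: the field names and the ProviderLogTotals structure are hypothetical (our actual compilation was performed in Tableau), and ATTN is approximated here at the aggregate level rather than per session as in the original definition.

```python
from dataclasses import dataclass

@dataclass
class ProviderLogTotals:
    """Hypothetical per-provider totals derived from EHR audit logs."""
    ehr_hours: float        # total active EHR time, in and outside clinic sessions
    wow_hours: float        # EHR time outside scheduled patient hours
    note_hours: float       # time on encounter note documentation
    inbox_hours: float      # time on inbox activities
    scheduled_hours: float  # scheduled patient-facing time

def per8(activity_hours: float, scheduled_hours: float) -> float:
    """Normalize an activity total to hours per 8 hours of scheduled patient time."""
    return 8.0 * activity_hours / scheduled_hours

def core_scores(t: ProviderLogTotals) -> dict:
    # EHR time that fell within scheduled sessions
    in_session_ehr = t.ehr_hours - t.wow_hours
    return {
        "EHR-Time8": per8(t.ehr_hours, t.scheduled_hours),
        "WOW8": per8(t.wow_hours, t.scheduled_hours),
        "Note-Time8": per8(t.note_hours, t.scheduled_hours),
        "IB-Time8": per8(t.inbox_hours, t.scheduled_hours),
        # ATTN: share of session time not spent in the EHR (aggregate approximation)
        "ATTN": (t.scheduled_hours - in_session_ehr) / t.scheduled_hours,
    }

# Example: 32 scheduled hours, 12 total EHR hours of which 6 were after hours
print(core_scores(ProviderLogTotals(12.0, 6.0, 4.0, 1.0, 32.0)))
```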

While our initial study aim was to validate these novel core measures as an efficient tool for our clinical informaticists to optimize provider workflow, our study evolved to first assess the feasibility of implementing these metrics, with a secondary goal of utilizing them to track the impact on workflow during times of significant policy change in the treatment landscape. One of the most significant drivers of workflow policy change since the inception of the EHR has been the coronavirus disease-2019 (COVID-19) pandemic.[11] [12] [13] Practices and regulatory bodies have required rapid change to allow for escalation of telemedicine to meet the needs of social distancing and of protecting at-risk populations.[14] [15] Hospital systems around the world were charged with rapidly adapting to these challenges to help control the spread of COVID-19 by limiting unnecessary in-person patient encounters, thus shifting from traditional patient care workflows to this novel format.[16] [17] Within this aim, we ensured that the data could be sorted and analyzed to determine whether the metrics highlighted differences in impact across provider groups, including physicians versus APPs and across all divisions within our hospital system.

As a hospital system charged with caring for patients with cancer, Moffitt Cancer Center had already taken significant precautions regarding infectious diseases, as many of our patients are immunocompromised. COVID-19 further elevated these concerns for our patient population due to the unknown, potentially significant complications of COVID-19 in cancer patients. To meet this challenge, our institution implemented rapid escalation of existing telemedicine services to keep our patients safe while continuing to ensure that they received the timely health care they needed. Utilizing our existing patient schedules, in-person visits were converted as needed to telemedicine video visits, conducted exclusively via the Zoom platform, while maintaining identical appointment durations.[18] As with centers across the country, we saw a significant increase in these visits in a very short period of time.[19] [20] Within 4 weeks of implementation, the volume of telemedicine encounters increased by over 5,000%.[21] While this was an extremely rapid change in our providers' workflows, it presented an opportunity to evaluate the impact of shifting to increased telemedicine utilization through the scope of the core standardized metrics proposed by Sinsky et al.[9]



Methods

Specifications for creating the seven metrics and scores were gathered from the Sinsky et al publication and were harmonized with data points from our EHR by our vendor-based data analyst. User and schedule data were obtained from Millennium PowerChart and Moffitt Cancer Center's internal scheduling software, respectively. Data compilation and analysis were designed to be performed by a single data analyst utilizing Tableau (version 2020.3.1).[22] Work effort for the creation of analytic formulas, initial data analysis, and the ongoing maintenance model was tracked for total time investment and resource utilization. We defined the time point at which full implementation of significant social distancing restrictions occurred at our medical center as the “COVID-19 peak.” To fully evaluate the impact of these COVID-19-induced restrictions on our providers' EHR efficiency, we analyzed three distinct time periods: the 3 months prior to the COVID-19 peak (December 2019–February 2020), the 3 months during peak COVID-19 impact (March 2020–May 2020), and the 3 months immediately post-peak, as providers recovered and adjusted (June 2020–August 2020). Our analysis was further delineated by a breakdown of physicians versus APPs, a comparison of all divisions within our hospital system, and telemedicine virtual visits versus in-person patient encounters. The analysis contained both ambulatory and inpatient encounters to capture all provider activity within the organization. Although many of our providers have both inpatient and outpatient care responsibilities, our institution implemented telemedicine only in the ambulatory setting. All measures are expressed as unitless scores, with a lower score indicating better efficiency for a provider. The exception is the undivided attention (ATTN) metric, which can be interpreted as the percentage of time that a provider is giving the patient their undivided attention; a higher score therefore implies a better patient experience. In the original paper by Sinsky et al, the ATTN metric was noted to be aspirational, because the difference between the total time of a session and the EHR time within that session may not accurately portray the attention a provider gives during an encounter.
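
As a concrete illustration of the period bucketing described above, the following sketch assigns a calendar day to one of the three analysis windows. It is a sketch only: the actual compilation and bucketing were performed in Tableau, and the function and names here are hypothetical.

```python
from datetime import date
from typing import Optional

# The three study periods as defined above (dates per the paper)
PERIODS = {
    "pre":    (date(2019, 12, 1), date(2020, 2, 29)),
    "during": (date(2020, 3, 1),  date(2020, 5, 31)),
    "post":   (date(2020, 6, 1),  date(2020, 8, 31)),
}

def period_of(day: date) -> Optional[str]:
    """Assign a calendar day to one of the three study periods, if any."""
    for name, (start, end) in PERIODS.items():
        if start <= day <= end:
            return name
    return None  # outside the 9-month observation window

assert period_of(date(2020, 4, 15)) == "during"
assert period_of(date(2021, 1, 1)) is None
```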



Results

Overall work effort for the project included 120 hours of data analysis and stayed within the budgeted allocation of resources for the project. Initial creation of the formulas to analyze the data and produce the scores took >90% of the time effort, while adjustments to the final analysis process and development of an ongoing maintenance model were much more efficient. The analysis reflects significant workflow changes in our patient care patterns during the COVID-19 peak, with in-person appointments decreasing from nearly 100% to 70.8% of all encounters and telemedicine appointments increasing from <1% to 29.2% ([Fig. 1]). This shift was amplified in clinical divisions that could rapidly accommodate virtual visits in their workflow, or that, as part of the early implementation group, could rapidly “scale up” their utilization, such as Endocrine Tumor, Supportive Care, and Survivorship. Many divisions depended on continued in-person visits to provide the appropriate level of care needed. Following the peak, we continued to have an elevated proportion of telemedicine visits compared with the prior baseline, with a newly established baseline average of approximately 11%.

Fig. 1 Percentage of in-person and telemedicine patient encounters pre-, during, and post-COVID-19 in our patient care area.

Results obtained for total EHR time (EHR-Time8), work outside of work (WOW8), time on documentation (Note-Time8), time on prescriptions (Script-Time8), inbox time (IB-Time8), teamwork for orders (TWORD), and the undivided attention (ATTN) patients receive during an encounter were analyzed for the three periods of pre-, during, and post-COVID-19 peak. Barriers were encountered while attempting to assess two measures, Script-Time8 and TWORD, due to limitations in our vendor-generated analytics. The scores were further delineated to compare physician versus APP workflow differences, as well as to compare various provider groups. The average scores for all analyzed measures for these groups are highlighted in [Table 2]. Upon analysis we noted differences in overall provider efficiency during the peak of COVID-19-induced restrictions in our area. Significant increases from average baseline scores were observed in EHR-Time8 (8.03–10, p < 0.001), WOW8 (6.53–8.3, p < 0.001), Note-Time8 (3.27–3.8, p < 0.01), and IB-Time8 (0.3–0.6, p < 0.01). Most notably, providers had a 25% increase in overall time spent in the EHR and a 27% increase in the amount of time spent on WOW8. The ATTN score was unaffected during the time periods analyzed. All of the metrics returned to near baseline after the initial COVID-19 peak in our area, and a summary of these shifts for all providers is found in [Fig. 2].
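
The publication does not specify which statistical test produced the reported p-values; purely as an illustration, the following sketch uses a paired t-test on hypothetical per-provider scores, one plausible way to compare matched pre- and during-peak values.

```python
from scipy import stats

# Hypothetical per-provider EHR-Time8 scores for the same providers in the
# pre- and during-peak periods (toy values, matched by provider order)
pre_scores    = [8.1, 7.6, 8.5, 7.9, 8.2]
during_scores = [10.2, 9.4, 10.8, 9.9, 10.1]

# Paired comparison of the two periods across the same providers
result = stats.ttest_rel(pre_scores, during_scores)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```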

Fig. 2 Core metric score trends pre-, during, and post-COVID-19 restrictions peak in our patient care area.
Table 2

Core metric score averages delineated by provider type and provider group for the time periods pre-, during, and post-COVID-19 peak in our patient care area

Pre-COVID-19 peak

| Provider type/group | Number of providers | EHR-Time8 | WOW8 | ATTN | Note-Time8 | IB-Time8 |
| --- | --- | --- | --- | --- | --- | --- |
| **Provider type** | | | | | | |
| APP | 215 | 23.7 | 20.8 | 0.6 | 10.1 | 0.9 |
| Physician | 232 | 3.8 | 2.7 | 0.9 | 1.4 | 0.2 |
| **Provider groups** | | | | | | |
| BMT | 55 | 43.8 | 41.9 | 0.8 | 17.2 | 1.4 |
| Breast Oncology | 32 | 5.5 | 4.0 | 0.8 | 2.4 | 0.3 |
| Cutaneous Oncology | 17 | 5.8 | 4.4 | 0.8 | 2.8 | 0.3 |
| Endocrine Tumor | 11 | 10.3 | 7.2 | 0.6 | 4.5 | 0.6 |
| GI Tumor | 38 | 6.3 | 4.8 | 0.8 | 2.3 | 0.4 |
| GU Oncology | 19 | 4.2 | 2.9 | 0.8 | 1.7 | 0.2 |
| Head and Neck Oncology | 12 | 5.4 | 3.7 | 0.8 | 2.6 | 0.3 |
| Infectious Disease | 8 | 97.8 | 94.6 | 0.6 | 44.5 | 0.9 |
| Internal Medicine | 53 | 187.6 | 185.0 | 0.7 | 61.9 | 1.3 |
| Malignant Hematology | 47 | 10.7 | 8.4 | 0.7 | 3.8 | 0.5 |
| Gynecologic Oncology | 14 | 5.5 | 4.2 | 0.8 | 2.3 | 0.2 |
| Neuro-oncology | 18 | 8.8 | 8.1 | 0.9 | 4.5 | 0.3 |
| Plastic Surgery | 5 | 0.6 | 0.4 | 1.0 | 0.1 | 0.1 |
| Pulmonology | 6 | 9.2 | 8.4 | 0.9 | 3.6 | 0.2 |
| Radiation Oncology | 36 | 3.9 | 3.2 | 0.9 | 1.8 | 0.3 |
| Sarcoma | 13 | 3.6 | 2.3 | 0.8 | 1.4 | 0.2 |
| Satellite Oncology | 11 | 9.6 | 7.2 | 0.7 | 3.3 | 0.7 |
| Senior Adult Oncology | 5 | 6.5 | 4.1 | 0.7 | 2.7 | 0.3 |
| Supportive Care Medicine | 20 | 9.9 | 8.2 | 0.8 | 5.3 | 0.2 |
| Survivorship | 6 | 5.7 | 3.4 | 0.7 | 3.2 | 0.3 |
| Thoracic Oncology | 21 | 5.9 | 4.1 | 0.8 | 2.0 | 0.3 |

During COVID-19 peak

| Provider type/group | Number of providers | EHR-Time8 | WOW8 | ATTN | Note-Time8 | IB-Time8 |
| --- | --- | --- | --- | --- | --- | --- |
| **Provider type** | | | | | | |
| APP | 223 | 26.4 | 23.3 | 0.6 | 10.6 | 1.2 |
| Physician | 235 | 4.4 | 3.2 | 0.9 | 1.6 | 0.3 |
| **Provider groups** | | | | | | |
| BMT | 57 | 46.0 | 44.0 | 0.8 | 17.4 | 1.6 |
| Breast Oncology | 32 | 6.6 | 5.0 | 0.8 | 2.6 | 0.5 |
| Cutaneous Oncology | 18 | 7.2 | 5.7 | 0.8 | 3.2 | 0.5 |
| Endocrine Tumor | 11 | 13.6 | 10.5 | 0.6 | 5.2 | 1.0 |
| GI Tumor | 40 | 7.7 | 6.0 | 0.8 | 2.6 | 0.5 |
| GU Oncology | 19 | 4.8 | 3.4 | 0.8 | 1.8 | 0.3 |
| Head and Neck Oncology | 13 | 5.2 | 3.5 | 0.8 | 2.4 | 0.3 |
| Infectious Disease | 9 | 91.8 | 88.6 | 0.6 | 39.4 | 1.3 |
| Internal Medicine | 55 | 165.9 | 162.7 | 0.6 | 54.4 | 1.7 |
| Malignant Hematology | 48 | 12.4 | 9.8 | 0.7 | 4.2 | 0.7 |
| Gynecologic Oncology | 14 | 6.0 | 4.7 | 0.8 | 2.4 | 0.3 |
| Neuro-oncology | 18 | 10.2 | 9.4 | 0.9 | 4.9 | 0.5 |
| Plastic Surgery | 5 | 0.8 | 0.5 | 1.0 | 0.2 | 0.1 |
| Pulmonology | 6 | 12.6 | 11.7 | 0.9 | 5.0 | 0.3 |
| Radiation Oncology | 36 | 4.1 | 3.4 | 0.9 | 1.8 | 0.4 |
| Sarcoma | 13 | 3.8 | 2.5 | 0.8 | 1.5 | 0.3 |
| Satellite Oncology | 12 | 10.6 | 8.4 | 0.7 | 3.7 | 0.7 |
| Senior Adult Oncology | 5 | 6.8 | 4.4 | 0.7 | 2.6 | 0.5 |
| Supportive Care Medicine | 20 | 12.1 | 10.3 | 0.8 | 6.1 | 0.4 |
| Survivorship | 6 | 6.2 | 4.3 | 0.8 | 3.4 | 0.4 |
| Thoracic Oncology | 21 | 6.7 | 4.8 | 0.8 | 2.3 | 0.4 |

Post-COVID-19 peak

| Provider type/group | Number of providers | EHR-Time8 | WOW8 | ATTN | Note-Time8 | IB-Time8 |
| --- | --- | --- | --- | --- | --- | --- |
| **Provider type** | | | | | | |
| APP | 235 | 23.9 | 20.8 | 0.6 | 9.83 | 0.97 |
| Physician | 245 | 4.0 | 2.8 | 0.9 | 1.48 | 0.23 |
| **Provider groups** | | | | | | |
| BMT | 58 | 44.5 | 42.5 | 0.7 | 17.47 | 1.36 |
| Breast Oncology | 32 | 5.9 | 4.2 | 0.8 | 2.44 | 0.34 |
| Cutaneous Oncology | 19 | 6.6 | 5.1 | 0.8 | 3.07 | 0.34 |
| Endocrine Tumor | 12 | 10.7 | 7.5 | 0.6 | 4.54 | 0.68 |
| GI Tumor | 41 | 7.4 | 5.7 | 0.8 | 2.62 | 0.48 |
| GU Oncology | 20 | 3.8 | 2.5 | 0.8 | 1.49 | 0.18 |
| Head and Neck Oncology | 14 | 5.6 | 4.0 | 0.8 | 2.74 | 0.33 |
| Infectious Disease | 9 | 80.5 | 77.3 | 0.6 | 36.96 | 1.19 |
| Internal Medicine | 62 | 115.0 | 112.1 | 0.6 | 36.75 | 1.2 |
| Malignant Hematology | 52 | 11.5 | 9.0 | 0.7 | 3.97 | 0.54 |
| Gynecologic Oncology | 14 | 5.2 | 3.8 | 0.8 | 2.19 | 0.25 |
| Neuro-oncology | 18 | 8.7 | 7.9 | 0.9 | 4.39 | 0.34 |
| Plastic Surgery | 5 | 0.6 | 0.3 | 1.0 | 0.11 | 0.06 |
| Pulmonology | 7 | 11.1 | 10.2 | 0.9 | 4.57 | 0.22 |
| Radiation Oncology | 38 | 4.0 | 3.2 | 0.9 | 1.71 | 0.36 |
| Sarcoma | 13 | 3.7 | 2.3 | 0.8 | 1.49 | 0.23 |
| Satellite Oncology | 12 | 9.8 | 7.5 | 0.7 | 3.58 | 0.63 |
| Senior Adult Oncology | 5 | 6.1 | 3.8 | 0.7 | 2.49 | 0.35 |
| Supportive Care Medicine | 21 | 10.7 | 8.9 | 0.8 | 5.56 | 0.29 |
| Survivorship | 6 | 5.4 | 3.4 | 0.8 | 2.96 | 0.25 |
| Thoracic Oncology | 22 | 6.5 | 4.6 | 0.8 | 2.3 | 0.37 |
Abbreviations: APP, advanced practice provider; ATTN, undivided attention; BMT, bone marrow transplant; EHR, electronic health record; IB-Time8, inbox time; WOW8, work outside of work.


While analyzing the overall comparison between physician and APP workflows, we detected significant differences in all five core metrics between the two groups. Physicians and APPs had significant differences in every category (p < 0.002), with APPs having higher scores in all areas except ATTN, and the largest differences noted in EHR-Time8, WOW8, and Note-Time8 ([Fig. 3]). Further analysis revealed that these differences were consistent across the three analyzed time periods, and the deviations in scores noted during the peak COVID-19 period were observed equally in physicians and APPs throughout the organization.

Fig. 3 Core metric scores comparing physicians and advanced practice providers (APP) over the entire 9-month observation period.


Discussion

There are numerous ways to enhance the patient experience in health care, one of the most significant being improving the efficiency with which clinicians, including providers, use the EHR. Analysis of EHR log data has been identified as an expanding avenue to further understand provider efficiency, allowing the field of clinical informatics to easily analyze these data and deliver optimization efforts directly to providers.[9] [10] Enhanced provider efficiency has been associated with both improved patient safety and decreased physician burnout.[3] [23] To accomplish these goals of enhanced patient safety and provider efficiency in a rapidly expanding certified EHR market, there is a critical need for standardized provider efficiency metrics that can be universally implemented across all EHRs and care sites. In our analysis we were able to apply five of the seven core metrics described by Sinsky et al in relatively rapid fashion with minimal resources. In previous evaluations, vendor-supplied metrics were the only ones that were easily accessible, and these often compare data with anonymous baseline groups or define metrics outside the scope of practice at our facility. A key example was that a provider was considered to be working “after hours” based on a hard stop of the workday at 5:00 p.m. Being able to better classify provider work efficiency, as well as standardize metrics, allows for larger cross-vendor and cross-institutional studies using provider workflow analysis.

Our data highlighted that during a time of rapid telemedicine expansion, our providers' overall time in the EHR and hours spent working after hours increased greatly. As our health care system adjusted to the workflow shifts, however, these metrics normalized back to near-baseline, underscoring the adaptability of providers faced with large increases in telemedicine utilization as the new normal. During the rapid expansion of telemedicine visits, the clinical informatics teams provided enhanced, incremental, and at-the-elbow support to our clinicians to guide them through this process as it became a larger component of their workload. Through expedited governance review discussions and reprioritization of efforts, less critical Clinical Informatics work was deprioritized so that supporting providers during this time became the team's focus. This included enhanced resources for our provider-direct Clinical Informatics support telephone line, as well as more resources available to troubleshoot and rapidly validate break-fix solutions when problems arose from the new workflows. Similar adjustments of governance to match accelerated response teams have been shown to be effective at facilities adapting to the COVID-19 pandemic.[24] [25] While not a direct goal of this study, this observation of metric normalization after major workflow transitions highlights that providers can return to their benchmarked efficiencies, given appropriate clinical informatics support, after a period of recovery.

These metrics were additionally able to highlight the disparate EHR utilization by physicians and APPs at our institution. Our institution has a robust and highly qualified cohort of specialized APPs whose skills are widely utilized in the care of our patients. Although small or subtle differences in scores between these groups could be explained by training or efficiency in using the EHR, the differences observed in nearly every category between these two groups are quite large, highlighted by an EHR-Time8 score roughly six times higher for APPs than for physicians. These data have helped quantify a true gap in EHR use burden when working in a setting where an attending is staffing a patient with the APP. Previous studies have shown similar trends of increased time in the EHR for APPs compared with physicians.[26] Increased total time spent within an EHR has been shown to be directly related to increased clinician burnout,[3] and not surprisingly this concept applies to APP burnout as well.[27] While both groups in our study had similar changes during the three observation periods, utilizing these data to understand the differing workflows of physicians and APPs can help target specific gaps in workflow to address in future optimizations. These efforts range from including APPs as key stakeholders in design optimizations to ensuring robust Clinical Informatics support for APPs, with regular analysis of their use of the EHR and direct at-the-elbow follow-up to guide more efficient use of the system.

The implementation of these metrics has allowed for an ongoing review of all provider workflows as we continue to work through this health crisis and the ever-changing digital health care landscape. As with many analytics and optimization efforts in clinical informatics, resource constraints can be a significant barrier to implementation. Our institution was able to create these metrics with the support of a vendor-based analyst and approximately 120 hours of work. The most significant time investment came at the initiation of the project, with the creation of the formulas to calculate the scores. Once the formulas were created, capturing and analyzing the data on an ongoing basis required minimal effort. Despite the initial time burden of implementing the analytics tools, the ease of future utilization made the implementation not only feasible but also exceedingly valuable to our institution. These reports can now be reviewed by informaticists, allowing improved ongoing workflow analysis without the need for resource expansion. Establishment of these metrics at our institution has also created broad-reaching future research opportunities, with easily obtained and analyzed data that can be compared across institutions and vendors in the future.

Our analysis was limited by not being able to collect the data needed to implement all seven metrics under our current system setup. Limitations regarding Script-Time8 arose from the inability to accurately define time triggers for prescriptions. In our EHR, time spent on different provider activities is delineated by Response Time Measurement System (RTMS) timers. Capturing these distinct time periods allows for separation of time spent on activities such as documentation and chart review. Our EHR is limited in that no RTMS timer trigger exists to capture and delineate time spent on prescriptions, making the calculation of this metric impossible until an improved RTMS timer trigger can be designed. Regarding TWORD, our system encountered difficulties related to the calculation of the computerized provider order entry (CPOE) percentage per user. While the system can capture the order action and whether a co-signature is required, it cannot capture single CPOE orders previously signed by a provider and placed in a “future” status, nor does it take into consideration orders from more complex order sets that providers previously signed. As these order types are major parts of the standard workflow at our institution, an accurate calculation of the metric could not be performed within the scope of the project. These barriers may be overcome in future iterations of the project as the EHR analytics evolve, but they could also be encountered at other institutions with similar workflow and RTMS limitations. There are also limitations that may be specific to a large oncology center, where providers may see patients in both an ambulatory and an inpatient setting on the same day. This can potentially skew some of the scores, such as the higher total EHR time scores we observed in groups such as bone marrow transplant and Infectious Disease. We recognize that there are limitations to the interpretation of our data, including those imposed by the nature of clinical workflows. This can be seen in the complexity of assessing true scheduling data when physicians and APPs can, and do, see patients not on their assigned schedule in the EHR. Our study was also conducted at a tertiary cancer center with large ambulatory volumes, which lent itself well to analyzing the effect of rapidly expanding virtual health visits. Due to the social distancing restrictions of the COVID-19 peak, this study could not be conducted alongside an in-person validation method, such as a time-motion study. Despite these barriers, we feel that the remaining five metrics provided key insight into the current state, as well as into changes that take place during significant workflow shifts. While this study showed the feasibility of implementing these new metrics, future studies could incorporate in-person validation, as well as direct comparison with standard vendor-supplied metrics, to more robustly analyze the value of the new core metrics in workflow analysis.
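
For illustration, the aggregation step that RTMS-style activity timers enable can be sketched as follows. The event layout and activity names here are hypothetical, not Cerner's actual RTMS schema, and, as noted above, no such timer existed in our logs for prescription time.

```python
from collections import defaultdict

def activity_hours(rtms_events):
    """Sum RTMS-style timer events into per-provider, per-activity hours.

    rtms_events: iterable of (provider_id, activity, seconds) tuples.
    """
    totals = defaultdict(float)
    for provider_id, activity, seconds in rtms_events:
        totals[(provider_id, activity)] += seconds / 3600.0  # seconds -> hours
    return dict(totals)

events = [
    ("prov_a", "documentation", 5400),  # 1.5 h writing notes
    ("prov_a", "chart_review", 1800),   # 0.5 h reviewing the chart
    ("prov_a", "documentation", 3600),  # another 1.0 h of notes
    # no "prescriptions" activity appears, mirroring the missing RTMS trigger
]
print(activity_hours(events))
# {('prov_a', 'documentation'): 2.5, ('prov_a', 'chart_review'): 0.5}
```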



Conclusion

This analysis helps further illustrate that implementation of these novel core measures is feasible and has the potential to provide a more accurate representation of provider EHR workflow issues that may arise during and after implementations or other workflow-altering events. Barriers were identified in fully incorporating all seven measures, which will need to be addressed not only with our EHR vendor-generated data but also in other EHRs, to help validate the broad-reaching implications of these metrics. Further multi-institutional implementation of these metrics will help evaluate these issues and further substantiate these core measures as a potential new gold standard in EHR provider workflow analytics, which could lead to advances in data analysis and research in the future.



Clinical Relevance Statement

Accurate metrics of provider activity within the EHR are critical to understanding workflow efficiency and targeting optimization initiatives. Implementing these novel log-based metrics can provide a more accurate and objective view of provider EHR activity and help identify workflow deficiencies and optimization targets. These metrics can be especially helpful for understanding the impact of the shift toward increased telemedicine utilization on provider EHR efficiency.



Multiple Choice Questions

  1. What was one of the most significant barriers encountered while attempting to implement these core log-based metrics?

    • Inability to delineate between provider type

    • Difficulty having specific RTMS data needed

    • Limitations in analytics software to calculate the scores

    • Provider reluctance to participate in the data capture

    Correct Answer: The correct answer is option b. Difficulty having the specific RTMS data needed. Explanation: Not all of the core metrics were analyzable during this study. Time spent on prescriptions (Script-Time8) was difficult to delineate because no RTMS time trigger could be identified in the available log data to calculate the metric. This can be a common barrier when trying to accurately track time spent on specific tasks if there are no discrete markers of the start and end of a task.

  2. What conclusion can be made regarding the metrics comparing physicians and advanced practice providers (APPs) in this study?

    • APPs overall spent less time in the EHR compared with physicians

    • APPs overall spent less time on documentation compared with physicians

    • Physicians spent less total time in the EHR compared with APPs

    • Physicians spent more time on documentation compared with APPs

    Correct Answer: The correct answer is option c. Physicians spent less total time in the EHR compared with APPs. Explanation: Analyzing metrics based on log data can highlight and quantify significant workflow gaps between groups of providers. In this study, although both physicians and APPs were affected by the changes in workflow due to COVID-19, the overall burden of EHR use was not distributed equally between these groups. Analyzing log data across various provider groups can help target specific gaps in workflow for future optimizations, including ensuring that the most burdened groups are key stakeholders in design optimizations.



Conflict of Interest

None declared.

Protection of Human and Animal Subjects

Human and/or animal subjects were not included in the project.



Address for correspondence

Colin Moore, MD
12902 Magnolia Drive, Tampa, FL 33612
United States   

Publication History

Received: 25 January 2021

Accepted: 29 May 2021

Article published online:
14 July 2021

© 2021. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

