Appl Clin Inform 2021; 12(02): 208-221
DOI: 10.1055/s-0041-1723989
Research Article

U.S. COVID-19 State Government Public Dashboards: An Expert Review

Naleef Fareed
1   CATALYST—The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
2   Department of Biomedical Informatics, College of Medicine, Institute for Behavioral Medicine Research, The Ohio State University, Columbus, Ohio, United States
,
Christine M. Swoboda
1   CATALYST—The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
,
Sarah Chen
1   CATALYST—The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
,
Evelyn Potter
3   Department of Biochemistry, Ohio University, Athens, Ohio, United States
,
Danny T. Y. Wu
4   Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Cincinnati, Ohio, United States
,
Cynthia J. Sieck
1   CATALYST—The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
5   Department of Family and Community Medicine, College of Medicine, Institute for Behavioral Medicine Research, The Ohio State University, Columbus, Ohio, United States
Funding None.
 

Abstract

Background In the United States, all 50 state governments deployed publicly viewable dashboards regarding the novel coronavirus disease 2019 (COVID-19) to track and respond to the pandemic. State dashboards, however, reflect idiosyncratic design practices in their content, function, and visual design and platform. There has been little guidance on what state dashboards should look like or contain, leading to significant variation.

Objectives The primary objective of our study was to catalog how information, system function, and user interface were deployed across the COVID-19 state dashboards. Our secondary objective was to group and characterize the dashboards based on the information we collected using clustering analysis.

Methods For preliminary data collection, we developed a framework to first analyze two dashboards as a group and reach agreement on coding. We subsequently double coded the remaining 48 dashboards using the framework and reviewed the coding to reach total consensus.

Results All state dashboards included maps and graphs, most frequently line charts, bar charts, and histograms. The most represented metrics were total deaths, total cases, new cases, laboratory tests, and hospitalizations. Decisions on how metrics were aggregated and stratified varied greatly across dashboards. Overall, the dashboards were highly interactive, with 96% having at least some functionality such as tooltips, zooming, or exporting capabilities. For visual design and platform, we noted that the software was dominated by a few major organizations. Our cluster analysis yielded a six-cluster solution, and each cluster provided additional insights about how groups of states engaged in specific practices in dashboard design.

Conclusion Our study indicates that states engaged in dashboard practices that generally aligned with many of the goals set forth by the Centers for Disease Control and Prevention, Essential Public Health Services. We highlight areas where states fall short of these expectations and provide specific design recommendations to address these gaps.



Background and Significance

There is increasing demand for communication of data to help people understand public health messages, and this is particularly important during the novel coronavirus disease 2019 (COVID-19) pandemic. Data visualization methods can be used to share information across a range of stakeholders and help users interact with data, interpret data, and inform decision-making.[1] [2] [3] Growing use of data visualization among health care systems, local and state governments, and the general public has helped data visualizations become integrated into daily life. Visualizations have proliferated within public health to display data about behavioral health, chronic disease risks, availability of care, communicable diseases, and environmental risks.[1] [2]

Data visualization can be broadly defined as a purposeful use of any physical object (digital or analog) for some form of analysis of data.[1] [2] Dashboards are tools used to communicate information through interactive visualizations of critical metrics to help identify and achieve goals.[4] [5] Making data interactive through a dashboard may enhance understanding and allow users to view the data in multiple ways.[6] [7] These interactive dashboards are an efficient visualization mechanism for presenting necessary data on which relevant stakeholders can act and engage in rapid decision-making.

Data visualizations have been used during public health emergencies to track, respond to, and reflect on their impacts since as early as 1854, when John Snow used dot maps to trace cholera cases and identify the source of the outbreak,[8] and later in Edwin Jordan's use of different visualizations to describe the influenza pandemic of 1918 and its impact across the globe.[9] The shift from static visualizations to modern dashboarding applications now allows for real-time and drill-down analysis of pandemic data by stakeholders and the public. In 2009, the use of dashboarding in public health to show epidemics and pandemics was growing, but it was not ubiquitous. For example, in a 2009 systematic review of national influenza surveillance web sites by Cheng et al, 98 out of 216 countries had such web sites, but some were excluded for not having been updated in over a year.[10] In 2011, Cheng et al went on to create a framework for efficient dashboard design focusing on influenza surveillance in Hong Kong.[11] During the 2015 Ebola outbreak, a dashboard was used by the World Health Organization (WHO) to “support in-country preparedness efforts by national authorities.”[12]

COVID-19 began affecting people in China as early as December 2019, and on March 11, 2020, it was classified as a global pandemic by the WHO.[13] The spread of COVID-19 created the need to quickly communicate critical information to a range of stakeholders including policymakers, health care practitioners, and the general public.[14] It represents the first time in modern history that such a broad and pervasive public communication effort was required.[15] Online dashboards have been developed by numerous institutions and individuals to ensure that accurate, standard, and credible information is accessible to track the spread of the pandemic.[16] [17] Matheus and colleagues define dashboards, in the context of those developed by public agencies, as “the visualization of a consolidated set of data for a certain purpose which enables to see what is happening and to initiate actions.”[18] Consolidated data of this type are typically provided via data warehouses or data marts.[19]

One of the most popular COVID-19 dashboards was launched by Johns Hopkins University on January 22, 2020 “… to provide researchers, public health authorities, and the general public with a user friendly tool to track the outbreak as it unfolded.”[20] [21] Remarkably, though not surprisingly, all state governments in the United States deployed COVID-19 dashboards, reflecting the widespread need for information specific to each state's experience, the need to create response plans, and the need to inform the public of risks.[22] [23] Public interest in COVID-19 information has been strong: a Pew Research Center study conducted in March 2020 reported that 70% of Americans had searched online for information about the coronavirus.[24] The Johns Hopkins dashboard has approximately 5 billion daily user interactions.[25]

In a crisis situation like the COVID-19 pandemic, well-designed data visualizations can highlight critical information and offer a rapid and comprehensive view of the data,[26] including, for example, number of cases, hospitalizations, locations of cases, and demographics. Dashboards can display trends and reveal early warning signals to the public and decision makers.[27] However, states may have restrictions for presenting data or idiosyncratic reasons for choosing the type and style of presenting information, and these requirements may change during the course of the pandemic. In addition, while general recommendations about potential metrics for state dashboards are available (e.g., National Governors Association and the Association of State and Territorial Health Officials), there is little guidance for what state dashboards should look like or contain.[28] Political and economic concerns may motivate decisions that affect how data are displayed; for example, the presentation of test positivity rate requirements for loosening restrictions in some states.[29] [30] Some have also noted that dashboards may not adequately express risk in places with higher poverty rates and minority populations, depending on how geographic data are displayed.[31]

The Centers for Disease Control and Prevention (CDC) Essential Public Health Services (EPHS) framework describes 10 functions of public health agencies that promote equity and protect the health of communities.[32] Informational dashboards, such as those developed to respond to COVID-19, support many of these essential services. For example, the EPHS framework suggests that public health agencies should “communicate effectively to inform and educate people about health, factors that influence it, and how to improve it.” Effective communication that can educate the public about health is a central component of COVID-19 dashboards. To that end, dashboard visualization experts support consistency in the presentation of information communicated through dashboards.

The EPHS also notes the responsibility to “assess and monitor population health status, factors that influence health, and community needs and assets.” Specific to COVID-19 state dashboards, the layout, the types of graphs, the precision of language and metrics, timely access to the data, and the interpretability of the information presented influence how citizens approach their state's dashboard, interpret its content, and respond to the information. The ability for citizens to interactively analyze data on the dashboards with the help of filters, sort functions, and zooming capabilities, and to share this information, provides a greater public health value proposition for the state dashboards. This aligns with the EPHS framework's position that addressing health problems requires greater transparency and effective use of data.



Objectives

The primary objective of our study was to catalog how information, system function, and user interface were deployed across the COVID-19 state dashboards. Our secondary objective was to group and characterize the practices used by the states to develop their dashboards, based on the information we collected, using clustering analysis.



Methods

Dashboard Review Framework

We conducted an expert review of the COVID-19 state dashboards. Drawing upon the experience of three members of our team who have extensively worked on projects to develop public dashboards for the State of Ohio, and one team member with expertise in qualitative research techniques, we developed a data collection framework to assess the information provided in state-developed COVID-19 dashboards. The research team created a preliminary data collection framework that examined the content, function, and visual design and platform of each dashboard. Based on best practices identified in the literature, we applied this framework to catalog the COVID-19 dashboards available from all 50 states along three domains: (1) content, factors associated with information provided in the dashboard; (2) function, factors related to navigation and interactivity of the dashboard; and (3) visual design and platform, factors related to the design of the dashboard to effectively communicate information and its software platform.

For content, we investigated whether dashboards contained visualizations, graphs, metrics, temporality, benchmarks, stratification, statistical models, and informational elements. For function, we studied whether dashboards had interactivity, filtering, sorting, tooltips, swap and zoom functions, and sharing capabilities. Lastly, for visual design and platform, we investigated the layout of the dashboard, the use of colors, and the software/vendor of choice. A data dictionary that defines each of our subcategories is included in [Supplementary Appendix Table S1] (available in the online version).



Dashboard Data Collection Approach

All research team members initially evaluated two dashboards (Ohio and Florida) using our review framework, compared results, and modified the data collection framework to ensure clarity of the items and alignment among team members. We then divided the remaining dashboards among the research team such that two team members used the framework to examine each dashboard. Each pair met upon completion of data collection to reconcile any discrepancies and create a final dataset.

Data collection took place between June 1, 2020 and July 10, 2020. If a state had multiple dashboards, preference was given to reviewing the dashboard developed or sponsored by the state's department of public health or the equivalent agency. As part of our approach, prior to reviewing the dashboards, we defined common metrics specific to the COVID-19 pandemic such as total deaths reported, total cases reported, and others. This list was not exhaustive; if a dashboard contained a metric not on this list during the review, it was added to the dataset in a free-text field. Once data collection was complete, we categorized the free-text field into additional metric groups such as number of intubations, number of ventilators, etc. We did not count stratifications of metrics (e.g., cases by demographics or deaths by county) as different metric categories; instead, these are represented in the stratification content section. [Table 1] below provides definitions of the metrics from our original list.

Table 1

Frequently used COVID-19 metrics and Centers for Disease Control and Prevention definitions

Total deaths reported: Complete number of COVID-19 cases that resulted in death
New deaths reported: Number of COVID-19 deaths that recently occurred over a given time period (e.g., days or weeks)
Total recovered reported: Complete number of COVID-19 patients that returned to a normal state of health
Total cases reported: Complete number of cases that meet confirmatory laboratory evidence
New cases reported: Number of cases confirmed by laboratory evidence that recently occurred over a given time period (e.g., days or weeks)
Suspected and probable cases reported: Meets clinical criteria and epidemiologic evidence with no confirmatory laboratory testing performed for COVID-19
Total hospitalized reported: Complete number of people admitted to hospital for treatment of COVID-19
Intensive care unit admissions reported: Number of people admitted to the intensive care unit for treatment of COVID-19
Emergency department admissions reported: Number of people admitted to the emergency department for treatment of COVID-19
Tests or laboratory tests reported: Number of people tested for COVID-19 or total number of laboratory tests performed to diagnose COVID-19 (includes all types of tests)

Abbreviation: COVID-19, novel coronavirus disease 2019.


A dashboard was considered interactive if any components changed on performing an action, such as clicking or hovering on an element to reveal a tooltip, or clicking on part of one visualization to change the metrics displayed in another. Data that were collected were recorded on a master document, which was reviewed by all team members during frequent meetings throughout the review period. The final version of the master sheet contained counts and percentages for each dashboard characteristic across all of the dashboards.



Cluster Analysis

We used a hierarchical agglomerative clustering algorithm to group COVID-19 state dashboards on the basis of the presence or absence of similar dashboard features. We used the Jaccard index,[33] a popular distance measure for binary variables, which has been demonstrated to be a simpler yet equally effective approach in comparison to other distance measures.[34] Ward's method of cluster extraction[35] was employed based on prior literature indicating that, among the major clustering methods, this approach was the best at population recovery of clusters[36] [37] and at cluster extraction when used with binary distance measures.[34] [38] Using this algorithm, each state dashboard was initially placed in its own cluster; clusters with similar patterns of responses on the variables of interest were then successively merged before those with more divergent response patterns.
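This clustering pipeline can be sketched in a few lines of Python using SciPy. The binary feature matrix below is randomly generated for illustration and does not reproduce the paper's coded dataset; note also that applying Ward's linkage to a precomputed Jaccard distance matrix, as described, departs from Ward's usual Euclidean-distance assumption.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative binary feature matrix: rows = state dashboards,
# columns = presence/absence of coded dashboard attributes.
rng = np.random.default_rng(0)
features = rng.integers(0, 2, size=(50, 20)).astype(bool)

# Jaccard distance between each pair of dashboards (condensed form).
distances = pdist(features, metric="jaccard")

# Ward's method of hierarchical agglomerative cluster extraction.
tree = linkage(distances, method="ward")

# Cut the tree into a fixed number of clusters (six, as in the paper).
labels = fcluster(tree, t=6, criterion="maxclust")
```

Each of the 50 dashboards receives a cluster label from 1 to 6; the `tree` object can also be rendered as a dendrogram with `scipy.cluster.hierarchy.dendrogram`.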

Our final cluster solution placed each state in a unique group that minimized differences in dashboard features within each cluster and maximized differences in features between clusters. The optimal number of clusters was determined using the stopping rules proposed by the Calinski–Harabasz pseudo-F index,[39] Duda–Hart scores,[40] and inspection of the cluster dendrogram. As a post hoc analysis, we report summary statistics (cluster size [count and proportion of all dashboards] and average percentage of presence of a dashboard attribute within a cluster). We also compare clusters using state metrics for population size, COVID-19 cases, and COVID-19 deaths during the time of our review.
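Of the stopping rules mentioned, the Calinski–Harabasz pseudo-F is the most straightforward to sketch. The toy version below uses scikit-learn's implementation, which scores clusters in the raw (Euclidean) feature space rather than Jaccard space; this is a simplification relative to the paper's procedure, and the data are again randomly generated placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(1)
features = rng.integers(0, 2, size=(50, 20)).astype(float)  # toy binary data

# Build the hierarchical tree once, then evaluate candidate cuts.
tree = linkage(pdist(features, metric="jaccard"), method="ward")

scores = {}
for k in range(2, 11):
    labels = fcluster(tree, t=k, criterion="maxclust")
    if len(set(labels)) > 1:  # the score is undefined for a single cluster
        scores[k] = calinski_harabasz_score(features, labels)

# A higher pseudo-F indicates tighter, better-separated clusters.
best_k = max(scores, key=scores.get)
```

In practice this numeric rule would be weighed alongside Duda–Hart scores and visual inspection of the dendrogram, as the paper describes.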



Results

Overall Descriptive Statistics

Our team analyzed state-sponsored dashboards from all 50 states (see [Supplementary Appendix Table S2] (available in the online version) for details on states and web URLs for COVID-19 dashboards). We provide descriptive summaries based on our review related to population size, total cases, average cases, total deaths, average deaths, and first reported cases in [Supplementary Appendix Table S3] (available in the online version).

Dashboard content typically (>60%) included: statistical trend lines; aggregate and relative metrics, most commonly cases, new cases, deaths, laboratory tests, and hospitalizations; data visualized through maps and through graphs such as histograms, bar charts, and line charts; resources to guide dashboard use; stratification by race/ethnicity, geography, gender, and age; a geographical scope focused on the state and its counties; legends to interpret data; and date stamps. Functions included tool tips on graphs and maps with demographic or general metric information, interactive charts and graphs, zooming of visuals, exporting of data, date of last update, and benchmark metrics on geography within a state. From a visual design and platform perspective, the dashboards typically listed data sources, were more than one page, had metric definitions, and used descriptive labels and titles.



Hierarchical Cluster Analysis Results

Based on our inspection of the stopping rules, we arrived at a six-cluster solution (see [Fig. 1] for the dendrogram and [Figs. 2] [3] [4] [5] [6] [7] for example dashboards from clusters one to six, respectively). [Fig. 8] presents the clusters, the size of each cluster (count and proportion in reference to all dashboards), and the average percentage presence of a dashboard attribute within a cluster with the aid of a heat map. We only report notable differences among clusters when they differed from the characteristics reported in the “All” column by a threshold of 30 percentage points (i.e., the absolute difference in the percent attribute present between a cluster and all dashboards). We selected this threshold as it reflected change among approximately one-third of all states.
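The 30-percentage-point screening rule reduces to a simple computation over the attribute-presence matrix. The arrays below are randomly generated stand-ins for the coded dashboard attributes and cluster assignments.

```python
import numpy as np

rng = np.random.default_rng(2)
presence = rng.integers(0, 2, size=(50, 20))   # 0/1 attribute coding per dashboard
labels = rng.integers(1, 7, size=50)           # cluster assignment (1..6)

# Percent presence of each attribute across all dashboards (the "All" column).
overall = presence.mean(axis=0) * 100

# For each cluster, flag attributes whose prevalence differs from "All"
# by at least 30 percentage points.
notable = {}
for c in np.unique(labels):
    cluster_pct = presence[labels == c].mean(axis=0) * 100
    notable[int(c)] = np.flatnonzero(np.abs(cluster_pct - overall) >= 30)
```

The flagged attribute indices per cluster correspond to the "notable differences" reported in the cluster descriptions below.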

Fig. 1 Dendrogram of clusters based on COVID-19 state dashboard practices. Each letter represents a cluster. COVID-19, novel coronavirus disease 2019.
Fig. 2 Cluster A: example dashboard—Florida.
Fig. 3 Cluster B: example dashboard—Kansas.
Fig. 4 Cluster C: example dashboard—Delaware.
Fig. 5 Cluster D: example dashboard—Vermont.
Fig. 6 Cluster E: example dashboard—Nevada.
Fig. 7 Cluster F: example dashboard—Massachusetts.
Fig. 8 Heat map of COVID-19 state dashboard practices by cluster. We use a heat map within each domain (i.e., content, function, and visual design and software platform) to present attributes with higher percentages of presence (darker shades of a color indicate progression toward 100% presence of the attribute across all dashboards within a cluster). COVID-19, novel coronavirus disease 2019.

Cluster A accounted for 32% of the overall sample and contained characteristics similar to the overall sample. These dashboards, however, differed from the overall sample from a functional perspective, where the distinguishing feature was that all dashboards had zoom as an interactivity feature. This cluster represented the largest cluster group.

Cluster B represented 30% of the overall sample and differed mostly from a visual design and platform perspective. The dashboards used four to seven colors and were mainly built on the Tableau platform. Functionally, this cluster offered fewer zooming options.

Cluster C represented 14% of the overall sample and differed from a content and function perspective. Dashboards reported metrics at the ZIP code level and did not report metrics on suspected and probable cases, new deaths, or new cases. Stratification was not possible by race/ethnicity. Not included were benchmarking by unique metrics; line charts and tables to view metrics; interactivity components, including filtering by metrics or geography and sorting by metrics or words; and options to share and export data. The dashboards were mostly based on an Environmental Systems Research Institute (ESRI) platform and mostly contained one page with no metric definitions or data sources listed.

Cluster D accounted for 10% of the overall sample and varied mainly from the content and function perspectives. The dashboards displayed data temporally only by days and did not use tables or bar charts to visualize data, statistical trend lines, intensive care unit (ICU) admission metrics, benchmarks with unique metrics, or stratification on unique factors. In terms of function, zooming and tool tips that reported demographic information were included, but the dashboards lacked tool tip definitions, filtering options by metrics or geography, interactivity with numbers or tables, and sharing or exporting of data. From a visual design and platform perspective, the dashboards had more colors (seven or more) and were also mostly based on an ESRI platform.

Cluster E accounted for 8% of the overall sample and differed most notably from the content and function perspectives. Dashboards in this cluster reported metrics for people on ventilators and ventilator availability, used pie charts, temporally displayed data by date, stratified beyond typical demographics, and were less likely to have COVID-19 help resources. From a function perspective, the dashboards included filters for demographics, metrics, and time; interactive features such as tool tips for numbers and tables that generally provided information on demographics; sorting by metrics and words; and sharing capabilities. The cluster did not generally have tool tips with definitions, export options, or zooming. From a visual design and platform perspective, the dashboards contained one to three colors and were mostly based on the Microsoft Power BI platform.

Cluster F accounted for 6% of the overall sample. These dashboards contained filters for metrics, time, and geography. They contained interactive features on charts and graphs, with tool tips and definitions of metrics, were sortable by metrics and words, were sharable and exportable, and included zooming. From a content perspective, they reported counts of delayed care and new deaths, and offered pie charts and stacked bar graphs but were less likely to use benchmarks with unique metrics, report on laboratory tests, use race/ethnicity as a stratification, or to provide help resources on COVID-19.

Further analysis of the clusters indicated that clusters A and B represented states with larger populations. Clusters A and B also had above-average total cases and total deaths reported during our review period. Clusters A and D had above-average cases per day, and clusters A and F had above-average deaths per day, in comparison to the average values across all clusters. States belonging to cluster A also represented 50% of the top 10 states by total cases, average cases, total deaths, and average deaths. Clusters A, B, and C contained states with cases reported as early as January, while states from the other clusters had first cases reported in February or March 2020. See [Table 2] for details of this analysis.

Table 2

Descriptive statistics on clusters (numbers and averages reported June 1, 2020 to July 10, 2020[b])

Cluster A: United States census state population estimate, 2019[a] 122,038,084; total cases 667,207; average daily cases 1,043; total deaths 12,692; average daily deaths 20; first reported case January 25, 2020
Cluster B: population 106,244,998; total cases 319,031; average daily cases 532; total deaths 7,748; average daily deaths 13; first reported case January 21, 2020
Cluster C: population 33,829,807; total cases 91,006; average daily cases 325; total deaths 3,754; average daily deaths 13; first reported case January 24, 2020
Cluster D: population 39,843,880; total cases 219,132; average daily cases 1,096; total deaths 1,947; average daily deaths 10; first reported case February 12, 2020
Cluster E: population 14,292,481; total cases 45,040; average daily cases 282; total deaths 791; average daily deaths 5; first reported case March 5, 2020
Cluster F: population 11,284,524; total cases 29,672; average daily cases 247; total deaths 1,739; average daily deaths 14; first reported case February 1, 2020

a United States Census Bureau. 2019. 2019 National and State Population Estimates. Available at: https://www.census.gov/newsroom/press-kits/2019/national-state-estimates.html.

b New York Times. 2020. Nytimes/Covid-19-Data. Available at: https://github.com/nytimes/covid-19-data.




Discussion

Principal Findings

The COVID-19 pandemic created a demand for real-time information publicly available online through dashboards to inform and influence the decision-making by public health entities, city and state governments, and citizens across the 50 states. To achieve the goal of effective communication using dashboards, experts suggest dashboard web sites should have comparable hardware and software, standard user interface, data format and coding, and common and convenient approaches for data sharing.[10] [11]

We note that three dashboard vendors (ESRI, Tableau, and Microsoft Power BI) dominated the landscape for COVID-19 state dashboards and therefore imposed a degree of standardized format, particularly among dashboards in clusters B, C, D, and E. These vendors moved early during the outbreak to rapidly develop central hubs providing developers with access to dashboard templates with metrics and visualizations, sources for data to be readily incorporated into the dashboard, and general resources on responsible visualization of the data. These off-the-shelf solutions offer convenience for rapidly communicating information and allow comparison across states, but may create an a priori set of norms about what is appropriate in terms of the design, use, and influence of a COVID-19 dashboard. Because the science of COVID-19 was developing rapidly, guidance for stakeholders changed over time, perhaps decreasing the utility of a priori dashboard norms. In addition, ESRI was also the vendor for the Johns Hopkins University dashboard, which was deployed very early on and offers more extensive mapping and zooming functionalities; this may have influenced the design choices made by early dashboards.

Our cluster analysis provided additional insights about how groups of states engaged in specific and similar practices, highlighting aspects of dashboard design that can help public health officials use their dashboards as a valuable resource for their citizens. We found noteworthy practices in clusters B through F, which we highlight in the discussion below. Notably, the later timing of first cases may have given states in clusters D, E, and F more time to develop their dashboards, as suggested by their greater content and functional differences in reference to dashboards overall.

In relation to the EPHS framework, most COVID-19 state dashboards were generally aligned with the primary goal for state officials to effectively communicate information and educate people about the pandemic. For example, we found that most states used benchmarks to aid end users in better understanding and comparing performance on specific metrics such as test positivity rate. However, there was little specific guidance interpreting the implications of these metrics, which is a best practice in benchmarking.[41] As benchmarks are frequently utilized in policymaking as a tool to determine whether to impose lockdowns or open up public services, this is a critical consideration. Benchmarks alone may create a false sense of security among the public. For example, Nevada visually presents the WHO test positivity rate goal compared with the state positivity rate but does not include filters for area-level data that would allow users in counties or ZIP codes with higher positivity rates to better understand their elevated risk.



Design Recommendations for Public Health Dashboards

Based on the analysis results, we generated the following five design recommendations (DRs) for researchers and practitioners developing an effective public health dashboard. It is worth noting that designing a public health dashboard fulfilling multiple needs from laypersons, public health officials, and policymakers can be challenging. Future work should consider these recommendations and develop different versions of dashboards to address the specific needs of a target group.

DR no. 1: it provides transparent, sense-making, and usable information. Most dashboards did not provide resources to help citizens identify factors that influence their health related to the pandemic, recommendations on how to engage in healthy and preventive practices for themselves and their community, or assurance of a system that enables equitable access to services. There was also little attention placed on providing insights to citizens to identify hotspots in their neighborhoods and/or to anticipate potential outbreaks using epidemiological or statistical tools. Full transparency in data and the ability to disaggregate data to focus on, for example, historically marginalized groups was not prevalent. Interactivity of the dashboards, moreover, was also a concern with regard to its potential effect on the user's experience with the tool, which in turn could diminish the public health value proposition of using dashboards to communicate information about the pandemic.

DR no. 2: it combines geographical and temporal features. Several dashboards contained standard choropleth maps that displayed total cases by county. Experts suggest this approach to displaying viral spread masks the complex spatial patterns of the spread across ZIP codes, cities, and regions (especially those with high population densities where the likelihood of spread is high).[31] The collection and reporting of data at the ZIP code level is challenging and may not communicate actionable information about COVID-19 spread, especially if the ZIP codes cover large areas. Additional granularity of communicating data at the neighborhood or street level could be more useful for monitoring progress within communities, thus aligning with the EPHS framework to assess and monitor public health status. However, providing data at this level comes at the price of security and privacy risks to individuals and their communities, and of measurement reliability challenges (especially if rates and ratios need to be calculated with small counts).

DR no. 3: it applies machine intelligence while maintaining comprehension. Simple trend lines were common, particularly in cluster E. Trend lines can provide citizens with information to help them anticipate viral spread and assess patterns, but we noted that enhancing these lines with statistical or epidemiological models to forecast or predict the spread was rare. Such models can provide citizens with helpful decision analytics and insights based on different scenarios.[35] However, the utility of these models must be balanced against the general public's ability to fully comprehend their meaning, given the high visualization literacy required. The models may also give the public a false sense of hope, disregard important factors such as the effects of social behaviors (e.g., social distancing and masking) in the area, or report incorrect information about when the viral spread might peak.
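To illustrate the kind of simple statistical enhancement discussed above, the sketch below fits a log-linear trend to a short series of daily case counts and extrapolates it a few days ahead. The case series is invented, and this is deliberately the crudest possible model: it ignores reporting lags, behavior change, and uncertainty intervals, which is precisely why such forecasts must be communicated carefully.

```python
import math

# Illustrative sketch (not a model used by any state dashboard): fit
# log(cases) = a + b*t by least squares, then extrapolate the trend.

def fit_log_linear(cases):
    """Return (a, b) for the least-squares fit of log(cases) = a + b*t."""
    n = len(cases)
    ts = list(range(n))
    ys = [math.log(c) for c in cases]
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    b = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
         / sum((t - t_mean) ** 2 for t in ts))
    a = y_mean - b * t_mean
    return a, b

def forecast(cases, days_ahead):
    """Extrapolate the fitted exponential trend days_ahead past the series."""
    a, b = fit_log_linear(cases)
    t = len(cases) - 1 + days_ahead
    return math.exp(a + b * t)

daily_cases = [100, 112, 125, 141, 158, 178]  # invented, ~12% daily growth
projected = forecast(daily_cases, days_ahead=3)
```

A dashboard displaying such a projection would need to pair it with clear caveats, since a naive exponential extrapolation cannot anticipate a peak or the effect of interventions.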

DR no. 4: it reports standardized outcomes. Although the CDC collects extensive information from states about various metrics related to the outbreak,[42] few of these reportable measures were displayed across the dashboards. Dashboards in cluster F were more robust in how metrics were presented. Reported metrics included those for people on ventilators and ventilator availability, in addition to the other metrics typically presented. The decision to provide a wide range of metrics and ways to slice information from different perspectives comes with much controversy,[43] [44] [45] which disconcertingly may impact the acceptability, credibility, and sustainable use of these dashboards over time by the public.[46] [47] Dashboards in this cluster, in addition to providing the typical demographic drill-downs, also provided stratifications such as geography. Providing such information allows citizens to identify meaningful patterns, for example, between COVID-19 outbreaks across specific zones within a geographical space that concurrently represent poor socioeconomic or environmentally hazardous areas.[48] [49] [50] [51] Information of this nature could promote sound decision-making at the policy level and targeted allocation of resources, thereby enhancing the public value proposition of the dashboards.

DR no. 5: it creates information resources to guide behavioral changes. A critical motivation for using dashboards, as noted in the EPHS framework, is to allow for timely risk communication that facilitates disease prevention and public health interventions during public health crises and that may aid end users in taking appropriate, evidence-based actions.[18] [52] [53] [54] [55] [56] We noted that dashboards in clusters E and F did not provide COVID-19 information resources. Those dashboards that did provide such resources mostly offered recommendations to call a public health department or listed specific locations of test sites. We suggest that this is one area in which dashboards could provide additional useful information about specific behavior changes to decrease the spread of COVID-19, in alignment with the EPHS framework component of strengthening, supporting, and mobilizing communities to improve health.



Limitations

The composition of the state COVID-19 dashboards changed throughout the pandemic, and future changes can be expected. We sought to complete our data collection and reconciliation within a contained period of time to present a snapshot of the state of these dashboards. However, it is possible that aspects of a dashboard's content, functionality, and visual design and platform have changed since our analysis was done, owing to the rapidly evolving nature of the pandemic. Since July of 2020, many of the dashboards have added metrics that were not present during this evaluation and have changed the scales of their legends and temporal trends; however, the dashboards' visual characteristics and functionalities have largely stayed the same. We also recognize that states may face resource challenges in developing and maintaining dashboards, which can influence their acceptability and use among the general public. There may be variation in the ability of different states to collect data in a consistent and timely manner, especially states with significant amounts of remote or alternately governed areas such as Native American territories. Finally, this review only covered COVID-19 dashboards for the 50 states, omitting Washington DC and city and county dashboards, so that dashboards across similar levels of governance were compared.



Conclusion

Our review is one of the first of its kind to systematically assess the public health response to the COVID-19 pandemic through the use of dashboards. The results from our study can inform state governments, public stakeholders, and researchers on how dashboards are being used to communicate information about the pandemic. We find that the current approach used by states shows commonality in some of the dashboard elements of our framework, although there is also great variation across the other elements. For dashboards to be effective public health tools, they should include not only data visualizations but also informational resources and patterns that help with resource targeting and behavioral changes. More research is required to quantify these variations to inform future dashboard development.



Clinical Relevance Statement

Our review aims to assist health care leaders in making decisions that can inform the development of dashboards, and similar tools, to communicate information about pandemics to the public. Our findings provide an aggregate profile of the state COVID-19 dashboards, including the types of metrics used and how they are presented to the public. The review raises important questions about the use of standard metrics and how they are communicated to motivate potential responses to a public health crisis. The variation in state dashboards shows how health departments differ in their approach to public health messaging. While all state governments are encouraged to report on public health crises, they make choices about the specific data points and functionalities employed to have different intended effects on their audiences. It is important to evaluate and compare these dashboards to ensure that citizens' informational needs are being met by their governments.



Multiple Choice Questions

  1. What design aspects may positively influence the acceptability, credibility, and sustainable use of COVID-19 dashboards in the eyes of the public?

    • Providing data sources, providing metric definitions, and always limiting the design to one page

    • Providing data sources, providing metric definitions, and providing benchmarks for success

    • Providing data sources, providing benchmarks, and providing empirical forecast models

    • Providing data sources, providing benchmarks, and always limiting the design to one page

    Correct Answer: The correct answer is option b. Providing data sources allows the user to assess the validity and reliability of the data used, while providing metric definitions allows the user to understand the scope of what is reported. Providing benchmarks for success allows users to easily compare their progress. The other design aspects listed above offer trade-offs. Limiting a dashboard to one page makes it easier to view the information, but having multiple pages may also enable a deeper understanding of the data. Empirical forecast models offer insights based on different scenarios, but the public may not accurately comprehend their meaning due to the high visualization literacy required.

  2. What issue may arise from COVID-19 dashboards using choropleth maps at the county level to display total case counts?

    • Choropleth county level maps mask the complex spatial patterns of the spread across ZIP codes, cities, and regions

    • Choropleth county level maps are rarely used and require high visualization literacy making it difficult for users to accurately interpret the information

    • Choropleth county level maps may pose higher security and privacy risks to individuals and their communities than other level maps

    • Choropleth county level maps require complicated custom development and are not standard visualizations provided by the most popular dashboard platforms

    Correct Answer: The correct answer is option a. Choropleth county level maps displaying total case counts could mask the complex spatial patterns of the virus spread across ZIP codes, cities, and regions (especially those with high population densities where the likelihood of spread is high). ZIP code information is more granular, but the collection and reporting of data at this level is challenging. Neighborhood or street level information could be more useful to monitor progress but poses more security and privacy risks.



Conflict of Interest

None declared.

Protection of Human and Animal Subjects

Our study was exempt from Institutional Review Board review.


Supplementary Material


Address for correspondence

Naleef Fareed, PhD, MBA
460 Medical Center Drive, Columbus, OH 43210
United States   

Publication History

Received: 12 August 2020

Accepted: 11 January 2021

Article published online:
14 April 2021

© 2021. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany


Fig. 1 Dendrogram of clusters based on COVID-19 state dashboard practices. Each letter represents a cluster. COVID-19, novel coronavirus disease 2019.
Fig. 2 Cluster A: example dashboard—Florida.
Fig. 3 Cluster B: example dashboard—Kansas.
Fig. 4 Cluster C: example dashboard—Delaware.
Fig. 5 Cluster D: example dashboard—Vermont.
Fig. 6 Cluster E: example dashboard—Nevada.
Fig. 7 Cluster F: example dashboard—Massachusetts.
Fig. 8 Heat map of COVID-19 state dashboard practices by cluster. We use a heat map within each domain (i.e., content, function, and visual design and software platform) to present attributes with higher percentages of presence (darker shades of a color indicate progression toward 100% presence of the attribute across all dashboards within a cluster). COVID-19, novel coronavirus disease 2019.
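The per-cluster computation described in the Fig. 8 caption can be sketched in a few lines: for each cluster, compute the share of dashboards exhibiting each attribute, which yields the values the heat map shades. The cluster labels and attributes below are hypothetical examples, not the study's actual coded data.

```python
from collections import defaultdict

# Illustrative sketch of the Fig. 8 computation: percentage of dashboards
# within each cluster that exhibit each attribute. All data here are
# hypothetical examples, not the study's coded dashboard attributes.

dashboards = [
    {"cluster": "E", "attrs": {"trend_line", "county_map"}},
    {"cluster": "E", "attrs": {"trend_line"}},
    {"cluster": "F", "attrs": {"trend_line", "county_map", "ventilator_metrics"}},
    {"cluster": "F", "attrs": {"county_map", "ventilator_metrics"}},
]

def presence_by_cluster(dashboards, attributes):
    """Return {cluster: {attribute: fraction of dashboards with it}}."""
    counts = defaultdict(lambda: defaultdict(int))
    sizes = defaultdict(int)
    for d in dashboards:
        sizes[d["cluster"]] += 1
        for a in attributes:
            if a in d["attrs"]:
                counts[d["cluster"]][a] += 1
    return {c: {a: counts[c][a] / sizes[c] for a in attributes}
            for c in sizes}

attrs = ["trend_line", "county_map", "ventilator_metrics"]
heat = presence_by_cluster(dashboards, attrs)
# Each fraction maps to a shade: 1.0 is the darkest, 0.0 the lightest.
```

Rendering these fractions as color intensities (darker toward 1.0) reproduces the heat-map convention the caption describes.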