Keywords: telemedicine and telehealth - clinical research informatics - clinical information systems - human–computer interaction - hospital information systems
Background and Significance
The term eHealth refers to the use of information and communication technologies necessary for the operability of health systems.[1] [2] [3] Some eHealth systems are aimed at medical staff for medical decision support,[4] [5] [6] while others are aimed at patients for their personal welfare.[7] [8] [9] eHealth systems, such as health information systems, are becoming more prevalent due to the rapid development of information and communication technologies[10] and have the potential to improve health care.[11] [12] [13] There have been reports on critical issues related to the successful implementation of eHealth systems,[11] including the lack of customizability and usability.[14] Increased usability may lead to increased patient safety.[11] [15] Safe and usable eHealth systems are crucial in health care because failures of the system can result in death or injury to the patients being treated.[15] The usability of health information systems has thus become an important concern worldwide[14] [16] because usability problems of eHealth systems can put patients at risk of harm.[17]
Usability is considered one of the crucial requirements of eHealth systems[18] because the usefulness of these systems to end users (medical staff or patients) is an essential component in the development of health information systems.[10] ISO 9241-11:2018 defines usability as "the extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use."[19] Usability focuses on functional aspects[20] and aims to assess the level of a system's effectiveness and efficiency.[21] Effectiveness and efficiency are part of the performance of a system.[10] Effectiveness relates to the "accuracy and completeness with which users achieve specific goals"[19] and includes, for example, the informativeness and understandability of the system.[22] Efficiency relates to "resources used in relation to the results achieved"[19] and includes, for instance, readability and reachability of the system.[22] To improve the effectiveness and efficiency of an eHealth system, usability evaluations are implemented.
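As a purely illustrative sketch, effectiveness is often operationalized as the task completion rate and efficiency as successfully completed work per unit of time; these are common conventions from general usability practice, not metrics prescribed by the cited sources, and the task data below are hypothetical:

```python
# Illustrative only: common operationalizations of ISO 9241-11 effectiveness and
# efficiency; the task outcomes and times below are hypothetical, not study data.

tasks = [
    # (completed_successfully, time_in_seconds)
    (True, 42.0),
    (True, 55.5),
    (False, 80.0),
    (True, 30.2),
]

completed = sum(1 for ok, _ in tasks if ok)
effectiveness = completed / len(tasks)      # task completion rate
total_time = sum(t for _, t in tasks)
efficiency = completed / total_time         # successfully completed tasks per second

print(f"Effectiveness (completion rate): {effectiveness:.0%}")
print(f"Efficiency: {efficiency:.3f} successful tasks per second")
```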
Usability evaluations of eHealth systems have an enormous value for patient benefit.[23] To obtain this benefit and evaluate the usability of eHealth systems, both traditionally well-established expert-based and user-based usability evaluation methods can be applied. In expert-based usability evaluation methods, evaluators inspect the usability of eHealth systems using heuristics.[24] User-based usability evaluations are utilized to observe users' interaction with software[24] and are usually realized as usability tests. Usability tests are considered a key component of user-centered design[25] for evaluating health information technology.[26] User-centered design involves the prospective end users, such as medical staff or patients, in all steps of the development process.[8] The needs of end users are also addressed by user experience, which is more broadly defined as the "user's perceptions and responses that result from the use and/or anticipated use of a system, product or service."[19] User experience considers users' perceptions while interacting with software and extends to users' feelings and emotional responses as well.[27] The requirements of eHealth systems change quickly due to customer and user needs. To address these needs and adapt quickly to these changes, efforts have been made to introduce iterative design and refinement of systems through agile software development.[16]
Agile software development enables rapid software delivery, demand for which is constantly on the rise.[28] Traditional usability evaluation methods are difficult to reconcile with agile software development,[29] as they require substantial time in preparation and implementation.[30] However, fast user feedback is crucial for eHealth systems developed using agile software development.[31] To allow the incorporation of user feedback in agile software development, there is a growing need for fast, flexibly applicable, and cost-effective usability evaluation methods.[32] This creates a need for easily applicable and useful eHealth usability evaluation methods that facilitate agile eHealth usability evaluations.
Several approaches for rapidly applicable usability evaluations that are integrated into agile software development already exist.[29] [33] [34] [35] [36] These approaches include extreme usability[37] and extremely rapid usability testing.[30] Easily applicable usability evaluation methods as such were introduced by Jakob Nielsen in the late 1980s.[38] At that time, Nielsen coined the term discount usability engineering,[38] which refers not only to usability evaluation methods that are simple to apply at low cost[31] but also focuses on rapid iteration to obtain user feedback.[38] The thinking behind discount usability is still reflected in many current approaches, including agile user experience,[39] agile user-centered design,[40] and light-weight user-centered design.[41]
The idea behind discount usability engineering perfectly fits the field of health care, where cost reductions are ubiquitous.[42] However, there are only a few discount usability engineering approaches appropriate for evaluating eHealth systems, such as low-cost rapid usability testing[14] or rapid usability evaluation.[43] Due to the high complexity of eHealth systems,[44] end users should be involved early in their development. In addition to the early integration of end users, being aware of the context in which to evaluate eHealth systems is essential.[45] Both the software lifecycle,[46] which comprises different stages (such as requirements engineering, design, and evaluation[35]), and the context[24] of eHealth affect the choice of the appropriate eHealth usability evaluation method. The context of eHealth refers to the different available types of eHealth systems, such as health information systems, electronic health records, or web sites for online patient information. A variety of evaluation methods can be considered to inspect or test usability,[47] but there are only a few easily applicable and useful eHealth usability evaluation methods suitable for agile eHealth usability evaluations in health care.
Objectives
To address the demand for easily applicable and useful eHealth usability evaluation methods that support faster usability evaluations, we systematically identified and expert-validated rapidly deployable eHealth usability evaluation methods. Our objective was to identify and prioritize eHealth usability evaluation methods suitable for agile, easily applicable, and useful eHealth usability evaluations.
Methods
The study design comprised an iterative approach contrasting expert knowledge with findings from the literature. To achieve the objectives, we set up a process that included two main steps: (1) a literature review and (2) interviews with experts.
Systematic Literature Review on eHealth Usability Evaluation Methods
Step one comprised the development of a literature-based list of eHealth usability evaluation methods. We conducted a systematic literature review within the thematic areas of eHealth, usability, and agility. In this context, we define agility as the possibility to quickly implement eHealth usability evaluations to facilitate rapid software delivery. The search questions were: (1) Which eHealth usability evaluation methods exist? (2) Which usability evaluation methods can be rapidly deployed to facilitate agile eHealth usability evaluations? These search questions guided the selection of the search terms. We combined search terms from the thematic areas of eHealth, usability, and agility ([Table 1]). We selected the following databases for the search: ACM Digital Library, IEEE Xplore, and Medline (via PubMed). To consider emergent eHealth usability evaluation methods that were not published in peer-reviewed literature, we complemented our search with reviews of gray literature from Google Scholar (first 30 pages of results). To meet the particularities of the search engine of each database, we adjusted the search terms. For the literature search in Medline (via PubMed), we used Medical Subject Heading (MeSH) terms such as telemedicine, medical informatics, user-centered design, and user-computer interface.
Table 1
Search terms used to identify eHealth usability evaluation methods

Thematic area of search: eHealth
Selected search terms: (eHealth, telemedicine, telemonitoring, telehealth, mHealth, "mobile health", "electronic health", health, "medical informatics", "clinical informatics", medical, "medical computer science", or "health information technology") AND (evaluation, framework, model, approach, process, processes, concept, testing, development, or engineering)

Thematic area of search: Usability
Selected search terms: usability, "user-centered design", "human computer interaction", or "usability testing"

Thematic area of search: Agility
Selected search terms: agile, extreme, rapid, fast, and iterative

Note: Mobile health (mHealth) represents a context of eHealth that deals with mHealth systems aimed, for instance, at patients for their personal welfare.
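As a purely illustrative sketch (the actual query strings were adapted to the particularities of each database's search engine and are not reproduced verbatim here), the terms in Table 1 could be combined into one Boolean query, with OR within each thematic area and AND across areas:

```python
# Illustrative sketch only: one plausible way to combine the Table 1 search terms.
# The real per-database queries (ACM, IEEE Xplore, PubMed, Google Scholar) differed
# in syntax and are not reproduced in this paper.

ehealth_terms = ['eHealth', 'telemedicine', 'telemonitoring', 'telehealth', 'mHealth',
                 '"mobile health"', '"electronic health"', 'health',
                 '"medical informatics"', '"clinical informatics"', 'medical',
                 '"medical computer science"', '"health information technology"']
evaluation_terms = ['evaluation', 'framework', 'model', 'approach', 'process',
                    'processes', 'concept', 'testing', 'development', 'engineering']
usability_terms = ['usability', '"user-centered design"',
                   '"human computer interaction"', '"usability testing"']
agility_terms = ['agile', 'extreme', 'rapid', 'fast', 'iterative']

def any_of(terms):
    """Join a thematic area's terms with OR and wrap them in parentheses."""
    return '(' + ' OR '.join(terms) + ')'

query = ' AND '.join([any_of(ehealth_terms), any_of(evaluation_terms),
                      any_of(usability_terms), any_of(agility_terms)])
print(query)
```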
In total, 3,981 findings from peer-reviewed and non-peer-reviewed literature were retrieved ([Fig. 1]). To be included in the review process, a paper had to contain a description of an eHealth usability evaluation method, report on agility as the possibility to quickly implement eHealth usability evaluations, and refer to the applicability of eHealth usability evaluation methods emphasizing the evaluation stage of the software lifecycle ([Table 2]). The search was limited to English-language papers published from January 2008 to June 2019. Since our study focuses on obtaining descriptions of eHealth usability evaluation methods, we excluded papers that were experience reports, conference posters, or presentations. After removing duplicates (n = 324), we downloaded all findings into the Zotero reference manager to review them for possible inclusion (n = 3,657). We analyzed the papers in a two-step process: (1) screening of the papers against the inclusion criteria based on title and abstract and (2) reading the full text of those papers that matched our inclusion criteria. We kept only papers that reported on eHealth usability evaluation methods suitable for agile, easily applicable, and useful eHealth usability evaluations. During the review process, we considered an eHealth usability evaluation method suitable if it was theoretically or practically integrated in an agile, easily applicable, and useful eHealth usability evaluation. From the included papers (n = 287), we extracted a list of 29 eHealth usability evaluation methods (see details in [Fig. 2]). This list of eHealth usability evaluation methods was complemented by an ongoing manual search in peer-reviewed journals, conducted in parallel to the interviews, to include up-to-date literature (n = 42). Further, we applied a snowballing approach and examined our selected findings for additional relevant literature.
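For clarity, the record flow reported above can be tallied as follows; this minimal arithmetic sketch uses only the numbers stated in the text, and the per-database breakdown is omitted:

```python
# Minimal arithmetic sketch of the record flow reported above
# (all numbers are taken from the text; per-database counts are not shown).

records_identified = 3981   # peer-reviewed plus gray literature findings
duplicates_removed = 324
records_screened = records_identified - duplicates_removed   # title/abstract screening
papers_included = 287       # full texts matching all inclusion criteria
methods_extracted = 29      # initial literature-based list of methods
manual_additions = 42       # papers added via the ongoing manual search in journals

assert records_screened == 3657
print(records_screened, papers_included, methods_extracted, manual_additions)
```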
Fig. 1 Flow chart of literature review according to the PRISMA statement. We searched ACM
Digital Library, Google Scholar, IEEE Xplore, and Medline (via PubMed) (ordered alphabetically).
Fig. 2 Model behind iterative development of the expert-based prioritization of eHealth
usability evaluation methods. Originating from the literature-based list of eHealth
usability evaluation methods, the iterative refinement of prioritized eHealth usability
evaluation methods is visualized.
Table 2
Inclusion and exclusion criteria of systematic literature review on eHealth usability evaluation methods

Inclusion criteria:
• Relevance to the three main thematic areas of this paper: (1) eHealth, (2) usability, and (3) agility
• Description of an eHealth usability evaluation method, i.e., a method, model, approach, process, or concept that can be rapidly deployed to facilitate rapid software delivery
• Applicability of the eHealth usability evaluation method emphasizing the evaluation stage of the software lifecycle
• Peer-reviewed as well as non-peer-reviewed papers

Exclusion criteria:
• Papers not published in English
• Papers not focusing on the evaluation of eHealth systems
• No description of existing eHealth usability evaluation methods suitable for agile, easily applicable, and useful eHealth usability evaluations
• No indicators that the eHealth usability evaluation methods can be rapidly deployed
• Experience reports, conference posters, or presentations
• Papers published before 2008
Expert-Based Iterative Validation of eHealth Usability Evaluation Methods
Step two comprised the validation of the extracted list of 29 eHealth usability evaluation methods by experts. We did this iteratively by continuously assessing and validating each eHealth usability evaluation method identified in step one with the help of interviews with 10 experts ([Fig. 2]). We performed five iterations, interviewing two different experts in each iteration. During each iteration, we (1) successively conducted two interviews with two experts and (2) related the experts' statements on eHealth usability evaluation methods to the literature identified in step one to possibly add further rapidly deployable eHealth usability evaluation methods. Contrasting the experts' statements with the literature ensured that the most recent eHealth usability evaluation methods were considered. Implementing the expert interviews iteratively was motivated by agile software development, which relies on rapid software delivery and the user feedback gathered in each iteration.
The expert interviews were finished after five iterations, as a saturation of results was already achieved during iteration four ([Fig. 3]). We defined saturation as the point at which the number of altered, newly added, recommended, and not recommended eHealth usability evaluation methods strongly decreased, which was the case from iteration four onward. During iteration four, only one eHealth usability evaluation method was recommended; in iteration five, only one eHealth usability evaluation method was newly added and recommended by experts (see also [Fig. 4]).
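A minimal sketch of this stopping rule is shown below. The per-iteration change counts are hypothetical placeholders (the real counts are visualized in [Fig. 3] and [Fig. 4]), and the cutoff value is an assumption used only for illustration:

```python
# Minimal sketch of the saturation criterion described above. The change counts are
# hypothetical placeholders, not the study's figures; the stop rule follows the
# definition in the text (stop once the number of changes per iteration drops off).

changes_per_iteration = [12, 13, 5, 1, 1]   # altered + newly added + (not) recommended
SATURATION_THRESHOLD = 2                    # assumed cutoff for "strongly decreased"

def saturation_iteration(changes, threshold):
    """Return the first 1-based iteration whose change count is at or below the threshold."""
    for i, n in enumerate(changes, start=1):
        if n <= threshold:
            return i
    return None

print(f"Saturation reached in iteration {saturation_iteration(changes_per_iteration, SATURATION_THRESHOLD)}")
# With these placeholder counts, saturation is reached in iteration 4, after which
# one confirming iteration (iteration 5) was still conducted.
```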
Fig. 3 Saturation of information content concerning experts' interviews.
Fig. 4 Iterative prioritization and refinement of eHealth usability evaluation methods displayed
for each iteration. Visualization of eHealth usability evaluation methods that were
altered, newly added, recommended, and not recommended each iteration (ordered alphabetically).
The 10 interviews with the usability experts were conducted in March and April 2020. We identified the usability experts via professional associations. For the selection of experts, we considered the following criteria: (1) a record of at least 10 years' experience in the field of usability, user experience, and/or agile software development and (2) occupation as a usability engineer, usability or user experience professional, experience consultant, user experience architect/designer, or usability, interaction, or product designer. We kept the inclusion criteria as broad as possible to avoid excluding potentially qualified experts. When approaching the experts, we clarified the topic of the interview and roughly outlined the questions we wanted to discuss. To ensure that the approached experts had expertise in working with eHealth systems and were familiar with a variety of eHealth usability evaluation methods, we informed them in advance that we wanted to obtain their opinion on rapidly deployable eHealth usability evaluation methods that facilitate agile eHealth usability evaluations. Overall, we invited 20 experts. Of these, 10 agreed to participate in the online, semi-structured interviews conducted via videoconference or by phone.
We used an interview guideline designed for a half-hour conversation consisting of
two subject areas mentioned below:
Expert's opinion on rapidly deployable eHealth usability evaluation methods suitable
for agile, easily applicable, and useful eHealth usability evaluations. Question 1:
Which rapidly deployable eHealth usability evaluation method would you recommend (or
not recommend) to conduct agile, easily applicable, and useful eHealth usability evaluations?
Question 2: How would you set up the eHealth usability evaluation? Question 3: Assuming
that it is possible to combine two or more eHealth usability evaluation methods, which
eHealth usability evaluation methods would you combine and why?
Expert's opinion on the list of 29 eHealth usability evaluation methods identified
in the literature search (step one). We shared the list with the experts visually
as a file or orally before starting or during the interview. Question: Which of these
eHealth usability evaluation methods would you recommend to rapidly evaluate an eHealth
system? Which would you not recommend? Why or why not?
We analyzed the transcribed interviews by combining inductive and deductive content
analysis. To achieve this, we used the literature-based list of eHealth usability
evaluation methods to predefine eHealth usability evaluation methods recommended and
not recommended by experts (deductive content analysis) and used the interview transcripts
to postdefine eHealth usability evaluation methods recommended and not recommended
by experts (inductive content analysis). The analysis consisted of two steps mentioned
below:
We counted how often each eHealth usability evaluation method was recommended or not recommended by the experts and documented the number of recommendations (as well as non-recommendations) for each eHealth usability evaluation method. For example, Remote User Test was recommended nine times by experts (see the sketch after these two steps).
We merged eHealth usability evaluation methods that use the same methodology. We did this for both recommended and not recommended eHealth usability evaluation methods. For example, Asynchronous Usability Testing is an automated usability test that is recorded and performed without an evaluator.[21] Unmoderated Usability Testing is likewise performed automatically without an evaluator.[48] Since these two eHealth usability evaluation methods can be regarded as equivalent, we summarized them under the term Unmoderated Usability Testing.
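To illustrate these two steps, the following minimal sketch tallies hypothetical interview codings and merges methodologically equivalent methods before counting. The vote counts shown are made up for illustration; only the method names come from the text:

```python
# Minimal sketch of the two analysis steps described above, using hypothetical
# interview codings. The per-expert votes are invented; only the method names and
# the merge of Asynchronous into Unmoderated Usability Testing come from the text.

from collections import Counter

# Methods regarded as methodologically equivalent are merged under one generic term.
SYNONYMS = {
    "Asynchronous Usability Testing": "Unmoderated Usability Testing",
}

# Hypothetical codings extracted from interview transcripts: (method, recommended?)
codings = [
    ("Remote User Test", True),
    ("Remote User Test", True),
    ("Asynchronous Usability Testing", False),
    ("Unmoderated Usability Testing", False),
    ("Questionnaires", False),
]

recommended, not_recommended = Counter(), Counter()
for method, is_recommended in codings:
    method = SYNONYMS.get(method, method)                                # merge step
    (recommended if is_recommended else not_recommended)[method] += 1   # counting step

print(recommended.most_common())
print(not_recommended.most_common())
```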
All remaining eHealth usability evaluation methods, which were neither recommended nor not recommended, and thus not commented on in more detail by the experts, were categorized as potentially useful eHealth usability evaluation methods.
Results
After the literature review and expert interviews, we arrived at a final prioritization of 10 recommended eHealth usability evaluation methods, 22 potentially useful eHealth usability evaluation methods, and 11 not recommended eHealth usability evaluation methods. Not recommended eHealth usability evaluation methods are those that the experts advised against using for rapid deployments.
We took care to have a diversity of experts' sector affiliations to gain different
views from their professional experience. Most experts (six of 10) were employed in
research and development. The remaining experts were equally employed in industry
and civil service. The median usability experience of our chosen experts was 16 years.
The most experienced expert had a usability experience of 25 years.
The interviews were recorded and transcribed, yielding 29,799 words in total. The interviews lasted a median of 34 minutes.
Based on the literature review, we extracted a list of 29 eHealth usability evaluation methods that provided the basis for the iterative prioritization of the eHealth usability evaluation methods ([Fig. 2], left). [Fig. 4] shows the iterative prioritization and refinement of the eHealth usability evaluation methods for each iteration (iterations one to five) in detail.
Iteration 1
We started the initial round with the literature-based list of 29 eHealth usability
evaluation methods. Experts stated that the combination of Cognitive Walkthrough (pretest)
and Shadowing is too cumbersome to implement and not suitable for rapid software delivery;
the combination was therefore altered and simplified to Shadowing. Due to both experts'
suggestions, we summarized Retrospective Cognitive Walkthrough and Retrospective Peer
Discovery under the generic term Retrospective Testing. We finished iteration one
with 37 eHealth usability evaluation methods, resulting from 29 eHealth usability
evaluation methods minus one eHealth usability evaluation method (because two eHealth
usability evaluation methods were summarized and simplified to Retrospective Testing)
plus nine eHealth usability evaluation methods that were newly added.
Iteration 2
Six eHealth usability evaluation methods were not recommended by experts. Due to both
experts' suggestions, the two eHealth usability evaluation methods Think Aloud and
Questionnaires were summarized into one eHealth usability evaluation method (referred
to as “Think Aloud combined with Questionnaire”). In addition, both experts suggested
the joint consideration of Card Sorting and Storyboard (referred to as “Card Sorting
as well as Storyboard”). We finished iteration two with 40 eHealth usability evaluation
methods, which resulted from 37 eHealth usability evaluation methods minus two eHealth
usability evaluation methods (due to the previously mentioned experts' suggestions
concerning the summarization of eHealth usability evaluation methods) plus five newly
added eHealth usability evaluation methods.
Iteration 3
Two eHealth usability evaluation methods (Synchronous Usability Testing and Unmoderated
Usability Testing) were newly added according to both experts' suggestions. In total,
three eHealth usability evaluation methods were not recommended. We finished iteration
three with 42 eHealth usability evaluation methods, resulting from 40 eHealth usability
evaluation methods plus two newly added eHealth usability evaluation methods.
Iteration 4
Feature Inspection was recommended by both experts, although this method originates more from the field of user experience and is used in early stages of the software lifecycle.[49] We finished iteration four with 42 eHealth usability evaluation methods because no eHealth usability evaluation methods were newly added or altered.
Iteration 5
Crowd Testing was newly added and recommended by experts because the evaluation can
be achieved “automatically under real conditions which is useful to evaluate eHealth
systems aimed at patients.” We finished iteration five with 43 eHealth usability evaluation
methods ([Fig. 5]).
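As a small arithmetic check using only the per-iteration changes stated above, the running totals across the five iterations can be reproduced as follows:

```python
# Sketch verifying the running totals reported for iterations one to five
# (all numbers are taken directly from the text).

total = 29                                  # literature-based starting list
deltas = {
    1: -1 + 9,   # two methods merged into Retrospective Testing, nine newly added
    2: -2 + 5,   # two merges (Think Aloud + Questionnaire; Card Sorting + Storyboard), five added
    3: +2,       # Synchronous and Unmoderated Usability Testing newly added
    4:  0,       # nothing newly added or altered
    5: +1,       # Crowd Testing newly added
}

for iteration, delta in deltas.items():
    total += delta
    print(f"Iteration {iteration}: {total} eHealth usability evaluation methods")

assert total == 43
```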
Fig. 5 Final prioritization of eHealth usability evaluation methods. For easier readability,
recommended eHealth usability evaluation methods are ordered by the number of experts'
choice (for more details see [Fig. 6]). The same was done for not recommended eHealth usability evaluation methods. All
other eHealth usability evaluation methods are arranged alphabetically.
Recommended eHealth Usability Evaluation Methods
The three most frequently recommended eHealth usability evaluation methods are Remote User Testing, Expert Review, and the Rapid Iterative Test and Evaluation Method ([Fig. 6]). Remote User Testing is recommended by experts due to its simplified technical framework; experts found its uncomplicated "technical environment, working at distance" beneficial. Expert Review is recommended by experts because it "is always a quick choice to accomplish a usability evaluation in health care." The Rapid Iterative Test and Evaluation Method is the third most recommended eHealth usability evaluation method because "prospective users can be quickly involved, which is an important precondition for developing user-friendly eHealth systems."
Fig. 6 eHealth usability evaluation methods according to number of experts' choice. The number originates from the documented number of experts' recommendations (as well as non-recommendations) for each eHealth usability evaluation method.
Descriptions of all 10 recommended eHealth usability evaluation methods can be found in the appendix ([Supplementary Appendix Table A], available in the online version).
Potentially Useful eHealth Usability Evaluation Methods
The experts neither recommended nor advised against Perspective-Based Inspection, Consistency Inspection, Standards Inspection, and Formal Usability Inspection as suitable for agile eHealth usability evaluations; all of these can be applied early in the software lifecycle. Descriptions of all 22 potentially useful eHealth usability evaluation methods are listed in the appendix ([Supplementary Appendix Table B], available in the online version).
Not Recommended eHealth Usability Evaluation Methods
[Fig. 6] shows that Retrospective Testing is the most frequently not recommended eHealth usability evaluation method, followed by Focus Group, Unmoderated Usability Testing, and Questionnaires. Experts do not recommend Retrospective Testing because the evaluation is done twice; they noted that this "must be more effort" and that the benefit of achieving higher quality is "dearly bought." Focus Groups are not recommended because "the effort in implementation and preparation is quite high." Unmoderated Usability Testing is not recommended because technical faults can occur during the evaluation and the evaluator is not able to intervene, which is disadvantageous for the implementation of the evaluation. Questionnaires are not recommended because a large number of test participants who are representative of prospective users are needed to gain statistically valid results. Experts stated that this "effort is simply too great." Descriptions of the 11 not recommended eHealth usability evaluation methods can be found in the appendix ([Supplementary Appendix Table C], available in the online version).
Combinations of eHealth Usability Evaluation Methods
The experts suggested that combining Remote User Testing with Think Aloud or Interview
to rapidly evaluate eHealth systems would be highly useful because insights into participants'
thought processes are difficult to gain solely from observations when tasks are performed
during eHealth usability evaluation. The experts stated that Retrospective Testing
can be combined with Think Aloud or Eye Tracking to gain deeper insights into participants'
thought processes or eye movements. However, this combination does not increase the usefulness of Retrospective Testing for agile eHealth usability evaluations because the experts agreed that Think Aloud or Eye Tracking increases the effort, especially if the evaluation is done twice. The experts suggested combining Think Aloud with
Questionnaire as a way to enhance insufficiently informative qualitative results with
quantitative results. Nevertheless, the experts do not recommend conducting Questionnaires
for agile eHealth usability evaluations since many test participants are required
to achieve reliable quantitative results.
The experts' quotes on recommended and not recommended eHealth usability evaluation methods are documented in detail in the appendixes ([Supplementary Appendix Tables A] and [C], available in the online version).
Discussion
This study aimed to identify and prioritize eHealth usability evaluation methods suitable for agile, easily applicable, and useful eHealth usability evaluations. The results show that a variety of eHealth usability evaluation methods are deployable to evaluate eHealth systems. Based on the expert interviews, we found that 10 eHealth usability evaluation methods were recommended to evaluate eHealth systems, while 11 eHealth usability evaluation methods were not recommended. A further 22 eHealth usability evaluation methods are potentially useful to evaluate eHealth systems but were not specifically commented on by the experts.
Overall, we identified 43 eHealth usability evaluation methods. The systematically identified and expert-validated eHealth usability evaluation methods are useful for rapidly evaluating eHealth systems aimed at medical staff as well as eHealth systems aimed at patients. Both usability professionals and non-usability professionals can use the systematically identified and prioritized eHealth usability evaluation methods to facilitate agile, easily applicable, and useful eHealth usability evaluations. The categorization into recommended, potentially useful, and not recommended eHealth usability evaluation methods helps usability professionals and non-usability professionals choose an appropriate eHealth usability evaluation method for conducting agile eHealth usability evaluations. This fosters usability evaluations in health care that are easy to realize and can be performed rapidly.
Some eHealth usability evaluation methods exist that are suitable for rapid evaluations addressing eHealth systems for medical decision support.[50] A prior study recommended Questionnaires as the most appropriate eHealth usability evaluation method for evaluating electronic health records, compared with Heuristic Evaluation, Cognitive Walkthrough, Usability Testing, and Remote Usability Testing.[50] Although Questionnaires are the most prevalent eHealth usability evaluation method,[23] our results showed that experts do not recommend Questionnaires for agile eHealth usability evaluations because, as one reason, the experts assessed the scoring systems of Questionnaires to be cumbersome. This finding is supported by recent research that criticized the complex scoring systems of Questionnaires[51] and showed that Questionnaires easily overlook important information about users' interpretation of information.[5] Recent research further showed that usability evaluations that can be performed automatically are not used to develop eHealth systems.[23] This is consistent with our finding that Unmoderated Usability Testing was strongly not recommended by experts. Combining rapidly deployable eHealth usability evaluation methods to enrich usability findings may be necessary to accomplish eHealth usability evaluations with medical staff quickly, since they have limited time available to them.[52] This is consistent with our findings, since the experts recommended combining eHealth usability evaluation methods to support faster eHealth usability evaluations. The experts did not recommend using Retrospective Testing combined with Think Aloud or Eye Tracking for agile eHealth usability evaluations. Recent research showed that Eye Tracking has not gained acceptance for the evaluation of mHealth systems,[23] one reason being that the recording of eye movements was found to be distracting by test participants.[53] [54] Additionally, Eye Tracking provided little additional benefit over Retrospective Think Aloud.[54] Cognitive Walkthrough was suggested by the experts as a potentially useful eHealth usability evaluation method. Research supports this suggestion, since Cognitive Walkthrough has frequently been used to evaluate eHealth systems,[55] [56] [57] although it was criticized in 2013 for the effort and time required for its implementation.[32] This led to the creation of simplified versions of Cognitive Walkthrough, such as Cognitive Jogthrough[58] or the Streamlined Cognitive Walkthrough.[32]
Limitations
Since usability research is a rapidly evolving field, we included gray literature
to incorporate emerging eHealth usability evaluation methods. We obtained more relevant
findings from Google Scholar compared with the databases ACM Digital Library, IEEE
Xplore, and Medline (via PubMed). Since Google Scholar is a metasearch engine and
takes databases from several other publishers into account, we received papers that
were not considered by the other databases, such as a paper dealing with discount
user-centered eHealth design,[42] a usability toolkit addressing the evaluation of electronic health records,[50] or a description of Cooperative Usability Testing in the field of eHealth.[59] Given the systematic literature search conducted in this study, we believe that
we have found most of the relevant papers and then focused on including further relevant
papers by using a snowballing approach. One limitation of our study, however, is that
we are not able to confirm this with absolute certainty. We included papers that emphasize
the evaluation stage of the software lifecycle. Nevertheless, we did not explicitly
restrict the literature search to the evaluation stage because there are eHealth usability
evaluation methods, such as Cognitive Walkthrough, that can be applied in different
stages of the software lifecycle such as evaluation, requirements engineering, and
design ([Supplementary Appendix Table B], available in the online version). One further limitation of this study is that
the literature search was predominantly performed and interpreted by the first author.
To avoid bias, the results were frequently discussed with the second author. Based
on the literature review, we extracted a list of 29 eHealth usability evaluation methods.
We performed the expert-based prioritization iteratively to continuously assess and
validate each eHealth usability evaluation method. Due to our chosen iterative approach,
results of one iteration affected the results of the subsequent iteration. This may
have an impact on the final prioritization of recommended and not recommended eHealth usability evaluation methods, which represents a limitation of our study. We performed five iterations with interviews including two different experts in each iteration. The interviews were limited to around half an hour due to the experts' full schedules. We addressed this limitation by relating the experts' statements on eHealth usability evaluation methods to the literature (identified in step one) to possibly add further rapidly deployable eHealth usability evaluation methods. As a saturation of results was already achieved during
iteration number four, we finished the expert interviews after five iterations. The
generalization of the experts' suggestions from this study is difficult because the
choice of an appropriate eHealth usability evaluation method is also affected by aspects
such as the context of eHealth and stage of software lifecycle.
Future Work
Further research is needed to support the selection of an appropriate eHealth usability
evaluation method regarding the context of eHealth. We are currently developing a
decision tree to address this need. Based on the results of this study, we aim to
develop a toolbox consisting of the prioritized eHealth usability evaluation methods.
There are existing toolboxes describing usability evaluation methods[50] [60] [61]; however, our intended toolbox will address the systematic identification and evidence-based prioritization of eHealth usability evaluation methods suitable for agile eHealth usability evaluations, extended with information on the strengths and weaknesses of
each eHealth usability evaluation method. Some of the systematically identified and
prioritized eHealth usability evaluation methods, such as Assessing Cognitive Workload[62] and Cooperative Usability Testing,[63] were theoretically conceived but have not yet been practically applied in eHealth.
This study therefore suggests that there is an increasing need to share knowledge
and to identify eHealth usability evaluation methods that were applied and tested
practically. Future work of this study will continue by investigating whether the
systematically identified and expert-validated eHealth usability evaluation methods
can be used to evaluate eHealth systems quickly and easily in health care. For this purpose, the implementation of a case study is planned as a next step.
Conclusion
We conducted a systematic review and expert validation to identify rapidly deployable eHealth usability evaluation methods. The systematic identification and evidence-based prioritization of eHealth usability evaluation methods support faster eHealth usability evaluations and thus contribute to the ease of use of emerging eHealth systems. We aim to provide a toolbox consisting of the eHealth usability evaluation methods identified in this study. Future work will comprise the development of a toolbox that includes further information on the eHealth usability evaluation methods presented in the tables in the appendixes ([Supplementary Appendix Tables A] to [C], available in the online version). To achieve this, we aim to set up method cards for each eHealth usability evaluation method indicating connections and similarities between different eHealth usability evaluation methods.
Clinical Relevance Statement
This study offers an expert-validated prioritization of eHealth usability evaluation
methods that can be used to quickly evaluate eHealth systems. Medical staff with different
professional backgrounds (e.g., medical computer scientists or health care professionals)
can utilize the systematically identified and prioritized eHealth usability evaluation
methods to perform an agile eHealth usability evaluation. The evidence-based prioritization
into 10 recommended eHealth usability evaluation methods, 22 potentially useful eHealth
usability evaluation methods, and 11 not recommended eHealth usability evaluation
methods provides an indication as to which eHealth usability evaluation method is
suitable for conducting an agile eHealth usability evaluation.
Multiple Choice Questions
What do the systematically identified and prioritized eHealth usability evaluation methods focus on?

a. Large-scale usability evaluations aimed at evaluating safety-critical eHealth systems.
b. eHealth user experience methods that can be especially applied at early stages of the software lifecycle.
c. Rapidly deployable eHealth usability evaluation methods to support faster usability evaluations.
d. None of the above.
Correct Answer: The correct answer is option c. This study offers systematically identified and prioritized
rapidly deployable eHealth usability evaluation methods to foster usability evaluations
in health care that are easy to realize and can be performed quickly.
For the evaluation of which medical device or software can the expert-validated prioritization of eHealth usability evaluation methods be utilized?

a. Surgical robots supporting operations on patients.
b. eHealth systems, for instance in clinical practice.
c. Serious games for mental health disorders.
d. None of the above.
Correct Answer: The correct answer is option b. The prioritized rapidly deployable eHealth usability
evaluation methods can be used to evaluate eHealth systems that are the object of
rapid software delivery, such as health information systems, electronic health records,
or web sites for online patient information.