
DOI: 10.1055/s-0042-1754011
Design and Integration of Mobile Health Technology in the Treatment of Orthopaedic Surgery: A Qualitative Study
Abstract
Background The use of mobile health (mHealth) technologies has dramatically increased in the past year. A critical component in the discussion about telehealth and mHealth technologies is the importance of integrating the voices of patients, caregivers, and their clinicians.
Methods This study was performed at a tertiary center in Houston consisting of 7 hospitals (1 academic and 6 community hospitals). The clinically integrated mHealth technology consisted of an mHealth education and monitoring platform that used patient-centered emails and text messages over a 50-day period, from prior to the orthopaedic total joint replacement surgery to posthospital discharge, to provide education and health monitoring at home. Study participants included patients who were scheduled for total joint replacement surgery between July 2018 and November 2019, and their caregivers. The study involved two components: (1) a focus group study (n = 15, split into two sessions) of participants who had not used the mHealth technology (α-testing during the design phase, prior to implementation); and (2) a content analysis of 377 free-text comments from patients who used the mHealth technology and who responded to questions about their use of the mHealth platform (β-testing, after implementation, during the execution phase). Thematic analysis methods were used.
Results Three key themes emerged during the design phase: (1) monitoring, bidirectional questions asking patients to respond to a question can feel invasive and/or annoying unless framed in a reciprocal, contextual-based way; (2) text messages should be used selectively for time-sensitive, critical information; and (3) information should be contained within the body of the message. Three themes emerged during the execution phase: (1) content should be divided into small, digestible chunks delivered at the times that patients need that information; (2) the tone of the messages should be approachable and friendly, as opposed to detached and professional; and (3) mHealth technologies make patients calmer, more confident, and less inclined to draw on hospital personnel, enabling patients to be managed by the automated program without escalating to human care. Limited, bidirectional engagement can foster interactivity and patient monitoring without becoming excessive or burdensome to health care professionals.
Conclusion In this multihospital mHealth technology study of patients undergoing orthopaedic surgery, the use of mHealth for patient care is likely to be more effective, and more likely to be used, if the technology is clinically integrated with staff who can respond to escalated problems as needed, enabling better adoption, uptake, and sustainability of the technology.
Keywords
mHealth technology - mHealth interventions - patient-facing technologies - patient-centered care - patient experience - patient engagement - patient activation - effectiveness - quality improvement - patient safety
Introduction
Patient-centered communication and engagement are integral components of health care.[1] [2] Care that is codesigned around patient needs is associated with better compliance, value, and outcomes.[3] [4] [5] Patient-facing mobile health (mHealth) technologies are one mechanism to facilitate patient-centered communication and meaningful engagement.[6] [7] [8]
Yet, a primary means of maximizing patient engagement—treating patients' experiences as an essential element for value cocreation during mHealth development and execution—is often lacking.[9] [10] [11] Most mHealth research only engages patients in the beginning phases during agenda-setting and protocol development.[12] [13] [14]
To better understand patient and caregiver perspectives during the design and implementation phases, we used our (then) under-development education and monitoring mHealth technology (CareSense, Inc., www.caresense.com), which sends texts and emails to patients and their caregivers over a 50-day period before and after their orthopaedic joint replacement surgery.[15] Continuity of care is essential in ensuring safe and high-quality care outcomes.[16] [17] [18] [19] [20] A greater understanding is needed about how patients' beliefs and perspectives can influence their mHealth choices and experiences, and help guide the redesign of mHealth efforts, yielding greater patient engagement, patient activation, and ultimately better health outcomes.[4] [18] [20] In this study, we aimed to elicit patient and caregiver preferences across the design and execution phases of a clinical service technology redesign program to better understand their decision needs, communication needs, and preferences.
Methods
Setting and Participants
The study was conducted at Houston Methodist Hospital System, a 2,264-bed tertiary academic medical center located in Houston, Texas, along with six affiliated community hospitals (300–700 beds) in the suburbs. This study was approved by the Hospital System's institutional review board.
We describe the codevelopment with patients of the CareSense technology, which leveraged text and electronic mail (email) messages to monitor and inform patients before and after total joint replacement (TJR) surgery. We focused on this patient population because total joint procedures are standardized and elective, a combination that makes them ideal for leveraging a pre- and postoperative education and monitoring platform.
To develop the education messages within CareSense, a taskforce of clinicians and administrators involved in TJR surgeries (i.e., orthopaedic surgeons, nurses, nurse practitioners, physician assistants, medical assistants, and hospital administrators) drafted and edited the messages. The messages were written to: (1) provide patient education about wellness and safety; (2) monitor health and recovery; (3) provide key service reminders for needed actions or medications; and (4) manage and support the ongoing resolution of patients' action-items. [Table 1] provides exemplar questions for all four purposes.
The study consisted of two components: (1) a focus group study of participants (n = 15; separated into two sessions) who were not using CareSense, conducted with moderator guides (α-testing during the design process, prior to implementation), and (2) a content analysis of 377 free-text comments from 377 patients who were using CareSense and who responded to questions asking them to reflect on their use (β-testing; after implementation, during the revision process).
The patients were recruited by hospital volunteers who were tasked with identifying patients and caregivers willing to participate in a codesign study process. The volunteers were not clinicians, were unpaid, and were not affiliated with the clinical team or CareSense, to minimize bias.
The volunteers reviewed the patient orthopaedic surgery census each week, with the goal of identifying patients (or caregivers of patients) who either had undergone a TJR or who were about to undergo TJR prior to CareSense implementation. Patients and caregivers had to be English-speaking and have above an 8th grade education level as a proxy for adequate literacy.[21] [22] [23] We excluded non-English-speakers because the patient messages were written only in English, and the interviewers only spoke English.
Volunteers used convenience sampling to identify potential participants based on the patient census, which involved sampling of our patient mix to obtain a wide distribution of cases and experiences.[23] [24] Volunteers called prospective participants to solicit their willingness and/or ability to participate in the study.
The first participant was enrolled in August 2018, and the last patient follow-up was in December 2019.
Design Phase
Focus Group Sessions
We conducted focus group sessions, using a semistructured moderator guide, to assess patient and caregiver information and decision needs during the design phase. The moderator guides were developed and pilot-tested based on the researchers' prior knowledge of domains and areas of interest, a literature review, and expert opinion. During pilot testing, all domains and question items were reviewed and ratified by five clinical experts who were members of the research team and two clinicians who were not part of the research team.[24] We selected a focus group methodology because we wanted participants to describe their experiences and expectations in real time, and we felt that the interaction among different users and stakeholders would allow for a richer exploration of themes important for the execution phase.
During the focus groups, we used a series of drafted email and text messages, presented on paper, that we intended to use as the intervention during the execution phase. The purpose of reviewing the draft email and text messages with the focus group participants in the design phase was to assess their preferences regarding the usability, formatting, and accessibility of these communication modalities, in keeping with the principles of usability testing.
The two focus group interviews were conducted with patients and caregivers. There were 7 participants in the first focus group and 8 participants in the second focus group. The focus group interviews were led by an experienced moderator, with one observer who took field notes and added prompts. At the end of each focus group, the moderator summarized the information and allowed participants to reflect and comment on the accuracy and validity of this summary.[24] [25] [26] The interviews were audio-recorded and transcribed according to a standardized format. Information obtained during the focus groups was used to revise the text messages and emails used during the intervention phase.
Execution Phase
After revising the messages based on feedback collected during the focus groups, the survey builders employed by CareSense required 3 months to build and test the revised intervention that was used during the execution phase. The clinical teams drafted all messages that patients received during the execution phase.
During the execution phase, patients were enrolled by hospital schedulers who asked all English-speaking patients undergoing select orthopaedic surgeries whether they would be willing to receive messages, and, if so, the scheduler used the electronic medical record to activate the care pathway.
The enrolled patients received either text messages or email messages inviting their participation, and they could accept or decline with no bearing on their surgical care. Generally, time-sensitive, short messages were sent via text message, and longer educational messages were sent via email to their personal email accounts. When patients did not have text-message capabilities, the text messages were converted to automated phone calls. One or two text or email messages were sent each day during the 20 days prior to a scheduled orthopaedic surgery; messages were paused while the patient was admitted to the hospital and resumed at a rate of one or two per day for 30 days following hospital discharge. We define the full sequence of the clinical intervention supported by the technology platform and messages throughout the 50-day period as "The Pathway."
The messages were unidirectional or bidirectional. All participants in the execution phase received the same unidirectional and bidirectional messages. Unidirectional messages were for educational or informational purposes only and were not intended to solicit patients' responses. The bidirectional messages, on the other hand, were designed to solicit patient responses using close-ended questions and response options ([Table 1]; [Fig. 1]). The bidirectional messages allowed clinicians to monitor patients' health and recovery or, alternatively, to ensure that the patient completed important action-items such as daily ambulation goals prior to or following their surgery.
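To make the scheduling and channel rules described above concrete, the sketch below models them in Python. It is illustrative only and does not represent CareSense's implementation; the class and function names (Patient, Message, pathway_day_active, select_channel) are hypothetical, and the rules are taken from the description above: messages run from 20 days before surgery to 30 days after discharge, pause during the hospital stay, go by text when time-sensitive (falling back to an automated call when texting is unavailable), and go by email otherwise.

```python
from dataclasses import dataclass
from enum import Enum

class Channel(Enum):
    TEXT = "text"
    EMAIL = "email"
    PHONE = "automated_phone_call"

@dataclass
class Patient:
    has_text_capability: bool
    admitted: bool  # currently admitted to the hospital

@dataclass
class Message:
    body: str
    time_sensitive: bool   # short, action-oriented content
    bidirectional: bool    # solicits a close-ended response

def pathway_day_active(day: int, patient: Patient) -> bool:
    """Day 0 = surgery/discharge anchor. Messages run from 20 days before
    surgery to 30 days after discharge and pause while the patient is admitted."""
    if patient.admitted:
        return False
    return -20 <= day <= 30

def select_channel(msg: Message, patient: Patient) -> Channel:
    """Time-sensitive items go by text (or automated call if the patient
    cannot receive texts); longer educational content goes by email."""
    if msg.time_sensitive:
        return Channel.TEXT if patient.has_text_capability else Channel.PHONE
    return Channel.EMAIL

# Example: a time-sensitive reminder sent 3 days before surgery to a patient
# without text capability is delivered as an automated phone call.
patient = Patient(has_text_capability=False, admitted=False)
reminder = Message("Reminder: complete your preadmission testing.",
                   time_sensitive=True, bidirectional=False)
if pathway_day_active(-3, patient):
    print(select_channel(reminder, patient))  # Channel.PHONE
```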


Clinical Escalation
In situations where patients responded to a bidirectional message in a manner that raised clinical concerns, an alert was automatically generated and routed via email to the appropriate health care professionals ([Fig. 1]).
Clinicians were expected to call patients within 24 hours about urgent clinical questions or needs involving pain issues, unusual or heavy bleeding or swelling, or signs of infection. For example, one bidirectional message read: "Would you like to speak to a nurse about any questions or concerns you have? Press 1 for yes, Press 2 for no." A response of 1 sent an alert to the health care team (i.e., medical assistants or nurses), letting them know that a patient had responded to a monitoring question in a concerning manner that required their response ([Fig. 1]). At the same time that the alert was routed to the health care team, patients were sent a message telling them whom to call, and at what number, if they did not hear back from the surgeon's office within 24 hours.
The decision logic for the bidirectional questions was written by the mHealth technology company's survey builders, at the direction of the hospital-employed lead editor who wrote the email and text messages. The alert rules were established jointly by the hospital staff who wrote the messages and the survey builders.
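The sketch below illustrates the kind of escalation rule described in this subsection: a concerning answer to a bidirectional question triggers an email alert to the care team and a simultaneous message telling the patient what to do if no one calls back within 24 hours. It is a minimal, hypothetical rendering under stated assumptions; the function names, the CONCERNING_RESPONSES table, and the print-based messaging stubs are assumptions, not CareSense's actual decision logic.

```python
# Map each bidirectional question to the responses that should trigger an alert.
CONCERNING_RESPONSES = {
    "speak_to_nurse": {"1"},               # "Press 1 for yes"
    "heavy_bleeding_or_swelling": {"yes"},
    "signs_of_infection": {"yes"},
}

def send_email(to: str, subject: str, body: str) -> None:
    print(f"[email to {to}] {subject}: {body}")   # stand-in for a real mail gateway

def send_message(to: str, body: str) -> None:
    print(f"[message to {to}] {body}")            # stand-in for the text/email channel

def handle_bidirectional_response(question_id: str, response: str,
                                  care_team_email: str, patient_contact: str) -> None:
    """Route a concerning response to the care team and tell the patient
    what to do if no one calls back within 24 hours."""
    if response.lower() in CONCERNING_RESPONSES.get(question_id, set()):
        send_email(
            to=care_team_email,
            subject=f"Patient alert: {question_id}",
            body=f"Patient answered '{response}'; please call within 24 hours.",
        )
        send_message(
            to=patient_contact,
            body="Your care team has been notified. If you do not hear back "
                 "within 24 hours, call your surgeon's office directly.",
        )

# Example: a patient presses 1 when asked whether they want to speak to a nurse.
handle_bidirectional_response("speak_to_nurse", "1",
                              care_team_email="ortho-team@example.org",
                              patient_contact="+1-555-0100")
```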
At the end of the mHealth Pathway, patients were asked to reflect, in a free-text field, on whether they liked the technology. These free-text responses were used to conduct a thematic analysis during the execution phase. The responses were collected from patients who underwent the entire Pathway, via a secure, two-factor authentication process, and were collated in an electronic database in Excel.
We supplemented the free-text feedback by contacting five patient participants from the original focus group sessions who had completed The Pathway and asking them to reflect on their experiences. The five participants were chosen from among the original focus group participants who underwent a TJR and were cared for by surgeons using CareSense during the timeframe of this study. Each participant wrote a similar amount of free text, and we included quotes from three of the five participants.
Data Analysis
The analysis consisted of a general analysis of the focus group interviews and a subanalysis focusing on mHealth aspects. The focus group sessions and free-text fields were analyzed using thematic analysis to identify and evaluate the needs, preferences, and tone among our sample. Thematic analysis involves identifying key themes (salient information, decision needs, and preferences) that emerge as the theory is formed, based on recurring participant statements.[26] Coding is the interpretative process in which conceptual labels are given to the data, generating new emergent codes (a mix of a priori and open codes) that are compared until consensus in coding is reached. The unit of analysis refers to the amount of content used to form the basis of a code. Researchers strive for a unit of analysis that retains enough context to derive meaning from the data and thus generally err on the side of broader units. In keeping with this principle, we coded at the paragraph level and coded each item only once.
Atlas.ti software version 7 (Atlas.ti Scientific Software Development Company, GmbH, Berlin, Germany)[27] was used to facilitate the coding process.
Our analytical process involved collaboratively developing a codebook through discussions among the research team. Consensus in coding during codebook development was reached when we achieved theoretical saturation, the point in data collection at which no additional issues or insights emerge from the data and all conceptual coding categories have been identified and exhausted.[28]
The emerging codes were circulated among the researchers, and the list of codes was developed into a codebook during a face-to-face meeting, conference calls, and email correspondence. After developing the codebook, team members independently coded the transcriptions. Interrater reliability among the coders was 97% agreement.
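The paper reports 97% agreement but does not specify how it was computed; the sketch below shows one common interpretation, simple percent agreement (the share of coded units to which both coders assigned the same code). The coder lists and theme labels are hypothetical examples, not study data.

```python
def percent_agreement(coder_a: list[str], coder_b: list[str]) -> float:
    """Simple percent agreement: share of units assigned the same code by both coders."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("Coders must rate the same, non-empty set of units.")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Hypothetical example: two coders label the same ten paragraphs with theme codes.
coder_a = ["tone", "tone", "chunking", "monitoring", "tone",
           "chunking", "monitoring", "tone", "chunking", "tone"]
coder_b = ["tone", "tone", "chunking", "monitoring", "tone",
           "chunking", "monitoring", "tone", "chunking", "monitoring"]
print(f"{percent_agreement(coder_a, coder_b):.0f}% agreement")  # 90% agreement
```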
Results
Design Phase
Sample characteristics: We interviewed 11 patients and 4 caregivers to reach conceptual thematic saturation.[28] Our sample approximated larger demographic trends among orthopaedic patients seeking medical care at our hospital, including the distribution by gender (48% male, 52% female), age (mean 61 years), length of time since the procedure (range: 8 days prior to the procedure to 53+ days after the procedure), and ethnic subgroups, including 82% white, 13% African American, and 2% other. The sample was 7% Hispanic/Latino and 93% non-Hispanic/Latino.
The data analysis for the design phase resulted in three emerging themes: (1) monitoring, bidirectional questions asking patients to respond to a question can feel invasive and/or annoying unless framed in a reciprocal, contextual-based way; (2) text message is the preferred method of communicating but should be used selectively for time-sensitive, critical information; and (3) information should be contained within the body of the message ([Table 2]).
Theme I: Monitoring, Bidirectional Questions Asking Patients to Respond to a Question Can Feel Invasive and/or Annoying Unless Framed in a Reciprocal, Contextual-based Way.
A major theme that surfaced in the design phase (and resurfaced in the execution phase—appearing in at least 31 [8.2%] of the 377 free-text fields) is that the bidirectional monitoring questions can feel invasive and, in the words of one patient, could “actually make it less likely that [patients will] listen or do what you say.” When asked to elaborate, one patient said: “You asked us whether we completed our preadmission testing. I'd be more receptive to that question, if instead of asking whether I did it, you asked: ‘Did you have any trouble with preadmission testing? That way, it's less about the doctors telling me to do something and more about whether I had any issues.’” ([Table 2])
The idea that patient data monitoring questions can be reframed to be more reciprocal seemed appealing to the focus group participants. Similarly, patients and their caregivers articulated a preference for messages that contextualized how the patient was doing relative to other patients as a means of goal-setting and experiential normalization, such as messages that read: “You should be able to walk around the block by now.”
Theme II: Text Messages Should be Used Selectively for Time-Sensitive, Critical Information.
Participants in the design phase focus groups said that they only wanted to receive information via text when it required them to respond and act quickly. Information that did not require action should be communicated through other mechanisms, aptly summarized by one caregiver: "time-sensitive, we-need-to-do-it-now-sort-of-text is fine. But anything that is just communicating FYI stuff should be email," and echoed by another patient: "Educational info on how to get ready for surgery or who to call with problems should never be text. Text is 'act now'. Appointment reminders can be sent one time via text, no more." Participants felt the frequency of text messages should be kept to a minimum (not to exceed three times a week), and anything that could be communicated easily via email should be sent in email format ([Table 2]).
Theme III: Information Should be Contained within the Body of the Message.
Several participants in the focus groups reported disliking features that required them to turn to other pages, pamphlets, or email or text messages to have the full context of the message. For instance, participants reported disliking messages that told them to, as one patient said, “call the so-and-so-office without actually giving us the [phone] number [in that message itself].” Other participants echoed similar sentiments by saying “email message must contain the information in the body of the email [without requiring them to open up an attachment or go to another internet link] (caregiver).” Pictures or images inside of the email message were liked but not viewed as necessary, particularly when recipients would need to open separate attachments or Internet links to see the pictures ([Table 2]).
#
Execution Phase
There were 377 discrete free-text responses (377 respondents) to our question about whether patients liked The Pathway.
Theme IV: The Content Should be Divided into Small, Digestible Chunks at the Times that Patients Need the Information.
A major theme in the execution phase was that patients liked receiving messages over the span of several weeks and days. They preferred that the messages be divided into small, incremental pieces of information, because they felt this helped them learn and retain information, as opposed to presenting all educational information in one document at one or two major time points. This theme was distinctly prominent—showing up in at least 260 free-text fields and succinctly expressed by one patient: “I really didn't have the knowledge before. It was most helpful when you would tell us exactly what to expect at specific times, and what is to be expected of me at specific times.” ([Table 2])
Theme V: The Tone of the Messages Should be Approachable and Friendly, as Opposed to Detached and Professional.
Forty-three of the 377 patient free-text comments identified the tone of the messages as a significant contributing factor to how much they liked CareSense, aptly expressed by one patient: "[This system] was like having a doctor right beside you every day," or, as another patient said: "The tone of your messages was so comforting and reassuring." Patients pointed to specific tactics we used to convey empathy and facilitate connectedness: asking how patients were doing, asking whether they had concerns, asking whether they needed us to call them, and describing both the symptoms that patients should consider normal during their recovery and the points of concern in their recovery.
Ten patients' free-text comments expressed dislike of the lack of two-way communication. We chose a close-ended method for our monitoring (bidirectional) questions to limit overtaxing surgeons' office staff and other clinicians, but the tradeoff was that some patients felt less connected: "[Monitoring questions with limited options] doesn't give much opportunity to actually talk."
Theme VI: mHealth Technologies Make Patients Less Inclined to Ask Hospital Personnel Because They Feel Calmer and More Confident.
Forty-seven patients remarked that the mHealth technology made them less inclined to call their surgeons' office or go to the emergency department, which they considered a positive attribute of the technology. As one patient noted: “I try not to go to the ER unless I don't know what else to do to help myself. This program helped me keep thinking of ways to try to help myself.” Similarly, nine patients liked that the hospital would not be using its resources to call on them ([Table 2]), and 47 patients reported feeling calmer and more confident as a result of the messaging: “Knowing what to expect as realistic outcomes allows me to relax and not worry about things that I would have stressed over if I hadn't been prepared to expect them.” ([Table 2])
Discussion
Patient engagement is a central tenet of our study, and we found that it is essential in determining the effectiveness and uptake of mHealth.[13] [14] Our findings indicate that by interviewing patients at several critical junctures, both before and after the mHealth implementation, we were able to detect what appears to be a time-dependent shift in patients' information needs and user preferences. Furthermore, the use of the app saved organizational resources (time and money on the part of clinical staff) and produced equal or better outcomes.[8]
Different themes emerged during our study despite rigorous assurance that the interview tools and coding scheme were identical in all study phases. Participants in the design phase emphasized the look and feel of the mHealth technology, such as email length, message frequency, and accessibility. Conversely, participants in the execution phase focused on how the content personally made them feel: supported, connected, capable, confident, annoyed, and so on.
The implication for designers may be that engaging participants only during the design process of mHealth technologies risks underappreciating the complex, emotionally laden, and evolving perspectives that develop during extended mHealth use. At a minimum, this finding underscores the importance of patient and family codesign throughout the entire design, implementation, and revision stages of mHealth pathways. We suggest that designers elicit user emotions prior to implementation by enrolling a few patients in a first pilot, then gauging their responses and revising the content prior to a more full-scale implementation. Robust evaluation is required for mHealth programs to work, as patient responses to clinical automation can be quite variable and difficult to anticipate.[29] [30]
Previous studies of patient engagement have found that it can be treated as a tokenistic measure, one where patients' feedback is used primarily as a means of "rubber-stamping" to secure funding, rather than patients serving as integral members of the implementation team.[13] We chose a highly interactive patient engagement strategy by using patient feedback as the primary (if not exclusive) driver in revising the data algorithm and the clinical messaging protocols. We thus elicited an important finding: greater levels of interactivity (bidirectional engagement) seem to enhance the appeal and usability of available information to patients, for example by asking questions such as "Would you like to have a nurse speak with you about any health care concerns you might have?" However, even modest levels of interactivity, with close-ended, yes/no or option-based responses that do not allow for open-ended, two-way communication, seem sufficient for many (but not all) patients to feel meaningfully connected and supported by their health care team. This unexpected finding suggests that limited, bidirectional engagement can foster interactivity and patient monitoring without becoming excessive or burdensome for health care professionals.[29]
Another significant contribution of this study is the ability to discern the types of messages patients like and dislike, when it is optimal to engage patients, and why they prefer one approach over another. The tone of the message seems to matter most. We found it surprising that many patients preferred an approachable, almost informal tone over the professional tone that is often used in hospital-printed materials. We suspect that prospective patients contemplating surgery, while reviewing different hospitals and physicians, prefer a more detached, professional tone, but, once they have committed to undergo surgery with a surgeon or surgical team, they appreciate a more nuanced, informal, and conversational approach. This finding suggests that technology programs should aim to use an empathetic, supportive tone, one that can be guided and influenced by patient care navigators who excel at providing sensitive, trust-evoking feedback to patients and their family members over the phone and through email communications during the perioperative journey.[30]
Several quantitative studies have empirically demonstrated that patients who are more “activated” (defined as a patient's willingness and ability to take independent actions to manage their health and care) are significantly more likely than less activated patients to engage in healthy behaviors, such as exercising, and avoiding health-damaging behaviors, like smoking.[4] [31] [32] [33] [34] However, with few exceptions, studies have failed to elicit the range of patients' perspectives as to why or when they become most active, or where and how they become more or less adherent to provider instructions.
We think that, based on these findings, it is presumptuous to categorize patients as "more" or "less" activated in a scaled manner. A more nuanced interpretation would suggest that there is a range of diverse situations in which patients become activated and responsive to clinicians' and hospitals' instructions. Specifically, our data suggest that the level of activation depends just as much, if not more, on the context of care and on the person encouraging the activation, that is, on the health care professional's messaging (their tone and phrasing), as on the patient's inherent disposition.[35]
Behavioral economic principles such as reciprocity and normalization may lead to more patient activation.[34] For instance, rather than saying, "You should be off the narcotic pain medication by this point," the clinician might normalize the experience by saying, "90% of patients do not need narcotic pain medications at this point," which may yield dividends in improving patient experience and maximizing activation. Clinicians can contextualize their patients' recovery processes by describing how they are doing relative to other patients without being judgmental. This effective behavioral trigger helps to anchor behavior changes beyond the mHealth context, because patients can understand why they need to make a change in their behavior.[34] [35]
Limitations
There are limitations to our study. First, the sample size was small. Second, there is the possibility of participant bias, especially since focus groups are set in an artificial environment involving a conference room, a whiteboard, and an overhead projector.[36] [37] [38] We made every effort throughout the study to ensure the methodological rigor and validity of our interviews, and to confirm that the data were collected, analyzed, and reported according to the study protocol, by using a standardized codebook, meeting frequently, and performing a robust face validity analysis.[35] [36]
Third, our interviewees were English-speaking, literate, and undergoing joint replacement, so our findings may not generalize to populations that do not fit this profile. Our study does not fully reflect the diversity of our Houston population and conditions. This lack of diversity impedes our ability to generalize the study results and may have prevented Spanish-speaking populations from experiencing the benefits of research innovation. Fourth, we provided percentages alongside the themes to give a measure of the robustness of the themes, and we do not intend for the percentages to be interpreted as a full representation of the sample.
Finally, it could be argued that different themes emerged in the design and execution phases because the study designs of the two phases differed.
Conclusions and Future Directions
The study considerably advances our understanding of the importance of remote patient-facing monitoring technologies that empower patients and their caregivers to become more involved and informed about their care and, specifically, to play a more active role in enhancing patient safety. The use of focus group interviews provides valuable insights into the social behavior and the underlying shared values, beliefs, assumptions, and norms of health care providers in the hospital and community. The mHealth technology deployed in this study appears to be efficient, welcomed, and encouraged by this highly select group of English-speaking and literate orthopaedic patients and their caregivers. Importantly, we found that even limited, bidirectional engagement can foster interactivity and patient monitoring. Potential concerns about mHealth opening the floodgates to unfettered, open-access patient messaging appear to be unfounded.[39] This codevelopment exercise serves as a blueprint for a scalable process for future codevelopment and can lead to a "playbook" for producing further mHealth apps.
Our study lends credence to the notion that patients, within the socioeconomic and cultural context of our study, prefer certain types of clinical messages involving elements of reciprocity, which suggest a better patient-centered communication experience. There remains a deeper need to better optimize the data algorithms, clinical messaging, and the interactions between health care providers and patients.
Clinical Relevance Statement
The study considerably advances our understanding of the importance of remote patient-facing monitoring technologies that empower patients and their caregivers to become more involved and informed in their care and, specifically, to play a more active role in enhancing patient safety and experience. Automated systems for patient care are likely to be more effective, and more likely to be used, if clinically integrated with staff who can respond to escalated problems, enabling better adoption, uptake, and sustainability of the technology.
Conflict of Interest
The CareSense employees who are coauthors on this manuscript helped to build the templates of the reports that allowed analyses (C.G.) or built the decision logic for the Pathway messages that patients received (J.S.). No CareSense employee had any input in the data analyses or in drafting the manuscript.
Protection of Human and Animal Subjects
The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects and was reviewed by the hospital's Institutional Review Board.
References
- 1 Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood) 2008; 27 (03) 759-769
- 2 Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med 2014; 12 (06) 573-576
- 3 Hansen WB, Scheier LM. Specialized smartphone intervention apps: review of 2014 to 2018 NIH funded grants. JMIR Mhealth Uhealth 2019; 7 (07) e14655
- 4 Hibbard JH, Greene J. What the evidence shows about patient activation: better health outcomes and care experiences; fewer data on costs. Health Aff (Millwood) 2013; 32 (02) 207-214
- 5 Irfan Khan A, Gill A, Cott C, Hans PK, Steele Gray C. mHealth tools for the self-management of patients with multimorbidity in primary care settings: pilot study to explore user experience. JMIR Mhealth Uhealth 2018; 6 (08) e171
- 6 Statucki T, Howard N, Ackerman W, Kuhn C. The potential benefits of digital health technology in managing COVID-19. Covington Digital Health. Accessed August 12, 2020 at: https://www.covingtondigitalhealth.com/2020/03/the-potential-benefits-of-digital-health-technology-in-managing-covid-19/
- 7 Centers for Medicare and Medicaid Services. HCAHPS: patients' perspectives of care survey. 2014 . Accessed August 12, 2020 at: https://www.cms.gov/medicare/quality-initiatives-patient-assessment-instruments/hospitalqualityinits/hospitalhcahps.html
- 8 Bruce CR, Harrison P, Nisar T. et al. Assessing the impact of patient-facing mobile health technology on patient outcomes: retrospective observational cohort study. JMIR Mhealth Uhealth 2020; 8 (06) e19333
- 9 Hardyman W, Daunt KL, Kitchener M. Value co-creation through patient engagement in health care: a micro-level approach and research agenda. Public Manage Rev 2015; 17 (01) 90-107
- 10 Bowen S, McSeveny K, Lockley E, Wolstenholme D, Cobb M, Dearden A. How was it for you? Experiences of participatory design in the UK health service. CoDesign 2013; 9 (04) 230-246
- 11 Hesselink G, Vernooij-Dassen M, Pijnenborg L. et al; European HANDOVER Research Collaborative. Organizational culture: an important context for addressing and improving hospital to community patient discharge. Med Care 2013; 51 (01) 90-98
- 12 Brett J, Staniszewska S, Mockford C. et al. A systematic review of the impact of patient and public involvement on service users, researchers and communities. Patient 2014; 7 (01) 387-395
- 13 Domecq JP, Prutsky G, Elraiyah T. et al. Patient engagement in research: a systematic review. BMC Health Serv Res 2014; 14: 89
- 14 Batalden M, Batalden P, Margolis P. et al. Coproduction of healthcare service. BMJ Qual Saf 2016; 25 (07) 509-517
- 15 Cook JA, Elders A, Boachie C. et al. A systematic review of the use of an expertise-based randomised controlled trial design. Trials 2015; 16: 241
- 16 Hesselink G, Schoonhoven L, Barach P. et al. Improving patient handovers from hospital to primary care: a systematic review. Ann Intern Med 2012; 157 (06) 417-428
- 17 Lee D. A model for designing healthcare service based on the patient experience. Int J Healthc Manag 2019; 12 (03) 180-188
- 18 Miettinen S, Rytilahti P, Vuontisjärvi H, Kuure E, Rontti S. Experience design in digital services. Res Econ Bus. 2014; 6 (01) 29-50
- 19 Chandler JD, Vargo SL. Contextualization and value-in-context: how context frames exchange. Mark Theory 2011; 11 (01) 35-49
- 20 Vargo SL, Lusch RF. Service-dominant logic: continuing the evolution. J Acad Mark Sci 2008; 36 (01) 1-10
- 21 Erickson SM, Rockwern B, Koltov M, McLean RM. Medical Practice and Quality Committee of the American College of Physicians. Putting patients first by reducing administrative tasks in health care: a position paper of the American College of Physicians. Ann Intern Med 2017; 166 (09) 659-661
- 22 Arozullah AM, Yarnold PR, Bennett CL. et al. Development and validation of a short-form, rapid estimate of adult literacy in medicine. Med Care 2007; 45 (11) 1026-1033
- 23 Aboumatar HJ, Carson KA, Beach MC, Roter DL, Cooper LA. The impact of health literacy on desire for participation in healthcare, medical visit communication, and patient reported outcomes among patients with hypertension. J Gen Intern Med 2013; 28 (11) 1469-1476
- 24 Brod M, Tesler LE, Christensen TL. Qualitative research and content validity: developing best practices based on science and experience. Qual Life Res 2009; 18 (09) 1263-1278
- 25 Nielsen J. Usability Engineering. San Francisco, CA: Morgan Kaufmann Publishers Inc.; 1993
- 26 Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med 2008; 6 (04) 331-339
- 27 Atlas.ti. The qualitative data analysis and research software. Accessed September 11, 2020 at: https://atlasti.com/
- 28 Corbin J, Strauss A. Basics of Qualitative Research. Newbury Park, CA: Sage; 2007
- 29 Lopez C, Hanson C, Yorke D. et al. Improving communication with families of patients undergoing pediatric cardiac surgery. Prog Pediatr 2017; 45: 83-90
- 30 Natale-Pereira A, Enard KR, Nevarez L, Jones LA. The role of patient navigators in eliminating health disparities. Cancer 2011; 117 (15, Suppl): 3543-3552
- 31 Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ 2010; 341: c4413
- 32 Hibbard JH, Stockard J, Mahoney ER, Tusler M. Development of the Patient Activation Measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res 2004; 39 (4 Pt 1): 1005-1026
- 33 Maindal HT, Sokolowski I, Vedsted P. Translation, adaptation and validation of the American short form Patient Activation Measure (PAM13) in a Danish version. BMC Public Health 2009; 9: 209
- 34 Sunstein CR. Behavioral Law and Economics. Cambridge: Cambridge University Press; 2000
- 35 Deane K, Stevermer JJ, Hickner J. Help smokers quit: tell them their “lung age”. J Fam Pract 2008; 57 (09) 584-586
- 36 Althubaiti A. Information bias in health research: definition, pitfalls, and adjustment methods. J Multidiscip Healthc 2016; 9: 211-217
- 37 Mays N, Pope C. Qualitative research in health care. Assessing quality in qualitative research. BMJ 2000; 320 (7226): 50-52
- 38 Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007; 19 (06) 349-357
- 39 Rudin RS, Fanta CH, Qureshi N. et al. A clinically integrated mHealth app and practice model for collecting patient-reported outcomes between visits for asthma patients: implementation and feasibility. Appl Clin Inform 2019; 10 (05) 783-793
Publication History
Received: 19 October 2020
Accepted: 31 October 2021
Article published online: 01 July 2022
© 2022. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. (https://creativecommons.org/licenses/by/4.0/)
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

