Appl Clin Inform 2019; 10(05): 783-793
DOI: 10.1055/s-0039-1697597
Research Article
Georg Thieme Verlag KG Stuttgart · New York

A Clinically Integrated mHealth App and Practice Model for Collecting Patient-Reported Outcomes between Visits for Asthma Patients: Implementation and Feasibility

Robert S. Rudin
1   RAND Corporation, Boston, Massachusetts, United States
,
Christopher H. Fanta
2   Partners Asthma Center, Brigham and Women’s Hospital, Boston, Massachusetts, United States
,
Nabeel Qureshi
3   RAND Corporation, Santa Monica, California, United States
,
Erin Duffy
3   RAND Corporation, Santa Monica, California, United States
,
Maria O. Edelen
1   RAND Corporation, Boston, Massachusetts, United States
,
Anuj K. Dalal
4   Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
,
David W. Bates
5   Division of General Internal Medicine, Department of Health Policy and Management, Brigham and Women's Hospital, Harvard Chan School of Public Health, Boston, Massachusetts, United States
Funding This work was supported by the Agency for Healthcare Research and Quality grant #1R21HS023960.

Address for correspondence

Robert S. Rudin, PhD
RAND Corporation
20 Park Plaza, Suite 920, Boston, MA 02116
United States   

Publication History

Received: 03 June 2019

Accepted: 07 August 2019

Publication Date:
16 October 2019 (online)

 

Abstract

Objective Mobile health (mHealth) apps may prove to be useful tools for supporting chronic disease management. We assessed the feasibility of implementing a clinically integrated mHealth app and practice model to facilitate guideline-recommended, between-visit asthma symptom monitoring using patient-reported outcomes (PROs).

Methods We implemented the intervention at two pulmonary clinics and conducted a mixed-methods analysis of app usage data and semi-structured interviews of patient and clinician participants over a 25-week study period.

Results Five physicians, 1 physician's assistant, 1 nurse, and 26 patients participated. Twenty-four patients (92%) were still participating in the intervention at the end of the 25-week study period. On average, each patient participant completed 21 of 25 questionnaires (84% completion rate). Weekly completion rates were higher for participants who were female (88 vs. 73%, p = 0.02) and for those with a bachelor's degree or higher (94 vs. 74%, p = 0.04). Of all questionnaires administered, both completed and not completed (25 weekly questionnaires for each of 26 patient participants), 25% had results severe enough to qualify for a callback from a nurse; however, patients declined the callback in roughly half of the cases in which it was offered. We identified 6 key themes from an analysis of interviews with 21 patients and 5 clinicians. From the patients' perspective, these included greater awareness of asthma, feeling more connected with their provider, and the app's simplicity. From the clinicians' perspective, these included minimal additional work required, facilitation of triage, and better-informed conversations during visits.

Conclusion Implementation of a clinically integrated mHealth app and practice model can achieve high patient retention and adherence to guideline-recommended asthma symptom monitoring, while minimally burdening clinicians. The intervention has the potential for scaling to primary care and reducing utilization of urgent and emergency care.



Background and Significance

Mobile health applications (mHealth apps) have the potential to improve chronic disease management.[1] Approximately 81% of Americans own smartphones, and rates of use are rising among older adults and people with low household incomes.[2] [3] [4] Smartphone adoption among the homeless population has been shown to exceed 50%.[5] Although much attention has focused on the use of mHealth apps for self-management, less effort has been devoted to understanding how to integrate these apps into clinical care and what types of practice models are required for successful implementation and sustainability.[6] [7] [8] [9] Furthermore, of the more than 165,000 mHealth apps currently available, few have acceptable usability, many fall short of providing clinical utility, and most are abandoned quickly.[2] [10] Understanding how to develop and implement clinically useful mHealth apps represents a key challenge.

An important opportunity for mHealth apps is to facilitate symptom monitoring using standardized questionnaires.[11] Patient-reported outcomes (PRO) that assess and measure symptom severity, functional status, and quality of life have the potential to promote effective, patient-centered care. Studies have shown many benefits for PROs, including accurate detection of symptoms, better communication between patients and clinicians, and improved health outcomes.[12] [13] Currently, PROs are primarily used in research settings; they have yet to be deployed in clinical practice more broadly.[14] Furthermore, there are few reported examples of sustainable implementations of PROs for routine monitoring between visits.

Asthma, a chronic disease that affects more than 25 million individuals in the United States and 300 million worldwide,[15] [16] is in many ways an ideal condition for between-visit symptom monitoring using an mHealth intervention for PRO collection. Uncontrolled asthma has deleterious effects on patients' lives: approximately 1.75 million asthma-related visits to US emergency departments occur each year (9 visits for every 100 patients with asthma), and these visits are disproportionately common among patients of minority race/ethnicity and lower socioeconomic status.[17] Although evidence-based guidelines recommend adjusting treatment based on frequent symptom monitoring between visits, these guidelines are not routinely followed.[18] [19] [20] [21] [22] Previously studied mHealth interventions that attempted to support such monitoring were overly complex and had inconsistent results. Finally, little attention has been given to user-centered design and to these interventions' integration into clinical practice.[20] [21] [22] [23] [24] [25] [26] [27] [28] [29] [30]

In our previous study, we developed a clinically integrated mHealth app for asthma patients, as well as a practice model including a dashboard for clinicians to proactively monitor symptoms between visits.[31] We employed a user-centered design approach in which we systematically engaged patient and clinician end users to identify requirements and core functionality.[31] Consistent with recent findings that emphasize the importance of simplicity in digital health tools, we focused on simplicity in every aspect of the design of the mHealth intervention.[32] We implemented our mHealth intervention in two subspecialty care clinics with the goal of engaging patients and clinicians in proactively monitoring asthma symptoms between visits through the collection of PROs.



Objective

The purpose of the current study was to evaluate the feasibility of between-visit asthma symptom monitoring via our mHealth intervention by conducting a mixed-methods analysis of our implementation.



Methods

Setting and Participants

We conducted this study at two ambulatory practices within Brigham and Women's Hospital, an academic medical center in Boston, MA, from May 2017 through April 2018. Five physicians, one physician's assistant, and one licensed practical nurse (LPN) from these clinics were recruited (most of these clinicians were previously involved in the design of the intervention and application). While primary care is the setting in which most asthma patients are treated, we chose ambulatory clinics that specialized in pulmonary or allergic diseases to facilitate recruitment of physicians and of diverse patients with poor asthma control.

Patients were recruited on a rolling basis during the six-month study period by means of a mailed letter from their subspecialty physician followed by a phone call from a research assistant. We included patients selected by their physician who were older than 18 years, spoke English, had a diagnosis of asthma, and stated they used a smartphone regularly. We excluded patients with cognitive impairment. We used purposeful, maximal variation sampling to recruit between 24 and 50 participants with a range of ages, ethnicities and education levels, as recommended for feasibility studies.[33] [34] [35] A research assistant collected demographic data verbally from patients during enrollment. Patients were provided an incentive of $25 upon enrollment and completion of the study. The nurse was paid $5,000 as compensation to answer phone calls and upload PRO data to the electronic health record (EHR) before each visit. Physicians were not provided any incentive or additional compensation to participate.



Intervention: mHealth App and Practice Model

Using user-centered design principles, we previously designed and developed a patient-facing mHealth app that runs on iOS and Android, a web-based dashboard for use by nurses and physicians, and a practice model to support implementation in routine care.[31] A practice model defines the essential components of clinical practice (e.g., workflows and technologies) required to achieve a specific goal. We found through our design process that clinical integration would not be possible by implementing the app without an accompanying practice model. We used an iterative process in which we engaged with 19 patients and 7 clinicians to define new workflows, develop low- and high-fidelity prototypes, and finalize the software specifications. The intervention consists of four components as described in [Table 1]. To collect and track patient symptoms, we used a 5-question instrument, the asthma control measure (ACM).[36] [37] [38] The ACM is a PRO instrument that is reliable and valid and possesses similar characteristics to the more widely used Asthma Control Test.[39] The ACM does not require a license to use and is therefore more scalable. Possible total scores on the 5-question ACM range from 0 (no symptoms) to 19 (most severe symptoms); see [Supplementary Appendix A] (available in the online version) for the ACM questions. To determine patients' baseline ACM scores, upon enrollment the app prompted patients to complete the ACM for an average week in the recent past (see [Supplementary Appendix B], available in the online version, for screenshots of the app and dashboard).

Table 1

Intervention components of the mHealth app and practice model

Component 1. Invitation: Patients are invited to participate by their physician via a mailed letter, with a follow-up phone call from implementation staff.

Component 2. Weekly symptom checks and notifications: Patients receive weekly prompts to complete a questionnaire on their smartphones. Questionnaires expire after 48 hours. If symptoms meet specific criteria for worsening or severity (3 points worse than baseline or the previous week), patients can request a call from the nurse, who uses the dashboard, after completing the questionnaire (i.e., optional callback). If symptoms are more severe (6 points worse than baseline or the previous week, or the most severe response on any question), a call is requested automatically (i.e., mandatory callback).[a] Callbacks are made within 24–48 hours.

Component 3. Patient's review of symptoms on app: Patients can view their recent symptom history graphically, as an aggregate score and as scores on individual questions.

Component 4. Clinicians' review of symptoms during in-person visit: Physicians, nurses, and/or physician's assistants can access patients' symptom history within the EHR and discuss the data during in-person visits.[b]

a We disabled the mandatory callback feature during the study period in response to feedback that patients preferred the optional callback instead.


b Integration of the dashboard directly into the EHR was not possible at the time of this study due to the institution's recent transition to a new EHR. To test this component, the nurse manually uploaded a one-page summary of each patient's PRO data as a recent note prior to each patient's visit with his or her provider.




Adaptation

Midway through the study, we made one change to the intervention in response to feedback the nurse received from patient participants: we removed the mandatory callback feature, which had automatically triggered a nurse-to-patient phone call when patients reported symptoms meeting criteria for very high or greatly worsening severity ([Table 1]). After that change, patients received only the optional callback feature (i.e., the patient could request or decline the callback) even if their symptoms were very severe. We made this change because we found that mandatory callbacks often occurred after the patient had already contacted their doctor and received treatment.
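To make the callback thresholds concrete, the sketch below encodes the triage rules from [Table 1] in Python. It is an illustrative sketch only, not the study's software; the function name, parameters, and the mandatory_enabled flag (which reflects the adaptation described above) are hypothetical.

```python
from typing import Optional

def classify_callback(current_score: int,
                      baseline_score: int,
                      previous_score: Optional[int],
                      most_severe_on_any_question: bool,
                      mandatory_enabled: bool = True) -> str:
    """Return 'mandatory', 'optional', or 'none' for one completed weekly questionnaire.

    Scores are weekly ACM totals (0 = no symptoms, 19 = most severe).
    """
    # Worsening is measured relative to the patient's baseline or the prior week,
    # whichever comparison shows the larger increase.
    deltas = [current_score - baseline_score]
    if previous_score is not None:
        deltas.append(current_score - previous_score)
    worsening = max(deltas)

    # Mandatory callback: >= 6 points worse, or the most severe answer on any question.
    # (This branch was disabled midway through the study, as described above.)
    if mandatory_enabled and (worsening >= 6 or most_severe_on_any_question):
        return "mandatory"
    # Optional callback: >= 3 points worse; the patient may request or decline the call.
    if worsening >= 3:
        return "optional"
    return "none"
```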



Semi-structured Interviews

We attempted to conduct individual, semi-structured interviews via telephone with all patient and clinician participants. We developed individual interview guides for patients, the nurse, and clinicians who participated. Interview topics addressed the following: overall perspective of the intervention (mHealth app and practice model), perceived benefits to patients, technical and workflow barriers, features of the app and practice model, and potential enhancements to include for subsequent development ([Supplementary Appendix C], available in the online version). Each interview lasted approximately 30 minutes for patients and one hour for physicians and the nurse. All interviews were audio recorded and transcribed. After each interview, we emailed the patient a survey to assess usability ([Supplementary Appendix D], available in the online version).



Mixed Methods Analysis

We conducted a sequential explanatory, mixed methods analysis.[40] We first analyzed patients' usage logs using descriptive statistics and associations with demographic variables. Next, we analyzed qualitative data to help explain our quantitative findings and to better understand patient and provider experiences with the app and practice model.

Quantitative Analysis

We used descriptive statistics to report patients' characteristics (e.g., baseline and weekly questionnaire scores) and participation (e.g., weekly questionnaire completion rates and proportion of questionnaires qualifying for each type of callback). To assess symptom volatility, we first categorized level of asthma control based on patients' weekly questionnaire responses into the following three levels: well controlled = 0 to 2, not well controlled = 3 to 7, very poorly controlled = 8 to 19. We then computed a transition score, reflecting changes in level of asthma control for each week of the study (same category = 0, change to an adjacent category = 1, and change to a nonadjacent category = 2).[38] [41]
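As an illustration of the volatility measure, the sketch below maps weekly ACM totals to control levels and averages the week-to-week transitions. It is a hypothetical Python analogue (the study's analyses were run in SAS); the function names and the handling of missing weeks are assumptions.

```python
from typing import List, Optional

def control_level(score: int) -> int:
    """Map an ACM total to a control level: 0 = well, 1 = not well, 2 = very poorly controlled."""
    if score <= 2:
        return 0
    if score <= 7:
        return 1
    return 2

def average_transition_score(weekly_scores: List[Optional[int]]) -> Optional[float]:
    """Average week-to-week change in control level (0 = same, 1 = adjacent, 2 = nonadjacent)."""
    # Assumption: weeks with no completed questionnaire (None) are skipped.
    levels = [control_level(s) for s in weekly_scores if s is not None]
    if len(levels) < 2:
        return None
    transitions = [abs(b - a) for a, b in zip(levels, levels[1:])]
    return sum(transitions) / len(transitions)

# Example: a patient whose control shifts from well (2) to very poor (10) and back
print(average_transition_score([2, 10, 6, 2]))  # -> (2 + 1 + 1) / 3 ≈ 1.33
```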

We conducted bivariate analyses to assess the associations between patients' demographic (age, gender, ethnicity, education) and clinical characteristics and their weekly questionnaire completion rates. For each patient, we calculated the completion rate, defined as the total number of questionnaires completed divided by the total number of questionnaires administered to that participant (i.e., 25). We used this as a continuous measure and applied two-sample t-tests to assess differences by gender, education (bachelor's degree or higher), and ethnicity. An analysis of variance was conducted to compare patients' questionnaire completion rates across the 5 clinicians (i.e., patients were grouped by their clinician). Pearson's correlation tests were conducted to assess the association between percent questionnaire completion and the continuous demographic and clinical variables (e.g., age and symptom severity). All analyses were conducted using SAS software (SAS Institute Inc., Cary, NC, USA).
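The sketch below illustrates the completion-rate calculation and a two-sample t-test between patient groups. It is a hypothetical Python analogue of the analysis, which was actually conducted in SAS; the DataFrame column names and toy data are illustrative only.

```python
import pandas as pd
from scipy import stats

TOTAL_QUESTIONNAIRES = 25  # one weekly questionnaire per patient over the study period

def add_completion_rate(df: pd.DataFrame) -> pd.DataFrame:
    """Completion rate = questionnaires completed / questionnaires administered (25)."""
    df = df.copy()
    df["completion_rate"] = df["n_completed"] / TOTAL_QUESTIONNAIRES
    return df

def compare_groups(df: pd.DataFrame, group_col: str, group_a: str, group_b: str):
    """Two-sample t-test of completion rates between two patient groups."""
    a = df.loc[df[group_col] == group_a, "completion_rate"]
    b = df.loc[df[group_col] == group_b, "completion_rate"]
    t_stat, p_value = stats.ttest_ind(a, b)
    return a.mean(), b.mean(), p_value

# Example usage with a toy dataset (not study data)
patients = add_completion_rate(pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F"],
    "n_completed": [25, 24, 15, 20, 22],
}))
print(compare_groups(patients, "gender", "F", "M"))
```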



Qualitative Analysis

To facilitate coding of qualitative data, we used Dedoose (SocioCultural Research Consultants, LLC, version 8.0.35), a secure online data analysis application. Two authors (RSR and NQ) reviewed all transcripts and developed a preliminary, hierarchically organized codebook using both deductive (i.e., based on the interview guide and quantitative findings) and inductive (i.e., emerging from the data) approaches.[42] Once the codebook was finalized, to ensure consistency, the 2 coders (RSR and NQ) coded 2 transcripts independently and obtained a kappa of 0.82 (where 1.0 indicates perfect agreement). The 2 coders then coded all subsequent transcripts using the codebook. The coders added a modest number of new codes, or refined existing ones, as they emerged during the analysis to improve the quality of the coding. The same 2 coders reviewed all transcripts a second time using the final codebook to identify salient themes and discussed the findings with the research team over several sessions. We continued the analysis until we reached thematic saturation.[43]
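As an illustration of the inter-coder agreement check, the sketch below computes Cohen's kappa for two coders' aligned code assignments. The coding itself was done in Dedoose, so this Python snippet and its toy labels are purely hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# Toy example: each position is one excerpt, and each value is the code a coder assigned to it.
coder_rsr = ["awareness", "simplicity", "triage", "awareness", "connectedness"]
coder_nq = ["awareness", "simplicity", "triage", "connectedness", "connectedness"]

kappa = cohen_kappa_score(coder_rsr, coder_nq)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 would indicate perfect agreement
```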



Ethical Approval

The procedures used in this study were reviewed by and complied with the requirements of the institutional review boards at Brigham and Women's Hospital and the RAND Corporation; all patients provided consent. It is possible that the payments to the patients made them more likely to actively participate in the study. However, payment was not contingent on their level of participation, and the amount was modest.



Results

Out of the 59 patients invited to participate via a letter from their physician, 26 enrolled in the study. Patient participants ([Table 2]) were mostly female and white, and approximately 50% were college educated. Four physicians and one physician's assistant participated, along with the LPN who served as our study nurse.

Quantitative Results: Analysis of App Usage

During the 25-week study, on average, each patient completed 21 of the 25 weekly questionnaires (84% completion rate); completion rates for individual participants ranged from 4 to 100%. Of the 26 patients, 19 (73%) had perfect or near-perfect (i.e., missed one) completion rates on weekly questionnaires, 23 (88%) completed at least one questionnaire in each of the 6 months of the study, and 24 (92%) continued to participate in the final month of the study.

Of the 26 patient participants, 24 (92%) had at least one weekly questionnaire score that was severe enough to qualify for an optional or mandatory callback from a nurse (see Methods section for callback severity criteria). On average, of the 25 questionnaires administered to each patient participant, 6.25 questionnaires (25%) had results severe enough to qualify for a callback from a nurse. Callbacks were of three types ([Fig. 1]) with per-patient averages as follows: 8% (2 of 25 weekly questionnaires) resulted in optional callbacks for which the patient requested a call; 12% (3 of 25 weekly questionnaires) resulted in optional callbacks for which the patient declined a call; and 5% (1.25 of 25 weekly questionnaires) resulted in mandatory callbacks. Of the 26 study patients, 20 (77%) declined at least one option to receive a callback.

Fig. 1 Weekly questionnaire completion rates and use of callback features.

Female patients had significantly higher weekly questionnaire completion rates compared with males (88 vs. 73%, p = 0.02). Patients with a bachelor's degree or higher completed significantly more questionnaires compared with those with less education (94 vs. 74%, p < 0.01). Clinical characteristics (derived from ACM scores), age, and ethnicity were not significantly associated with questionnaire completion ([Fig. 2] and [Tables 2] and [3]). Example patterns of weekly ACM scores for 4 participants are illustrated in [Fig. 3]. There were no statistically significant differences in levels of completion across the 5 clinicians (data not shown, ANOVA p = 0.37).

Fig. 2 Distribution of questionnaire completion by patient characteristics. An “X” indicates the mean value in the distribution of % of questionnaire completion and the “o” marks outlier values.
Table 2

Patient demographics, asthma control, and mHealth app participation, n = 26

Patient demographics
 Mean (SD) age in years: 54 (16)
 Sex, N (%): Male, 7 (27%); Female, 19 (73%)
 Education, N (%): Bachelor's or higher, 14 (54%); No bachelor's, 9 (35%); Missing, 3 (12%)
 Ethnicity, N (%): White, 20 (77%); Nonwhite, 6 (23%)

Asthma control
 Median (IQR) baseline score on modified ACM[a]: 5.0 (4.0)
 Median (IQR) average of weekly scores on modified ACM[a]: 2.8 (5.8)
 Median (IQR) transition score[b]: 0.2 (0.3)
 Median (IQR) % of responses indicating worsening or severe symptoms[c]: 12 (28)

mHealth app participation
 No. of weekly questionnaires administered to each patient participant[d]: 25
 Mean % of weekly questionnaires completed per patient[e]: 84%
 Patients with perfect or near-perfect (i.e., missed 1) questionnaire completion over the 6-month study period, N (%): 19 (73%)
 Patients who completed a questionnaire in each of the 6 months of the study, N (%): 23 (88%)
 Patients who responded to a questionnaire within the final month of their study period, N (%): 24 (92%)
 Patients who had at least one questionnaire that qualified for an optional or mandatory callback, N (%): 24 (92%)
 Mean % of weekly questionnaires per patient that qualified for a callback (optional + mandatory)[e]: 25%
 Mean % of weekly questionnaires per patient resulting in mandatory callbacks[e]: 5%
 Mean % of weekly questionnaires per patient resulting in optional callbacks requested[e]: 8%
 Mean % of weekly questionnaires per patient resulting in optional callbacks declined[e]: 12%
 Patients receiving at least one callback from a nurse, N (%): 21 (81%)
 Patients declining at least one callback from a nurse, N (%): 20 (77%)

a The Asthma Control Measure (ACM) is a patient questionnaire used to assess recent asthma symptoms and severity. We modified it from monthly to weekly.


b Questionnaire scores were categorized into three levels of severity (well = 0–2, not well = 3–7, very poorly = 8–19), and changes in severity category were measured for consecutive weeks (same category = 0, change to an adjacent category = 1, and change to a nonadjacent category = 2). The average transition score was computed for each patient as a measure of the volatility of their condition, where higher average transition scores indicate greater volatility.


c Patients with severe symptoms either automatically received a call from a clinician (mandatory callback) or were given the option of requesting a call (optional callback).


d One questionnaire was sent per week to each patient for the 25-week study period. Due to a technical error, some patients were allowed to continue using the app beyond the study period; see [Supplementary Appendix E] (available in the online version) for usage.


e Denominator is total questionnaires administered per patient (i.e., 25).


Fig. 3 Four example patients' responses to weekly ACM questionnaires.
Table 3

Association between weekly questionnaire completion and demographic and clinical characteristics

Categorical variables: mean (SD) % of questionnaires completed; two-sample t-test p-value
 Gender: Male (n = 7), 73 (36); Female (n = 19), 88 (18); p = 0.02
 Education: Bachelor's or higher (n = 14), 94 (5); No bachelor's (n = 9), 74 (36); p < 0.01
 Ethnicity/Race: White, non-Hispanic (n = 20), 80 (22); Nonwhite and/or Hispanic (n = 6), 85 (25); p = 0.85

Continuous variables: Pearson correlation (rho) with % of questionnaires completed; p-value
 Age (n = 26): rho = −0.28, p = 0.17
 Baseline score on modified ACM[a] (n = 26): rho = 0.23, p = 0.26
 Average score on modified ACM[a] (n = 26): rho = 0.17, p = 0.40
 Average transition score[b] (n = 25): rho = 0.16, p = 0.45
 % of responses indicating worsening or severe symptoms[c] (n = 26): rho = 0.20, p = 0.32

a The modified Asthma Control Measure (ACM) is used to assess recent asthma symptoms and severity.


b Questionnaire scores were categorized into three levels of severity (well = 0–2, not well = 3–7, very poorly = 8–19), and changes in severity category were measured for consecutive weeks (same category = 0, change to an adjacent category = 1, and change to a nonadjacent category = 2). The average transition score was computed for each patient as a measure of the volatility of their condition, where higher average transition scores indicate greater volatility.


c Patients with severe symptoms either automatically received a call from a clinician or the patient was given the option of requesting a call.




Qualitative Results: Analysis of Interview Data

We identified 6 salient themes based on interviews with 21 patients, 3 physicians, 1 physician's assistant, and the nurse. Overall, patients' comments about the intervention reflected positive experiences and continued interest in using the app after study completion. Clinicians found the intervention generally nonintrusive and/or helpful in informing their discussions with patients. We describe the themes for each intervention role (patient and clinician) as well as feedback on specific intervention components.

Patient Themes

The most salient theme that emerged from patients was the benefit of improved awareness of asthma symptoms: “Like it's kind of hard when you see your doctor like every couple of months and you don't really realize what's going on week to week… it was good just to kind of keep you aware of like what you should be thinking about telling your doctor, like whether you're getting better or worse.” Several patients believed the value of the app increased over time, as the graph included more of their asthma history. Patients reported other benefits, most notably feeling more connected to their provider, which they attributed to completing the weekly questionnaires (“I was connected to the clinic and it felt good that I knew that they know that my asthma was bad and that I didn't have to go to the emergency room.”), and the app's simplicity (“I liked the simplicity of it. It was very easy to use. It was quick.”). Based on a review of patients' responses in the context of their reported scores through the app, it appears that patients with more active asthma were interested in spending more time in the app and requested more features (see below), while patients with more stable asthma found the weekly questionnaires less valuable.



Clinician Themes

The clinicians found the enrollment process to be easy and the intervention low burden and not disruptive to their workflows overall. One suggested the intervention reduced their workload because they received updates from the nurse instead of making multiple calls to reach patients: “And probably reduced my work to the extent that it engaged our nurse… as opposed to my making multiple phone calls trying to make contact, trying to discuss issues with the patient.” Clinicians mostly did not utilize the PRO data because they did not notice it in the EHR, but when they did, they found it helpful in discussions with patients because patients did not always remember when they had had asthma exacerbations: “… sometimes you'll have somebody who is there in front of you and like they just don't remember. And you're like, ‘Really? I got a message about your, you know, asthma last month. It didn’t sound like it was that great'.” Clinicians perceived that the intervention facilitated triage of patients and likely identified those in need of clinical assistance earlier: “… what I think is there were a certain number of people who I think we found helpful because they became more cognizant of their symptoms, … and that probably their asthma got triaged differently because of the app. Either they wouldn't have called at all, or no one would have realized how severe something was, or that it was very reassuring or whatever… So I thought for triaging purposes, in general, that helped.”



Feedback on Specific Features

Patients and clinicians were generally supportive of the weekly questionnaire functionality, including the ability to request a call via the app, which many patients found particularly valuable. Interestingly, most patients preferred to retain control over their requests for communication: they believed that the mandatory callback feature would be appropriate for other patients but not for themselves. Patients were strongly supportive of the 48-hour deadline to complete the weekly questionnaire. Clinicians suggested the PRO data should be better integrated into their in-person visit workflows but were unsure how best to do that. Both patients and clinicians suggested adding functionality to track peak flows, triggers, and diaries, and some patients said that such functionality would be necessary for them to remain motivated to continue using the app.



Discussion

We found that an mHealth intervention for asthma symptom monitoring, consisting of a simple app for patients and a practice model for clinicians, is feasible when integrated into routine clinical care. From our quantitative analysis, we found that questionnaire completion rates were high and sustained throughout the study period. From our qualitative analysis, we identified key themes about the intervention from the patient perspective (app simplicity, increased awareness of asthma, and greater connectedness to clinicians) and the clinician perspective (facilitation of patient triage, better-informed conversations during visits, and minimal workflow burden). Incorporating PRO data into the routine clinical workflow for physician review from the EHR before or during a patient visit proved to be a major challenge. Patients and clinicians identified many opportunities for enhancements, including enabling data entry for peak flow measurements and short notes about symptoms.

The high and sustained level of participation by patients over the course of the study is likely due to our previous work in which we actively engaged patients and clinicians in a participatory design and development process. Specifically, our findings were largely consistent with the qualitative data from patients and clinicians collected during our design and development process, in which we systematically incorporated feedback from all stakeholders, with a focus on the core intervention components.[31] This design and development method likely led to the app's simplicity and perceived utility (e.g., increased awareness of asthma and faster access to care). The larger number of female participants can be explained by the greater adult prevalence of asthma among females.[44] The higher completion rates for more educated participants are consistent with findings from other studies that found education to have a positive effect on adherence.[45] This finding suggests the importance of engaging less educated patients in the design of interventions so that such interventions do not contribute to disparities. We also found greater weekly completion rates among female participants, which is consistent with some studies of adherence but not others.[45]

We encountered several unanticipated findings. First, although few patients reported that they discussed their PRO results with their clinician during medical visits, most continued to use the app until the end of the study. The initial invitation from physicians may be sufficient to keep most patients engaged, and ongoing physician encouragement may not be as critical as our user testing indicated. Second, some patients found that the intervention improved awareness of their asthma even though they did not know about the graph capability in the app. The simple act of asking patients about their asthma symptoms and control may have some benefits in terms of symptom awareness in and of itself. Third, despite expressed support by many patients during design sessions for mandatory callbacks when symptoms were severe, we found a lack of support for keeping that feature during the feasibility test because it created unnecessary callbacks.

Whereas previously evaluated digital asthma interventions involved multiple components that were systematically designed and complex,[21] our intervention focused on a few core components that patients and clinicians indicated were of the highest priority during our design sessions. Consistent with other recent findings based on survey data, these core components help patients monitor symptoms between medical visits and decide when to seek care.[46] Focusing on a small number of intervention components and incrementally adding new ones is likely an effective development strategy for producing clearer attribution of study results to intervention components.[21] [47] Although intervention developers may be tempted to build and test more complex interventions that incorporate multiple components and implement multiple recommended guidelines, our results suggest that when integrating mHealth-based interventions into clinical care, even a relatively simple intervention can reveal important complexities that are relevant to successful implementation and adoption. Between-visit symptom monitoring using emails or SMS has been assessed in asthma[48] and in other conditions such as cancer,[12] diabetes,[49] and depression,[50] with some data showing clinical benefits. However, best practices for designing, developing, and implementing such tools are still rudimentary[51] [52] [53]; our work provides a demonstration of a systematic approach for one condition.



Limitations

The patients we included were all adults recruited primarily from three subspecialty physicians within a single institution in one region and represent a modest, nonrandom sample. Thus, they may not be representative of all patients who experience difficulty controlling their asthma, or of those being treated in primary care settings. Two patients had been involved in the design process the previous year. Patients who responded to our invitation may be more likely to use the app than those who did not. Patients without smartphones were excluded; however, it may be possible to expand the intervention so that caregivers who do have smartphones can assist patients who do not. The physicians and nurse may not be representative of most clinicians in terms of their relationships with patients; patients who have weaker relationships with their clinicians may be less likely to stay engaged. Also, because most of the clinicians were involved in the design of the intervention and application, they may have been more inclined to view it favorably; however, their involvement in the design was mostly limited to one design session. The adaptation of the ACM from monthly to weekly administration may have introduced some unexpected changes to the thresholds. We did not assess health literacy, and it is possible that some aspects of the app, such as the graph, may not have been useful to patients with lower health literacy. These limitations notwithstanding, as a feasibility study, the key quantitative and qualitative results are informative for further developing this intervention.



Conclusion—Implications and Next Steps

Implementation of a simple, clinically integrated mHealth app and practice model was feasible and resulted in a high and sustained rate of weekly questionnaire completion with minimal burden on clinical staff. This intervention provides preliminary evidence for how an mHealth intervention could support clinicians in improving adherence to guidelines that recommend serial symptom monitoring between visits as part of routine care, with the goal of preventing exacerbations and utilization of urgent and emergency care. This work suggests that focusing on a small number of intervention components was a successful design strategy. Furthermore, feasibility testing allowed us to identify and prioritize additional enhancements. The next steps will be to adapt and scale this intervention to the primary care setting, where most asthma patients receive care, and to further investigate how to improve patient engagement, including among less educated patients.



Clinical Relevance Statement

Guidelines recommend that clinicians monitor patients' asthma symptoms, but such monitoring does not occur routinely between visits. In this 6-month study, we demonstrate that between-visit symptom monitoring is feasible using a patient-facing smartphone application and a provider practice model. Patients completed 84% of weekly questionnaires, and the burden on providers was minimal.



Multiple Choice Questions

  1. When patient-reported symptoms between visits indicate that a patient has severe symptoms and likely needs to speak with a provider, which mechanism do patients prefer for establishing contact with their provider?

    • Mandatory callback by a nurse (i.e., the patient's symptoms automatically trigger a nurse phone call).

    • Optional callback by a nurse (i.e., patients have the option to either request or decline a call-back).

    • Prompt to call their provider.

    Correct Answer: The correct answer is option b. Patients in this study overwhelmingly preferred to have the option to request a callback instead of receiving a mandatory callback. Patients found it easier to request a call compared with using the usual means of calling their provider.

  2. What is the best approach for designing interventions that implement clinical guidelines?

    • Include as many clinical guidelines as possible up front to maximize the chance that the intervention will have the greatest benefit.

    • Include all guidelines that your collaborators are most familiar with.

    • Begin with one guideline and incrementally incorporate others with rounds of user testing and feasibility testing.

    Correct Answer: The correct answer is option c. Although implementing all available guidelines is the ultimate goal, doing so all at once creates a more complex design problem, risks wasting time adding features that are unhelpful or even bothersome to the users, and poses challenges attributing outcomes to specific features.



Conflict of Interest

D.W.B. consults for EarlySense, which makes patient safety monitoring systems. He receives cash compensation from CDI (Negev), Ltd, which is a not-for-profit incubator for health IT startups. He receives equity from ValeraHealth which makes software to help patients with chronic diseases. He receives equity from Clew which makes software to support clinical decision-making in intensive care. He receives equity from MDClone which takes clinical data and produces deidentified versions of it. His financial interests have been reviewed by Brigham and Women's Hospital and Partners HealthCare in accordance with their institutional policies.

Acknowledgments

The authors thank Juliette Randazza and Sabrina Spencer for coordinating patient recruitment.

Protection of Human and Animal Subjects

This study was approved by the RAND Corporation and Partners HealthCare Institutional Review Boards.


Supplementary Material

  • References

  • 1 Han M, Lee E. Effectiveness of mobile health application use to improve health behavior changes: a systematic review of randomized controlled trials. Healthc Inform Res 2018; 24 (03) 207-226
  • 2 Singh K, Drouin K, Newmark LP. , et al. Many mobile health apps target high-need, high-cost populations, but gaps remain. Health Aff (Millwood) 2016; 35 (12) 2310-2318
  • 3 Smith A. Record shares of Americans now own smartphones, have home broadband. 2017 . Available at: http://www.pewresearch.org/fact-tank/2017/01/12/evolution-of-technology/# . Accessed January 12, 2017
  • 4 Pew Research Center. Mobile fact sheet. Available at: http://www.pewinternet.org/fact-sheet/mobile/ . Published 2018. Accessed July 12, 2019
  • 5 Rhoades H, Wenzel S, Rice E, Winetrobe H, Henwood B. No digital divide? Technology use among homeless adults. J Soc Distress Homeless 2017; 26 (01) 73-77
  • 6 Tinschert P, Jakob R, Barata F, Kramer JN, Kowatsch T. The potential of mobile apps for improving asthma self-management: a review of publicly available and well-adopted asthma apps. JMIR Mhealth Uhealth 2017; 5 (08) e113
  • 7 Hui CY, Walton R, McKinstry B, Pinnock H. Time to change the paradigm? A mixed method study of the preferred and potential features of an asthma self-management app. Health Informatics J 2019 . Doi: 10.1177/1460458219853381
  • 8 Ramsey RR, Caromody JK, Voorhees SE. , et al. A systematic evaluation of asthma management apps examining behavior change techniques. J Allergy Clin Immunol Pract 2019 . Doi: 10.1016/j.jaip.2019.03.041
  • 9 Hui CY, McKinstry B, Walton R, Pinnock H. A mixed method observational study of strategies to promote adoption and usage of an application to support asthma self-management. J Innov Health Inform 2019; 25 (04) 243-253
  • 10 Laing BY, Mangione CM, Tseng CH. , et al. Effectiveness of a smartphone application for weight loss compared with usual care in overweight primary care patients: a randomized, controlled trial. Ann Intern Med 2014; 161 (10, Suppl): S5-S12
  • 11 Black N. Patient reported outcome measures could help transform healthcare. BMJ 2013; 346: f167
  • 12 Basch E, Deal AM, Kris MG. , et al. Symptom monitoring with patient-reported outcomes during routine cancer treatment: a randomized controlled trial. J Clin Oncol 2016; 34 (06) 557-565
  • 13 Howell D, Molloy S, Wilkinson K. , et al. Patient-reported outcomes in routine cancer clinical practice: a scoping review of use, impact on health outcomes, and implementation factors. Ann Oncol 2015; 26 (09) 1846-1858
  • 14 Zhang R, Burgess ER, Reddy MC. , et al. Provider perspectives on the integration of patient-reported outcomes in an electronic health record. JAMIA Open 2019; 2 (01) 73-80
  • 15 Centers for Disease Control and Prevention. Asthma in the US. CDC vital signs, May website. Available at: http://www.cdc.gov/vitalsigns/asthma . Published 2011. Accessed August 30, 2019
  • 16 World Health Organization. Asthma fact sheet No. 307. Updated November 2013. Available at: http://www.who.int/topics/asthma/en . Published 2008. Accessed August 30, 2019
  • 17 Wang T, Srebotnjak T, Brownell J, Hsia RY. Emergency department charges for asthma-related outpatient visits by insurance status. J Health Care Poor Underserved 2014; 25 (01) 396-405
  • 18 Okelo SO, Butz AM, Sharma R. , et al. Interventions to modify health care provider adherence to asthma guidelines: a systematic review. Pediatrics 2013; 132 (03) 517-534
  • 19 Wisnivesky JP, Lorenzo J, Lyn-Cook R. , et al. Barriers to adherence to asthma management guidelines among inner-city primary care providers. Ann Allergy Asthma Immunol 2008; 101 (03) 264-270
  • 20 Marcano Belisario JS, Huckvale K, Greenfield G, Car J, Gunn LH. Smartphone and tablet self management apps for asthma. Cochrane Database Syst Rev 2013; (11) CD010013
  • 21 Hui CY, Walton R, McKinstry B, Jackson T, Parker R, Pinnock H. The use of mobile applications to support self-management for people with asthma: a systematic review of controlled studies to identify features associated with clinical effectiveness and adherence. J Am Med Inform Assoc 2017; 24 (03) 619-632
  • 22 Wiecha JM, Adams WG, Rybin D, Rizzodepaoli M, Keller J, Clay JM. Evaluation of a web-based asthma self-management system: a randomised controlled pilot trial. BMC Pulm Med 2015; 15 (01) 17
  • 23 Vasbinder EC, Janssens HM, Rutten-van Mölken MPMH. , et al; e-MATIC Study Group. e-Monitoring of asthma therapy to improve compliance in children using a real-time medication monitoring system (RTMM): the e-MATIC study protocol. BMC Med Inform Decis Mak 2013; 13 (01) 38
  • 24 Rasmussen LM, Phanareth K, Nolte H, Backer V. Internet-based monitoring of asthma: a long-term, randomized clinical study of 300 asthmatic subjects. J Allergy Clin Immunol 2005; 115 (06) 1137-1142
  • 25 Wu AC, Carpenter JF, Himes BE. Mobile health applications for asthma. J Allergy Clin Immunol Pract 2015; 3 (03) 446-8.e1 , 16
  • 26 Kew KM, Cates CJ. Home telemonitoring and remote feedback between clinic visits for asthma. Cochrane Database Syst Rev 2016; (08) CD011714
  • 27 Stukus DR, Farooqui N, Strothman K. , et al. Real-world evaluation of a mobile health application in children with asthma. Ann Allergy Asthma Immunol 2018; 120 (04) 395-400
  • 28 Farzandipour M, Nabovati E, Sharif R, Arani MH, Anvari S. Patient self-management of asthma using mobile health applications: a systematic review of the functionalities and effects. Appl Clin Inform 2017; 8 (04) 1068-1081
  • 29 Merchant R, Inamdar R, Henderson K. , et al. Digital health intervention for asthma: patient-reported value and usability. JMIR Mhealth Uhealth 2018; 6 (06) e133
  • 30 van Gaalen JL, van Bodegom-Vos L, Bakker MJ, Snoeck-Stroband JB, Sont JK. Internet-based self-management support for adults with asthma: a qualitative study among patients, general practitioners and practice nurses on barriers to implementation. BMJ Open 2016; 6 (08) e010809
  • 31 Rudin RS, Fanta CH, Predmore Z. , et al. Core components for a clinically integrated mhealth app for asthma symptom monitoring. Appl Clin Inform 2017; 8 (04) 1031-1043
  • 32 Greenhalgh T, Wherton J, Papoutsi C. , et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res 2017; 19 (11) e367
  • 33 Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract 2004; 10 (02) 307-312
  • 34 Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health 2015; 42 (05) 533-544
  • 35 Hooper R. Justifying Sample Size for a Feasibility Study. London, England: National Institute for Health Research; 2019
  • 36 Lang DM. New asthma guidelines emphasize control, regular monitoring. Cleve Clin J Med 2008; 75 (09) 641-653
  • 37 van der Meer V, van Stel HF, Bakker MJ. , et al; SMASHING (Self-Management of Asthma Supported by Hospitals, ICT, Nurses and General practitioners) Study Group. Weekly self-monitoring and treatment adjustment benefit patients with partly controlled and uncontrolled asthma: an analysis of the SMASHING study. Respir Res 2010; 11 (01) 74
  • 38 Lara M, Edelen MO, Eberhart NK, Stucky BD, Sherbourne CD. Development and validation of the RAND asthma control measure. Eur Respir J 2014; 44 (05) 1243-1252
  • 39 Schatz M, Sorkness CA, Li JT. , et al. Asthma control test: reliability, validity, and responsiveness in patients not previously followed by asthma specialists. J Allergy Clin Immunol 2006; 117 (03) 549-556
  • 40 Creswell JW, Clark VLP. Designing and Conducting Mixed Methods Research. 2nd ed. Los Angeles: SAGE Publications, Inc.; 2010
  • 41 Johnson KM, FitzGerald JM, Tavakoli H, Chen W, Sadatsafavi M. Stability of asthma symptom control in a longitudinal study of mild-moderate asthmatics. J Allergy Clin Immunol Pract 2017; 5 (06) 1663.e5-1670.e5
  • 42 Bernard HR, Ryan GW. Analyzing Qualitative Data: Systematic Approaches. Thousand Oaks, CA, US: Sage Publications, Inc; 2010
  • 43 Saunders B, Sim J, Kingstone T. , et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant 2018; 52 (04) 1893-1907
  • 44 Pignataro FS, Bonini M, Forgione A, Melandri S, Usmani OS. Asthma and gender: The female lung. Pharmacol Res 2017; 119: 384-390
  • 45 Mathes T, Jaschinski T, Pieper D. Adherence influencing factors - a systematic review of systematic reviews. Arch Public Health 2014; 72 (01) 37
  • 46 Simpson AJ, Honkoop PJ, Kennington E. , et al. Perspectives of patients and healthcare professionals on mHealth for asthma self-management. Eur Respir J 2017; 49 (05) 1601966
  • 47 Morita PP, Yeung MS, Ferrone M. , et al. A Patient-centered mobile health system that supports asthma self-management (breathe): design, development, and utilization. JMIR Mhealth Uhealth 2019; 7 (01) e10956
  • 48 Pernell BM, DeBaun MR, Becker K, Rodeghier M, Bryant V, Cronin RM. Improving medication adherence with two-way short message service reminders in sickle cell disease and asthma. a feasibility randomized controlled trial. Appl Clin Inform 2017; 8 (02) 541-559
  • 49 Han Y, Faulkner MS, Fritz H. , et al. A pilot randomized trial of text-messaging for symptom awareness and diabetes knowledge in adolescents with Type 1 diabetes. J Pediatr Nurs 2015; 30 (06) 850-861
  • 50 Agyapong VIO, Juhás M, Ohinmaa A. , et al. Randomized controlled pilot trial of supportive text messages for patients with depression. BMC Psychiatry 2017; 17 (01) 286-286
  • 51 Groat D, Soni H, Grando MA, Thompson B, Kaufman D, Cook CB. Design and testing of a smartphone application for real-time self-tracking diabetes self-management behaviors. Appl Clin Inform 2018; 9 (02) 440-449
  • 52 Datillo JR, Gittings DJ, Sloan M, Hardaker WM, Deasey MJ, Sheth NP. “Is there an app for that?” orthopaedic patient preferences for a smartphone application. Appl Clin Inform 2017; 8 (03) 832-844
  • 53 Carrera A, Pifarré M, Vilaplana J. , et al. BPcontrol. A mobile app to monitor hypertensive patients. Appl Clin Inform 2016; 7 (04) 1120-1134
