Published in Vol 8, No 4 (2020): April

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/16814.

Methods and Measures Used to Evaluate Patient-Operated Mobile Health Interventions: Scoping Literature Review

Review

1Norwegian Centre for E-health Research, University Hospital of North Norway, Tromsø, Norway

2Department of Clinical Medicine, Faculty of Health Science, University of Tromsø The Arctic University of Norway, Tromsø, Norway

3Telemedicine and eHealth Research Group, Department of Clinical Medicine, University of Tromsø The Arctic University of Norway, Tromsø, Norway

4Norwegian Institute of Public Health, Oslo, Norway

5Tromsø Endocrine Research Group, Department of Clinical Medicine, University of Tromsø The Arctic University of Norway, Tromsø, Norway

6Division of Internal Medicine, University Hospital of North Norway, Tromsø, Norway

7Department of Health Science and Technology, Aalborg University, Aalborg, Denmark

*these authors contributed equally

Corresponding Author:

Meghan Bradway, MBA

Norwegian Centre for E-health Research

University Hospital of North Norway

PO Box 35

Tromsø, 9038

Norway

Phone: 47 91193393

Email: mbradway90@gmail.com


Background: Despite the prevalence of mobile health (mHealth) technologies and observations of their impacts on patients’ health, there is still no consensus on how best to evaluate these tools for patient self-management of chronic conditions. Researchers currently lack guidelines on which qualitative or quantitative factors to measure or how to gather such data reliably.

Objective: This study aimed to document the methods and both qualitative and quantitative measures used to assess mHealth apps and systems intended for use by patients for the self-management of chronic noncommunicable diseases.

Methods: A scoping review was performed, and PubMed, MEDLINE, Google Scholar, and ProQuest Research Library were searched for literature published in English between January 1, 2015, and January 18, 2019. Search terms included combinations of the description of the intention of the intervention (eg, self-efficacy and self-management) and description of the intervention platform (eg, mobile app and sensor). Article selection was based on whether the article described an intervention whose primary user was a patient with a chronic noncommunicable disease and whose tool or system would always be available for self-management. The extracted data included study design, health conditions, participants, intervention type (app or system), methods used, and measured qualitative and quantitative data.

Results: A total of 31 studies met the eligibility criteria. Studies were classified as either those that evaluated mHealth apps (ie, single devices; n=15) or mHealth systems (ie, more than one tool; n=17), and one study evaluated both apps and systems. App interventions mainly targeted mental health conditions (including posttraumatic stress disorder), followed by cardiovascular and heart diseases and diabetes; among the 17 studies that described mHealth systems, most involved patients diagnosed with cardiovascular and heart disease, followed by diabetes, respiratory disease, mental health conditions, cancer, and multiple illnesses. The most common evaluation method was collection of usage logs (n=21), followed by standardized questionnaires (n=18) and ad-hoc questionnaires (n=13). The most common measure was app interaction (n=19), followed by usability/feasibility (n=17) and patient-reported health data via the app (n=15).

Conclusions: This review demonstrates that health intervention studies are taking advantage of the additional resources that mHealth technologies provide. As mHealth technologies become more prevalent, the call for evidence includes the impacts on patients’ self-efficacy and engagement, in addition to traditional measures. However, considering the unstructured data forms, diverse use, and various platforms of mHealth, it can be challenging to select the right methods and measures to evaluate mHealth technologies. The inclusion of app usage logs, patient-involved methods, and other approaches to determine the impact of mHealth is an important step forward in health intervention research. We hope that this overview will become a catalogue of the possible ways in which mHealth has been and can be integrated into research practice.

JMIR Mhealth Uhealth 2020;8(4):e16814

doi:10.2196/16814

Keywords



Introduction

Need for Mobile Health Evaluation

Health research has yet to agree upon a framework for evaluating mobile health (mHealth) interventions. This is especially true for tools, such as apps and wearables, that are intended primarily to aid patients in health self-management. Traditionally, the evaluation of mobile medical devices has been based on clinical evidence, and it can take years to bring these devices to the market. The continuous glucose monitor first came onto the market in 1999, but it was not until 2006 that the next version was available [1]. Similarly, the pulse oximeter struggled for decades to become a standard mobile tool for measuring blood oxygenation [2]. Because increasingly easy-to-use, patient-operated mHealth technologies are available on the market, patients are no longer willing to wait for a lengthy evaluation process. Instead, patients often use apps without assurance of quality or guidance from their health care providers [3].

Always-Available Self-Management Technologies

Individuals are more empowered to take greater responsibility for their health, and currently, they enthusiastically seek out mHealth apps and other devices for self-management. For chronic conditions in particular, health challenges occur continuously, not just when it is convenient or at a doctor’s office. Technologies for self-management must allow individuals to register and review the measurements that they input into the app or system at any time. Connectivity to devices, such as medical or commercial sensors and wearables, adds to the utility of an app. A report by Research2Guidance [4], an organization that provides market research on digital health, emphasized the central role of patient-operated mHealth apps in the “connectivity landscape” of electronic health technologies [5]. However, their diverse functionalities and intended uses pose great challenges to researchers.

Challenges of mHealth Evaluation: Single Apps Versus Multiplatform Interventions

The amount of assessment and testing that is necessary for health technology is directly related to its potential risks and benefits [6,7]. For example, dosing medication based on patient-gathered health data carries higher health risks than a patient with type 2 diabetes using an activity tracker for motivation in weight management. Although multiplatform (ie, system) interventions serve to increase the benefits (eg, automatic and less burdensome operations), they also increase the risks related to data safety, integrity, and reliability [8,9]. Researchers must therefore adapt their approaches, methods, and measures to whether a patient self-management intervention involves a single mHealth app or a multiplatform system.

Evaluation Framework: Coverage

There are two main categories of mobile medical or mHealth devices, distinguished by the degree of oversight health authorities exercise: those that are “actively regulated” and those that fall under “enforcement discretion.” These categories are described in the 2015 Guidance for Industry and Food and Drug Administration Staff [10], are echoed in the updated 2019 Guidance [11], and are included in the terms of the European Economic Area Certification (CE) Mark [12]. Devices that are actively regulated are required to undergo an evaluation and meet security and effectiveness standards for use in health care. On the other hand, many patient-operated technologies fall under “enforcement discretion” because they pose less risk to patient safety and health. For individuals aiming to assess the usefulness or safety of these technologies, there are no evaluation frameworks or guidelines to follow. The year 2015 marked a relevant change in the mHealth arena, the effects of which we are still exploring today (connectivity between different device types, development on different platforms, and a marked focus on mHealth integration into clinical practice) [13].

Although there have been many strategies [14-17] for the evaluation of this subset of mHealth (eg, National Institute for Health and Care Excellence [18]), there is no agreement about which qualitative or quantitative measures should be addressed or how they should be evaluated [19]. Evaluation frameworks, such as the World Health Organization (WHO) mHealth evidence reporting and assessment (mERA) checklist [20], suggest that traditional health research measures and methods are not sufficient. For assessing the comprehensive impacts of such patient-operated mHealth approaches, research needs to look into additional factors. This can be achieved by producing evidence that is relevant for both patients and clinicians.

Additional Factors for mHealth Evaluation

Although clinical evidence is essential for the evaluation of any health aid, the two major concepts of time and human behavior must also be addressed in mHealth evaluation. As “always available” technologies are being used continuously and uniquely by patients, it is uncertain how much time is needed to produce an effect and what changes in self-management behavior will occur. Traditionally, medical devices rely on established biological knowledge, have fewer alternatives in the market, and do not offer frequent updates. However, patient-operated mHealth approaches require the consideration of patients’ motivation, health beliefs, and resources for self-management. They must also compete with hundreds of other mHealth apps and devices that are continuously developed and updated. In recent years, clinical research has attempted to keep pace with mHealth by employing methods that aim to expedite the research process and produce more tailored knowledge for the field of mHealth [21].

Stakeholders associated with chronic health and care (researchers, individuals, health care providers, and health care authorities) have been calling for evidence related to the personal use of mHealth technologies for many years [22-24]. Regardless of the beneficial or harmful outcomes, we need to know their potential. Without such evidence, people in the health care field will not be able to effectively support and guide individuals in the use of these technologies for health self-management. This evidence must be obtained with appropriate questions and methods.

Recent scoping reviews of mHealth technologies for chronic conditions focused on evidence as it relates to a specific age group [25], the development process [26], or clinical outcomes [27] and not on how the research was performed or which resources were used in the evaluation. The purpose of this scoping review was to identify which methods were used and which qualitative and quantitative data were measured to assess patient-operated mHealth devices for the self-management of chronic noncommunicable diseases (NCDs). As evidence for health authorities and health care providers, quantitative clinical outcomes have historically been considered the primary target for evaluation [28]; however, given the growing trend of mHealth, we included qualitative measures of participants’ use of and experiences with the technology.

Research Questions

The research questions were as follows: (1) What methods are used to evaluate patient-operated mHealth apps and systems for self-management of chronic NCDs? (2) Which qualitative and quantitative measures are used to evaluate the impact of patient-operated mHealth apps and systems for self-management of chronic NCDs?


Methods

Scoping Review Objective

We performed a scoping review to document how researchers have evaluated mHealth interventions for self-management of chronic NCDs. Munn et al [29] stated that scoping reviews are favored over other review types in cases in which researchers are using an evolving set of methods owing to the novelty of the field or where the purpose of the review is to inform future questions about the field. We intended to provide an overview of what methods researchers use and which qualitative and quantitative measures were adopted to evaluate mHealth self-management interventions. This review reports information according to the Preferred Reporting Items for Systematic review and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist (Multimedia Appendix 1).

Search Strategy and Databases

The scope of the search and definitions of mHealth were discussed among the coauthors (MB, EG, EÅ, and MJ). The databases searched for scientific literature were PubMed, MEDLINE, Google Scholar, and ProQuest Research Library. PubMed and MEDLINE were both included because PubMed includes citations that are not yet indexed in MEDLINE [30]. We searched for articles published in English between January 1, 2015, and January 18, 2019, which were related to the evaluation of patient-operated mHealth interventions for self-management of chronic NCDs. The search string included key terms describing the intervention’s intended use (ie, self-efficacy, self-assessment, self-management, or self-monitoring) and the intervention’s platform (ie, mobile phones, wearables, sensors, or apps). The full search string was used for titles and abstracts, and the format was adapted to the database being searched (Multimedia Appendix 2).
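As an illustration only, the sketch below assembles a PubMed-style title/abstract query from the example terms named above; the term lists and field syntax here are simplified assumptions, and the full, database-adapted strings are given in Multimedia Appendix 2.

```python
# Illustrative sketch of the search-string structure described above; the
# actual per-database strings are in Multimedia Appendix 2. Term lists are
# the examples named in the text, not the complete set used in the review.
use_terms = ["self-efficacy", "self-assessment", "self-management", "self-monitoring"]
platform_terms = ["mobile phone", "wearable", "sensor", "app"]

def or_block(terms):
    # OR-join each term as a quoted title/abstract field (PubMed syntax).
    return " OR ".join(f'"{t}"[Title/Abstract]' for t in terms)

query = f"({or_block(use_terms)}) AND ({or_block(platform_terms)})"
print(query)
```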

Medical Subject Headings (MeSH) terms were not considered because our search included articles published recently, which may contain terminology that has not yet been indexed within the MeSH database. The identified abstracts and titles were collected in EndNote [31] and then uploaded into Rayyan [32], an online “library systematic review service” that allows researchers to collaborate on the organization, inclusion, and exclusion of articles for literature review.

Eligibility Criteria

We aimed to include research efforts that may have addressed new guidelines for mobile medical devices. Within our broad search criteria for low-risk mHealth apps and systems, articles were eligible for inclusion if they described low-risk technologies consistent with the FDA and CE marking descriptions of mobile medical devices under “enforcement discretion” [10-12]. Multimedia Appendix 3 describes the specificities of this subcategory.

A preliminary search was performed, and a random selection of 10 articles was reviewed for inclusion or exclusion by two authors (MB and EG). Refinements were made to the review criteria.

For this review, we included studies that evaluated interventions involving (1) mHealth technologies for chronic NCDs, including the primary NCDs listed by the WHO [33] (ie, diabetes, cancer, cardiovascular diseases, chronic respiratory diseases, and chronic mental health conditions); (2) mHealth technologies for self-management, defined as the tasks that a person must perform in order to manage the symptoms, treatment, physical and psychosocial consequences, and lifestyle changes inherent in living with a chronic condition, where efficacious self-management encompasses the ability to monitor one’s condition and to effect the cognitive, behavioral, and emotional responses necessary to maintain a satisfactory quality of life [34]; and (3) mHealth technologies that allow the patient to choose which measures to register and review.

The details of the inclusion and exclusion criteria are described in Multimedia Appendix 4, and they were used during the main review search.

Data Extraction and Synthesis

After removing duplicate articles, reviews, and protocol articles without evaluation results, two authors (MB and PJ) independently screened the titles and abstracts for eligibility according to the inclusion and exclusion criteria. In case of disagreement regarding eligibility, another author (EG) was called to join the discussion until an agreement was reached. Author MB reviewed the full-text articles and performed data extraction.

The identified studies were classified as either those that evaluated mHealth apps or mHealth systems. Interventions that included a single app were grouped as mHealth apps, whereas those that included services or devices connected to a central app were grouped as mHealth systems. In this way, we could more clearly assess the different approaches taken by researchers when addressing the various impacts of these two mHealth intervention types.

Abilities of Studies to Produce Results

For both groups, one author (MB) assessed whether a study was able to produce the evidence that it aimed to obtain, using the selected methods. This was performed by comparing the objectives as stated by the authors of the identified articles to the methods and reported results. The studies were judged according to their ability to produce the intended information, and the findings were reported as “yes,” “yes and more than expected,” “no,” and “cannot tell.” The results of these comparisons are detailed in Multimedia Appendix 5.


Results

Overview

Among 3912 records identified by the search criteria, we reviewed 55 full-text articles and included 31 studies for data extraction and synthesis. Figure 1 illustrates the process of identifying the relevant articles for inclusion in data extraction.

Figure 1. Flow diagram illustrating the selection of studies for inclusion in data synthesis. NCD: noncommunicable disease.

Summary of Studies: Apps Versus Systems

Among the 31 studies chosen for data extraction, 15 were categorized as those that evaluated mHealth apps and 17 as those that evaluated mHealth systems. One study evaluated both apps and systems [35] and was therefore included in both categories. General information about the selected studies that evaluated mHealth apps is summarized in Table 1 [35-49], and information about those that evaluated mHealth systems is summarized in Table 2 [35,50-65].

Table 1. Information about the studies that evaluated mHealth apps.
Reference | App name | Year | Country | Study design | Duration | Health condition | Patient participants | Health care provider and caregiver participants | Intended secondary users
[36] | Diet and Activity Tracker (iDAT) | 2015 | Singapore | Prospective study | 8 weeks | Type 2 diabetes | Patients (n=84) | N/A^a | N/A
[37] | Diabetes Notepad | 2015 | Korea | Cross-sectional study | Single evaluation | Diabetes | Patients (n=90) | N/A | N/A
[38] | Personal Life-chart app | 2015 | Germany | Prospective study | 72 weeks | Bipolar disorder | Patients (n=54) | N/A | N/A
[39] | HeartKeeper | 2015 | USA | Cross-sectional study | Single evaluation | Heart diseases | Patients (n=24) and researchers | N/A | N/A
[40] | HeartKeeper | 2016 | Spain | Retrospective study | 36 weeks | Heart diseases | Patients (n=32) | N/A | N/A
[41] | PTSD Coach | 2015 | USA | Retrospective study | Duration of availability of the app on app stores | Post-traumatic stress disorder | Current users (n=156) | N/A | N/A
[42] | PTSD Coach | 2015 | USA | RCT^b | 16 weeks | Post-traumatic stress disorder | Patients (n=10) | Health care providers (n=3) | Health care providers
[43] | PTSD Coach | 2016 | USA | RCT | 4 weeks | Post-traumatic stress disorder | Patients (n=49) | N/A | N/A
[44] | PTSD Coach | 2017 | USA | RCT | 24 weeks | Post-traumatic stress disorder | Patients (n=120) | N/A | N/A
[45] | Hypertension management app (HMA) | 2016 | Korea | -^c | Single event evaluation | Hypertension | Patients (n=38) | Nurses (n=3) and experts (n=5) | N/A
[35]^d | Multiple commercial apps for heart failure | 2016 | USA | Cross-sectional study | Single evaluation | Heart failure | Apps (n=34) | N/A | Family, friends, and health care providers (not all apps)
[46] | Multiple commercial apps (n=11) | 2016 | USA | Cross-sectional study | Single evaluation | Multiple | Patients (n=20) | Caregivers (n=9) | N/A
[47] | I-IMR intervention | 2017 | USA | Cross-sectional study | Single evaluation | Serious mental health conditions^e | Patients (n=10) | N/A | N/A
[48] | Serenita | 2017 | Israel | Prospective study | 16 weeks | Type 2 diabetes | Patients (n=7) | Health care providers | N/A
[49] | Sinasprite database | 2018 | USA | Retrospective study | 6 weeks | Depression and anxiety | Patients (n=34) | N/A | N/A

^a N/A: not applicable.

^b RCT: randomized controlled trial.

^c Not available.

^d Study evaluated both apps and systems and therefore will appear in both categories.

^e Combination of cardiovascular disease, obesity, diabetes, high blood pressure, high cholesterol, osteoporosis, gastroesophageal reflux disease, osteoarthritis, chronic obstructive pulmonary disease, congestive heart failure, coronary artery disease, and bipolar disorder, major depressive disorder, schizophrenia, or schizoaffective disorder [47].

Table 2. Information about the studies that evaluated mHealth systems.
Reference | Intervention name | Year | Country | Study design | Duration | Health condition | Participants | Intended secondary users | Others involved in the intervention | Medical device included (Y/N) | Other devices included
[50] | SUPPORT-HF Study | 2015 | UK | Cross-sectional study | 45 weeks | Heart failure | Patients (n=26) | Health care providers | Health care providers and informal care givers | Y | Blood pressure monitor, weight scales, and pulse oximeter
[51] | -^a | 2015 | USA | Cross-sectional study | Single evaluation | Diabetes | Patients (n=87) and health care providers (n=5) | Health care providers | Health care providers | Y | Glucose meter
[52] | Multiple commercial technologies for activity tracking | 2015 | USA | Prospective study | 80-100 days (mean 12.5 weeks) | Serious mental health condition^b | Patients (n=10) | Health care providers and peers (optional) | N/A^c | N | Wearable activity monitoring devices
[53] | Diabetes Diary app | 2015 | Norway | Prospective study | 2 weeks | Type 1 diabetes | Patients (n=6) | N/A | N/A | Y | Smart-watch app and glucose meter
[54] | Diabetes Diary app | 2015 | Norway | RCT^d | 23 weeks | Type 1 diabetes | Patients (n=30) | N/A | N/A | Y | Glucose meter
[55] | Diabetes Diary app | 2016 | Norway | RCT | 48 weeks | Type 2 diabetes | Patients (n=151) | Health care providers | N/A | Y | Glucose meter
[56] | SnuCare | 2016 | Korea | Prospective study | 8 weeks | Asthma | Patients (n=44) | N/A | Research team | Y | Peak flow meter
[57] | HealthyCircles Platform | 2016 | USA | RCT | 24 weeks | Hypertension | Patients (n=52) | Health care providers | Health care providers | Y | Withings blood pressure monitor
[58] | Multiple commercial technologies for activity tracking | 2016 | USA | Prospective study | 24 weeks | Serious mental health condition^b | Patients (n=11) | N/A | N/A | N | Fitbit Zip
[35]^e | Multiple commercial apps for heart failure | 2016 | USA | Cross-sectional study | Single evaluation | Heart failure | Apps (n=34) | Family, friends, and health care providers (not all apps) | N/A | N | Y
[59] | Electronic Patient Reported Outcome tool (ePRO) | 2016 | Canada | Prospective study | 4 weeks | Multiple | Patients (n=8) and health care providers (n=6) | Health care providers | Health care providers | N | N
[60] | STARFISH | 2016 | UK | Prospective study | 6 weeks | Stroke | Patients (n=23) | Peers (automatic) | N/A | N | ActivPAL™ activity monitor
[61] | HeartMapp | 2016 | USA | Cross-sectional study | Single evaluation | Heart failure | Patients (n=25) and health care providers (n=12) | Health care providers | Health care providers | Y | Zephyr Bioharness or Biopatch
[62] | EDGE digital health system | 2017 | UK | RCT | 48 weeks | Chronic obstructive pulmonary disease | Patients (n=110) and research nurses (n=2) | Health care providers (automatic) | Informal care givers | N | N
[63] | iBGStar Diabetes Manager Application | 2017 | Germany | Prospective study | 12 weeks | Diabetes | Patients (n=51) | N/A | N/A | Y | iBGStar blood glucose meter
[64] | MyHeart | 2017 | USA | Prospective study | 24 weeks | Heart failure | Patients (n=8) and nurses | Nurses (automatic) | Nurses | Y | Weight scale, blood pressure monitor, and glucose meter
[65] | -^a | 2018 | UK | Cross-sectional study | 4 weeks | Cancer | Patients (n=23) | Peers and health care providers | N/A | N | N

^a Not available.

^b Schizophrenia spectrum disorder, bipolar disorder, or major depressive disorder [52,58].

^c N/A: not applicable.

^d RCT: randomized controlled trial.

^e Study evaluated both apps and systems and therefore will appear in both categories.

App interventions mainly targeted mental health conditions (n=7), followed by cardiovascular and heart diseases (n=4) and diabetes (n=3), with one study evaluating multiple apps that were used to self-manage multiple health conditions (Table 1).

Patients were included in nearly all studies, which had between 3 and 156 participants (median 36, IQR 15-87). The exception was one study in which only researchers evaluated patient-operated apps according to Google recommendations and quality standards [35,39]. Although the studies tested single apps intended to be used primarily by patients, two studies also explored the impact of patients sharing their collected data with health care providers [35,42].

Seven studies utilized single evaluations, either through a cross-sectional design [35,37,39,45-47] or an analytic service to analyze data available through the app store [41]. The remaining eight studies evaluated the impacts of app use over time, lasting between 4 and 72 weeks, with a mean period of 22.75 weeks (median 16 weeks, IQR 6-36). Of these, three utilized prospective study designs, three were randomized controlled trials (RCTs), and two used a retrospective design.
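For readers who want to trace these summary statistics, the sketch below recomputes them from the eight fixed follow-up durations listed in Table 1 (the app-store analysis [41] has no fixed duration and is omitted). The reported IQR of 6-36 corresponds to a nearest-rank quartile convention; other conventions give slightly different bounds.

```python
from statistics import mean, median

# Follow-up durations (weeks) of the eight app studies in Table 1 with a
# fixed study period; the app-store analysis [41] is omitted.
weeks = sorted([8, 72, 36, 16, 4, 24, 16, 6])  # [4, 6, 8, 16, 16, 24, 36, 72]

print(mean(weeks))          # 22.75, the mean reported above
print(median(weeks))        # 16.0, the reported median
print(weeks[1], weeks[-2])  # 6 36, matching the reported IQR under a
                            # nearest-rank quartile convention
```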

Among the 17 studies that described mHealth systems, most involved patients diagnosed with cardiovascular and heart disease (n=6), followed by diabetes (n=5), respiratory disease (n=2), mental health conditions (n=2), cancer (n=1), and multiple illnesses (n=1; Table 2).

As with mHealth app studies, all system studies, except one [35], involved patients. The 16 studies had between 6 and 151 patients (median 30, IQR 14.5-51.5, maximum 151), with eight studies involving health care providers. In these cases, health care providers either provided input on the suitability of an app for patient use or reviewed patient-gathered data during consultations.

In 12 studies, patients were required to share data (n=6) [50,51,57,60,62,64] or encouraged to share data (n=6) [35,53,55,59,61,65] with their health care providers or peers as part of the study. Data were also collected and transmitted to the main app by medical devices [50,51,53-57,61,63,64] and commercial wearables [35,52,53,58,60], demonstrating the prevalence of connectivity in modern mHealth systems.

Few studies (n=3) used single evaluations. RCTs (n=4) lasted longer (mean 35.75 weeks) than cross-sectional studies (mean 24.5 weeks, n=2) and prospective studies (mean 12.93 weeks, n=7). Overall, system evaluations lasted a mean of 20.32 weeks, which is very close to the figure for app interventions, but with a higher median of 23 weeks.

Methods and Measures

Most studies included a combination of qualitative and quantitative methods of evaluation. Evaluation of usage logs was the most commonly adopted method (21 studies), followed by standardized questionnaires (17 studies; Table 3). Only two studies adopted quality guidelines to evaluate mHealth interventions; the Mobile Application Rating Scale was used to evaluate multiple apps [35], and compliance with Google standards for Android systems, in addition to other approaches, was used to evaluate the HeartKeeper app [39].

Table 3. Categories of methods used to evaluate mHealth interventions.
Methods (adopted approaches) | Studies that evaluated mHealth apps | Studies that evaluated mHealth systems
Evaluation of usage logs | [36,38,40-42,44,48,49] | [50,52,54,56-59,62-64]
Standardized questionnaires | [35-39,41-45,48,49] | [35,55-57,60,64]
Ad-hoc questionnaires | [36,37,40,42-44,47] | [51,53,55-58,61-63]
Interviews | [40,45,46] | [50,52,58,59,65]
Clinical outcomes | [36,48] | [54-56,63,64]
Open feedback (ie, oral or written) | [35,41,43,45] | [35,53,62]
Collection of additional device data (eg, medical device data) | N/A^a | [54,56,57,60,62,64]
Field study and observation | [46,47] | [61,65]
Focus groups | N/A | [59,64]
Observational tests (in a lab setting) | [45,47] | N/A
Quality guidelines | [35,39] | [35]
Medical record entries | [42] | [63]
Attendance (intervention-assigned activities/meetings) | [42,48] | N/A
Download count | [41] | N/A

^a N/A: not applicable.
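Because one study [35] evaluated both an app and a system, it appears in both columns of Table 3, so method counts differ by one depending on whether rows are tallied per column (18 instances of standardized questionnaires, as in the abstract) or per unique study (17, as in the text below). A minimal sketch of the tally for that row:

```python
# Citation numbers from the standardized-questionnaires row of Table 3.
apps = {35, 36, 37, 38, 39, 41, 42, 43, 44, 45, 48, 49}
systems = {35, 55, 56, 57, 60, 64}

print(len(apps) + len(systems))  # 18 instances; study [35] is in both columns
print(len(apps | systems))       # 17 unique studies, the count used in the text
```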

Among the 14 ad-hoc questionnaires used, four were developed according to concepts or questions from standardized questionnaires [47,58,61,62]. Similarly, two studies included interviews, where the interview guides were based on standardized questionnaires [40,45]. Some standardized questionnaires were used in more than one study. Multimedia Appendix 6 lists these questionnaires and illustrates the combination of questionnaires used in each study. Compared with traditional medical device testing, relatively few studies included information gathered from medical record entries (n=2), clinical outcomes (n=9), or observational tests (n=2).

Of note, some studies inferred more information from usage logs than the count and type of app interactions and patient-gathered data. For example, Triantafyllidis et al [50] interpreted information from the evaluation of usage logs regarding the usability of the device and participants’ engagement in the study. The complete set of the types of data that were measured and collected by the mHealth app and system intervention studies is listed in Table 4.

Table 4. Categories of qualitative and quantitative data that were measured to evaluate mHealth interventions.
Types of data measured | Studies that evaluated mHealth apps | Studies that evaluated mHealth systems
Interactions (via app) | [36,37,40-42,44,45,49] | [50,52,53,56-59,62-65]
Usability/feasibility | [35,37,39-42,45,47] | [35,52,53,56,58,59,61,62,65]
Patient-gathered self-management data (via app) | [36-38,41,45,49] | [50,54,55,57,59,62-64]
Efficacy/effectiveness | [35-37,40,42,43,45,48] | [35,50,51,53,56,58,59,64,65]
Physical well-being | [36,40,42,48] | [54-57,60,62-64]
Perceptions, opinions, and suggestions | [35,40,41,45-47] | [35,51-53,58,64,65]
Intervention experiences | [39,41,46,47] | [50,52,58,59,64,65]
Psychological well-being | [38,41,42,44,49] | [55,60,62]
Patient-reported health | [40-44] | [56,63]
Self-efficacy | [36,44,47,49] | [55,57,61]
Engagement/motivation in self-management | [36,41] | [50,52,56,63]
Health care utilization and impact | [42] | [56,59,62-64]
Task performance | [45-47] | [50,61,65]
Study engagement | [35,41,42,48,49] | [35]
Patient-reported app use | [43,44] | [53,58,59]
Patient-reported self-management | [36,37] | [52,57,60]
Quality of life | [48] | [55,56,60,64]
App features and quality | [35,39,41,47] | [35]
Efficiency | N/A^a | [62,65]
Security | [39] | [51]
Lifestyle | [48] | N/A

^a N/A: not applicable.

Although a single method can often provide information regarding more than one measure, over one-third of the studies in this review used more than one method to collect information on one type of measure [40,42,45,48,50,55-60]. For example, two studies used both the collection of additional device data and clinical outcomes to report physical well-being [54,64]. Multimedia Appendix 7 includes a description of which measures were produced by each method. Several studies (n=9) collected twice as many types of data as the number of methods they used to collect them [35,41,44,49,58-60,65], two studies collected three times as many [51,52], and one collected four times as many [39]. Only one study used four methods to measure the largest number of distinct data types (n=10), utilizing information resources that mHealth technologies make available (eg, automatically collected data from current users in the Android app store) [41].

Conversely, measures can be reported using more than one method. For example, usability/feasibility was the most common measure (22 times in 17 studies), followed by efficacy/effectiveness (20 times in 16 studies), interactions (via app; 19 times in 19 studies), physical well-being (18 times in 13 studies), and patient-gathered self-management data (via app; 15 times in 14 studies; Multimedia Appendix 7).

The study by Possemato et al [42] described the only app intervention that measured health care utilization and impact using these methods. Kim et al [56], Alnosayan et al [64], and Sieber et al [63] described system interventions that measured health care utilization or impact (ie, hospitalizations reported by participating health care providers and hospitalizations recorded retroactively). The remaining studies (n=5) collected information regarding physical well-being from clinical outcomes measured by researchers or health care providers during follow-up [36,48,54,55,61].

More comprehensive mapping of methods and measures revealed that the methods that were used to produce the most diverse set of data were, as expected, interviews (n=9), standardized questionnaires (n=16), and study-specific questionnaires (n=13; Multimedia Appendix 7). However, evaluation of usage logs produced nearly as many different types of measures (n=8).

Objectives and Methods Versus Results

A comparison of the study objectives with the results demonstrated that 30 of the 31 studies reported the results that they intended. One study reported all but one of the intended results described in the original objectives (ie, whether the reviewed apps and systems had been previously validated) [35]. Ten studies reported more than they anticipated, some of which included the assessment of app [42,48] and system [50] usage patterns, as well as comparisons with other outcomes [41,44]. Other unforeseen outcomes included the accuracy of the app’s knowledge base, as evaluated by nurses [45]; usability according to patients’ performance of predetermined tasks with the app [47]; usability of connected devices in an mHealth system [53]; health care utilization [56]; and patient-reported symptoms [63]. Two studies stated that the objective was to develop mHealth systems; however, their outcomes also included evaluation results [50,51]. None of the studies phrased their goals as research questions, and some reported what they intended but did not explicitly state or detail the objective [40,63]. For example, Velardo et al [62] stated their intention to evaluate their intervention at scale. However, it was not clear how they intended to “evaluate” their intervention.


Discussion

Principal Findings

We identified 31 studies that described evaluations of mHealth apps or systems, with one describing the evaluation of both intervention types [35]. Our findings show that studies relied mostly upon more continuous measures. Except for the collection of additional device data, which was used by system interventions but not app interventions, there were no notable differences between app and system studies with regard to their ability to produce the intended outcomes, the health conditions targeted, or the types of methods and measures used. Overall, medical record entries [42], attendance of meetings or activities assigned by the intervention [63], and download count [41] were the least used methods for gathering information about an intervention’s impact on patients and providers. On the other hand, evaluation of usage logs [36,38,40-42,44,48-50,52,54,56-59,62-64] and standardized questionnaires [35-39,41-45,48,49,55-57,60,64] were the most commonly used methods. These two approaches (ie, one mHealth and one traditional) were also commonly used together in the same studies, demonstrating that mHealth is supplementing, not replacing, traditional research approaches.

mHealth Trends Versus Methods and Measures Used

Although clinical integration of mHealth technologies is on the rise, only two studies described app interventions that were meant to be used by secondary users (ie, health care providers and family and friends) [35,42], with three involving health care providers in the evaluation process [42,45,48]. Despite the focus on data safety and security, as well as patient privacy, as described by the new General Data Protection Regulation [66] and established FDA [10,11] and CE marking [12] expectations for health-related technologies, only two studies included measures regarding security [39,51].

Need to Reassess Evaluation Standards

Health evaluation studies are meant to produce evidence and understanding of how various interventions could affect patients and providers in real-world health care settings. Traditionally, studies have been classified within a hierarchy based on their designs, methods, and measures used to evaluate health interventions [67]. Health professionals consider high-level studies to be those that use rigorous and strict study designs, such as RCTs [68]. These studies provide an objective and quantitative understanding of how an intervention would influence patient clinical health measures, cost, or health care resource use [69]. On the other hand, low-level studies are often those that rely upon subjective and flexible study designs (eg, qualitative studies of participants’ perception of the intervention or its impact on their lifestyle) [70].

Challenges of Quality Assessment

Health intervention researchers are not given instructions or guidance about how to evaluate these mHealth apps or which additional evidence is needed to determine their comprehensive impacts on patients and providers. The recent addition of connected technologies, such as wearables and sensors, has introduced even more factors to the evaluation context. Interventions now range from recording exercise, to decision support for patient self-management, to providing health care providers with evidence of a patient’s actions to review, drawing on a variety of data sources. Because of these new information sources, we cannot always anticipate all of the impacts of these diverse networks of mHealth self-management technologies. For example, 10 studies produced results that they had not intended to obtain, related to factors such as usage logs and patient-reported outcomes [41,42,44,50,53,63].

The assessment of a study’s success, validity, or quality presents another challenge to traditional research practice. mHealth resources introduce factors that can make standard quality assessments inconclusive for intervention studies. For example, identifying patterns of patient self-management habits and progress describes the impact of an mHealth intervention on a patient’s behavior. However, the analysis of usage logs, as a measure of intervention effectiveness, patient engagement, or self-management practices, has been minimally investigated as an appropriate method. As demonstrated by some of the reviewed articles, usage logs, download counts, and online ratings of apps were interpreted as indications of patient engagement, self-management behavior, intervention reach [41], effectiveness, and intervention utility [40] or feasibility.
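None of the reviewed studies prescribes a shared log format, so the following sketch is purely hypothetical: it shows one way timestamped usage-log records could be reduced to the kinds of indicators the reviewed studies reported, such as interaction counts, days of active use, and registration events.

```python
from datetime import datetime

# Hypothetical usage-log records (timestamp, event type); each reviewed study
# defined its own log schema, and none is prescribed by this review.
log = [
    ("2019-01-02T08:15", "open_app"),
    ("2019-01-02T08:16", "register_glucose"),
    ("2019-01-04T19:02", "open_app"),
    ("2019-01-04T19:05", "review_history"),
]

events = [(datetime.fromisoformat(ts), kind) for ts, kind in log]
interactions = len(events)                            # total app interactions
active_days = {ts.date() for ts, _ in events}         # days with any app use
registrations = sum(k.startswith("register") for _, k in events)  # data entries

print(interactions, len(active_days), registrations)  # 4 2 1
```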

Comparing Objectives and Results to Determine Successful Use of Methods

As opposed to completing a formal quality assessment, we chose to determine whether a study was able to produce the evidence that it aimed to provide, using its selected methods. Some studies that performed usage log analysis were able to produce more information than they anticipated. Possemato et al [42] stated their intention to assess the fidelity of the PTSD Coach intervention by comparing health care utilization and health outcomes between those who used the app with and without clinical support. They were able to provide evidence of the effectiveness and fidelity of the intervention among health care providers, as well as of symptoms and clinical health parameters from questionnaires. Moreover, they provided evidence of participants’ patterns of intervention use from usage logs. Thereby, they were able to discuss the relationship between health care provider involvement and reinforced use of the app, as patients may have felt more accountable for using the app to self-manage their post-traumatic stress disorder.

Among the 31 studies identified, one did not obtain all of the intended information (missing one of the intended outcomes) [35] and one was found to be inconclusive [53]. We found that it was challenging to determine the specific objective of a study when objectives were not stated as such or when they were vague. This made it difficult to determine if a study was successful in the use of its selected methods and study design to reach its goals. For example, Velardo et al [62] stated that they intended to evaluate the EDGE digital health system intervention at scale; however, they did not state how they intended to do so or provide a research question that they intended to answer. Sieber et al [63] did not state the objective of their study. Instead, they stated simply what was done (ie, investigated the effects of usage profiles on hemoglobin A1c). Without a stated objective, we are unable to judge the reliability of intervention studies, whether it be through standard traditional means or an alternative approach. Clear objectives must be included in order to validate mHealth resources as trustworthy and relevant measures for evaluating mHealth interventions.

Relevance

mHealth must work for health care providers as well as patients. Patients are more engaged in their health, and they incorporate mHealth into their self-management. Thus, patients are aware of, and can even influence, how an mHealth intervention should or could be used to achieve the kind of impact that is relevant for them. Understanding the potential risks and benefits of patient-operated mHealth requires more continuous evidence of not only technical and clinical outcomes but also personal and psychological impacts. This review demonstrates, through the use of such measures as mHealth interactions and patient-gathered data via an app, that we as researchers have the resources at our disposal and are beginning to use them.

A 2016 study by Pham et al [71] called for alternative or additional methods and measures for mHealth clinical trials that address the additional needs of mHealth. As most mHealth technologies for chronic health self-management are intended to be always available and continuously used by the patient, research questions, approaches, and designs need to reflect the real-world situations in which patients use these apps and systems.

Several studies within the presented scoping review demonstrated an attempt to meet this call by including more flexibility in their intervention design. For example, the EDGE digital health system [62], PTSD Coach app [42,43], and HeartKeeper app [40] made the patient the “decision maker” by allowing the patient to choose which data are relevant for them to gather and share with their health care providers. Further, two studies focused on reporting that patient engagement improved as a result of using mHealth apps [36,52]. User engagement is a necessity for the success of any intervention. It is paramount to consider patients’ intentions when using these apps outside of the clinic; we should deem an app’s ability to engage patients with their health as necessary as clinical evidence. There are individuals who do not choose to manage their chronic illnesses at all, for example, those deemed “hard to reach,” who may benefit from merely acknowledging their health challenge by using an app primarily for education, without the expectation of performing complicated and time-consuming self-management. Therefore, when judging the success, usefulness, or potential benefit of an evaluated mHealth intervention, there should be less of a hierarchical gap between clinical health change or improvement and patients’ experiences and change in self-efficacy.

Limitations

We believe our review covers most of the articles that were published during the established period and dealt with mHealth interventions for chronic conditions. This review reported on patient-operated mHealth self-management and did not include other potentially relevant interventions, such as SMS-based interventions.

We chose to focus on self-management of chronic NCDs, as defined by the WHO, in addition to severe mental health conditions, according to the demand for solutions from two fields (the medical system and the public app development market) [4,13,33,72]. As such, these health cases offered the greatest potential for capturing state-of-the-art technology studies, with chronically ill people consistently being the leading market. However, the exclusion of preventive treatments and other chronic health challenges (eg, musculoskeletal diseases) may have excluded a large proportion of cases that both involve the use of self-management options and represent a relevant portion of the chronic disease burden for individuals and health care systems worldwide [73]. Consequently, this noninclusion may have omitted conditions that could have provided relevant insights into the methods and measures used to assess motivational, educational, and empowering mHealth technologies for self-management.

Because we did not collect data on reported results for this scoping review and did not perform a systematic methodological quality assessment, we cannot comment on the usefulness or effectiveness of the mHealth app and system interventions presented in these studies.

Conclusion

Researchers are now using several mHealth resources to evaluate mHealth interventions for patient self-management of select NCDs. This is evident as studies relied mostly on more continuous measures, including usage logs [36,38,40-42,44,48-50,52,54,56-59,62-64] and patient-collected data from medical devices [54,56,57,60,62,64], in addition to pre-post measures, such as clinical health measures [36,40,48,54-56,63,64] and standardized questionnaires [35-39,41-45,48,49,55-57,60,64]. In doing so, they evaluated the health status, engagement, and feasibility of mHealth apps and systems. In this review, which focused on mHealth, we found that only 20% of the included studies relied solely on traditional study designs (eg, RCTs) and methods that measure only pre- and postintervention health changes. The findings illustrate that the tradition of focusing on “clinical effectiveness, cost-effectiveness, and safety” [74] or health-related quality of life and the use of health care resources [75] is not being replaced, but is instead being expanded by taking advantage of additional resources that mHealth provides to evaluate interventions.

There is still no clear standard for the evaluation of mHealth interventions for patient self-management of chronic conditions. However, because mHealth presents additional challenges, needs, and resources to the field of health intervention research, we have the opportunity to expand and maintain our relevance to patients, providers, and health authorities. mHealth provides new types of information that we can and should gather to determine the impact of the interventions.

The presented results demonstrate that health studies have started to take advantage of additional mHealth resources, such as app usage logs and other patient-involved research methods, to determine the comprehensive impacts of mHealth on patients and other stakeholders. We are able to not only answer questions, such as which tasks patients choose to perform during interventions that may affect their clinical outcomes, but also say more about the relevance of mHealth for various types of users. This is essential in health intervention research, as the call for evidence on mHealth continues to push for not only traditional clinical health measures but also impacts on patients’ self-efficacy and engagement. We believe that to achieve a compromise between the rigidity of traditional quality standards and the push for more patient-relevant outcomes, the definition of quality or meaningful impact, as well as available and appropriate evidence should be reassessed.

Acknowledgments

As a PhD candidate, the primary author is grateful for the input and guidance of the coauthors, who include all of the supervisors as part of the multidisciplinary Full Flow Project. This work was conducted as part of the Full Flow Project, which is funded by the Research Council of Norway (number 247974/O70). The publication charges for this article have been funded by a grant from UiT-The Arctic University of Norway’s publication fund.

Authors' Contributions

MB, EG, and EÅ developed the search and inclusion criteria. MB and PJ performed the literature search, article screening, and data collection. EG served as a third reviewer when disputes surrounding the inclusion of an article arose. MB performed data synthesis and drafting of the manuscript. PZ contributed to the planning and editing of the manuscript. EG and EÅ additionally contributed to the editing of the text. MJ and RJ provided quality assurance of the manuscript and the necessary details within the description of the literature search and article selection. LPH guided article content. All authors have read and approved the final version of this manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

PRISMA-ScR checklist.

PDF File (Adobe PDF File), 2500 KB

Multimedia Appendix 2

Search strategy.

DOCX File , 127 KB

Multimedia Appendix 3

Scope of included technologies.

DOCX File , 81 KB

Multimedia Appendix 4

Inclusion and exclusion criteria by category.

DOCX File , 15 KB

Multimedia Appendix 5

Comparison of study objectives to reported results.

DOCX File , 26 KB

Multimedia Appendix 6

List of questionnaires and scales used in mHealth intervention studies.

DOCX File , 18 KB

Multimedia Appendix 7

Mapping of measures to methods.

DOCX File , 15 KB

  1. Olczuk D, Priefer R. A history of continuous glucose monitors (CGMs) in self-monitoring of diabetes mellitus. Diabetes Metab Syndr 2018;12(2):181-187 [FREE Full text] [CrossRef] [Medline]
  2. Pole Y. Evolution of the pulse oximeter. International Congress Series 2002 Dec;1242:137-144 [FREE Full text] [CrossRef]
  3. Omer T. Empowered citizen 'health hackers' who are not waiting. BMC Med 2016 Aug 17;14(1):118 [FREE Full text] [CrossRef] [Medline]
  4. Research2Guidance. Berlin, Germany: Research2Guidance; 2018. mHealth Developer Economics: Connectivity in Digital Health   URL: https://research2guidance.com/product/connectivity-in-digital-health/ [accessed 2019-05-15]
  5. Research2Guidance. Berlin, Germany: Research2Guidance; 2017. mHealth app economics 2017: current status and future trends in mobile health   URL: https://tinyurl.com/y6urgf2x [accessed 2019-06-14]
  6. Silvis L. US Food and Drug Administration.: US Food and Drug Administration; 2018 Oct 25. The Long Run Is Now: How FDA is Advancing Digital Tools for Medical Product Development   URL: https://tinyurl.com/y9ssrspf [accessed 2019-06-14]
  7. nyemetoder.no.: The Regional Health Authorities, The Norwegian Medicines Agency, The Norwegian Knowledge Centre for Health Services, The Norwegian Directorate of Health; 2014 Jan 23. The national system for the introduction of new health technologies within the specialist health service – For better and safer patient care   URL: https://nyemetoder.no/Documents/Administrativt%20(brukes%20kun%20av%20sekretariatet!)/System%20Description%20(23012014).pdf [accessed 2020-04-11]
  8. Gurupur VP, Wan TT. Challenges in implementing mHealth interventions: a technical perspective. Mhealth 2017;3:32. [CrossRef] [Medline]
  9. Kotz D. A threat taxonomy for mHealth privacy. : IEEE; 2011 Feb 17 Presented at: 2011 Third International Conference on Communication Systems and Networks (COMSNETS 2011); 2011; Bangalore, India p. 4-8   URL: https://ieeexplore.ieee.org/document/5716518 [CrossRef]
  10. FDA.gov. Rockville, MD: US Food & Drug Administration; 2015 Sep 05. Humanitarian Use Device (HUD) Designations: Guidance for Industry and Food and Drug Administration Staff   URL: https://tinyurl.com/y8md9el6 [accessed 2018-06-12]
  11. FDA.gov.: The U.S. Food and Drug Administration; 2019 May 11. Device Software Functions Including Mobile Medical Applications   URL: https://tinyurl.com/y93jtst8 [accessed 2019-10-03]
  12. Berensmann M, Gratzfeld M. Requirements for CE-marking of apps and wearables. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2018 Mar;61(3):314-320. [CrossRef] [Medline]
  13. Research2Guidance.com.: Research2Guidance; 2016 Oct. mHealth Economics 2016 – Current Status and Trends of the mHealth App Market   URL: https://research2guidance.com/product/mhealth-app-developer-economics-2016/ [accessed 2017-08-01]
  14. Vallespin B, Cornet J, Kotzeva A. Ensuring Evidence-Based Safe and Effective mHealth Applications. Stud Health Technol Inform 2016;222:248-261. [Medline]
  15. Lewis TL, Wyatt JC. mHealth and mobile medical Apps: a framework to assess risk and promote safer use. J Med Internet Res 2014 Sep 15;16(9):e210 [FREE Full text] [CrossRef] [Medline]
  16. Agarwal S, LeFevre AE, Lee J, L'Engle K, Mehl G, Sinha C, WHO mHealth Technical Evidence Review Group. Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist. BMJ 2016 Mar 17;352:i1174. [CrossRef] [Medline]
  17. Torous J, Andersson G, Bertagnoli A, Christensen H, Cuijpers P, Firth J, et al. Towards a consensus around standards for smartphone apps and digital mental health. World Psychiatry 2019;18(1):97-98 [FREE Full text] [CrossRef] [Medline]
  18. nice.org.uk. UK: NICE; 2019 Mar. Evidence Standards Framework For Digital Health Technologies   URL: https://www.nice.org.uk/about/what-we-do/our-programmes/evidence-standards-framework-for-digital-health-technologies [accessed 2019-06-14]
  19. Ferretti A, Ronchi E, Vayena E. From principles to practice: benchmarking government guidance on health apps. The Lancet Digital Health 2019 Jun;1(2):e55-e57. [CrossRef]
  20. New checklist published to help improve reporting of mHealth interventions. WHO 2018 Apr 22 [FREE Full text]
  21. Baker TB, Gustafson DH, Shah D. How can research keep up with eHealth? Ten strategies for increasing the timeliness and usefulness of eHealth research. J Med Internet Res 2014 Feb 19;16(2):e36 [FREE Full text] [CrossRef] [Medline]
  22. Cheryl A, Jose FF, Christopher B, Joaquin B, Hamish F, Richard G. WHO. Bellagio, Italy: World Health Organization; 2011 Sep. Call to Action on Global eHealth Evaluation: Consensus Statement of the WHO Global eHealth Evaluation Meeting, Bellagio, Italy, September   URL: https://tinyurl.com/yc97wzyh [accessed 2019-06-11]
  23. Boudreaux ED, Waring ME, Hayes RB, Sadasivam RS, Mullen S, Pagoto S. Evaluating and selecting mobile health apps: strategies for healthcare providers and healthcare organizations. Transl Behav Med 2014 Dec;4(4):363-371 [FREE Full text] [CrossRef] [Medline]
  24. Charani E, Castro-Sánchez E, Moore LS, Holmes A. Do smartphone applications in healthcare require a governance and legal framework? It depends on the application!. BMC Med 2014 Feb 14;12:29 [FREE Full text] [CrossRef] [Medline]
  25. Matthew-Maich N, Harris L, Ploeg J, Markle-Reid M, Valaitis R, Ibrahim S, et al. Designing, Implementing, and Evaluating Mobile Health Technologies for Managing Chronic Conditions in Older Adults: A Scoping Review. JMIR Mhealth Uhealth 2016 Jun 09;4(2):e29 [FREE Full text] [CrossRef] [Medline]
  26. Woods L, Duff J, Cummings E, Walker K. Evaluating the Development Processes of Consumer mHealth Interventions for Chronic Condition Self-management: A Scoping Review. Comput Inform Nurs 2019 Jul;37(7):373-385. [CrossRef] [Medline]
  27. Wildevuur SE, Simonse LW. Information and communication technology-enabled person-centered care for the "big five" chronic conditions: scoping review. J Med Internet Res 2015 Mar 27;17(3):e77 [FREE Full text] [CrossRef] [Medline]
  28. FDA.org. Rockville, MD: U.S. Department of Health and Human Services, Food and Drug Administration, Center for Devices and Radiological Health, Office of Device Evaluation; 1998 Nov 04. General/Specific Intended Use - Guidance for Industry   URL: https://tinyurl.com/y7c7dqws [accessed 2019-06-03]
  29. Munn Z, Peters MD, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol 2018 Nov 19;18(1):143 [FREE Full text] [CrossRef] [Medline]
  30. nlm.nih.gov. Bethesda, MD: National Library of Medicine; 2019 Sep 09. MEDLINE, PubMed, and PMC (PubMed Central): How are they different?   URL: https://www.nlm.nih.gov/bsd/difference.html [accessed 2017-07-01]
  31. Clarivate. Endnote.com; 2020. EndNote X7   URL: https://endnote.com/ [accessed 2019-06-04]
  32. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev 2016 Dec 05;5(1):210 [FREE Full text] [CrossRef] [Medline]
  33. euro.who.int. Copenhagen, Denmark: World Health Organization; 2016. Action Plan for the Prevention and Control of Noncommunicable Diseases in the WHO European Region 2016–2025   URL: https://tinyurl.com/y9mjbk57 [accessed 2018-03-15]
  34. ncbi.nlm.nih.gov. Bethesda, MD: US National Library of Medicine; 2018. Self-Management   URL: https://www.ncbi.nlm.nih.gov/mesh/?term=self-management [accessed 2019-07-08]
  35. Masterson Creber RM, Maurer MS, Reading M, Hiraldo G, Hickey KT, Iribarren S. Review and Analysis of Existing Mobile Phone Apps to Support Heart Failure Symptom Monitoring and Self-Care Management Using the Mobile Application Rating Scale (MARS). JMIR Mhealth Uhealth 2016 Jun 14;4(2):e74 [FREE Full text] [CrossRef] [Medline]
  36. Goh G, Tan NC, Malhotra R, Padmanabhan U, Barbier S, Allen JC, et al. Short-term trajectories of use of a caloric-monitoring mobile phone app among patients with type 2 diabetes mellitus in a primary care setting. J Med Internet Res 2015 Feb 03;17(2):e33 [FREE Full text] [CrossRef] [Medline]
  37. Kim YJ, Rhee SY, Byun JK, Park SY, Hong SM, Chin SO, et al. A Smartphone Application Significantly Improved Diabetes Self-Care Activities with High User Satisfaction. Diabetes Metab J 2015 Jun;39(3):207-217 [FREE Full text] [CrossRef] [Medline]
  38. Schärer LO, Krienke UJ, Graf S, Meltzer K, Langosch JM. Validation of life-charts documented with the personal life-chart app - a self-monitoring tool for bipolar disorder. BMC Psychiatry 2015 Mar 14;15:49 [FREE Full text] [CrossRef] [Medline]
  39. Martínez-Pérez B, de la Torre-Díez I, López-Coronado M. Experiences and Results of Applying Tools for Assessing the Quality of a mHealth App Named Heartkeeper. J Med Syst 2015 Nov;39(11):142 [FREE Full text] [CrossRef] [Medline]
  40. de Garibay VG, Fernández MA, de la Torre-Díez I, López-Coronado M. Utility of a mHealth App for Self-Management and Education of Cardiac Diseases in Spanish Urban and Rural Areas. J Med Syst 2016 Aug;40(8):186. [CrossRef] [Medline]
  41. Owen JE, Jaworski BK, Kuhn E, Makin-Byrd KN, Ramsey KM, Hoffman JE. mHealth in the Wild: Using Novel Data to Examine the Reach, Use, and Impact of PTSD Coach. JMIR Ment Health 2015;2(1):e7 [FREE Full text] [CrossRef] [Medline]
  42. Possemato K, Kuhn E, Johnson E, Hoffman JE, Owen JE, Kanuri N, et al. Using PTSD Coach in primary care with and without clinician support: a pilot randomized controlled trial. Gen Hosp Psychiatry 2016;38:94-98 [FREE Full text] [CrossRef] [Medline]
  43. Miner A, Kuhn E, Hoffman JE, Owen JE, Ruzek JI, Taylor CB. Feasibility, acceptability, and potential efficacy of the PTSD Coach app: A pilot randomized controlled trial with community trauma survivors. Psychol Trauma 2016 May;8(3):384-392. [CrossRef] [Medline]
  44. Kuhn E, Kanuri N, Hoffman JE, Garvert DW, Ruzek JI, Taylor CB. A randomized controlled trial of a smartphone app for posttraumatic stress disorder symptoms. J Consult Clin Psychol 2017 Mar;85(3):267-273. [CrossRef] [Medline]
  45. Kang H, Park H. A Mobile App for Hypertension Management Based on Clinical Practice Guidelines: Development and Deployment. JMIR Mhealth Uhealth 2016 Feb 02;4(1):e12 [FREE Full text] [CrossRef] [Medline]
  46. Sarkar U, Gourley GI, Lyles CR, Tieu L, Clarity C, Newmark L, et al. Usability of Commercially Available Mobile Applications for Diverse Patients. J Gen Intern Med 2016 Dec;31(12):1417-1426. [CrossRef] [Medline]
  47. Fortuna KL, Lohman MC, Gill LE, Bruce ML, Bartels SJ. Adapting a Psychosocial Intervention for Smartphone Delivery to Middle-Aged and Older Adults with Serious Mental Illness. Am J Geriatr Psychiatry 2017 Aug;25(8):819-828 [FREE Full text] [CrossRef] [Medline]
  48. Munster-Segev M, Fuerst O, Kaplan SA, Cahn A. Incorporation of a Stress Reducing Mobile App in the Care of Patients With Type 2 Diabetes: A Prospective Study. JMIR Mhealth Uhealth 2017 May 29;5(5):e75 [FREE Full text] [CrossRef] [Medline]
  49. Silva Almodovar A, Surve S, Axon DR, Cooper D, Nahata MC. Self-Directed Engagement with a Mobile App (Sinasprite) and Its Effects on Confidence in Coping Skills, Depression, and Anxiety: Retrospective Longitudinal Study. JMIR Mhealth Uhealth 2018 Mar 16;6(3):e64 [FREE Full text] [CrossRef] [Medline]
  50. Triantafyllidis A, Velardo C, Chantler T, Shah SA, Paton C, Khorshidi R, SUPPORT-HF Investigators. A personalised mobile-based home monitoring system for heart failure: The SUPPORT-HF Study. Int J Med Inform 2015 Oct;84(10):743-753. [CrossRef] [Medline]
  51. Park HS, Cho H, Kim HS. Development of Cell Phone Application for Blood Glucose Self-Monitoring Based on ISO/IEEE 11073 and HL7 CCD. Healthc Inform Res 2015 Apr;21(2):83-94 [FREE Full text] [CrossRef] [Medline]
  52. Naslund JA, Aschbrenner KA, Barre LK, Bartels SJ. Feasibility of popular m-health technologies for activity tracking among individuals with serious mental illness. Telemed J E Health 2015 Mar;21(3):213-216 [FREE Full text] [CrossRef] [Medline]
  53. Årsand E, Muzny M, Bradway M, Muzik J, Hartvigsen G. Performance of the first combined smartwatch and smartphone diabetes diary application study. J Diabetes Sci Technol 2015 May;9(3):556-563 [FREE Full text] [CrossRef] [Medline]
  54. Skrøvseth SO, Årsand E, Godtliebsen F, Joakimsen RM. Data-Driven Personalized Feedback to Patients with Type 1 Diabetes: A Randomized Trial. Diabetes Technol Ther 2015 Jul;17(7):482-489 [FREE Full text] [CrossRef] [Medline]
  55. Holmen H, Wahl A, Torbjørnsen A, Jenum AK, Småstuen MC, Ribu L. Stages of change for physical activity and dietary habits in persons with type 2 diabetes included in a mobile health intervention: the Norwegian study in RENEWING HEALTH. BMJ Open Diabetes Res Care 2016;4(1):e000193 [FREE Full text] [CrossRef] [Medline]
  56. Kim M, Lee S, Jo E, Lee S, Kang M, Song W, et al. Feasibility of a smartphone application based action plan and monitoring in asthma. Asia Pac Allergy 2016 Jul;6(3):174-180 [FREE Full text] [CrossRef] [Medline]
  57. Kim JY, Wineinger NE, Steinhubl SR. The Influence of Wireless Self-Monitoring Program on the Relationship Between Patient Activation and Health Behaviors, Medication Adherence, and Blood Pressure Levels in Hypertensive Patients: A Substudy of a Randomized Controlled Trial. J Med Internet Res 2016 Jun 22;18(6):e116 [FREE Full text] [CrossRef] [Medline]
  58. Naslund JA, Aschbrenner KA, Bartels SJ. Wearable Devices and Smartphones for Activity Tracking Among People with Serious Mental Illness. Ment Health Phys Act 2016 Mar;10:10-17 [FREE Full text] [CrossRef] [Medline]
  59. Steele Gray C, Gill A, Khan AI, Hans PK, Kuluski K, Cott C. The Electronic Patient Reported Outcome Tool: Testing Usability and Feasibility of a Mobile App and Portal to Support Care for Patients With Complex Chronic Disease and Disability in Primary Care Settings. JMIR Mhealth Uhealth 2016 Jun 02;4(2):e58 [FREE Full text] [CrossRef] [Medline]
  60. Paul L, Wyke S, Brewster S, Sattar N, Gill JM, Alexander G, et al. Increasing physical activity in stroke survivors using STARFISH, an interactive mobile phone application: a pilot study. Top Stroke Rehabil 2016 Jun;23(3):170-177. [CrossRef] [Medline]
  61. Athilingam P, Labrador MA, Remo EF, Mack L, San Juan AB, Elliott AF. Features and usability assessment of a patient-centered mobile application (HeartMapp) for self-management of heart failure. Appl Nurs Res 2016 Nov;32:156-163. [CrossRef] [Medline]
  62. Velardo C, Shah SA, Gibson O, Clifford G, Heneghan C, Rutter H, EDGE COPD Team. Digital health system for personalised COPD long-term management. BMC Med Inform Decis Mak 2017 Feb 20;17(1):19 [FREE Full text] [CrossRef] [Medline]
  63. Sieber J, Flacke F, Link M, Haug C, Freckmann G. Improved Glycemic Control in a Patient Group Performing 7-Point Profile Self-Monitoring of Blood Glucose and Intensive Data Documentation: An Open-Label, Multicenter, Observational Study. Diabetes Ther 2017 Oct;8(5):1079-1085 [FREE Full text] [CrossRef] [Medline]
  64. Alnosayan N, Chatterjee S, Alluhaidan A, Lee E, Houston Feenstra L. Design and Usability of a Heart Failure mHealth System: A Pilot Study. JMIR Hum Factors 2017 Mar 24;4(1):e9 [FREE Full text] [CrossRef] [Medline]
  65. Brett J, Boulton M, Watson E. Development of an e-health app to support women prescribed adjuvant endocrine therapy after treatment for breast cancer. Patient Prefer Adherence 2018;12:2639-2647 [FREE Full text] [CrossRef] [Medline]
  66. Council of the European Union, European Parliament. Publications Office of the EU. Brussels, Belgium: Official Journal of the European Union; 2016 Apr 27. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance)   URL: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679 [accessed 2019-06-12]
  67. Burns PB, Rohrich RJ, Chung KC. The levels of evidence and their role in evidence-based medicine. Plast Reconstr Surg 2011 Jul;128(1):305-310 [FREE Full text] [CrossRef] [Medline]
  68. Winona State University. Winona, MN: Winona State University; 2019 Sep 12. Evidence Based Practice Toolkit   URL: https://libguides.winona.edu/c.php?g=11614&p=61584 [accessed 2019-10-15]
  69. Guyatt GH, Haynes RB, Jaeschke RZ, Cook DJ, Green L, Naylor CD, et al. Users' Guides to the Medical Literature: XXV. Evidence-based medicine: principles for applying the Users' Guides to patient care. Evidence-Based Medicine Working Group. JAMA 2000 Sep 13;284(10):1290-1296. [CrossRef] [Medline]
  70. Giacomini M. The rocky road: qualitative research as evidence. ACP J Club 2001;6(1):4-6 [FREE Full text] [CrossRef]
  71. Pham Q, Wiljer D, Cafazzo JA. Beyond the Randomized Controlled Trial: A Review of Alternatives in mHealth Clinical Trial Methods. JMIR Mhealth Uhealth 2016 Sep 09;4(3):e107 [FREE Full text] [CrossRef] [Medline]
  72. WHO. Geneva, Switzerland: World Health Organization; 2020 Jan 20. Depression   URL: https://www.who.int/en/news-room/fact-sheets/detail/depression [accessed 2020-01-30]
  73. WHO. Geneva, Switzerland: World Health Organization; 2019 Nov 26. Musculoskeletal conditions   URL: https://www.who.int/news-room/fact-sheets/detail/musculoskeletal-conditions [accessed 2020-01-30]
  74. Bidonde J, Fagerlund B, Fronsdal K, Lund U, Robberstad B. fhi.no. Oslo, Norway: Norwegian Institute of Public Health; 2017 Aug. FreeStyle Libre Flash Glucose Self‐Monitoring System: A Single‐Technology Assessment   URL: https://tinyurl.com/y8mymfru [accessed 2019-06-24]
  75. Murphy LA, Harrington P, Taylor SJ, Teljeur C, Smith SM, Pinnock H, et al. Clinical-effectiveness of self-management interventions in chronic obstructive pulmonary disease: An overview of reviews. Chron Respir Dis 2017 Aug;14(3):276-288 [FREE Full text] [CrossRef] [Medline]


Abbreviations

MeSH: Medical Subject Headings
mHealth: mobile health
NCD: noncommunicable disease
RCT: randomized controlled trial
WHO: World Health Organization


Edited by G Eysenbach; submitted 28.10.19; peer-reviewed by R Grainger, S Baptista, D Gunasekeran, W Zhang; comments to author 19.11.19; revised version received 10.02.20; accepted 25.03.20; published 30.04.20

Copyright

©Meghan Bradway, Elia Gabarron, Monika Johansen, Paolo Zanaboni, Patricia Jardim, Ragnar Joakimsen, Louise Pape-Haugaard, Eirik Årsand. Originally published in JMIR mHealth and uHealth (http://mhealth.jmir.org), 30.04.2020.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on http://mhealth.jmir.org/, as well as this copyright and license information must be included.