Original Paper
Abstract
Background: There is mixed evidence to support current ambitions for mobile health (mHealth) apps to improve chronic health and well-being. One proposed explanation for this variable effect is that users do not engage with apps as intended. The application of analytics, defined as the use of data to generate new insights, is an emerging approach to study and interpret engagement with mHealth interventions.
Objective: This study aimed to consolidate how analytic indicators of engagement have previously been applied across clinical and technological contexts, to inform how they might be optimally applied in future evaluations.
Methods: We conducted a scoping review to catalog the range of analytic indicators being used in evaluations of consumer mHealth apps for chronic conditions. We categorized studies according to app structure and application of engagement data and calculated descriptive data for each category. Chi-square and Fisher exact tests of independence were applied to calculate differences between coded variables.
Results: A total of 41 studies met our inclusion criteria. The average mHealth evaluation included for review was a two-group pretest-posttest randomized controlled trial of a hybrid-structured app for mental health self-management, had 103 participants, lasted 5 months, did not provide access to health care provider services, measured 3 analytic indicators of engagement, segmented users based on engagement data, applied engagement data for descriptive analyses, and did not report on attrition. Across the reviewed studies, engagement was measured using the following 7 analytic indicators: the number of measures recorded (76%, 31/41), the frequency of interactions logged (73%, 30/41), the number of features accessed (49%, 20/41), the number of log-ins or sessions logged (46%, 19/41), the number of modules or lessons started or completed (29%, 12/41), time spent engaging with the app (27%, 11/41), and the number or content of pages accessed (17%, 7/41). Engagement with unstructured apps was mostly measured by the number of features accessed (8/10, P=.04), and engagement with hybrid apps was mostly measured by the number of measures recorded (21/24, P=.03). A total of 24 studies presented, described, or summarized the data generated from applying analytic indicators to measure engagement. The remaining 17 studies used or planned to use these data to infer a relationship between engagement patterns and intended outcomes.
Conclusions: Although researchers measured on average 3 indicators in a single study, the majority reported findings descriptively and did not further investigate how engagement with an app contributed to its impact on health and well-being. Researchers are gaining nuanced insights into engagement but are not yet characterizing effective engagement for improved outcomes. Raising the standard of mHealth app efficacy through measuring analytic indicators of engagement may enable greater confidence in the causal impact of apps on improved chronic health and well-being.
doi:10.2196/11941
Introduction
Background
There is mixed evidence to support current ambitions for mobile health (mHealth) apps to improve chronic health and well-being [
]. While some apps have demonstrated efficacy in definitive trials [ - ], others have performed poorly [ - ]. One proposed explanation for this variable effect is that users do not engage with apps as intended [ ]. The construct of engagement has been quantitatively conceptualized as the amount, duration, breadth, and depth of intervention usage [ , ]. For many mHealth app evaluations, users can be segmented along a continuum of engagement; some will never use the app, some will use it but quickly abandon it, and some will use it in unexpected ways. Complex patterns of engagement with mHealth apps are emerging and challenge current conceptual paradigms for interpreting their impact on chronic health outcomes. These digitally mediated mechanisms of action require more granular evaluations capable of analyzing multilevel, temporally dense engagement data [ ]. Evaluating engagement is therefore a priority and calls for the integration of nonintrusive measures of this construct in mHealth evaluation methodology [ ].

Recently, scholars sought to further the conceptualization of engagement by proposing that it may be more valuable to identify the mechanisms that underlie effective engagement, defined as sufficient engagement with an intervention to achieve intended outcomes [
, ]. The construct of effective engagement differs conceptually from both engagement and adherence, which have historically been used interchangeably [ ]. Sieverink et al reason that the following 3 elements are necessary to determine adherence to a digital health intervention: (1) the ability to measure usage behaviors, (2) an operationalization of intended use, and (3) an empirical, theoretical, or rational justification of intended use [ ]. We propose that effective engagement is more intentional than engagement but less justified than adherence. It sits between both constructs and bridges the transition from identifying patterns of engagement toward evidencing their capacity to achieve intended outcomes.

There has been recognition that the definition of engagement has evolved to include offline interactions with the behavior change mediated by a digital health intervention. Yardley et al have been instrumental in furthering this conceptualization of engagement by suggesting that there are 2 levels of engagement: (1) the micro level of immediate engagement with the digital health intervention and (2) the macro level of engagement with the wider intervention-mediated behavior change [
]. They posit that engagement is a dynamic process marked by shifts in both micro and macroengagement, which will vary depending on the intervention, the user, and their context. Users may be macroengaging and experiencing positive behavior change, but this may not necessarily be reflected in their microengagement analytics data. In acknowledgment of this distinction between engagement with the technological and behavioral aspects of an intervention, Yardley et al critically posit that microengagement alone cannot be taken as a valid indicator of effective engagement. We do not dispute Yardley et al’s arguments and recognize the limitations of relying solely on microengagement data to infer effective engagement. However, we posit that measuring and reporting on microengagement is fundamental to understanding how people actually use an app to improve their health and well-being. In turn, these analytic insights can be coupled with measures of macroengagement to identify the mediating mechanisms that motivate effective engagement.

The application of analytics, defined as the use of data to generate new insights [
], is an emerging approach to study and interpret engagement with mHealth interventions [ ]. Van Gemert-Pijnen et al have advanced the application of log data analysis to inform how an intervention works in practice and which components should be improved to yield greater benefit [ - ]. Arden-Close et al have developed and implemented a novel R-based tool to visually explore patterns of engagement [ ]. Heckler et al have called for the adoption of a continuous optimization model of evaluation that leverages simulated computational models to predict how users might engage with an intervention before data collection [ ]. Scherer et al have demonstrated the value of joint models in the analysis of longitudinal engagement data. In fact, Scherer et al recently participated in a workshop sponsored by the National Institutes of Health on emerging technology and data analytics for behavioral health, and espoused the need for new analytic methods that can scale to thousands of individuals and billions of data points [ ]. Short et al recently published a viewpoint on engagement measurement options that can be employed in electronic health (eHealth) and mHealth behavior change intervention evaluations [ ]. They found that system engagement data are the most commonly collected and reported measures of engagement in eHealth and mHealth interventions. From this, they recommend having shared ways of conceptualizing these data as the field progresses to consolidate categorization.

Objectives
Motivated by the proven value of analytics to study engagement with mHealth apps, we sought to compile and catalog a library of analytic indicators of engagement with consumer mHealth apps for self-managing chronic conditions. We defined analytic indicators as proxy measures of engagement with an mHealth app based on objective usage that generates log data [
, ]. When positioned alongside other measures suitable for evaluating the subjective experience of mHealth app engagement, they may provide complementary data-driven insights into the objective extent of engagement. We propose that analytic indicators of engagement do exactly this: they indicate that users may be engaging effectively with a digital health intervention but do not definitively confirm a relationship between engagement and intended outcomes. Establishing this relationship requires adopting a mixed-methods multidimensional approach to measure effective engagement using multiple assessment strategies [ , ].

While many researchers have included analytic indicators as a study measure when evaluating apps, they are not consistent or systematic in their selection [
]. We propose that there is benefit to understanding how engagement with mHealth apps for chronic conditions has been defined, measured, and analyzed across evaluations. The aim of this scoping review was therefore to consolidate how analytic indicators of engagement have previously been applied across clinical and technological contexts to inform how they might be optimally applied in future evaluations.

Methods
Review Framework
This scoping review was guided by the methodological framework developed by Arksey and O’Malley [
] and advanced by Levac et al [ ]. They endorse an iterative review process with 5 distinct steps: (1) identifying the research question, (2) searching for relevant studies, (3) selecting studies, (4) charting the data, and (5) collating, summarizing, and reporting results. This framework is particularly relevant to disciplines with emerging evidence, such as mHealth, in which the paucity of definitive research makes it difficult for researchers to undertake systematic reviews [ ]. In this context, conducting a scoping review allowed us to incorporate a range of study designs beyond those accepted for inclusion in systematic reviews, to generate broad findings on how researchers are measuring engagement with consumer mHealth apps for chronic conditions. We made efforts to adhere to recommendations for each step, starting with the selection of a research question that was sufficiently broad to map the extent, range, and nature of mHealth engagement research activity. We conducted this review to explore the following research question: what analytic indicators of engagement are being used in evaluations of consumer mHealth apps for chronic conditions?

Search Strategy
A literature search was conducted in the MEDLINE, PsycINFO, CINAHL, and EMBASE databases. In addition, the Journal of Medical Internet Research and its sister journals were independently searched given their frequent and high-impact publication of mHealth research. A combination of different keywords for the constructs “engagement” and “mHealth” was used. No search terms for chronic conditions were defined a priori to broaden search results. We adopted the World Health Organization’s definition of a chronic condition as a “non-communicable disease of long duration and slow progression [
].” presents our search strategy for MEDLINE on the Ovid platform.

Eligibility Criteria
Titles and abstracts retrieved from the search strategy were screened for inclusion against the following criteria: (1) the article described an evaluation or a protocol for an evaluation of a consumer mHealth app for self-managing a chronic condition; (2) the study included operationalization of an engagement-related construct—
provides the full list of screened constructs; (3) the study included objective, quantifiable measurements using log data analytics; (4) the app was intended to be used more than once; (5) the article was published between November 1, 2015, and November 1, 2017; and (6) the article was published in English.

Studies were excluded if (1) the mHealth app was solely an appointment reminder service; (2) the primary app technology was short message service or interactive voice response; (3) the app was for an acute condition or preventive health purposes; (4) the app was a support tool for a patient’s circle of care; (5) the app did not require user input through active or passive (sensor) data entry; (6) the app only delivered educational content; and (7) the article primarily described the design, development, or usability testing of the app.
Data Collection and Analysis
The first author conducted the electronic searches with support from a faculty-affiliated librarian and reviewed the reference lists of relevant articles. All identified titles and abstracts were downloaded and merged using Mendeley (Elsevier) [
] and duplicated records were removed. The first author independently screened all titles and abstracts against eligibility criteria. Any articles that caused the author uncertainty were retained until data extraction when more information was available to make an informed decision for inclusion in the review. Following title and abstract review, full papers of included abstracts were assessed for final selection by all study authors.

Codes extracted from included articles.
- General information regarding the study title, authors, journal, year, and country.
- App information, specifically the public name, chronic condition addressed, and accessibility of health care provider services.
- Study information, specifically the purpose, duration, sample size, and design.
- App structure (structured, hybrid, or unstructured): “Structured” apps contained locked, sequential components (eg, modules, lessons, and features) that users had to complete before moving forward. “Hybrid” apps contained both fixed core components and variable components for free use. “Unstructured” apps contained variable components that users could access and use at will.
- Analytic indicators used to measure engagement, specifically the number of log-ins or sessions logged, the number of modules or lessons started or completed, the number of features accessed, the number of measures recorded, the number or content of pages accessed, the frequency of interactions logged, and total time spent engaging with the app.
- Engagement-based segmentation: studies that segmented users based on engagement data (eg, “of the users who logged in at least five times…”) were assigned this code.
- Application of engagement data (descriptive or inferential): a “descriptive” code was assigned to studies that presented, described, or summarized engagement data. An “inferential” code was assigned to studies that used engagement data to predict the intended outcome. Outcome types were coded for studies that applied engagement data inferentially.
- Attrition type (dropout or nonusage) and statistical method of analysis: dropout attrition is the phenomenon of users not returning to complete follow-up study activities. Nonusage attrition is the phenomenon of users losing interest in a digital health intervention and ceasing to use it [ ].
A data extraction form was developed by the first author to extract relevant study information. We referenced work by Sieverink [
] and Kelders [ ] on analytic indicators of adherence to eHealth technologies to establish preliminary codes. The form was piloted on a sample of included articles to validate proposed codes and add emergent codes. The codes extracted from each study are presented in . All study data were entered into SPSS version 24 (IBM) [ ]. Each study along with its corresponding data was treated as a separate case. We categorized studies according to app structure and application of engagement data and calculated descriptive data for each category. Chi-square and Fisher exact tests of independence were applied to calculate differences between coded variables. A Monte Carlo correction was applied when observed counts were below expected counts.
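To illustrate this analysis step, the following minimal sketch shows how a test of independence between a coded variable (eg, app structure) and the use of a given analytic indicator could be computed. The 2×2 counts, variable names, and use of SciPy in Python are illustrative assumptions rather than part of our analysis, which was conducted in SPSS.

```python
# Minimal sketch: testing independence between app structure and use of an
# analytic indicator. The counts below are invented for illustration only.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: app structure (unstructured, other); columns: indicator measured (yes, no)
table = [[8, 2],    # unstructured apps measuring / not measuring the indicator
         [12, 19]]  # all other apps

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # exact test, preferred for small cell counts

print(f"Chi-square p={p_chi2:.3f}; Fisher exact p={p_fisher:.3f}")
print("Any expected count below 5 (favor the exact test):", (expected < 5).any())
```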
Results

Study Selection
A total of 1873 articles were identified through the database search. Of the 60 full texts screened, 19 were excluded, 8 of which did not include objective, quantifiable measurements using log data analytics. In total, 41 articles comprising 33 studies and 8 protocols met the eligibility criteria and were included for review.
presents the Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram of the study selection process [ ].

Methodological Characteristics
The first authors of reviewed studies were affiliated with institutions in the United States (46%, 19/41), Canada (20%, 8/41), the United Kingdom (10%, 4/41), Australia (5%, 2/41), Germany (5%, 2/41), the Netherlands (5%, 2/41), France (2%, 1/41), India (2%, 1/41), Singapore (2%, 1/41), Spain (2%, 1/41), Sweden (2%, 1/41), and Switzerland (2%, 1/41).
Researchers reported log data analytics across 14 different engagement-related constructs: engagement (27%, 11/41), adherence (17%, 7/41), usage (15%, 6/41), use (15%, 6/41), feasibility (10%, 4/41), acceptability (7%, 3/41), utilization (5%, 2/41), attrition (5%, 2/41), participation (5%, 2/41), activity (2%, 1/41), adoption (2%, 1/41), compliance (2%, 1/41), fidelity (2%, 1/41), and retention (2%, 1/41). There was significant variation in how constructs were defined across studies, which limited our ability to (1) extract reliable definitions for each construct, (2) map analytic indicators to specific constructs, and (3) conduct cross-construct comparisons of analytic indicators.
The majority of reviewed studies were experimental (51%, 21/41), with the two-group pretest-posttest randomized controlled trial (RCT) as the most prevalent experimental study design (48%, 10/21), followed by the one-group pretest-posttest design (43%, 9/21). Quasi-experimental design selection (17%, 7/41) was more diverse and included cohort (29%, 2/7), interrupted time-series (14%, 1/7), and single case (14%, 1/7) studies. The remaining 13 studies included for review were observational in design (32%, 13/41). Studies were on average 5 months long (median 152 days, interquartile range, IQR 106), with a sample size of over 100 participants (median 103, IQR 252). The longest reviewed observational study conducted by Serrano et al was 7 years long with over 1 million participants [
]. A total of 19 studies applied engagement-based segmentation and reported results for separate user cohorts (58%, 19/33). In total, 14 of the reviewed studies were published in the Journal of Medical Internet Research or its sister journals (34%, 14/41).

Intervention Characteristics
A wide range of chronic conditions were targeted through the apps under study, with mental health (29%, 12/41), chronic pain (12%, 5/41), asthma (10%, 4/41), cardiovascular disease (7%, 3/41), and diabetes (type 1 and 2; 15%, 6/41) leading the clinical charge. Researchers also evaluated apps for cancer (5%, 2/41), hypertension (5%, 2/41), obesity (5%, 2/41), chronic kidney disease (2%, 1/41), chronic obstructive pulmonary disease (2%, 1/41), cystic fibrosis and inflammatory bowel disease (2%, 1/41), Parkinson disease (2%, 1/41), and sleep apnea (2%, 1/41). Over half of the apps had a hybrid structure (59%, 24/41), 10 apps were unstructured (24%), and 7 apps were structured (17%). Nearly half of all unstructured apps were aimed at improving mental health (40%, 4/10). Health care provider services were accessible to users to support managing their condition in nearly half of all reviewed apps (44%, 18/41). Characteristics of the included studies are presented in
alongside the full dataset of coded analytic indicators for each study, which are summarized below.

Analytic Indicators
Across the reviewed studies, engagement was measured using the following 7 analytic indicators in order of prevalence: the number of measures recorded (76%, 31/41), the frequency of interactions logged (73%, 30/41), the number of features accessed (49%, 20/41), the number of log-ins or sessions logged (46%, 19/41), the number of modules or lessons started or completed (29%, 12/41), time spent engaging with the app (27%, 11/41), and the number or content of pages accessed (17%, 7/41).
presents a tally of the analytic indicators measured in each included study. On average, researchers applied 3 different analytic indicators to measure their engagement data (mean 3.20, SD 1.42; median 3, IQR 2). The Fisher exact test of independence indicated that engagement with unstructured apps was mostly measured by the number of features accessed (8/10, P=.04), and engagement with hybrid apps was mostly measured by the number of measures recorded (21/24, P=.03). provides a descriptive overview of structured, hybrid, and unstructured apps across study characteristics and analytic indicators.

Author | Measures | Interactions | Features | Log-ins | Modules | Time spent | Pages
Mental health (n=12)
Beiwinkel et al [ ] | ✓a | ✓ | —b | ✓ | ✓ | ✓ | —
Ben-Zeev et al [ ] | ✓ | ✓ | ✓ | ✓ | ✓ | — | —
Ben-Zeev et al [ ] | ✓ | ✓ | ✓ | — | ✓ | — | —
Davies et al [ ] | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓
Frisbee et al [ ] | — | — | ✓ | ✓ | — | — | —
Kinderman et al [ ] | ✓ | — | — | ✓ | — | — | —
Kuhn et al [ ] | — | ✓ | — | — | — | — | ✓
Owen et al [ ] | — | ✓ | ✓ | ✓ | — | ✓ | ✓
Pham et al [ ] | — | ✓ | — | ✓ | — | ✓ | —
Torous et al [ ] | ✓ | ✓ | ✓ | — | — | — | —
Vansimaeys et al [ ] | ✓ | ✓ | — | — | ✓ | — | —
Wahle et al [ ] | ✓ | ✓ | — | ✓ | ✓ | — | —
Chronic pain (n=5)
Fortier et al [ ] | ✓ | — | — | — | — | — | —
Jamison et al [ ] | ✓ | ✓ | — | ✓ | — | — | —
Jibb et al [ ] | ✓ | ✓ | — | — | — | — | —
Reade et al [ ] | ✓ | ✓ | — | ✓ | ✓ | — | —
Skrepnik et al [ ] | ✓ | ✓ | — | — | — | — | —
Asthma (n=4)
Chan et al [ ] | ✓ | ✓ | ✓ | — | ✓ | — | —
Cook et al [ ] | — | — | — | ✓ | — | — | —
Fedele et al [ ] | — | ✓ | ✓ | ✓ | — | — | —
Kosse et al [ ] | ✓ | — | — | — | ✓ | — | —
Cardiovascular disease (n=3)
Agboola et al [ ] | ✓ | — | — | ✓ | — | ✓ | ✓
Goyal et al [ ] | ✓ | ✓ | ✓ | ✓ | ✓ | — | —
Sakakibara et al [ ] | ✓ | — | ✓ | — | — | — | —
Type 1 diabetes (n=3)
Goyal et al [ ] | — | ✓ | ✓ | — | — | — | —
Ryan et al [ ] | ✓ | ✓ | ✓ | — | — | ✓ | —
Sieber et al [ ] | ✓ | — | — | — | — | — | —
Type 2 diabetes (n=3)
Desveaux et al [ ] | — | ✓ | ✓ | ✓ | — | ✓ | —
Goh et al [ ] | — | ✓ | — | — | — | — | —
Kleinman et al [ ] | ✓ | — | — | ✓ | — | — | —
Other (n=11)
Bot et al [ ] | ✓ | ✓ | ✓ | — | ✓ | — | —
Hardinge et al [ ] | ✓ | ✓ | ✓ | — | — | ✓ | —
Isetta et al [ ] | — | ✓ | — | — | — | — | —
Kaplan et al [ ] | ✓ | ✓ | ✓ | — | ✓ | ✓ | —
Langius-Eklof et al [ ] | ✓ | — | — | — | ✓ | — | —
Ong et al [ ] | ✓ | — | ✓ | ✓ | — | — | —
Pham et al [ ] | ✓ | ✓ | ✓ | ✓ | — | — | ✓
Serrano et al [ ] | ✓ | ✓ | ✓ | — | — | — | ✓
Taki et al [ ] | ✓ | ✓ | — | ✓ | — | ✓ | ✓
Thies et al [ ] | ✓ | ✓ | ✓ | — | — | ✓ | —
Toro-Ramos et al [ ] | ✓ | ✓ | — | — | ✓ | — | —
aAnalytic indicators of engagement used in reviewed studies.
bNot applicable.
Number of Measures
Of the analytic indicators identified in this review, the number of measures recorded by users on an app was the most commonly used indicator of engagement with mHealth apps for chronic conditions. Researchers evaluated a range of measures that aligned with their target chronic condition, such as blood glucose [
, , , , ], weight [ , , ], symptoms [ , , ], patient-reported outcomes [ , , , , ], diary entries [ , ], and steps [ ]. There was some overlap in the types of measures being collected across apps targeting the same chronic conditions, such as the number of blood glucose readings recorded as an indicator of engagement with diabetes apps. Overall, the target chronic condition and functionality of the app under study ultimately determined which measures would be collected and subsequently reported as an analytic indicator of engagement.
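As a hedged illustration of how this indicator could be derived from raw log data, the following sketch counts the measures recorded per user; the event log, column names (user_id, event_type, measure_type), and values are hypothetical and not drawn from any reviewed study.

```python
# Minimal sketch: counting measures recorded per user from an app event log.
# Column names and data are hypothetical, for illustration only.
import pandas as pd

log = pd.DataFrame({
    "user_id":      ["u1", "u1", "u1", "u2", "u2", "u3"],
    "event_type":   ["measure", "measure", "page_view", "measure", "measure", "login"],
    "measure_type": ["blood_glucose", "weight", None, "blood_glucose", "blood_glucose", None],
})

measures = log[log["event_type"] == "measure"]
per_user_total = measures.groupby("user_id").size()                      # number of measures recorded
per_user_by_type = measures.groupby(["user_id", "measure_type"]).size()  # eg, glucose readings per user

print(per_user_total)
print(per_user_by_type)
```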
Frequency of Interactions

The frequency of interactions logged was the second most prevalent analytic indicator of engagement. Researchers often chose to complement assessing the number of measures recorded on an app with the frequency with which the measures were recorded. Stratifying frequency of interactions by specific date ranges was also common; Davies et al measured the number of users who used a mental health app at least once after 1 week, 4 weeks, and 20 weeks [
]. They also applied within-date range indicators such as the number of users who used the app once, 2 to 3 times, 4 to 6 times, or 6 or more times per week. Some researchers assigned a benchmark number of days to signify engagement, such as Isetta et al who measured the number of users who engaged with an app for sleep apnea on at least 66% of all days in the study [ ]. Others assigned significance to a specific day and considered reaching it as an indicator of engagement, such as Jamison et al who measured the number of users who continued to submit daily assessments of their chronic pain after 90 and 180 days [ ]. Layering this analytic indicator over other indicators added temporal context to better understand how users were engaging over time.
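A minimal sketch of how such frequency-based indicators could be computed from timestamped log data is shown below; the 66% benchmark mirrors the kind of threshold described above, while the data, column names, and weekly bins are illustrative assumptions rather than any reviewed study's scheme.

```python
# Minimal sketch: frequency-of-interaction indicators from timestamped log data.
# Data, column names, thresholds, and bins are illustrative only.
import pandas as pd

log = pd.DataFrame({
    "user_id": ["u1"] * 5 + ["u2"] * 2,
    "timestamp": pd.to_datetime([
        "2017-01-01", "2017-01-02", "2017-01-03", "2017-01-10", "2017-01-25",
        "2017-01-01", "2017-01-02",
    ]),
})
study_days = 28

active_days = log.groupby("user_id")["timestamp"].apply(lambda t: t.dt.date.nunique())
met_benchmark = active_days >= 0.66 * study_days        # engaged on >=66% of study days

weekly = (log.set_index("timestamp")
             .groupby("user_id")
             .resample("7D")
             .size())                                    # interactions per user per 7-day window
weekly = weekly[weekly > 0]
weekly_bins = pd.cut(weekly, bins=[0, 1, 3, 6, float("inf")],
                     labels=["1", "2-3", "4-6", "7+"])   # within-week frequency bands

print(active_days, met_benchmark, weekly_bins, sep="\n")
```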
Number of Features

The range of features accessed by users in an app was frequently measured as an analytic indicator of engagement. Researchers primarily logged (1) the number of features accessed and (2) the number of times each feature was accessed. In their trial of the Veterans Affairs' Comprehensive Assistance for Family Caregivers Program where users were provided with access to a suite of 6 apps for posttraumatic stress disorder (PTSD) self-management, Frisbee et al measured the number of unique apps used in the suite [
]. To better understand user preferences between 2 features of their app for schizophrenia self-management, Ben-Zeev et al measured the number of times users chose the video feature over the written content feature [ ]. Our research group proposed exploring whether users would access all the features made available in their app for prostate cancer survivorship care, particularly whether users would enable caregiver permissions or write notes to document changes in their care [ ]. Overall, researchers applied this analytic indicator to explore the breadth of app engagement and inform feature popularity and relevance for the target population.

Number of Log-Ins
The number of log-ins or sessions logged by users continues to be a commonly used analytic indicator of engagement. This indicator was often coupled with the frequency of interactions logged to standardize counts. Researchers also frequently measured the number of users who opened an app at least once to segment them from users who had downloaded the app but never logged any subsequent activity. Owen et al made both these associations by measuring the number of sessions logged by users on their PTSD self-management app, as well as the number of users who logged at least one session on the first day, week, and month post download [
]. Researchers used this analytic indicator to reflect the shift from adoption to habituation, with a greater number of log-ins or sessions denoting greater engagement.

Characteristics | Structured (N=7), n (%) | Hybrid (N=24), n (%) | Unstructured (N=10), n (%)
Chronic condition | ||||
Mental health (n=12) | 2 (29) | 6 (25) | 4 (40) | |
Chronic pain (n=5) | 2 (29) | 3 (13) | 0 (0) | |
Asthma (n=4) | 1 (14) | 3 (13) | 0 (0) | |
Cardiovascular disease (n=3) | 0 (0) | 2 (8) | 1 (10) | |
Type 1 diabetes (n=3) | 1 (14) | 1 (4) | 1 (10) | |
Type 2 diabetes (n=3) | 0 (0) | 1 (4) | 2 (20) | |
Other (n=11) | 1 (14) | 8 (33) | 2 (20) | |
Segmentation | ||||
Yes (n=19) | 1 (14) | 12 (50) | 6 (60) | |
No (n=14) | 4 (57) | 7 (29) | 3 (30) |
Analytic indicators | ||||
Number of measures (n=31)a | 6 (86) | 21 (88) | 4 (40) | |
Frequency of interactions (n=30) | 4 (57) | 18 (75) | 8 (80) | |
Number of features (n=20)a | 2 (29) | 10 (42) | 8 (80) | |
Number of log-ins (n=19) | 4 (57) | 12 (50) | 3 (30) | |
Number of modules (n=12) | 2 (29) | 10 (42) | 0 (0) | |
Time spent (n=11) | 0 (0) | 8 (33) | 3 (30) | |
Number of pages (n=7) | 0 (0) | 4 (17) | 3 (30) | |
Application of engagement data | ||||
Descriptive (n=24) | 7 (100) | 13 (54) | 4 (40) | |
Inferential (n=17) | 0 (0) | 11 (46) | 6 (60) | |
Study design | ||||
Experimental (n=21) | 3 (43) | 13 (54) | 5 (50) | |
Quasi-experimental (n=7) | 0 (0) | 6 (25) | 1 (10) | |
Observational (n=13) | 4 (57) | 5 (21) | 4 (40) | |
Number of indicators | ||||
1 (n=5) | 1 (14) | 3 (13) | 1 (10) |
2 (n=10) | 2 (29) | 4 (17) | 4 (40) | |
3 (n=8) | 3 (43) | 4 (17) | 1 (10) | |
4 (n=10) | 1 (14) | 6 (25) | 3 (30) | |
5 (n=7) | 0 (0) | 6 (25) | 1 (10) | |
6 (n=1) | 0 (0) | 1 (4) | 0 (0) |
aP<.05.
Number of Modules
When defining analytic indicators for categorization, we differentiated between unrestricted and restricted data collection. Unrestricted data collection was defined as data that could be entered into an app at a frequency or volume dictated by the user, such as the number of blood glucose readings or medications recorded [
Restricted data collection was defined as requiring the user to enter data according to a set frequency or volume, such as a list of assigned articles to be read [ ] or challenges to be completed [ ]. We coded studies reporting unrestricted data collection as number of measures and coded studies reporting restricted data collection as number of modules. A range of studies measured the number of outcome surveys completed from those assigned [ , , ]. Others assessed the number of videos watched from a playlist [ , ], educational modules completed [ ], or self-care advice accessed [ ]. Overall, researchers studying apps with modular content considered module completion to be indicative of engagement and consequently tracked module progression and completion rates.
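The following minimal sketch illustrates how a module-completion indicator of this kind could be computed; the assigned module counts, completion events, and column names are hypothetical.

```python
# Minimal sketch: module-based "depth" indicators, ie, progress through a fixed
# set of assigned modules. Assignments and completion events are illustrative.
import pandas as pd

assigned = {"u1": 6, "u2": 6}                      # modules assigned per user
completions = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2"],
    "module":  ["m1", "m2", "m3", "m1"],
})

completed = completions.groupby("user_id")["module"].nunique()
completion_rate = completed / pd.Series(assigned)  # proportion of assigned modules completed
print(completed, completion_rate, sep="\n")
```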
Time Spent

The amount of time that users engaged with an app was considered by a subset of researchers to be an analytic indicator of engagement. Researchers measured the time spent on unique sections of an app [
], the time spent on unique pages [ ], the length of a unique session [ , , , ], the length between unique sessions [ ], and the total time spent on an app [ , , ]. Davies et al also segmented sessions by those that were in the 30- to 60-second range [ ]. Measuring time spent engaging with an app helped researchers to distinguish between exploratory and purposeful engagement; a rapid succession of short page views was indicative of scanning through content, whereas prolonged viewing suggested greater intention and interest in content. Overall, this analytic indicator informed the definition of accurate session duration parameters for tracking session-based analytics.
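One way to derive such session-based indicators from raw event timestamps is sketched below; the 30-minute inactivity timeout used to delimit sessions is a common heuristic that we assume for illustration, and the data and column names are hypothetical.

```python
# Minimal sketch: deriving session-based time-spent indicators from raw events.
# The 30-minute inactivity timeout is an assumed heuristic; data are illustrative.
import pandas as pd

events = pd.DataFrame({
    "user_id": ["u1"] * 4 + ["u2"] * 2,
    "timestamp": pd.to_datetime([
        "2017-01-01 08:00", "2017-01-01 08:05", "2017-01-01 12:00", "2017-01-01 12:20",
        "2017-01-02 09:00", "2017-01-02 09:01",
    ]),
}).sort_values(["user_id", "timestamp"])

timeout = pd.Timedelta(minutes=30)
gap = events.groupby("user_id")["timestamp"].diff()
events["session_id"] = (gap.isna() | (gap > timeout)).cumsum()   # new session after a long gap

sessions = events.groupby(["user_id", "session_id"])["timestamp"].agg(["min", "max"])
sessions["duration"] = sessions["max"] - sessions["min"]

total_time = sessions.groupby("user_id")["duration"].sum()        # total time spent in-app
mean_session = sessions.groupby("user_id")["duration"].mean()     # typical session length
print(sessions[["duration"]], total_time, mean_session, sep="\n")
```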
Number of Pages

The number of pages accessed by users was logged by researchers to reflect overall patterns of app engagement and discoverability of specific content. Kuhn et al measured the number and content of pages visited by users in their app for PTSD self-management, as did other researchers [
, , ]. Taki et al combined session analytics with page analytics and measured the number of pages viewed per session in their app for obesity self-management [ ]. Owen et al recorded click stream data documenting their users’ navigation through page content [ ]. Insights gleaned from this analytic indicator provided researchers with a broader understanding of the user journey through an app and drew attention to specific content that might drive engagement.

Conceptual Categories of Analytic Indicators
We sought to conceptually clarify the 7 identified analytic indicators by grouping them according to the 4 categories that constitute the quantitative conceptualization of engagement: amount, duration, breadth, and depth [
, ]. presents an overview of the categories, the analytic indicators they comprise, and the number of reviewed studies that fall into each category. The focus of most reviewed studies was on the depth (76%, 31/41) and amount of engagement (73%, 30/41). There was less attention on the breadth (49%, 20/41) and duration (27%, 11/41) of engagement. These findings suggest that a subset of researchers are either not measuring the breadth and duration of engagement in their mHealth evaluations or underreporting the findings.
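A minimal sketch of this grouping, tallying studies per category from per-study indicator flags, is shown below; the study rows are illustrative, and the category mapping simply encodes the scheme described above.

```python
# Minimal sketch: rolling the 7 analytic indicators up into the 4 conceptual
# categories (amount, duration, breadth, depth). Study rows are illustrative.
import pandas as pd

CATEGORIES = {
    "amount":   ["frequency_of_interactions", "number_of_logins"],
    "duration": ["time_spent"],
    "breadth":  ["number_of_features", "number_of_pages"],
    "depth":    ["number_of_modules", "number_of_measures"],
}

studies = pd.DataFrame(
    [[1, 1, 0, 0, 1, 0, 1],
     [0, 1, 1, 1, 0, 1, 0]],
    columns=["frequency_of_interactions", "number_of_logins", "time_spent",
             "number_of_features", "number_of_pages", "number_of_modules",
             "number_of_measures"],
)

# A study counts toward a category if it measured any indicator in that category.
per_category = {cat: int(studies[cols].any(axis=1).sum()) for cat, cols in CATEGORIES.items()}
print(per_category)
```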
Application of Engagement Data

Of the 41 studies included for review, 24 presented, described, or summarized the data generated from applying analytic indicators to measure engagement. The remaining 17 studies used or planned to use these data to infer a relationship between engagement patterns and intended outcomes.
Clinical Outcomes
Over half of all researchers assessed the relationship between engagement and clinical outcomes (53%, 9/17). Toro-Ramos et al measured the number of weeks users engaged with their hypertension self-management app and found that users with sustained usage across 19 weeks experienced significant reductions in systolic blood pressure and weight [
]. In their trial of an app for PTSD self-management, Kuhn et al applied the number of days and weeks users engaged with the app as a predictor variable for changes in PTSD symptoms but did not find a significant relationship [ ]. Goyal et al segmented all users who reported 5 or more blood glucose readings a day into a subgroup for secondary analyses and found a significant relationship between increased readings and improved glycated hemoglobin after 6 months [ ]. They also identified a significant interaction between entering a reading on at least 3 days a week and improved daily blood glucose self-monitoring. Overall, there was evidence of predictive validity across reviewed studies, with engagement correlating with improved clinical outcomes. However, the majority of analyses conducted to establish this predictive validity relied on nonexperimental variations in engagement due to nonadherence or implementation infidelity. Future evaluations assessing the relationship between engagement and clinical outcomes should consider alternative trial designs with multiple randomizations to ensure that findings are not biased by confounding [ - ].
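As a hedged illustration of this type of inferential analysis, the following sketch regresses a follow-up outcome score on a single engagement indicator while adjusting for the baseline score; the variables and simulated data are hypothetical and do not reproduce any reviewed study's model.

```python
# Minimal sketch: using an engagement indicator as a predictor of a clinical
# outcome (here, change in a symptom score), adjusting for the baseline value.
# All variables and data are simulated, not taken from any reviewed study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "readings_per_week": rng.poisson(5, n),           # engagement indicator
    "baseline_score": rng.normal(50, 8, n),
})
df["followup_score"] = (df["baseline_score"]
                        - 0.6 * df["readings_per_week"]
                        + rng.normal(0, 5, n))         # simulated engagement effect

model = smf.ols("followup_score ~ baseline_score + readings_per_week", data=df).fit()
print(model.params)
print("P value for the engagement indicator:", model.pvalues["readings_per_week"])
```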
Category and analytic indicators | Studies, n (%)
Amount: frequency of interactions, number of log-ins | 30 (73)
Duration: time spent | 11 (27)
Breadth: number of features, number of pages | 20 (49)
Depth: number of modules, number of measures | 31 (76)
Engagement Outcomes
Many researchers sought to investigate the effect of engagement behaviors on other engagement outcomes (53%, 9/17). In their study examining engagement with a weight loss app, Serrano et al applied classification and regression tree methods to identify subgroups with unique engagement behaviors [
]. They were able to distinguish highly engaged subgroups by the number of customizations made to the diet and exercise features of the app. Ben-Zeev et al found that participants who engaged with their schizophrenia self-management app for a period of 5 to 6 months also had a higher frequency of interactions and engaged 4.3 days per week on average [ ]. Torous et al also characterized engagement for a schizophrenia self-management app through fitting frequency of interaction data to a piecewise power law distribution [ ]. They found that future use of the app is directly related to prior app use, suggesting that those who engage with the app more often will have a higher probability of app engagement in the future. In their trial of a caloric-monitoring app for type 2 diabetes self-management, Goh et al applied latent-class growth modeling to delineate 8-week trajectories of app engagement [ ]. They were able to identify 3 distinct app trajectories based on the frequency of interactions and also associate patient characteristics with these trajectories. In summary, there were strong predictive relationships between numerous engagement domains. This finding motivates establishing complementary domains across multiple contexts to optimize data triangulation.
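As an illustration of one of the approaches named above, the following sketch fits a shallow classification and regression tree to synthetic usage features to surface subgroups with distinct engagement levels; the features, data, and use of scikit-learn are assumptions for illustration only, not a reproduction of any reviewed analysis.

```python
# Minimal sketch of a classification-and-regression-tree style analysis:
# identifying subgroups of users with distinct engagement levels from a few
# usage features. Data are synthetic; this is not any reviewed study's model.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
n = 200
users = pd.DataFrame({
    "customizations": rng.integers(0, 10, n),
    "features_used": rng.integers(1, 7, n),
    "days_to_first_log": rng.integers(0, 14, n),
})
# Simulated engagement outcome: weekly interactions driven partly by customization.
users["interactions_per_week"] = (2 + 1.5 * users["customizations"]
                                  + rng.normal(0, 3, n)).clip(lower=0)

tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=20, random_state=0)
tree.fit(users[["customizations", "features_used", "days_to_first_log"]],
         users["interactions_per_week"])
print(export_text(tree, feature_names=["customizations", "features_used", "days_to_first_log"]))
```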
Utilization Outcomes

Two studies proposed to evaluate the impact of engagement patterns on health care utilization outcomes (12%, 2/17). Kaplan et al plan to examine the impact of sustained engagement over time with an app for pediatric cystic fibrosis and inflammatory bowel disease self-management on the number of hospitalizations and emergency department visits [
]. However, they anticipate that changes in these outcomes may not be realized in a 6-month intervention period. Our research group is evaluating a prostate cancer survivorship app [ ] and aims to investigate the relationship of (1) the number of patient-reported outcome measures completed and (2) the frequency of interactions logged with the number of in-clinic visits for prostate cancer–related concerns. Altogether, the limited sample of reviewed studies suggests that the relationship between engagement and utilization outcomes is underdeveloped and warrants further study.

The Fisher exact test of independence indicated that studies of structured apps were more likely to only report descriptive statistics on engagement data (7/7, P=.04). In addition, most studies that applied inferential statistics also measured the frequency of interactions logged (16/17, P=.014). Most researchers who did not segment users into cohorts based on engagement data only reported descriptive statistics on their engagement data (13/14, P<.001), while researchers who segmented their users into cohorts were more likely to conduct subgroup analyses and infer properties of the larger clinical population (14/19, P<.001).
provides a descriptive overview of studies applying descriptive or inferential analyses on engagement data.

Attrition Type and Analyses
The majority of reviewed studies did not report on attrition (70%, 23/33). Of the 10 studies that did, 5 reported on dropout attrition (50%), 4 reported on nonusage attrition (40%), and 1 reported on both phenomena (10%). Researchers were more likely to descriptively summarize raw attrition proportions than statistically analyze them (70%, 7/10). Those that conducted comparisons across attrition curves used Kaplan-Meier survival curves (10%, 1/10), Cox regression models (20%, 2/10), and latent class growth models (10%, 1/10).
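A minimal sketch of a nonusage attrition analysis of this kind is shown below, using the lifelines package as one possible option; the durations are synthetic and represent weeks of app use before disengagement, censored at study end.

```python
# Minimal sketch: summarizing nonusage attrition with a Kaplan-Meier curve.
# Durations are synthetic weeks of app use; users still active at study end
# are treated as censored observations.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
weeks_observed = 24
time_to_disengage = rng.integers(1, 30, size=100)        # synthetic weeks of use
observed = time_to_disengage <= weeks_observed           # False -> censored (still engaged)
durations = np.minimum(time_to_disengage, weeks_observed)

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed, label="nonusage attrition")
print(kmf.survival_function_.tail())   # proportion of users still engaging over time
print(kmf.median_survival_time_)       # week by which half the users had disengaged
```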
Characteristics | Descriptive (N=24), n (%) | Inferential (N=17), n (%) | |
Chronic condition | |||
Mental health (n=12) | 6 (25) | 6 (35) | |
Chronic pain (n=5) | 4 (17) | 1 (6) | |
Asthma (n=4) | 3 (13) | 1 (6) | |
Cardiovascular disease (n=3) | 2 (8) | 1 (6) | |
Type 1 diabetes (n=3) | 2 (8) | 1 (6) | |
Type 2 diabetes (n=3) | 2 (8) | 1 (6) | |
Other (n=11) | 5 (21) | 6 (35) | |
Segmentation | |||
Yes (n=19)a | 5 (21) | 14 (82) | |
No (n=14)a | 13 (54) | 1 (6) | |
Analytic indicators | |||
Number of measures (n=31) | 20 (83) | 11 (65) | |
Frequency of interactions (n=30)a | 14 (58) | 16 (94) | |
Number of features (n=20) | 11 (46) | 9 (53) | |
Number of log-ins (n=19) | 12 (50) | 7 (41) | |
Number of modules (n=12) | 7 (29) | 5 (29) | |
Time spent (n=11) | 8 (33) | 3 (18) | |
Number of pages (n=7) | 3 (13) | 4 (24) | |
Structure | |||
Structured (n=7)a | 7 (29) | 0 (0) | |
Hybrid (n=24) | 13 (54) | 11 (65) | |
Unstructured (n=10) | 4 (17) | 6 (35) | |
Study design | |||
Experimental (n=21) | 13 (54) | 8 (47) | |
Quasi-experimental (n=7) | 4 (17) | 3 (18) | |
Observational (n=13) | 7 (29) | 6 (35) | |
Number of indicators | |||
1 (n=5) | 3 (13) | 2 (12) | |
2 (n=10) | 7 (29) | 3 (18) | |
3 (n=8) | 3 (13) | 5 (29) | |
4 (n=10) | 7 (29) | 3 (18) | |
5 (n=7) | 3 (13) | 4 (24) | |
6 (n=1) | 1 (4) | 0 (0) |
aP<.05.
Discussion
Principal Findings
In conducting this scoping review, we sought to catalog the range of analytic indicators being used in evaluations of consumer mHealth apps for chronic conditions. We applied Arksey and O’Malley’s methods of reporting and provided a descriptive analysis of the extent, nature, and distribution of analytic indicators across 41 studies, as well as a narrative and thematic summary of collected data [
]. The average mHealth evaluation included for review was a two-group pretest-posttest RCT of a hybrid-structured app for mental health self-management, had 103 participants, lasted 5 months, did not provide access to health care provider services, measured 3 analytic indicators of engagement, segmented users based on engagement data, applied engagement data for descriptive analyses, and did not report on attrition.

Analytic Indicators
Our results indicate that researchers are measuring engagement across 7 analytic indicators, specifically: (1) the number of measures recorded, (2) the frequency of interactions logged, (3) the number of features accessed, (4) the number of log-ins or sessions logged, (5) the number of modules or lessons started or completed, (6) time spent engaging with the app, and (7) the number or content of pages accessed. We found that the researchers favored evaluating the number of measures recorded on an app as an indicator of engagement, closely followed by the frequency of interactions logged. We also found that both these indicators were most often used to assess hybrid and unstructured apps; these 2 app structures also made up the majority of apps under review.
We noted that researchers were least likely to measure the number of pages accessed and time spent engaging with the app; the latter indicator was mostly reported descriptively (73%, 8/11). This finding was surprising given the historical popularity of these indicators for measuring engagement with Web-based interventions [
, , ]. The breadth and duration categories that conceptually comprise these analytic indicators were also deprioritized. We propose that these indicators are falling out of favor because of the growing recognition that users engage differently with apps. Users perceive apps to be a short-term commitment [ ] and access app-based content sporadically for shorter periods of time compared with Web-based interventions [ ]. Recent research by Morrison et al comparing patterns of engagement with a stress management intervention delivered via website versus app mitigated these differences by significantly reducing the number of pages on the app version of the intervention compared with the website [ ]. They subsequently found that app users logged in twice as often but spent half as much time engaging compared with website users. They did not report the number of pages accessed or time spent engaging with the app as indicators of engagement. This body of research, in conjunction with our own findings, suggests that researchers evaluating mHealth apps for self-managing chronic conditions should refrain from measuring and reporting these 2 analytic indicators of engagement unless they are expressly relevant to the app under study.

Our identification of the number of measures recorded on an app as an analytic indicator of engagement deviates from previous research by Sieverink et al on usage and adherence to eHealth interventions [
], which found no evidence that researchers were operationalizing constructs in this way. Our focus on reviewing studies of mHealth apps for self-managing chronic conditions may explain this finding, as these interventions encourage users to systematically record data and capture the variability of their disease state over time [ ]. In thinking of the frequency of interactions logged as a common analytic indicator of engagement, we note that there has been a shift toward on-demand apps with features and functionality that users can engage with at their own discretion. Benchmarking engagement by time range provides more context on a user’s intentions and needs than just the total amount of engagement.

We did not observe any significant differences between the number or type of analytic indicators used to measure engagement across chronic conditions. Researchers applied indicators that were relevant to the features and functionality of their app. For example, studies of apps for diabetes self-management often measured the number of blood glucose readings due to the popularity of this feature but never measured the number of modules or lessons because these features were not offered to users. In a recent review on the barriers and facilitators of engagement with remote measurement technology for managing health, Simblett et al found that studies were reporting idiosyncratic measures of engagement and adherence that were not comparable across studies [
]. Their findings align with our own, and support Yardley et al’s assertion that effective engagement is defined in relation to the purpose of a specific intervention and can only be established empirically in the context of that intervention [ ]. Although Simblett et al call for less variation in how engagement is quantitatively measured across studies, we propose that researchers continue to apply context-specific analytic indicators but report them more systematically to enable cross-study comparison. Researchers might consider categorizing indicators according to the 7 domains identified in this research and providing detailed specifications on the analytic tags required to implement each indicator. When reporting on indicators, researchers should specify that they are measuring the construct of engagement and then catalog each domain. This practice may contribute to greater taxonomic consensus by curbing the arbitrary reporting of engagement-related constructs identified in this review.

Application of Engagement Data
Although researchers measured, on average, 3 indicators in a single study, the majority reported findings descriptively and did not further investigate how engagement with an app contributed to its impact on health and well-being. This finding suggests that researchers are gaining nuanced insights into how users are engaging with their apps but are not conducting inferential analyses to characterize effective engagement for improved outcomes. Relating analytic engagement patterns to behavior change and intended outcomes has been advocated across the behavioral and computational sciences [
, , , , ], with recent efforts made to equip researchers with strategies for performing inferential analyses on engagement data [ , , ]. Our analyses indicated that studies of structured apps were more likely to only report descriptive statistics on engagement data. Given that structured apps primarily require users to follow a predetermined engagement pathway and complete a series of milestones, it is reasonable for researchers to report on completion rates and identify drop-off points. However, it may be helpful to conduct inferential analyses to understand if completion of an app-mediated program is required to achieve intended outcomes, or whether users may derive proportional benefits from progressing through stages of the program. Of the studies that applied inferential statistics, most measured the number of days, weeks, or months users engaged with an app. This finding suggests that researchers consider a temporal understanding of engagement to be important in determining a predictive effect on intended outcomes.

Recommendations
In their systematic review, Sieverink et al found that over half of all reviewed studies measured adherence to eHealth interventions using a single analytic indicator, and a quarter used 2 indicators [
]. The authors conclude that a limited but deliberate set of only 1 or 2 different indicators in accordance with the goal of the technology is sufficient to operationalize adherence. On reviewing how researchers were operationalizing adherence, they found that the majority reported adherence only in terms of how an intervention was used. The absence of a comparison to a threshold for intended use renders this operationalization incongruent with the definition of adherence. Instead, we propose that it aligns with the current understanding of engagement, which is more exploratory in nature and thus supports applying a greater number of analytic indicators.

In contrast to Sieverink et al’s findings, the majority of our reviewed studies applied between 2 and 4 analytic indicators to measure engagement. This variance suggests that researchers are starting to recognize a conceptual and methodological distinction between the constructs of engagement and adherence. From these findings, we make the following recommendation: researchers seeking to gain a preliminary understanding of how users are engaging with their app are encouraged to apply all relevant analytic indicators from those identified in this review.
presents data that may support researchers to select indicators that have previously been measured for their target chronic condition or for an app with similar features and functionality. Upon generation of analytic findings, researchers might consider segmenting users by engagement behaviors to interrogate the data and refine their engagement models. Conducting inferential subgroup analyses with engagement as a predictor of observed health outcomes might uncover potential patterns of effective engagement and inform an operationalization of intended use. In this way, measuring engagement can be positioned on a methodological continuum toward determining adherence. presents a process model of our recommendations.

During our full-text review, we excluded a large number of studies because they did not include objective, quantifiable measurements using log data analytics. Some studies had users self-report their engagement, whereas others omitted reporting engagement altogether and solely reported findings on app efficacy. One possible explanation for this gap might be that researchers are unfamiliar with how to derive analytic insights from their app. From our experience, the process of tagging interaction data to enable analytic insights requires deliberate foresight. A shared understanding between a researcher and a software developer of the research questions being answered is critical to determine how analytics data should be modeled. presents a use case for applying analytic tags to evaluate effective engagement.
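As a hedged illustration of what such analytic tagging could look like in practice, the following sketch defines a simple event-logging helper that an app might call at each interaction; the function, event types, and field names are hypothetical rather than a prescribed schema.

```python
# Minimal sketch: an analytic "tag" helper an app could call at each interaction,
# emitting the structured events needed to compute the indicators in this review.
# Event and field names are hypothetical, not a specific study's schema.
import json
import time
import uuid

def log_event(user_id: str, event_type: str, **attributes) -> dict:
    """Record one engagement event (eg, login, page_view, measure, module_completed)."""
    event = {
        "event_id": str(uuid.uuid4()),
        "user_id": user_id,
        "event_type": event_type,
        "timestamp": time.time(),
        **attributes,
    }
    print(json.dumps(event))   # in practice: append to the app's analytics store
    return event

# Example tags placed at the points of interaction a study plans to analyze.
log_event("u1", "login")
log_event("u1", "measure", measure_type="blood_glucose", value=6.2)
log_event("u1", "page_view", page="education/sleep-hygiene")
log_event("u1", "module_completed", module="week_1_introduction")
```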
Our final recommendation concerns the reporting of attrition in data-driven mHealth evaluations. In 2005, Eysenbach published landmark work on the law of attrition [ ], which was his observation that a substantial portion of participants in eHealth trials stop using the intervention before study end. He posits that attrition is a fundamental characteristic and methodological challenge in the evaluation of eHealth interventions and recommends that “usage metrics and determinants of attrition should be highlighted, measured, analyzed, and discussed” [ ]. Our findings suggest that this counsel has not fully translated into practice in the mHealth field. There is less inclination to log and report on analytic indicators of disengagement. We encourage researchers to attribute the same value to attrition data as they currently do to engagement data, as both constructs provide consequential insights into the viability of an app in the real world.

Limitations
Some methodological limitations of our scoping review warrant discussion, the most significant being that we only reviewed articles published over a 2-year period. This sampling frame may not have captured a representative sample of mHealth literature. As such, we may have missed relevant studies published before November 2015 and after November 2017 that would have met our eligibility criteria. While we acknowledge that our sampling frame is limited in scoping the entire field of mHealth, we believe it captures the application of analytics within the field of mHealth. From our review of the literature before conducting our search, we identified a paucity of papers that focused on mHealth log data analyses. The systematic review on usage-based adherence to eHealth interventions conducted by Sieverink et al reviewed 62 papers, of which 7 were on smartphone-based interventions [
]. Of those 7 papers, 5 were published after 2016, and the other 2 were both published in 2013. Perski et al conducted a systematic review on engagement with digital behavior change interventions that comprised all studies up to November 2015 [ ]. They reviewed 113 studies, of which 13 were on mobile phone–based interventions. Only 4 of those studies applied log data analyses to study engagement with the intervention. These insights confirm that our scoping review did not include all studies that applied log data analyses to study engagement with mHealth apps. However, they also suggest that the number of studies we omitted is small. Our sampling frame of November 2015 to November 2017 directly follows Perski et al’s review and includes 41 studies to address our specific research questions. For these reasons, we posit that our sample is sufficiently robust to provide a representative understanding of how analytics are being applied to study engagement with mHealth apps. Due to limited resources, only 1 reviewer conducted the electronic searches and screened all titles and abstracts against eligibility criteria, thereby potentially introducing bias. We did not assess the quality of included articles; however, this is in line with our review framework, which does not mandate this methodological practice. Finally, we did not map analytic indicators to the 14 identified engagement-related constructs for analysis. We acknowledge that conceptual differences exist between some of these constructs (eg, usage, feasibility, and adherence), and it is possible to use multiple constructs in the same study. However, we reviewed each construct and its analytic operationalizations separately during our data extraction process and could not discern significant differences. As such, we feel that we have included a homogenous body of research in this review and provided accurate insights into how researchers have used analytic indicators to measure engagement.

Conclusions
To date, the potential for mHealth apps to positively impact chronic health outcomes has not yet been realized [
]. This is, in part, due to the difficulties of generating a solid evidence base to guide clinical, policy, and regulatory decision making [ ]. Indeed, the mHealth field has been reproached for arguing that apps warrant digital exceptionalism given the iterative nature of their design and the prohibitive cost of trials compared with their perceived level of risk [ ]. We propose that our review supports researchers to harness these natural attributes for conducting data-driven evaluations of digitally mediated behavior change. Without objective knowledge of how users engage with an app to care for themselves, the mechanisms of action that underlie complex models of digitally mediated behavior change cannot be identified.

Our proposed library of analytic indicators to evaluate effective engagement with consumer mHealth apps for chronic conditions may be of value to researchers as a resource to support their evaluative practice. Researchers can systematically incorporate these analytic indicators into their study measures by adding analytic tags to their app’s source code, allowing them to measure engagement without creating user burden or reactivity. Once generated, these data can be used in inferential analyses to delineate relationships with observed health outcomes. Researchers can further interrogate these data by conducting rapid cycles of research and development to validate hypothesized models of effective engagement. On the basis of these insights, researchers can (1) build a cumulative body of evidence for how users should engage with their app to achieve intended outcomes, (2) incrementally improve their app to optimize effective engagement, and (3) determine the optimal digital dose of effective engagement with their app for validation in a definitive trial to meet required levels of evidence for procurement and distribution [
]. Successful implementation of these practices may elevate the discourse around these apps beyond the coarse evaluations and monolithic policy recommendations against their value in health care.

Raising the standard of mHealth app efficacy through measuring analytic indicators of engagement may enable greater confidence in the causal impact of apps on improved chronic health and well-being. It is this opportunity afforded by data-driven research to close the gap between promised and realized health benefits that is most meaningful.
Acknowledgments
The authors wish to thank Ms Vincci Lui, Faculty Liaison and Instruction Librarian for the Institute of Health Policy, Management and Evaluation, for guiding our search strategy. The authors are grateful to Dr Chitra Lalloo for reviewing an early draft of this manuscript and providing comments.
Conflicts of Interest
None declared.
Multimedia Appendix 2
Full dataset of coded analytic indicators.
XLSX File (Microsoft Excel File), 27KB
Multimedia Appendix 3
Practical considerations for applying analytic indicators to evaluate effective engagement.
PDF File (Adobe PDF File), 27KB
References
- Byambasuren O, Sanders S, Beller E, Glasziou P. Prescribable mHealth apps identified from an overview of systematic reviews. NPJ Digit Med 2018 May 9;1(1):1089-1098 [FREE Full text] [CrossRef]
- Proudfoot J, Clarke J, Birch MR, Whitton AE, Parker G, Manicavasagar V, et al. Impact of a mobile phone and web program on symptom and functional outcomes for people with mild-to-moderate depression, anxiety and stress: a randomised controlled trial. BMC Psychiatry 2013 Nov 18;13:312 [FREE Full text] [CrossRef] [Medline]
- Clarke J, Proudfoot J, Birch MR, Whitton AE, Parker G, Manicavasagar V, et al. Effects of mental health self-efficacy on outcomes of a mobile phone and web intervention for mild-to-moderate depression, anxiety and stress: secondary analysis of a randomised controlled trial. BMC Psychiatry 2014 Sep 26;14:272 [FREE Full text] [CrossRef] [Medline]
- Ivanova E, Lindner P, Ly KH, Dahlin M, Vernmark K, Andersson G, et al. Guided and unguided acceptance and commitment therapy for social anxiety disorder and/or panic disorder provided via the internet and a smartphone application: a randomized controlled trial. J Anxiety Disord 2016 Dec;44:27-35 [FREE Full text] [CrossRef] [Medline]
- Roepke AM, Jaffee SR, Riffle OM, McGonigal J, Broome R, Maxwell B. Randomized controlled trial of SuperBetter, a smartphone-based/internet-based self-help tool to reduce depressive symptoms. Games Health J 2015 Jun;4(3):235-246 [FREE Full text] [CrossRef] [Medline]
- Laing BY, Mangione CM, Tseng CH, Leng M, Vaisberg E, Mahida M, et al. Effectiveness of a smartphone application for weight loss compared with usual care in overweight primary care patients: a randomized, controlled trial. Ann Intern Med 2014 Nov 18;161(10 Suppl):S5-12 [FREE Full text] [CrossRef] [Medline]
- Direito A, Jiang Y, Whittaker R, Maddison R. Smartphone apps to improve fitness and increase physical activity among young people: protocol of the Apps for IMproving FITness (AIMFIT) randomized controlled trial. BMC Public Health 2015 Jul 11;15:635 [FREE Full text] [CrossRef] [Medline]
- Turner-McGrievy G, Tate D. Tweets, apps, and pods: results of the 6-month Mobile Pounds Off Digitally (Mobile POD) randomized weight-loss intervention among adults. J Med Internet Res 2011 Dec 20;13(4):e120 [FREE Full text] [CrossRef] [Medline]
- Holmen H, Torbjørnsen A, Wahl AK, Jenum AK, Småstuen MC, Arsand E, et al. A mobile health intervention for self-management and lifestyle change for persons with type 2 diabetes, part 2: one-year results from the Norwegian randomized controlled trial RENEWING HEALTH. JMIR Mhealth Uhealth 2014 Dec 11;2(4):e57 [FREE Full text] [CrossRef] [Medline]
- Eysenbach G. The law of attrition. J Med Internet Res 2005 Mar 31;7(1):e11 [FREE Full text] [CrossRef] [Medline]
- Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med 2017 Dec;7(2):254-267 [FREE Full text] [CrossRef] [Medline]
- O'Brien H, Toms EG. What is user engagement? A conceptual framework for defining user engagement with technology. J Am Soc Inf Sci 2008 Apr;59(6):938-955. [CrossRef]
- Pham Q, Graham G, Lalloo C, Morita PP, Seto E, Stinson JN, et al. An analytics platform to evaluate effective engagement with pediatric mobile health apps: design, development, and formative evaluation. JMIR Mhealth Uhealth 2018 Dec 21;6(12):e11447 [FREE Full text] [CrossRef] [Medline]
- Yardley L, Spring BJ, Riper H, Morrison LG, Crane DH, Curtis K, et al. Understanding and promoting effective engagement with digital behavior change interventions. Am J Prev Med 2016 Nov;51(5):833-842. [CrossRef] [Medline]
- Michie S, Yardley L, West R, Patrick K, Greaves F. Developing and evaluating digital interventions to promote behavior change in health and health care: recommendations resulting from an international workshop. J Med Internet Res 2017 Dec 29;19(6):e232 [FREE Full text] [CrossRef] [Medline]
- Barello S, Triberti S, Graffigna G, Libreri C, Serino S, Hibbard J, et al. eHealth for patient engagement: a systematic review. Front Psychol 2016 Jan 08;6:2013 [FREE Full text] [CrossRef] [Medline]
- Sieverink F, Kelders SM, van Gemert-Pijnen JE. Clarifying the concept of adherence to eHealth technology: systematic review on when usage becomes adherence. J Med Internet Res 2017 Dec 06;19(12):e402 [FREE Full text] [CrossRef] [Medline]
- Hervatis V, Loe A, Barman L, O'Donoghue J, Zary N. A conceptual analytics model for an outcome-driven quality management framework as part of professional healthcare education. JMIR Med Educ 2015 Oct 06;1(2):e11 [FREE Full text] [CrossRef] [Medline]
- Kotz D, Lord SE, O'Malley AJ, Stark L, Marsch LA. Workshop on emerging technology and data analytics for behavioral health. JMIR Res Protoc 2018 Jun 20;7(6):e158 [FREE Full text] [CrossRef] [Medline]
- Van Gemert-Pijnen JE, Kelders SM, Bohlmeijer ET. Understanding the usage of content in a mental health intervention for depression: an analysis of log data. J Med Internet Res 2014 Jan 31;16(1):e27 [FREE Full text] [CrossRef] [Medline]
- Sieverink F, Kelders SM, Braakman-Jansen LM, van Gemert-Pijnen JE. The added value of log file analyses of the use of a personal health record for patients with type 2 diabetes mellitus: preliminary results. J Diabetes Sci Technol 2014 Mar;8(2):247-255 [FREE Full text] [CrossRef] [Medline]
- Sieverink F, Kelders S, Poel M, van Gemert-Pijnen L. Opening the black box of electronic health: collecting, analyzing, and interpreting log data. JMIR Res Protoc 2017 Aug 07;6(8):e156 [FREE Full text] [CrossRef] [Medline]
- Arden-Close EJ, Smith E, Bradbury K, Morrison L, Dennison L, Michaelides D, et al. A visualization tool to analyse usage of web-based interventions: the example of Positive Online Weight Reduction (POWeR). JMIR Hum Factors 2015 May 19;2(1):e8 [FREE Full text] [CrossRef] [Medline]
- Hekler EB, Klasnja P, Riley WT, Buman MP, Huberty J, Rivera DE, et al. Agile science: creating useful products for behavior change in the real world. Transl Behav Med 2016 Dec;6(2):317-328 [FREE Full text] [CrossRef] [Medline]
- Short C, DeSmet A, Woods C, Williams SL, Maher C, Middelweerd A, et al. Measuring engagement in eHealth and mHealth behavior change interventions: viewpoint of methodologies. J Med Internet Res 2018 Nov 16;20(11):e292 [FREE Full text] [CrossRef] [Medline]
- Simblett S, Greer B, Matcham F, Curtis H, Polhemus A, Ferrão J, et al. Barriers to and facilitators of engagement with remote measurement technology for managing health: systematic review and content analysis of findings. J Med Internet Res 2018 Jul 12;20(7):e10480 [FREE Full text] [CrossRef] [Medline]
- Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005 Feb 01;8(1):19-32 [FREE Full text] [CrossRef]
- Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci 2010 Sep 20;5:69 [FREE Full text] [CrossRef] [Medline]
- World Health Organization. Global status report on noncommunicable diseases. Geneva: World Health Organization; 2010.
- Elsevier. Mendeley. URL: https://www.mendeley.com/?interaction_required=true [accessed 2018-12-22] [WebCite Cache]
- Kelders SM, Kok RN, Ossebaard HC, Van Gemert-Pijnen JE. Persuasive system design does matter: a systematic review of adherence to web-based interventions. J Med Internet Res 2012 Nov 14;14(6):e152 [FREE Full text] [CrossRef] [Medline]
- IBM Corporation. IBM SPSS Statistics. URL: https://www.ibm.com/products/spss-statistics [accessed 2018-12-22] [WebCite Cache]
- Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009 Jul 21;6(7):e1000097 [FREE Full text] [CrossRef] [Medline]
- Serrano KJ, Coa KI, Yu M, Wolff-Hughes DL, Atienza AA. Characterizing user engagement with health app data: a data mining approach. Transl Behav Med 2017 Jun;7(2):277-285 [FREE Full text] [CrossRef] [Medline]
- Beiwinkel T, Kindermann S, Maier A, Kerl C, Moock J, Barbian G, et al. Using smartphones to monitor bipolar disorder symptoms: a pilot study. JMIR Ment Health 2016 Jan 06;3(1):e2 [FREE Full text] [CrossRef] [Medline]
- Ben-Zeev D, Brian R, Aschbrenner KA, Jonathan G, Steingard S. Video-based mobile health interventions for people with schizophrenia: bringing the "pocket therapist" to life. Psychiatr Rehabil J 2018 Mar;41(1):39-45 [FREE Full text] [CrossRef] [Medline]
- Ben-Zeev D, Scherer EA, Gottlieb JD, Rotondi AJ, Brunette MF, Achtyes ED, et al. mHealth for schizophrenia: patient engagement with a mobile phone intervention following hospital discharge. JMIR Ment Health 2016 Jul 27;3(3):e34 [FREE Full text] [CrossRef] [Medline]
- Davies EB, Craven MP, Martin JL, Simons L. Proportionate methods for evaluating a simple digital mental health tool. Evid Based Ment Health 2017 Nov;20(4):112-117 [FREE Full text] [CrossRef] [Medline]
- Frisbee KL. Variations in the use of mHealth tools: the VA mobile health Study. JMIR Mhealth Uhealth 2016 Jul 19;4(3):e89 [FREE Full text] [CrossRef] [Medline]
- Kinderman P, Hagan P, King S, Bowman J, Chahal J, Gan L, et al. The feasibility and effectiveness of Catch It, an innovative CBT smartphone app. BJPsych Open 2016 May;2(3):204-209 [FREE Full text] [CrossRef] [Medline]
- Kuhn E, Kanuri N, Hoffman JE, Garvert DW, Ruzek JI, Taylor CB. A randomized controlled trial of a smartphone app for posttraumatic stress disorder symptoms. J Consult Clin Psychol 2017 Mar;85(3):267-273 [FREE Full text] [CrossRef] [Medline]
- Owen JE, Jaworski BK, Kuhn E, Makin-Byrd KN, Ramsey KM, Hoffman JE. mHealth in the wild: using novel data to examine the reach, use, and impact of PTSD coach. JMIR Ment Health 2015 Mar 25;2(1):e7 [FREE Full text] [CrossRef] [Medline]
- Pham Q, Khatib Y, Stansfeld S, Fox S, Green T. Feasibility and efficacy of an mHealth game for managing anxiety: "Flowy" randomized controlled pilot trial and design evaluation. Games Health J 2016 Feb;5(1):50-67 [FREE Full text] [CrossRef] [Medline]
- Torous J, Staples P, Slaters L, Adams J, Sandoval L, Onnela JP, et al. Characterizing smartphone engagement for schizophrenia: results of a naturalist mobile health study. Clin Schizophr Relat Psychoses 2017 Aug 04. [Epub ahead of print] [FREE Full text] [CrossRef] [Medline]
- Vansimaeys C, Zuber M, Pitrat B, Join-Lambert C, Tamazyan R, Farhat W, et al. Combining standard conventional measures and ecological momentary assessment of depression, anxiety and coping using smartphone application in minor stroke population: a longitudinal study protocol. Front Psychol 2017 Jul 12;8:1172 [FREE Full text] [CrossRef] [Medline]
- Wahle F, Kowatsch T, Fleisch E, Rufer M, Weidt S. Mobile sensing and support for people with depression: a pilot trial in the wild. JMIR Mhealth Uhealth 2016 Sep 21;4(3):e111 [FREE Full text] [CrossRef] [Medline]
- Fortier MA, Chung WW, Martinez A, Gago-Masague S, Sender L. Pain buddy: a novel use of m-health in the management of children's cancer pain. Comput Biol Med 2016 Dec 01;76:202-214 [FREE Full text] [CrossRef] [Medline]
- Jamison RN, Jurcik DC, Edwards RR, Huang CC, Ross EL. A pilot comparison of a smartphone app with or without 2-way messaging among chronic pain patients: who benefits from a pain app? Clin J Pain 2017 Aug;33(8):676-686 [FREE Full text] [CrossRef] [Medline]
- Jibb LA, Stevens BJ, Nathan PC, Seto E, Cafazzo JA, Johnston DL, et al. Implementation and preliminary effectiveness of a real-time pain management smartphone app for adolescents with cancer: a multicenter pilot clinical study. Pediatr Blood Cancer 2017 Oct;64(10) [FREE Full text] [CrossRef] [Medline]
- Reade S, Spencer K, Sergeant JC, Sperrin M, Schultz DM, Ainsworth J, et al. Cloudy with a chance of pain: engagement and subsequent attrition of daily data entry in a smartphone pilot study tracking weather, disease severity, and physical activity in patients with rheumatoid arthritis. JMIR Mhealth Uhealth 2017 Mar 24;5(3):e37 [FREE Full text] [CrossRef] [Medline]
- Skrepnik N, Spitzer A, Altman R, Hoekstra J, Stewart J, Toselli R. Assessing the impact of a novel smartphone application compared with standard follow-up on mobility of patients with knee osteoarthritis following treatment with hylan G-F 20: a randomized controlled trial. JMIR Mhealth Uhealth 2017 May 09;5(5):e64 [FREE Full text] [CrossRef] [Medline]
- Chan YY, Wang P, Rogers L, Tignor N, Zweig M, Hershman SG, et al. The asthma mobile health study, a large-scale clinical observational study using ResearchKit. Nat Biotechnol 2017 Apr;35(4):354-362 [FREE Full text] [CrossRef] [Medline]
- Cook KA, Modena BD, Simon RA. Improvement in asthma control using a minimally burdensome and proactive smartphone application. J Allergy Clin Immunol Pract 2016;4(4):730-7.e1 [FREE Full text] [CrossRef] [Medline]
- Fedele DA, McConville A, Graham Thomas J, McQuaid EL, Janicke DM, Turner EM, et al. Applying interactive mobile health to asthma care in teens (AIM2ACT): development and design of a randomized controlled trial. Contemp Clin Trials 2018 Dec;64:230-237. [CrossRef] [Medline]
- Kosse RC, Bouvy ML, de Vries TW, Kaptein AA, Geers HC, van Dijk L, et al. mHealth intervention to support asthma self-management in adolescents: the ADAPT study. Patient Prefer Adherence 2017 Mar 16;11:571-577 [FREE Full text] [CrossRef] [Medline]
- Agboola S, Palacholla RS, Centi A, Kvedar J, Jethwani K. A multimodal mHealth intervention (FeatForward) to improve physical activity behavior in patients with high cardiometabolic risk factors: rationale and protocol for a randomized controlled trial. JMIR Res Protoc 2016 May 12;5(2):e84 [FREE Full text] [CrossRef] [Medline]
- Goyal S, Morita PP, Picton P, Seto E, Zbib A, Cafazzo JA. Uptake of a consumer-focused mHealth application for the assessment and prevention of heart disease: the <30 days study. JMIR Mhealth Uhealth 2016 Mar 24;4(1):e32 [FREE Full text] [CrossRef] [Medline]
- Sakakibara BM, Ross E, Arthur G, Brown-Ganzert L, Petrin S, Sedlak T, et al. Using mobile-health to connect women with cardiovascular disease and improve self-management. Telemed J E Health 2017 Mar;23(3):233-239 [FREE Full text] [CrossRef] [Medline]
- Goyal S, Nunn CA, Rotondi M, Couperthwaite AB, Reiser S, Simone A, et al. A mobile app for the self-management of type 1 diabetes among adolescents: a randomized controlled trial. JMIR Mhealth Uhealth 2017 Jun 19;5(6):e82 [FREE Full text] [CrossRef] [Medline]
- Ryan EA, Holland J, Stroulia E, Bazelli B, Babwik SA, Li H, et al. Improved A1C levels in type 1 diabetes with smartphone App use. Can J Diabetes 2017 Feb;41(1):33-40. [CrossRef] [Medline]
- Sieber J, Flacke F, Link M, Haug C, Freckmann G. Improved glycemic control in a patient group performing 7-point profile self-monitoring of blood glucose and intensive data documentation: an open-label, multicenter, observational study. Diabetes Ther 2017 Oct;8(5):1079-1085 [FREE Full text] [CrossRef] [Medline]
- Desveaux L, Agarwal P, Shaw J, Hensel JM, Mukerji G, Onabajo N, et al. A randomized wait-list control trial to evaluate the impact of a mobile application to improve self-management of individuals with type 2 diabetes: a study protocol. BMC Med Inform Decis Mak 2016 Nov 15;16(1):144 [FREE Full text] [CrossRef] [Medline]
- Goh G, Tan NC, Malhotra R, Padmanabhan U, Barbier S, Allen JC, et al. Short-term trajectories of use of a caloric-monitoring mobile phone app among patients with type 2 diabetes mellitus in a primary care setting. J Med Internet Res 2015 Feb 03;17(2):e33 [FREE Full text] [CrossRef] [Medline]
- Kleinman NJ, Shah A, Shah S, Phatak S, Viswanathan V. Improved medication adherence and frequency of blood glucose self-testing using an m-Health platform versus usual care in a multisite randomized clinical trial among people with type 2 diabetes in India. Telemed J E Health 2017 Sep;23(9):733-740 [FREE Full text] [CrossRef] [Medline]
- Bot BM, Suver C, Neto EC, Kellen M, Klein A, Bare C, et al. The mPower study, Parkinson disease mobile data collected using ResearchKit. Sci Data 2016 Mar 03;3:160011 [FREE Full text] [CrossRef] [Medline]
- Hardinge M, Rutter H, Velardo C, Shah SA, Williams V, Tarassenko L, et al. Using a mobile health application to support self-management in chronic obstructive pulmonary disease: a six-month cohort study. BMC Med Inform Decis Mak 2015 Jun 18;15:46 [FREE Full text] [CrossRef] [Medline]
- Isetta V, Torres M, González K, Ruiz C, Dalmases M, Embid C, et al. A new mHealth application to support treatment of sleep apnoea patients. J Telemed Telecare 2017 Jan;23(1):14-18 [FREE Full text] [CrossRef] [Medline]
- Kaplan HC, Thakkar SN, Burns L, Chini B, Dykes DM, McPhail GL, et al. Protocol of a pilot study of technology-enabled coproduction in pediatric chronic illness care. JMIR Res Protoc 2017 Apr 28;6(4):e71 [FREE Full text] [CrossRef] [Medline]
- Langius-Eklöf A, Crafoord MT, Christiansen M, Fjell M, Sundberg K. Effects of an interactive mHealth innovation for early detection of patient-reported symptom distress with focus on participatory care: protocol for a study based on prospective, randomised, controlled trials in patients with prostate and breast cancer. BMC Cancer 2017 Jul 04;17(1):466 [FREE Full text] [CrossRef] [Medline]
- Ong S, Jassal SV, Miller JA, Porter EC, Cafazzo JA, Seto E, et al. Integrating a smartphone-based self-management system into usual care of advanced CKD. Clin J Am Soc Nephrol 2016 Dec 06;11(6):1054-1062 [FREE Full text] [CrossRef] [Medline]
- Pham Q, Cafazzo JA, Feifer A. Adoption, acceptability, and effectiveness of a mobile health app for personalized prostate cancer survivorship care: protocol for a realist case study of the Ned App. JMIR Res Protoc 2017 Oct 12;6(10):e197 [FREE Full text] [CrossRef] [Medline]
- Taki S, Lymer S, Russell CG, Campbell K, Laws R, Ong KL, et al. Assessing user engagement of an mHealth intervention: development and implementation of the growing healthy app engagement index. JMIR Mhealth Uhealth 2017 Jun 29;5(6):e89 [FREE Full text] [CrossRef] [Medline]
- Thies K, Anderson D, Cramer B. Lack of adoption of a mobile app to support patient self-management of diabetes and hypertension in a federally qualified health center: interview analysis of staff and patients in a failed randomized trial. JMIR Hum Factors 2017 Oct 03;4(4):e24 [FREE Full text] [CrossRef] [Medline]
- Toro-Ramos T, Kim Y, Wood M, Rajda J, Niejadlik K, Honcz J, et al. Efficacy of a mobile hypertension prevention delivery platform with human coaching. J Hum Hypertens 2017 Dec;31(12):795-800 [FREE Full text] [CrossRef] [Medline]
- Beiwinkel T, Hey S, Bock O, Rössler W. Supportive mental health self-monitoring among smartphone users with psychological distress: protocol for a fully mobile randomized controlled trial. Front Public Health 2017;5:249 [FREE Full text] [CrossRef] [Medline]
- Lei H, Nahum-Shani I, Lynch K, Oslin D, Murphy SA. A "SMART" design for building individualized treatment sequences. Annu Rev Clin Psychol 2012;8:21-48 [FREE Full text] [CrossRef] [Medline]
- Almirall D, Nahum-Shani I, Sherwood NE, Murphy SA. Introduction to SMART designs for the development of adaptive interventions: with application to weight loss research. Transl Behav Med 2014 Sep;4(3):260-274 [FREE Full text] [CrossRef] [Medline]
- Collins LM, Murphy SA, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med 2007 May;32(5 Suppl):S112-S118 [FREE Full text] [CrossRef] [Medline]
- Serrano KJ, Yu M, Coa KI, Collins LM, Atienza AA. Mining health app data to find more and less successful weight loss subgroups. J Med Internet Res 2016 Dec 14;18(6):e154 [FREE Full text] [CrossRef] [Medline]
- Morrison C, Doherty G. Analyzing engagement in a web-based intervention platform through visualizing log-data. J Med Internet Res 2014 Nov 13;16(11):e252 [FREE Full text] [CrossRef] [Medline]
- Dennison L, Morrison L, Conway G, Yardley L. Opportunities and challenges for smartphone applications in supporting health behavior change: qualitative study. J Med Internet Res 2013 Apr 18;15(4):e86 [FREE Full text] [CrossRef] [Medline]
- Morrison LG, Hargood C, Lin SX, Dennison L, Joseph J, Hughes S, et al. Understanding usage of a hybrid website and smartphone app for weight management: a mixed-methods study. J Med Internet Res 2014 Oct 22;16(10):e201 [FREE Full text] [CrossRef] [Medline]
- Morrison LG, Geraghty AW, Lloyd S, Goodman N, Michaelides DT, Hargood C, et al. Comparing usage of a web and app stress management intervention: an observational study. Internet Interv 2018 Jun 21;12:74-82 [FREE Full text] [CrossRef] [Medline]
- Stinson JN, Lalloo C, Harris L, Isaac L, Campbell F, Brown S, et al. iCanCope with Pain™: user-centred design of a web- and mobile-based self-management program for youth with chronic pain based on identified health care needs. Pain Res Manag 2014;19(5):257-265 [FREE Full text] [Medline]
- Yardley L, Choudhury T, Patrick K, Michie S. Current issues and future directions for research into digital behavior change interventions. Am J Prev Med 2016 Nov;51(5):814-815. [CrossRef] [Medline]
- Patrick K, Hekler EB, Estrin D, Mohr DC, Riper H, Crane D, et al. The pace of technologic change: implications for digital health behavior intervention research. Am J Prev Med 2016 Nov;51(5):816-824. [CrossRef] [Medline]
- Scherer E, Ben-Zeev D, Li Z, Kane J. Analyzing mHealth engagement: joint models for intensively collected user engagement data. JMIR Mhealth Uhealth 2017 Jan 12;5(1):e1 [FREE Full text] [CrossRef] [Medline]
- Tignor N, Wang P, Genes N, Rogers L, Hershman SG, Scott ER, et al. Methods for clustering time series data acquired from mobile health apps. Pac Symp Biocomput 2017;22:300-311 [FREE Full text] [CrossRef] [Medline]
- Nilsen W. American Association for the Advancement of Science. 2015. mHealth's Revolution: Balancing Help and Harm. URL: https://www.aaas.org/sites/default/files/Nilsen%20mHealths%20Revolution%20Balancing%20Help%20and%20Harm.pdf [accessed 2018-12-24] [WebCite Cache]
- Kumar S, Nilsen WJ, Abernethy A, Atienza A, Patrick K, Pavel M, et al. Mobile health technology evaluation: the mHealth evidence workshop. Am J Prev Med 2013 Aug;45(2):228-236 [FREE Full text] [CrossRef] [Medline]
- The Lancet. Is digital medicine different? Lancet 2018 Dec 14;392(10142):95. [CrossRef] [Medline]
- The National Institute for Health and Care Excellence. NICE IAPT assessment programme: eligibility requirements of digitally enabled therapy technologies. URL: https://www.nice.org.uk/Media/Default/About/what-we-do/NICE-advice/IAPT/eligibility-and-prioritisation-criteria.pdf [accessed 2018-12-22] [WebCite Cache]
Abbreviations
eHealth: electronic health
IQR: interquartile range
mHealth: mobile health
PTSD: posttraumatic stress disorder
RCT: randomized controlled trial
Edited by S Kitsiou; submitted 14.08.18; peer-reviewed by S Kelders, H Oh, E Meinert; comments to author 28.09.18; revised version received 04.10.18; accepted 10.12.18; published 18.01.19
Copyright©Quynh Pham, Gary Graham, Carme Carrion, Plinio P Morita, Emily Seto, Jennifer N Stinson, Joseph A Cafazzo. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 18.01.2019.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mhealth and uhealth, is properly cited. The complete bibliographic information, a link to the original publication on http://mhealth.jmir.org/, as well as this copyright and license information must be included.