Original Paper
Abstract
Background: Digital health interventions have gained momentum to change health behaviors such as physical activity (PA) and sedentary behavior (SB). Although these interventions show promising results in terms of behavior change, they still suffer from high attrition rates, which reduce their potential impact and accessibility. To reduce attrition rates in the future, there is a need to investigate the reasons why individuals stop using the interventions. Certain demographic variables have already been related to attrition; however, the role of psychological determinants of behavior change as predictors of attrition has not yet been fully explored.
Objective: The aim of this study was to examine when, which, and why users stopped using a digital health intervention. In particular, we aimed to investigate whether psychological determinants of behavior change were predictors for attrition.
Methods: The sample consisted of 473 healthy adults who participated in the intervention MyPlan 2.0 to promote PA or reduce SB. The intervention was developed using the health action process approach (HAPA) model, which describes psychological determinants that guide individuals in changing their behavior. If participants stopped using the intervention, a questionnaire with 8 questions concerning attrition was sent by email. To analyze when users stopped using the intervention, descriptive statistics were used per part of the intervention (including pre- and posttest measurements and the 5 website sessions). To analyze which users stopped using the intervention, demographic variables, behavioral status, and HAPA-based psychological determinants at pretest measurement were investigated as potential predictors of attrition using logistic regression models. To analyze why users stopped using the intervention, descriptive statistics of scores on the attrition-related questionnaire were used.
Results: The study demonstrated that 47.9% (227/473) of participants stopped using the intervention, and dropout occurred mainly at the beginning of the intervention. The results seem to indicate that gender and participant scores on the psychological determinants action planning, coping planning, and self-monitoring were predictors of first session, third session, or whole intervention completion. The most endorsed reasons to stop using the intervention were the time-consuming nature of questionnaires (55%), not having time (50%), dissatisfaction with the content of the intervention (41%), technical problems (39%), already meeting the guidelines for PA/SB (31%), and, to a lesser extent, the experience of medical/emotional problems (16%).
Conclusions: This study provides some directions for future studies. To decrease attrition, it will be important to personalize interventions on different levels; questionnaires (whether for research purposes or tailoring) should be kept to a minimum, especially at the beginning of interventions, for example by using objective monitoring devices; and technical aspects of digital health interventions should be thoroughly tested in advance.
Trial Registration: ClinicalTrials.gov NCT03274271; https://clinicaltrials.gov/ct2/show/NCT03274271
International Registered Report Identifier (IRRID): RR2-10.1186/s13063-019-3456-7
doi:10.2196/30583
Introduction
Digital health interventions have gained momentum to change health behaviors such as physical activity (PA) and sedentary behavior (SB) [ , ]. Their potential value lies in the ability to reach large groups in a personal, cost-effective, and time-efficient way [ - ]. Previous interventions have shown promising results in terms of behavior change [ - ]. Nevertheless, there are differences in use and completion [ - ]. It is important to ensure that participants do not drop out of the intervention. So far, attrition rates in digital health interventions are high (50% to 80%) [ , , ]. As a result, interventions may lose part of their potential and accessibility. Also, effective evaluation of trials becomes a challenge. In order to reduce attrition, there is a need to investigate the reasons why individuals drop out. This question is often not addressed, as most studies focus on the effectiveness of interventions [ ]. The answers may, however, provide valuable information for developing future digital health interventions [ ]. In response to this problem, there has been a call for a science of attrition [ ].

In order to understand attrition, 3 questions could be considered [ ]. First, when do users stop using the intervention? Answers to this question may allow identification of weak parts of an intervention and may help in redesigning, restructuring, or removing certain parts. Most interventions only describe attrition rates at the end of the intervention. However, reporting attrition proportions at several time points can provide valuable information. For example, different patterns of attrition may occur: (1) a constant proportion of users may drop out of the intervention, (2) users may stay in the intervention first out of curiosity, which relates to the novelty effect (ie, the human tendency for heightened engagement with a novel phenomenon [ ]), and then drop out when the novelty has worn off until eventually a stable group remains, or (3) a group of users drops out of the intervention immediately and a stable group of users remains [ ]. Each pattern could indicate different underlying causes of attrition.

Second, which users stop using the intervention? An answer to this question may direct researchers to tailor the content of the intervention to particular subgroups. Demographic variables such as being male [ , , ], having a young age [ - ], having a lower educational level [ ], and not having a partner [ ] have been related to higher attrition rates in digital health interventions. The role of BMI in relation to attrition shows inconsistent results [ , ]. Also, the behavioral status of the participant at the start of the intervention may have an effect. Participants meeting the guidelines for moderate physical activity and for vegetable consumption at baseline showed lower attrition rates in comparison with those who did not meet these guidelines [ , ]. Davis and Addis [ ] argued for investigation into the psychological determinants of behavior as predictors of attrition. Users with a low intention to change behavior have already been shown to drop out more often [ , ]. However, the role of other psychological determinants as predictors of attrition has not yet been fully explored.

Third, why do users stop using the intervention? Answers to this question may help researchers identify whether attrition is caused by features embedded in the intervention (eg, design of the intervention or technical problems with the intervention, lack of useful intervention content, too many questionnaires) or by reasons outside the intervention (eg, no interest in the topic, medical or emotional problems, lack of time).
In summary, the aim of this paper is threefold. The first aim is to examine when users stop using the intervention. The second aim is to investigate which users stop using the intervention, informed by demographic variables, behavioral status at the beginning of the intervention, and psychological determinants. The third aim is to explore why users stop using the intervention by describing reasons for noncompletion.
This paper addresses these questions through secondary analysis of a digital health intervention that aimed to increase PA or reduce SB among the general population [ , ]. This intervention was developed using the health action process approach (HAPA) model, which describes psychological determinants that guide individuals in changing their behavior [ ]. It is a 2-phase model that includes (1) motivational processes identified by determinants such as risk perception, outcome expectancies, and self-efficacy leading to a behavioral intention and (2) volitional processes identified by determinants such as action planning, coping planning, and self-monitoring bridging the gap between intention and the actual behavior [ ]. As HAPA has been shown to effectively change behavior, the HAPA-based psychological determinants are considered important predictors of behavior change [ , ]. These predictors might not only influence behavior change but also the decision of whether to stop using an intervention.

Methods
Data Source
The data reported in this paper were from the MyPlan 2.0 factorial randomized controlled trial registered at ClinicalTrials.gov (NCT03274271) and approved by the Ghent University Hospital Ethics Committee. The protocol of the trial can be found elsewhere [ ].

Intervention
The MyPlan 2.0 digital health intervention consisted of a website and an optional mobile app to promote PA or reduce SB in healthy adults from the general population. MyPlan 2.0 was based on the HAPA model and consisted of a number of behavior change techniques (BCTs) aiming to influence participants’ HAPA-based psychological determinants of behavior change. The BCTs used in this study were goal setting, providing information on consequences of behavior, providing feedback on performance, social support, action planning, coping planning, self-monitoring, and reviewing behavior goals. These BCTs are described below.
Before the start of the intervention, participants chose which behavior (PA or SB) they wanted to improve (ie, goal setting). Depending on their choice, they were directed to the version of MyPlan 2.0 targeting PA or SB. The structure of the intervention was identical for the two behaviors.
The website is considered the main part of the intervention and consisted of 5 website sessions, with 1 week between each session (see ). Participants were expected to go through each of these sessions. The structure of the website sessions was fixed. In the first session, participants created a profile, were offered an optional quiz with information about the benefits of the selected target behavior (ie, providing information on consequences of behavior), and received tailored feedback on the current state of their chosen behavior (ie, providing feedback on performance). Thereafter, participants created an action plan by specifying how they wanted to reach their PA or SB goal, what they wanted to do, and where and when they wanted to do it (ie, action planning). Consequently, they identified potential barriers and thought about possible solutions (ie, coping planning). Thereafter, participants were prompted to monitor their behavior via the app or other options such as writing in their diary or on their calendar (ie, self-monitoring). At the end of the first session, they could read about how they could obtain social support from their partner, friends, family, or colleagues. In the 4 follow-up sessions, participants were asked to reflect on their progress of behavior change over the past week by evaluating their PA or SB goal (ie, reviewing behavior goals). They were also prompted to adapt or maintain their action plan, coping plan, and self-monitoring method. Screenshots of the website can be found in .

The app was offered to participants as an optional tool to provide support on a daily basis. The app was synchronized with the website and developed as an extension to support users with the plans they created in the website sessions. Use of the app was not mandatory. It consisted of 5 modules through which participants could freely navigate. In the first module, participants could again complete a quiz regarding the benefits of more PA or less SB. In the second module, participants could review their action plan (which was created on the website) and change their plan throughout the week (ie, action planning). Moreover, the app reminded participants of their plan by sending notifications at scheduled times. In the third module, they could select barriers and receive an overview of possible solutions (ie, coping planning). In the fourth module, participants received a notification every evening to monitor their behavior by rating whether they succeeded in their plan for the day on a scale from 0 to 5 (ie, self-monitoring). In the fifth module, users could collect medals by completing the website sessions, completing quizzes, and monitoring their behavior. These elements of gamification (ie, “the use of game design elements in nongaming contexts” [ ]) were added to increase engagement with the intervention [ ]. Screenshots of the app can be found in .

Intervention Content as Part of the Design
Intervention content differed as part of the design of the MyPlan 2.0 factorial randomized controlled trial [ ]. Participants were randomly allocated to 8 different groups to evaluate the efficacy of 3 BCTs (ie, action planning, coping planning, and self-monitoring) and their combinations. As such, each group received a different version of the intervention, in which the 3 different BCTs were combined ( ). In both the website and the app, the BCTs could easily be removed or added in order to create the different groups [ ]. Nevertheless, each participant received a basic intervention including the following BCTs: goal setting, providing information on consequences of behavior, providing feedback on performance, social support, and reviewing behavior goals [ ].

Group | Action planning | Coping planning | Self-monitoring |
Group 1 | +a | + | + |
Group 2 | + | + | –b |
Group 3 | + | – | + |
Group 4 | – | + | + |
Group 5 | + | – | – |
Group 6 | – | + | – |
Group 7 | – | – | + |
Group 8 | – | – | – |

a+: group received the intervention content including the behavior change technique.
b–: group received the intervention content without the behavior change technique.
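To make the factorial layout concrete, the sketch below (Python; the dictionary simply transcribes the table above, and the simple random draw is only an illustration, not the trial's actual randomization procedure) maps each of the 8 groups to its on/off combination of the 3 BCTs.

```python
import random

# BCT on/off combinations per group, copied from the table above:
# (action planning, coping planning, self-monitoring).
GROUPS = {
    1: (True, True, True),
    2: (True, True, False),
    3: (True, False, True),
    4: (False, True, True),
    5: (True, False, False),
    6: (False, True, False),
    7: (False, False, True),
    8: (False, False, False),
}

def allocate_participant(rng: random.Random) -> dict:
    """Draw a group at random and return which BCT modules should be enabled
    in the website and app for that participant (illustrative helper)."""
    group = rng.choice(sorted(GROUPS))
    action_planning, coping_planning, self_monitoring = GROUPS[group]
    return {
        "group": group,
        "action_planning": action_planning,
        "coping_planning": coping_planning,
        "self_monitoring": self_monitoring,
    }

if __name__ == "__main__":
    rng = random.Random(42)  # seed only for reproducibility of the example
    print(allocate_participant(rng))
```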
Participants and Procedure
The sample consisted of 473 participants who were recruited between February and December 2018 at the city library of Ghent or through social media. Inclusion criteria were a minimum age of 18 years, speaking Dutch, having internet access at home or work, and owning a smartphone (iOS or Android). Participants completed the 7 items of the Physical Activity Readiness Questionnaire as a screening instrument to detect individuals at risk for adverse effects when being more physically active [ ]. Participants who answered no to all items were eligible for the study.

The flowchart for MyPlan 2.0 can be found in . Participants completed pretest measurements including demographic variables, psychological determinants of behavior change, and questions assessing their current PA or SB level. When the pretest measurements were completed, participants were randomly allocated to 1 of the 8 different versions of the intervention ( ). Immediately after the randomization, participants could start with the intervention (a maximum of 1 week after the pretest measurements). The intervention consisted of 5 consecutive website sessions, ideally with 1 week between each session, and the optional mobile app, which could be used at any time during the intervention. Approximately 1 week after completing the last session, participants completed posttest measurements. The pretest and posttest measurements were conducted via an online survey tool (LimeSurvey GmbH).

Boosting strategies were used to encourage completion of each part (pretest and posttest measurements and the 5 website sessions): participants who did not complete a certain part after 1 week were sent a reminder; if they had not completed the part after 2 weeks, they were contacted by phone by the researcher (HS). If there was no response after 3 weeks, the participant was considered a noncompleter, and attrition was documented to have occurred during that specific part of the intervention. As such, the duration of the study could differ for each participant, depending on when they completed each part of the study.
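As an illustration of the boosting schedule described above, the following minimal sketch (a hypothetical helper, not part of the study software) returns the follow-up action for an uncompleted intervention part based on the time elapsed since that part became available.

```python
from datetime import date, timedelta

# Illustrative thresholds taken from the procedure described above.
REMINDER_AFTER = timedelta(weeks=1)      # e-mail reminder
PHONE_CALL_AFTER = timedelta(weeks=2)    # phone call by the researcher
NONCOMPLETER_AFTER = timedelta(weeks=3)  # considered a noncompleter

def follow_up_action(part_available_on: date, today: date, completed: bool) -> str:
    """Return the boosting action for one intervention part (hypothetical helper)."""
    if completed:
        return "completed"
    elapsed = today - part_available_on
    if elapsed >= NONCOMPLETER_AFTER:
        return "noncompleter"
    if elapsed >= PHONE_CALL_AFTER:
        return "phone call"
    if elapsed >= REMINDER_AFTER:
        return "send reminder"
    return "wait"

# Example: a participant who has not completed a session, 16 days after it opened.
print(follow_up_action(date(2018, 3, 1), date(2018, 3, 17), completed=False))  # -> "phone call"
```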
After finishing website sessions 1 and 3, participants were invited to complete an additional questionnaire assessing psychological determinants during the intervention. After completing session 3, participants were also asked to complete another questionnaire assessing the use of the app [ ]. Noncompletion of these additional questionnaires did not affect the continuation of the intervention. Data from these questionnaires were not used for analyses in this study.

Measures
Definition of Attrition
In this paper, we differentiated between 2 types of attrition: (1) nonusage attrition, which refers to participants who were not using the intervention (ie, not completing the website sessions) and (2) dropout attrition, which refers to participants who were lost to follow-up because they stopped completing questionnaires for research purposes (ie, did not complete posttest measurements). Here, nonusage attrition automatically equaled dropout attrition because of the linear design of the study (eg, it was not possible to start with session 4 on the website if session 3 was not completed). Consequently, in this paper, we will simply use the term attrition. Not using the app was not considered attrition because use of the app was optional. Moreover, not completing the additional questionnaires after website sessions 1 and 3 was not considered attrition because it was still possible to proceed with the online website sessions.
Demographic Variables
At the pretest measurement, the following demographic variables were assessed: age, gender, education level (categorized as not having vs having a college/university degree), BMI (categorized as not overweight [≤25 kg/m²] vs overweight [>25 kg/m²]), and marital status (categorized as not having a partner vs having a partner).
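A minimal sketch of how these dichotomizations could be derived from raw pretest data is shown below (Python/pandas; the data frame and column names are illustrative and not taken from the trial database).

```python
import pandas as pd

# Hypothetical pretest records; values and column names are made up for illustration.
pretest = pd.DataFrame({
    "age": [24, 51, 38],
    "gender": ["female", "male", "female"],
    "bmi": [22.4, 27.9, 25.0],
    "degree": ["college", "none", "university"],
    "partner": [False, True, True],
})

# Dichotomize variables as described above.
pretest["overweight"] = (pretest["bmi"] > 25).astype(int)  # >25 kg/m2 = overweight
pretest["high_education"] = pretest["degree"].isin(["college", "university"]).astype(int)
pretest["has_partner"] = pretest["partner"].astype(int)

print(pretest[["age", "gender", "overweight", "high_education", "has_partner"]])
```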
Behavioral Status
The current level of PA or SB was assessed at pretest measurement to determine the behavioral status at the beginning of the intervention. For PA, the Dutch long version of the International Physical Activity Questionnaire [ ] was used to measure moderate-to-vigorous intensity PA (MVPA) in minutes per week. For SB, the Dutch 7-day sedentary behavior self-report questionnaire [ ] was used to measure total sedentary time in hours per day. Behavioral status at the beginning of the intervention was categorized as not meeting the guidelines (<150 minutes per week of MVPA or >8 hours per day of sitting time) versus meeting the guidelines (≥150 minutes per week of MVPA or ≤8 hours per day of sitting time).

Psychological Determinants
The HAPA-based psychological determinants were measured at pretest measurement using a set of 26 items (ie, at least 3 items per determinant), which can be found in . As described in our protocol paper [ ], the set was based on the HAPA model and was iteratively developed and validated by an expert panel using cognitive interviewing [ , ] and a discriminant content validity method [ ]. The same set of items was used for the 2 behaviors but the items were adapted to either PA or SB. Risk perception was assessed by 4 items; 1 of the items was “I am a person who is prone to high blood pressure.” The 4 items showed poor internal consistency (α=0.59). Removing the last item (“I am a person who is prone to have depression”) increased the internal consistency (α=0.71). Therefore, only the first 3 items were used to assess risk perception. Outcome expectancies were assessed with 5 items; 1 of the items for PA was “If I start being physically active regularly, I will feel better afterward.” Also here, the internal consistency was low (α=0.56). Removing the last item (“If I am physically active regularly, I have the feeling I lose time”) increased the internal consistency (α=0.68). As a result, only the first 4 items were taken into account to assess outcome expectancies. Self-efficacy was assessed by 5 items; 1 of the items for SB was “I am sure I can reduce my sitting time, even when I feel tired.” The items showed good internal consistency (α=0.83). Three items were used to assess intention; 1 of the items for PA was “I intend to be physically active regularly.” The internal consistency for these items was good (α=0.87). Action planning was assessed by 3 items; 1 of the items for PA was “I know exactly what to do (how, where, when, ...) to be physically active regularly.” All items showed good internal consistency (α=0.84). For coping planning, 3 items were used; 1 of the items for PA was “I already have thought about possible solutions in case I encounter obstacles in order to be physically active regularly (eg, if the swimming pool is closed, I go for a walk instead).” Also here, the items showed good internal consistency (α=0.88). Finally, 3 items were used to assess self-monitoring; 1 of the items for SB was “I am constantly monitoring how long I sit.” The internal consistency for these items was good (α=0.76). Participants rated all items on a 5-point response scale (1=totally disagree, 2=somewhat disagree, 3=neutral, 4=somewhat agree, 5=totally agree). For each determinant, the mean score of the items was used in the analyses.

Attrition-Related Questionnaire
When a participant was determined to be a noncompleter of a certain intervention part, a questionnaire with reasons for discontinuation was sent by email. Participants could indicate whether they found the reason for attrition totally not applicable, not applicable, neutral, applicable, or totally applicable in response to 8 statements concerning attrition. The questions were based on attrition-related factors described in an article by Eysenbach [ ].

Statistical Analyses
Attrition Pattern
To analyze when users stopped using the intervention (aim 1), the numbers of participants per part of the intervention were described. For this paper, the different parts included the pretest and posttest measurements as well as the 5 website sessions (7 parts in total, see ). Each part was considered completed if participants completed the last question (for the pretest and posttest measurements) or visited the last page on the website (for the website sessions). Descriptive analyses were performed in Excel (Microsoft Corp).

Predictors of Attrition
To analyze which users stopped using the intervention, the following predictors of attrition were investigated: demographic variables, behavioral status, and psychological determinants at pretest measurement. Analyses were performed in SPSS (version 26, IBM Corp). Logistic regression models were fitted with attrition as a dependent variable at different time points (the number of the logistic regression models depended on the attrition pattern of aim 1). All independent variables (demographic variables, behavioral status, and psychological determinants) were entered separately into the logistic regression models. P<.05 was considered statistically significant, whereas P values between .05 and .10 were considered borderline significant; 95% confidence intervals were also reported.
In order to investigate whether different intervention content as part of the design of MyPlan 2.0 was a reason for attrition, 2 other predictors were added to the logistic regression models described above: the group to which the participants were allocated (group 1-8) and the choice of behavior participants wanted to improve (PA versus SB).
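A minimal sketch of this analytic approach is given below (Python/statsmodels rather than SPSS, which was used in the study; the file name and column names are assumptions for illustration). It fits one univariate logistic regression per predictor and outcome and reports odds ratios with 95% confidence intervals, mirroring the separate entry of each independent variable described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis data frame: one row per participant, with pretest
# predictors and binary completion outcomes (names are illustrative).
df = pd.read_csv("myplan2_attrition.csv")  # assumed file, not part of the paper

predictors = ["age", "gender_male", "high_education", "overweight", "has_partner",
              "meets_guidelines", "self_efficacy", "outcome_expectancies",
              "risk_perception", "intention", "action_planning",
              "coping_planning", "self_monitoring", "group", "chose_sb"]

rows = []
for outcome in ["completed_session1", "completed_session3", "completed_all"]:
    for pred in predictors:
        # Each predictor is entered separately (univariate model), as described above.
        fit = smf.logit(f"{outcome} ~ {pred}", data=df).fit(disp=False)
        or_ = np.exp(fit.params[pred])
        ci_low, ci_high = np.exp(fit.conf_int().loc[pred])
        rows.append({"outcome": outcome, "predictor": pred, "OR": or_,
                     "CI_low": ci_low, "CI_high": ci_high, "p": fit.pvalues[pred]})

print(pd.DataFrame(rows).round(3))
```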
Reasons for Attrition
To analyze why users stopped using the intervention, participants' scores on the attrition-related questionnaire were used. For each statement, the number and percentage of participants who found it (totally) not applicable, neutral, or (totally) applicable were reported. Descriptive analyses were performed in Excel (Microsoft Corp).
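As an illustration, the sketch below (Python/pandas; the 1-5 coding and the example responses are made up) collapses 5-point ratings for one attrition statement into the 3 categories reported in the results and tabulates counts and percentages.

```python
import pandas as pd

# Hypothetical responses to one attrition statement on the 5-point scale
# (1=totally not applicable ... 5=totally applicable); values are made up.
responses = pd.Series([1, 2, 5, 4, 3, 5, 1, 4, 4, 2])

# Collapse into the three categories used in the results tables.
labels = pd.cut(responses, bins=[0, 2, 3, 5],
                labels=["(totally) not applicable", "neutral", "(totally) applicable"])

counts = labels.value_counts().reindex(
    ["(totally) not applicable", "neutral", "(totally) applicable"])
percentages = (counts / counts.sum() * 100).round(0)
print(pd.concat([counts, percentages], axis=1, keys=["n", "%"]))
```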
Results
Participant Characteristics
In total, 473 participants agreed to participate in the MyPlan 2.0 trial, completed pretest measurement, and were therefore considered users in this study. Of these participants, the mean age was 36.7 (SD 16.3) years, 69.1% (327/473) were female, 66.4% (314/473) had a high level of education (college or university degree), 30.2% (143/473) were overweight, 42.3% (200/473) had a partner, and 49.0% (232/473) met the guidelines for either PA or SB. Descriptive statistics of the psychological determinants at pretest measurement are provided in . In addition, the number and percentage of participants in each group as well as the percentage of participants who chose PA or SB are provided ( ).

Variable | Participants (n=473) |
Age (years), mean (SD) | 36.7 (16.3) | |
Gender (female), n (%) | 327 (69.1) | |
High level of education (college/university degree), n (%) | 314 (66.4) |
BMI (kg/m2), mean (SD) | 23.5 (3.7) | |
Overweight, n (%) | 143 (30.2) | |
Marital status (with partner), n (%) | 200 (42.3) | |
Behavioral status (meets guidelines of PAa or SBb), n (%) | 232 (49.0) | |
Self-efficacyc, mean (SD) | 3.54 (0.62) | |
Outcome expectanciesc, mean (SD) | 3.95 (0.49) | |
Risk perceptionc, mean (SD) | 2.08 (0.66) | |
Intentionc, mean (SD) | 4.08 (0.55) | |
Action planningc, mean (SD) | 2.84 (0.81) | |
Coping planningc, mean (SD) | 2.47 (0.82) | |
Self-monitoringc, mean (SD) | 2.06 (0.85) | |
Participants in each groupd, n (%) | ||
Group 1 | 59 (12.5) | |
Group 2 | 60 (12.7) | |
Group 3 | 56 (11.8) | |
Group 4 | 56 (11.8) | |
Group 5 | 61 (12.9) | |
Group 6 | 59 (12.5) | |
Group 7 | 61 (12.9) | |
Group 8 | 61 (12.9) | |
Choice of behavior (participants who chose PA), n (%) | 335 (70.8) |
aPA: physical activity.
bSB: sedentary behavior.
cMean score (SD) on a 5-point scale.
dSee for information about each group.

Attrition Pattern
Of the participants, 47.9% (227/473) did not complete the intervention. The attrition pattern of the intervention is shown in . The biggest loss of participants was found in the early stage of the intervention; 20.7% (98/473) dropped out before completing the first website session, 14.8% (70/473) before completing the second session, and 6.1% (29/473) before the third session. This means that 41.6% (197/473) of participants dropped out before the third session and only 6.3% (30/473) after that session, which suggests a steady state of attrition after that part of the intervention.

Predictors of Attrition
Predictors of attrition were investigated at 3 time points ( ). In order to do so, 3 logistic regression models were fitted: (1) identification of the predictors of first session completion, (2) identification of predictors of third session completion, and (3) identification of predictors of whole intervention completion (ie, completion of all 7 parts). This decision was based on the attrition pattern of aim 1: a large number of participants did not complete the first session, and a steady state of attrition was found after third session completion.

Characteristics | First session completion (0=dropout before first session, 1=first session completion), ORa (95% CI) | P value | Third session completion (0=dropout before third session, 1=third session completion), OR (95% CI) | P value | Whole intervention completion (0=dropout before posttest measurements, 1=whole intervention completion), OR (95% CI) | P value |
Demographic variables | |||||||
Age | 1.0 (1.0-1.0) | 0.91 | 1.0 (1.0-1.1) | 0.21 | 1.0 (1.0-1.0) | 0.16 | |
Gender (0=female, 1=male) | 1.4 (0.8-2.3) | 0.20 | 1.4 (1.0-2.2) | 0.08 | 1.4 (1.0-2.1) | 0.07 | |
Education (0=no college/university degree, 1=college/university degree) | 1.2 (0.8-1.9) | 0.46 | 1.2 (0.8-1.7) | 0.46 | 1.0 (0.7-1.5) | 0.89 | |
BMI (0=not overweight, 1=overweight) | 0.9 (0.5-1.4) | 0.57 | 0.8 (0.6-1.2) | 0.38 | 0.9 (0.6-1.3) | 0.52 | |
Marital status (0=no partner, 1=partner) | 1.3 (0.8-2.1) | 0.23 | 1.1 (0.8-1.6) | 0.64 | 1.0 (0.7-1.5) | 0.95 | |
Baseline norm (0=did not meet guidelines, 1=met guidelines) | 1.1 (0.7-1.7) | 0.81 | 1.0 (0.7-1.4) | 0.80 | 1.0 (0.7-1.5) | 0.95 | |
Psychological determinants | |||||||
Self-efficacy | 1.0 (0.7-1.5) | 0.86 | 1.2 (0.9-1.6) | 0.30 | 1.1 (0.8-1.5) | 0.54 | |
Outcome expectancies | 1.1 (0.7-1.7) | 0.62 | 1.1 (0.8-1.5) | 0.63 | 1.0 (0.7-1.4) | 0.84 | |
Risk perception | 0.9 (0.7-1.2) | 0.40 | 0.9 (0.7-1.2) | 0.64 | 0.9 (0.7-1.2) | 0.44 | |
Intention | 1.0 (0.7-1.5) | 0.98 | 1.0 (0.7-1.4) | 0.98 | 1.0 (0.7-1.4) | 0.84 | |
Action planning | 0.8 (0.6-1.0) | 0.08 | 0.8 (0.7-1.0) | 0.09 | 0.9 (0.7-1.1) | 0.25 | |
Coping planning | 0.8 (0.6-1.0) | 0.05 | 0.9 (0.8-1.2) | 0.64 | 1.0 (0.8-1.2) | 0.65 | |
Self-monitoring | 0.8 (0.6-1.0) | 0.19 | 0.8 (0.6-1.0) | 0.04 | 0.8 (0.7-1.0) | 0.08 | |
Intervention content as part of the design | |||||||
Group | 1.0 (0.9-1.1) | 0.45 | 1.1 (1.0-1.1) | 0.22 | 1.1 (1.0-1.1) | 0.18 | |
Behavior choice (0=participant chose PAb, 1=participant chose SBc) | 1.2 (0.7-1.9) | 0.52 | 1.3 (0.8-1.9) | 0.26 | 1.2 (0.8-1.8) | 0.29 |
aOR: odds ratio.
bPA: physical activity.
cSB: sedentary behavior.
Predictors of First Session Completion
No significant predictors of first session completion were found. However, the psychological determinants action planning and coping planning were found to be borderline significant (odds ratio [OR] 0.782 [95% CI 0.595-1.028], P=.08 and OR 0.769 [95% CI 0.590-1.002], P=.05, respectively), with participants with a higher score on action planning and coping planning being less likely to complete the first website session ( ).

Furthermore, the group to which participants were allocated and the choice of behavior participants wanted to improve as part of the design of MyPlan 2.0 were not significant predictors of first session completion.
Predictors of Third Session Completion
The psychological determinant self-monitoring significantly predicted whether participants completed the third session (OR 0.801 [95% CI 0.646-0.993], P=.04), with participants with a higher score on self-monitoring being less likely to complete the third website session. Furthermore, the demographic variable gender and the psychological determinant action planning were found to be borderline significant (OR 1.440 [95% CI 0.963-2.155], P=.08 and OR 0.822 [95% CI 0.655-1.032], P=.09, respectively). Men were more likely to complete the third website session, and participants with a higher score on action planning were less likely to complete the third website session ( ).

Furthermore, the group to which participants were allocated and the choice of behavior participants wanted to improve as part of the design of MyPlan 2.0 were not significant predictors of third session completion.
Predictors of Whole Intervention Completion
The demographic variable gender and the psychological determinant self-monitoring were found to be borderline significant (OR 1.437 [95% CI 0.969-2.13], P=.07 and OR 0.828 [95% CI 0.669-1.025], P=.08, respectively). Men were more likely to complete the whole intervention, and participants with a higher score on self-monitoring were less likely to complete the whole intervention ( ).

Furthermore, the group to which participants were allocated and the choice of behavior participants wanted to improve as part of the design of MyPlan 2.0 were not significant predictors of whole intervention completion.
Reasons for Attrition
The reasons why participants stopped using the intervention were obtained from 51 of 227 participants (22% of all noncompleters) and can be found in . We were not able to contact the other participants. Participants who were older (OR 1.021 [95% CI 1.002-1.041]) and had a higher educational level (OR 2.938 [95% CI 1.345-6.418]) were more likely to complete the questionnaire.

The most endorsed reasons to stop using the intervention were “Filling out the questionnaires took a lot of my time” (28/51, 55%), “I don’t have time” (26/51, 50%), “The intervention doesn’t provide useful content” (21/51, 41%), “I experienced technical problems with the website or app” (20/51, 39%), “I already meet the health guidelines for PA/SB” (16/51, 31%), and “I experienced medical/emotional problems” (8/51, 16%).
The following reasons were reported not to be important to stop using the intervention: “I am not interested in the topic” (1/51, 2%) and “I don’t want to change my behavior” (1/51, 2%).
I stopped using the intervention because... | (Totally) not applicable, n (%) | Neutral, n (%) | (Totally) applicable, n (%) |
I am not interested in the topic | 46 (90) | 4 (8) | 1 (2) |
I don’t have time | 19 (37) | 6 (12) | 26 (51) |
I already meet the health guidelines for PAa/SBb | 19 (37) | 16 (31) | 16 (32) |
I don’t want to change my behavior | 41 (80) | 9 (18) | 1 (2) |
The intervention doesn’t provide useful content | 25 (49) | 5 (10) | 21 (41) |
Filling out the questionnaires took a lot of my time | 16 (31) | 7 (14) | 28 (55) |
I experienced technical problems with the website or app | 28 (55) | 3 (6) | 20 (39) |
I experienced medical/emotional problems | 42 (82) | 1 (2) | 8 (16) |
aPA: physical activity.
bSB: sedentary behavior.
Discussion
Principal Findings
This study investigated when, which, and why users stopped using an intervention to promote PA and reduce SB. The study demonstrated that 227 of 473 participants stopped using the intervention, and dropout occurred mainly in the first weeks. Certain predictors of first session, third session, or whole intervention completion were found. The most endorsed reasons to stop using the intervention were the time-consuming nature of the questionnaires, not having time, dissatisfaction with the content of the intervention, technical problems, already meeting the guidelines for PA/SB, and, to a lesser extent, experiencing medical/emotional problems.
Although our intervention was systematically developed based on both qualitative and quantitative research [ , ] and boosting strategies were used to keep participants engaged with the intervention, an overall attrition rate of 47.9% was observed. This rate is similar to other interventions [ , ]. Attrition is an important obstacle in digital health interventions and should be reduced in order to increase the public health impact. The overall pattern of results indicates that the largest group of users drops out in the first weeks of the intervention, and a stable group of users remains after that. This pattern is frequently observed in digital health research [ , ] and is described as an L-shaped curve [ ]. A possible explanation for this attrition pattern might be the time-consuming nature of the questionnaires, and participants reported this as a reason to quit the intervention. Indeed, at the start of the intervention, participants completed several questionnaires, mainly for research purposes and for tailoring advice throughout the intervention. Although we reduced the number of questions substantially in comparison with a previous version of the intervention (MyPlan 1.0) to prevent attrition [ , ], participants still perceived it as too long. The review by Sharpe et al [ ] showed that users are indeed less inclined to persevere with digital health interventions when they are found to be time-consuming and burdensome. Researchers should thus thoroughly reflect on which and how many questions should be included in digital health intervention studies. Another option might be to collect baseline data through monitoring devices (eg, wearables such as Fitbit). Although this is often done for research purposes [ , ], such devices can also be used to provide tailored support in the beginning of an intervention (eg, tailored feedback on their current PA level).

According to Eysenbach [ ], attrition might also be the result of a wrong user group, the members of which quickly lost interest. Indeed, the overall pattern of results indicates that participants already doing action planning, coping planning, and self-monitoring were more likely to drop out. As the MyPlan 2.0 intervention focused on these postintentional determinants [ ], the intervention may not have added value for these participants as they might have been the wrong user group, causing them to stop using the intervention. However, one should be reminded that some of these effects were borderline significant in the current analyses, and thus await further replication and corroboration. Notwithstanding, an important question to answer is “Do participants already doing action planning, coping planning, and self-monitoring still benefit from an intervention?” On the one hand, one may reason that individuals who already have the competencies and skills to change behavior by themselves may not need additional support. On the other hand, it may well be that these individuals require a different, more individual approach that takes into account the needs and characteristics of the individual. Innovations in digital technology and artificial intelligence [ ] enable researchers to develop more personalized interventions, making such an individual approach possible. Indeed, various studies indicate that personalization is crucial for future digital health interventions to increase engagement [ , , ]. Yet, personalization can occur on different levels [ ], and to the best of our knowledge, there is no consensus or framework on how to specifically personalize digital health interventions for PA or SB. Based on our findings, interventions may be personalized on 2 levels: dynamic tailoring of BCTs to the motivational stage to which an individual belongs and including personalized suggestions of BCTs at the operational level.

Regarding the first level, individuals may differ in terms of motivational stages: preintenders are individuals who do not yet have an intention to change, intenders are individuals who have an intention but do not yet act on these intentions, and actors are individuals who already act on their intentions [ ].
Tailoring interventions to the stage of the participant may be more successful than mismatched interventions [ ], but this tailoring often occurs only once at the beginning of an intervention. However, stages can also differ over time during the intervention (eg, intenders can become actors). By extension, research suggests [ , ] further differentiating between actors (individuals who recently started to perform the behavior) and maintainers (individuals who perform the behavior with high automatization over a long period of time). Intenders and actors may still benefit from BCTs such as action planning, coping planning, and self-monitoring, whereas maintainers might need other BCTs [ , , ]. The findings of Schwarzer et al [ ] indeed show that habitual activity does not require planning because the activity occurs rather automatically, whereas in the absence of the habit, planning appears to be a facilitator of PA. As such, providing dynamic tailoring of BCTs to these changing demands could reduce dropout in future interventions.

Determining the motivational stage of an individual is not an easy endeavor [ ]. One might argue that individuals should be matched to stages based on meeting the health guidelines (eg, meeting the health guidelines may reflect being a maintainer and not meeting the health guidelines but having plans to work toward them may reflect being an intender). Accordingly, “Already meeting the guidelines for PA/SB” was one of the main reasons participants indicated for stopping the intervention, with participants possibly needing other BCTs. However, meeting or not meeting the guidelines for PA/SB alone does not necessarily reflect the stage of the individual. One might also argue that individuals should be matched to stages based on the goal they have in mind. One can be a maintainer for a small behavior goal (eg, walking twice a week to work) and still be an intender or actor for a more challenging behavior goal (eg, running twice a week).

This brings us to the second level of personalization: including personalized suggestions of BCTs at the operational level. Participants who dropped out in our study might have been actors or maintainers who were looking for more challenging support. However, participants in our study were their own experts in terms of making action and coping plans, which means they had full control over the content of their plans. Although this is in line with self-regulation theory and increases autonomy [ ],
the delivery of these BCTs remained abstract and generic, offering standard but not personalized support. Indeed, it could be that participants were limiting themselves to plans that were already familiar to them, whereas they actually needed personalized suggestions that could provide them with new information and inspiration. Accordingly, dissatisfaction with intervention content was a reason for attrition in this study. This is in line with other research investigating user engagement [ ]; participants in digital interventions for weight management most disliked generic information and repetition of content. As such, there is a need to tailor support at the operational level, involving suggestions of specific plans that are personalized to the individual. In addition, not having time was also found to be a reason for attrition. We acknowledge that thinking about action and coping plans is time intensive and requires high effort. Here, providing more personalized suggestions could result in a lower-effort, more time-efficient intervention that could reduce dropout [ ]. One should note, however, that behavior change in itself is not an easy endeavor, and raising awareness that behavior change takes time and effort is important. Here again, collecting objective data on PA/SB through monitoring devices [ ] not only at the beginning but also throughout the course of the intervention will be important for personalization on both levels. That way, shifts between stages can be more easily identified (eg, intender to actor, actor to maintainer, actor to intender when there is a relapse). Passive data collection also offers the opportunity to provide more accurate personalized suggestions (eg, guiding participants from 8000 to 10,000 steps, guiding participants from walking to running, suggesting appropriate moments for a certain participant to do PA).

Remarkably, men were less likely to drop out than women. This is in contrast with previous findings [ , , ]. A possible reason may be that this was an RCT, in which participants had to give verbal consent for enrollment, rather than an open access study. Several studies, including this one, have shown that men are less likely to enroll in studies compared to women [ ], as women are more prone to respond in a socially desirable fashion [ ]. However, once men do enroll in studies, they may be more determined to complete them. In order to increase engagement with the intervention, specific suggestions of plans as described in the previous paragraphs could also be personalized based on gender. Overall, most demographic variables did not predict whether certain subgroups of users stopped using the intervention. This could imply that the intervention can be broadly implemented and does not exclude specific target groups. Still, almost half of the participants dropped out of the study. This might indicate that other contextual and personal factors that were not investigated in this study (eg, social and physical environment, weather, location, mood, or health status) play a role.

Some other reasons not yet mentioned might explain why participants stopped using the intervention. First, although our website and app were thoroughly alpha and pilot tested [ ], some technical problems were present at the beginning of the intervention that could have been a burden on participants. For example, our app did not work well with older smartphones, and the website caused technical problems when used in specific internet browsers (eg, Firefox did not always work well on tablets). Also, in the first weeks of the study, a bug caused the website to crash when a button for an optional website page was clicked on, preventing participants from returning to the main website page and completing their first website session. Future interventions should alpha test their websites and apps through all possible scenarios (various types of smartphones, internet browsers, laptops/tablets, etc) with a large user group. However, we should not assume that all technical problems can be solved in advance, as unforeseen barriers will always come up in digital health. Therefore, it may be useful for future interventions to provide a short manual with information to keep technical problems to a minimum (eg, press refresh when the website is stuck, use the internet browser Google Chrome, use the latest version of Android/iOS, do not use tablets). Second, some participants experienced medical/emotional problems during the intervention, causing them to drop out. Future interventions should have the option to respond accordingly with particular advice or should refer to specific assistance (eg, doctor, psychologist, physiotherapist).

Strengths and Limitations
This study has several strengths. First, to the best of our knowledge, this is the first study that investigated multiple psychological determinants of behavior as predictors of attrition in an intervention to promote an active lifestyle. Many studies have already described predictors of attrition in digital health but focused mainly on demographic variables [ , , , ] or factors relating to the digital intervention itself instead of the health behavior (eg, attitudes toward the digital tool, perceived control over the tool) [ ]. Second, this study investigated attrition in different parts of the intervention, whereas most studies only describe attrition rates at the end of their interventions [ ]. Third, this study had a large study sample.

This study also has a number of limitations. First, only a small proportion of users reported reasons why they stopped using the intervention (22% of all noncompleters). This low response rate could be explained by the format used to investigate reasons for dropout (eg, although it was stated in our protocol paper that telephone calls would be used, an online questionnaire was used due to lack of time). Future studies may still consider the use of telephone calls. In addition, most of these users dropped out before the third session of the intervention, making it impossible to compare reasons for attrition at the beginning of the intervention with those at the end of the intervention. Second, considering the linear design of the study (see Methods), no posttest measurements of noncompleters were collected. As such, it was not possible to explore whether noncompleters improved their PA or SB levels due to their (short) participation in the intervention. Investigating this could be important in future studies, as stopping the intervention does not necessarily coincide with failure [ ]. Third, as this was an RCT, this study might have shown different results had it been an open access study (where attrition rates are usually even higher) [ , ]. Fourth, the study sample consisted mostly of women (69.1%) and highly educated adults (66.4%), which has also been the case in other digital health intervention studies [ ]. As such, one should be careful when generalizing the study outcomes to a broader population.

Conclusion
This study offered insights into when, which, and why users stop using a digital health intervention and provided some directions on which future studies might focus to prevent attrition. Personalization of interventions will be important, on the one hand by dynamically tailoring BCTs to the motivational stage to which an individual belongs and on the other hand by including personalized suggestions of BCTs at the operational level. Future studies should keep questionnaires (whether for research purposes or tailoring) to a minimum by, for example, using objective monitoring devices, and technical aspects of digital health interventions should be thoroughly tested in advance.
Acknowledgments
We would like to thank Prof Dr Armand De Clercq, PhD, Louise Poppe, and Celien Van der Mispel for their support in developing MyPlan 2.0.
Authors' Contributions
All authors were involved in designing the study. HS collected and analyzed the data and drafted the manuscript. GC, IDB, and DVD critically revised the manuscript. All authors read and approved the final manuscript.
Conflicts of Interest
None declared.
Screenshots of the MyPlan 2.0 intervention website.
PDF File (Adobe PDF File), 1030 KB
Screenshots of the MyPlan 2.0 mobile app.
PDF File (Adobe PDF File), 526 KB
A 26-item questionnaire on the Health Action Process Approach–based psychological determinants.
PDF File (Adobe PDF File), 74 KB

References
- Müller A, Maher CA, Vandelanotte C, Hingle M, Middelweerd A, Lopez ML, et al. Physical activity, sedentary behavior, and diet-related ehealth and mHealth research: bibliometric analysis. J Med Internet Res 2018 Apr 18;20(4):e122 [FREE Full text] [CrossRef] [Medline]
- Kohl LFM, Crutzen R, de Vries NK. Online prevention aimed at lifestyle behaviors: a systematic review of reviews. J Med Internet Res 2013 Jul;15(7):e146 [FREE Full text] [CrossRef] [Medline]
- Glasgow RE, McKay H, Piette JD, Reynolds KD. The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management? Patient Educ Couns 2001 Aug;44(2):119-127. [CrossRef] [Medline]
- Broekhuizen K, Simmons D, Devlieger R, van Assche A, Jans G, Galjaard S, et al. Cost-effectiveness of healthy eating and/or physical activity promotion in pregnant women at increased risk of gestational diabetes mellitus: economic evaluation alongside the DALI study, a European multicenter randomized controlled trial. Int J Behav Nutr Phys Act 2018 Mar 14;15(1):23 [FREE Full text] [CrossRef] [Medline]
- Vandelanotte C, Müller AM, Short CE, Hingle M, Nathan N, Williams SL, et al. Past, present, and future of eHealth and mHealth research to improve physical activity and dietary behaviors. J Nutr Educ Behav 2016 Mar;48(3):219-228.e1. [CrossRef] [Medline]
- Free C, Phillips G, Galli L, Watson L, Felix L, Edwards P, et al. The effectiveness of mobile-health technology-based health behaviour change or disease management interventions for health care consumers: a systematic review. PLoS Med 2013 Jan;10(1):e1001362 [FREE Full text] [CrossRef] [Medline]
- Joseph RP, Durant NH, Benitez TJ, Pekmezi DW. Internet-based physical activity interventions. Am J Lifestyle Med 2014 Dec;8(1):42-68 [FREE Full text] [CrossRef] [Medline]
- Davies CA, Spence JC, Vandelanotte C, Caperchione CM, Mummery WK. Meta-analysis of internet-delivered interventions to increase physical activity levels. Int J Behav Nutr Phys Act 2012;9:52 [FREE Full text] [CrossRef] [Medline]
- Schoeppe S, Alley S, Van Lippevelde W, Bray NA, Williams SL, Duncan MJ, et al. Efficacy of interventions that use apps to improve diet, physical activity and sedentary behaviour: a systematic review. Int J Behav Nutr Phys Act 2016 Dec 07;13(1):127 [FREE Full text] [CrossRef] [Medline]
- Eysenbach G. The law of attrition. J Med Internet Res 2005 Mar;7(1):e11 [FREE Full text] [CrossRef] [Medline]
- Poppe L, De Bourdeaudhuij I, Verloigne M, Shadid S, Van Cauwenberg J, Compernolle S, et al. Efficacy of a self-regulation-based electronic and mobile health intervention targeting an active lifestyle in adults having type 2 diabetes and in adults aged 50 years or older: two randomized controlled trials. J Med Internet Res 2019 Aug 02;21(8):e13363 [FREE Full text] [CrossRef] [Medline]
- Degroote L, Plaete J, De Bourdeaudhuij I, Verloigne M, Van Stappen V, De Meester A, et al. The effect of the eHealth intervention 'MyPlan 1.0' on physical activity in adults who visit general practice: a quasi-experimental trial. Int J Environ Res Public Health 2018 Jan 30;15(2):228 [FREE Full text] [CrossRef] [Medline]
- Kelders SM, Kok RN, Ossebaard HC, Van Gemert-Pijnen JEWC. Persuasive system design does matter: a systematic review of adherence to web-based interventions. J Med Internet Res 2012 Nov 14;14(6):e152 [FREE Full text] [CrossRef] [Medline]
- Guertler D, Vandelanotte C, Kirwan M, Duncan MJ. Engagement and nonusage attrition with a free physical activity promotion program: the case of 10,000 Steps Australia. J Med Internet Res 2015;17(7):e176 [FREE Full text] [CrossRef] [Medline]
- Hutchesson MJ, Gough C, Müller A, Short CE, Whatnall MC, Ahmed M, et al. eHealth interventions targeting nutrition, physical activity, sedentary behavior, or obesity in adults: a scoping review of systematic reviews. Obes Rev 2021 Oct 23;22(10):e13295. [CrossRef] [Medline]
- Tsay CH, Kofinas AK, Trivedi SK, Yang Y. Overcoming the novelty effect in online gamified learning systems: an empirical evaluation of student engagement and performance. J Comput Assist Learn 2019 Dec 04;36(2):128-146. [CrossRef]
- Reinwand DA, Schulz DN, Crutzen R, Kremers SP, de Vries H. Who follows eHealth interventions as recommended? a study of participants' personal characteristics from the experimental arm of a randomized controlled trial. J Med Internet Res 2015;17(5):e115 [FREE Full text] [CrossRef] [Medline]
- Van der Mispel C, Poppe L, Crombez G, Verloigne M, De Bourdeaudhuij I. A self-regulation-based eHealth intervention to promote a healthy lifestyle: investigating user and website characteristics related to attrition. J Med Internet Res 2017 Jul 11;19(7):e241 [FREE Full text] [CrossRef] [Medline]
- Kelders SM, Van Gemert-Pijnen JEWC, Werkman A, Nijland N, Seydel ER. Effectiveness of a Web-based intervention aimed at healthy dietary and physical activity behavior: a randomized controlled trial about users and usage. J Med Internet Res 2011 Apr;13(2):e32 [FREE Full text] [CrossRef] [Medline]
- Pratap A, Neto EC, Snyder P, Stepnowsky C, Elhadad N, Grant D, et al. Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants. NPJ Digit Med 2020 Feb 17;3(1):21 [FREE Full text] [CrossRef] [Medline]
- Wurst R, Maliezefski A, Ramsenthaler C, Brame J, Fuchs R. Effects of incentives on adherence to a web-based intervention promoting physical activity: naturalistic study. J Med Internet Res 2020 Jul 30;22(7):e18338 [FREE Full text] [CrossRef] [Medline]
- Reinwand DA, Crutzen R, Elfeddali I, Schneider F, Schulz DN, Smit E, et al. Impact of educational level on study attrition and evaluation of web-based computer-tailored interventions: results from seven randomized controlled trials. J Med Internet Res 2015 Oct 07;17(10):e228 [FREE Full text] [CrossRef] [Medline]
- Verheijden MW, Jans MP, Hildebrandt VH, Hopman-Rock M. Rates and determinants of repeated participation in a web-based behavior change program for healthy body weight and healthy lifestyle. J Med Internet Res 2007 Jan 22;9(1):e1 [FREE Full text] [CrossRef] [Medline]
- Meyerowitz-Katz G, Ravi S, Arnolda L, Feng X, Maberly G, Astell-Burt T. Rates of attrition and dropout in app-based interventions for chronic disease: systematic review and meta-analysis. J Med Internet Res 2020 Sep 29;22(9):e20283 [FREE Full text] [CrossRef] [Medline]
- Davis MJ, Addis ME. Predictors of attrition from behavioral medicine treatments. Ann Behav Med 1999 Dec;21(4):339-349. [CrossRef] [Medline]
- Murray E, White IR, Varagunam M, Godfrey C, Khadjesari Z, McCambridge J. Attrition revisited: adherence and retention in a web-based alcohol trial. J Med Internet Res 2013 Aug 30;15(8):e162 [FREE Full text] [CrossRef] [Medline]
- Peels DA, Bolman C, Golsteijn RHJ, De Vries H, Mudde AN, van Stralen MM, et al. Differences in reach and attrition between Web-based and print-delivered tailored interventions among adults over 50 years of age: clustered randomized trial. J Med Internet Res 2012 Dec 17;14(6):e179 [FREE Full text] [CrossRef] [Medline]
- Schroé H, Van der Mispel C, De Bourdeaudhuij I, Verloigne M, Poppe L, Crombez G. A factorial randomised controlled trial to identify efficacious self-regulation techniques in an e- and m-health intervention to target an active lifestyle: study protocol. Trials 2019 Jun 10;20(1):340 [FREE Full text] [CrossRef] [Medline]
- Schroé H, Van Dyck D, De Paepe A, Poppe L, Loh WW, Verloigne M, et al. Which behaviour change techniques are effective to promote physical activity and reduce sedentary behaviour in adults: a factorial randomized trial of an e- and m-health intervention. Int J Behav Nutr Phys Act 2020 Oct 07;17(1):127 [FREE Full text] [CrossRef] [Medline]
- Schwarzer R, Lippke S, Luszczynska A. Mechanisms of health behavior change in persons with chronic illness or disability: the Health Action Process Approach (HAPA). Rehabil Psychol 2011 Aug;56(3):161-170. [CrossRef] [Medline]
- Zhang C, Zhang R, Schwarzer R, Hagger MS. A meta-analysis of the health action process approach. Health Psychol 2019 Jul;38(7):623-637. [CrossRef] [Medline]
- Deterding S, Dixon D, Khaled R, Nacke L. From game design elements to gamefulness: defining gamification. In: MindTrek '11: Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments. New York, NY: Association for Computing Machinery; 2011 Presented at: MindTrek '11: Academic MindTrek 2011; September 28-30; Tampere, Finland p. 9-15 URL: https://uwaterloo.ca/scholar/sites/ca.scholar/files/lnacke/files/From_game_design_elements_to_gamefulness-_defining_gamification.pdf [CrossRef]
- Looyestyn J, Kernot J, Boshoff K, Ryan J, Edney S, Maher C. Does gamification increase engagement with online programs? A systematic review. PLoS One 2017 Mar;12(3):e0173403 [FREE Full text] [CrossRef] [Medline]
- Thomas S, Reading J, Shephard RJ. Revision of the Physical Activity Readiness Questionnaire (PAR-Q). Can J Sport Sci 1992 Dec;17(4):338-345. [Medline]
- Vandelanotte C, De Bourdeaudhuij I, Philippaerts R, Sjöström M, Sallis J. Reliability and validity of a computerized and Dutch version of the International Physical Activity Questionnaire (IPAQ). J Phys Act Health 2005 Jan;2(1):63-75. [CrossRef]
- Wijndaele K, De Bourdeaudhuij I, Godino JG, Lynch BM, Griffin SJ, Westgate K, et al. Reliability and validity of a domain-specific last 7-d sedentary time questionnaire. Med Sci Sports Exerc 2014 Jun;46(6):1248-1260. [CrossRef] [Medline]
- Beatty PC, Willis GB. Research synthesis: the practice of cognitive interviewing. Pub Opin Q 2007 Jun 05;71(2):287-311. [CrossRef]
- Christodoulou C, Junghaenel DU, DeWalt DA, Rothrock N, Stone AA. Cognitive interviewing in the evaluation of fatigue items: results from the patient-reported outcomes measurement information system (PROMIS). Qual Life Res 2008 Dec 12;17(10):1239-1246 [FREE Full text] [CrossRef] [Medline]
- Johnston M, Dixon D, Hart J, Glidewell L, Schröder C, Pollard B. Discriminant content validity: a quantitative methodology for assessing content of theory-based measures, with illustrative applications. Br J Health Psychol 2014 May 15;19(2):240-257. [CrossRef] [Medline]
- Poppe L, Crombez G, De Bourdeaudhuij I, Van der Mispel C, Shadid S, Verloigne M. Experiences and opinions of adults with type 2 diabetes regarding a self-regulation-based eHealth intervention targeting physical activity and sedentary behaviour. Int J Environ Res Public Health 2018 May 10;15(5):954 [FREE Full text] [CrossRef] [Medline]
- Poppe L, Van der Mispel C, Crombez G, De Bourdeaudhuij I, Schroé H, Verloigne M. How users experience and use an eHealth intervention based on self-regulation: mixed-methods study. J Med Internet Res 2018 Oct 01;20(10):e10412 [FREE Full text] [CrossRef] [Medline]
- Wangberg SC, Bergmo TS, Johnsen JK. Adherence in internet-based interventions. Patient Prefer Adherence 2008 Feb 02;2:57-65 [FREE Full text] [Medline]
- Poppe L, Van der Mispel C, De Bourdeaudhuij I, Verloigne M, Shadid S, Crombez G. Users' thoughts and opinions about a self-regulation-based eHealth intervention targeting physical activity and the intake of fruit and vegetables: a qualitative study. PLoS One 2017 Dec 21;12(12):e0190020 [FREE Full text] [CrossRef] [Medline]
- Sharpe EE, Karasouli E, Meyer C. Examining factors of engagement with digital interventions for weight management: rapid review. JMIR Res Protoc 2017 Oct 23;6(10):e205 [FREE Full text] [CrossRef] [Medline]
- Henriksen A, Haugen Mikalsen M, Woldaregay AZ, Muzny M, Hartvigsen G, Hopstock LA, et al. Using fitness trackers and smartwatches to measure physical activity in research: analysis of consumer wrist-worn wearables. J Med Internet Res 2018 Mar 22;20(3):e110. [CrossRef] [Medline]
- Degroote L, De Bourdeaudhuij I, Verloigne M, Poppe L, Crombez G. The accuracy of smart devices for measuring physical activity in daily life: validation study. JMIR Mhealth Uhealth 2018 Dec 13;6(12):e10972 [FREE Full text] [CrossRef] [Medline]
- Mariano B. Towards a global strategy on digital health. Bull World Health Organ 2020 Apr 01;98(4):231. [CrossRef]
- Wei Y, Zheng P, Deng H, Wang X, Li X, Fu H. Design features for improving mobile health intervention user engagement: systematic review and thematic analysis. J Med Internet Res 2020 Dec 09;22(12):e21687 [FREE Full text] [CrossRef] [Medline]
- Szinay D, Jones A, Chadborn T, Brown J, Naughton F. Influences on the uptake of and engagement with health and well-being smartphone apps: systematic review. J Med Internet Res 2020 Mar 23:1 [FREE Full text] [CrossRef] [Medline]
- Lippke S, Ziegelmann JP, Schwarzer R. Stage-specific adoption and maintenance of physical activity: testing a three-stage model. Psychol Sport Exerc 2005 Sep;6(5):585-603. [CrossRef]
- Lippke S, Schwarzer R, Ziegelmann JP, Scholz U, Schüz B. Testing stage-specific effects of a stage-matched intervention: a randomized controlled trial targeting physical exercise and its predictors. Health Educ Behav 2010 Aug;37(4):533-546. [CrossRef] [Medline]
- Lippke S, Ziegelmann JP. Understanding and modeling health behavior: the multi-stage model of health behavior change. J Health Psychol 2006 Jan 01;11(1):37-50. [CrossRef] [Medline]
- Wienert J, Kuhlmann T, Storm V, Reinwand D, Lippke S. Latent user groups of an eHealth physical activity behaviour change intervention for people interested in reducing their cardiovascular risk. Res Sports Med 2019 Jul 26;27(1):34-49. [CrossRef] [Medline]
- Maher J, Conroy D. Habit strength moderates the effects of daily action planning prompts on physical activity but not sedentary behavior. J Sport Exerc Psychol 2015 Feb;37(1):97-107. [CrossRef] [Medline]
- Schwarzer R, Warner L, Fleig L, Gholami M, Salvatore S, Cianferotti L, et al. Psychological mechanisms in a digital intervention to improve physical activity: a multicentre randomized controlled trial. Br J Health Psychol 2018 May 19;23(2):296-310. [CrossRef] [Medline]
- Littell JH, Girvin H. Stages of change. A critique. Behav Modif 2002 Apr 26;26(2):223-273. [CrossRef] [Medline]
- Maes S, Karoly P. Self-regulation assessment and intervention in physical health and illness: a review. Appl Psychol 2005 Apr;54(2):267-299. [CrossRef]
- Baumel A, Muench FJ. Effort-optimized intervention model: framework for building and analyzing digital interventions that require minimal effort for health-related gains. J Med Internet Res 2021 Mar 12;23(3):e24905 [FREE Full text] [CrossRef] [Medline]
- Glasgow RE, Nelson CC, Kearney KA, Reid R, Ritzwoller DP, Strecher VJ, et al. Reach, engagement, and retention in an Internet-based weight loss program in a multi-site randomized controlled trial. J Med Internet Res 2007 May;9(2):e11 [FREE Full text] [CrossRef] [Medline]
- Dalton D, Ortegren M. Gender differences in ethics research: the importance of controlling for the social desirability response bias. J Bus Ethics 2011 Apr 11;103(1):73-93. [CrossRef]
- Wojtowicz M, Day V, McGrath PJ. Predictors of participant retention in a guided online self-help program for university students: prospective cohort study. J Med Internet Res 2013;15(5):e96 [FREE Full text] [CrossRef] [Medline]
- Yardley L, Spring BJ, Riper H, Morrison LG, Crane DH, Curtis K, et al. Understanding and promoting effective engagement with digital behavior change interventions. Am J Prev Med 2016 Nov;51(5):833-842. [CrossRef] [Medline]
- Christensen H, Griffiths KM, Farrer L. Adherence in internet interventions for anxiety and depression. J Med Internet Res 2009 Apr;11(2):e13 [FREE Full text] [CrossRef] [Medline]
- Brouwer W, Oenema A, Raat H, Crutzen R, de Nooijer J, de Vries NK, et al. Characteristics of visitors and revisitors to an Internet-delivered computer-tailored lifestyle intervention implemented for use by the general public. Health Educ Res 2010 Aug;25(4):585-595. [CrossRef] [Medline]
Abbreviations
BCT: behavior change technique
HAPA: health action process approach
MVPA: moderate-to-vigorous physical activity
OR: odds ratio
PA: physical activity
SB: sedentary behavior
Edited by L Buis; submitted 20.05.21; peer-reviewed by A Mustafa, B Chaudhry, H Mehdizadeh; comments to author 30.07.21; revised version received 01.09.21; accepted 20.12.21; published 31.01.22
Copyright©Helene Schroé, Geert Crombez, Ilse De Bourdeaudhuij, Delfien Van Dyck. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 31.01.2022.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.