
JMIR mHealth and uHealth


Published on 20.07.17 in Vol 5, No 7 (2017): July


    Original Paper

    Improving Adherence to Web-Based and Mobile Technologies for People With Psychosis: Systematic Review of New Potential Predictors of Adherence

    1Institute of Psychiatry, Psychology & Neuroscience, Department of Psychology, King's College London, London, United Kingdom

    2Division of Psychopathology and Clinical Intervention, Department of Psychology, University of Zurich, Zurich, Switzerland

    3Department of Psychology, Aberystwyth University, Ceredigion, United Kingdom

    4South London and Maudsley NHS Foundation Trust, London, United Kingdom

    Corresponding Author:

    Clare Killikelly, DClinPsy, PhD

    Institute of Psychiatry, Psychology & Neuroscience

    Department of Psychology

    King's College London

    PO 78, Institute of Psychiatry Psychology and Neuroscience, 4, Windsor Walk

    London, SE5 8AF

    United Kingdom

    Phone: 44 044 635 7309

    Fax: 44 044 635 7319

    Email:


    ABSTRACT

    Background: Despite the boom in new technology-based interventions for people with psychosis, recent studies suggest medium to low rates of adherence to these types of interventions. The benefits will be limited if only a minority of service users adhere and engage; if specific predictors of adherence can be identified, then technologies can be adapted to increase the benefits for service users.

    Objective: The study aimed to present a systematic review of rates of adherence, dropout, and approaches to analyzing adherence to newly developed mobile and Web-based interventions for people with psychosis. Specific predictors of adherence were also explored.

    Methods: Using keywords (Internet or online or Web-based or website or mobile) AND (bipolar disorder or manic depression or manic depressive illness or manic-depressive psychosis or psychosis or schizophr* or psychotic), the following databases were searched: OVID (including MEDLINE, EMBASE, and PsycINFO), PubMed, and Web of Science. The objectives and inclusion criteria for suitable studies were defined following PICOS (population: people with psychosis; intervention: mobile or Internet-based technology; comparison group: no comparison group specified; outcomes: measures of adherence; study design: randomized controlled trials (RCTs), feasibility studies, and observational studies) criteria. In addition to measurement and analysis of adherence, two theoretically proposed predictors of adherence were examined: (1) level of support from a clinician or researcher throughout the study, and (2) level of service user involvement in the app or intervention development. We provide a narrative synthesis of the findings and followed the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines for reporting systematic reviews.

    Results: Of the 20 studies that reported a measure of adherence and a rate of dropout, 5 conducted statistical analyses to determine predictors of dropout, 6 analyzed the effects of specific adherence predictors (eg, symptom severity or type of technological interface) on the effects of the intervention, 4 administered poststudy feedback questionnaires to assess continued use of the intervention, and 2 evaluated the effects of different types of interventions on adherence. Overall, the percentage of participants adhering to interventions ranged from 28-100% with a mean of 83%. Adherence was greater in studies with higher levels of social support and service user involvement in the development of the intervention. Studies of shorter duration also had higher rates of adherence.

    Conclusions: Adherence to mobile and Web-based interventions was robust across most studies. Although 2 studies found specific predictors of nonadherence (male gender and younger age), most did not specifically analyze predictors. The duration of the study may be an important predictor of adherence. Future studies should consider reporting a universal measure of adherence and aim to conduct complex analyses on predictors of adherence such as level of social presence and service user involvement.

    JMIR Mhealth Uhealth 2017;5(7):e94

    doi:10.2196/mhealth.7088

    KEYWORDS



    Introduction

    E-mental health interventions, defined as “the use of information and communication technology to support or improve mental health care” [1,2], have been proposed as promising alternatives to traditional interventions. Proposed benefits include ease of use, accessibility, and the potential to be less stigmatizing [3-5]. This may be particularly appealing for service users with psychosis who tend to have high relapse rates yet limited access to psychological therapies [4,6]. Psychosis is a debilitating mental health disorder that includes symptoms such as hallucinations, delusions, disorganized thoughts and speech, as well as diminished emotional expression and lack of volition [7]. Dropout and nonadherence rates for traditional psychological and psychopharmacological interventions are high for people with psychosis. “Dropout” is defined as noncompletion of the study protocol or the study assessments, and “adherence” is defined as the extent to which a participant experiences or engages with a mobile or Internet-based intervention [8]. Dropout rates of 25% for people with psychosis [9-11] and 30-57% for people with first episode psychosis (FEP) are commonly found [12]. Some have suggested that e-mental health technologies may provide a more acceptable therapy format than traditional face-to-face therapy [13]. Rates of adherence across different types of e-mental health interventions for people with psychosis have not been systematically examined.

    A recent review of 12 studies highlighted that a specific examination of adherence, the extent to which a participant engages with an intervention, would be helpful for the field of e-mental health [14]. The study demonstrated that service users with psychosis varied in their engagement with the technological interventions; some showed regular or intermittent use and approximately 25-30% of participants did not engage or dropped out [14]. We seek to update this 2013 review for two main reasons. First, since 2013, there has been a dramatic increase in peer-reviewed publications examining Web-based or mobile technologies for a variety of mental health conditions. A review of the publication rate of e-mental health papers over the past 20 years found that 57% were published in the last 5 years and that the number of publications tripled between 2009 and 2014 [15]. Higgins and Green (2011) recommend that review updates should be carried out every 2 years, especially in a rapidly growing field [24]. Second, examining use of and adherence to these new technologies is increasingly important because the benefits are limited if service users do not use them.

    In order to obtain an overview of the rates of adherence, two types of adherence rates were collected: (1) the mean percentage of the intervention completed and (2) the percentage of participants who completed the intervention [16]. Previous systematic reviews have developed four main approaches to examining adherence to mobile or Web-based interventions for treatment of depression and anxiety [8,16] (see Table 1 for an overview). The first is to examine factors that contribute to dropout from a study; for example, a comparison of baseline symptomatology or demographic factors between participants who stay in the study and those who drop out. The second is to conduct statistical analyses, including correlational or regression analyses, within a study to identify potential predictors of adherence. Specific service user factors (eg, demographics and clinical severity) and intervention factors (eg, week 1 vs week 2 of intervention) are most commonly explored. The third is to use questionnaires to retrospectively examine participants’ experiences of adherence and perspectives on continued use. The fourth is to experimentally manipulate factors within a study to influence adherence; for example, to compare different technological interfaces, frequency of use, or behavioral interventions.

    Table 1. Four approaches to studying adherence.

    In addition to these four approaches to studying adherence, we evaluated two theoretically proposed predictors of adherence: (1) level of social presence or contact with a researcher, clinician, or peer, and (2) service user involvement in the development of the intervention or app. The level of social presence or contact refers to the frequency and quality of clinician, researcher, or peer presence or contact throughout the intervention [14]. Several studies have identified that contact and support from clinicians or peers in the form of telephone, email, Web-based forums, or e-chats can help improve adherence to mobile and Internet-based interventions; people with psychosis may particularly benefit from this support [17,18]. Mohr et al [19] outline a “supportive accountability model” whereby a supportive social presence may positively influence accountability, expectations, and bond during a mobile or Web-based intervention. This predictor has some credibility as Day et al [20] found that for acute inpatients with psychosis, a positive relationship with a clinician was related to adherence to medication and a positive attitude toward treatment. In addition, Leclerc et al [11] established that a good therapeutic alliance improved adherence to psychosocial treatment. This review conducted a preliminary examination of the level of social presence and support offered in each intervention.

    The second potential predictor of adherence is the level of service user involvement in the development of the intervention. This has been highlighted as vital for effectiveness and adherence to interventions [21]. The sense of involvement in the project may promote self-efficacy and therefore accountability to the intervention [19]. Recently, Wykes and Brown [21] emphasized the importance of providing service users with choice, for example, the choice of digital or face-to-face interventions, or a combination of the modalities [22]. Choice leads to a greater feeling of control; this may tap into intrinsic motivation that is important for adherence to interventions [19]. This review highlights any studies that involve service users in the development and improvement of the interventions and the potential impact on adherence. This review updates Alvarez-Jimenez et al’s [14] findings; we examined rates of adherence to mobile or Internet-based interventions, trials, or observational studies for people with psychosis.


    Methods

    This systematic review was conducted following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines and recommendations for conducting and reporting systematic reviews (see Multimedia Appendix 1) [23].

    Eligibility Criteria

    The following PICOS criteria [24] were adopted for study inclusion: (1) population: adults (18-65 years), with at least 75% of participants having a diagnosis of schizophrenia spectrum disorder according to the diagnostic and statistical manual of mental disorders (DSM)-IV or the international classification of diseases (ICD)-10; (2) intervention: interventions, trials, or observational studies involving Web-based, mobile, or e-technology interfaces enabling peer-to-peer contact, patient-to-expert communication, interactive psychoeducation or therapy, or flexible, accessible monitoring, self-help, and symptom management; (3) comparison group: none specified; (4) outcomes: at least one measure of adherence or dropout; and (5) study design: as this study aimed to provide an overview of the current state of the field, generous inclusion criteria were adopted. Types of studies included all primary group studies including RCTs; cross-sectional, longitudinal, and comparison studies with and without a control group; crossover trials, case-control or cohort studies; observational studies with experience sampling method (ESM) components; and feasibility or acceptability studies. The following exclusion criteria were applied: (1) publications written in a language other than English, (2) conference abstracts and theses not published in a peer-reviewed journal, and (3) book chapters.

    Information Sources and Search Strategy

    The following databases were systematically searched from August 2013 to November 2016: OVID (including MEDLINE, EMBASE, and PsycINFO), PubMed, and Web of Science. The following terms were used in the keyword search of abstracts and titles: (Internet or online or Web-based or website or mobile) AND (bipolar disorder or manic depression or manic depressive illness or manic-depressive psychosis or psychosis or schizophr* or psychotic). Additionally, hand-searching was performed on 5 key journals (Schizophrenia Bulletin, Schizophrenia Research, Journal of Medical Internet Research, Telemedicine and E-health, and Psychiatric Services) along with the reference lists of included primary studies. The term “adherence” was purposely not included in the search terms because most studies do not mention adherence in the title or abstract [16].

    Figure 1. Preferred reporting items for systematic reviews and meta-analyses (PRISMA) flowchart.

    Study Selection

    Titles and abstracts of articles were screened independently by 2 researchers (CK and ZH). Articles deemed potentially eligible were retrieved in full and independently reviewed (CK and ZH). Disagreements between researchers were resolved by consensus with a senior member of the research team (TW).

    Data Collection Process

    A standard form was used to extract data from the selected studies into 6 results tables (Tables 2-7), grouped by study type: (1) randomized intervention studies, (2) feasibility or acceptability studies, and (3) observational studies. Tables 2-4 include the following study characteristics: study source, sample size, gender, age, diagnosis, study design, purpose of intervention, and control group. Tables 5-7 include characteristics of the interventions: levels of adherence, dropout, type of social presence, service user involvement, and measurement of participant feedback.

    Assessment of Methodological Quality and Procedures

    The study quality was assessed separately for RCTs, feasibility studies, and observational studies (nonrandomized studies). The RCTs and feasibility studies were assessed using the clinical trials assessment measure (CTAM) [25]. The CTAM was designed to assess trial quality specifically in trials of psychological interventions for mental health. It contains 15 items grouped into six areas that are important for assessing bias in trials of psychological interventions: sample size, recruitment method, allocation to treatment, assessment of outcome, control groups, and description of treatments and analysis. Each study is rated out of a total of 100. The scale has good interrater reliability (.96) and high concurrent validity (.97) [26]. Observational studies were assessed using the Downs and Black scale [27]. This scale comprises 27 questions assessing key areas of methodological quality for nonrandomized studies in systematic reviews. It includes questions on reporting, external validity, bias, confounding, and power. The scale was modified slightly for this study: the question on power (question 27 of the scale) was simplified to a rating of 1 or 0, following the practice in other reviews [2,28]. Each study is rated out of a total of 28 points. Scores are classified in the following ranges: excellent 26-28, good 20-25, fair 15-19, and poor less than 15. Two reviewers (CK and ZH) independently assessed the quality of all included studies. All first authors of the included articles were contacted to approve their CTAM or Downs and Black rating and, if necessary, provide further information to ensure that the quality of the study was not confused with the quality of the reporting.
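
    To make the scoring bands above concrete, the following short Python sketch (ours; the function name and example values are purely illustrative and not part of the original study materials) maps a modified Downs and Black total onto the quality classifications used in this review:

    def classify_downs_and_black(score):
        # Modified scale used in this review: 27 questions with the power item
        # simplified to 0/1, for a maximum of 28 points.
        if score >= 26:
            return "excellent"  # 26-28
        elif score >= 20:
            return "good"       # 20-25
        elif score >= 15:
            return "fair"       # 15-19
        else:
            return "poor"       # below 15

    # Example: the observational studies in this review scored between 17 and 24,
    # ie, "fair" to "good".
    print(classify_downs_and_black(17), classify_downs_and_black(24))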


    Results

    Study Selection

    The search strategy returned 2639 titles and abstracts. After removal of 797 duplicates, 1842 titles and abstracts were screened and 108 full text papers were assessed for inclusion. In total, 20 studies met the inclusion criteria (see summary in Figure 1; PRISMA flowchart).

    Study Characteristics

    Study characteristics are summarized in Tables 2-4. Six studies were randomized controlled interventions, 7 were feasibility or acceptability studies, and 7 were observational studies. In total, 656 participants with a diagnosis of schizophrenia spectrum disorders participated, with mean ages across studies ranging from 20-48 years. Sixteen studies included individuals with schizophrenia or schizo-affective disorder, 1 study included people with first episode psychosis, 1 included individuals with a dual diagnosis of schizophrenia and substance misuse, and 2 included people with nonaffective psychosis.

    Table 2. Study characteristics: randomized controlled trials with pre and post outcomes and control group.
    Table 3. Study characteristics: feasibility studies.
    Table 4. Study characteristics: observational and experience sampling method studies.
    Table 5. Characteristics of interventions and rates of adherence: randomized controlled trials with pre and post outcomes and control group.
    Table 6. Characteristics of interventions and rates of adherence: feasibility studies.
    Table 7. Characteristics of interventions and rates of adherence: observational or experience sampling method studies.

    Quality Assessment

    Trial quality assessment scores are summarized in Tables 8 and 9. The mean study quality score for the RCTs on the CTAM was 77.3 (range 62-88). Five of the RCTs [31-35] were deemed to be of adequate trial quality (rating of 65+); the exception was Palmier-Claus et al [29], which received a rating of 62. As expected due to the lack of randomization, feasibility studies (n=7) had a lower mean score (44.7). The mean quality rating for the observational studies was 20 and ranged from 17-24. Three studies fell into the “good” classification range and 4 were “fair.”

    Table 8. Clinical trials assessment measure (2004), assessment for randomized controlled trials, and feasibility studies.
    Table 9. Trial quality characteristics for nonrandomized controlled trials: Downs and Black (1998) ratings.

    Adherence: Types of Measurement Across Studies

    The most common measures of adherence were the percentage of the intervention completed by participants and the percentage of participants completing the intervention. Figure 2 displays the type of adherence measure used and the level of adherence for each study. For the 12 studies reporting the mean percentage of the intervention completed by participants, adherence ranged from 70.7-98.0% with a mean of 83.4%. For the 8 studies reporting the percentage of participants completing the intervention, adherence ranged from 28-100% with a mean of 74.3%. All of the studies also listed the number of participants who dropped out of the study. Dropout ranged from 0-55% with a mean of 12.3% across both observational and intervention studies.

    Figure 2. Adherence across all studies: mean percent of entries completed in each study followed by percentage of participants completing the intervention.
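
    As a minimal illustration of how the two adherence measures and the dropout rate reported above can be computed, the following Python sketch uses invented per-study values; it does not reproduce the review data, and treating dropout simply as the complement of intervention completion is our simplifying assumption (the review defines dropout as noncompletion of the study protocol or assessments):

    # Hypothetical per-study records (illustrative values only).
    studies = [
        {"entries_completed_pct": 86.5, "n_enrolled": 30, "n_finished": 27},
        {"entries_completed_pct": 70.7, "n_enrolled": 20, "n_finished": 14},
    ]

    # Measure 1: mean percentage of the intervention completed, averaged across studies.
    measure1 = sum(s["entries_completed_pct"] for s in studies) / len(studies)

    # Measure 2: percentage of participants completing the intervention, averaged across studies.
    per_study_completion = [100.0 * s["n_finished"] / s["n_enrolled"] for s in studies]
    measure2 = sum(per_study_completion) / len(per_study_completion)

    # Dropout, approximated here as the complement of completion within each study.
    per_study_dropout = [100.0 - rate for rate in per_study_completion]
    mean_dropout = sum(per_study_dropout) / len(per_study_dropout)

    print(f"Mean % of intervention completed: {measure1:.1f}%")
    print(f"Mean % of participants completing: {measure2:.1f}%")
    print(f"Mean % dropout: {mean_dropout:.1f}%")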
    Approach 1: Analysis of Dropout

    See Tables 5-7 for details of rates of dropout. Five studies analyzed the relationship between specific variables and dropout. In terms of age or gender and dropout, most of the studies found no relationship [29,41,42]; however, Van der Krieke et al [31] found that dropouts tended to be younger and male. Hartley et al [41] and So et al [42] did not find a relationship between symptom severity and dropout; however, Palmier-Claus et al [29] (also reported in the original study [30]) found that higher severity on the positive and negative syndrome scale (PANSS) positive symptom subscale predicted dropout. Finally, Sanchez et al [43] found that the level of cognitive functioning did not predict completion of the study. See Table 10 for a summary.

    Table 10. Summary of findings for predictors of dropout and adherence.
    Approach 2: Analysis of Within Study Predictors of Adherence

    Six studies conducted within-study analyses to examine adherence predictors and found few significant predictors of adherence. Van der Krieke et al [31] analyzed the chronicity of symptoms and reported that service users with first episode psychosis used a Web-based decision aid autonomously more often than service users with chronic psychosis. For those who required assistance from the research team to complete the intervention, 56% were service users in long-term care. However, the report does not provide specific statistical data.

    In terms of intervention-specific factors, Palmier-Claus et al [29,30] found no relationship between the length of time taken to complete an entry and the number of entries completed by an individual. They also examined the number of entries completed across the weeks of the study. They found that more entries were completed in the first week than in the second week of the intervention, and participants gave higher ratings to the question, “were there times when you felt like not answering?” during the second week.

    Approach 3: Poststudy Questionnaires on Participants’ Perspectives on Adherence

    Eleven studies retrospectively asked participants to provide questionnaire-based qualitative or quantitative feedback about their experience of the study or intervention. All the studies used different rating scales (eg, the Treatment Experience Questionnaire in Smith et al [33], an idiosyncratic quantitative feedback questionnaire in Palmier-Claus et al [29], and an idiosyncratic SocialVille program rating in Nahum et al [36]), and it is therefore difficult to draw comparisons across studies. Four studies specifically asked whether participants would continue to use the intervention ([33,35-37]; see Figure 3). Across these 4 studies, the mean percentage of participants who agreed to continue using the intervention was 73.1%.

    Figure 3. Percentage of participants who agreed to continued use of the intervention.
    Approach 4: Analysis of Specific Intervention Manipulations and Effect on Adherence

    Two studies were designed to manipulate conditions that may have an impact on adherence. Palmier-Claus et al [29,30] compared two different types of interventions: a text-only short message service (SMS) interface and a mobile phone–based graphical app. They assessed the acceptability and feasibility of each device and found that participants completed more data points when using the mobile phone interface (mean entries=16.5) compared with the SMS text-only interface (mean entries=13.5; P=.002). Schlosser et al [40] increased the frequency and intensity of contact from a research coach from once a week to 5 times a week. This led to improved rates of adherence; for example, the number of days per week with logins increased from 3.51 to 4.69.

    Interestingly, two studies found that adherence significantly affected intervention efficacy. Smith et al [33] found that completing more trials of a virtual reality job interview training program correlated with fewer weeks of job searching before securing a job (P<.001) and greater self-confidence (P=.03).

    Ben-Zeev et al [38] analyzed symptom change throughout the intervention and its association with adherence and found that change in participants’ Beck Depression Inventory (BDI) scores was significantly correlated with use of the mobile intervention; less frequent use of the FOCUS mobile intervention was associated with a greater reduction in depression score. Change in PANSS scores was not associated with use of the FOCUS app.

    New Potential Predictors of Adherence

    Potential Predictor: Social Presence Analysis

    To assess Mohr et al’s [19] “supportive accountability” model (social presence leads to better adherence), we examined the amount of contact in each study and the level of adherence to the intervention. As there is heterogeneity across the studies, we provide a narrative synthesis. Across all 20 studies, the mean number of contacts per week from a researcher or clinician was 4.4 (range 0-28). This included face-to-face, mobile, Web-based, and telephone-based contact.

    As presented in Figure 4, regardless of the level of support there is still a moderate to high rate of adherence across all 20 studies. Interestingly, studies with very high contact had higher rates of adherence (83.8%) than those with no support (71.1%), a difference of almost 13 percentage points, but studies with minimal contact also had high adherence ratings (82.5%). Anecdotally, the importance of social presence is supported by participant reports. Gleeson et al [37] found that 90% of participants reported that the Web-based facilitator contributed to their sense of safety when using the Web-based program. All participants either agreed or strongly agreed with statements such as that they always felt supported by the Web-based facilitator, and 60% reported an increase in feelings of social connectedness. Recently, Schlosser et al [40] found that increasing the frequency of contact with a research coach significantly increased use of the mobile app PRIME. They found that when service users were able to tailor the amount of social support they received, they engaged more with the app.

    Figure 4. Relationship between social presence and adherence. Adherence rates are grouped by frequency of social contact per week: “very high” (20 or more contacts per week), “high” (5 to 10 contacts per week), “minimal” (1 to 3 contacts per week), or “no support” (no contact).
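
    The grouping used in Figure 4 can be expressed as a small Python sketch; the handling of frequencies that fall between the stated bands (eg, 4 or 11-19 contacts per week) is our assumption, as the caption does not specify it:

    def contact_band(contacts_per_week):
        # Bands as described in Figure 4; frequencies not covered by the caption
        # (eg, 4 or 11-19 contacts per week) are assigned to the nearest lower
        # band here, which is an assumption.
        if contacts_per_week >= 20:
            return "very high"   # 20 or more contacts per week
        elif contacts_per_week >= 5:
            return "high"        # 5 to 10 contacts per week
        elif contacts_per_week >= 1:
            return "minimal"     # 1 to 3 contacts per week
        else:
            return "no support"  # no contact

    # Example: the mean of 4.4 contacts per week reported above would fall in the
    # "minimal" band under this assumption.
    print(contact_band(4.4))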
    Potential Predictor: Service User Involvement

    Of the 20 studies included, only 3 described service user involvement in terms of the development or initial piloting of the intervention.

    Coproduction, meaning the collaboration of service users and researchers in the early phases of intervention development, may influence participants’ perception of and adherence to the intervention. Ben-Zeev et al [38] used feedback and recommendations from a pilot with service users to develop a mobile intervention, FOCUS, to facilitate real-time mobile illness self-management. They found that participants rated the intervention highly, with 90% acceptability, and the mean percentage of entries completed was 86.5%. Gleeson et al’s [37] HORYZONS program was developed with a service user focus group. It was found that 95% of participants used the social media component, 60% completed the therapy modules, and 75% reported a positive experience with the program. Schlosser et al [40] used an iterative service user feedback process called user-centered design (UCD). After using the mobile app for 1 week, service users were consulted by means of in-depth interviews about their experience and identified key areas for improvement. The recommended changes were incorporated into the design of the device, and this led to a 2- to 3-fold increase in use of the app in week 2. Service users also rated the app at 8 out of 10 in terms of satisfaction. In this case, service users were directly involved in the design, development, and implementation of the new device. Compared with feasibility studies or RCTs that did not involve service users (mean adherence rate of 78%, averaged across the different types of adherence ratings), service user involvement was associated with higher adherence (mean of 89%), though this is based on a small number of studies (n=3).

    Additional Potential Predictor: Duration of Study

    Interestingly, a comparison of the duration of the study (number of days participants are expected to be active in the study) and levels of adherence (averaged across both types of adherence ratings) revealed that the studies with the shortest duration had the highest mean rates of adherence (see Figure 5). The duration of the ESM-based studies ranged from 1 day to 14 days and the mean rate of adherence for these studies was 82.7%. Conversely, the duration of the RCT studies ranged from 6-161 days with a mean adherence rating of 76.4%; the duration of feasibility studies ranged from 7-84 days with a mean adherence rating of 79.7%.

    Figure 5. Adherence ratings and the mean duration of the study (number of days) grouped by study type.

    Discussion

    Principal Findings

    This is the first review to document rates of adherence and to explore predictors of adherence to mobile and Web-based interventions for people with psychosis. Overall, from the examination of the four approaches to studying adherence across these diverse studies, we conclude that adherence to mobile and Web-based interventions is not necessarily predicted by service user specific factors such as age, symptoms, or gender. However, people with FEP may prefer an intervention that they can independently access [31]. Additionally, adherence is moderate to high across specific intervention factors such as amount of time to complete an entry and across different study designs. However, service users may prefer the mobile phone interface and may adhere more in the first week of an intervention [29]. This review has important implications for the acceptability and use of current interventions and the development of new ones. For example, offering service users choice in terms of the duration of the intervention and also the mode of delivery may have an important influence on adherence. Some service users may prefer a mobile app whereas others prefer a Web-based platform. Two potential new predictors of adherence were explored: (1) more frequent social support and (2) service user involvement in the intervention development. Providing service users with more input and control may add to the value and use of these interventions.

    The Measurement of Adherence

    Overall, adherence rates (whether measuring the mean percentage of the intervention completed or the percentage of participants who completed the intervention) to mobile and Web-based interventions for people with psychosis are in line with adherence rates for similar technology-based interventions for other mental health disorders. Rates of adherence to interventions for depression and anxiety are approximately 66% for self-care interventions [16] and a median of 56% for a computerized cognitive behavioral therapy (CBT) intervention [44]. Completion rates for a Web-based intervention for personality disorder ranged from 80-100%, studies of social phobia reported 70-90% completion, and the only post-traumatic stress disorder (PTSD) intervention reported a completion rate of 64%. In terms of adherence across different types of interventions (eg, face-to-face or medication-based interventions), the completion rate of a one-to-one CBT intervention for psychosis was 55% [45] and that of a one-to-one CBT intervention for FEP was 68.3% [46]. Overall, the current review found moderate to high levels of adherence to Web-based or mobile interventions for psychosis with a range of 60-100% and a mean of 83%.

    In terms of the four approaches to studying adherence, the studies in this review varied in the within-study predictors associated with adherence, the questionnaires used to assess participants’ perspectives on factors affecting adherence, and whether or not they conducted any experimental manipulations to influence adherence.

    Predictors of Adherence and Dropout

    Only 2 studies found specific predictors of adherence: less chronic symptoms [31] and a higher rate of adherence in the first intervention week than in the second [29,30]. Although other predictors of adherence were examined (age, gender, cognition, negative symptoms, and persecutory delusions), none were found to have a significant effect. Two studies also found significant predictors of dropout: severity of symptoms [39], and younger age and male gender [31].

    Complex analyses, such as the multiple regression analysis performed by Palmier-Claus et al [39], of specific predictors such as service user factors (symptoms, socioeconomic factors, interpersonal factors, and cognitive factors) along with e-mental health intervention factors (complexity of the interface, cost, and access) should be a priority for future studies. This will help identify which service user groups may adhere to different types of interventions.

    One interesting area of future research would be to examine the duration, frequency, and intensity of the intervention and the effect that this may have on adherence. Studies that last for several months may have more variable adherence than those that last only 1 week. Additionally, longer adherence is not always synonymous with better outcomes. Palmier-Claus et al [29] found that the longer participants used the app, the greater the increase in their depression symptoms. This has important implications for future research; it could be that people stop using the app as they improve and should therefore be given the opportunity to stop using the app once they have obtained its benefit. Ultimately, it may be most effective to allow service users a choice of the duration, frequency, or intensity of interventions. With supportive guidance, service users may best be able to decide whether or not a technology is helpful and supportive in their recovery.

    Poststudy Questionnaires

    Several studies used participant feedback questionnaires; however, they were all different. Some were previously published, but most were idiosyncratic, and this variability also hindered comparison. A standard questionnaire specifically for Web-based and mobile interventions could provide detailed and comparable information on the service user perspective and experience. Additionally, more independent data collection, perhaps by service user researchers not associated with the study, may provide a more unbiased and critical view of the interventions (eg, [47]). The use of posttrial feedback should be a priority for future research studies.

    Experimental Manipulation

    Only 2 studies specifically manipulated variables in an attempt to influence adherence or use of the intervention. Both successfully improved adherence to the intervention (eg, mobile phone rather than text message–based delivery; higher frequency of supportive contact). Experimental manipulation of variables is vital, particularly in terms of the types of technologies service users would prefer, the content of interventions, and the level of independence or clinician involvement in use of the intervention.

    New Predictors of Adherence

    “Support” in this review was defined liberally as any type of contact with a clinician or researcher involved in the study. Of the 20 studies, 14 reported some level of clinician or researcher contact. This ranged from very limited initial interaction with a researcher to multiple daily support calls from a dedicated mobile interventionist. It should be noted that 7 of the studies were designed as observational studies with ESM components. In these studies, researcher or clinician contact may occur only if service users stop filling in the data. Additionally, ESM studies are usually very short, so there is less time for absolute dropout. As evidenced by our comparison with adherence ratings grouped by the duration of the study, ESM studies tended to be the shortest studies with the highest adherence ratings.

    At present, it is difficult to draw clear conclusions about the importance of support, as only 2 studies specifically reported data on the effect of the Web-based interventionists [37]. However, as demonstrated by Schlosser et al [40], when the amount of coaching support was increased during the second half of the intervention, it led to increased engagement. In the future, it would be interesting for studies to experimentally manipulate the level of support and then measure the impact on adherence, or correlate the ratings of therapeutic alliance in the intervention and the level of adherence. This will clarify the impact of social presence.

    Alvarez-Jimenez et al [14] and Wykes and Brown [21] suggest that service user involvement in intervention development might be an important predictor of adherence. However, in the current dataset, only 3 studies included service users in the development of the intervention, so it is difficult to draw conclusions about the impact on adherence. Nevertheless, adherence to these interventions was very high (84.9%, 86.5%, and 95%). This is an important area requiring future study.

    Quality of Studies

    As expected, the RCT studies were rated more highly (mean CTAM score 77.3) than feasibility studies (44.7). All of the studies had interventions carried out by independent assessors and had adequate handling and assessment of dropouts. Only 4 studies had outcome assessments conducted by assessors blind to group allocation. The observational studies were classified as either fair or good in terms of quality. Few studies (n=4) used a method of blind rating of outcomes. This is particularly important when assessing service user satisfaction with the intervention, as researcher involvement may unintentionally bias the ratings. Finally, it is difficult to compare study quality across feasibility, RCT, and observational studies. Currently there is no measure to assess the quality of feasibility studies. The CTAM and Downs and Black scales provide a useful reference point; however, direct comparisons are not possible. In the future, RCTs should be developed from the feasibility studies discussed here, to provide further, high-quality support for these initial findings.

    Strengths and Limitations of the Review and Recommendations

    One of the main limitations of this study is the difficulty of comparing rates of adherence across studies with different interventions and different outcomes. Although most studies provided data either as the percentage of individuals completing an intervention or the mean percentage of an intervention completed, these two measures may not provide accurate information when directly combined. A universal measure of adherence should be adopted, in addition to more detailed information on the quantity or quality of adherence. For example, Simco et al [16] recommended including not just the percentage of an intervention completed but also the number of exercises per week or log-ins per week to obtain a more qualitative perspective on use. Along these lines, it will be important for future reviews to separate and compare the modes of delivery in their analysis of baseline adherence levels. For example, the baseline rate of adherence to a mobile phone intervention may differ from that for a Web-based intervention; comparisons across and within modes of delivery may provide insight into the types of technologies that are preferred. Finally, it will be important for future reviews to carefully document and unpick any potential risks of harm that service users may experience when using these remote technologies. Reviews should provide an unbiased account of both the benefits and disadvantages of remote interventions, for example, as highlighted by the finding by Ben-Zeev et al (2014b) that participants’ BDI scores were significantly correlated with use of the mobile intervention; less frequent use of the FOCUS mobile intervention was associated with a greater reduction in depression score. This is an important finding that should guide further use of this intervention (eg, Ben-Zeev et al, 2016). Any potential negative effects should be carefully explored and documented.

    This review provides a comprehensive, up-to-date account of adherence across a variety of intervention types and platforms. Its strengths include assessing a broad range of novel technological interventions, from text message–based to Web-based to virtual reality–based programs. This allowed us to demonstrate that adherence across different types of studies and a diverse range of interventions is moderate to high. Although the choice between face-to-face and remote intervention was not examined, this result at least demonstrates potential clinical utility. The review is timely, as we included literature from the past 3 years to ensure that the reader is informed of the most recent developments. It also provides an innovative exploration of theoretically proposed predictors of adherence and is the first review of its kind to explore the importance of service user involvement and support in facilitating adherence.

    We conclude that specific service user factors such as age or symptom severity may not have a significant influence on adherence; however, the experience of the service user in terms of the development of these technologies and interventions may be an important factor that requires care and consideration.

    Acknowledgments

    The study was supported by the Biomedical Research Centre for Mental Health, South London and Maudsley NHS Foundation Trust. Professor Wykes would also like to acknowledge the support of her NIHR Senior Investigator award.

    Conflicts of Interest

    None declared.

    Multimedia Appendix 1

    PRISMA checklist.

    PDF File (Adobe PDF File), 62KB

    References

    1. Ben-Zeev D. Technology-based interventions for psychiatric illnesses: improving care, one patient at a time. Epidemiol Psychiatr Sci 2014 Dec;23(4):317-321 [FREE Full text] [CrossRef] [Medline]
    2. van der Krieke L, Wunderink L, Emerencia AC, de Jonge P, Sytema S. E-mental health self-management for psychotic disorders: state of the art and future perspectives. Psychiatr Serv 2014 Jan 01;65(1):33-49. [CrossRef] [Medline]
    3. Ben-Zeev D, Kaiser S, Krzos I. Remote “hovering” with individuals with psychotic disorders and substance use: feasibility, engagement, and therapeutic alliance with a text-messaging mobile interventionist. J Dual Diagn 2014;10(4):197-203 [FREE Full text] [CrossRef] [Medline]
    4. Álvarez-Jiménez M, Gleeson J, Bendall S, Lederman R, Wadley G, Killackey E, et al. Internet-based interventions for psychosis: a sneak-peek into the future. Psychiatr Clin North Am 2012 Sep;35(3):735-747. [CrossRef] [Medline]
    5. Ventura J, Wilson SA, Wood RC, Hellemann GS. Cognitive training at home in schizophrenia is feasible. Schizophr Res 2013 Feb;143(2-3):397-398. [CrossRef] [Medline]
    6. Ben-Zeev D, Wang R, Abdullah S, Brian R, Scherer EA, Mistler LA, et al. Mobile behavioral sensing for outpatients and inpatients with schizophrenia. Psychiatr Serv 2016 May 01;67(5):558-561 [FREE Full text] [CrossRef] [Medline]
    7. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. Arlington, VA: American Psychiatric Association; 2013.
    8. Christensen H, Griffiths K, Farrer L. Adherence in internet interventions for anxiety and depression. J Med Internet Res 2009 Apr 24;11(2):e13 [FREE Full text] [CrossRef] [Medline]
    9. Kimhy D, Vakhrusheva J, Liu Y, Wang Y. Use of mobile assessment technologies in inpatient psychiatric settings. Asian J Psychiatr 2014 Aug;10:90-95 [FREE Full text] [CrossRef] [Medline]
    10. Sendt KV, Tracy DK, Bhattacharyya S. A systematic review of factors influencing adherence to antipsychotic medication in schizophrenia-spectrum disorders. Psychiatry Res 2015 Jan 30;225(1-2):14-30. [CrossRef] [Medline]
    11. Leclerc E, Noto C, Bressan RA, Brietzke E. Determinants of adherence to treatment in first-episode psychosis: a comprehensive review. Rev Bras Psiquiatr 2015;37(2):168-176 [FREE Full text] [CrossRef] [Medline]
    12. Stowkowy J, Addington D, Liu L, Hollowell B, Addington J. Predictors of disengagement from treatment in an early psychosis program. Schizophr Res 2012 Apr;136(1-3):7-12. [CrossRef] [Medline]
    13. Firth J, Cotter J, Torous J, Bucci S, Firth J, Yung A. Mobile phone ownership and endorsement of “mhealth” among people with psychosis: a meta-analysis of cross-sectional studies. Schizophr Bull 2016 Mar;42(2):448-455 [FREE Full text] [CrossRef] [Medline]
    14. Alvarez-Jimenez M, Alcazar-Corcoles M, González-Blanch C, Bendall S, McGorry P, Gleeson J. Online, social media and mobile technologies for psychosis treatment: a systematic review on novel user-led interventions. Schizophr Res 2014 Jun;156(1):96-106. [CrossRef] [Medline]
    15. Firth J, Torous J, Yung A. Ecological momentary assessment and beyond: The rising interest in e-mental health research. J Psychiatr Res 2016 Sep;80:3-4 [FREE Full text] [CrossRef] [Medline]
    16. Simco R, McCusker J, Sewitch M. Adherence to self-care interventions for depression or anxiety: a systematic review. Health Educ J Internet 2014 Jan 21;73(6):714-730 [FREE Full text]
    17. Kimhy D, Vakhrusheva J, Khan S, Chang RW, Hansen MC, Ballon JS, et al. Emotional granularity and social functioning in individuals with schizophrenia: an experience sampling study. J Psychiatr Res 2014 Jun;53:141-148 [FREE Full text] [CrossRef] [Medline]
    18. Mohr D, Siddique J, Ho J, Duffecy J, Jin L, Fokuo J. Interest in behavioral and psychological treatments delivered face-to-face, by telephone, and by internet. Ann Behav Med 2010 Aug;40(1):89-98 [FREE Full text] [CrossRef] [Medline]
    19. Mohr D, Cuijpers P, Lehman K. Supportive accountability: a model for providing human support to enhance adherence to eHealth interventions. J Med Internet Res 2011 Mar 10;13(1):e30 [FREE Full text] [CrossRef] [Medline]
    20. Day JC, Bentall RP, Roberts C, Randall F, Rogers A, Cattell D, et al. Attitudes toward antipsychotic medication: the impact of clinical variables and relationships with health professionals. Arch Gen Psychiatry 2005 Jul;62(7):717-724. [CrossRef] [Medline]
    21. Wykes T, Brown M. Over promised, over-sold and underperforming? - e-health in mental health. J Ment Health 2016 Feb 6;25(1):1-4 [FREE Full text]
    22. Brenner CJ, Ben-Zeev D. Affective forecasting in schizophrenia: comparing predictions to real-time ecological momentary assessment (EMA) ratings. Psychiatr Rehabil J 2014 Dec;37(4):316-320. [CrossRef] [Medline]
    23. Moher D, Liberati A, Tetzlaff J, Altman D. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg 2010 Jan;8(5):336-341 [FREE Full text] [CrossRef]
    24. Higgins J, Green S. Cochrane. 2011. Cochrane handbook for systematic reviews of interventions, version 5.1.0   URL: http://training.cochrane.org/handbook [accessed 2017-06-09] [WebCite Cache]
    25. Tarrier N, Wykes T. Is there evidence that cognitive behaviour therapy is an effective treatment for schizophrenia? a cautious or cautionary tale? Behav Res Ther 2004 Dec;42(12):1377-1401. [CrossRef] [Medline]
    26. Wykes T, Steel C, Everitt B, Tarrier N. Cognitive behavior therapy for schizophrenia: effect sizes, clinical models, and methodological rigor. Schizophr Bull 2008 May;34(3):523-537 [FREE Full text] [CrossRef] [Medline]
    27. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health 1998 Jun;52(6):377-384 [FREE Full text] [Medline]
    28. Samoocha D, Bruinvels D, Elbers N, Anema J, van der Beek AJ. Effectiveness of web-based interventions on patient empowerment: a systematic review and meta-analysis. J Med Internet Res 2010 Jun 24;12(2):e23 [FREE Full text] [CrossRef] [Medline]
    29. Palmier-Claus J, Rogers A, Ainsworth J, Machin M, Barrowclough C, Laverty L, et al. Integrating mobile-phone based assessment for psychosis into people’s everyday lives and clinical care: a qualitative study. BMC Psychiatry 2013 Jan 23;13:34. [CrossRef] [Medline]
    30. Ainsworth J, Palmier-Claus J, Machin M, Barrowclough C, Dunn G, Rogers A, et al. A comparison of two delivery modalities of a mobile phone-based assessment for serious mental illness: native smartphone application vs text-messaging only implementations. J Med Internet Res 2013 Apr 05;15(4):e60 [FREE Full text] [CrossRef] [Medline]
    31. van der Krieke L, Emerencia A, Boonstra N, Wunderink L, de Jonge P, Sytema S. A web-based tool to support shared decision making for people with a psychotic disorder: randomized controlled trial and process evaluation. J Med Internet Res 2013 Oct 07;15(10):e216 [FREE Full text] [CrossRef] [Medline]
    32. Kurtz MM, Mueser KT, Thime WR, Corbera S, Wexler BE. Social skills training and computer-assisted cognitive remediation in schizophrenia. Schizophr Res 2015 Mar;162(1-3):35-41 [FREE Full text] [CrossRef] [Medline]
    33. Smith MJ, Fleming MF, Wright MA, Roberts AG, Humm LB, Olsen D, et al. Virtual reality job interview training and 6-month employment outcomes for individuals with schizophrenia seeking employment. Schizophr Res 2015 Aug;166(1-3):86-91 [FREE Full text] [CrossRef] [Medline]
    34. Beebe L, Smith K, Phillips C. A comparison of telephone and texting interventions for persons with schizophrenia spectrum disorders. Issues Ment Health Nurs 2014 May;35(5):323-329. [CrossRef] [Medline]
    35. Moritz S, Schröder J, Klein JP, Lincoln TM, Andreou C, Fischer A, et al. Effects of online intervention for depression on mood and positive symptoms in schizophrenia. Schizophr Res 2016 Aug;175(1-3):216-222. [CrossRef] [Medline]
    36. Nahum M, Fisher M, Loewy R, Poelke G, Ventura J, Nuechterlein K, et al. A novel, online social cognitive training program for young adults with schizophrenia: a pilot study. Schizophr Res Cogn 2014 Mar 01;1(1):e11-e19 [FREE Full text] [CrossRef] [Medline]
    37. Gleeson J, Lederman R, Wadley G, Bendall S, McGorry P, Alvarez-Jimenez M. Safety and privacy outcomes from a moderated online social therapy for young people with first-episode psychosis. Psychiatr Serv 2014 Apr 01;65(4):546-550. [CrossRef] [Medline]
    38. Ben-Zeev D, Brenner C, Begale M, Duffecy J, Mohr D, Mueser K. Feasibility, acceptability, and preliminary efficacy of a smartphone intervention for schizophrenia. Schizophr Bull 2014 Nov;40(6):1244-1253 [FREE Full text] [CrossRef] [Medline]
    39. Palmier-Claus J, Ainsworth J, Machin M, Dunn G, Barkus E, Barrowclough C, et al. Affective instability prior to and after thoughts about self-injury in individuals with and at-risk of psychosis: a mobile phone based study. Arch Suicide Res 2013;17(3):275-287. [CrossRef] [Medline]
    40. Schlosser D, Campellone T, Kim D, Truong B, Vergani S, Ward C, et al. Feasibility of PRIME: a cognitive neuroscience-informed mobile app intervention to enhance motivated behavior and improve quality of life in recent onset schizophrenia. JMIR Res Protoc 2016 Apr 28;5(2):e77 [FREE Full text] [CrossRef] [Medline]
    41. Hartley S, Haddock G, Vasconcelos ES, Emsley R, Barrowclough C. An experience sampling study of worry and rumination in psychosis. Psychol Med 2014 Jun;44(8):1605-1614. [CrossRef] [Medline]
    42. So SH, Peters ER, Swendsen J, Garety PA, Kapur S. Detecting improvements in acute psychotic symptoms using experience sampling methodology. Psychiatry Res 2013 Nov 30;210(1):82-88. [CrossRef] [Medline]
    43. Sanchez A, Lavaysse L, Starr J, Gard D. Daily life evidence of environment-incongruent emotion in schizophrenia. Psychiatry Res 2014 Dec 15;220(1-2):89-95 [FREE Full text] [CrossRef] [Medline]
    44. Waller R, Gilbody S. Barriers to the uptake of computerized cognitive behavioural therapy: a systematic review of the quantitative and qualitative evidence. Psychol Med 2009 May;39(5):705-712. [CrossRef] [Medline]
    45. Startup M, Jackson MC, Startup S. Insight and recovery from acute psychotic episodes: the effects of cognitive behavior therapy and premature termination of treatment. J Nerv Ment Dis 2006 Oct;194(10):740-745. [CrossRef] [Medline]
    46. Gleeson JF, Cotton S, Alvarez-Jimenez M, Wade D, Gee D, Crisp K, et al. A randomized controlled trial of relapse prevention therapy for first-episode psychosis patients. J Clin Psychiatry 2009 Apr;70(4):477-486. [Medline]
    47. Rose D. Service user produced knowledge. J Ment Health 2008 Jan 6;17(5):447-451 [FREE Full text]


    Abbreviations

    CBT: cognitive behavioral therapy
    CTAM: clinical trials assessment measure
    DSM-5: Diagnostic and Statistical Manual of Mental Disorders
    ESM: experience sampling method
    FEP: first episode psychosis
    ICD-10: International Classification of Diseases
    PRISMA: preferred reporting items for systematic reviews and meta-analyses
    RCT: randomized controlled trial


    Edited by G Eysenbach; submitted 01.12.16; peer-reviewed by J Apolinário-Hagen, K Kannisto, J Torous; comments to author 19.01.17; revised version received 03.03.17; accepted 14.03.17; published 20.07.17

    ©Clare Killikelly, Zhimin He, Clare Reeder, Til Wykes. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 20.07.2017.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mhealth and uhealth, is properly cited. The complete bibliographic information, a link to the original publication on http://mhealth.jmir.org/, as well as this copyright and license information must be included.