Published in Vol 10, No 3 (2022): March

Quality of Mobile Apps for Care Partners of People With Alzheimer Disease and Related Dementias: Mobile App Rating Scale Evaluation


Original Paper

1Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI, United States

2Indiana University School of Medicine, Indianapolis, IN, United States

3Department of Health & Wellness Design, School of Public Health-Bloomington, Indiana University, Bloomington, IN, United States

Corresponding Author:

Nicole E Werner, PhD

Department of Industrial and Systems Engineering

University of Wisconsin-Madison

1513 University Avenue

Madison, WI, 53706

United States

Phone: 1 608 890 2578


Background: Over 11 million care partners in the United States who provide care to people living with Alzheimer disease and related dementias (ADRD) cite persistent and pervasive unmet needs related to their caregiving role. The proliferation of mobile apps for care partners has the potential to meet care partners’ needs, but the quality of apps is unknown.

Objective: This study aims to evaluate the quality of publicly available apps for care partners of people living with ADRD and identify design features of low- and high-quality apps to guide future research and user-centered app development.

Methods: We searched the US Apple App and Google Play stores. To be included, an app needed to be available in the US Google Play or Apple App stores, accessible to users out of the box, and primarily intended for use by an informal (family or friend) care partner of a person living with ADRD. We classified and tabulated app functionalities. The included apps were then evaluated using the Mobile App Rating Scale (MARS), which comprises 23 items across 5 dimensions: engagement, functionality, aesthetics, information, and subjective quality. We computed descriptive statistics for each rating. To identify recommendations for future research and app development, we categorized rater comments on the score-driving factors for each MARS rating item and on what the app could have done to improve the item score.

Results: We evaluated 17 apps. We found that, on average, apps were of minimally acceptable quality. Functionalities supported by apps included education (12/17, 71%), interactive training (3/17, 18%), documentation (3/17, 18%), tracking symptoms (2/17, 12%), care partner community (3/17, 18%), interaction with clinical experts (1/17, 6%), care coordination (2/17, 12%), and activities for the person living with ADRD (2/17, 12%). Of the 17 apps, 8 (47%) had only 1 feature, 6 (35%) had 2 features, and 3 (18%) had 3 features. The MARS quality mean score across apps was 3.08 (SD 0.83) on the 5-point rating scale (1=inadequate to 5=excellent), with apps scoring highest on average on functionality (mean 3.37, SD 0.99) and aesthetics (mean 3.24, SD 0.92) and lowest on average on information (mean 2.95, SD 0.95) and engagement (mean 2.76, SD 0.89). The MARS subjective quality mean score across apps was 2.26 (SD 1.02).

Conclusions: We identified apps whose mean scores were more than 1 point below the threshold for minimally acceptable quality and others that were more than 1 point above it. Many apps had broken features and were rated as below acceptable for engagement and information. Minimally acceptable quality is likely insufficient to meet care partner needs. Future research should establish minimum quality standards across dimensions for care partner mobile apps. The design features of high-quality apps identified in this study can provide a foundation for benchmarking these standards.

JMIR Mhealth Uhealth 2022;10(3):e33863




Over 11 million care partners in the United States who provide care to people living with Alzheimer disease and related dementias (ADRD) are often untrained, underresourced, and unsupported to manage the cognitive, behavioral, and physical changes that characterize ADRD progression [1-3]. Therefore, care partners cite persistent and pervasive unmet needs related to all aspects of their caregiving role, including support for daily care, managing behavioral symptoms of dementia, self-care, resources and support services, health information management, care coordination and communication, and financial and legal planning [4-6]. The ability to address the unmet needs of care partners is a critical health challenge, as these unmet needs are associated with suboptimal psychological and physical outcomes for the care partner and the person living with ADRD [7-10].

National experts have called for powerful and novel technology-based interventions to support care partners [11]. For example, experts from the 2015 Alzheimer Disease Research Summit recommended that researchers “develop new technologies that enhance the delivery of clinical care, care partner support, and in-home monitoring” and “test the use of technology to overcome the workforce limitations in the care of older adults with dementia as well as providing care partner support and education” [11]. The 2018 Research Summit called for “innovative digital data collection platforms” and “pervasive computing assessment methods” [12].

Mobile apps can answer these calls by enabling unique data capture and visualization, multichannel communication, and integration of powerful decision support on increasingly ubiquitous and scalable devices (eg, smartphones). Advancing technological capabilities also increase the potential of mobile apps to provide much-needed individualized, just-in-time support that can adapt to changing needs across the course of the disease [13]. Reviews of mobile apps for care partners report that they are a feasible and acceptable intervention [14] and can reduce ADRD care partner stress and burden [15].

The mere availability of apps is not sufficient to improve health outcomes; these apps must be designed to support and accommodate user needs and abilities, a process called user-centered design (UCD). More formally, UCD is:

an approach to interactive systems development that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics, usability knowledge, and techniques. This approach enhances effectiveness and efficiency, improves human well-being, user satisfaction, accessibility and sustainability, and counteracts possible adverse effects of use on human health, safety and performance.

UCD provides a scientifically sound, practice-based mechanism for developing mobile apps for care partners of people living with ADRD that are highly feasible and more likely to improve care partner outcomes [17]. Conversely, if apps are not designed using UCD, they are more likely to be of low quality, cause more harm than good, incur avoidable waste of financial and human resources, not provide the needed support, and compound the existing burden on care partners [13,16,18-21].

Despite the potential of mobile apps to meet care partners’ needs and improve outcomes using UCD and other industry-standard design practices, the actual quality of mobile apps for care partners—that is, how usable, engaging, valid, acceptable, accessible, aesthetically pleasing, and useful they are to the user—is currently unknown. A recent study by Choi et al [22] used the Mobile App Rating Scale (MARS) to assess app quality across ADRD-related apps focused on self-care management for people living with ADRD. They found that, on average, the evaluated apps met the MARS criteria for minimally acceptable quality, quality scores were higher for those developed by health care–related versus non–health care–related developers, and apps scored lower on average regarding how engaging they were to the user [22]. Although this study included some apps with care partners as the intended primary user, the inclusion and exclusion criteria focused on the person living with ADRD, which limited the inclusion of apps targeted at care partners as the intended end user.

It is critical to evaluate the quality of mobile apps for ADRD care partners for several reasons [17]. First, quality assessment ensures that mobile apps produce benefits and do not have unintended health consequences for care partners or persons living with ADRD; for example, they do not increase care partner stress and burden. Second, quality evaluation can provide insights into whether mobile apps will be used and whether use will withstand the test of time; that is, they will not be abandoned. Third, quality evaluation is important to ensure that research-based mobile apps are sustainable outside academic research settings, meaning they can achieve commercial success among competitors. Fourth, quality evaluation can safeguard against commercial products that may not deliver on their advertised potential.


Thus, the aim of this study is to (1) evaluate the quality of publicly available apps for care partners of people living with ADRD and (2) identify the design features of low- and high-quality apps to guide future research and user-centered app development.


We conducted a multirater evaluation of the quality of mobile apps for caregivers of people living with ADRD available on the US market by applying the MARS [23]. The MARS was created to be an easy-to-use and objective tool for researchers and developers to evaluate the quality of mobile apps across multiple dimensions. We chose to use the MARS because it is a validated rating scale for mobile app quality, includes a multicomponent evaluation of quality, has clear instructions and a uniform scale, and has been used successfully across multiple health domains, including pain management and ADRD [22,24,25].

Data Collection

App Identification and Selection

We searched the US Apple App and Google Play stores in March 2021 using multiple variations of the terms “caregiver,” “carer,” “care,” “caretaker,” “dementia,” and “Alzheimer disease.” To be included in the analysis, an app needed to be (1) available in the US Google Play or Apple App stores; (2) directly accessible to users out of the box (ie, without a separate agreement with an insurer or health care delivery organization and without enrollment in a clinical trial); and (3) primarily intended for use by an informal (family or friend) care partner or care partners of a person with dementia of any severity, stage, or etiology. Four members of the research team independently searched both app stores to identify eligible apps based on the app name and brief description and identified 50 unique apps. Next, 3 members of the research team applied the inclusion criteria to the compiled list of apps by reviewing the full app description and downloading and exploring the app components. One research team member served as the arbiter by reviewing each app for inclusion and documenting the reason for inclusion or exclusion. The arbiter presented their inclusion decisions to the full research team for consensus. Primary reasons for app exclusion were not having the caregiver as the primary user (eg, apps for the person living with ADRD), needing to sign up for a clinical trial or be part of a specific health system to access the app, and not being specific to ADRD care (eg, targeted at caregivers of people with any condition). We identified 17 unique apps that met our inclusion criteria, 8 (47%) of which were available in both iOS and Android versions. For apps available on both platforms, we randomly selected whether we would evaluate the iOS or Android version. An expert rater also reviewed the version that was not selected to assess quality differences, and no quality differences were identified between platforms for any of the apps.

App Classification

For each included app, we captured descriptive and technical information, such as name, ratings, version history, language, and functionality. We classified the app’s purpose and functionality based on the app store description and available functionality within the app.

MARS Evaluation

The MARS includes 23 items across 5 dimensions: engagement, functionality, aesthetics, information, and subjective quality [23]. Each item was scored on a 5-point scale, from inadequate (score=1) to excellent (score=5) or not applicable.

Our MARS evaluation team included 7 research team members: 3 experts in UCD and ADRD caregiving and 4 trainees in these areas. The MARS training process began with the full team independently reviewing the published MARS guide, including instructions, definitions, and rating scales. Next, we conducted 3 team-based training sessions to improve consensus on the MARS ratings. During the training sessions, we evaluated each app as a team, item by item, with a discussion of each item rating to build consensus on how to interpret the items and the criteria for each score within an item. During the team rating sessions, we discussed score anchors and annotated the MARS rating sheet based on consensus anchors. Between team training sessions, team members practiced applying the ratings discussed in the previous session and created additional annotations based on the team consensus discussion, which were then shared with the full team at the subsequent meeting.

Next, each app was rated using the MARS by at least 2 independent raters. To apply the MARS, each trained rater downloaded the app to a testing phone, paid any required fees, and tested the app to ensure that all of its components were used. The rater then completed the 23 MARS rating items in order, app by app. In addition to the required MARS rating procedures, raters also documented, for each item, the score-driving factors for that item and what the app could have done to improve the score. This was done to support our aim of guiding future research and app development.

Two members of the research team reviewed all the scores and identified the items on which the original 2 raters disagreed. For items with disagreeing scores, an expert rater (JCB, RJH, or NEW) served as the tiebreaker. The expert rater’s goal as tiebreaker was to determine which of the two scores they agreed with, based on the MARS training and their expertise in evaluating health information technologies. However, if the expert rater disagreed with both scores, the tie-breaking score could differ from both original scores, with clear justification. Expert raters were senior members of the research team with doctoral training in UCD, a combined 6 years of experience designing and evaluating dementia caregiving technologies, and a combined 13 years of experience evaluating health information technologies. We calculated the percentage agreement for each rating dyad and the overall agreement rate.
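As an illustration of the agreement computation described above, the sketch below computes exact and within-1-point agreement for a single rating dyad; the ratings shown are hypothetical examples, not the study’s data:

```python
# Percentage agreement for one rating dyad on the 1-5 MARS scale.
# The ratings below are hypothetical examples, not the study's data.

def agreement_rates(rater1, rater2):
    """Return (exact agreement, within-1-point agreement) as fractions."""
    pairs = list(zip(rater1, rater2))
    exact = sum(a == b for a, b in pairs) / len(pairs)
    within_one = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)
    return exact, within_one

# One dyad's item-level ratings (truncated here; the MARS has 23 items).
rater_a = [3, 4, 2, 5, 3, 3, 1, 4]
rater_b = [3, 3, 2, 4, 5, 3, 2, 4]

exact, within_one = agreement_rates(rater_a, rater_b)
print(f"Exact agreement: {exact:.0%}; within 1 point: {within_one:.0%}")
```

The overall rates reported in the Results (43% exact, 83% within 1 point) would pool the item pairs from all 6 dyads before computing the same fractions.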

Data Analysis

The ratings were entered into a cloud-based Microsoft Excel spreadsheet, and descriptive statistics were computed for each rating. First, we computed the mean score for each quality dimension (engagement, functionality, aesthetics, information, and subjective quality) for each individual app as the sum of the item scores in that dimension divided by the number of items in the dimension. Next, we calculated the app quality mean score for each app as the sum of the dimension mean scores divided by the number of dimensions. We calculated the total mean score for each dimension across all apps as the sum of each app’s dimension mean score divided by the total number of apps. Finally, we computed the overall app subjective quality mean score as the sum of each app’s subjective quality mean score divided by the total number of apps.
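As a minimal sketch of the aggregation just described (with subjective quality reported separately from the 4 quality dimensions, as in Table 3), assuming hypothetical apps, items, and scores rather than the study data:

```python
# Sketch of the MARS score aggregation described above.
# Apps, items, and scores are hypothetical, not the study's data.

def dimension_mean(item_scores):
    """Dimension mean: sum of item scores divided by the number of items."""
    return sum(item_scores) / len(item_scores)

def app_quality_mean(dimensions):
    """App quality mean: sum of dimension means over the number of dimensions."""
    means = [dimension_mean(items) for items in dimensions.values()]
    return sum(means) / len(means)

# Each app maps the 4 quality dimensions to its reconciled item scores.
apps = {
    "App A": {"engagement": [3, 2, 4, 3, 3], "functionality": [4, 4, 3, 4],
              "aesthetics": [3, 4, 3], "information": [3, 2, 3, 4]},
    "App B": {"engagement": [2, 2, 3, 2, 2], "functionality": [3, 3, 2, 3],
              "aesthetics": [2, 3, 2], "information": [2, 2, 3, 2]},
}

# Quality mean score for each app.
quality = {name: app_quality_mean(dims) for name, dims in apps.items()}

def total_dimension_mean(apps, dimension):
    """Total mean for one dimension: mean of the apps' dimension means."""
    per_app = [dimension_mean(d[dimension]) for d in apps.values()]
    return sum(per_app) / len(per_app)

print(round(quality["App A"], 2))                          # 3.27
print(round(total_dimension_mean(apps, "engagement"), 2))  # 2.6
```

The same arithmetic yields the dimension-level and overall means reported in the Results when applied to the study’s reconciled ratings.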

To identify recommendations for future research and app development, 2 expert members of the research team categorized rater comments on the score-driving factors for each item and what the app could have done to improve the score for that item. They then met to discuss the categories and reach a consensus.

Ethics Approval

This study did not involve human subjects.

App Classification

We evaluated 17 apps (n=7, 41%, iOS only; n=2, 12%, Android only; and n=8, 47%, both iOS and Android). Before the expert-based score reconciliation process, across 6 rating dyads, the raters provided the exact same rating on the 1 to 5 scale in 43% of ratings; rater dyads agreed within 1 point in 83% of cases (detailed agreement and disagreement rates of each rating dyad are given in Multimedia Appendix 1). All apps except one were available at no cost for the most basic version, and no apps required an additional cost to upgrade to advanced features or additional content. Apps were affiliated with commercial companies (8/17, 47%), universities (4/17, 24%), health systems (3/17, 18%), governments (2/17, 12%), and nongovernmental organizations (1/17, 6%). Of the 17 apps evaluated, 14 (82%) were available in English only; 1 (6%) was available in English, Korean, and Spanish; 1 (6%) was available in English and Japanese; and 1 (6%) was available in English and Portuguese. Full descriptions and technical details of the apps are provided in Table 1.

We identified 8 general feature categories supported by the apps (Table 2). These categories included the ability of the app to provide the following: (1) education—the provision of relevant, appropriate content that increases care partner knowledge and self-efficacy to perform their role and make informed decisions (12/17, 71%); (2) interactive training—reciprocal exchange of information for care partner development and learning (3/17, 18%); (3) documentation—storage or recording of information for later retrieval (3/17, 18%); (4) tracking of symptoms (2/17, 12%); (5) care partner community—a platform or feature created for the exchange of social support among care partners (3/17, 18%); (6) interaction with clinical experts (1/17, 6%); (7) care coordination—the organization and distribution of patient care activities among all involved participants (2/17, 12%); and (8) activities for the person living with ADRD (2/17, 12%). Of the 17 apps, 8 (47%) had only 1 feature, 6 (35%) had 2 features, and 3 (18%) had 3 features. Most apps (12/17, 71%) provided information, and for 29% (5/17) of apps, providing information was the only feature. Of the 17 apps, some features such as interaction with clinicians or tracking symptoms were offered by only 1 (6%) or 2 (12%) apps.

Table 1. Apps evaluated in the study and their descriptive and technical details.
App name | Platform | Category | Developer | Year of last update | Country | Language | Purpose | Affiliations
Accessible Alzheimer’s and Dementia Care | iOS | Health and fitness | ACHGLOBAL, Inc | N/A^a | United States | English | Provide valuable information | Commercial
Alzheimer’s and Dementia Care | Android | Health and fitness | Accessible home health care | 2019 | United States | English | Provide information | Commercial
Alzheimer’s Daily Companion | iOS^b, Android | Lifestyle | Home Instead Senior Care | 2013 | United States | English | Build care partner confidence | Commercial
Alzheimer’s Manager | iOS | Health and fitness | Point of Care LLC | 2021 | United States | English | Track and monitor | Commercial
Care4Dementia | iOS | Medical | Univ of New South Wales | N/A | Australia | English | Provide information | University
Clear Dementia Care | iOS, Android | Medical | Northern Health and Social Care Trust | 2021 | United States | English | Provide information and support | Government, Health System
CogniCare | iOS^b, Android | Health and fitness | CongniHealth Ltd | 2021 | United Kingdom | English, Japanese | Improve quality of life | Commercial, University
Dementia Advisor | iOS^b, Android | Health and fitness | Sinai Health System | 2019 | Canada | English | Improve communication | Government, Health System
Dementia Caregiver Solutions | iOS^c | Health | Lorenzo Gentile | 2016 | Canada | English | Provide expert advice | Commercial
Dementia Guide Expert | iOS^b, Android | Education | University of Illinois | 2020 | United States | English, Korean, Spanish | Educate and empower | University
Dementia Stages Ability Model | iOS^b, Android | Education | Positive Approach, LLC | 2020 | United States | English | Help learn characteristics and care of GEMS stages | Commercial
Dementia Talk | iOS | Health and fitness | Sinai Health System - Reitman Centre | 2019 | United States | English | Track and monitor | Health System
DementiAssist | Android | Medical | Baylor Scott and White Health | 2015 | United States | English | Provide insights | University
DemKonnect | iOS^b, Android | Medical | Nightingales Medical Trust | 2020 | United Kingdom | English | Provide care partner connections | NGO^d
Inspo-Alzheimer’s Caregiving | iOS^b, Android | Social networking | Inspo Labs | 2020 | United States | English | Create safe supportive community | N/A
Remember Me-Caregiver | iOS | Productivity | Daniel Leal | 2020 | United States | English, Portuguese | Share care responsibility | N/A
Respite Mobile | iOS | Health and fitness | ADC initiatives LLC | 2021 | United States | English | Provide activities for people with ADRD^e | Commercial

^a N/A: not available.

^b Indicates platform reviewed.

^c Indicates cost of use.

^d NGO: nongovernmental organization.

^e ADRD: Alzheimer disease and related dementias.

Table 2. Feature categories of the evaluated apps (N=17).
Apps | Education (n=12, 71%) | Interactive training (n=3, 18%) | Documentation (n=3, 18%) | Tracking symptoms (n=2, 12%) | Care partner community (n=3, 18%) | Interaction with clinical expert (n=1, 6%) | Care coordination (n=2, 12%) | Activities for person with dementia (n=2, 12%)

Accessible Alzheimer’s and Dementia Care
Alzheimer’s and Dementia Care
Alzheimer’s Daily Companion
Alzheimer’s Manager
Clear Dementia Care
Dementia Advisor
Dementia Caregiver Solutions
Dementia Guide Expert
Dementia Stages Ability Model
Dementia Talk
Inspo-Alzheimer’s Caregiving
Remember Me-Caregiver
Respite Mobile

MARS Evaluation

The MARS app quality mean score across all apps was 3.08 (SD 0.83) on the 5-point rating scale (from 1=inadequate to 5=excellent), with apps scoring highest on average on functionality (mean 3.37, SD 0.99) and aesthetics (mean 3.24, SD 0.92) and lowest on average on information (mean 2.95, SD 0.95) and engagement (mean 2.76, SD 0.89; Table 3).

Table 3. Mean scores on the Mobile App Rating Scale (MARS) rating categories with category definitions and subjective evaluation data, including app store number of ratings, app store average ratings, and the MARS subjective quality score.
App | Quality mean score | Engagement | Functionality | Aesthetics | Information | Subjective quality score | App store number of ratings | App store average rating (out of 5 stars)
Accessible Alzheimer’s and Dementia Care | 1.26 | 1.20 | 1.50 | 1.00 | 1.33 | 1.00 | 1 | 5
Alzheimer’s and Dementia Care | 2.
Alzheimer’s Daily Companion | 2.50 | 1.60 | 3.25 | 2.67 | 2.50 | 1.25 | 16 | 4.6
Alzheimer’s Manager | 2.98 | 2.60 | 3.50 | 3.00 | 2.83 | 2.67 | 1 | 3
Clear Dementia Care | 3.85 | 3.80 | 4.25 | 3.33 | 4.00 | 3.25 | 1 | 5
Dementia Advisor | 4.
Dementia Caregiver Solutions | 3.11 | 2.60 | 3.50 | 3.00 | 3.33 | 2.00 | 1 | 5
Dementia Guide Expert | 2.66 | 2.40 | 2.75 | 2.33 | 3.17 | 3.25 | 1 | 5
Dementia Stages Ability Model | 4.26 | 3.30 | 4.75 | 4.67 | 4.33 | 3.25 | 15 | 5
Dementia Talk | 3.
Inspo-Alzheimer’s Caregiving | 4.
Remember Me-Caregiver | 1.88 | 1.50 | 2.50 | 2.33 | 1.17 | 1.00 | NR^a | NR
Respite Mobile | 3.35 | 3.
Overall, mean (SD) | 3.08 (0.83) | 2.76 (0.89) | 3.37 (0.99) | 3.24 (0.92) | 2.95 (0.95) | 2.26 (1.02) | N/A^b | N/A

^a NR: not rated.

^b N/A: not applicable.

The MARS subjective quality mean score across all apps was 2.26 (SD 1.02), with mean scores ranging from 1 to 4.5. The mean score for the question, “Would you recommend the app to people who might benefit from it?” was 2.59 (SD 1.42).

The MARS app quality mean score was slightly below minimally acceptable quality for apps with a commercial affiliation (mean 2.94, SD 0.93) and slightly above it for apps with noncommercial affiliations (ie, universities, governments, health systems, and nongovernmental organizations; mean 3.26, SD 0.57). The MARS subjective quality mean score was below minimally acceptable quality for apps with both commercial (mean 1.96, SD 0.83) and noncommercial (mean 2.64, SD 1.11) affiliations.

Table 3 provides the mean scores on the MARS quality rating dimensions and subjective evaluation data, including the MARS subjective quality score, app store number of ratings, and app store average ratings for all evaluated apps.

Score-Driving Factors

Table 4 lists the most frequently identified design qualities that led to low or high MARS scores for each MARS dimension. Among the factors contributing to low scores, a common one was broken functionality, which led to crashes, error messages, and unresponsiveness and was noted in 59% (10/17) of the apps. Among the factors contributing to high scores, a common set related to aesthetics: adequate use of multimedia for content presentation, clear and consistent user interface layouts, and high-quality graphics were noted in 53% (9/17) of the apps.

Table 4. Mobile App Rating Scale (MARS) dimensions, categories within those dimensions, and examples of design features that were score drivers for low and high scores.
MARS dimension and category | Examples of low-score drivers | Examples of high-score drivers

Engagement

Entertainment | Entertaining content (eg, games, chat, videos, and forums) that does not function; extensive and overwhelming content; and very little content | Use of multimedia (eg, combination of text, video, audio, images, and animations)

Interest | Text only with no images, large blocks of text, frequent system failure, constant linking to outside websites, and no ability to customize the experience | Variety of content, features, and color throughout the app

Customization | Limited, inoperable, or missing customization features | Variety of customization options (eg, privacy settings, preference selection, notifications, and favorites)

Interactivity | Interactive content (eg, chat, graphs, and forums) that does not function; having to click what you need every time (the app does not retain information); and a community forum with no active users | Feedback systems (eg, confirmations, error messages, and validations) and variety of data visualization with charts, in-app messaging, and features for community building

Target group | Small font, no ability to zoom, only general information provided, and no privacy settings | Content relevance and usefulness of information

Functionality

Performance | Frequent error messages and crashes, frequently unresponsive or slow, and inactive hyperlinks | Responsiveness and efficient transitions throughout the app

Ease of use | Takes a long time to figure out how to use, functions difficult to learn because of complicated app architecture, and no instructions provided | Clarity and intuitiveness of app functions; learnability, operability, and app instructions

Navigation | Menu options change within the app, clicking a link leads to the incorrect function, users consistently sent to an outside website with broken links, no back button provided, and menu options that do not function | Logic, consistency, and visual cues that match users’ expectations; external sources within the app; and minimalist design

Gestural design | Gestures that differ from expected phone gestures | Logical, consistent, anticipated gestures, links, and buttons

Aesthetics

Layout | Blocks of text and inconsistent layout across pages | Clear and simple user interface layout

Graphics | Low-quality (blurry) or no graphics | Quality, high-resolution graphics

Visual appeal | No color, no graphics, no multimedia, and inconsistent text sizes and colors | Creative, impactful, and thoughtful use of color

Information

Accuracy of app description | Describes content that does not function or is not available | Features and functions aligned with the app description

Goals | Goals not stated, goals not achievable because app functions are broken or unresponsive, and no ability to measure goal attainment | Goals stated explicitly with measurable or trackable metrics

Quality of information | Sources not cited, questionable sources cited, broken links to sources, and information disorganized or difficult to locate | Information provided from trusted, cited sources; language written with end users or the target demographic in mind; and information relevant to users

Quantity of information | Extensive and overwhelming amount of information, or very little information | Sufficient and comprehensive range of information

Visual information | No visual information available | Logical use of videos, multimedia, and helpful images to provide clarity

Credibility | No sources cited and a commercial entity selling other products | Created by a legitimate, verified entity, including a hospital, center, government, university, or council

Evidence base | Not tested for effectiveness in improving outcomes for the person living with ADRD^a or the care partner | N/A^b

^a ADRD: Alzheimer disease and related dementias.

^b N/A: not available.

Principal Findings

The objectives of our study were to (1) evaluate the quality of publicly available apps for care partners of people living with ADRD and (2) identify the design features of low- and high-quality apps to guide future research and app development. Our findings show that across all apps, the average MARS quality rating was just above the minimally acceptable cut-off of 3.00 (mean 3.08, SD 0.83; range 1.26-4.26), and the average MARS subjective quality rating of all the apps was less than acceptable (mean 2.26, SD 1.02; range 1.00-4.50). We also identified apps whose individual mean scores were more than 1 point below minimally acceptable quality and others that were more than 1 point above it. Furthermore, most of the apps we assessed had broken features and were rated as below acceptable quality for the MARS dimensions of engagement and information quality.

Of the 17 mobile apps, our analysis identified 3 (18%) with a rating of good or higher quality (MARS quality mean score >4). Notably, Dementia Advisor scored greater than 4 (ie, indicating good quality) on both the MARS quality mean score and the subjective quality mean score. In contrast to most apps, which focus on providing education through text and videos, Dementia Advisor provides interactive training on a wide variety of scenarios with feedback to improve learning. The app was simple and intuitive, requiring no instructions or significant time to learn its features. All of its features were functional, and the app tracked progress through the training scenarios.

We found that most apps focused on passively delivering educational content. Providing education is important, as care partners report persistent unmet needs related to understanding ADRD as a disease process, including diagnosis, prognosis, and disease progression; long-term care and financial and legal planning; and management of cognitive and behavioral symptoms [4,5,26,27]. However, the effectiveness of the passive learning content (eg, reading an article or watching a video) provided by these apps is unknown and may be limited compared with engaged, active learning approaches that foster information retention [28-30]. In addition, care partners also report the need for training, support for coordination across the caregiving network, connection to relevant resources, and social support [4,5,27,31-33]. Some apps did attempt to address care partners’ need for social support by offering forums, chats, and community features. However, we found that these features were often not functional or lacked active participation from users, limiting the apps’ ability to fulfill their promise of social support. Furthermore, the apps offered limited functionality to support coordination across the caregiving network, with only 2 apps supporting coordination with other care partners and only 1 connecting care partners with clinicians. Overall, the limited functionality provided across most apps raises questions about their potential to improve care partner outcomes, as several recent systematic reviews and meta-analyses suggest that effective care partner interventions provide multiple components and social support [34-44].

Overall, the apps scored higher on functionality and aesthetics than on engagement and information quality. The apps, on average, scored just above minimally acceptable for functionality (mean 3.37, SD 0.99), which includes app performance, ease of use, navigation, and gestural design. Functionality is important for care partners because it reflects the potential of the app to meet basic care partner needs in terms of app usability. This functionality score is a point lower than that reported in a 2020 study by Choi et al [22], which used the MARS to assess the quality of all ADRD-related apps, including those focused on care partners and those focused on the person living with ADRD. It is possible that the higher scores found by Choi et al [22] reflect a higher quality of apps designed for people living with ADRD, as a recent study by Guo et al [45] rating mobile apps for people living with ADRD reported a similarly high functionality score.

On average, the apps scored as just above minimally acceptable for aesthetics (mean 3.24), including layout, graphics, and visual appeal. This is similar to the aesthetic scores reported by Choi et al [22] and lower than the average aesthetics score reported by Guo et al [45]. Aesthetics is an important dimension of quality that allows apps to stand out in the marketplace. Aesthetics can also facilitate emotionally positive experiences, which can improve user perceptions of the app [46,47].

However, on average, engagement, which included entertainment, customization, interactivity, and fit to the target group, was slightly below acceptable quality (mean 2.76). Similarly, both Choi et al [22] and Guo et al [45] found that apps scored lowest on engagement, at just below and just above minimally acceptable quality, respectively. These findings further confirm previous research that evaluated 8 commercially available apps for ADRD care partners and found that the majority provided mostly text-based information [48]. Below-acceptable engagement scores are concerning, as engagement issues can lead to technology abandonment, reduced acceptance, or failure to use the app to its full potential [49,50]. For care partners, engagement may be critical, as they often experience high demands associated with their caregiver role [31,51,52]. As demonstrated in other populations with chronic health conditions [53,54], engagement is important to sustain care partners’ attention when it is drawn to the many other demands they experience daily.

Information quality, which included information quantity, visual information, credibility, goals, and app description, also scored, on average, slightly lower than minimally acceptable (mean 2.95, SD 0.95). This is a point lower than the information quality score reported by Choi et al [22]. It is possible that this difference reflects higher information quality among apps designed for people living with ADRD as their target users, a possibility further supported by the similarly high score reported by Guo et al [45]. Information is a critical component of meeting care partners’ unmet needs, and low information quality may increase the likelihood of technology abandonment [55]. For example, recent research found that when care partners search for information and cannot meet the information need at that time, they often abandon the information behavior [18]. Furthermore, low-quality information is likely to reduce perceived usefulness, which has been shown to be a key factor influencing caregivers’ intention to adopt mobile health apps [56]. Lower information scores are also concerning because they reflect that apps are often not tested for effectiveness in improving outcomes for people living with ADRD or their care partners, reducing the ability to safeguard against products that may not deliver on their advertised potential. Specifically, of the 17 apps, 7 (41%) had a mean information quality score that ranged from 1.17 to 2.50, and 11 (65%) had a mean subjective quality score that ranged from 1.00 to 2.50. Scores in these ranges indicate inadequate quality, which potentially heightens the risk of technology abandonment and loss of the intended impact for target users. Furthermore, apps often state goals without any way to measure or track goal attainment; therefore, no clear pathway is provided to evaluate whether the stated goals are achievable.

Although most apps met the MARS threshold for minimal acceptability, that threshold may not be sufficient to meet the needs of care partners of people living with dementia. Research on older adults’ technology acceptance indicates that they hold technology to a higher standard [57,58]. As many care partners are older adults, raising the bar for acceptable mobile app quality may be critical to sustained care partner use. Furthermore, care partners experience high demands related to their caregiving role, including managing complex symptoms and progressive decline, and often experience suboptimal health outcomes such as high levels of burden, depression, and anxiety. Therefore, mobile apps may confer some level of risk and need to be held to a high standard so that they do not add burden or increase the risk of suboptimal health outcomes. In addition, an average score at the level of minimal acceptability may mask serious quality violations on one dimension that are counterbalanced by higher-than-average scores on other dimensions. For example, the above average–rated app Respite Mobile (mean MARS quality score 3.35) had a low information quality score (2.27) counterbalanced by particularly high scores on aesthetics (4.0) and functionality (4.25). Thus, minimum standards across dimensions may need to be imposed to avoid harm from counterbalanced weaknesses.

Overall, our ratings of the apps mirror those of a similar study by Choi et al [22], who also found app engagement scores to be lower than acceptable quality and further highlighted that their scores differed by developer type (ie, health care–related vs non–health care–related) and intended purpose (ie, awareness, assessment, and disease management). We lacked an appropriate sample to statistically compare differences between developer types. However, we similarly found that the overall mean scores of apps developed by commercial entities were just below minimally acceptable quality, whereas those developed by noncommercial entities were just above it. This comparison further supports our suggestion to establish higher standardized criteria for health information technology intended to meet the needs of care partners of people living with dementia.

Considering the variability in app quality and the failure of many apps to attain acceptable overall and dimension-specific quality ratings, there is a need to adopt quality-focused design and development approaches. One such approach is UCD, introduced earlier, which is characterized by design driven by a foundational understanding of user needs, direct or indirect input from end users in the design process, and rigorous testing with representative samples of intended end users [16]. In participatory forms of UCD, sometimes called co-design, care partners can also actively contribute to design, increasing the likelihood that user needs and abilities are supported and accommodated [59]. UCD approaches can also be used to facilitate engagement through gamification and persuasive design. Furthermore, UCD-based emotional design can increase the quality of aesthetics and functionality [46,47].

Limitations
The results of this study should be considered in light of certain limitations. Not all the raters in our study were experts in technology design; however, 3 expert raters conducted training and acted as arbiters for inclusion decisions and MARS ratings. In addition, as per the MARS approach, the raters were not users themselves. To enhance understanding of the quality of mobile apps for care partners of people living with dementia, future studies should include user testing, such as usability testing, alongside expert ratings. Furthermore, we did not rate apps that were available only to study participants; however, the apps we did rate are currently available on the market to all users, without study inclusion and exclusion criteria or participation timelines. Relatedly, we could rate only what we could access, meaning that apps that malfunctioned during log-in or were available only to customers of a specific health system were not reviewed.

We also identified limitations of the MARS that should be considered. First, the MARS assumes a typical user and does not address diverse personas, such as users of different ages, physical and cognitive abilities, races, ethnicities, and urban or rural settings. Second, applying the MARS item definitions is somewhat subjective, and the definitions are not connected to norms, such as a database of prior MARS evaluations. We addressed this limitation during training by reconciling differences in the interpretation of definitions through discussion and consensus building. Third, the MARS does not cover certain aspects of design that contribute to app quality, such as security, the design process used, data standards, and accessibility compliance.

Conclusions
In evaluating the quality of publicly available apps for care partners of people living with ADRD, we found that apps, on average, are of minimally acceptable quality. Although we identified apps both above and below the minimally acceptable quality, many apps had broken features and were rated as below acceptable quality for engagement and information quality. Minimally acceptable quality is likely insufficient to meet the needs of care partners without potentially causing harm by increasing burden and stress. Future research should establish minimum quality standards across dimensions for mobile apps for care partners. The design features of high-quality apps identified in this study can provide the foundation for benchmarking these standards.

Acknowledgments
This work was supported by the National Institutes of Health, National Institute on Aging under grants R01 AG056926 and R21 AG062966 (RJH and JCB), 1R21AG072418 (RJH and NEW), and R41AG069607 (NEW and PL). The authors would like to thank Jessica Lee, Manoghna Vuppalapati, and Addison Latterell for their contributions to the MARS rating process.

Conflicts of Interest

RJH provides paid scientific advising to Cook Medical, LLC. None of the apps reviewed are developed or owned by Cook or its subsidiaries.

Multimedia Appendix 1

Agreement and disagreement rates of each dyad before the expert-based score reconciliation process.

DOCX File , 14 KB

  1. Whitlatch CJ, Orsulic-Jeras S. Meeting the informational, educational, and psychosocial support needs of persons living with dementia and their family caregivers. Gerontologist 2018;58(suppl_1):S58-S73. [CrossRef] [Medline]
  2. Maust D, Leggett A, Kales HC. Predictors of unmet need among caregivers of persons with dementia. Am J Geriatr Psychiatry 2017;25(3):S133-S134. [CrossRef]
  3. Monica MM, Díaz-Santos M, Vossel K. 2021 Alzheimer's disease facts and figures. Alzheimer's Association. 2021.   URL: [accessed 2022-03-14]
  4. Soong A, Au ST, Kyaw BM, Theng YL, Tudor Car L. Information needs and information seeking behaviour of people with dementia and their non-professional caregivers: a scoping review. BMC Geriatr 2020;20(1):61 [FREE Full text] [CrossRef] [Medline]
  5. McCabe M, You E, Tatangelo G. Hearing their voice: a systematic review of dementia family caregivers' needs. Gerontologist 2016;56(5):e70-e88. [CrossRef] [Medline]
  6. Killen A, Flynn D, De Brún A, O'Brien N, O'Brien J, Thomas AJ, et al. Support and information needs following a diagnosis of dementia with Lewy bodies. Int Psychogeriatr 2016;28(3):495-501. [CrossRef] [Medline]
  7. Yaffe K, Fox P, Newcomer R, Sands L, Lindquist K, Dane K, et al. Patient and caregiver characteristics and nursing home placement in patients with dementia. JAMA 2002;287(16):2090-2097. [CrossRef] [Medline]
  8. Coen RF, Swanwick GR, O'Boyle CA, Coakley D. Behaviour disturbance and other predictors of carer burden in Alzheimer's disease. Int J Geriatr Psychiatry 1997;12(3):331-336. [Medline]
  9. Clyburn LD, Stones MJ, Hadjistavropoulos T, Tuokko H. Predicting caregiver burden and depression in Alzheimer's disease. J Gerontol B Psychol Sci Soc Sci 2000;55(1):S2-13. [CrossRef] [Medline]
  10. Schulz R, O'Brien AT, Bookwala J, Fleissner K. Psychiatric and physical morbidity effects of dementia caregiving: prevalence, correlates, and causes. Gerontologist 1995;35(6):771-791. [CrossRef] [Medline]
  11. Recommendations from the NIH AD research summit 2015. National Institute on Aging. 2015.   URL: [accessed 2022-03-11]
  12. Recommendations from the NIH AD research summit 2018. National Institute on Aging. 2018.   URL: [accessed 2022-03-11]
  13. Werner NE, Stanislawski B, Marx KA, Watkins DC, Kobayashi M, Kales H, et al. Getting what they need when they need it. Identifying barriers to information needs of family caregivers to manage dementia-related behavioral symptoms. Appl Clin Inform 2017;8(1):191-205 [FREE Full text] [CrossRef] [Medline]
  14. Rathnayake S, Moyle W, Jones C, Calleja P. mHealth applications as an educational and supportive resource for family carers of people with dementia: an integrative review. Dementia (London) 2019;18(7-8):3091-3112. [CrossRef] [Medline]
  15. Lorca-Cabrera J, Grau C, Martí-Arques R, Raigal-Aran L, Falcó-Pegueroles A, Albacar-Riobóo N. Effectiveness of health web-based and mobile app-based interventions designed to improve informal caregiver's well-being and quality of life: a systematic review. Int J Med Inform 2020;134:104003. [CrossRef] [Medline]
  16. ISO 9241-210:2019: ergonomics of human-system interaction - Part 210: human-centred design for interactive systems. International Organization for Standardization. 2019.   URL: [accessed 2022-03-11]
  17. Boustani M, Unützer J, Leykum LK. Design, implement, and diffuse scalable and sustainable solutions for dementia care. J Am Geriatr Soc 2021;69(7):1755-1762. [CrossRef] [Medline]
  18. Rutkowski RA, Ponnala S, Younan L, Weiler DT, Bykovskyi AG, Werner NE. A process-based approach to exploring the information behavior of informal caregivers of people living with dementia. Int J Med Inform 2021;145:104341 [FREE Full text] [CrossRef] [Medline]
  19. McCurdie T, Taneva S, Casselman M, Yeung M, McDaniel C, Ho W, et al. mHealth consumer apps: the case for user-centered design. Biomed Instrum Technol 2012;Suppl:49-56. [CrossRef] [Medline]
  20. Lyon AR, Koerner K. User-centered design for psychosocial intervention development and implementation. Clin Psychol (New York) 2016;23(2):180-200 [FREE Full text] [CrossRef] [Medline]
  21. Tippey KG, Weinger MB. User-centered design means better patient care. Biomed Instrum Technol 2017;51(3):220-222. [CrossRef] [Medline]
  22. Choi SK, Yelton B, Ezeanya VK, Kannaley K, Friedman DB. Review of the content and quality of mobile applications about Alzheimer's disease and related dementias. J Appl Gerontol 2020;39(6):601-608 [FREE Full text] [CrossRef] [Medline]
  23. Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR Mhealth Uhealth 2015;3(1):e27 [FREE Full text] [CrossRef] [Medline]
  24. Terhorst Y, Philippi P, Sander LB, Schultchen D, Paganini S, Bardus M, et al. Validation of the mobile application rating scale (MARS). PLoS One 2020;15(11):e0241480 [FREE Full text] [CrossRef] [Medline]
  25. Salazar A, de Sola H, Failde I, Moral-Munoz JA. Measuring the quality of mobile apps for the management of pain: systematic search and evaluation using the mobile app rating scale. JMIR Mhealth Uhealth 2018;6(10):e10718 [FREE Full text] [CrossRef] [Medline]
  26. Peterson K, Hahn H, Lee AJ, Madison CA, Atri A. In the information age, do dementia caregivers get the information they need? Semi-structured interviews to determine informal caregivers' education needs, barriers, and preferences. BMC Geriatr 2016;16(1):164 [FREE Full text] [CrossRef] [Medline]
  27. Esandi N, Nolan M, Alfaro C, Canga-Armayor A. Keeping things in balance: family experiences of living with Alzheimer's disease. Gerontologist 2018;58(2):e56-e67. [CrossRef] [Medline]
  28. Haidet P, Morgan RO, O'Malley K, Moran BJ, Richards BF. A controlled trial of active versus passive learning strategies in a large group setting. Adv Health Sci Educ Theory Pract 2004;9(1):15-27. [CrossRef] [Medline]
  29. Miller JL, Paciga KA, Danby S, Beaudoin-Ryan L, Kaldor T. Looking beyond swiping and tapping: review of design and methodologies for researching young children’s use of digital technologies. Cyberpsychology 2017;11(3):15-27. [CrossRef]
  30. Zosh JM, Lytle SR, Golinkoff RM, Hirsh-Pasek K. Putting the education back in educational apps: how content and context interact to promote learning. In: Barr R, Linebarger DN, editors. Media exposure during infancy and early childhood: the effects of content and context on learning and development. Cham, Switzerland: Springer; 2017:259-282.
  31. Ponnala S, Block L, Lingg AJ, Kind AJ, Werner NE. Conceptualizing caregiving activities for persons with dementia (PwD) through a patient work lens. Appl Ergon 2020;85:103070 [FREE Full text] [CrossRef] [Medline]
  32. Neubert L, Gottschalk S, König HH, Brettschneider C. Dementia care-giving from a family network perspective in Germany: a typology. Health Soc Care Community 2022;30(2):579-591. [CrossRef] [Medline]
  33. Friedman EM, Kennedy DP. Typologies of dementia caregiver support networks: a pilot study. Gerontologist 2021;61(8):1221-1230 [FREE Full text] [CrossRef] [Medline]
  34. Chiu T, Marziali E, Colantonio A, Carswell A, Gruneir M, Tang M, et al. Internet-based caregiver support for Chinese Canadians taking care of a family member with Alzheimer disease and related dementia. Can J Aging 2009;28(4):323-336. [CrossRef] [Medline]
  35. Glueckauf RL, Ketterson TU, Loomis JS, Dages P. Online support and education for dementia caregivers: overview, utilization, and initial program evaluation. Telemed J E Health 2004;10(2):223-232. [CrossRef] [Medline]
  36. Bateman DR, Srinivas B, Emmett TW, Schleyer TK, Holden RJ, Hendrie HC, et al. Categorizing health outcomes and efficacy of mHealth apps for persons with cognitive impairment: a systematic review. J Med Internet Res 2017;19(8):e301 [FREE Full text] [CrossRef] [Medline]
  37. Godwin KM, Mills WL, Anderson JA, Kunik ME. Technology-driven interventions for caregivers of persons with dementia: a systematic review. Am J Alzheimers Dis Other Demen 2013;28(3):216-222 [FREE Full text] [CrossRef] [Medline]
  38. Martínez-Alcalá CI, Pliego-Pastrana P, Rosales-Lagarde A, Lopez-Noguerola JS, Molina-Trinidad EM. Information and communication technologies in the care of the elderly: systematic review of applications aimed at patients with dementia and caregivers. JMIR Rehabil Assist Technol 2016;3(1):e6 [FREE Full text] [CrossRef] [Medline]
  39. Marziali E, Garcia LJ. Dementia caregivers' responses to 2 internet-based intervention programs. Am J Alzheimers Dis Other Demen 2011;26(1):36-43 [FREE Full text] [CrossRef] [Medline]
  40. Lewis ML, Hobday JV, Hepburn KW. Internet-based program for dementia caregivers. Am J Alzheimers Dis Other Demen 2010;25(8):674-679 [FREE Full text] [CrossRef] [Medline]
  41. van der Roest HG, Meiland FJ, Jonker C, Dröes RM. User evaluation of the DEMentia-specific Digital Interactive Social Chart (DEM-DISC). A pilot study among informal carers on its impact, user friendliness, and usefulness. Aging Ment Health 2010;14(4):461-470. [CrossRef] [Medline]
  42. Hopwood J, Walker N, McDonagh L, Rait G, Walters K, Iliffe S, et al. Internet-based interventions aimed at supporting family caregivers of people with dementia: systematic review. J Med Internet Res 2018;20(6):e216 [FREE Full text] [CrossRef] [Medline]
  43. Etxeberria I, Salaberria K, Gorostiaga A. Online support for family caregivers of people with dementia: a systematic review and meta-analysis of RCTs and quasi-experimental studies. Aging Ment Health 2021;25(7):1165-1180. [CrossRef] [Medline]
  44. Deeken F, Rezo A, Hinz M, Discher R, Rapp MA. Evaluation of technology-based interventions for informal caregivers of patients with dementia-a meta-analysis of randomized controlled trials. Am J Geriatr Psychiatry 2019;27(4):426-445. [CrossRef] [Medline]
  45. Guo Y, Yang F, Hu F, Li W, Ruggiano N, Lee HY. Existing mobile phone apps for self-care management of people with Alzheimer disease and related dementias: systematic analysis. JMIR Aging 2020;3(1):e15290 [FREE Full text] [CrossRef] [Medline]
  46. Norman DA. Emotional design: why we love (or hate) everyday things. New York, NY: Basic Civitas Books; 2004.
  47. Kujala S, Miron-Shatz T. Emotions, experiences and usability in real-life mobile phone use. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2013 Presented at: CHI '13; April 27-May 2, 2013; Paris, France p. 1061-1070. [CrossRef]
  48. Wozney L, Freitas de Souza LM, Kervin E, Queluz F, McGrath PJ, Keefe J. Commercially available mobile apps for caregivers of people with Alzheimer disease or other related dementias: systematic search. JMIR Aging 2018;1(2):e12274 [FREE Full text] [CrossRef] [Medline]
  49. Edney S, Ryan JC, Olds T, Monroe C, Fraysse F, Vandelanotte C, et al. User engagement and attrition in an app-based physical activity intervention: secondary analysis of a randomized controlled trial. J Med Internet Res 2019;21(11):e14645 [FREE Full text] [CrossRef] [Medline]
  50. Zhao Z, Arya A, Whitehead A, Chan G, Etemad SA. Keeping users engaged through feature updates: a long-term study of using wearable-based exergames. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017 Presented at: CHI '17; May 6-11, 2017; Denver, CO p. 1053-1064. [CrossRef]
  51. Waligora KJ, Bahouth MN, Han HR. The self-care needs and behaviors of dementia informal caregivers: a systematic review. Gerontologist 2019;59(5):e565-e583. [CrossRef] [Medline]
  52. Ponnala S, Werner NE. Exploring informal caregiver workload using a macroergonomics lens on multiple resources. Proc Hum Factors Ergon Soc Annu Meet 2021;64(1):856-860. [CrossRef]
  53. Dennison L, Morrison L, Conway G, Yardley L. Opportunities and challenges for smartphone applications in supporting health behavior change: qualitative study. J Med Internet Res 2013;15(4):e86 [FREE Full text] [CrossRef] [Medline]
  54. Stawarz K, Preist C, Coyle D. Use of smartphone apps, social media, and web-based resources to support mental health and well-being: online survey. JMIR Ment Health 2019;6(7):e12546 [FREE Full text] [CrossRef] [Medline]
  55. Song T, Deng N, Cui T, Qian S, Liu F, Guan Y, et al. Measuring success of patients' continuous use of mobile health services for self-management of chronic conditions: model development and validation. J Med Internet Res 2021;23(7):e26670 [FREE Full text] [CrossRef] [Medline]
  56. Mendez KJ, Budhathoki C, Labrique AB, Sadak T, Tanner EK, Han HR. Factors associated with intention to adopt mHealth apps among dementia caregivers with a chronic condition: cross-sectional, correlational study. JMIR Mhealth Uhealth 2021;9(8):e27926 [FREE Full text] [CrossRef] [Medline]
  57. Mitzner TL, Savla J, Boot WR, Sharit J, Charness N, Czaja SJ, et al. Technology adoption by older adults: findings from the PRISM trial. Gerontologist 2019;59(1):34-44 [FREE Full text] [CrossRef] [Medline]
  58. Harris MT, Rogers WA. Understanding acceptance of healthcare technology by older adults: implications for adoption. Innov Aging 2019;3(Suppl 1):S929. [CrossRef]
  59. Bratteteig T, Wagner I. Disentangling participation: power and decision-making in participatory design. Cham, Switzerland: Springer; 2014.

ADRD: Alzheimer disease and related dementias
MARS: Mobile App Rating Scale
UCD: user-centered design

Edited by L Buis; submitted 28.09.21; peer-reviewed by K Yin, H Mehdizadeh, SM Ayyoubzadeh, S Rostam Niakan Kalhori; comments to author 30.12.21; revised version received 16.01.22; accepted 16.02.22; published 29.03.22


©Nicole E Werner, Janetta C Brown, Priya Loganathar, Richard J Holden. Originally published in JMIR mHealth and uHealth, 29.03.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.