Published in Vol 13 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/69637.
Cognitive Training Mobile Apps for Older Adults With Cognitive Impairment: App Store Search and Quality Evaluation


1School of Medicine, Huzhou Key Laboratory of Precise Prevention and Control of Major Chronic Diseases, Huzhou University, Erhuan East Road 759, Longquan Street, Wuxing District, Huzhou, China

2Center for Whole-Person Research, AdventHealth Whole-Person Research, Orlando, FL, United States

3Department of General Medicine, Community Health Service Center of Renhuangshan, Huzhou, China

Corresponding Author:

Lina Wang, PhD


Background: As the population ages, cognitive impairment is becoming increasingly prevalent. Mobile apps offer a scalable platform for delivering cognitive training interventions. However, their variable quality and lack of rigorous evaluation underscore the need for further research to guide optimization and ensure their effective application in improving cognitive health outcomes.

Objective: This study aimed to evaluate the characteristics and quality of cognitive training apps designed for older adults with cognitive impairment.

Methods: A comprehensive search of the Google Play Store and Apple App Store was conducted using predefined terms and inclusion criteria, with the search completed on July 13, 2024. Eligible apps were assessed for quality by two independent reviewers using the Mobile App Rating Scale (MARS), with interrater reliability evaluated via quadratic weighted kappa (κ). The Kruskal-Wallis test was used to analyze differences in MARS scores across subgroups for each dimension, and Spearman correlation was applied to examine the relationship between user star ratings and overall mean scores.

Results: A total of 4822 potential apps were identified, of which 24 met eligibility criteria. Among these, 13 (54%) were available on both platforms, 5 (21%) were exclusive to the Google Play Store, and 6 (25%) to the Apple App Store. Notably, 5 (21%) apps offered user-tailored training modules and 8 (33%) involved medical professionals in development. Interrater agreement was high (κ=0.88; 95% CI 0.80‐0.95). Global quality scores based on the MARS dimensions ranged from 2.38 to 4.13, with a mean (SD) of 3.57 (0.43) across the 24 apps, indicating generally acceptable quality. The functionality dimension received the highest score, while engagement scored the lowest. Brain HQ and Peak scored above 4 and were rated as good, whereas Memory Trainer, Cognitive Skill Training, and Ginkgo Memory & Brain Training scored below 3 and were rated as insufficient. Spearman correlation showed no significant association between mean score and app star rating.

Conclusions: Current cognitive training apps for older adults with cognitive impairment demonstrate moderate quality with considerable variability. Improvements are needed in the engagement and information dimensions. Future development should prioritize enhancing user engagement, incorporating personalized features, and involving health care professionals and experts to align with evidence-based guidelines.

JMIR Mhealth Uhealth 2025;13:e69637

doi:10.2196/69637


The global aging population presents a significant public health challenge due to the increasing prevalence of dementia. Aging is the primary risk factor for dementia, which affects an estimated 50 million individuals worldwide, with projections reaching 152 million by 2050 [1]. Age-related processes, including neurofibrillary tangles, amyloid-beta plaque deposition, and cerebrovascular changes, contribute to the pathogenesis of Alzheimer disease and other dementias [2]. Therefore, efforts to address cognitive decline in older adults are critical to mitigating the growing global burden of dementia.

Cognitive interventions have emerged as a promising strategy for maintaining cognitive function in older adults, particularly those with cognitive impairment. The growth of digital cognitive training technology, a key application of mobile health (mHealth) apps, has further facilitated the transfer of cognitive training from traditional clinical or laboratory settings to everyday life. Cognitive training has shown potential in mitigating cognitive decline by engaging users in specific exercises targeting various cognitive domains, such as memory, attention, processing speed, and executive function, to stimulate and enhance the connectivity and efficiency of brain neural networks [3]. A systematic review by Gates et al [4] found that cognitive training resulted in significant improvements in memory and attention in older adults with mild cognitive impairment, with long-term benefits persisting beyond the intervention period. Lampit et al [5] demonstrated that cognitive training in cognitively healthy older adults, particularly computerized interventions, resulted in moderate improvements in global cognition and specific domains such as processing speed and memory. Notably, a systematic search of the scientific literature was conducted in the Web of Science using a topic search with the terms “cognitive training” and “cognitive impairment.” The results show a steady increase in publications over the past 2 decades, as illustrated in Figure 1. Annual output rose from 58 papers in 2000 to over 1000 per year after 2020, peaking at 1091 in 2021. Despite slight declines in 2022 and 2023, the cumulative total reached 9978 by 2023, reflecting sustained and growing academic interest in cognitive training for cognitive impairment.

Figure 1. Trends in annual and cumulative number of publications on cognitive training (2000‐2023).

In recent years, digital interventions have attracted growing attention as promising strategies to support cognitive health among older adults with cognitive impairment. A range of modalities, including mHealth apps, digital art therapy, and digital storytelling, have been explored with the aim of enhancing cognitive engagement and promoting rehabilitation [6-9]. Systematic and narrative reviews underscore both the potential benefits of these approaches and notable challenges related to accessibility, user engagement, and clinical validation [6,10]. Among these interventions, mHealth technologies have demonstrated particular promise by improving access to care, enabling real-time monitoring, and facilitating personalized health interventions [11,12]. Cognitive training apps delivered through mobile platforms offer greater accessibility, convenience, and cost-effectiveness compared to other cognitive training methods such as paper-and-pencil tasks, computer-based programs, or virtual reality–based interventions [13-15]. A growing number of cognitive training apps are now available, providing diverse exercises aimed at enhancing memory, attention, and executive function. Studies have shown that mHealth-based cognitive training apps can maintain or improve cognitive performance, especially when incorporating features such as adaptive difficulty levels, gamification, and feedback mechanisms to promote sustained engagement [16,17]. However, despite these advances, concerns remain regarding the methodological rigor, scientific validity, and evidence base underpinning many commercially available cognitive training apps. Recent reviews have highlighted substantial variability in app quality and the limited methodological rigor and standardization in existing evaluations of cognitive training apps [18,19]. 
Furthermore, it remains unclear to what extent current cognitive training apps have integrated recent innovations, such as individualized digital therapies and user-centered design principles. Therefore, a systematic and comprehensive quality evaluation of cognitive training apps is critically needed to inform future development and ensure these digital interventions truly meet the complex needs of specific populations.

In the field of cognitive impairment, quality assessments of caregiver-related apps have been conducted [20]. Meanwhile, recent reviews have evaluated the content and quality of cognitive training apps available to the general population [18,19]; however, these evaluations often did not specifically target individuals with cognitive impairment. In addition, a recent scoping review by Silva et al [21] emphasized the substantial heterogeneity in intervention approaches among existing cognitive training apps and the lack of a standardized framework for app quality assessment. To optimize mobile apps, effectively meet user needs, and promote cognitive health, it is essential to establish a reliable metric for assessing app quality. Various tools have been developed to evaluate the quality of mHealth apps, such as the User Version of the Mobile App Rating Scale (uMARS) developed by Stoyanov et al [22], the Evaluation Tool for Mobile and Web-Based eHealth Interventions (ENLIGHT) developed by Baumel et al [23], and the System Usability Scale (SUS) developed by Brooke [24]. However, each of these instruments presents specific limitations. uMARS is primarily designed to capture feedback from general users, ENLIGHT focuses on behavioral change and lacks emphasis on broader usability and design elements, while SUS is limited to evaluating system usability alone. In contrast, the Mobile App Rating Scale (MARS), developed by Stoyanov and colleagues, offers a simple, objective, and reliable tool for evaluating the overall quality of mobile apps. It assesses five core dimensions: engagement, functionality, aesthetics, information quality, and subjective quality, with clearly defined descriptors for each rating anchor to ensure consistency and accuracy [25].
MARS has been widely applied in assessing various health-related apps, including those targeting genitourinary tumors, inflammatory bowel disease, psoriasis, and heart failure, with demonstrated reliability and applicability in related research contexts [26-29].

Given these gaps in the literature, the aim of this study was to identify and evaluate publicly available cognitive training apps designed for older adults with cognitive impairment. Specifically, this study aims to (1) identify and summarize existing cognitive training apps and their core features and (2) assess the quality of these apps using the MARS. It was hypothesized that existing cognitive training apps would exhibit variable quality and might not fully meet the specific needs of older adults with cognitive impairment. The findings from this study are expected to inform researchers, developers, and clinicians, contributing to the development of more effective, user-centered digital interventions for cognitive health promotion in this vulnerable population.


Study Design

Although the search strategy differed slightly from traditional methods, the study adhered to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist, presented in Checklist 1 [30], and was consistent with recent reviews of mobile apps that applied similar methods [26,31]. A neuropsychologist from a large medical center and a certified software engineer oversaw and reviewed the research process and findings.

App Search Strategy

A comprehensive search of the Google Play Store and Apple App Store was conducted up to July 13, 2024, using a 2-step approach developed in collaboration with a librarian. In step 1, both app stores were searched with 67 relevant key terms (eg, “cognitive impairment,” “cognitive decline,” “brain,” “cognitive functions,” “cognitive therapy,” “cognitive training”). In step 2, string keywords were applied, combining multiple forms of cognitive impairment (eg, “cognitive dysfunction,” “cognitive declines,” “mental deterioration”) with terms such as “cognitive therapy,” “cognitive training,” and “brain training.” This approach accounted for differences in search algorithms: Apple’s is optimized for single keywords and Google Play’s for string keywords. Details are presented in Multimedia Appendix 1. Relevant apps were identified based on the following inclusion criteria: (1) English language, (2) relevance to the subject matter, (3) free to download, (4) available for individual use, and (5) normal functionality.
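The step 2 string-keyword strategy amounts to taking the Cartesian product of the two term lists. A minimal Python sketch, using only an illustrative subset of the vocabulary (the study's full term lists appear in Multimedia Appendix 1):

```python
from itertools import product

# Illustrative subset of the search vocabulary; the study used 67 key terms
# plus additional string-keyword variants (Multimedia Appendix 1).
impairment_terms = [
    "cognitive impairment",
    "cognitive dysfunction",
    "cognitive decline",
    "mental deterioration",
]
training_terms = [
    "cognitive therapy",
    "cognitive training",
    "brain training",
]

# Step 2: pair every impairment term with every training term to form
# the string keywords submitted to the Google Play search.
search_strings = [f"{imp} {trn}" for imp, trn in product(impairment_terms, training_terms)]

for s in search_strings:
    print(s)
```

With 4 impairment terms and 3 training terms, this yields 12 string keywords; the full lists scale the same way.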

App Content Analysis

The primary content of the 24 representative apps was assessed by the two reviewers (LW and JP). These apps were required to focus primarily on cognitive training, targeting the enhancement and improvement of memory, attention, language, comprehension, and other cognitive functions.

App Quality Evaluation

The quality of the included apps was evaluated using the MARS, a widely used tool developed by Stoyanov et al [25] in 2015 for evaluating mHealth apps. MARS has demonstrated high internal consistency (Cronbach α coefficient=0.90) and interrater reliability [25]. The scale consists of 23 items covering objective quality dimensions, including engagement, functionality, aesthetics and information, and subjective dimension. Specifically, engagement assesses whether the app is fun, interesting, customizable, interactive, and targeted; functionality assesses whether the app is easy to learn, easy to use, and logical; aesthetics assesses the graphic design, overall visual appeal, color scheme, and stylistic consistency of the app; information assesses whether the app contains high-quality information from credible sources; and the subjective quality dimension reflects user satisfaction, application adoption, and continuity of use.

Each item was scored on a 1-5 scale, with a “not applicable” option available. Following the MARS guidelines, the scoring procedure involved: (1) the average score for each dimension of a single app was calculated by summing the scores of all items under that dimension and dividing by the number of items in the dimension; (2) the average score of each dimension across all apps was calculated by summing the dimension’s average scores for all apps and dividing by the number of apps; (3) the overall quality score for a single app was determined by averaging the average scores of the four objective dimensions (engagement, functionality, aesthetics, and information), excluding the subjective quality dimension to maintain an unbiased evaluation; and (4) the average overall quality score across all apps was calculated by summing the overall quality scores of all apps and dividing by the total number of apps. An average MARS score of ≥3 points (out of 5) is considered to indicate “acceptable” quality [25].
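As a concrete illustration, steps 1, 3, and 4 of this procedure can be expressed in a few lines of Python (step 2 is analogous). The item scores below are hypothetical, not taken from any rated app:

```python
OBJECTIVE_DIMS = ("engagement", "functionality", "aesthetics", "information")

def dimension_mean(item_scores):
    """Step 1: average a dimension's 1-5 item scores, skipping 'not applicable' (None)."""
    valid = [s for s in item_scores if s is not None]
    return sum(valid) / len(valid)

def overall_quality(app):
    """Step 3: mean of the four objective dimension averages (subjective quality excluded)."""
    return sum(dimension_mean(app[d]) for d in OBJECTIVE_DIMS) / len(OBJECTIVE_DIMS)

def mean_overall_quality(apps):
    """Step 4: average overall quality score across all rated apps."""
    return sum(overall_quality(a) for a in apps) / len(apps)

# One hypothetical app; MARS uses 5/4/3/7 items for the four objective dimensions.
app = {
    "engagement":    [3, 4, 3, 2, 3],
    "functionality": [5, 4, 4, 4],
    "aesthetics":    [4, 3, 4],
    "information":   [3, 3, None, 4, 3, 3, 4],  # one item rated 'not applicable'
}
score = overall_quality(app)  # 3.5625, above the 'acceptable' threshold of 3
```

Note that "not applicable" items are excluded from the denominator rather than counted as zero, which is why `dimension_mean` filters `None` before averaging.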

Data Collection

Two independent reviewers (LW and JP), both with medical backgrounds and specializing in cognitive impairment in older adults, conducted the app evaluations. Prior to the assessment, both reviewers received formal training in the use of the MARS to ensure consistency and accuracy. Each app was used for at least 10 minutes during evaluation to allow sufficient interaction with its functionalities. Apps were randomly assigned and independently evaluated without communication between reviewers to minimize potential bias. Discrepancies greater than one point on any subscale were resolved through discussion; if consensus could not be reached, a third reviewer (CD) was consulted to adjudicate the final score. Interrater reliability between the two reviewers was assessed using the intraclass correlation coefficient (ICC), and the agreement was good (ICC 0.965 [95% CI 0.92‐0.985]).
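Interrater agreement statistics such as the quadratic weighted κ reported in the Results can be computed directly from the two reviewers' paired ordinal ratings. A self-contained sketch (the rating vectors below are hypothetical):

```python
def quadratic_weighted_kappa(ratings1, ratings2, categories=(1, 2, 3, 4, 5)):
    """Quadratic weighted kappa for two raters scoring the same items on an
    ordinal scale; 1.0 = perfect agreement, 0.0 = chance-level agreement."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(ratings1)
    # Observed joint distribution of the two raters' scores.
    observed = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings1, ratings2):
        observed[idx[a]][idx[b]] += 1.0 / n
    # Marginal distributions, used to build the chance-expected distribution.
    p1 = [sum(observed[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(observed[i][j] for i in range(k)) for j in range(k)]
    # Quadratic disagreement weights: w_ij = ((i - j) / (k - 1))^2.
    num = sum(((i - j) / (k - 1)) ** 2 * observed[i][j]
              for i in range(k) for j in range(k))
    den = sum(((i - j) / (k - 1)) ** 2 * p1[i] * p2[j]
              for i in range(k) for j in range(k))
    return 1.0 - num / den

# Hypothetical MARS item scores from two reviewers rating the same app.
reviewer1 = [3, 4, 4, 2, 5, 3, 4]
reviewer2 = [3, 4, 3, 2, 5, 4, 4]
kappa = quadratic_weighted_kappa(reviewer1, reviewer2)
```

Because the weights grow with the squared distance between categories, a disagreement of 3 versus 4 is penalized far less than 1 versus 5, which suits ordinal rating scales like MARS.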

Data Analysis

The two reviewers (LW and JP) reconfirmed the data and entered the scores into a cloud-based Microsoft Excel spreadsheet, after which descriptive statistics were computed for each rating. The data analysis procedure consisted of the following steps: (1) calculating the mean score and variance of each quality dimension for each app; (2) summing the mean scores for the engagement, functionality, aesthetics, and information dimensions and dividing by the number of dimensions (4) to obtain each app’s final MARS score and its variance; (3) calculating the overall mean score and variance of each dimension across all apps; and (4) summing the overall quality scores of all apps and dividing by the total number of apps to obtain the overall mean score across all apps, with its variance. In addition, the apps were categorized into three groups based on the overall mean score: less than 3 (Group 1), 3-4 (Group 2), and greater than 4 (Group 3). The Kruskal-Wallis test was used to compare median scores across the three groups for each of the four MARS dimensions (engagement, functionality, aesthetics, and information) to identify significant variations among the subgroups. The Spearman correlation coefficient was used to assess the correlation between the star rating score and the overall mean score.
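The Spearman coefficient used in the final step is simply the Pearson correlation computed on the two variables' ranks, with ties assigned their average rank. A minimal pure-Python sketch, illustrated with a few (overall score, star rating) pairs drawn from Table 2:

```python
def average_ranks(values):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with values[order[i]].
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of rank positions i+1 .. j+1
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Overall MARS scores and star ratings for six apps from Table 2
# (Mindmate, Cognitive Skill Training, Mindpal, Brain HQ, Ginkgo, Peak).
mars_scores = [3.79, 2.96, 3.84, 4.13, 2.38, 4.09]
star_ratings = [5.0, 4.5, 4.5, 3.7, 4.6, 4.3]
rho = spearman(mars_scores, star_ratings)
```

In practice such tests are usually run with a statistics package (eg, `scipy.stats.spearmanr`, which also returns the P value); the sketch above only shows the rank-correlation mechanics.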

Ethical Considerations

This study was exempt from institutional review board approval, as no risk to human participants was involved, and was registered with PROSPERO (International Prospective Register of Systematic Reviews; CRD42024602240). The study comprised a systematic search and quality assessment of publicly available mobile apps, conducted using the validated MARS. It did not include the recruitment of human participants, collection of personal or health-related information, or any form of intervention involving individuals. All information was obtained from open-access app store listings and publicly available app descriptions. According to established ethical guidelines, including those outlined by institutional review boards (IRBs) in China and internationally (the Declaration of Helsinki and the Council for International Organizations of Medical Sciences [CIOMS] guidelines), research that does not involve human participants or identifiable personal data is generally exempt from ethical review. Our study aligns with these criteria. Consistent with this, numerous high-impact, peer-reviewed app evaluation studies have reported similar exemptions from ethical review [32-36]. Based on this precedent and our institution’s policy, no ethical approval was required for this study.


App Selection

A total of 4822 apps were retrieved from the Google Play Store and Apple App Store (1004 from the Google Play Store and 3818 from the Apple App Store). After preliminary screening, during which duplicates and obviously irrelevant apps were removed, 146 apps were assessed for eligibility. Following a detailed review, 31 apps were included, but 7 were found to be nonfunctioning, leaving 24 apps that met the inclusion criteria and were included in the evaluation, as shown in Figure 2.

Figure 2. The app selection process. MARS: Mobile App Rating Scale.

Characteristics and Purposes of Included Apps

Of these 24 apps, 13 (54%) are available on both platforms, 5 (21%) are exclusive to the Google Play Store, and 6 (25%) are exclusive to the Apple App Store. Table 1 outlines the content characteristics of each app. The primary focus of the included apps is cognitive training aimed at improving memory, attention, language comprehension, and other cognitive functions. In total, 5 apps (21%) allow users to select training modules tailored to their individual needs, and 8 apps (33%) indicate the involvement of medical and related professionals in their development process, as stated in the app profiles or within the apps.

Table 1. Description of the apps included in the study.
| App name | Platform | Developer | Age group (years) | Involvement of health care | Focus (targeted areas of the app) |
| --- | --- | --- | --- | --- | --- |
| Memory Trainer | iOS | Svyatoslav Ivashchenko | 4 years or older | Unknown | Memory |
| Brave | iOS | University of Hong Kong | 17 years or older | Unknown | MCIa education and related physical training |
| Dementia Learning Game | iOS | Bom Soft | 4 years or older | Unknown | Concentration, observation, and memory |
| Mindmate | iOS | Mindmate Ltd | 12 years or older | Unknown | Problem-solving, attention, and lifestyle support |
| Cognitive Skill Training | iOS | Kazuaki Matayoshi | 4 years or older | Unknown | Arithmetic, memory, and attention |
| Reminding | iOS | Sync International Pty Ltd | 12 years or older | Unknown | Game-based cognitive training |
| Mindpal | Android | Elektron Labs Inc | All ages | Unknown | Language, memory, attention, and problem-solving |
| Elevate | Android | Elevate Labs | All ages | Yes | Attention, language, memory, processing speed, and math |
| Neurobics | Android | Peoresnada.com | All ages | Unknown | Mental, attention, calculation, memory, and analysis |
| Train Your Brain Memory Games | Android | Senior Games | All ages | Yes | Memory-focused cognitive stimulation |
| Abrain | Android | Abrain Labs | All ages | Unknown | Memory, attention, reaction, and math |
| Cognishape | iOS and Android | Cognishape | 12 years or older | Yes | Attention, memory, creativity, and problem-solving |
| Q4 Active - Brain Health | iOS and Android | Genius Gyms LLC | 4 years or older | Yes | Cognitive enhancement and decline prevention |
| Brain HQ | iOS and Android | Posit Science | 4 years or older | Yes | Cognition, function, and brain plasticity |
| Alzlife | iOS and Android | Alzheimer’s Light | 17 years or older | Unknown | Sensory stimulation and cognitive games |
| Ginkgo Memory & Brain Training | iOS and Android | Ginkgo Academy | 4 years or older | Unknown | Cognition and memory retention |
| Recover Brain | iOS and Android | Imagiration LLC | 4 years or older | Yes | Memory, language, comprehension, and executive function |
| Lumosity | iOS and Android | Lumos Labs, Inc | 4 years or older | Unknown | Attention, flexibility, and problem-solving |
| Constant Therapy: Brain Rehab | iOS and Android | Constant Therapy, Inc | 12 years or older | Yes | Language, memory, attention, reading, math, and comprehension |
| Memorie | iOS and Android | Neeuro Pte Ltd | 4 years or older | Unknown | Attention, memory, decision-making, and spatial perception |
| Focus - Train Your Brain | iOS and Android | Tellmewow | 4 years or older | Unknown | Memory, coordination, and attention |
| Impulse | iOS and Android | Gmrd Apps Limited | 4 years or older | Unknown | Memory, attention, and concentration |
| Peak | iOS and Android | Synaptic Labs | 4 years or older | Yes | Brain assessment, memory, attention, and language |
| Neuronation | iOS and Android | Synaptikon Gmbh | 4 years or older | Unknown | Memory and concentration |

aMCI: mild cognitive impairment.

MARS Evaluation of Included Apps

The 24 included apps were evaluated by two reviewers with substantial interrater agreement (quadratic weighted κ=0.88, 95% CI 0.80‐0.95). Based on the MARS evaluation, app quality was assessed across four dimensions: engagement, functionality, aesthetics, and information, as shown in Table 2. The overall mean score was 3.57 (SD 0.43), indicating that the apps were generally acceptable. Two apps, Brain HQ and Peak, received scores above 4 and were rated as good, while 3 apps, Memory Trainer, Cognitive Skill Training, and Ginkgo Memory & Brain Training, scored below 3 and were considered insufficient. Among the 4 dimensions, functionality had the highest mean score, while engagement scored the lowest. In addition, the Spearman correlation showed no significant relationship between the overall mean score and star rating (R=−0.331, P=.18).

Table 2. The mean (SD) scores of each Mobile App Rating Scale dimension for included apps.
| App name | Engagement | Functionality | Aesthetics | Information | Overall score | Star rating score |
| --- | --- | --- | --- | --- | --- | --- |
| Memory Trainer | 1.4 (0.49) | 5 (0) | 2 (0) | 1.75 (0.43) | 2.54 (1.44) | a |
| Brave | 3 (1.1) | 4.75 (0.43) | 4 (0.82) | 4.2 (0.4) | 3.99 (0.63) | |
| Dementia Learning Game | 3.2 (1.17) | 4.25 (0.43) | 3.67 (0.47) | 3.5 (0.5) | 3.65 (0.38) | |
| Mindmate | 3.2 (0.98) | 4.5 (0.5) | 3.67 (0.47) | 3.8 (0.4) | 3.79 (0.47) | 5 |
| Cognitive Skill Training | 2.6 (1.02) | 3.5 (0.5) | 3 (0.82) | 2.75 (0.43) | 2.96 (0.34) | 4.5 |
| Reminding | 2.6 (0.49) | 4 (0) | 3.33 (0.47) | 3.5 (0.5) | 3.36 (0.5) | |
| Mindpal | 3.6 (0.49) | 4.5 (0.5) | 3.67 (0.47) | 3.6 (0.49) | 3.84 (0.38) | 4.5 |
| Elevate | 3.2 (0.4) | 4.5 (0.5) | 3.33 (0.47) | 3.33 (0.47) | 3.59 (0.53) | 4.6 |
| Neurobics | 3.6 (0.49) | 4.5 (0.5) | 3.33 (0.47) | 3.6 (0.49) | 3.76 (0.44) | 4 |
| Train Your Brain Memory Games | 3 (0.89) | 4.25 (0.83) | 4 (0) | 3.25 (0.43) | 3.63 (0.52) | 4.5 |
| Abrain | 3.4 (0.49) | 4.25 (0.43) | 3.33 (0.47) | 3.4 (0.49) | 3.60 (0.38) | 4.5 |
| Cognishape | 2.4 (0.49) | 4.25 (0.43) | 3 (0.82) | 3.25 (0.43) | 3.23 (0.67) | 4.1 |
| Q4 Active-Brain Health | 2.6 (0.49) | 4 (0.71) | 3.33 (0.47) | 3 (0) | 3.23 (0.51) | |
| Brain HQ | 4 (0.63) | 4.5 (0.5) | 4 (0) | 4 (0.58) | 4.13 (0.22) | 3.7 |
| Alzlife | 3.8 (1.17) | 4 (0) | 3.67 (0.47) | 3.67 (0.47) | 3.78 (0.14) | 4.2 |
| Ginkgo Memory & Brain Training | 2 (0) | 2.25 (0.83) | 2.67 (0.47) | 2.6 (0.49) | 2.38 (0.27) | 4.6 |
| Recover Brain | 3.6 (0.49) | 4.5 (0.5) | 4 (0) | 3.8 (0.4) | 3.98 (0.33) | 4.2 |
| Lumosity | 3.2 (0.98) | 4.25 (0.43) | 4 (0) | 3.43 (0.49) | 3.72 (0.42) | 4.2 |
| Constant Therapy: Brain Rehab | 3.4 (0.49) | 4.5 (0.5) | 4 (0) | 3.6 (0.49) | 3.88 (0.42) | 4.15 |
| Memorie | 3 (1.26) | 4.25 (0.43) | 3.33 (0.47) | 3.6 (0.49) | 3.55 (0.46) | |
| Focus - Train Your Brain | 3.6 (0.49) | 4.5 (0.5) | 3.67 (0.47) | 3.4 (0.49) | 3.79 (0.42) | 4.8 |
| Impulse | 3.4 (1.36) | 4.25 (0.43) | 4 (0) | 3.4 (0.8) | 3.76 (0.37) | 4.5 |
| Peak | 4 (0.89) | 4.75 (0.43) | 4 (0) | 3.6 (0.49) | 4.09 (0.42) | 4.3 |
| Neuronation | 2.6 (0.49) | 4.5 (0.5) | 3.33 (0.47) | 3.2 (0.4) | 3.41 (0.69) | 4.55 |
| Total (SD) | 3.1 (0.61) | 4.27 (0.51) | 3.51 (0.49) | 3.38 (0.48) | 3.57 (0.43) | |

aNot applicable.

The included apps were categorized into three groups based on their overall mean scores: less than 3 (Group 1), between 3 and 4 (Group 2), and greater than 4 (Group 3). Detailed descriptions of these groupings are provided in Multimedia Appendix 2. The H value refers to the Kruskal–Wallis test statistic that indicates the degree of difference in rank sums among groups; higher values suggest greater group differences. Figure 3 shows the differences in MARS dimension scores between groups. No significant difference was observed in the functionality dimension (P=.21); however, differences in all other dimensions were statistically significant, with the engagement dimension showing the largest difference (P=.01). Pairwise comparisons revealed the following: (1) engagement: significant differences were observed between all groups, with score differences of 1.18 (Group 1 vs Group 2, P=.02), 0.82 (Group 2 vs Group 3, P=.04), and 2.00 (Group 1 vs Group 3, P=.001); (2) aesthetics: significant differences were found between Group 1 versus Group 2 (score difference=1.05, P=.01) and Group 1 versus Group 3 (score difference=1.44, P=.003), but not between Group 2 versus Group 3; (3) information: significant differences were noted between Group 1 versus Group 2 (score difference=1.13, P=.01) and Group 1 versus Group 3 (score difference=1.43, P=.01), with no difference between Group 2 versus Group 3.

Figure 3. Group differences in MARS dimension scores. MARS: Mobile App Rating Scale.

Bivariate Spearman correlation was calculated to test the relationship between the overall mean score and the star rating score. The results show that the overall mean score and the star rating score are not statistically significantly correlated (R=−0.331; P=.18).


Principal Findings

Compared to previous studies that reviewed cognitive training apps primarily for the general population [18,19], this study is the first to systematically evaluate apps specifically designed for individuals with cognitive impairment, a population with distinct cognitive and usability needs. By applying the MARS to assess 24 apps across multiple quality dimensions, this study provides a comprehensive, standardized evaluation of app strengths and limitations. Overall, the apps demonstrated an acceptable level of quality. The functionality dimension received the highest scores, indicating strong usability and practicality, whereas the engagement dimension scored the lowest, reflecting limited user interactivity. Significant variability was observed across apps in engagement, aesthetics, and information quality. Furthermore, the findings highlight critical deficiencies in user-centered design and the limited involvement of health care professionals in app development. These insights identify key areas for improvement and offer important guidance for developers and health care providers seeking to optimize cognitive health interventions for individuals with cognitive impairment.

The quality of the 24 cognitive training apps was assessed using the MARS, with an average score of 3.57 (SD 0.43). Most apps (79.2%) were rated as acceptable, while only a small proportion (8.3%) received a good rating. Among the 4 MARS dimensions, functionality scored the highest, reflecting intuitive navigation, reliable performance, and device integration. In contrast, engagement received the lowest scores, followed by information quality, indicating a lack of features that sustain user interest and provide credible, evidence-based content. This quality pattern has also been reported in evaluations of other mHealth apps. Carrouel et al [37] evaluated mental health apps and found that although most demonstrated sound technical performance, many lacked meaningful user engagement and evidence-based content. Similarly, Martinon et al [38] reported that nutrition-related apps were generally usable but underperformed in terms of personalization and information quality. These similarities suggest a broader trend in mHealth app development, where technical functionality often exceeds performance in user-centered and content-related dimensions. As earlier research has pointed out, low engagement scores in health-related apps are often linked to a lack of personalized and adaptive content, which is particularly important for diverse populations such as older adults with cognitive impairments [39]. Furthermore, low engagement is often linked to poor user retention. In one study on diabetes management apps, low engagement scores were associated with higher dropout rates [40]. Therefore, incorporating personalized features and adaptive content, such as gamification, user incentives, and social interaction tools, is essential to enhancing engagement and ensuring sustained use of cognitive training apps.

This study further revealed that group comparisons of the 4 MARS dimensions showed no significant differences in functionality; however, significant differences were observed in engagement, aesthetics, and information across app groups. The consistent functionality suggests that most cognitive training apps meet baseline standards for technical performance, reliability, and ease of use. In contrast, the observed differences in engagement, aesthetics, and information highlight key areas where apps diverge in their ability to meet user needs. Variations in engagement likely reflect differing levels of investment in user-focused features [41]. Differences in aesthetics and information may indicate disparities in design quality and content accuracy, with higher-scoring apps often incorporating professional input and evidence-based content [42,43]. Addressing these gaps by enhancing engagement features, improving design, and involving professionals during development could enhance app effectiveness and user satisfaction. Notably, limited involvement of health care professionals during app development for older adults with cognitive impairment, as observed in this study, is consistent with previous findings [27,44]. Only 8 of the included apps (33%) explicitly mentioned health care professional involvement. This reflects a broader challenge in the mHealth field [45], where insufficient engagement from both health care professionals and patients often results in apps that fail to provide reliable, evidence-based information or meet user needs effectively. Evidence indicates that the quality and trustworthiness of mHealth apps are strongly linked to the involvement of health care professionals, who ensure clinical accuracy and adherence to evidence-based practices [46]. 
Similarly, excluding patients from the development process frequently leads to features misaligned with real-world needs, reducing engagement and limiting the apps’ effectiveness in supporting health management [47].

The findings of this study have important implications for both app developers and health care providers. Key deficiencies were identified across many of the reviewed apps in user engagement, information quality, and aesthetic design, alongside limited involvement of health care professionals, which underscores the need for closer collaboration with health care professionals to ensure content accuracy, adherence to medical guidelines, and alignment with real-world health care needs [46,48]. Similarly, incorporating patient feedback during the development process can help tailor cognitive training apps to better meet the cognitive, physical, and sensory needs of older adults with cognitive impairment, thereby improving accessibility and engagement. The results also underscore the importance of user interface design: balancing aesthetic appeal with intuitive functionality can significantly enhance usability, particularly in cognitively impaired populations. On the technical side, integrating features such as adaptive difficulty levels and real-time feedback could enhance personalization and maintain user motivation over time. Overall, these insights underscore the value of a cross-disciplinary approach in which developers, clinicians, patients, designers, and technical teams collaborate to produce mHealth solutions that are user-centered, clinically credible, and sustainable in real-world use.

This study identified key limitations in existing cognitive training apps for individuals with cognitive impairment, particularly in visual presentation: dense text blocks and a lack of instructional images and videos pose particular challenges for this population. Because older adults commonly experience age-related visual degradation, including decreased visual acuity, a reduced field of vision, and diminished color recognition, these design issues exacerbate difficulties in using digital technologies, especially for cognitive training [49]. Previous studies indicate that cognitive impairments also introduce unique visual challenges, such as slower visual processing speed and difficulties in spatial perception, which hinder the effective use of apps [50,51]. To improve the usability and effectiveness of cognitive training apps for older adults with cognitive impairment, it is essential to incorporate user-centered design strategies that accommodate both cognitive and sensory limitations. Interface design should emphasize simplicity and clarity through high-contrast visuals, large fonts and icons, linear navigation structures, and minimal interaction steps to reduce operational complexity and cognitive load. Readability can be further enhanced by optimizing text presentation with adjustable font sizes, low content density, and voice-assisted reading or playback features to support information processing and minimize visual fatigue. Furthermore, incorporating multimodal support, such as auditory prompts, visual cues, haptic feedback, and the inclusion of icons, images, and instructional videos, can enhance task comprehension and user engagement. In particular, the integration of auditory and visual stimuli has been shown to improve performance and reduce cognitive burden [52].
To further support visual processing and maintain attention, optimizing visual presentation (such as enhancing contrast and minimizing screen glare) is also important, as higher contrast improves readability while glare reduction contributes to visual comfort and uninterrupted use [53,54]. In addition, personalized cognitive training modules that adapt to users' cognitive profiles, including their impairment levels, response speed, learning preferences, and progress over time, may help maintain motivation and optimize training outcomes. Finally, enhancing feedback mechanisms is also critical: real-time audio cues, digital guidance, and multisensory reinforcement, including auditory and tactile feedback, can promote sustained attention and support memory retention [55]. Together, these strategies can improve the usability and effectiveness of cognitive training apps for older adults with cognitive impairment, enhancing engagement and supporting better cognitive performance.
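The contrast guidance above can be made concrete with the contrast-ratio formula defined in the W3C WCAG 2.x specification. The sketch below assumes sRGB colors given as 0-255 integer tuples; the color choices are illustrative only.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255.0
        # Linearize the sRGB channel per the WCAG definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L1 + 0.05) / (L2 + 0.05), with L1 the lighter of the two luminances."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background gives the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # prints 21.0
```

WCAG requires at least 4.5:1 for normal text (level AA) and 7:1 at level AAA; given the visual degradation discussed above, targeting the stricter threshold seems prudent for this population.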

Limitations

Several limitations should be considered in this study. First, data were sourced primarily from major app stores (Google Play Store and Apple App Store), which may have excluded apps available on regional or specialized platforms, limiting the comprehensiveness of the findings. Second, although the MARS provided a standardized evaluation framework, inherent subjectivity in the scoring process may have affected consistency, particularly in the evaluation of visual design. Third, the study focused exclusively on apps identified using specific cognitive training-related keywords, resulting in a limited sample size and potentially reducing the generalizability of the findings. Fourth, the dynamic nature of app stores posed challenges to the study’s timeliness, as some apps may have been removed or updated after data collection. Finally, while MARS is a robust tool, it does not comprehensively address data privacy concerns or sustained app usage outcomes. Future research should address these limitations by expanding data sources, incorporating user experience and longitudinal data, and exploring additional domains of app quality.

Conclusions

This study evaluated the quality of cognitive training mobile apps for older adults with cognitive impairment using the MARS. The findings highlight the promise of these apps as accessible tools for cognitive health management while revealing notable deficiencies in quality, usability, and evidence-based functionality. As mHealth technologies evolve, rigorous and standardized evaluation frameworks and longitudinal studies are needed to validate their efficacy and ensure safety in real-world settings. Such efforts are critical to fully realize the potential of these technologies in supporting cognitive health in aging populations.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 72174061 and 71704053), the China Scholarship Council (No. 202308330251), and the Health Science and Technology Project of Zhejiang Provincial Health Commission (Nos. 2022KY370 and 2024KY1662).

Authors' Contributions

LW contributed to conceptualization, methodology, and writing-original draft. JP and CD were involved in conceptualization, methodology, and data curation. AG and AH were responsible for supervision and validation. HT and XW contributed to writing-review and editing. CZ handled conceptualization and methodology. LW was involved in conceptualization, methodology, validation, writing-review and editing, and funding acquisition.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Search Strategy Process.

DOCX File, 21 KB

Multimedia Appendix 2

Grouping.

XLSX File, 12 KB

Checklist 1

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.

DOCX File, 277 KB

  1. Livingston G, Huntley J, Sommerlad A, et al. Dementia prevention, intervention, and care: 2020 report of the Lancet Commission. Lancet. Aug 8, 2020;396(10248):413-446. [CrossRef] [Medline]
  2. Jack CR Jr, Bennett DA, Blennow K, et al. NIA-AA research framework: Toward a biological definition of Alzheimer’s disease. Alzheimers Dement. Apr 2018;14(4):535-562. [CrossRef] [Medline]
  3. Chapman SB, Aslan S, Spence JS, et al. Neural mechanisms of brain plasticity with complex cognitive training in healthy seniors. Cereb Cortex. Feb 2015;25(2):396-405. [CrossRef] [Medline]
  4. Gates NJ, Vernooij RW, Di Nisio M, et al. Computerised cognitive training for preventing dementia in people with mild cognitive impairment. Cochrane Database Syst Rev. Mar 13, 2019;3(3):CD012279. [CrossRef] [Medline]
  5. Lampit A, Hallock H, Valenzuela M. Computerized cognitive training in cognitively healthy older adults: a systematic review and meta-analysis of effect modifiers. PLOS Med. Nov 2014;11(11):e1001756. [CrossRef] [Medline]
  6. Bateman DR, Srinivas B, Emmett TW, et al. Categorizing health outcomes and efficacy of mHealth apps for persons with cognitive impairment: a systematic review. J Med Internet Res. Aug 30, 2017;19(8):e301. [CrossRef] [Medline]
  7. Maggio MG, Luca A, Calabrò RS, et al. Can mobile health apps with smartphones and tablets be the new frontier of cognitive rehabilitation in older individuals? A narrative review of a growing field. Neurol Sci. Jan 2024;45(1):37-45. [CrossRef] [Medline]
  8. Shojaei F, Bergvist ES, Shih PC. Exploring the impact of digital art therapy on people with dementia: a framework and research-based discussion. In: Shaikh A, Alghamdi A, Tan Q, El Emary IMM, editors. Advances in Emerging Information and Communication Technology ICIEICT 2023 Signals and Communication Technology. Springer; 2024:133-143. [CrossRef]
  9. Zhu D, Al Mahmud A, Liu W, et al. Digital storytelling for people with cognitive impairment using available mobile apps: systematic search in app stores and content analysis. JMIR Aging. Oct 24, 2024;7:e64525. [CrossRef] [Medline]
  10. Shojaei F, Osorio Torres J, Shih PC. Exploring the integration of technology in art therapy: insights from interviews with art therapists. Art Ther (Alex). 2024:1-7. [CrossRef]
  11. World Health Organization. Seventy-first World Health Assembly: Provisional agenda item 124 (A71/20). 2018. URL: https://apps.who.int/gb/ebwha/pdf_files/WHA71/A71_20-en.pdf [Accessed 2025-06-03]
  12. Silva BMC, Rodrigues J, de la Torre Díez I, et al. Mobile-health: a review of current state in 2015. J Biomed Inform. Aug 2015;56:265-272. [CrossRef] [Medline]
  13. Coyle H, Traynor V, Solowij N. Computerized and virtual reality cognitive training for individuals at high risk of cognitive decline: systematic review of the literature. Am J Geriatr Psychiatry. Apr 2015;23(4):335-359. [CrossRef] [Medline]
  14. Chan ATC, Ip RTF, Tran JYS, et al. Computerized cognitive training for memory functions in mild cognitive impairment or dementia: a systematic review and meta-analysis. NPJ Digit Med. Jan 3, 2024;7(1):1. [CrossRef] [Medline]
  15. Chan JYC, Kwong JSW, Wong A, et al. Comparison of computerized and paper-and-pencil memory tests in detection of mild cognitive impairment and dementia: a systematic review and meta-analysis of diagnostic studies. J Am Med Dir Assoc. Sep 2018;19(9):748-756. [CrossRef] [Medline]
  16. Vermeir JF, White MJ, Johnson D, et al. The effects of gamification on computerized cognitive training: systematic review and meta-analysis. JMIR Serious Games. Aug 10, 2020;8(3):e18644. [CrossRef] [Medline]
  17. Koivisto J, Malik A. Gamification for older adults: a systematic literature review. Gerontologist. Sep 13, 2021;61(7):e360-e372. [CrossRef] [Medline]
  18. Bang M, Jang CW, Kim HS, et al. Mobile applications for cognitive training: content analysis and quality review. Internet Interv. Sep 2023;33:100632. [CrossRef] [Medline]
  19. Cha SM. Mobile application applied for cognitive rehabilitation: a systematic review. Life (Basel). Jul 18, 2024;14(7):891. [CrossRef] [Medline]
  20. Werner NE, Brown JC, Loganathar P, et al. Quality of mobile apps for care partners of people with Alzheimer disease and related dementias: mobile app rating scale evaluation. JMIR Mhealth Uhealth. Mar 29, 2022;10(3):e33863. [CrossRef] [Medline]
  21. Silva AF, Silva RM, Murawska-Ciałowicz E, et al. Cognitive training with older adults using smartphone and web-based applications: a scoping review. J Prev Alzheimers Dis. 2024;11(3):693-700. [CrossRef] [Medline]
  22. Stoyanov SR, Hides L, Kavanagh DJ, et al. Development and Validation of the User Version of the Mobile Application Rating Scale (uMARS). JMIR Mhealth Uhealth. Jun 10, 2016;4(2):e72. [CrossRef] [Medline]
  23. Baumel A, Faber K, Mathur N, et al. Enlight: a comprehensive quality and therapeutic potential evaluation tool for mobile and web-based eHealth interventions. J Med Internet Res. Mar 21, 2017;19(3):e82. [CrossRef] [Medline]
  24. Brooke J. SUS: a quick and dirty usability scale. In: Jordan PW, Thomas B, Weerdmeester BA, McClelland IL, editors. Usability Evaluation in Industry. Taylor and Francis; 1996:189-194. [CrossRef]
  25. Stoyanov SR, Hides L, Kavanagh DJ, et al. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR Mhealth Uhealth. Mar 11, 2015;3(1):e27. [CrossRef] [Medline]
  26. Amor-García MÁ, Collado-Borrell R, Escudero-Vilaplana V, et al. Assessing apps for patients with genitourinary tumors using the Mobile Application Rating Scale (MARS): systematic search in app stores and content analysis. JMIR Mhealth Uhealth. Jul 23, 2020;8(7):e17609. [CrossRef] [Medline]
  27. Gerner M, Vuillerme N, Aubourg T, et al. Review and analysis of German mobile apps for inflammatory bowel disease management using the Mobile Application Rating Scale: systematic search in app stores and content analysis. JMIR Mhealth Uhealth. May 3, 2022;10(5):e31102. [CrossRef] [Medline]
  28. Lull C, von Ahnen JA, Gross G, et al. German mobile apps for patients with psoriasis: systematic search and evaluation. JMIR Mhealth Uhealth. May 26, 2022;10(5):e34017. [CrossRef] [Medline]
  29. Diaz-Skeete YM, McQuaid D, Akinosun AS, et al. Analysis of apps with a medication list functionality for older adults with heart failure using the Mobile App Rating Scale and the IMS Institute for Healthcare Informatics Functionality score: evaluation study. JMIR Mhealth Uhealth. Nov 2, 2021;9(11):e30674. [CrossRef] [Medline]
  30. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. Mar 29, 2021;372:n71. [CrossRef] [Medline]
  31. Narrillos-Moraza Á, Gómez-Martínez-Sagrera P, Amor-García MÁ, et al. Mobile apps for hematological conditions: review and content analysis using the Mobile App Rating Scale. JMIR Mhealth Uhealth. Feb 16, 2022;10(2):e32826. [CrossRef] [Medline]
  32. Firth J, Torous J, Nicholas J, et al. The efficacy of smartphone-based mental health interventions for depressive symptoms: a meta-analysis of randomized controlled trials. World Psychiatry. Oct 2017;16(3):287-298. [CrossRef] [Medline]
  33. Torous J, Nicholas J, Larsen ME, Firth J, Christensen H. Clinical review of user engagement with mental health smartphone apps: evidence, theory and improvements. Evid Based Ment Health. Aug 2018;21(3):116-119. [CrossRef] [Medline]
  34. Salazar A, de Sola H, Failde I, Moral-Munoz JA. Measuring the quality of mobile apps for the management of pain: systematic search and evaluation using the mobile app rating scale. JMIR Mhealth Uhealth. Oct 25, 2018;6(10):e10718. [CrossRef] [Medline]
  35. Qu C, Sas C, Daudén Roquet C, Doherty G. Functionality of top-rated mobile apps for depression: systematic search and evaluation. JMIR Ment Health. Jan 24, 2020;7(1):e15321. [CrossRef] [Medline]
  36. Duarte-Díaz A, Perestelo-Pérez L, Gelabert E, et al. Efficacy, safety, and evaluation criteria of mHealth interventions for depression: systematic review. JMIR Ment Health. Sep 27, 2023;10:e46877. [CrossRef] [Medline]
  37. Carrouel F, du Sartz de Vigneulles B, Bourgeois D, et al. Mental health mobile apps in the French App Store: assessment study of functionality and quality. JMIR Mhealth Uhealth. Oct 12, 2022;10(10):e41282. [CrossRef] [Medline]
  38. Martinon P, Saliasi I, Bourgeois D, et al. Nutrition-related mobile apps in the French app stores: assessment of functionality and quality. JMIR Mhealth Uhealth. Mar 14, 2022;10(3):e35879. [CrossRef] [Medline]
  39. Grua EM, Sanctis M, Malavolta I, et al. Social sustainability in the e-health domain via personalized and self-adaptive mobile apps. In: Calero C, editor. Software Sustainability. Springer Nature; 2021:301-328. [CrossRef]
  40. Priesterroth L, Grammes J, Holtz K, et al. Gamification and behavior change techniques in diabetes self-management apps. J Diabetes Sci Technol. Sep 2019;13(5):954-958. [CrossRef] [Medline]
  41. Hibbard JH, Stockard J, Mahoney ER, et al. Development of the patient activation measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res. Aug 2004;39(4 Pt 1):1005-1026. [CrossRef] [Medline]
  42. Kuck N, Dietel FA, Nohr L, et al. A smartphone app for the prevention and early intervention of body dysmorphic disorder: development and evaluation of the content, usability, and aesthetics. Internet Interv. Apr 2022;28:100521. [CrossRef] [Medline]
  43. Ke Z, Qian W, Wang N, et al. Improve the satisfaction of medical staff on the use of home nursing mobile APP by using a hybrid multi-standard decision model. BMC Nurs. May 9, 2024;23(1):302. [CrossRef] [Medline]
  44. Hundert AS, Huguet A, McGrath PJ, et al. Commercially available mobile phone headache diary apps: a systematic review. JMIR Mhealth Uhealth. Aug 19, 2014;2(3):e36. [CrossRef] [Medline]
  45. Chudyk AM, Ragheb S, Kent D, et al. Patient engagement in the design of a mobile health app that supports enhanced recovery protocols for cardiac surgery: development study. JMIR Perioper Med. Nov 30, 2021;4(2):e26597. [CrossRef] [Medline]
  46. Hamilton AD, Brady RRW. Medical professional involvement in smartphone “apps” in dermatology. Br J Dermatol. Jul 2012;167(1):220-221. [CrossRef] [Medline]
  47. Duffy A, Christie GJ, Moreno S. The challenges toward real-world implementation of digital health design approaches: narrative review. JMIR Hum Factors. Sep 9, 2022;9(3):e35693. [CrossRef] [Medline]
  48. Visvanathan A, Hamilton A, Brady RRW. Smartphone apps in microbiology--is better regulation required? Clin Microbiol Infect. Jul 2012;18(7):E218-E220. [CrossRef] [Medline]
  49. Colombo G, Minta K, Grübel J, et al. Detecting cognitive impairment through an age-friendly serious game: The development and usability of the Spatial Performance Assessment for Cognitive Evaluation (SPACE). Comput Human Behav. Nov 2024;160:108349. [CrossRef]
  50. Vergani L, Marton G, Pizzoli SFM, et al. Training cognitive functions using mobile apps in breast cancer patients: systematic review. JMIR Mhealth Uhealth. Mar 19, 2019;7(3):e10855. [CrossRef] [Medline]
  51. Wildenbos GA, Peute L, Jaspers M. Aging barriers influencing mobile health usability for older adults: a literature based framework (MOLD-US). Int J Med Inform. Jun 2018;114:66-75. [CrossRef] [Medline]
  52. Iarlori S, Monteriú A, Perpetuini D, et al. Editorial: Affective computing and mental workload assessment to enhance human-machine interaction. Front Neuroergon. 2024;5:1412744. [CrossRef] [Medline]
  53. Bright K, Egger V. Using visual contrast for effective, inclusive environments. IDJ. Dec 8, 2008;16(3):178-189. [CrossRef]
  54. Lee CC, Chen CH, Chien WC, Wu FG. Improving pedestrians’ navigation safety at night by enhancing legibility of foreground and background information on the display. Int J Ind Ergon. Mar 2023;94:103383. [CrossRef]
  55. Geurts JAP, van Vugt TAG, Arts JJC. Use of contemporary biomaterials in chronic osteomyelitis treatment: clinical lessons learned and literature review. J Orthop Res. Feb 2021;39(2):258-264. [CrossRef] [Medline]


ENLIGHT: Evaluation Tool for Mobile and Web-Based eHealth Interventions
ICC: intraclass correlation coefficient
MARS: Mobile App Rating Scale
mHealth: mobile health
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PROSPERO: International Prospective Register of Systematic Reviews
SUS: System Usability Scale
uMARS: User Version of the Mobile App Rating Scale


Edited by Nazlena Mohamad Ali; submitted 04.12.24; peer-reviewed by Ayush Bhattacharya, Fereshtehossadat Shojaei, Florence Carrouel; final revised version received 05.05.25; accepted 12.05.25; published 04.07.25.

Copyright

© Leyi Wu, Jiajuan Pan, Chuwen Dou, An Gu, An Huang, Hong Tao, Xiaoyan Wang, Chen Zhang, Lina Wang. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 4.7.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.