Original Paper
Abstract
Background: Mobile health (mHealth) apps are increasingly leveraged to support community health workers (CHWs) in delivering high-quality care, particularly in low- and middle-income countries. However, despite the proliferation of mHealth tools, few have been implemented at scale, partly due to limited attention to usability and acceptability among end users. In sub-Saharan Africa, mHealth tools designed for CHWs often lack systematic evaluation using validated instruments tailored to local contexts. Without such assessments, it is difficult to ensure that these tools can be integrated effectively into CHW workflows and scaled sustainably.
Objective: This study aimed to adapt and validate existing mHealth usability and acceptability assessment tools to be contextually appropriate for CHWs in Rwanda, specifically CHWs supporting postoperative home follow-up for women after cesarean delivery. The resulting tool was designed for use in an implementation study of a novel CHW-led mHealth app.
Methods: This study was conducted in the Kirehe district, Rwanda, from October 2022 to March 2023. We adapted 2 established tools—the mHealth App Usability Questionnaire and selected items from the Practitioner Opinion (Acceptability) Scale—and added new items that reflect core functions of the CHW-focused mHealth app. All items were translated into Kinyarwanda and simplified to align with CHWs’ educational levels. We conducted a three-stage validation that consisted of (1) content validity testing with 8 local and international experts using a recommended content validity index threshold of >0.78; (2) face validity testing with 10 CHWs using a recommended face validity index threshold of ≥0.60; and (3) reliability testing using responses from 30 CHWs, with a Cronbach α coefficient of ≥0.70 indicating acceptable internal consistency.
Results: Of the 25 items assessed, 22 (88%) achieved a content validity index score of >0.78 for both clarity and relevance. The face validity index across all 22 items was 0.991, indicating strong comprehensibility and relevance to CHWs. Internal consistency was high: the Cronbach α was 0.86 for the mHealth App Usability Questionnaire items, 0.73 for the Practitioner Opinion (Acceptability) Scale items, and 0.87 for the newly developed questions. The final tool—named the Community Health Worker mHealth Usability and Acceptability Assessment Tool—included 22 items with strong content validity, face validity, and internal reliability.
Conclusions: This study presents a rigorously adapted and validated tool for assessing mHealth usability and acceptability among CHWs in Rwanda. The Community Health Worker mHealth Usability and Acceptability Assessment Tool can guide future evaluations of mHealth interventions in similar contexts and serve as a model for localizing mHealth assessment tools in low- and middle-income country settings to ensure fit-for-purpose implementation.
doi:10.2196/64916
Keywords
Introduction
Mobile health (mHealth) apps have emerged as important tools to support health care workers in delivering high-quality care, particularly in low- and middle-income countries. Over the past 2 decades, an increasing number of mHealth solutions have been developed to assist community health workers (CHWs) in performing critical tasks across a range of health interventions [,]. CHWs, who often have limited formal training, nonetheless play a vital role in screening, diagnosing, and managing health conditions at the community level []. mHealth apps targeting CHWs are typically designed to support specific workflows or interventions. However, to ensure that these tools are adopted and used effectively, it is essential to evaluate their usability and acceptability among intended users. Robust assessment can guide the refinement of interface design and core functionalities, ultimately improving user experience and the quality of care delivered. Despite the increasing deployment of CHW-focused mHealth tools, few published studies have quantitatively assessed their usability or acceptability in sub-Saharan Africa (SSA) [-], and to our knowledge, there are no validated quantitative tools that have been developed for this purpose.
Several tools assess the usability of digital interventions or mHealth apps among target users other than CHWs. Examples of usability assessment questionnaires geared toward patients include the Mobile App Rating Scale, a tool to measure the quality of apps commonly used in the health domain []; the System Usability Scale, developed to assess the usability of health care innovations []; and the Health Information Technology Usability Evaluation Scale, a tool used to assess the usability of mHealth technologies in groups of adults living with chronic diseases []. Tools that assess usability among clinical providers include the Telehealth Usability Questionnaire [] and the mHealth App Usability Questionnaire (MAUQ) []. The Telehealth Usability Questionnaire was designed to evaluate the interface quality, reliability, and satisfaction of computer-based telehealth videoconferencing software, whereas the MAUQ was developed to evaluate mHealth apps.
Evaluation of user acceptability is also a critical aspect of mHealth app development. The Practitioner Opinion (Acceptability) Scale (POAS) is an example of a tool designed to evaluate the acceptability of decision aids during the development process and early evaluation stages []. Other examples of acceptability tools used to evaluate mHealth technology include the Acceptability E-scale [], the technology acceptance model [], and the User Engagement Scale []. However, none of these were developed to assess acceptability among CHWs.
In this paper, we describe our process of adapting existing tools to develop and validate a quantitative tool for assessing the usability and acceptability of an mHealth app among CHWs. This validated tool was specifically developed to assess the usability and acceptability of a new mHealth app supporting CHW-led comprehensive home-based follow-up care for women who delivered via cesarean section in rural Rwanda.
Methods
Study Design
Between October 2022 and March 2023, we adapted and validated a tool to assess the usability and acceptability of a novel mHealth screening app in a sample of target end users (CHWs). The mHealth app supports comprehensive home-based follow-up care among women who have delivered via cesarean section. The usability and acceptability assessment tool we developed is referred to as the Community Health Worker mHealth Usability and Acceptability Assessment Tool (CHW-MUAAT).
Study Setting
This study was implemented in the Kirehe district, located in the Eastern Province of rural Rwanda. The district is served by Kirehe District Hospital, with 19 health centers and 2448 CHWs. The hospital operates under the Rwanda Ministry of Health and has been supported since 2005 by Inshuti Mu Buzima, the Rwandan sister organization of the US-based nongovernmental organization Partners In Health. This study was conducted as part of the study titled “mHealth - community health worker tool for comprehensive postcesarean follow-up in rural Rwanda,” which was being conducted in the same area.
Selection of the Existing Tools and Initial Questions
After reviewing multiple existing tools for assessing the usability and acceptability of mHealth or digital health interventions, our team selected 2 existing tools because of their relevance to our study. To assess usability among CHWs, we adapted the MAUQ for stand-alone mHealth apps, which was developed and validated in English by Zhou et al []. The original MAUQ comes in 4 versions depending on the type of app (interactive or stand-alone) and the target users (patients or health care providers). Given that our mHealth app is stand-alone and geared toward CHWs, we adapted the MAUQ for stand-alone mHealth apps used by health care providers. This version of the MAUQ includes 14 questions. For acceptability, we selected the questions most relevant to our study from the POAS, validated by O’Connor and Cranney []; we started with the 4 questions deemed relevant out of the 15 in this tool. In addition, we developed and added 7 usability questions; these questions were drafted by the core study team members to reflect core functions of the mHealth screening app under development. These functions were not captured in the MAUQ or POAS questions, and the new items underwent the validation steps described in subsequent sections. We intentionally designed all items to be below an 8th-grade reading level, following widely accepted best practices in health communication. This decision was guided by recommendations from the National Institutes of Health (NIH), which emphasize using plain language, short and clear sentences, and familiar vocabulary in survey instruments to promote comprehension and reduce respondent burden. These NIH guidelines support creating materials that are accessible across diverse literacy levels and help minimize misunderstanding or response error (NIH Plain Language Guidance []).
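Although the specific readability formula used is not named here, a reading-level check of this kind can be automated. The following is a minimal sketch using the Flesch-Kincaid grade formula; the example item and the grade-8 cutoff are illustrative assumptions, not the study's actual procedure.

```python
import re

def count_syllables(word: str) -> int:
    """Rough vowel-group heuristic for counting English syllables."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1  # treat a trailing silent 'e' as non-syllabic
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / max(len(sentences), 1))
            + 11.8 * (syllables / max(len(words), 1))
            - 15.59)

# Illustrative check of one adapted item against a grade-8 ceiling.
item = "The app is easy to use."
grade = flesch_kincaid_grade(item)
print(f"grade {grade:.1f}:", "OK" if grade < 8 else "needs simplification")
```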
Translation and Adaptation
After ensuring its comprehensibility with a reading level equivalent to sixth grade, the CHW mHealth usability and acceptability assessment tool underwent a cross-cultural adaptation process consisting of translation into Kinyarwanda, back translation, and harmonization, as described by Beaton et al []. A study team member fluent in Kinyarwanda and English translated the adapted questionnaire from English into Kinyarwanda. An independent translator with expertise in the health domain, who did not have access to the original version of the assessment questions, then translated the questions back into English. The study team and translator met to review and complete a harmonization process to identify any ambiguity, errors, or confusion in the translated version and to ensure that the tool was appropriate for the cultural context.
Validity and Reliability Testing
Validity testing comprised 2 assessments: content validity testing and face validity testing, whereas reliability was evaluated through internal consistency analysis.
Content Validity
Content validity assessed the relevance and clarity of the tool among content experts. Eight content experts, including app developers, study team members, and clinicians, were recruited through convenience sampling and asked to rate the relevance and clarity of items from the CHW-MUAAT on a scale from 1 to 4 (1=“not relevant” or “not clear”; 4=“very relevant” or “very clear”). Content validity was established using the content validity index (CVI). The CVI was calculated as the number of experts who agreed that the item was relevant or clear (scores of 3 or 4) divided by the total number of experts. The score for each individual item is referred to as the “item CVI.” On the basis of recommended CVI thresholds from the literature [], items with an item CVI of >0.78 were deemed content valid and remained on the survey. Those with an item CVI of ≤0.78 were removed.
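As a minimal illustration of this calculation (the ratings below are hypothetical, not study data), the item CVI is the proportion of experts scoring an item 3 or 4; the face validity index described in the next section is computed the same way from CHW ratings, with a 0.60 cutoff.

```python
def item_validity_index(ratings: list[int]) -> float:
    """Proportion of raters scoring an item 3 or 4 on the 1-4 scale
    (the same formula yields the item CVI and the item FVI)."""
    agree = sum(1 for r in ratings if r >= 3)
    return agree / len(ratings)

# Hypothetical relevance ratings from 8 content experts for one item.
relevance_ratings = [4, 4, 3, 4, 3, 4, 2, 4]  # 7 of 8 experts rated 3 or 4
i_cvi = item_validity_index(relevance_ratings)
print(round(i_cvi, 2), "retain" if i_cvi > 0.78 else "remove")  # 0.88 -> retain
```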
Face Validity
Face validity was established by assessing the clarity and comprehensibility of the translated tool among target end users. In total, 10 target users (CHWs) from the Kirehe district were selected at random and asked to rate the clarity and comprehensibility of the translated items on the assessment tool on a scale from 1 to 4 (1=“not at all clear” or “not at all understandable”; 4=“very clear” or “very understandable”). The face validity index (FVI) was calculated by dividing the number of CHWs who agreed that the item was understandable (score of 3 or 4) by the total number of CHWs (n=10). On the basis of recommendations from the literature [], an FVI above 0.74 was considered excellent, an FVI of 0.60 to 0.74 was considered good, and an FVI of 0.54 to 0.59 was considered fair. Items with an FVI of ≥0.60 were considered acceptable and kept.
Reliability Testing
After the content and face validity testing, the adapted tool was used to assess the usability and acceptability of the mHealth app among 30 CHWs. The 30 CHWs received explanations on the use of the mHealth CHW app and were asked to respond to all questions on the app during 3 sample patient vignette role-plays led by a study team facilitator. After the sample role-plays, the CHWs were asked to answer the questions from the CHW-MUAAT. The full results of this assessment are presented elsewhere. For this study, we used the data from reliability testing to assess the internal consistency of the tool, which was evaluated by calculating the Cronbach α coefficient []. A higher α value suggests greater internal consistency reliability, with an α value greater than 0.70 deemed to indicate good reliability [,].
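For reference, a minimal sketch of the Cronbach α calculation on a respondents-by-items matrix of Likert scores follows; the toy data are illustrative and are not the study responses.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach alpha for a (respondents x items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents x 4 items on a 1-5 Likert scale (not study data).
toy = np.array([
    [5, 4, 5, 4],
    [4, 4, 4, 3],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
])
alpha = cronbach_alpha(toy)
print(f"alpha = {alpha:.2f}", "(acceptable)" if alpha >= 0.70 else "(below 0.70)")
```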
Ethical Considerations
This tool adaptation and validation study falls under a larger study titled “mHealth - community health worker tool for comprehensive postcesarean follow-up in rural Rwanda,” which received human participant ethical review approval from the Rwanda National Ethics Committee (109/RNEC/2022) and the Institutional Review Board of the Harvard Faculty of Medicine (22-1025). For the reliability testing, all CHWs provided informed consent before participating in the mHealth usability and acceptability assessments and were able to opt out of participation in the study activities. Data used in measuring validity and reliability were deidentified. Participants were compensated with 6000 Rwandan francs (approximately US $4.15) for their time and travel related to the study.
Results
The first adapted tool included 14 questions from the MAUQ, 4 POAS questions, and 7 questions added by the study team, yielding a total of 25 questions. All questions were modified for a sixth-grade reading level and underwent cross-cultural translation. The final questions from this first adaptation are presented in the table below.
| Source and original question | Adapted question to sixth-grade English reading level (back translation) | Adapted question to Kinyarwanda |
| --- | --- | --- |
| MAUQa | | |
| “The app was easy to use.” | “The app is easy to use.” | “Iyi apurikasiyo iroroshye kuyikoresha.” | |
| “It was easy for me to learn to use the app.” | “It is easy for me to learn to use the app.” | “Biranyorohera kwiga gukoresha iyi apurikasiyo y’ikoranabuhanga.” | |
| “The navigation was consistent when moving between screens.” | “The navigation is consistent when moving between screens.” | “Kugenda mpinduranya aho nkorera ni ibintu bihamye cyangwa bikorwa mu byryo bumwe mu gukoresha iyi apurikasiyo.” | |
| “The interface of the app allowed me to use all the functions (such as entering information, responding to reminders, viewing information) offered by the app.” | “I can use all the functions (such as entering information, responding to reminders, viewing information) in the app.” | “Nshobora gukoresha fongisiyo zose / ibice byose (nko kwinjiza amakuru, gusubiza ibijyanye no kwibutsa, kureba amakuru).” | |
| “Whenever I made a mistake using the app, I could recover easily and quickly.” | “Whenever I make a mistake on the app, it is easy and quick to correct it.” | “Igihe cyose nkoze ikosa cyangwa nibeshye mu gukoresha apurikasiyo, biroroshye kandi biruhuta kurikosora.” | |
| “I like the interface of the app.” | “I like the interface of the app.” | “Nakunze uburyo apurikasiyo igaragara mu kuyikoresha.” | |
| “The information in the app was well organized, so I could easily find the information I needed.” | “The information in the app is well organized, so I can easily find the information I need.” | “Amakuru yashyizwe neza muri apurikasiyo ku buryo nshobora kubonamo ayo nkeneye ku buryo bworoshye.” | |
| “The app adequately acknowledged and provided information to let me know the progress of my action.” | “The app does a good job of letting me know my progress.” | “Apurikasiyo imfasha kumenya aho ngeze nkora.” | |
| “The amount of time involved in using this app has been fitting for me.” | “The time it takes to use this app works well for me.” | “Umwanya bimfata mu gukoresha iyi apurikasiyo mu kazi urakwiye kuri njye.” | |
| “Overall, I am satisfied with this app.” | “Overall, I am satisfied with this app.” | “Muri rusange nyuzwe niyi apurikasiyo.” | |
| “The app improved my access to delivering healthcare services.” | “The app improves my ability to deliver healthcare services.” | “Apurikasiyo itezimbere ubushobozi bwanjye bwo gutanga serivisi z’ubuzima.” | |
| “The app helped me manage my patients’ health effectively.” | “The app helps me manage my patients’ health effectively.” | “Apurikasiyo imfasha mu kwita neza ku buzima bw’abarwayi banjye mu buryo buboneye.” | |
| “This app has all the functions and capabilities I expected it to have.” | “This app has all the functions and capabilities I expected it to have.” | “Iyi apurikasiyo ifite imikorere n’ubushobozi nateganyaga ko ifite.” | |
| “I could use the app even when the Internet connection was poor or not available.” | “I can use the app even when the Internet connection was poor or not available.” | “Nshobora gukoresha apurikasiyo nubwo umurongo wa interineti waba ari muke cyangwa ari ntawo.” | |
| POASb | | |
| “It will be easy for me to understand how to use the app.” | “It is easy for me to understand how to use the app.” | “Biranyoroheye kumva uburyo bwo gukoresha apurikasiyo.” | |
| “This strategy is compatible with the way I think things should be done.” | “The app is a good fit for the way I conduct visits to women after cesarean section.” | “Apurikasiyo ikwiranye neza n’uburyo bwo gusura abagore nyuma yo kubyara babazwe.” | |
| “Using this strategy will save me time.” | “The app will save me time during home visits to women who delivered by cesarean section.” | “Apurikasiyo izamfasha kuzigama igihe nakoreshaga mugihe cyo gusura mu rugo ababyeyi babyaye babazwe.” | |
| “There is a high probability that using this strategy may cause or result in more benefit than harm.” | “I think using this mHealth app will be more helpful than hurtful.” | “Ndatekereza ko gukoresha iyi apurikasiyo ya mHealth bizafasha cyane kuruta guteza ibyago.” | |
| Questions added by the study team members | | |
| —c | “This app would be useful for my home visits to women after cesarean section.” | “Iyi apurikasiyo yaba ingirakamaro mu gusura mu rugo ababyeyi nyuma yo kubyara babazwe.” | |
| — | “It is easy for me to log into the app with my username and password.” | “Biranyorohera kwinjira muri apurikasiyo ukoresheje izina ryanjye n’ijambo ryibanga.” | |
| — | “It is easy for me to find the correct patient using the patient name ID.” | “Biranyorohera kubona umurwayi nyawe nkoresheje ijambo riranga umurwayi.” | |
| — | “It is easy for me to take or retake a picture of the wound using the app.” | “Biranyorohera gufata ndetse no kongera gufata ifoto y’igisebe nkoresheje apurikasiyo.” | |
| — | “It is easy for me to read the recommendations given by the app.” | “Biranyorohera gusoma amabwiriza yatanzwe na apurikasiyo.” | |
| — | “It is easy for me to communicate with women when using the app.” | “Biranyorohera kuvugana n’mubyeyi mugihe nkoresheje apurikasiyo.” | |
| — | “I feel comfortable using the app during visit for follow-up of women who delivered by C-section.” | “Ndumva mbohokewe cyangwa nifitiye ikizere mu gukoresha iyi apurikasiyo mugihe cyo gusura mu rugo kugirango nkurikirane abagore babyaye babazwe.” | |
aMAUQ: mHealth App Usability Questionnaire.
bPOAS: Practitioner Opinion (Acceptability) Scale.
cNot applicable.
A total of 10 content experts participated in the content validity survey. In total, 88% (22/25) of the items received an item CVI score of >0.78 on both the clarity and relevance scales, as shown in the validity results table below. Three items had an item CVI lower than this threshold and were removed: “The app does a good job of letting me know my progress,” “The navigation is consistent when moving between screens,” and “I can use all the functions (such as entering information, responding to reminders, and viewing information) in the app.”
All the remaining 22 items received individual FVI scores of >0.60 for clarity and comprehension and, thus, were retained in the final tool. The FVI score for clarity across all questions was 0.991, as was the FVI for comprehension. The full final tool is presented below. The Cronbach α value was 0.86 for the 12 MAUQ usability questions, 0.73 for the 4 POAS acceptability questions, and 0.87 for the questions added by the study team, indicating high internal consistency reliability of these questions.
| Item | Clarity I-CVId (content expert panela) | Relevance I-CVI (content expert panel) | Clarity FVIe (CHWsb,c) | Comprehension FVI (CHWs) |
| --- | --- | --- | --- | --- |
| “The app is easy to use.” | 1 | 1 | 1 | 1 | |
| “It is easy for me to learn to use the app.” | 0.9 | 1 | 1 | 1 | |
| “The navigation is consistent when moving between screens.” | 0.6 | 0.8 | —f | — | |
| “I can use all the functions (such as entering information, responding to reminders, viewing information) in the app.” | 0.6 | 0.9 | — | — | |
| “Whenever I make a mistake on the app, it is easy and quick to correct it.” | 0.9 | 1 | 1 | 1 | |
| “I like the interface of the app.” | 0.9 | 0.9 | 1 | 1 | |
| “The information in the app is well organized, so I can easily find the information I need.” | 0.9 | 1 | 1 | 1 | |
| “The app does a good job of letting me know my progress.” | 0.9 | 0.7 | — | — | |
| “The time it takes to use this app works well for me.” | 0.9 | 1 | 1 | 1 | |
| “Overall, I am satisfied with this app.” | 1 | 0.9 | 1 | 1 | |
| “The app improves my ability to deliver healthcare services.” | 1 | 1 | 1 | 1 | |
| “The app helps me manage my patients’ health effectively.” | 0.9 | 0.9 | 1 | 1 | |
| “This app has all the functions and capabilities I expected it to have.” | 1 | 1 | 1 | 1 | |
| “I can use the app even when the Internet connection was poor or not available.” | 0.8 | 1 | 0.8 | 0.8 | |
| “This app would be useful for my home visits to women after cesarean section.” | 1 | 0.9 | 1 | 1 | |
| “It is easy for me to log into the app with my username and password.” | 1 | 1 | 1 | 1 | |
| “It is easy for me to find the correct patient using the patient name ID.” | 1 | 1 | 1 | 1 | |
| “It is easy for me to take or retake a picture of the wound using the app.” | 1 | 1 | 1 | 1 | |
| “It is easy for me to read the recommendations given by the app.” | 1 | 1 | 1 | 1 | |
| “It is easy for me to communicate with women when using the app.” | 0.9 | 1 | 1 | 1 | |
| “I feel comfortable using the app during home visit for follow up of women who delivered by caesarean section.” | 1 | 1 | 1 | 1 | |
| “It is easy for me to understand how to use the app.” | 1 | 0.8 | 1 | 1 | |
| “The app is a good fit for the way I conduct visits to women after cesarean section.” | 0.8 | 0.9 | 1 | 1 | |
| “The app will save me time during home visits to women who delivered by cesarean section.” | 1 | 1 | 1 | 1 | |
| “I think using this mHealth app will be more helpful than hurtful.” | 0.9 | 0.9 | 1 | 1 | |
aIn total, 10 content experts from the study team participated in content validity testing. Items with a clarity item content validity index or relevance item content validity index of <0.78 were removed from the final tool.
bIn total, 10 community health workers participated in face validity testing. Items with a face validity index of <0.60 were removed from the final tool.
cCHW: community health worker.
dI-CVI: item content validity index.
eFVI: face validity index.
fNot applicable.
| Item number | Statement | “Strongly disagree” score | “Disagree” score | “Neutral” score | “Agree” score | “Strongly agree” score |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | “The app is easy to use.” | 1 | 2 | 3 | 4 | 5 |
| 2 | “It is easy for me to learn to use the app.” | 1 | 2 | 3 | 4 | 5 |
| 3 | “Whenever I make a mistake on the app, it is easy and quick to correct it.” | 1 | 2 | 3 | 4 | 5 |
| 4 | “I like the interface of the app.” | 1 | 2 | 3 | 4 | 5 |
| 5 | “The information in the app is well organized, so I can easily find the information I need.” | 1 | 2 | 3 | 4 | 5 |
| 6 | “The time it takes to use this app works well for me.” | 1 | 2 | 3 | 4 | 5 |
| 7 | “Overall, I am satisfied with this app.” | 1 | 2 | 3 | 4 | 5 |
| 8 | “The app improves my ability to deliver healthcare services.” | 1 | 2 | 3 | 4 | 5 |
| 9 | “The app helps me manage my patients’ health effectively.” | 1 | 2 | 3 | 4 | 5 |
| 10 | “This app has all the functions and capabilities I expected it to have.” | 1 | 2 | 3 | 4 | 5 |
| 11 | “Once I have logged in, I can use the app even when the Internet connection was poor or not available.” | 1 | 2 | 3 | 4 | 5 |
| 12 | “This app would be useful for my home visits to women after cesarean section.” | 1 | 2 | 3 | 4 | 5 |
| 13 | “It is easy for me to log into the app with my username and password.” | 1 | 2 | 3 | 4 | 5 |
| 14 | “It is easy for me to find the correct patient using the Patient name ID.” | 1 | 2 | 3 | 4 | 5 |
| 15 | “It is easy for me to take or retake a picture of the wound using the app.” | 1 | 2 | 3 | 4 | 5 |
| 16 | “It is easy for me to read the recommendations given by the app.” | 1 | 2 | 3 | 4 | 5 |
| 17 | “It is easy for me to communicate with women when using the app.” | 1 | 2 | 3 | 4 | 5 |
| 18 | “I feel comfortable using the app during home visit for follow up of women who delivered by caesarean section.” | 1 | 2 | 3 | 4 | 5 |
| 19 | “It is easy for me to understand how to use the app.” | 1 | 2 | 3 | 4 | 5 |
| 20 | “The app is a good fit for the way I conduct visits to women after cesarean section.” | 1 | 2 | 3 | 4 | 5 |
| 21 | “The app will save me time during home visits to women who delivered by cesarean section.” | 1 | 2 | 3 | 4 | 5 |
| 22 | “I think using this mHealth app will be more helpful than hurtful.” | 1 | 2 | 3 | 4 | 5 |
aQuestions 1 to 12 are from the mHealth App Usability Questionnaire (MAUQ). Questions 13 to 18 were developed to assess the usability of core functions of the mobile health community health worker app not captured by the MAUQ or Practitioner Opinion (Acceptability) Scale (POAS). Questions 19 to 22 were adapted from the POAS.
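The paper does not prescribe a summary scoring rule for the CHW-MUAAT. The sketch below shows one possible approach, assuming each item is scored 1 to 5 and subscales are summarized as means using the item groupings in the footnote above; all response values shown are hypothetical.

```python
from statistics import mean

# Item groupings follow the footnote above; all responses are hypothetical.
MAUQ_ITEMS = range(1, 13)            # items 1-12
NEW_USABILITY_ITEMS = range(13, 19)  # items 13-18
POAS_ITEMS = range(19, 23)           # items 19-22

def subscale_mean(responses: dict[int, int], items: range) -> float:
    """Mean of the 1-5 Likert responses for the given item numbers."""
    return mean(responses[i] for i in items)

# One CHW's responses to the 22 items (illustrative values only).
responses = {i: 4 for i in range(1, 23)}
responses[11] = 3
responses[22] = 5

print("MAUQ usability:", round(subscale_mean(responses, MAUQ_ITEMS), 2))
print("Added usability items:", round(subscale_mean(responses, NEW_USABILITY_ITEMS), 2))
print("POAS acceptability:", round(subscale_mean(responses, POAS_ITEMS), 2))
```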
Discussion
Principal Results
In this study, we adapted 2 existing tools, the MAUQ and POAS, to evaluate CHWs’ perceptions of the usability and acceptability of a novel mHealth app for comprehensive home-based follow-up care among women who have delivered via cesarean section in Rwanda. Our final tool included 22 items in total: 12 (55%) out of the 15 original items from the MAUQ tool, all 4 (18%) of the items that we pulled from the POAS, and 6 (27%) new items added by our study team. The final items had high content validity among content experts and high face validity and internal reliability among CHWs.
While there is an increasing number of mHealth apps designed to support CHWs across SSA, very few are operating at a large scale and beyond isolated research settings [,,]. One of the key barriers to scale-up is the limited understanding of whether these tools are perceived as usable and acceptable by the CHWs expected to adopt them. Without sufficient usability and acceptability, even well-designed tools are unlikely to integrate effectively into CHW workflows. Although a few studies in SSA have examined these dimensions among CHWs [,-], most have not used adapted or validated tools, limiting the reliability and comparability of their findings. A recent review found that 70% of researcher-developed usability questionnaires for mHealth apps were not validated, and those that were validated were not tailored to the community health context []. Validated assessment frameworks are critical for drawing robust conclusions about user experience and for determining the potential for mHealth tools to be well integrated into routine practice.
This quantitative CHW-MUAAT scale provides a standardized, scalable way to summarize usability and acceptability constructs, which facilitates comparison of potential digital tools in one environment or of the functionality of a single tool in different contexts []. However, complementary qualitative data can offer deeper insights into the contextual factors that shape CHW perceptions of usability or acceptability. For example, a mixed methods study identified that CHWs in Uganda had high rates of digital health acceptance but low digital tool use due to low smartphone ownership []; similarly, a study of YendaNafe, a mobile app to support CHW activities in Malawi, found an overall positive acceptance of the app but that there were several facilitators of and barriers to actual app use []. For this study, our qualitative results indicated general acceptability and emphasized that strong CHW-patient relationships and trust were needed for successful implementation (full results available elsewhere []).
To meaningfully inform the integration of an mHealth app into CHWs’ workflows, usability and acceptability assessments must be tailored to the local context and to the needs and capacities of end users. In our study, this required careful linguistic and cognitive adaptation of existing measurement tools. Given that most CHWs in Rwanda have completed only primary school education, our team revised the wording of several items to ensure that they were easily understandable. For example, the original item “There is a high probability that using this strategy may cause or result in more benefit than harm” was simplified to “I think using this mHealth app will be more helpful than hurtful” []. Additionally, all items were translated into Kinyarwanda, the primary language spoken by CHWs, with attention to clarity and cultural appropriateness.
Beyond linguistic adaptation, we also modified the content of existing tools to better capture usability and acceptability as related to the specific task—home-based follow-up care after cesarean delivery. This included rewording items from the MAUQ and POAS tools and adding step-specific questions relevant to CHWs’ real-world experience. For example, questions about ease of logging into the app were added given that many CHWs have limited experience with smartphones. This consideration is important in Rwanda, where, as of 2024, although 80% of households own a mobile phone, only 34% own a smartphone []. By demonstrating a structured approach to adapting and contextualizing usability and acceptability assessment tools, this study offers both a practical instrument and a replicable methodology for future CHW-focused mHealth evaluations in similar settings across the region.
Other teams interested in assessing the usability and acceptability of a particular CHW mHealth tool can use this instrument, with appropriate adaptation and validation. First, they may need to add usability questions tailored to their community health intervention, similar to how our team added 7 usability questions specific to our postcesarean monitoring app; these new questions will need to undergo reliability and validity assessments. The full tool should then be piloted in their settings, although smaller sample sizes may suffice given the baseline work presented in this paper.
Limitations
We acknowledge that the adaptation and validation of the tool have some limitations that should be considered. First, this study had a small sample size of content experts and CHWs. Because of the overall high scores and consistent conclusions, we are confident in the validity of the final tool, but future validation studies will likely need larger sample sizes if there is more variability in participant responses. Second, this tool provides a quantitative assessment that cannot capture nuanced responses. Our final evaluation includes the quantitative responses collected using this tool during the usability and acceptability testing of the app. However, we do believe that these standardized assessments can help those evaluating mHealth apps identify areas for improvement and test against prespecified targets to determine whether the mHealth app is ready for broader use. Finally, this tool was designed for a specific intervention—a mobile phone app developed for CHWs to use during the home visit follow-up of women who delivered via cesarean section—and was only validated in the Kinyarwanda language. Other research groups should proceed cautiously and likely conduct their own validation studies when applying this tool to a different intervention or in a different language.
Conclusions
Our team is using the final tool as a core component of evaluating an mHealth app used by CHWs to support women after cesarean delivery. The processes and tool described in this paper serve as a model for others conducting CHW usability and acceptability assessments of mHealth apps in SSA. Such assessments will be critical for moving toward better integration of innovative solutions into CHW activities.
Acknowledgments
The study team acknowledges the support of Partners In Health/Inshuti Mu Buzima; Kirehe District Hospital; and Mulindi, Nasho, and Mushikiri Health Centers’ leadership and the community health workers for their voluntary participation. The authors declare the use of generative artificial intelligence (GenAI) in the research and writing process. According to the Generative Artificial Intelligence Delegation Taxonomy (2025), limited proofreading and editing tasks were delegated to GenAI tools under full human supervision. The GenAI tool used was ChatGPT (OpenAI). Responsibility for the final manuscript lies entirely with the authors. GenAI tools are not listed as authors and do not bear responsibility for the final outcomes.
Data Availability
The datasets generated or analyzed during this study are not publicly available due to restrictions on data sharing in accordance with the Rwanda Data Protection and Privacy Law but are available from the corresponding author on reasonable request.
Funding
This study is part of a larger study funded by National Institutes of Health grant NIH/FIC-5R21HD103052-02. Author SN is supported by the Fogarty International Center and National Institute of Mental Health of the National Institutes of Health under award D43 TW010543.
Authors' Contributions
Conceptualization: SN, JN, EHE, BH-G, VKC
Data curation: JN, MK, SN, EHE
Funding acquisition: BH-G
Methodology: SN, JN, EHE, BH-G, VKC
Supervision: BH-G, VKC
Writing—original draft: JN
Writing—review and editing: JN, SN, EHE, MK, AB, RRF, NB, LB, BH-G, VKC
BH-G and VKC were principal investigators. All authors reviewed and approved of the final version of the manuscript.
Conflicts of Interest
None declared.
References
- Braun R, Catalani C, Wimbush J, Israelski D. Community health workers and mobile technology: a systematic review of the literature. PLoS One. 2013;8(6):e65772. [FREE Full text] [CrossRef] [Medline]
- Early J, Gonzalez C, Gordon-Dseagu V, Robles-Calderon L. Use of mobile health (mHealth) technologies and interventions among community health workers globally: a scoping review. Health Promot Pract. Nov 2019;20(6):805-817. [CrossRef] [Medline]
- Glenton C, Javadi D, Perry HB. Community health workers at the dawn of a new era: 5. Roles and tasks. Health Res Policy Syst. Oct 12, 2021;19(Suppl 3):128. [FREE Full text] [CrossRef] [Medline]
- Chang LW, Njie-Carr V, Kalenge S, Kelly JF, Bollinger RC, Alamo-Talisuna S. Perceptions and acceptability of mHealth interventions for improving patient care at a community-based HIV/AIDS clinic in Uganda: a mixed methods study. AIDS Care. 2013;25(7):874-880. [FREE Full text] [CrossRef] [Medline]
- Lacroze E, Frühauf A, Nordmann K, Rampanjato Z, Muller N, De Neve JW, et al. Usability and acceptance of a mobile health wallet for pregnancy-related healthcare: a mixed methods study on stakeholders' perceptions in central Madagascar. PLoS One. 2023;18(1):e0279880. [FREE Full text] [CrossRef] [Medline]
- Medhanyie AA, Little A, Yebyo H, Spigt M, Tadesse K, Blanco R, et al. Health workers' experiences, barriers, preferences and motivating factors in using mHealth forms in Ethiopia. Hum Resour Health. Jan 15, 2015;13(1):2. [FREE Full text] [CrossRef] [Medline]
- Martin-Payo R, Carrasco-Santos S, Cuesta M, Stoyan S, Gonzalez-Mendez X, Fernandez-Alvarez MD. Spanish adaptation and validation of the User Version of the Mobile Application Rating Scale (uMARS). J Am Med Inform Assoc. Nov 25, 2021;28(12):2681-2686. [FREE Full text] [CrossRef] [Medline]
- Ensink CJ, Keijsers NL, Groen BE. Translation and validation of the System Usability Scale to a Dutch version: D-SUS. Disabil Rehabil. Jan 2024;46(2):395-400. [CrossRef] [Medline]
- Schnall R, Cho H, Liu J. Health information technology usability evaluation scale (Health-ITUES) for usability assessment of mobile health technology: validation study. JMIR Mhealth Uhealth. Jan 05, 2018;6(1):e4. [FREE Full text] [CrossRef] [Medline]
- Parmanto B, Lewis ANJ, Graham KM, Bertolet MH. Development of the Telehealth Usability Questionnaire (TUQ). Int J Telerehabil. 2016;8(1):3-10. [FREE Full text] [CrossRef] [Medline]
- Zhou L, Bao J, Setiawan IM, Saptono A, Parmanto B. The mHealth App Usability Questionnaire (MAUQ): development and validation study. JMIR Mhealth Uhealth. Apr 11, 2019;7(4):e11500. [FREE Full text] [CrossRef] [Medline]
- O'Connor AM, Cranney A. User manual - acceptability. Ottawa Hospital Research Institute. 2002. URL: https://decisionaid.ohri.ca/docs/develop/User_Manuals/UM_Acceptability.pdf [accessed 2026-01-20]
- Micoulaud-Franchi JA, Sauteraud A, Olive J, Sagaspe P, Bioulac S, Philip P. Validation of the French version of the Acceptability E-scale (AES) for mental E-health systems. Psychiatry Res. Mar 30, 2016;237:196-200. [CrossRef] [Medline]
- Holden RJ, Karsh BT. The technology acceptance model: its past and its future in health care. J Biomed Inform. Feb 2010;43(1):159-172. [FREE Full text] [CrossRef] [Medline]
- Holdener M, Gut A, Angerer A. Applicability of the user engagement scale to mobile health: a survey-based quantitative study. JMIR Mhealth Uhealth. Jan 03, 2020;8(1):e13244. [FREE Full text] [CrossRef] [Medline]
- Eltorai AEM, Ghanian S, Adams CA, Born CT, Daniels AH. Readability of patient education materials on the American Association for Surgery of Trauma website. Arch Trauma Res. Jun 2014;3(2):e18161. [FREE Full text] [CrossRef] [Medline]
- Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976). Dec 15, 2000;25(24):3186-3191. [CrossRef] [Medline]
- Shi J, Mo X, Sun Z. [Content validity index in scale development]. Zhong Nan Da Xue Xue Bao Yi Xue Ban. Feb 2012;37(2):152-155. [CrossRef] [Medline]
- Zamanzadeh V, Ghahramanian A, Rassouli M, Abbaszadeh A, Alavi-Majd H, Nikanfar AR. Design and implementation content validity study: development of an instrument for measuring patient-centered communication. J Caring Sci. Jun 2015;4(2):165-178. [FREE Full text] [CrossRef] [Medline]
- Mat Nawi FA, Tambi AM, Samat MF, Mustapha WM. A review on the internal consistency of a scale: the empirical example of the influence of human capital investment on malcom baldridge quality principles in TVET institutions. Asian People J. 2020;3(1):19-29. [FREE Full text] [CrossRef]
- Mustafa N, Safii NS, Jaffar A, Sani NS, Mohamad MI, Abd Rahman AH, et al. Malay version of the mHealth App Usability Questionnaire (M-MAUQ): translation, adaptation, and validation study. JMIR Mhealth Uhealth. Feb 04, 2021;9(2):e24457. [FREE Full text] [CrossRef] [Medline]
- Allen P, Bennett K, Heritage B. SPSS Statistics Version 22: A Practical Guide. Victoria, Australia. Cengage Australia; 2014.
- Huang F, Blaschke S, Lucas H. Beyond pilotitis: taking digital health interventions to the national level in China and Uganda. Global Health. Jul 31, 2017;13(1):49. [FREE Full text] [CrossRef] [Medline]
- O'Donnell A. Commentary on Harder et al. (2020): ensuring the sustainability of mHealth in low- and middle-income countries-how do we cure 'pilotitis'? Addiction. Jun 19, 2020;115(6):1061-1062. [CrossRef] [Medline]
- Lim PC, Lim YL, Rajah R, Zainal H. Usability questionnaire for standalone or interactive mobile health applications: a systematic review. BMC Digit Health. Apr 01, 2025;3:11. [CrossRef]
- An Q, Kelley MM, Hanners A, Yen PY. Sustainable development for mobile health apps using the human-centered design process. JMIR Form Res. Aug 25, 2023;7:e45694. [FREE Full text] [CrossRef] [Medline]
- Chraish M, Oyama C, Aoki Y, Andrew D, Nishio M, Shi S, et al. Bridging the gap between community health workers' digital health acceptance and actual usage in Uganda: exploring key external factors based on technology acceptance model. PLOS Digit Health. Nov 19, 2025;4(11):e0001099. [CrossRef] [Medline]
- Kachimanga C, Mulwafu M, Ndambo MK, Harare J, Murkherjee J, Kulinkina AV, et al. Experiences of community health workers on adopting mHealth in rural Malawi: a qualitative study. Digit Health. May 15, 2024;10:20552076241253994. [FREE Full text] [CrossRef] [Medline]
- Estrada EH, Hedt-Gauthier B, Nkurunziza J, Kubwimana M, Nuss S, Forbes C, et al. Community health workers' usability and acceptability of an mHealth tool for post-cesarean assessments: a mixed-methods study in rural Rwanda. BMC Pregnancy Childbirth. Dec 23, 2025. [FREE Full text] [CrossRef] [Medline]
- Condo J, Mugeni C, Naughton B, Hall K, Tuazon MA, Omwega A, et al. Rwanda's evolving community health worker system: a qualitative assessment of client and provider perspectives. Hum Resour Health. Dec 13, 2014;12:71. [FREE Full text] [CrossRef] [Medline]
- Seventh integrated household living conditions survey. National Institute of Statistics of Rwanda. 2025. URL: https://www.statistics.gov.rw/sites/default/files/documents/2025-04/EICV_7_booklet%20for%20dissemination.pdf [accessed 2025-07-14]
Abbreviations
CHW: community health worker
CHW-MUAAT: Community Health Worker mHealth Usability and Acceptability Assessment Tool
CVI: content validity index
FVI: face validity index
MAUQ: mHealth App Usability Questionnaire
mHealth: mobile health
NIH: National Institutes of Health
POAS: Practitioner Opinion (Acceptability) Scale
SSA: sub-Saharan Africa
Edited by A Stone; submitted 30.Jul.2024; peer-reviewed by Å Grönlund; comments to author 30.May.2025; revised version received 05.Dec.2025; accepted 09.Dec.2025; published 20.Feb.2026.
Copyright©Jonathan Nkurunziza, Sarah Nuss, Eve Hiyori Estrada, Marthe Kubwimana, Adeline Adwoa Boatin, Laban Bikorimana, Richard Ribon Fletcher, Nissi Byiringiro, Bethany Hedt-Gauthier, Vincent Kalumire Cubaka. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 20.Feb.2026.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.

