Published in Vol 6, No 12 (2018): December

Data Integrity–Based Methodology and Checklist for Identifying Implementation Risks of Physiological Sensing in Mobile Health Projects: Quantitative and Qualitative Analysis


Original Paper

1Mobile Health Systems Lab, Institute of Robotics and Intelligent Systems, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland

2Department of Epidemiology & Public Health, Swiss Tropical and Public Health Institute, Basel, Switzerland

3University of Basel, Basel, Switzerland

4Universidad Peruana Cayetano Heredia, Lima, Peru

Corresponding Author:

Walter Karlen, Prof Dr

Mobile Health Systems Lab, Institute of Robotics and Intelligent Systems

Department of Health Sciences and Technology

ETH Zurich

Building BAA

Lengghalde 5

Zurich, 8092

Switzerland

Phone: 41 44 633 77 54

Email: walter.karlen@ieee.org


Background: Mobile health (mHealth) technologies have the potential to bring health care closer to people with otherwise limited access to adequate health care. However, physiological monitoring using mobile medical sensors is not yet widely used, as adding biomedical sensors to mHealth projects inherently introduces new challenges. Thus far, no methodology exists to systematically evaluate these implementation challenges and identify the related risks.

Objective: This study aimed to facilitate the implementation of mHealth initiatives with mobile physiological sensing in constrained health systems by developing a methodology to systematically evaluate potential challenges and implementation risks.

Methods: We performed a quantitative analysis of physiological data obtained from a randomized household intervention trial that implemented sensor-based mHealth tools (pulse oximetry combined with a respiratory rate assessment app) to monitor health outcomes of 317 children (aged 6-36 months) who were visited weekly by 1 of 9 field workers in a rural Peruvian setting. The analysis focused on data integrity aspects such as data completeness and signal quality. In addition, we performed a qualitative analysis of pretrial usability tests and semistructured posttrial interviews with a subset of app users (7 field workers and 7 health care center staff members), focusing on data integrity and reasons for loss thereof. Common themes were identified using a content analysis approach. Risk factors of each theme were detailed and then generalized and expanded into a checklist by reviewing 8 mHealth projects from the literature. An expert panel evaluated the checklist during 2 iterations until agreement between the 5 experts was achieved.

Results: Pulse oximetry signals were recorded in 78.36% (12,098/15,439) of subject visits where tablets were used. Signal quality decreased for 1 and increased for 7 field workers over time (1 excluded). Usability issues were addressed and the workflow was improved. Users considered the app easy and logical to use. In the qualitative analysis, we constructed a thematic map with the causes of low data integrity. We sorted them into 5 main challenge categories: environment, technology, user skills, user motivation, and subject engagement. The obtained categories were translated into detailed risk factors and presented in the form of an actionable checklist to evaluate possible implementation risks. By visually inspecting the checklist, open issues and sources for potential risks can be easily identified.

Conclusions: We developed a data integrity–based methodology to assess the potential challenges and risks of sensor-based mHealth projects. Aiming at improving data integrity, implementers can focus on the evaluation of environment, technology, user skills, user motivation, and subject engagement challenges. We provide a checklist to assist mHealth implementers with a structured evaluation protocol when planning and preparing projects.

JMIR Mhealth Uhealth 2018;6(12):e11896

doi:10.2196/11896



Introduction

Background

Limited access to adequate health care is a major burden in low- and middle-income countries and affects the poor most [1]. Centralized and outreach health care facilities are often sparsely available, difficult to reach, and overloaded. In addition, access to the health care centers can be costly, as patients often have to pay for transportation and compensate for the loss of income because of their absence from work [2]. Mobile health (mHealth) is a promising field that seeks to bring health care closer to the patient, thereby improving access and reducing costs because of its potential for a system-wide application [3]. We interpret mHealth as the use of mobile, digital communication technologies (eg, mobile phones) in medical and public health applications for effectively delivering health care and medical information [4]. Biomedical sensing using connected mobile sensors is an important but largely unexplored application in mHealth. It provides objective measurement of physiological parameters and facilitates more reliable diagnoses and assessments of patients. Physiological parameters that can currently be measured with mobile tools include blood pressure [5], respiratory rate (RR) [6], heart rate (HR) and electrocardiogram [7], peripheral capillary oxygen saturation (SpO2) [8], and blood glucose levels [9].

The integration of additional medical sensors into mHealth projects increases the technological complexity. Furthermore, users require additional skills and medical knowledge, and systems need to be purchased and maintained. Thus, these additional challenges need to be considered during the implementation of physiological monitoring projects. The validated use of medical sensors depends on well-defined working conditions and adherence to standards to ensure correct sensor function and data quality. Sensor failures and motion artifacts are possible intermittent issues; therefore, when operating in remote settings, a basic understanding of medical sensing mechanisms is required to apply sensors safely and to identify faulty or noisy data at the point of use. Addressing these issues can be challenging when inexperienced community health care workers with little or no prior knowledge of interpreting physiological signals operate the sensors. Numerous mHealth projects have implemented physiological sensors, for example, pulse oximeters for measuring SpO2 and HR, but none of them directly focused on evaluating the challenges associated with their implementation. Challenges were observed in clinical settings; for example, Hudson et al identified the lack of training and nonfamiliarity with clinical alarms as barriers to applying pulse oximeters [10]. Furthermore, Spence et al identified different priorities across stakeholders [11], and English et al identified significant differences in observed errors between clinicians and nursing staff [12]. In summary, no research study has systematically examined the challenges of implementing physiological sensing and monitoring with mHealth tools.

As a consequence, no established methodology exists that would enable mHealth implementers to formally evaluate their projects and prevent implementation pitfalls with respect to physiological monitoring in low-resource settings. Although King et al organized focus group discussions with trained health care providers to identify challenges when managing pediatric pneumonia with pulse oximetry [13], their findings are country specific and limited to pulse oximeters. Wallis et al organized group discussions and proposed a roadmap for overcoming barriers to implementing image-based mHealth innovations [14], but their strategies are limited to image-based applications. On the other hand, Aranda-Jan et al applied the strengths, weaknesses, opportunities, and threats analysis method to review mHealth projects [15]. In addition, Eckman et al provided a conceptual strategy that involved all stakeholders in the design phase to assess the common failures of mHealth implementation [16]. However, neither approach explicitly addressed the challenges of physiological sensing and the specific risks associated with adding medical sensors to mHealth projects. The absence of a methodology or guideline during implementation could easily lead to overlooking domain-specific issues, evaluation errors, and the underestimation of risks and, therefore, prevent projects from achieving their goal of improving health outcomes.

We consider data integrity as the most important criterion for evaluating the risks of an mHealth project. Data integrity represents the faithfulness of information comprising criteria such as completeness, accuracy, relevance, consistency, usability, and reliability [17]. During unsupervised data collection, as it is frequently the case in mHealth, data completeness and consistency are critical quality metrics. Incomplete, poor, and missing data not only reduce the sample size but may also introduce bias or false conclusions. In clinical decision making, the signal quality and its reliability during physiological data collection using medical sensors are the most important factors [18]. Usability of a medical device is another component of data integrity that is associated with correct use and usage errors. International standards specify usability evaluation processes to reduce the risk of usability failures [19]. Poor usability can lead to the misuse of a medical device or a reduction of user engagement, resulting in unusable or missing data.

Due to the decentralized nature of mHealth, the assurance of data integrity is challenging [20]. High measurement uncertainty because of the lack of a controlled environment, the unknown training status of the user, and higher risks of technology misuse require special attention. Although the goal of any mHealth implementation is to provide access to health services and, consequently, improve health outcomes, obtaining good data integrity with the provided technology is essential to positively influence these outcomes. Therefore, data integrity should not only be part of the evaluation of implementation success at the end of an mHealth study but should already be considered and assessed early in the preparation phase. Consequently, data integrity could serve as the central theme when framing a methodology for evaluating implementation challenges.

Objectives

Our goal was to develop a methodology to systematically evaluate general risks and challenges of sensor-based physiological monitoring in mHealth and to avoid pitfalls before and during its implementation. Our specific aims in developing such a methodology were to (1) identify sources of low data integrity with a special focus on implementations that occur in remote or low-resource settings, (2) derive generalized risk factors that could guide a pre-implementation evaluation, and (3) provide an actionable tool to conduct such evaluation. The results could support implementers in evaluating their projects with regard to hidden risks and facilitate quality control early in the design and implementation of advanced mHealth tools.


Methods

Overview

To identify sources of data integrity loss, we retrospectively analyzed physiological data collected from a randomized controlled trial that implemented sensor-based mHealth tools to assess health outcomes in a rural setting [21]. After the analysis of the data integrity gaps in the recorded data, we identified possible causes that could have led to these gaps, both from the paper-based trial case report forms (CRFs) and from qualitative data obtained through semistructured interviews conducted on site with the app users after the trial. The method development process is shown in Figure 1.

Data Collection

We retrospectively analyzed physiological data and paper-based CRFs collected during a randomized controlled trial conducted in 82 Peruvian rural communities [21]. The trial was approved by the Universidad Peruana Cayetano Heredia ethical review board and the Cajamarca Regional Health Authority. The trial was registered on the ISRCTN registry (ISRCTN26548981).

Figure 1. Flowchart of the data integrity–based analysis for identifying the sources of data integrity loss.

A total of 317 children aged between 6 and 36 months were enrolled, and informed written consent was obtained from the children’s guardians. A total of 9 field workers (FWs) were trained to visit the children on 7 fixed geographical routes. Children were preassigned to these routes and visited in parallel by FWs once a week over the course of 60 weeks (a 6-week pilot, followed by a 54-week trial from February 2016 to May 2017, excluding 4 weeks of public holidays). To reduce the possibility of a courtesy bias, the routes of the FWs were rotated every 2 months.

During the weekly visits, FWs filled out a CRF and recorded physiological measurements with an mHealth app developed with LambdaNative (University of British Columbia, Canada) [22]. The app was installed on a tablet (Lenovo TAB 2 A7-10, Lenovo, CN) and connected to an external pulse oximetry sensor (iSpO2 Rx, Masimo International, Neuchatel, CH). FWs placed the multisite Y probe on the child’s thumb, index finger, or sole of a foot for the measurement of the photoplethysmogram (PPG), HR, and SpO2. The FW also measured RR with the same app by tapping on the touch-sensitive screen of the tablet with each inhalation phase of breathing while observing the child’s bared belly [6]. All data collection procedures and interactions with the guardians and the child were covered by the informed consent and were approved by the ethics board.
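The tap-based RR measurement [6] can be illustrated with a short sketch: each screen tap marks one inhalation, and a rate is derived from the tap intervals once they are sufficiently regular. The following Python fragment is a minimal illustration of this idea only; the trial app was built with LambdaNative, and the function name, the 30% consistency tolerance, and the minimum tap count are illustrative assumptions, not the published algorithm.

```python
from statistics import median

def respiratory_rate_from_taps(tap_times_s, min_taps=5, tolerance=0.3):
    """Estimate respiratory rate (breaths/min) from screen-tap timestamps.

    Each tap marks one inhalation; the rate is derived from the median
    inter-tap interval. Intervals deviating too far from the median are
    treated as inconsistent (eg, double taps or missed breaths).
    """
    if len(tap_times_s) < min_taps:
        return None  # too few taps for a reliable estimate
    intervals = [b - a for a, b in zip(tap_times_s, tap_times_s[1:])]
    med = median(intervals)
    if any(abs(iv - med) > tolerance * med for iv in intervals):
        return None  # inconsistent tapping; the measurement should be repeated
    return 60.0 / med  # breaths per minute

# Example: taps roughly every 1.5 s correspond to 40 breaths/min
print(respiratory_rate_from_taps([0.0, 1.5, 3.1, 4.5, 6.0, 7.4]))  # 40.0
```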

The global positioning system sensor of the tablet registered the location where the visits took place (usually at the subject’s home). The assigned identification codes for children and FWs, date, and time were recorded with the app and on the CRF. Furthermore, the health status of the child in the preceding week (maximum 2-week recall), the availability of the child (eg, absent from home), and unexpected sensor- or app-related technical problems during the visit were annotated on the CRF. Field coordinators checked daily with FWs whether any problems had occurred during the day, downloaded data, tested the sensors, and charged the tablets for the following day.

In addition to the assessments by the FWs, health personnel from 22 health care centers in the trial’s catchment area used the same tablets and software to collect physiological data during their consultations. The FWs received a 5-day initial training for tablet and CRF data collection, with monthly retraining sessions of 2 hours. The health care center personnel were initially trained in 2 group sessions. Due to frequent changes of personnel in health care centers, new staff members were trained individually on site, and physiological data were downloaded on a monthly basis.

Quantitative Data Analysis

We quantitatively evaluated data completeness and signal quality of the physiological data and CRFs completed by FWs (N=9) with Matlab (R2016b, MathWorks Inc, Natick, Massachusetts, USA).

Data Completeness

We analyzed the completeness of home visit data and explored reasons for missing data. For this assessment, we considered a child as no longer contributing to our data integrity analysis if no visits were available for more than 8 consecutive weeks during the main trial period. We analyzed the tablet and CRF data separately. We considered a visit as missed if there were no tablet or CRF entries during a given week. We compared the data completeness between the pilot trial and main trial to assess training effects and improvements resulting from feedback during the pilot period. In the case of missing visits, we consulted the field coordinator who was responsible for the FWs’ route planning for possible reasons. In addition, we reviewed the CRFs for potential explanations for the missing visits or recordings. For health care center recordings, we investigated barriers to using the tablet through interviews with the staff members.
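These completeness rules can be made concrete with a small sketch. The block below is a Python illustration (the trial analysis was performed in Matlab) under a hypothetical data layout in which each child’s tablet and CRF entries are given as collections of week numbers; a week counts as missed when neither source has an entry, and a child stops contributing after a gap of more than 8 consecutive weeks.

```python
def missed_weeks(tablet_weeks, crf_weeks, n_weeks):
    """Weeks with neither a tablet nor a CRF entry count as missed visits."""
    visited = set(tablet_weeks) | set(crf_weeks)
    return [w for w in range(1, n_weeks + 1) if w not in visited]

def longest_gap(tablet_weeks, crf_weeks, n_weeks):
    """Length of the longest run of consecutive missed weeks."""
    visited = set(tablet_weeks) | set(crf_weeks)
    longest = run = 0
    for w in range(1, n_weeks + 1):
        run = 0 if w in visited else run + 1
        longest = max(longest, run)
    return longest

def contributes(tablet_weeks, crf_weeks, n_weeks=54, max_gap=8):
    """A child stops contributing after more than max_gap consecutive missed weeks."""
    return longest_gap(tablet_weeks, crf_weeks, n_weeks) <= max_gap

# Example: a child with no entries in weeks 10-19 (a 10-week gap) is dropped
print(contributes(range(1, 10), range(20, 55)))  # False: gap of 10 > 8
```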

Signal Quality

We evaluated the signal quality of the waveform obtained from the pulse oximeter. We calculated a signal quality index (SQI) using an established cross-correlation approach based on morphological features and short-term variations [23]. We classified the PPG signals into 2 quality categories. We defined PPG signals that had sufficient quality to extract SpO2 values as “sufficient” (time series with a high SQI for 8 consecutive seconds or longer) and the remaining ones as “insufficient”. To evaluate the performance across FWs over time, we evaluated the PPG signal quality for each FW separately. We calculated the ratio of “sufficient” PPG signals to the total number of PPG signals within a sliding window of 40 recordings and a step size of 8 recordings. We chose these specific numbers because, ideally, each FW should have obtained approximately 40 recordings per week and 8 recordings per day.
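The block-wise quality summary can be sketched as follows (in Python rather than the original Matlab implementation; the SQI algorithm of [23] is not reproduced here, so each recording is assumed to be pre-labeled as “sufficient” or “insufficient”). The slope of a linear fit over the block ratios corresponds to the trend lines later shown in Figure 3.

```python
import numpy as np

def sufficient_ppg_ratios(labels, window=40, step=8):
    """Per-block ratio of "sufficient" PPG recordings for one field worker.

    labels: chronological booleans, one per PPG recording
            (True = high SQI for 8 consecutive seconds or longer).
    Returns the block ratios and the slope of a linear fit over them.
    """
    x = np.asarray(labels, dtype=float)
    starts = range(0, len(x) - window + 1, step)
    ratios = np.array([x[s:s + window].mean() for s in starts])
    if len(ratios) < 2:
        return ratios, float("nan")  # too few blocks to estimate a trend
    slope = np.polyfit(np.arange(len(ratios)), ratios, 1)[0]
    return ratios, slope
```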

Qualitative Data Analysis

We conducted semistructured posttrial interviews with the 7 FWs who were the last to complete the children’s visits to assess their routines, experiences, and problems encountered during data collection. In addition, we conducted interviews with 7 health care center staff members (nurses or technicians) who were trained to use the tablet and worked at 7 different health care centers. These 7 health care centers were selected because of their varied geographical distribution, infrastructure, and patient load. We assessed the frequency of and difficulties with using the tablet (see Multimedia Appendix 1). JZ and MM conducted the face-to-face interviews. Questions were asked in English and translated into Spanish during the interviews. All interviews were recorded with written notes and later digitized by JZ and MM. Spontaneous follow-up questions and answers were also included in the analysis. Furthermore, we investigated potential usability issues that had not been identified during the app development and trial pilot phases, as well as whether the users had any difficulties using the tablet.

We conducted a content analysis [24] on the qualitative data, resulting in predominant themes around potential reasons that could affect the 3 main components of data integrity (data completeness, signal quality, and usability). JZ collected and became familiar with the data, coded the reasons, and searched for themes. The final themes were discussed with LT and WK and reviewed by DM and WK. JZ then created a thematic map of potential reasons that could cause insufficient data integrity by identifying commonalities among all codes.

Generalizing Risk Factors and Checklist Development

We systematically evaluated the obtained challenge categories to derive a methodology that could guide the pre-implementation evaluation of risks for general physiological sensing projects. We interpreted the main themes generated from the thematic map as challenges to be assessed and detailed each of them into specific risk factors based on the observed experiences during the trial. The risk factors were aggregated by JZ into a checklist draft.

To generalize the risk factors in this pulse oximetry–based checklist draft to other physiological sensing approaches, we selected 8 studies [25-32] that we considered representative of medical sensor–based mHealth projects (details are listed in Multimedia Appendix 2). A total of 4 graduate students (JB, SH, MM, and NN) with experience in conducting projects in low-resource settings each reviewed and evaluated 2 of the selected projects and applied the checklist to them. The list of risk factors was expanded with factors that were missing, either identified by the authors of the reviewed projects or from the reviewers’ own experiences. The wording and usability issues of the checklist were improved based on the feedback from the reviewers.

A total of 5 researchers (AA, DC, KK, WK, and BP) with proven practical experience in global mHealth implementation were invited by email to join an expert panel, assess the checklist, and provide feedback in 2 evaluation rounds. The first round was conducted via email to collect individual feedback on the checklist and suggestions for change from each expert. JZ aggregated all feedback into a point-by-point list of change recommendations and distributed it to all experts for review before the second round. The second round consisted of a group discussion conducted via videoconference. JZ presented the list point-by-point to the experts and, in case of disagreements between experts, items were discussed until a final agreement was reached. JZ translated the decisions on changes into the checklist, which was then distributed to the experts for final approval.


Results

Quantitative Data Analysis

Data from 300 out of the 317 recruited children met the inclusion criteria for the quantitative data analysis. A total of 15,757 home visits were made to these children during the trial and 1589 during the pilot period (Figure 2). We observed a higher percentage of visits entered through CRFs during the trial (15,757/16,200, 97.27%) compared with the pilot (1589/1800, 88.28%; Table 1). FWs encountered the children at home in 13,802 (13,802/15,757, 87.59%) cases. In 1953 cases (1953/15,757, 12.39%), children were absent from home, and hence, no data could be collected.

Figure 2. Visits obtained from tablet and case report form (CRF) entries over study weeks. The pixels, in the order of the legend, denote Both: visits registered both with the tablet and in the CRF; Tablet: visits only entered in the tablet; CRF: visits only entered in the CRF; and Missing: no visit recorded with either tablet or CRF. The continuous black lines indicate 4 full weeks of public holidays in the trial region. Missing visits at week 60 were because of the Easter vacation.
Table 1. Overview of the quantitative data from the 300 included children with respect to data completeness and photoplethysmogram signal quality collected in the field during the pilot and trial.
Visits                      | Pilot (N=1800a), n (%) | Trial (N=16,200a), n (%)
Actual visits (tablet)      | n=1182 (65.67)         | n=15,439 (95.30)
   Total PPGb               | 977 (82.66)            | 12,098 (78.36)
   Sufficient quality PPG   | 368 (37.67)            | 7653 (63.26)
   Insufficient quality PPG | 609 (62.33)            | 4445 (36.74)
Actual visits (CRFc)        | n=1589 (88.28)         | n=15,757 (97.27)
   Successful visits        | 1212 (76.27)           | 13,802 (87.59)
   Unsuccessful visits      | 377 (23.73)            | 1953 (12.39)
   Unlabeled visits         | 0 (0.00)               | 2 (0.01)

aN values based on scheduled visits.

bPPG: photoplethysmogram.

cCRF: case report form.

Overall, 2 FWs left the study team during the trial. FW 5 left for personal reasons after 6 months and was replaced by FW 6. FW 9 left after only 218 recordings, which were insufficient for estimating a signal quality trend, and was consequently excluded from the comparison. In total, the remaining 8 FWs recorded PPG measurements in 82.66% (977/1182) of tablet visits during the pilot and in 78.36% (12,098/15,439) during the trial (Table 1). For the trial, we classified 7653 (7653/12,098, 63.26%) PPG signals as “sufficient” and 4445 (4445/12,098, 36.74%) as “insufficient”. Of the 8 FWs, 7 increased their “sufficient” PPG ratio over time, with a mean slope of 0.1226 (SD 0.0512; Figure 3).

Qualitative Analysis

After the interviews with the 7 FWs and 7 health care center staff members, we identified sources of low data integrity in 3 data integrity domains: (1) incomplete data, (2) low signal quality, and (3) usability issues.

Data Completeness

FWs encountered difficulties finding the correct routes to the family homes at the beginning of the pilot because of long distances and rough roads. To arrange efficient routes for each FW, the field coordinators evaluated the number of children per route and the actual duration to complete each route, and rotated FWs to share the extra workload of routes to remote communities or hardship during difficult weather conditions. The pilot enabled us to adjust the routes and refine the data collection tools and protocols. After these adaptations, we observed that a higher percentage of children were visited during the trial compared with the pilot. Furthermore, FWs noticed that when children were absent from their homes during the scheduled visits, it was mainly because the guardians had taken them to the fields, as most of them were farmers.

FWs reported issues with the tablets and sensors, specifically the freezing of the app during measurements (3 FWs), failure to establish a pulse oximeter connection (3 FWs), and unexpectedly insufficient tablet battery levels that prevented performing all measurements as planned (1 FW). All these issues were reported to and resolved daily by research assistants when the FWs returned to the research station.

Another factor that hindered the measurements was the guardians’ concern and preference not to let FWs interact with the children when the children resisted cooperating, were sick, or were sleeping. According to the CRFs, mothers did not allow measurements of the child in 305 cases. FWs also reported that when a child was sick, the mothers did not allow baring the child’s chest and abdomen to measure respiratory movement.

Most FWs (5 out of 7) perceived the lack of rapport with the child as a hindering factor at the beginning of the trial and after route rotations. They reported that the child was agitated and nervous and, therefore, resistant to interacting. This problem was eventually resolved as trust between the children and FWs built up over time.

In general, health care center staff were eager to use the tablet to measure the 3 parameters (HR, SpO2, and RR) with a single system. However, staff changes and extra workload were reasons for the low usage of the tablet. In 4 out of the 7 health care centers where the interviews took place, the trained staff member unexpectedly quit their job with the health service provider before a new staff member could be instructed in using the tablet. In addition, 1 staff member indicated that the staff were unable to spend extra time collecting measurements with the tablet because they had to complete their routine paper registrations and take measurements for visiting patients with their regular medical devices.

Figure 3. “Sufficient” photoplethysmogram quality ratio over recording blocks for all 9 field workers (field worker 1-9 [n=number of photoplethysmogram recordings performed]) during the trial. The blue dots depict the ratios between the number of “sufficient” photoplethysmogram signals and the total number of photoplethysmogram signals within each recording block (40 consecutive recordings with a step size of 8) for each field worker, and the red trend lines are the linear fits of the ratios estimating the trend of recording quality (a=slope of trend line). Field worker 9 did not produce sufficient recordings for a meaningful trend estimation and was not included in the signal quality analysis. FW: field worker; PPG: photoplethysmogram.
Signal Quality

The FWs reported that cold fingers and movements of the children led to poor signal quality. For most of the visits when ambient temperatures were low, the pulse oximeter was not able to acquire a signal and the app indicated insufficient perfusion. With the progression of the study, FWs addressed this problem by warming the child’s finger before the measurement. The FWs also indicated that children tended to move after 10 seconds of measurements, leading to movement artifacts. In addition, children became nervous after approximately 3 unsuccessful measurement attempts and became less compliant.

Usability

Usability was primarily assessed in the pilot phase, where the app was iteratively improved day by day in close interaction with the FWs. Workflow issues were addressed and data entry speed was optimized. Translations of instructions from English to Spanish were confusing and were consequently simplified.

A single FW reported that the font size of the selection list for demographic information (eg, the child’s communities and child’s identifier) was too small and the selection lists were too long to go through. The remaining FWs considered the app easy to use with a logical workflow.

Thematic Map

From the coded reasons for loss of data integrity in the 3 studied data integrity domains (data completeness, signal quality, and usability), we obtained 5 clusters: (1) environment, (2) technology, (3) user skills, (4) user motivation, and (5) subject engagement, which were represented in a thematic map (Figure 4). The strength of connections between codes denotes the frequency of occurrence of the codes and, therefore, illustrates the importance of a code within the cluster. We identified these 5 clusters as main challenge categories for the implementation of mHealth physiological monitoring in low-resource settings.

Figure 4. Thematic map showing 5 main challenge categories (technology, environment, user skills, user motivation, and subject engagement) generated by the related codes. The numbers along the connection lines indicate the frequency of occurrence of the codes, therefore, indicating the weight each code contributes to the main challenge. FW: field worker.

Generalized Risk Factors and Evaluation Tool

The risk factors generated from the above-mentioned 5 challenge categories were expanded through the evaluation of further mHealth projects and expert reviews and were consolidated into a checklist. The checklist is divided into 5 sections that relate to the main challenge categories obtained from the thematic map and serves as an actionable evaluation tool. “Technology” considers technical aspects of the system, mobile devices, measurement devices or medical sensors, data management, software, and technical support. “Environment” takes into consideration the risks from climate, geography, culture, and society that can influence the quality of data collection and technology performance. “User skills” considers literacy, training, feedback, and retraining of the users. “User motivation” considers user availability and monitoring strategies to encourage user performance. “Subject engagement” focuses on the availability of the subject to be measured. Each section of the checklist features questions that can be answered with either “yes,” “no,” “in progress,” or “not applicable (N/A).” By inspecting the “no” column of the checklist, open issues and sources of potential risks can be visually identified. The checklist is available under a Creative Commons NonCommercial ShareAlike license as a printable PDF and an interactive Web-based form [33].
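As a toy illustration of how a completed checklist can be screened, the Python sketch below collects every item answered “no” as an open issue; the item wording and data layout are hypothetical and not quoted from the published checklist [33].

```python
# Hypothetical answers to a small subset of checklist-style items;
# allowed values mirror the published form: "yes", "no", "in progress", "n/a".
answers = {
    "Technology": {
        "Is the sensor validated for the target population?": "yes",
        "Is a daily charging and data download routine defined?": "in progress",
    },
    "Environment": {
        "Is the sensing modality suitable for the local climate?": "no",
    },
    "User skills": {
        "Are regular refresher trainings scheduled?": "no",
    },
}

# Every "no" marks an open issue and a source of potential risk.
open_risks = [(section, item)
              for section, items in answers.items()
              for item, answer in items.items()
              if answer == "no"]
for section, item in open_risks:
    print(f"[{section}] {item}")
```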


Discussion

Principal Findings

In this study, we evaluated implementation challenges of physiological monitoring with mobile sensors in low-resource settings and developed a data integrity–based methodology to evaluate the challenges according to the factors environment, technology, user skills, user motivation, and subject engagement. This methodology, for example in the form of the developed checklist, can assist mHealth implementers in identifying risks.

Until now, no methodology existed for systematically assessing implementation challenges in physiological monitoring enabled by mHealth. Implementation challenges were reported only sporadically, covering training [10], limited resources [34], motivational barriers [35], language and cultural barriers, weak health systems, and limited external financing schemes [36]. With our approach, which focuses on the exploration of challenges based on data integrity, we provide central themes that implementers can systematically follow. By exploring the causality of data integrity loss, the methodology provides a broad coverage of risks.

Environment- and technology-related challenges are closely linked and should be evaluated with respect to the following aspects: weather, geography, population, and related difficulties that influence access to subjects as well as the mHealth tool’s functionality. Unlike text or voice message–based mHealth projects, where the mobile communication infrastructure is the major bottleneck that influences study outcomes [36], environmentally induced barriers, such as missing subject recordings because of inconvenient transportation, have a large impact on sensor-based mHealth projects. These factors should be carefully considered, and potential solutions tested and planned for.

In addition, implementers should plan for sufficient follow-up and technical support during the lifetime of a project. In our case, the cold climate made the children feel uncomfortable baring their chests and abdomens and, in addition, cold fingers negatively influenced the signal strength. This problem could be addressed by considering whether the chosen sensing modalities are suitable for the local setting. Moreover, from our experience, good preparation includes collaborating and exchanging information with all stakeholders (parents, caregivers, and health care center personnel) early in the process, which helps to evaluate the feasibility of the chosen system and methods before implementation [27].

To solve user skill–related challenges, sufficient training of the users, understanding their opinions and attitudes toward the devices and systems, and assisting them in fostering a good relationship with the subjects are essential. The “sufficient” PPG ratio for all except 1 FW increased across the study period, indicating a positive correlation between the users’ experience level and the achieved signal quality. FW 6, who had a negative trend in signal quality, was hired midtrial and was not part of the extensive training during the pilot study. Therefore, we cannot exclude that the training provided upon appointment was insufficient. The posttrial interview with FW 6 did not reveal a clear reason for the decreasing trend; therefore, further investigations will be needed. This was the first time that mHealth technology was introduced into the trial region. Although mobile phones were widely used in this area, the sensor-related mHealth tools were new to the users (FWs). We recommend training the users to apply the sensors within the target environment to ensure they are fully comfortable with the functionality and able to perform minor troubleshooting themselves, as well as performing regular refresher trainings. In addition, implementers should develop evaluation methods to track and supervise the performance of the users during the project’s lifetime and be prepared to receive feedback from users. This way, users can be trained and retrained based on the specific issues encountered, with the aim of increasing data quality and efficiency.

The subject engagement challenges relate to the level of cooperation between users and subjects. Positive engagement is one of the most important factors contributing to data completeness. Moreover, medical sensors are sensitive to motion artifacts; therefore, collecting measurements from pediatric populations highly depends on their willingness to cooperate. First, the user should establish a good relationship with the subjects. We preemptively considered this an important factor and conducted extensive pretrial training for FWs in 2 kindergartens and day cares to familiarize them with working with children. Our FWs tried to establish a friendly rapport and played games with the children to calm them before measurements. In general, users should practice measurements on the targeted subject population to optimally perform measurements while creating a conducive environment. In addition, communicating with and gaining support from the subjects’ family members are essential. In our case, the parents’ support was generally high, and no cultural groups rejected participation in this trial. For pediatric studies, parents should be encouraged to support the mHealth users in handling their children.

Although health care centers in low-resource settings are eager to use technological support to assist clinical measurements, the users faced motivational challenges. On the one hand, supervised training and observable benefits for the staff might increase their motivation to use the new technology. Haberer et al showed that sufficient training and improved skills increase the motivation of users [37]. On the other hand, strong motivation also increases lay workers’ performance. Mwendwa suggested that poor performance of community health care workers cannot be solved by skills training alone but also requires highlighting the consequences of the measurements and explaining the process of data collection [38]. Properly supervised training and explanations of the benefits of the mHealth tool have the potential to increase user motivation. However, as Graham et al identified in their recent study on the implementation of handheld pulse oximetry in Nigerian hospitals, the provision of equipment and training alone is not enough [39]. Reminders and encouragement from peers are needed, as increased workload burden and technical difficulties negatively influenced the motivation to adopt pulse oximetry. Although these findings were not obtained from an mHealth implementation study, we have good reason to believe that they apply to technology implementation in general, including mHealth.

Checklists have been proven to raise awareness and prevent incidents from certain recurring issues. Pilots and aircrew perform preflight checklists to improve flight safety [40]. The World Health Organization suggests using a surgical safety checklist in operating room environments to reduce the number of surgical incidents and deaths [40]. Other health care–related checklists have been developed, such as for assessing the scalability of pilot projects [41], reporting health interventions [42], checking mHealth solutions [43], and monitoring and evaluating outcomes of digital interventions [44]. However, the effectiveness of a checklist depends on the complete implementation of the recommended actions. Van Klei et al showed that after the introduction of the surgical safety checklist in operating rooms, the mortality rate was only reduced significantly for those surgeons who fully completed the checklist [45]. Furthermore, effectively distributing the checklist to the targeted audiences and encouraging its use are challenging. Therefore, we provide a tool online for easy and efficient assessment.

Historically, the widespread adoption of mHealth tools has been limited, with too many proof-of-concept projects not achieving sustainable implementation and often lacking the evidence to justify scaling [2]. The main challenge categories covered by our methodology coincide with the critical factors for success in scaling medical mobile technologies identified by Lundin and Dumont [46]. Besides understanding the needs of the local area, integrating the technology into the local health care systems, engaging end users, and involving all related stakeholders, other factors that are not driven by data integrity (eg, finance-related factors) can determine the scaling success of mHealth projects.

Limitations

Our methodology development is based on the physiological measurements performed in a single trial limited to pulse oximetry and RR measurements. Therefore, the 5 identified sources of data integrity loss may not be equally weighted in other projects. For example, in a user self-management project, where the mHealth user is also the studied subject, the aspects of training and education become more important and might, therefore, require a stronger emphasis. Furthermore, although the monitored trial implemented mHealth tools, it did not aim at scaling the usage of these tools. A scaling project, because of its extension to multiple geographical locations spanning different health districts, could have slightly different aims and would have more sophisticated monitoring tools in place. Our methodology might not have comprehensively captured these aims. However, as our methodology is based on data integrity, the evaluation approach can easily be expanded to accommodate these differences.

We were not able to validate the effectiveness of the provided checklist prospectively on a large number of projects. However, we tested and expanded the checklist extensively by applying early drafts to multiple published projects implementing medical sensors and complementing missing aspects. Furthermore, an invited panel of experts evaluated and complemented the checklist with missing aspects based on their own diverse expertise. To promote adoption and collect feedback from early adopters, we have published the checklist online.

Outlook

To enable a dynamic growth of the checklist, we provide a digital form of the checklist online where anonymous usage of the checklist is tracked. We plan to use these data, together with direct feedback from implementers, to improve the checklist at regular intervals and redistribute updated versions through the same platform. As target product profiles for sensor-based mHealth systems are currently lacking for many disease management apps, and our checklist was developed for implementers to reduce the risk of data integrity loss, we would like to explore the potential of the checklist to serve as a reference for building target product profiles that call for high degrees of data integrity.

Conclusions

Introducing physiological monitoring with mHealth tools into low-resource settings can deliver simple and effective sensing technologies to improve the objectivity of health assessments but faces challenges on multiple levels. The target environment, the appropriateness of the technology, the skills and motivation of the user, and subject engagement all influence the implementation of mHealth solutions. With our newly developed methodology and its derived checklist, we enable project implementers to follow a structured evaluation protocol, identify potential risks, and reevaluate challenges during implementation. Such a systematic evaluation of challenges could also be applied and adapted to other areas in the rapidly growing digital health field.

Acknowledgments

The authors would like to thank Matthias Hüser who designed the assessment app, translated usability issues into design improvements, and maintained the app throughout the study. Piotr Wilcynski prepared an earlier analysis of loss of data integrity based on real-time monitoring. The authors also thank the local health district Red Salud and health care centers, the families, and stakeholders, as well as the Swiss-Peruvian Health Research Platform and their staff in San Marcos, especially, Angelica Fernandez and Maria Luisa Huyalinos, for coordinating the project. The authors appreciate the great help from Raphaela Graf, Matias Finat, and Serena Haver for translations during interviews; Joanne Lim for proofreading the manuscript; Jenny Brown (JB) and Simon Hofstede (SH) for reviewing; and Dustin Dunsmuir for commenting on the checklist. Afua Adjekum, Dr Daniel Cobos, Dr Kristina Keitel, and Dr Beth Payne kindly accepted to join the expert panel and provided valuable time and feedback to improve the checklist. This study was supported through ETH Global seed funding, the Swiss National Science Foundation (150640), and the UBS Optimus Foundation.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Semistructured interview questions.

PDF File (Adobe PDF File), 433KB

Multimedia Appendix 2

Overview of the mobile health (mHealth) projects used for testing and reviewing the checklist.

PDF File (Adobe PDF File), 341KB

  1. Peters DH, Garg A, Bloom G, Walker DG, Brieger WR, Rahman MH. Poverty and access to health care in developing countries. Ann N Y Acad Sci 2008;1136:161-171. [CrossRef] [Medline]
  2. Tomlinson M, Rotheram-Borus MJ, Swartz L, Tsai AC. Scaling up mHealth: where is the evidence? PLoS Med 2013;10(2):e1001382 [FREE Full text] [CrossRef] [Medline]
  3. Istepanian R, Jovanov E, Zhang YT. Introduction to the special section on m-Health: beyond seamless mobility and global wireless health-care connectivity. IEEE Trans Inf Technol Biomed 2004 Dec;8(4):405-414. [CrossRef] [Medline]
  4. World Health Organization. mHealth: New horizons for health through mobile technologies. 2011.   URL: http://www.who.int/goe/publications/goe_mhealth_web.pdf [accessed 2018-12-04] [WebCite Cache]
  5. Poon CC, Zhang YT, Bao SD. A novel biometrics method to secure wireless body area sensor networks for telemedicine and m-Health. IEEE Commun Mag 2006 Apr;44(4):73-81. [CrossRef]
  6. Karlen W, Gan H, Chiu M, Dunsmuir D, Zhou G, Dumont GA, et al. Improving the accuracy and efficiency of respiratory rate measurements in children using mobile devices. PLoS One 2014;9(6):e99266 [FREE Full text] [CrossRef] [Medline]
  7. Rawstorn JC, Gant N, Warren I, Doughty RN, Lever N, Poppe KK, et al. Measurement and data transmission validity of a multi-biosensor system for real-time remote exercise monitoring among cardiac patients. JMIR Rehabil Assist Technol 2015 Mar 20;2(1):e2 [FREE Full text] [CrossRef] [Medline]
  8. Hardinge M, Rutter H, Velardo C, Shah SA, Williams V, Tarassenko L, et al. Using a mobile health application to support self-management in chronic obstructive pulmonary disease: a six-month cohort study. BMC Med Inform Decis Mak 2015 Jun 18;15:46 [FREE Full text] [CrossRef] [Medline]
  9. Coughlin SS. Mobile technology for self-monitoring of blood glucose among patients with type 2 diabetes mellitus. Mhealth 2017;3:47 [FREE Full text] [CrossRef] [Medline]
  10. Hudson J, Nguku SM, Sleiman J, Karlen W, Dumont GA, Petersen CL, et al. Usability testing of a prototype Phone Oximeter with healthcare providers in high- and low-medical resource environments. Anaesthesia 2012 Sep;67(9):957-967 [FREE Full text] [CrossRef] [Medline]
  11. Spence H, Baker K, Wharton-Smith A, Mucunguzi A, Matata L, Habte T, et al. Childhood pneumonia diagnostics: community health workers' and national stakeholders' differing perspectives of new and existing aids. Glob Health Action 2017;10(1):1290340 [FREE Full text] [CrossRef] [Medline]
  12. English LL, Dunsmuir D, Kumbakumba E, Ansermino JM, Larson CP, Lester R, et al. The PAediatric Risk Assessment (PARA) mobile app to reduce postdischarge child mortality: design, usability, and feasibility for health care workers in Uganda. JMIR Mhealth Uhealth 2016 Feb 15;4(1):e16 [FREE Full text] [CrossRef] [Medline]
  13. King C, Boyd N, Walker I, Zadutsa B, Baqui AH, Ahmed S, et al. Opportunities and barriers in paediatric pulse oximetry for pneumonia in low-resource clinical settings: a qualitative evaluation from Malawi and Bangladesh. BMJ Open 2018 Dec 30;8(1):e019177 [FREE Full text] [CrossRef] [Medline]
  14. Wallis L, Hasselberg M, Barkman C, Bogoch I, Broomhead S, Dumont G, et al. A roadmap for the implementation of mHealth innovations for image-based diagnostic support in clinical and public-health settings: a focus on front-line health workers and health-system organizations. Glob Health Action 2017 Jun;10(sup3):1340254 [FREE Full text] [CrossRef] [Medline]
  15. Aranda-Jan CB, Mohutsiwa-Dibe N, Loukanova S. Systematic review on what works, what does not work and why of implementation of mobile health (mHealth) projects in Africa. BMC Public Health 2014;14:188 [FREE Full text] [CrossRef] [Medline]
  16. Eckman M, Gorski I, Mehta K. Leveraging design thinking to build sustainable mobile health systems. J Med Eng Technol 2016 Aug;40(7-8):422-430. [CrossRef] [Medline]
  17. Boritz JE. IS practitioners' views on core concepts of information integrity. Int J Account Inform Syst 2005 Dec;6(4):260-279. [CrossRef]
  18. Orphanidou C. Signal Quality Assessment in Physiological Monitoring: State of the Art and Practical Considerations. Switzerland: Springer International Publishing; 2018.
  19. International Standard Organisation. IEC 62366-1:2015 Medical devices -- Part 1: Application of usability engineering to medical devices. Geneva, Switzerland: IEC; 2015.
  20. Karlen W. Data Quality in mHealth. 2015 Presented at: 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; August 25-29, 2015; Milan. [CrossRef]
  21. Hartinger SM, Nuño N, Hattendorf J, Verastegui H, Ortiz M, Mäusezahl D. A factorial cluster-randomised controlled trial combining home-environmental and early child development interventions to improve child health and development: rationale, trial design and baseline findings. bioRxiv. Preprint posted online 2018. [CrossRef]
  22. Petersen CL, Gorges M, Dunsmuir D, Ansermino M, Dumont GA. Experience report: functional programming of mHealth applications. In: Proceedings of the 18th ACM SIGPLAN international conference on Functional programming. 2013 Presented at: ICFP'13; September 25-27, 2013; Boston, Massachusetts, USA p. 357-362. [CrossRef]
  23. Karlen W, Kobayashi K, Ansermino JM, Dumont GA. Photoplethysmogram signal quality estimation using repeated Gaussian filters and cross-correlation. Physiol Meas 2012 Oct;33(10):1617-1629. [CrossRef] [Medline]
  24. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006 Jan;3(2):77-101. [CrossRef]
  25. Karantonis DM, Narayanan MR, Mathie M, Lovell NH, Celler BG. Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Trans Inf Technol Biomed 2006 Jan;10(1):156-167. [Medline]
  26. Waki K, Fujita H, Uchimura Y, Omae K, Aramaki E, Kato S, et al. DialBetics: a novel smartphone-based self-management support system for type 2 diabetes patients. J Diabetes Sci Technol 2014 Mar 13;8(2):209-215 [FREE Full text] [CrossRef] [Medline]
  27. Ettinger KM, Pharaoh H, Buckman RY, Conradie H, Karlen W. Building quality mHealth for low resource settings. J Med Eng Technol 2016;40(7-8):431-443. [CrossRef] [Medline]
  28. Dunsmuir DT, Payne BA, Cloete G, Petersen CL, Görges M, Lim J, et al. Development of mHealth applications for pre-eclampsia triage. IEEE J Biomed Health Inform 2014 Nov;18(6):1857-1864. [CrossRef] [Medline]
  29. Lim J, Cloete G, Dunsmuir DT, Payne BA, Scheffer C, von Dadelszen P, et al. Usability and feasibility of PIERS on the Move: an mHealth app for pre-eclampsia triage. JMIR Mhealth Uhealth 2015;3(2):e37 [FREE Full text] [CrossRef] [Medline]
  30. Dada OA, Odubena O, Adepoju AA, Vidler M, Orenuga EA, Osiberu B, et al. 3 Development and testing of pictograms for the symptoms of pre-eclampsia in Ogun State, Nigeria: preeclampsia in low and middle income countries. Pregnancy Hypertens 2016 Jul;6(3):179. [CrossRef]
  31. Bobrow K, Farmer AJ, Springer D, Shanyinde M, Yu LM, Brennan T, et al. Mobile phone text messages to support treatment adherence in adults with high blood pressure (SMS-Text Adherence Support [StAR]): a single-blind, randomized trial. Circulation 2016 Feb 09;133(6):592-600 [FREE Full text] [CrossRef] [Medline]
  32. Raihana S, Dunsmuir D, Huda T, Zhou G, Rahman QS, Garde A, et al. Development and internal validation of a predictive model including pulse oximetry for hospitalization of under-five children in Bangladesh. PLoS One 2015;10(11):e0143213 [FREE Full text] [CrossRef] [Medline]
  33. Zhang J, Karlen W. ETH Zurich Sensor Based Digital Health Checklist. 2018.   URL: https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/302878/checklistV1.0.pdf?sequence=3&isAllowed=y
  34. Prue CS, Shannon KL, Khyang J, Edwards LJ, Ahmed S, Ram M, et al. Mobile phones improve case detection and management of malaria in rural Bangladesh. Malar J 2013;12:48 [FREE Full text] [CrossRef] [Medline]
  35. Medhanyie AA, Little A, Yebyo H, Spigt M, Tadesse K, Blanco R, et al. Health workers' experiences, barriers, preferences and motivating factors in using mHealth forms in Ethiopia. Hum Resour Health 2015 Jan 15;13:2 [FREE Full text] [CrossRef] [Medline]
  36. Chib A, van Velthoven MH, Car J. mHealth adoption in low-resource environments: a review of the use of mobile healthcare in developing countries. J Health Commun 2015;20(1):4-34. [CrossRef] [Medline]
  37. Haberer JE, Kiwanuka J, Nansera D, Wilson IB, Bangsberg DR. Challenges in using mobile phones for collection of antiretroviral therapy adherence data in a resource-limited setting. AIDS Behav 2010 Dec;14(6):1294-1301 [FREE Full text] [CrossRef] [Medline]
  38. Mwendwa P. What encourages community health workers to use mobile technologies for health interventions? Emerging lessons from rural Rwanda. Dev Policy Rev 2017;36(1):111-129. [CrossRef]
  39. Graham HR, Bakare AA, Gray A, Ayede AI, Qazi S, McPake B, et al. Adoption of paediatric and neonatal pulse oximetry by 12 hospitals in Nigeria: a mixed-methods realist evaluation. BMJ Glob Health 2018 Jun;3(3):e000812 [FREE Full text] [CrossRef] [Medline]
  40. Walker IA, Reshamwalla S, Wilson IH. Surgical safety checklists: do they improve outcomes? Br J Anaesth 2012 Jul;109(1):47-54 [FREE Full text] [CrossRef] [Medline]
  41. World Health Organization, Department of Reproductive Health and Research, ExpandNet. Beginning with the end in mind: planning pilot projects and other programmatic research for successful scaling up. Geneva, Switzerland: World Health Organization; 2011.
  42. Agarwal S, LeFevre AE, Lee J, L'Engle K, Mehl G, Sinha C, WHO mHealth Technical Evidence Review Group. Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist. Br Med J 2016;352:i1174. [CrossRef] [Medline]
  43. Bradway M, Carrion C, Vallespin B, Saadatfard O, Puigdomènech E, Espallargues M, et al. mHealth assessment: conceptualization of a global framework. JMIR Mhealth Uhealth 2017 May 02;5(5):e60 [FREE Full text] [CrossRef] [Medline]
  44. World Health Organization. Monitoring and Evaluating Digital Health Interventions: A practical guide to conducting research and assessment. 2016.   URL: http://www.who.int/reproductivehealth/publications/mhealth/digital-health-interventions/en/ [accessed 2018-09-27] [WebCite Cache]
  45. van Klei WA, Hoff RG, van Aarnhem EE, Simmermacher RK, Regli LP, Kappen TH, et al. Effects of the introduction of the WHO “surgical safety checklist” on in-hospital mortality: A cohort study. Ann Surg 2012 Jan;255(1):44-49. [CrossRef] [Medline]
  46. Lundin J, Dumont G. Medical mobile technologies - what is needed for a sustainable and scalable implementation on a global scale? Glob Health Action 2017 Jun;10(sup3):1344046 [FREE Full text] [CrossRef] [Medline]


CRF: case report form
FW: field worker
HR: heart rate
mHealth: mobile health
PPG: photoplethysmogram
RR: respiratory rate
SpO2: peripheral capillary oxygen saturation
SQI: signal quality index


Edited by G Eysenbach; submitted 27.09.18; peer-reviewed by C King, S Cartledge, S Shah; comments to author 20.10.18; revised version received 14.11.18; accepted 22.11.18; published 14.12.18

Copyright

©Jia Zhang, Laura Tüshaus, Néstor Nuño Martínez, Monica Moreo, Hector Verastegui, Stella M Hartinger, Daniel Mäusezahl, Walter Karlen. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 14.12.2018.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mhealth and uhealth, is properly cited. The complete bibliographic information, a link to the original publication on http://mhealth.jmir.org/, as well as this copyright and license information must be included.