%0 Journal Article
%@ 2291-5222
%I JMIR Publications
%V 13
%P e57018
%T Understanding the Relationship Between Ecological Momentary Assessment Methods, Sensed Behavior, and Responsiveness: Cross-Study Analysis
%A Cook, Diane
%A Walker, Aiden
%A Minor, Bryan
%A Luna, Catherine
%A Tomaszewski Farias, Sarah
%A Wiese, Lisa
%A Weaver, Raven
%A Schmitter-Edgecombe, Maureen
%K ecological momentary assessment
%K smart home
%K smartwatch
%K cognitive assessment
%K well-being
%K monitoring
%K monitoring behavior
%K machine learning
%K artificial intelligence
%K app
%K wearables
%K sensor
%K effectiveness
%K accuracy
%D 2025
%7 10.4.2025
%J JMIR Mhealth Uhealth
%G English
%X Background: Ecological momentary assessment (EMA) offers an effective method to collect frequent, real-time data on an individual’s well-being. However, challenges exist in response consistency, completeness, and accuracy. Objective: This study examines EMA response patterns and their relationship with sensed behavior across data collected from diverse studies. We hypothesize that the EMA response rate (RR) will vary with prompt time of day, number of questions, and behavior context. In addition, we postulate that response quality will decrease over the study duration and that relationships will exist between EMA responses, participant demographics, behavior context, and study purpose. Methods: Data from 454 participants in 9 clinical studies were analyzed, comprising 146,753 EMA mobile prompts over study durations ranging from 2 weeks to 16 months. Concurrently, sensor data were collected using smartwatch or smart home sensors. Digital markers, such as activity level, time spent at home, and proximity to activity transitions (change points), were extracted to provide context for the EMA responses. All studies used the same data collection software and EMA interface but varied in participant groups, study length, and the number of EMA questions and tasks. We analyzed RR, completeness, quality, alignment with sensor-observed behavior, impact of study design, and ability to model the series of responses. Results: The average RR was 79.95%. Among prompts that received a response, 88.37% of response and task sessions were fully completed. Participants were most responsive in the evening (82.31%) and on weekdays (80.43%), although results varied by study demographics. While overall RRs were similar for weekday and weekend prompts, older adults were more responsive during the week (an increase of 0.27), whereas younger adults responded less during the week (a decrease of 3.25). RR was negatively correlated with the number of EMA questions (r=−0.433, P<.001). Additional correlations were observed between RR and sensor-detected activity level (r=0.045, P<.001), time spent at home (r=0.174, P<.001), and proximity to change points (r=0.124, P<.001). Response quality declined over time, with careless responses increasing by 0.022 (P<.001) and response variance decreasing by 0.363 (P<.001). The within-study dynamic time warping distance between response sequences averaged 14.141 (SD 11.957), compared with a between-study average distance of 33.246 (SD 4.971). ARIMA (autoregressive integrated moving average) models fit the aggregated time series with high log-likelihood values, indicating strong model fit with low complexity. Conclusions: EMA response patterns are significantly influenced by participant demographics and study parameters. Tailoring EMA prompt strategies to specific participant characteristics can improve RRs and response quality. Findings from this analysis suggest that timing EMA prompts close to detected activity transitions and minimizing the duration of EMA interactions may improve RR. Similarly, strategies such as gamification may help maintain participant engagement and retain response variance.
%R 10.2196/57018
%U https://mhealth.jmir.org/2025/1/e57018
%U https://doi.org/10.2196/57018