JMIR Publications

JMIR mHealth and uHealth



Published on 12.07.17 in Vol 5, No 7 (2017): July


    Original Paper

    Patterns of User Engagement With the Mobile App, Manage My Pain: Results of a Data Mining Investigation

    1Centre for Disease Modelling, Department of Mathematics and Statistics, York University, Toronto, ON, Canada

    2ManagingLife, Inc., Toronto, ON, Canada

    3School of Kinesiology & Health Science, York University, Toronto, ON, Canada

    4Department of Psychology, York University, Toronto, ON, Canada

    5Department of Anesthesia and Pain Management, Toronto General Hospital, Toronto, ON, Canada

    Corresponding Author:

    Tahir Janmohamed, PEng, MBA

    ManagingLife, Inc.

    Unit 4, 850 Richmond St W

    Toronto, ON, M6J 1C9

    Canada

    Phone: 1 416 910 3760

    Email:


    ABSTRACT

    Background: Pain is one of the most prevalent health-related concerns and is among the top 3 most common reasons for seeking medical help. Scientific publications of data collected from pain tracking and monitoring apps are important to help consumers and healthcare professionals select the right app for their use.

    Objective: The main objectives of this paper were to (1) discover user engagement patterns of the pain management app, Manage My Pain, using data mining methods; and (2) identify the association between several attributes characterizing individual users and their levels of engagement.

    Methods: User engagement was defined by 2 key features of the app: longevity (number of days between the first and last pain record) and number of records. Users were divided into 5 user engagement clusters employing the k-means clustering algorithm. Each cluster was characterized by 6 attributes: gender, age, number of pain conditions, number of medications, pain severity, and opioid use. Z tests and chi-square tests were used for analyzing categorical attributes. Effects of gender and cluster on numerical attributes were analyzed using 2-way analyses of variance (ANOVAs) followed up by pairwise comparisons using Tukey honest significant difference (HSD).

    Results: The clustering process produced 5 clusters representing different levels of user engagement. The proportion of males and females was significantly different in 4 of the 5 clusters (all P ≤.03). The proportion of males was higher than that of females among users with relatively high longevity. Mean ages of users in the 2 clusters with high longevity were higher than those of users from the other 3 clusters (all P <.001). Overall, males were significantly older than females (P <.001). Across clusters, females reported more pain conditions than males (all P <.001). Users from highly engaged clusters reported taking more medication than less engaged users (all P <.001). Females reported taking a greater number of medications than males (P =.04). In 4 of 5 clusters, the percentage of males taking an opioid was significantly greater (all P ≤.05) than that of females. The proportion of males with mild pain was significantly higher than that of females in 3 clusters (all P ≤.008).

    Conclusions: Although most users of the app reported being female, male users were more likely to be highly engaged in the app. Users in the most engaged clusters self-reported a higher number of pain conditions, a higher number of current medications, and a higher incidence of opioid usage. The high engagement by males in these clusters does not appear to be driven by pain severity which may, in part, be the case for females. Use of a mobile pain app may be relatively more attractive to highly-engaged males than highly-engaged females, and to those with relatively more complex chronic pain problems.

    JMIR Mhealth Uhealth 2017;5(7):e96

    doi:10.2196/mhealth.7871


    Introduction

    Internet-based and mobile health (mHealth) apps are transforming how people monitor, manage, and communicate health-related information [1]. This trend has been documented in the fields of medicine [2], nursing [3], psychology [4], kinesiology [5], nutrition [6], and for multiple health concerns and diseases [1].

    Pain is one of the most prevalent health-related concerns and is among the top 3 most common reasons for seeking medical help [7]. Several recent reviews have highlighted the many commercially available pain-related apps that can be downloaded from online app stores by people with chronic pain [8-11]. As of 2015, between 279 [8] and 283 [9] pain-related apps were commercially available to monitor and track pain. The rapid proliferation of mobile apps, in general, and for pain in particular, has not been accompanied by equal attention to determining the factors consumers and healthcare professionals prefer or require when selecting from among the many available apps. App quality, usability, effectiveness, and other relevant data for most mHealth apps are either unavailable, incomplete, or potentially inaccurate [1,8,9]. Consumers and healthcare providers have little reliable information to consult when seeking the best app for their needs. To illustrate the mismatch between pain-related app availability and reliable scientific data, de la Vega and Miró [9] noted that of the 34 pain-related apps evaluated in the published scientific literature, not one was available on any major online app store. Conversely, of the 283 pain-related apps commercially available at the major app stores, not one has been evaluated in a scientific publication.

    Accordingly, the present study had 2 objectives. The first was to describe a first-of-a-kind collaboration between the award-winning mobile app Manage My Pain (developed to monitor and track pain) and pain, mental health, and data mining experts. The second objective was to present data from more than 24,000 users (comprising more than 544,000 data points) by clustering data using key variables that defined the user base. Specifically, using a measure of user engagement with the app (eg, what distinguishes the user who has used the app frequently and over the longer term from others?), defined by the longevity and number of records for each user, we grouped the users, using clustering methods, into 5 groups differentiated by high or low number of entries and high or low longevity. We then characterized the 5 groups of users by gender, as well as other attributes collected by the app: age, number of pain conditions, number of current medications, opioid use, and pain severity rating.


    Methods

    Manage My Pain

    Manage My Pain [12], developed by ManagingLife, helps people living with pain to track their pain and functioning on a daily basis using an Android mobile phone app. Since Manage My Pain was launched in 2011, more than 24,000 people have created an account and recorded their pain. In total, more than 544,000 pain episodes have been documented by users.

    The central feature of Manage My Pain is the “pain record” that enables users to enter details about their pain episodes. Users are asked to complete only 1 item, a rating of pain severity using a slider on a visual analogue scale. They then have the option of completing 7 more items regarding their present pain that typically take less than 1 minute to complete (Figure 1). The app issues daily reminders and prompts users to reflect on their daily accomplishments. With regular use, users are empowered and gain self-awareness through charts and graphs that provide insight into their pain and functioning and how they change over time.

    The information collected by the app can be summarized into a report intended for clinical use, where the information collected is presented in a concise fashion and primarily focuses on changes in the self-reported outcome data between clinical visits. Output is structured on a single page and tends to be more accurate than a patient’s recollection of pain since the last clinical visit, as it captures pain closer to the time of experience and is less influenced by recency and recall biases that plague existing methods for capturing pain information [13]. To supplement the information presented in the reports, users can add pain conditions, gender, age, and medications to their profile in the app.

    The app supports 7 languages (English, Spanish, French, German, Russian, Simplified Chinese, and Korean) and has users from over 130 countries. It is available free in a Lite version, or users can opt to pay a one-time fee for a Pro version. The only difference between the versions is that the former limits the number of records that can be viewed at one time to 10. If users choose to take advantage of ManagingLife’s secure, cloud-based storage, they can create an online account and agree to ManagingLife’s Privacy Policy [14], which includes consent to use their aggregated and de-identified data for research purposes. Creating an account not only enables cross-device synchronization through encrypted data transmission and secure storage in case a device is lost, it also enables features such as advanced report generation and access to the Profile section of the app (Figure 1). The majority of the analysis in this paper is derived from the self-reported information contained in the Profile section of the app. Users also have the ability to use Manage My Pain without creating an account, in which case the data do not leave the device and are therefore not accessible for research such as the present report.

    Figure 1. Screenshots of Manage My Pain showing how pain episodes are recorded (left) and where users can capture information about themselves (right).

    Procedure

    The present study was reviewed and approved by the Research Ethics Board at York University (Human Participants Review Committee, Certificate e2015-160). The users’ database was accessed and downloaded in 2 separate files (plain text format): user information and pain records. The user-information file contained the following fields: user identification (ID), date of birth, gender, pain conditions, and medications. The pain-records file contained fields specific to individual pain records: date, location, other associated symptoms, characteristics, alleviating factors, ineffective factors, aggravating factors, severity, environment, pain type, and pain duration. All fields in the text files were delimited using special characters. The files used in this study were downloaded on January 02, 2017. This study covered pain episodes recorded by users between September 13, 2011 and January 02, 2017.

    Participants and Measures

    The primary dataset included 544,425 records from 24,816 users. From these users, we selected 18,324 users who had recorded at least 2 pain episodes. The total number of data points from these 18,324 selected users was 537,853. We excluded users with only 1 pain record as we considered them as having engaged with the app through only a single use. In addition, the objective of our research was to highlight differences between users with varying degrees of engagement; including single-use accounts would have swayed the analysis towards a comparison of engaged versus single-use users instead.

    We defined user engagement with the app using 2 aspects of usage: longevity and number of records. Longevity was calculated as the number of days between the first and the last pain record. The number of records was the total number of entries by a user in the database. For each user, we extracted 6 features from the database for the cluster-based analysis [15] (Textbox 1).
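    As a concrete illustration of these 2 engagement measures, the following sketch derives longevity and number of records per user while dropping single-record users. This is not the authors' code (the study used R), and the record layout shown here is invented.

```python
# Sketch: deriving longevity (days between first and last record) and number
# of records per user, dropping single-record users as described above.
from datetime import date

# Toy pain records as (user_id, record_date) pairs; the real data hold many
# more fields (severity, location, and so on).
records = [
    (1, date(2016, 1, 1)), (1, date(2016, 1, 31)), (1, date(2016, 2, 10)),
    (2, date(2016, 5, 5)),                      # single record: excluded
    (3, date(2015, 3, 1)), (3, date(2016, 3, 1)),
]

def engagement_features(records):
    """Return {user_id: (longevity_days, n_records)} for users with >= 2 records."""
    by_user = {}
    for uid, d in records:
        by_user.setdefault(uid, []).append(d)
    features = {}
    for uid, dates in by_user.items():
        if len(dates) < 2:                      # single-use accounts are dropped
            continue
        longevity = (max(dates) - min(dates)).days
        features[uid] = (longevity, len(dates))
    return features
```

    For the toy data above, user 1 has a longevity of 40 days with 3 records, and user 2 is excluded entirely.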


    Textbox 1. Features extracted from the database for the cluster-based analysis.

    Data Analysis

    Clustering

    We applied data clustering methods to distinguish between highly engaged users and users who do not regularly use the app. Clustering involved partitioning a set of objects, or members of a defined population, into 2 or more subgroups such that the members of 1 subgroup were similar to each other and dissimilar to members of the other subgroups. Each object or subgroup member was represented using one or more variables for the purpose of clustering. These variables were typically referred to as features or attributes. The similarity or dissimilarity between pairs of objects (or subgroup members) was measured as the distance between the feature vectors representing them.

    The output of the clustering process was usually a set of clusters where each object was assigned membership in 1 of the clusters. We used the method known as k-means [16] as our primary data analytic approach to clustering users. Under the k-means clustering method, the number of clusters is set a priori to some constant k, and the dataset is partitioned into k clusters. In the initialization stage, the k means were selected at random. Each item in the dataset was assigned to the mean closest to it. In each subsequent iteration, for each cluster, the mean was calculated based on the current members of that cluster. Each data point was then re-assigned to the cluster whose mean was the closest. The iterative process stopped when the clusters did not change between iterations.
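    The iterative procedure described above (random initialization, assignment of each point to its nearest mean, and recomputation of means until assignments stop changing) can be sketched in a few lines. This is a simplified Python illustration, not the implementation used in the study, which was run in R.

```python
# Minimal k-means sketch: initialize means at random, then alternate
# assignment and update steps until cluster membership stops changing.
import math
import random

def kmeans(points, k, seed=0):
    rng = random.Random(seed)
    means = rng.sample(points, k)               # initial means chosen at random
    assignment = None
    while True:
        # Assignment step: each point joins the cluster with the nearest mean.
        new_assignment = [
            min(range(k), key=lambda j: math.dist(p, means[j])) for p in points
        ]
        if new_assignment == assignment:        # no change between iterations: stop
            return means, assignment
        assignment = new_assignment
        # Update step: recompute each mean from its current members.
        for j in range(k):
            members = [p for p, a in zip(points, assignment) if a == j]
            if members:
                means[j] = tuple(sum(c) / len(members) for c in zip(*members))

# Two well-separated toy groups; k-means should recover them.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
means, labels = kmeans(pts, k=2)
```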

    In our clustering experiment, since we were interested in user engagement, we used the 2 defining variables—longevity and number of records—as features of user engagement. We also added frequency (average number of records per day) as an extra feature to distinguish between users who had the same number of records over different periods of longevity. We transformed these 3 feature values using a logarithmic scale because the difference between small feature values of 2 users was more indicative of their different levels of engagement than a similar difference in large feature values.
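    A sketch of this feature construction follows. The paper specifies only that a logarithmic scale was used; the choice of log1p (to avoid log of 0) and the handling of zero-day longevity when computing frequency are our assumptions.

```python
# Sketch: build the 3-element engagement vector (longevity, number of
# records, frequency), all on a log scale so that differences among
# low-engagement users carry as much weight as differences among heavy users.
import math

def engagement_vector(longevity_days, n_records):
    # Frequency = average number of records per day of longevity; users whose
    # records all fall on one day are treated as having 1 day of longevity.
    frequency = n_records / max(longevity_days, 1)
    return [math.log1p(longevity_days),
            math.log1p(n_records),
            math.log1p(frequency)]
```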

    We compared the k-means clustering solutions that produced different numbers of clusters to find the solution with the best fit to the data. We also compared the k-means clustering results with results obtained using Mclust [17], another clustering method. In Mclust, a maximum number of clusters, M, and a set of mixture models were initially chosen. For each of these models, hierarchical agglomerative clustering was applied to obtain an initial clustering for each possible number of clusters from 2 to M. Using these clusters for each model as the base clusters, expectation-maximization algorithm was applied to update cluster assignments of objects for the number of clusters from 2 to M. Finally, Bayesian information criterion was used to choose a clustering solution from different models and different numbers of clusters.
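    The final step above selects among candidate models using the Bayesian information criterion, BIC = p ln(n) − 2 ln(L), where p is the number of free parameters, n the number of data points, and L the maximized likelihood (lower is better in this sign convention). The sketch below illustrates only the criterion itself; the log-likelihoods and parameter counts are invented, and the study used the R package Mclust rather than this code.

```python
# Sketch of BIC-based model selection among candidate cluster counts.
import math

def bic(log_likelihood, n_params, n_points):
    """Bayesian information criterion (lower is better)."""
    return n_params * math.log(n_points) - 2 * log_likelihood

# Hypothetical fits for 2..4 clusters on 18,324 users:
# {n_clusters: (maximized log-likelihood, free parameters)} -- invented values.
candidates = {2: (-5200.0, 11), 3: (-5100.0, 17), 4: (-5095.0, 23)}
scores = {k: bic(ll, p, 18324) for k, (ll, p) in candidates.items()}
best_k = min(scores, key=scores.get)   # the 3-cluster model wins here
```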

    To compare the quality of the clustering solutions between different methods (k-means versus Mclust), we calculated the average silhouette width as a measurement of the fitness of the clustering process. For each object, the silhouette width measured how much more similar (based on a distance measure such as Euclidean distance) a data point is to the points in its own cluster than to points in a neighboring cluster. Higher average silhouette widths indicated tighter clusters where each cluster was well-separated from other clusters.
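    The silhouette computation described above can be sketched directly: for each point, a is the mean distance to the other points in its own cluster, b is the smallest mean distance to any other cluster, and the silhouette is (b − a) / max(a, b). This is an illustrative Python version, not the R code used in the study.

```python
# Sketch: average silhouette width for a clustering, using Euclidean distance.
import math

def avg_silhouette(points, labels):
    clusters = set(labels)
    scores = []
    for i, p in enumerate(points):
        own = [math.dist(p, q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        if not own:                      # singleton cluster: silhouette set to 0
            scores.append(0.0)
            continue
        a = sum(own) / len(own)          # mean distance within own cluster
        b = min(                         # smallest mean distance to another cluster
            sum(math.dist(p, q) for j, q in enumerate(points) if labels[j] == c)
            / labels.count(c)
            for c in clusters if c != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

    Tight, well-separated clusters score close to 1, while a labeling that mixes the groups scores near or below 0.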

    After choosing a clustering solution based on average silhouette width, we generated a profile for each cluster of users. A cluster’s profile contained the means of 3 variables (user's age, number of pain conditions, and the number of current medications) calculated from the members that belonged to that cluster. For each cluster, we also calculated distributions of genders and pain severity levels and percentage of opioid users.

    We used R (version 3.3.1) [18] for data loading, pre-processing, clustering, and conducting statistical tests. Notably, the traditional way of handling a dataset as a data frame in R was slow for analysis of larger datasets. As our dataset contained more than half a million pain records, we used the data.table package [19], which made loading, querying, and sorting quicker than the default data-frame approach.

    Characterizing the Clusters

    Once we had determined that we had generated the clusters that best represented the dataset, as evaluated by pain experts, we conducted a chi-square test to evaluate the statistical significance of the association between gender and cluster. We then conducted Z tests to determine whether the proportion of males and females in each cluster differed significantly from what one would expect by chance. We then conducted 3 two-way independent-samples analyses of variance (ANOVAs) on the 3 database features (age, number of current medications, and number of pain conditions) using cluster and gender as the between-subject factors. We conducted pairwise comparisons using the Tukey honest significant difference (HSD) method for each significant main effect of cluster or gender. We then conducted a Z test to evaluate whether the proportion of males and females using opioids in each cluster differed significantly from what one would expect by chance. Finally, we conducted chi-square tests to investigate the association between gender and pain severity groups (mild, moderate, and severe) in each engagement cluster. All statistical tests were conducted in R.
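    Two building blocks of this characterization, the two-proportion Z test and the Pearson chi-square statistic, can be sketched as follows. These are illustrative Python versions only; the analyses in the paper were run in R, and the counts in the test values are invented.

```python
# Sketches of the two categorical tests used to characterize the clusters.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z statistic for H0: the two population proportions are equal,
    using the pooled-proportion standard error."""
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (success_a / n_a - success_b / n_b) / se

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table
    (a list of rows), comparing observed counts with expected counts."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    return sum((table[i][j] - row[i] * col[j] / total) ** 2
               / (row[i] * col[j] / total)
               for i in range(len(table)) for j in range(len(col)))
```

    The resulting statistics would then be compared against the normal and chi-square distributions, respectively, to obtain P values.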


    Results

    Clustering the Users Based on Their Engagement With the App

    The set of 18,324 users who had 2 or more pain records was clustered based on their level of engagement as measured by longevity, number of records, and frequency. We initially intended to divide users into the following natural groups based on their level of engagement: (1) low longevity, low number of records; (2) low longevity, high number of records; (3) high longevity, low number of records; and (4) high longevity, high number of records. The results from clustering the users into 4 groups are shown in Figure 2. The figure is plotted on a logarithmic scale along the 2 dimensions, longevity and number of records, with the 4 colors representing the 4 clusters. As described earlier, frequency (average number of records per day) was added as an additional variable during the clustering analysis to help emphasize the differences between engagement clusters that were most relevant to the user base. The color of a cluster was assigned to all its member objects (ie, the users who belonged to that cluster).

    The Blue cluster represented users with high longevity and a high number of records. Similarly, the users in the Black cluster generally had high longevity, but a low number of records. However, the other 2 clusters (Red and Green) did not seem to align with the 2 other intended clusters characterized by (1) low longevity, high number of records; and (2) low longevity and low number of records. Instead, the Red and Green clusters appeared to differ at the low end of longevity, but were similar in terms of representing low engagement. Hence, we conducted the clustering experiment again using 5 clusters (Figure 3).

    The statistics derived from users of all 5 clusters are shown in Table 1. We discovered the following association between the clusters and the intended 4 groups of users based on the means of longevity and number of records as calculated from the users in a cluster: (1) Blue: high longevity, high number of records; (2) Black: high longevity, low number of records; (3) Cyan: low longevity, high number of records; and (4) Red and Green: low longevity, low number of records (Figure 3).

    We also found that the average silhouette width was higher for the clustering results produced by k-means (0.20) than that produced by Mclust (0.02). Hence, we accepted the 5-cluster output of k-means, as shown in Figure 3, for further experiments in this study.

    Table 1. Cluster characteristics according to the 5-cluster solution.
    Figure 2. Clustering solution using 4 clusters.
    Figure 3. Clustering solution using 5 clusters where Blue is high longevity, high number of records; Black is high longevity, low number of records; Cyan is low longevity, high number of records; and Red and Green are low longevity, low number of records.

    Cluster by Gender Profiles

    The distribution of users from the 3 categories of gender (male, female, not provided) across each of the 5 engagement clusters is shown in Figure 4.

    A chi-square test was conducted to evaluate the association between the 3 genders (male, female, and not provided) and the 5 clusters. The association was statistically significant (χ²₈ = 761.24, P<.001). We then conducted Z tests to evaluate whether the proportions of each pair of genders (male-female, male-not provided, female-not provided) differed significantly in each cluster. Pairwise between-gender differences were significant (all P ≤.05) for all clusters except the male-female difference in the Cyan cluster. Thus, the proportion of male users with apparent high longevity (Blue and Black clusters) was higher than that of female users. On the other hand, for the low engagement clusters (Red and Green), the proportion of females was significantly higher than that of males. Overall, only 11.33% (2076/18,324) of users in this study were male whereas 49.90% (9144/18,324) were female, with the remaining users not providing gender information. These data showed that once males registered, they were more likely to use the app for a longer period and with more consistency than females. It is notable that the proportion of the sample that did not provide a gender decreased significantly as user engagement increased, from a high of 44% in the Green cluster to a low of 8% to 10% in the Blue and Black clusters, respectively. While knowledge of the genders of the individuals who chose not to provide gender data would be helpful in interpreting the present results, the finding that only 8% to 10% of the Blue and Black clusters did not provide their gender gave us more confidence that the greater proportion of males than females in the highly-engaged clusters was truly representative of the gender distribution and was not an artifact of the undeclared proportion.

    Moreover, 68.37% (4857/7104) of these users in the “not provided” gender category did not enter their age and did not list any pain condition or current medications. On the other hand, this percentage was only 2.70% (56/2076) and 1.61% (147/9144) for males and females, respectively. Thus, comparing age, pain conditions, and medications between users who provided their gender information versus those who did not was not feasible. Hence, we excluded the users in the not provided gender category from the rest of the analysis.

    To investigate the possible reasons behind the higher level of engagement of male users than female users, we calculated the mean age, mean number of pain conditions, and the mean number of current medications for both genders in each cluster.

    Age

    The mean age of the users from the 5 clusters is shown in Table 2. The results of an ANOVA revealed significant main effects of cluster (F4,10192 = 24.09, P<.001) and gender (F1,10192 = 284.88, P<.001) but not a significant cluster × gender interaction (F4,10192 = 1.0, P=.41). Tukey HSD tests showed that users in the Blue and Black clusters were each significantly older than users in the 3 other clusters (all P<.001). In contrast, the Blue and Black clusters did not differ significantly from each other (P=1.0). Thus, the average age of users with high longevity (Blue and Black clusters) was higher than that of the other groups of users. Overall, males were significantly older than females, with mean ages of 42.32 (SD 12.01) and 37.55 (SD 10.56) years, respectively.

    Figure 4. The distribution of users from each gender category across each of the 5 engagement clusters.
    Table 2. Mean age of males and females in each cluster.
    Number of Pain Conditions

    The mean number of self-reported pain conditions for the users in each cluster is presented in Table 3. More engaged users were more likely to self-report a higher number of pain conditions than less engaged users. The results of an ANOVA revealed significant main effects of cluster (F4,9180 = 41.28, P<.001) and gender (F1,9180 = 19.92, P<.001) but not a significant cluster × gender interaction (F4,9180 = 0.37, P=.83). Pairwise comparisons using post-hoc Tukey HSD tests indicated that the differences between the means for the Blue-Black, Black-Cyan, Cyan-Red, and Red-Green cluster pairs were not statistically significant (all P ≥.07). Across clusters, females reported more pain conditions than did males, with mean values of 3.66 (SD 4.02) and 3.18 (SD 3.44), respectively.

    Number of Current Medications

    The mean number of current medications for users who reported taking at least 1 medication is shown in Table 4. The results of an ANOVA revealed significant main effects of cluster (F4,5408 = 58.67, P<.001) and gender (F1,5408 = 4.33, P=.04) but not a significant cluster × gender interaction (F4,5408 = 1.59, P=.17). Follow-up pairwise comparisons using post-hoc Tukey HSD tests revealed that the differences between the means of the Black-Cyan and Red-Green cluster pairs were not significant (P>.99), whereas the difference between the Blue and each of the other clusters was significant (all P<.001). Thus, more engaged users reported taking more medications than did less engaged users. Moreover, females reported taking a greater number of pain medications than males, with mean values of 3.91 (SD 3.30) and 3.68 (SD 3.32), respectively.

    Table 3. Mean number of pain conditions for males and females in each cluster.
    Table 4. Mean number of current medications for males and females in each cluster.

    Opioid Use

    The number and percentage of males and females within each cluster reporting the current use of an opioid are shown in Table 5. The percentage of males taking an opioid was significantly greater (all P ≤ .05) than that of females for all clusters except the Red cluster where the percentages did not differ.

    Pain Severity Rating

    The number of male and female users in the 3 pain severity groups within the 5 clusters is shown in Table 6. A chi-square test of independence revealed that pain severity and gender were independent of each other in the Black and Cyan clusters (all P ≥.40). We conducted follow-up chi-square tests between gender and pairs of pain severity groups for the Blue, Red, and Green clusters. In the Blue cluster, the mild-severe (χ²₁ = 11.18, P=.008) and mild-moderate (χ²₁ = 9.65, P=.002) pairs had a statistically significant association with gender. The association between the mild-moderate pair and gender was significant in the Red (χ²₁ = 8.09, P=.004) and Green (χ²₁ = 12.76, P<.001) clusters. These findings indicated that across the Blue, Red, and Green clusters, the proportion of males with mild pain was significantly higher than that of females.

    Table 5. Number and percentage of each gender reporting current opioid use within each cluster where the percentages for each gender were calculated using a denominator that comprised the number of that gender taking at least 1 current medication in a cluster.
    Table 6. Number and percentage of male and female users by pain severity groups within each cluster.

    Discussion

    Principal Findings

    Commercially available apps to track and record pain have proliferated to the point where consumers and healthcare providers alike face a bewildering array with little data to use in making an informed decision regarding options. The pain literature regarding mHealth apps typically focuses on app validation, clinical efficacy, or engagement, but no other study has applied data mining techniques to a large user database of chronic pain sufferers. With more than 250 commercially available apps to choose from, users have little reliable information to turn to when looking for the best app for their needs. The results of the present study provided an in-depth look at the user base of the Manage My Pain app and described factors associated with high user engagement.

    The main objective of the present study was to use data mining (clustering) methods to analyze engagement patterns from users of Manage My Pain according to several key variables that defined the user base. Specifically, we categorized users based on their gender and level of engagement with the app. The results of the present study were novel in several respects. For one, to the best of our knowledge, this was the first application of clustering methods to describe patterns of use of, and engagement with, a pain-monitoring app among a large number of everyday users who report chronic pain. We used a sample of 18,324 users who recorded at least 2 pain episodes and together generated more than 500,000 records. Using the k-means clustering approach, the users were classified into 5 distinct clusters that differed maximally in user engagement, derived from their frequency and longevity of use. The Blue and Black clusters comprised individuals with high longevity and a large and small number of records, respectively, whereas the Red and Green clusters comprised individuals with low longevity and a relatively small number of records. The Cyan cluster represented individuals with low longevity and relatively large number of records. We then examined the differences among the 5 clusters with respect to gender, age, number of pain conditions, number of pain medications, opioid use, and pain severity rating.

    The most highly engaged clusters (Blue and Black), which were distinguished by frequency of app use but not longevity, differed only in the number of pain medications, which was greater in the Blue than in the Black cluster. Otherwise, these clusters were similar in terms of relative gender composition, age, number of pain conditions reported, and proportion using opioids. Together these most engaged clusters comprised 4802 individuals who used the app for an average of 1 year. Compared with the less engaged clusters (Red and Green), the more engaged clusters (1) were, relative to the total number of males and females, more likely to be male; (2) were significantly older; (3) reported a significantly greater number of pain conditions; and (4) were more likely to be opioid users.

    Another potentially important result pertained to the distribution of gender within the 5 clusters. The proportion of males and females in each cluster, except Cyan, differed significantly from what would be expected by chance alone. However, the proportions differed markedly based on user engagement. Among the most engaged clusters of users (Blue and Black), the proportion of the total sample of male users was significantly greater than that of females. In contrast, the opposite was true for the least engaged clusters of users (Green and Red) where the proportion of the total sample of female users was significantly greater than that of males. Although only 11.33% (2076/18,324) of users were male and 49.90% (9144/18,324) females, the data indicated that once males registered, they were more likely to use the app for a longer duration and more consistently than females. The greater proportion of males than females in the highly-engaged cluster was interesting because males typically are less actively engaged in their own healthcare than females [20-22]. For instance, females visit primary care providers more frequently than males [21,22] and adhere more to physician recommendations [23]. In a study of 3.7 million patients registered with primary care physicians in the United Kingdom, the rate of consultation for males was 32% lower than it was for females, with the greatest gender differences seen in patients between the ages of 16 and 60 years [22]. Healthcare-seeking behaviors are also more frequent among females than males who have sustained whiplash injuries and who we would reasonably expect to have pain [24].

    Gender differences in the use and uptake of mHealth technology may help to explain the present results [25,26]. In contrast to their lower healthcare-seeking behaviors, males are more likely than females to adopt mHealth technology [25]. Moreover, whereas males tend to find mHealth apps helpful in averting health problems and benefit from using them, this does not appear to be the case for females [26]. We suggest that the greater proportion of males than females in the highly-engaged cluster may be related to the mobile-based medium through which the pain-related information is self-monitored and recorded. Use of a mobile pain app may be relatively more attractive to highly-engaged males than to highly-engaged females and may be one way to increase male uptake of healthcare behaviors in general.

    The relationship between pain severity and gender within clusters indicated that, overall, for the Blue, Red, and Green clusters, the proportion of males with mild versus moderate pain was significantly lower than would be expected by chance alone. In the context of the published literature, these findings are not surprising, since across pain conditions males tend to report lower levels of pain than females [27]. It is interesting to note, however, that this pattern also held for males with mild versus severe pain in the Blue cluster, which, as previously noted, was the most engaged cluster, being high in both longevity and number of records. Moreover, this was one of the clusters in which the proportion of the total sample of male users was significantly greater than that of females, indicating that once males registered, they were more likely to use the app for a longer duration and more consistently than females. The pain severity by gender association in the Blue cluster suggested that the high engagement of males in this cluster did not appear to be driven by pain severity, which may, in part, be the case for females.

    Consistent with the published literature showing that women are more likely than men to engage in polypharmacy [28,29], the present results indicated that females in all clusters reported taking a greater number of pain medications than males. In contrast, the percentage of males who reported using opioids was significantly greater than that of females in all but the Red cluster. These data are consistent with prior reports in which the frequency of opioid use among males with chronic pain was generally found to be greater than that of females [30]. Results of a treatment study [31] showed that, prior to treatment, fewer females than males were using opioids and that females were younger than males. This pattern appears consistent with a recent study [30] showing that although the frequency of opioid use among males with chronic pain was greater than that of females, the frequency of use overall tended to decrease with increasing age. For example, 81% of females with chronic pain aged 25 to 44 years reported using opioids, whereas 76.5% of males aged 45 to 64 years did so. Thus, the present results may also reflect the fact that the highly-engaged males were older than the highly-engaged females. These results are of interest and importance given the recent “opioid epidemic” that is the focus of increasing concern among patients with chronic, non-cancer pain, healthcare providers, regulators, and governments [32,33]. By tracking opioid use and pain severity over the next several years, we will be in a position to provide important Manage My Pain user data on the extent to which the new opioid prescribing guidelines in the United States [34] and Canada [35] are associated with changes in these parameters.
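    The gender comparisons of opioid use above rest on z tests of proportions (as noted in the Methods). As before, the sketch below is an illustrative Python fragment with hypothetical counts, not the authors' R code or the study's data:

    ```python
    import math

    def two_proportion_z(x1, n1, x2, n2):
        """Two-sided z test for a difference between two independent proportions."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)           # pooled proportion under H0: p1 = p2
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
        return z, p_value

    # Hypothetical counts: opioid users among males (120/400) vs females (180/900)
    z, p = two_proportion_z(120, 400, 180, 900)
    print(f"z = {z:.2f}, p = {p:.4f}")
    ```

    With these placeholder counts, the male proportion (0.30) exceeds the female proportion (0.20) by more than sampling error would plausibly allow, mirroring the direction of the cluster-level differences reported above.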

    A mismatch was noted by de la Vega and Miró [9] between the commercial sector and the scientific community in their respective approaches to app development and evaluation. Of the 34 pain-related apps evaluated in the scientific literature, not one was available in any of the major online app stores. Conversely, of the more than 280 commercially available pain-related apps, not one was the topic of a scientific publication. The present collaboration between the developers of Manage My Pain, scientists studying pain, and experts in data mining was an attempt to address this mismatch and to initiate a novel direction in pain research. By evaluating trends in how consumers engage with and use commercially-available apps to monitor and track pain, we can begin to make inroads in understanding the motivations of populations that have traditionally been more difficult to engage (eg, males suffering from multiple pain conditions).

    Conclusion

    This is the first study to use data mining (clustering) methods to analyze data from the mobile pain app, Manage My Pain, according to several key variables that defined the user base. To better understand who uses the app, clustering methods were applied to a sample of 18,324 users who recorded at least 2 pain episodes and collectively entered 537,853 records into the app. Users were grouped into 5 clusters according to their engagement patterns, of which 2 were identified as representing high user engagement based on longevity and frequency of app use. All 5 clusters were characterized first by gender and then by age, number of pain conditions, number of current medications, opioid use, and pain severity rating. Although most users of the app reported being female, the cluster analysis indicated that male users were more likely to be highly engaged with the app. The clusters of highly-engaged users differed from the other clusters in their relative composition of males and females, with a greater proportion of males in the highly-engaged clusters than in the others. In addition, users in the most engaged clusters self-reported a higher number of pain conditions, a higher number of current medications, and a higher incidence of opioid usage. We suggest that use of a mobile pain app may be relatively more attractive to highly-engaged males than to highly-engaged females, and to those with relatively more complex chronic pain situations. A mobile pain app, such as Manage My Pain, may be one way to increase the uptake of healthcare behaviors in general, for both males and people with complex chronic pain situations.
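    For readers unfamiliar with the clustering approach summarized above, the following is a minimal sketch of k-means (Lloyd's algorithm) applied to two engagement features, longevity and number of records. It is an illustration on made-up points, not the study's implementation (which used R [18]); a real analysis would also standardize the features and justify the choice of k.

    ```python
    import random

    def kmeans(points, k, iters=100, seed=0):
        """Minimal k-means (Lloyd's algorithm) for 2-D points."""
        rng = random.Random(seed)
        centers = rng.sample(points, k)          # initialize centers at random points
        clusters = [[] for _ in range(k)]
        for _ in range(iters):
            # Assignment step: each point joins its nearest center's cluster
            clusters = [[] for _ in range(k)]
            for x, y in points:
                j = min(range(k),
                        key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2)
                clusters[j].append((x, y))
            # Update step: move each center to its cluster mean (keep it if empty)
            new_centers = [
                (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
                if cl else centers[j]
                for j, cl in enumerate(clusters)
            ]
            if new_centers == centers:           # converged
                break
            centers = new_centers
        return centers, clusters

    # Hypothetical users as (longevity in days, number of records): two obvious groups,
    # loosely analogous to low-engagement and high-engagement users
    users = [(5, 3), (7, 4), (6, 2), (300, 200), (320, 250), (310, 220)]
    centers, clusters = kmeans(users, k=2)
    print(sorted(len(cl) for cl in clusters))
    ```

    On these toy points, the algorithm separates the low-engagement and high-engagement groups; in the study, the same idea with k=5 yielded the Blue, Black, Cyan, Red, and Green clusters discussed above.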

    Acknowledgments

    QAR is supported by The Natural Sciences and Engineering Research Council of Canada (NSERC). JK is supported by a Tier 1 Canadian Institutes of Health Research Canada Research Chair in Health Psychology at York University. HC is supported by a Merit award from the University of Toronto, Department of Anesthesia. JMH is a York University Research Chair.

    Conflicts of Interest

    TJ is the founder and CEO of ManagingLife, Inc. JK and HC are unpaid members of the ManagingLife Advisory Board, providing guidance on the product and the company’s research initiatives.

    References

    1. McKay FH, Cheng C, Wright A, Shill J, Stephens H, Uccellini M. Evaluating mobile phone applications for health behaviour change: a systematic review. J Telemed Telecare 2016 Oct 18. [CrossRef] [Medline]
    2. Ramirez V, Johnson E, Gonzalez C, Ramirez V, Rubino B, Rossetti G. Assessing the use of mobile health technology by patients: an observational study in primary care clinics. JMIR Mhealth Uhealth 2016 Apr 19;4(2):e41 [FREE Full text] [CrossRef] [Medline]
    3. Stinson JN, Jibb LA, Nguyen C, Nathan PC, Maloney AM, Dupuis LL, et al. Construct validity and reliability of a real-time multidimensional smartphone app to assess pain in children and adolescents with cancer. Pain 2015 Dec;156(12):2607-2615. [CrossRef] [Medline]
    4. Rickard N, Arjmand H, Bakker D, Seabrook E. Development of a mobile phone app to support self-monitoring of emotional well-being: a mental health digital innovation. JMIR Ment Health 2016 Nov 23;3(4):e49 [FREE Full text] [CrossRef] [Medline]
    5. Wayne N, Perez DF, Kaplan DM, Ritvo P. Health coaching reduces HbA1c in type 2 diabetic patients from a lower-socioeconomic status community: a randomized controlled trial. J Med Internet Res 2015 Oct 05;17(10):e224 [FREE Full text] [CrossRef] [Medline]
    6. Rollo ME, Aguiar EJ, Williams RL, Wynne K, Kriss M, Callister R, et al. eHealth technologies to support nutrition and physical activity behaviors in diabetes self-management. Diabetes Metab Syndr Obes 2016;9:381-390 [FREE Full text] [CrossRef] [Medline]
    7. St Sauver JL, Warner DO, Yawn BP, Jacobson DJ, McGree ME, Pankratz JJ, et al. Why patients visit their doctors: assessing the most prevalent conditions in a defined American population. Mayo Clin Proc 2013 Jan;88(1):56-67 [FREE Full text] [CrossRef] [Medline]
    8. Lalloo C, Jibb LA, Rivera J, Agarwal A, Stinson JN. “There's a Pain App for That”: review of patient-targeted smartphone applications for pain management. Clin J Pain 2015 Jun;31(6):557-563. [CrossRef] [Medline]
    9. de la Vega R, Miró J. mHealth: a strategic field without a solid scientific soul. A systematic review of pain-related apps. PLoS One 2014;9(7):e101312 [FREE Full text] [CrossRef] [Medline]
    10. Wallace LS, Dhingra LK. A systematic review of smartphone applications for chronic pain available for download in the United States. J Opioid Manag 2014;10(1):63-68. [CrossRef] [Medline]
    11. Rosser BA, Eccleston C. Smartphone applications for pain management. J Telemed Telecare 2011;17(6):308-312. [CrossRef] [Medline]
    12. ManagingLife Inc. Manage my Pain.   URL: https://www.managinglife.com [accessed 2017-04-10] [WebCite Cache]
    13. Gendreau M, Hufford MR, Stone AA. Measuring clinical pain in chronic widespread pain: selected methodological issues. Best Pract Res Clin Rheumatol 2003 Aug;17(4):575-592. [CrossRef] [Medline]
    14. ManagingLife Inc. Privacy Policy.   URL: https://www.managinglife.com/privacy [accessed 2017-04-10] [WebCite Cache]
    15. Zelman DC, Dukes E, Brandenburg N, Bostrom A, Gore M. Identification of cut-points for mild, moderate and severe pain due to diabetic peripheral neuropathy. Pain 2005 May;115(1-2):29-36. [CrossRef] [Medline]
    16. Kaufman L, Rousseeuw P. Finding Groups in Data: An Introduction to Cluster Analysis. Hoboken: Wiley-Interscience; 2005.
    17. Fraley C, Raftery AE. Model-based clustering, discriminant analysis, and density estimation. J Am Stat Assoc 2002 Jun;97(458):611-631. [CrossRef]
    18. The R Project for Statistical Computing. 2016.   URL: http://www.r-project.org/ [accessed 2017-04-10] [WebCite Cache]
    19. Dowle M, Srinivasan A, Gorecki J, Short T, Lianoglou S, Antonyan E. R Project. 2017. Package ‘data.table’   URL: https://cran.r-project.org/web/packages/data.table/data.table.pdf [accessed 2017-07-05] [WebCite Cache]
    20. Bertakis KD, Azari R, Helms LJ, Callahan EJ, Robbins JA. Gender differences in the utilization of health care services. J Fam Pract 2000 Feb;49(2):147-152. [Medline]
    21. Thompson AE, Anisimowicz Y, Miedema B, Hogg W, Wodchis WP, Aubrey-Bassler K. The influence of gender and other patient characteristics on health care-seeking behaviour: a QUALICOPC study. BMC Fam Pract 2016 Mar 31;17:38 [FREE Full text] [CrossRef] [Medline]
    22. Wang Y, Hunt K, Nazareth I, Freemantle N, Petersen I. Do men consult less than women? An analysis of routinely collected UK general practice data. BMJ Open 2013 Aug 19;3(8):e003320 [FREE Full text] [CrossRef] [Medline]
    23. Chung ML, Moser DK, Lennie TA, Worrall-Carter L, Bentley B, Trupp R, et al. Gender differences in adherence to the sodium-restricted diet in patients with heart failure. J Card Fail 2006 Oct;12(8):628-634 [FREE Full text] [CrossRef] [Medline]
    24. Gunnarsson R, Nordeman L, Sunnerhagen K, Billhult A. Gender differences in care-seeking behavior and health care consumption after work-related whiplash injuries: Artur Tenenbaum. Eur J Public Health 2014;24. [CrossRef]
    25. Zhang X, Guo X, Lai K, Guo F, Li C. Understanding gender differences in m-health adoption: a modified theory of reasoned action model. Telemed J E Health 2014 Jan;20(1):39-46. [CrossRef] [Medline]
    26. Guo X, Han X, Zhang X, Dang Y, Chen C. Investigating m-health acceptance from a protection motivation theory perspective: gender and age differences. Telemed J E Health 2015 Aug;21(8):661-669. [CrossRef] [Medline]
    27. Bartley EJ, Fillingim RB. Sex differences in pain: a brief review of clinical and experimental findings. Br J Anaesth 2013 Jul;111(1):52-58 [FREE Full text] [CrossRef] [Medline]
    28. Richardson J, Holdcroft A. Gender differences and pain medication. Womens Health (Lond) 2009 Jan;5(1):79-90. [CrossRef] [Medline]
    29. Darnall BD, Stacey BR. Sex differences in long-term opioid use: cautionary notes for prescribing in women. Arch Intern Med 2012 Mar 12;172(5):431-432 [FREE Full text] [CrossRef] [Medline]
    30. Witkin LR, Zylberger D, Mehta N, Hindenlang M, Johnson C, Kean J, et al. Patient-reported outcomes and opioid use in outpatients with chronic pain. J Pain 2017 May;18(5):583-596. [CrossRef] [Medline]
    31. Murphy JL, Phillips KM, Rafie S. Sex differences between Veterans participating in interdisciplinary chronic pain rehabilitation. J Rehabil Res Dev 2016;53(1):83-94 [FREE Full text] [CrossRef] [Medline]
    32. Lynch ME, Katz J. “One Size Fits All” doesn’t fit when it comes to long-term opioid use for people with chronic pain. Can J Pain 2017 May 04;1(1):2-7. [CrossRef]
    33. Sullivan MD, Howe CQ. Opioid therapy for chronic pain in the United States: promises and perils. Pain 2013 Dec;154 Suppl 1:S94-100 [FREE Full text] [CrossRef] [Medline]
    34. Dowell D, Haegerich TM, Chou R. CDC guideline for prescribing opioids for chronic pain — United States, 2016. MMWR Recomm Rep 2016 Mar 18;65(1):1-49. [CrossRef]
    35. Busse JW, Craigie S, Juurlink DN, Buckley DN, Wang L, Couban RJ, et al. Guideline for opioid therapy and chronic noncancer pain. CMAJ 2017 May 08;189(18):E659-E666. [CrossRef] [Medline]


    Abbreviations

    ANOVA: analysis of variance
    HSD: honest significant difference
    mHealth: mobile health


    Edited by G Eysenbach; submitted 15.04.17; peer-reviewed by C Lalloo, S Della Rosa, S Barbieri, N Bragazzi; comments to author 10.05.17; revised version received 07.06.17; accepted 28.06.17; published 12.07.17

    ©Quazi Abidur Rahman, Tahir Janmohamed, Meysam Pirbaglou, Paul Ritvo, Jane M Heffernan, Hance Clarke, Joel Katz. Originally published in JMIR mHealth and uHealth (http://mhealth.jmir.org), 12.07.2017.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on http://mhealth.jmir.org/, as well as this copyright and license information must be included.