Published in Vol 12 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/53119.
Automated Pain Spots Recognition Algorithm Provided by a Web Service–Based Platform: Instrument Validation Study


1Rehabilitation Research Laboratory 2rLab, Department of Business Economics, Health and Social Care, University of Applied Sciences and Arts of Southern Switzerland, Via Violino 11, Manno, Switzerland

2Institute of Systems and Technologies for Sustainable Production, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland

3IDSIA Dalle Molle Institute for Artificial Intelligence, USI-SUPSI, Lugano, Switzerland

4Pain Management Center, Division of Anaesthesiology, Department of Acute Medicine, Neurocenter of Southern Switzerland, Regional Hospital of Lugano, Lugano, Switzerland

Corresponding Author:

Corrado Cescon, PhD


Background: Understanding the causes and mechanisms underlying musculoskeletal pain is crucial for developing effective treatments and improving patient outcomes. Self-report measures involve individuals rating their level of pain on a scale. One such measure, the pain drawing, asks individuals to color the area where they experience pain; the resulting picture is then rated based on the depicted pain intensity. Analyzing pain drawings (PDs) typically involves measuring the size of the pain region. Several studies have assessed the clinical use of PDs, and with the introduction of digital PDs, the usability and reliability of these platforms need validation. Comparative studies between traditional and digital PDs have shown good agreement and reliability. The evolution of PD acquisition over the last 2 decades mirrors the commercialization of digital technologies. The pen-on-paper approach seems to be more accepted by patients, but there is currently no standardized method for scanning PDs.

Objective: The objective of this study was to evaluate the accuracy of PD analysis performed by a web platform using various digital scanners. The primary goal was to demonstrate that simple and affordable mobile devices can be used to acquire PDs without losing important information.

Methods: Two sets of PDs were generated: one with 216 colored circles and another composed of various red shapes distributed randomly on a frontal view body chart of an adult male. These drawings were printed in color on A4 sheets, with alignment markers at the corners and a QR code to allow automatic alignment, and were subsequently scanned using different devices and apps. The scanners were flatbed scanners of different sizes and prices (professional, portable flatbed, and home printer/scanner), smartphones in various price ranges, and 6 virtual scanner apps. The acquisitions were made under normal light conditions by the same operator.

Results: High-saturation colors, such as red, cyan, magenta, and yellow, were accurately identified by all devices. The percentage error for small, medium, and large pain spots was consistently below 20% for all devices, with smaller values associated with larger areas. In addition, a significant negative correlation was observed between the percentage of error and spot size (R=−0.237; P=.04). The proposed platform proved to be robust and reliable for acquiring paper PDs via a wide range of scanning devices.

Conclusions: This study demonstrates that a web platform can accurately analyze PDs acquired through various digital scanners. The findings support the use of simple and cost-effective mobile devices for PD acquisition without compromising the quality of data. Standardizing the scanning process using the proposed platform can contribute to more efficient and consistent PD analysis in clinical and research settings.

JMIR Mhealth Uhealth 2024;12:e53119

doi:10.2196/53119


Musculoskeletal pain is a frequent problem that affects a significant portion of the population and can have a major impact on quality of life [1]. Understanding the causes and mechanisms underlying musculoskeletal pain is crucial for the development of effective treatments and enhancement of patient outcomes. Moreover, investigating musculoskeletal pain contributes to the advancement of our understanding of anatomy, physiology, and pain mechanisms, with potential implications for comprehending and managing pain [2,3].

There are several methods for measuring muscle pain, including self-report measures, behavioral measures, and physiological measures [4]. Self-report measures involve asking the person to rate his or her level of pain on a scale, such as the Visual Analog Scale or the Numeric Rating Scale [5]. A promising way of evaluating pain using drawings is known as the Pain Drawing Scale [6-10].

Digital technologies have had a significant influence on the evolution of pain drawings (PDs), with different applications in the field of medical apps [11].

The fields of application of PDs include diagnosis of low back pain disorders; paresthesias evoked by implanted neurological stimulators; depiction of orofacial pain, such as headaches and toothaches; and evaluation of users of electric wheelchairs with pain located in the back, buttocks region, and so forth.

Body charts can also illustrate other types of sensory experiences such as numbness, tingling, hypoesthesia, or allodynia [12].

While digitally acquired PDs offer advantages, many studies demonstrate sophisticated analyses of scanned or digitized pen-and-paper PDs, showcasing the versatility of digital image processing. This capability enables the digitization and analysis of extensive collections of pen-and-paper pain diagrams, making it adaptable to various settings and needs.

In this technique, individuals are instructed to color the area where they experience pain, and the picture is then rated on a scale based on the amount of pain depicted [13]. This can be a useful tool for measuring pain in individuals who have difficulty verbalizing their pain experience, such as young children or nonverbal individuals [14]. However, it is important to keep in mind that PDs can be subjective and may be influenced by factors such as the person’s cultural background or level of education [15]. PDs typically consist of body charts with different views of the human body (dorsal, ventral, and side) or subportions (head, hand, etc), on which patients are instructed to color the area where they experience pain using a marker. The body charts can be divided into regions, such as the Margolis regions [16]. This technique is used to describe and categorize the location of musculoskeletal pain in the body. The regions are based on anatomical divisions of the body, including the neck, upper extremities, low back, and lower extremities [17].

The purpose of the Margolis regions is to provide a standardized and easily understood way to describe the location of musculoskeletal pain, which can help with diagnosis and treatment planning [18,19].

PDs can be analyzed in a variety of ways, depending on the purpose of the analysis and the method used to create the drawings. The most common method of analyzing PDs is the measurement of the size of the pain region. This can be done using computer software or manual measurement techniques [20].

There are different software programs available for analyzing PDs [21-25]. These programs can be used to perform both qualitative and quantitative analyses of PDs, depending on the specific software and the features it offers [26]. Some researchers also introduced sex-specific body charts in order to facilitate the communication of pain for women [17,27,28].

Some of the features offered by pain analysis software may include image digitization (allowing the conversion of traditional paper drawings into digital format for analysis), image scaling (allowing the adjustment of the size of the PDs to match a reference scale), image analysis (using algorithms to automatically identify and quantify features of the PD, such as the size and shape of the pain region), and data visualization (displaying the results of the analysis in a clear and easy-to-understand format, such as graphs or heat maps).

Handing paper PDs to patients is simpler than using drawing applications running on tablets. However, the use of PDs is not indicated in patients with vision impairment or in preschool children, although some studies have investigated their application in teenagers. Because PDs are a self-assessed measure, patients should not have cognitive impairments or conditions involving misperception of their own body.

While digital drawings can be easily edited and manipulated, and the tools available on a tablet can offer a wider range of color options and effects, paper PDs are largely used in clinical settings. This preference stems from the fact that many patients feel more comfortable using the pen-on-paper approach rather than digital devices [29-31].

There are different methods for scanning PDs, including using a flatbed scanner, a device that scans flat, thin documents placed on a glass window; a handheld scanner, a portable device that can scan images while being moved over them; a drum scanner, a high-end scanner that uses a rotating cylinder to capture the image; a multifunctional printer scanner, a printer that also includes a scanner function; and a virtual scanner, a software that can use a camera to scan images.

To date, there is no standardization in scanning PDs. The existing software tools for PD acquisition work with their own specific body charts and therefore do not allow a direct comparison using the same drawing. The aim of this study was to evaluate the accuracy of PD analysis performed by a web platform using different digital scanners. The objective was to demonstrate that simple and relatively cheap mobile devices can be used to acquire PDs without loss of information.


Ethical Considerations

This study did not involve patients, human subjects, or animals. The data set was generated through a computer simulation; thus, ethical approval was not required.

Sketch Your Pain Platform

The proposed analysis was performed using a web platform. The main features of this distributed web application (currently available on a local server [32]) are as follows:

  1. Knowledge-base management: the platform allows the collection of patients’ data (biometric, pain history, applied therapies, diagnoses, etc).
  2. PD acquisition: PDs can be uploaded both digitally and from paper (see details in the following section).
  3. Basic PD analysis: each pain spot is analyzed individually (ie, number of pixels, barycenter, etc).
  4. Smart analysis services: the platform provides a plug-in–based mechanism that allows the implementation of additional analysis within the platform. In this way, researchers can apply specific innovative tools to the PDs stored in the database [33].

A paper PD can be imported in two ways: (1) it can be digitally imported by using the specific acquisition tool that allows for drawing directly on a tablet, using a digital pen on the touch screen (available on a local server [34]), or (2) it can be manually imported as a PDF file (the platform allows one to download a PDF file including empty body charts with a unique QR code, which can be printed, filled manually using a color marker, and scanned as a PDF file).

When a PD is generated digitally, it is already aligned with the body charts; for paper drawings, the process is more complex and can be summarized as follows. The body chart and all related information, including the protocol, subject ID, gender, and body chart view, are identified via the QR code and stored in a database. The platform code and data are stored on a local server. The patient names and sensitive data are anonymized using codes that are available only to the operators.

The scanned image is aligned and cropped using the 4 markers at the corners as pivots (Figure 1A).

The image is resized to match the pixel dimensions of the body charts stored in the platform (ie, 2048×1536 pixels; Figure 1B). The areas outside the body chart are removed using a mask image. This step also removes any out-of-body staining errors (Figure 1C). The pain spots (which should be drawn in color, not in any shade of gray) are identified and isolated from the body chart by computing the SD of each pixel across its 3 color channels (in this way, the SD of black pixels [0, 0, 0] and white pixels [255, 255, 255] is equal to 0, while a red pixel [255, 0, 0] has an SD of 147.2). Based on preliminary tests, the optimal minimum SD threshold for extracting pain spots from the body chart under different conditions of light and color is 25. In this way, the color image is converted into a Boolean matrix in which ones correspond to pixels with pain. An algorithm for dilation and subsequent erosion (morphological closing) is applied to the Boolean image in order to fill gaps that can occur when the user draws with a sharp marker (Figure 1D). All contours of the pain spots, and any holes within them, are identified by means of the Canny edge detection algorithm [35]. Pain spots whose contours contain fewer than 20 points/pixels are removed, as are holes whose contours contain fewer than 15 points/pixels. Further, pain spots smaller than 100 pixels and holes smaller than 150 pixels are removed (Figure 1E). The individual pain spots are identified by an image segmentation algorithm, and for each spot, the area in pixels and the coordinates of the centroid are computed. The final result of the process is shown in Figure 1F.
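For illustration, the steps above can be sketched in MATLAB (the language used elsewhere in this study). This is a minimal reconstruction, not the platform’s actual code: the file names, marker coordinates, and the radius of the structuring element are illustrative assumptions.

```matlab
% Minimal sketch of the paper PD processing pipeline (illustrative only)
img = imread('scanned_pd.png');                    % RGB uint8 scan (hypothetical file)

% 1) Align and crop using the 4 corner markers as pivots (projective fit);
%    the marker coordinates here are placeholders for the detected positions
movingPts = [12 15; 1510 18; 1498 2030; 20 2042];
fixedPts  = [0 0; 1536 0; 1536 2048; 0 2048];
tform = fitgeotrans(movingPts, fixedPts, 'projective');
aligned = imwarp(img, tform, 'OutputView', imref2d([2048 1536]));

% 2) Remove areas outside the body chart with a mask image
mask = imread('body_chart_mask.png') > 0;          % grayscale mask: in-body pixels are white
aligned(repmat(~mask, [1 1 3])) = 255;             % paint the outside area white

% 3) Isolate colored pain spots: the sample SD across the 3 channels is 0
%    for black/white/gray pixels and 147.2 for pure red [255, 0, 0]
pain = std(double(aligned), 0, 3) > 25;            % threshold from the paper

% 4) Fill gaps left by sharp markers: dilation then erosion (closing)
pain = imclose(pain, strel('disk', 3));            % radius 3 is an assumption

% 5) Remove pain spots smaller than 100 pixels and fill holes smaller than 150 pixels
pain = bwareaopen(pain, 100);
pain = ~bwareaopen(~pain, 150);

% 6) Segment individual spots and compute area and centroid for each
stats = regionprops(bwlabel(pain), 'Area', 'Centroid');
```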

Figure 1. Pain spot detection process. (A) The 4 markers at the corners and the QR code are identified. (B) The pain drawing is aligned and scaled. (C) The areas outside the body chart are removed. (D) Pain drawings are separated from the background and eroded in order to correct the imperfections due to pen drawing. (E) Pain spot contours are identified and small holes are removed. (F) Each pain spot is analyzed to extract area and position. PS: pain spot.

Generation of Artificial PDs

For the present protocol, 2 sets of PDs were generated with custom MATLAB (MathWorks) code. We decided to test the platform using artificial PDs in order to have complete control over the process and the analysis. For each randomly generated pain spot, we had information on pain location (barycenter of the pain spot), area in square pixels, and shade of color on the red, green, and blue (RGB) scale; with these data, we could assess the performance of each scanning device. A preliminary study conducted on the platform using different scanning devices on PDs of real patients yielded similar results [36].

The body chart selected was a male frontal body chart, representing the contours of a full male body in frontal view (dimensions: 1536×2048 pixels).

Color Analysis

The first artificial PD was generated by adding 216 colored circles, 33 pixels in diameter, within the body chart map. The circles were randomly positioned within the body chart so as to be nonoverlapping and not touching each other. The colors were chosen to uniformly span the RGB color cube, using 6 different intensities for each channel. Since the color depth was defined on a range from 0 to 255 (1 byte), the values of each channel were 0, 51, 102, 153, 204, and 255. In this way, the total number of colors was 6³=216, ranging from black (0, 0, 0) to white (255, 255, 255) and including all combinations of RGB (ie, [0, 0, 0], [0, 0, 51], [0, 0, 102],…[51, 0, 0],…[255, 255, 255]; Figure 2A).
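The 216-color grid is straightforward to reproduce; the following MATLAB fragment (an illustrative sketch, not the study code) enumerates the same triplets:

```matlab
% All 6^3 = 216 RGB triplets with channel values 0, 51, ..., 255
vals = 0:51:255;
[r, g, b] = ndgrid(vals, vals, vals);
colors = [r(:), g(:), b(:)];   % 216x3 matrix, from [0 0 0] to [255 255 255]
```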

Figure 2. Representation of the artificial pain drawings generated with MATLAB. (A) A total of 216 colored circles with a 30-pixel diameter were randomly located within the area of the body chart. The colors were uniformly distributed in the RGB color cube. (B) Five body charts with randomly generated shapes. RGB: red, green, and blue.

Area and Location Analysis

The second set of PDs was composed of 5 artificial PDs generated by adding several red shapes (ellipses, rectangles, and triangles) to the same body chart; the shapes were generated with random sizes, orientations, and positions and could overlap and be partially outside the body chart mask (Figure 2B). The red color was chosen mainly because it can be easily associated with pain in a body chart; in addition, in the RGB cube, red is located at one of the vertices (ie, it has the highest SD value among RGB triplets, together with yellow, magenta, cyan, blue, and green), and it is easy to find red pens or markers in common shops.
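The SD rationale is easy to verify numerically; in this hedged example, MATLAB’s std (sample SD, n−1 denominator) reproduces the values cited above:

```matlab
std(double([255 0 0]))      % pure red    -> 147.22 (maximal among RGB triplets,
                            %                shared by yellow, magenta, cyan, blue, green)
std(double([255 255 0]))    % pure yellow -> 147.22
std(double([128 128 128]))  % mid gray    -> 0
```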

Each of the 2 sets was printed in color, using the same printer (Sharp MX-7580) to print 11 copies; markers were added at the 4 corners, and a QR code was added at the bottom left side. The markers and the QR code were included in order to allow the platform to align the images and add the PDs to the internal database.

Selection of Scanning Devices

The 11 sets of drawings were then scanned using different devices and apps (Table 1).

We selected 3 flatbed scanners with different sizes and prices: a professional office printer/scanner available at our university (Sharp MX-4070; ~US $5000), a portable flatbed scanner (Canon Lide 220; ~US $50), and a home printer/scanner (HP Envy 4500; ~US $300).

In addition, we selected 3 smartphones in different price ranges: iPhone 12 (~US $1000), Samsung Galaxy S10 Lite (~US $400), and Ulefone Armor X (~US $100). All 3 devices used the same scanning app (vFlat Scan) so that only the hardware of the devices was compared.

Moreover, for the cheapest smartphone, we selected 6 free apps from the Google Play store. The apps were selected according to their popularity and their ranking based on users’ comments.

For each scanner, a PDF file was generated including the corresponding set of images (1 with colored circles and 5 with red shapes). The PDF files were uploaded to the Sketch Your Pain platform [32].

Table 1. List of devices used to scan the artificial pain drawingsa.

Type | Device model | Resolution | Price (US $) | App
Flatbed | Sharp MX-4070 | 300 dpib | ~5000 | —c
Flatbed | Canon Lide 220 | 300 dpi | ~50 | —
Flatbed | HP Envy 4500 | 300 dpi | ~300 | —
Smartphone | iPhone 12 | 12 Mpxd | ~1000 | vFlat Scan
Smartphone | Samsung Galaxy S10 Lite | 32 Mpx | ~400 | vFlat Scan
Smartphone | Ulefone Armor X | 13 Mpx | ~100 | vFlat Scan
Smartphone | Ulefone Armor X | 13 Mpx | ~100 | TapScanner
Smartphone | Ulefone Armor X | 13 Mpx | ~100 | Simple Scan
Smartphone | Ulefone Armor X | 13 Mpx | ~100 | Fast Scanner
Smartphone | Ulefone Armor X | 13 Mpx | ~100 | CamScanner
Smartphone | Ulefone Armor X | 13 Mpx | ~100 | TurboScan

aThe flatbed devices generated the PDF files using proprietary software, while smartphones needed an app to generate the PDF using the camera.

bdpi: dots per inch.

cNot available.

dMpx: megapixels.

Image Processing

The original area of each pain spot generated with MATLAB was computed, as well as the coordinates of its centroid. The Sketch Your Pain platform identified the QR codes and processed each identified pain spot, providing its area in pixels and the coordinates of its centroid.

For the color analysis, we assessed whether the platform was able to identify a pain spot at each location where a colored circle was generated. If the area of the identified pain spot was larger than a fixed threshold (90% of the theoretical area; eg, 450 out of 500 pixels), the pain spot was counted (Figure 3).
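In code form, the counting rule is a single comparison (illustrative sketch; the values are the example from the text, not study data):

```matlab
theoreticalArea = 500;                                % printed circle area, eg, 500 pixels
detectedArea = 450;                                   % area returned by the platform
identified = detectedArea >= 0.9 * theoreticalArea;   % true: the spot is counted
```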

Figure 3. Examples of identification of pain spots from colored circles: (A) the original drawing, with the output of the platform algorithm for 3 different devices (B-D). The green color represents an identified pain spot. The purple circle in the bottom left corner (thick arrow) was not identified by the Sharp scanner (B) or the iPhone (D), but it was identified by the Armor phone (C). The black circle on the neck (thin arrow) was not identified by any of the devices.

Statistical Analysis

The variables used for the statistics were the area (A) of each pain spot in square pixels and the coordinates (x, y) of its centroid in pixels. The variables computed for each scanning device were compared with those computed for the corresponding pain spots on the original artificial PDs generated with MATLAB. The percentage area error (E) was computed as the difference between the 2 areas divided by the area of the original PD and expressed as a percentage. The distance (D) between the theoretical centroid of the pain spot computed on the original image and the centroid of the pain spot identified by each device was also computed and expressed in pixels.
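Expressed as formulas (with subscript o for the original PD and s for the scanned copy):

$$E = \frac{A_s - A_o}{A_o} \times 100\%, \qquad D = \sqrt{(x_s - x_o)^2 + (y_s - y_o)^2}$$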

Intraclass correlation coefficient (ICC) estimates (and their 95% CIs) of pain area and barycenter coordinates were calculated using MATLAB and a 1-way mixed-effects model.
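As a cross-check, the degrees of freedom reported in the Results (F with 74 and 750 df for the pain area) are consistent with 75 pain spots rated by 11 devices. A minimal MATLAB sketch of a 1-way ICC follows; it is a generic textbook formulation, not the study’s actual script, and the matrix A is a placeholder.

```matlab
% One-way ICC(1): (MSB - MSW) / (MSB + (k-1)*MSW)
A = 1000 * rand(75, 11);                    % placeholder: 75 spots x 11 devices
[n, k] = size(A);
rowMean = mean(A, 2);                       % mean area of each spot across devices
SSB = k * sum((rowMean - mean(A(:))).^2);   % between-spot sum of squares
SSW = sum(sum((A - rowMean).^2));           % within-spot sum of squares
MSB = SSB / (n - 1);                        % df = n - 1 = 74
MSW = SSW / (n * (k - 1));                  % df = n(k - 1) = 750
icc = (MSB - MSW) / (MSB + (k - 1) * MSW);
```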

In addition, the standard error of measurement (SEM) and the minimal detectable change (MDC) were computed for pain area and barycenter position.
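For reference, the SEM is commonly derived from the between-device SD and the ICC, and the MDC values reported in Table 2 are consistent with a 1.96 multiplier on the SEM (eg, for the Sharp MX-4070, 1.96 × 262 ≈ 513 pixels²):

$$\mathrm{SEM} = \mathrm{SD}\sqrt{1 - \mathrm{ICC}}, \qquad \mathrm{MDC} = 1.96 \times \mathrm{SEM}$$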

Descriptive statistics are presented with box and whisker plots showing median and IQR values.


Color Analysis

A Boolean table with the results of pain spot identification was generated, with 216 rows (1 for each color) and 11 columns (1 for each scanner). A graphical representation of the table is shown in Figure 4.

Each color is represented by a circle in the 3D color cube, and the size of each circle is proportional to the number of scanners that were able to identify that color (ie, for which the area of the identified pain spot was larger than 90% of the printed circle). The maximum circle diameter was set to two-thirds of the spacing between adjacent circles. In this way, it is easy to see which colors are best for PDs. As expected, the colors with identical values in their RGB triplets (ie, shades of gray, black, and white) were never identified by the software, and colors with low SD values in their RGB triplets (ie, low-saturation colors) were not identified by most devices. The colors that were identified by all devices are located close to the corners of the color cube (ie, high-saturation colors, such as red, cyan, magenta, and yellow).

Figure 4. Representation of the performance of the algorithm in identifying different color circles. The middle diagonal (from white to black where RGB components are equal) and the colors located close to the central diagonal are not identified by the algorithm, while colors such as red, magenta, and yellow are identified by all devices. RGB: red, green, and blue.

Area and Location Analysis

Figure 5 shows the distribution of areas of the pain spots generated artificially and randomly distributed on each of the 5 body charts. The pain spots were divided into 3 categories according to their area in square pixels (A<50²: small; 50²≤A<100²: medium; and A≥100²: large).

The ICC for pain area was 0.99 (95% CI 0.99-0.99; F(74,750)=1.44×10⁴). In addition, the ICC and CI values for the barycenter coordinates were above 0.99 (x-coordinate: F(84,930)=4.55×10⁵; y-coordinate: F(103,843)=5.12×10⁵).

Table 2 shows the SEM and minimal detectable change values for each device compared with the theoretical value.

Figure 6A shows the percentage error of pain extent for each device for the 3 categories of pain spot areas. For all devices, the percentage error was below 20% for small, medium, and large pain spots, with lower values associated with larger areas. A significant negative correlation was observed between the percentage of error and spot size (R=−0.237; P=.04; Figure 6B).

Figure 7 shows the error in the distance between the theoretical location of the centroid of each pain spot and the location of the centroid of the identified pain spot. The distribution of the distances was always below 5 pixels, except for the Armor device with the TurboScan app (11th column).

Figure 5. Representation of the distribution of shapes according to their size. The 3 colors represent the 3 categories used for further analysis. The image in the legend shows the thresholds used to divide the categories (as square shapes).
Table 2. Standard error of measurement and minimal detectable change for the identification of pain area and for the barycenter distance for each device compared with the theoretical value.

Device model | Area SEMa (pixels²) | Area MDCb (pixels²) | Barycenter distance SEM (pixels) | Barycenter distance MDC (pixels)
Sharp MX-4070 | 262 | 513 | 1.5 | 2.9
Canon Lide 220 | 348 | 682 | 1.5 | 3.0
HP Envy 4500 | 219 | 429 | 1.5 | 3.0
iPhone—vFlat | 134 | 262 | 1.0 | 2.1
Galaxy S10 Lite—vFlat | 172 | 336 | 1.4 | 2.6
Armor X—vFlat | 182 | 356 | 1.3 | 2.5
Armor X—TapScanner | 247 | 485 | 2.1 | 4.2
Armor X—Simple Scan | 251 | 491 | 2.6 | 5.1
Armor X—Fast Scanner | 287 | 563 | 2.4 | 4.8
Armor X—CamScanner | 227 | 444 | 2.0 | 3.8
Armor X—TurboScan | 505 | 989 | 4.0 | 7.8

aSEM: standard error of measurement.

bMDC: minimal detectable change.

Figure 6. (A) Distribution of errors in identifying the pain spot areas expressed in percentages. The 3-color box and whisker plots for each device represent the distribution of area error for each of the 3 categories (small, medium, and large pain spots). (B) Correlation between percentage of error and spot size (the regression line is indicated as a dashed line).
Figure 7. Distribution of errors in the location of the pain spots expressed in pixels. The 3-color box and whisker plots for each device represent the distribution of barycenter distance error for each of the 3 categories (small, medium, and large pain spots). The image of the eye shows the actual dimension of 5 pixels in the full body chart (chart dimensions: 2048×1536 pixels).

Principal Findings

The results of this study show that pen-and-paper drawings can be imported and processed with negligible differences using different devices. To our knowledge, no other studies have focused on the acquisition of pen-on-paper drawings using mobile devices. Before conducting this study, we were already using our platform extensively and asking patients to use a red marker, because we suspected that blue markers would be mistaken for black. Indeed, as expected, red is the best choice for multiple reasons: red is associated with the inflammatory process, so it is easy for a patient to visualize their own pain as a red spot (eg, compared with green or blue). Moreover, markers sold as “red” are very close to the theoretical value of (255, 0, 0), while markers sold as “green” or “blue” can have darker or lighter shades.

Surprisingly, in our results, blue and green were identified by only a few devices (mostly flatbed scanners). One reason could be the printer ink, whose color was slightly different from what was expected. The illumination of the room could also have an impact, since the light emitted by different bulbs or neon tubes can carry different wavelengths in different proportions. The light sensors of cameras and scanners can have different sensitivities to different wavelengths, and the red-light sensors probably have higher sensitivities.

When observing the printed page, we noticed that the blue ink was slightly darker than what we observed on the PC monitor, but it was difficult to objectively evaluate which one was correct, as we did not have a gold standard for each color. In summary, the results confirmed our initial prediction: the red marker is the optimal choice for PDs. However, in cases where the patient does not have a red marker available, we recommend using a flatbed scanner to generate the PDF for import into the platform. This approach helps minimize the bias resulting from external factors such as lighting or camera sensor sensitivity.

Regarding the accuracy of pain spot identification, all the devices showed similar performance when red was used. The cheapest flatbed scanner showed larger errors, probably due to distortion of the image: when observing the digital image, we noticed that the proportions were slightly distorted (perhaps due to the calibration of the motor or friction in the transmission chain). For this reason, the alignment process of the algorithm could not perfectly align the 4 corner markers; thus, the pain spot locations and extents had larger errors.

As expected, smaller pain spots showed larger percentage errors, because of the distortion of pain spots introduced by the erosion process, but no differences in barycenter location error. The choice of app for the mobile device had a significant effect on the percentage errors. In particular, the vFlat app showed better results than the others (when installed on the cheapest mobile device), leading to results comparable with those of higher-priced mobile devices. The vFlat app includes an algorithm that recognizes the corners of the paper and compensates for the distortions of the camera and even the distortions due to bent paper. For this reason, its results were as good as those of professional flatbed scanners (<5% error). In general, the errors in the identification of pain extent (<5%) and pain location (<5 pixels) were much smaller than the precision with which a subject can draw or identify their own pain and thus were below the minimum clinical significance of PDs [17,37,38].

Limitations

This study has some limitations that may introduce bias into the results. First, the acquisitions made with mobile devices were not conducted under controlled lighting conditions. Although all acquisitions took place during daylight hours without direct sunlight on the paper, variations in the time of day and weather conditions could have affected the colors identified by the devices. In addition, we did not calibrate the white level of the mobile phone cameras. While some apps offered advanced camera settings for optimizing virtual scanners, we chose to use the default settings to stay as close as possible to a real-life environment. As a result, we did not test under unusual lighting conditions (eg, candlelight, colored lamps, or neon lights).

Another limitation is that we could not directly compare the performance of our platform with other existing methods, since the body charts are specific to each existing app.

Furthermore, we used only 1 printer to print all the artificial PDs, which introduces potential bias. The colored circles in the first part of the study were positioned within the body chart but in different locations. This variation in placement could affect the results, as colors farther from the center of the paper may experience greater distortion due to misalignment. However, this approach was necessary to avoid printing an excessive number of pages.

We decided to use artificial PDs because we conducted 2 studies in parallel. The first study involved actual PDs generated by humans [36], while in this study, we wanted to test different colors covering the entire RGB space. In addition, the location of pain spots in PDs generated by humans depends on the pathology of the patients, whereas here we preferred a uniform distribution of pain spots with a priori known sizes and locations. Both studies showed that the platform performs excellently, but in this study, we were able to quantify the error because we knew the theoretical pain spot areas.

The sample size of mobile devices and flatbed scanners is small. However, the objective of the study was not to provide an exhaustive sample of devices but rather to demonstrate that even inexpensive devices are sufficient for accurately acquiring paper PDs.

Similarly, the sample of scanning apps is limited, and some of them were no longer available for free at the time of manuscript submission. The app market is continuously evolving, with new apps being released regularly. Our intention was to test a selection of free virtual scanners available in the Play Store, and again, our aim was to show that various apps perform similarly.

Conclusions

The system has already been tested in real clinical settings and was shown to be easy to implement, easy to use, and well accepted. The acquisition of paper PDs using the proposed platform has been demonstrated to be robust and reliable across a wide range of scanning devices. The accuracy of pain extent and location analysis consistently falls within the measurement error range of PDs. The proposed algorithm will enable the use of PD analysis in various clinical settings.

Authors' Contributions

CC participated in the conceptualization, methodology, software, data curation, and writing—original draft preparation. GL participated in the software and validation. NB led the visualization and investigation. VG participated in the investigation. MD contributed to data curation. EK participated in writing—review and editing. PM did the supervision. AER participated in project administration. MB participated in the conceptualization, funding acquisition, and writing—reviewing and editing.

Conflicts of Interest

None declared.

  1. Puntillo F, Giglio M, Paladini A, et al. Pathophysiology of musculoskeletal pain: a narrative review. Ther Adv Musculoskelet Dis. 2021;13:1759720X21995067. [CrossRef] [Medline]
  2. Ekman EF, Koman LA. Acute pain following musculoskeletal injuries and orthopaedic surgery: mechanisms and management. Instr Course Lect. 2005;54:21-33. [Medline]
  3. Graven-Nielsen T. Mechanisms and manifestations in musculoskeletal pain: from experimental to clinical pain settings. Pain. Nov 1, 2022;163(Suppl 1):S29-S45. [CrossRef] [Medline]
  4. Roulin MJ, Ramelet AS. Pain indicators in brain-injured critical care adults: an integrative review. Aust Crit Care. May 2012;25(2):110-118. [CrossRef] [Medline]
  5. Shafshak TS, Elnemr R. The visual analogue scale versus numerical rating scale in measuring pain severity and predicting disability in low back pain. J Clin Rheumatol. Oct 1, 2021;27(7):282-285. [CrossRef] [Medline]
  6. Galer BS, Jensen MP. Development and preliminary validation of a pain measure specific to neuropathic pain: the Neuropathic Pain Scale. Neurology. Feb 1997;48(2):332-338. [CrossRef] [Medline]
  7. Gracely RH, Kwilosz DM. The Descriptor Differential Scale: applying psychophysical principles to clinical pain assessment. Pain. Dec 1988;35(3):279-288. [CrossRef] [Medline]
  8. McCaffery M, Pasero C. Pain Clinical Manual. 2nd ed. Mosby; 1999. ISBN: 978-0815156093
  9. Melzack R, Katz J. McGill Pain Questionnaire. In: Encyclopedia of Pain. Springer; 1971:1792-1794. [CrossRef]
  10. Wong DL, Hockenberry MJ, Wilson D, Winkelstein ML, Schwartz P. Wong’s Essentials of Pediatric Nursing. 6th ed. Mosby; 2001. ISBN: 978-0-323-00989-8
  11. Shaballout N, Neubert TA, Boudreau S, Beissner F. From paper to digital applications of the pain drawing: systematic review of methodological milestones. JMIR Mhealth Uhealth. Sep 5, 2019;7(9):e14569. [CrossRef] [Medline]
  12. Schmid AB, Ridgway L, Hailey L, et al. Factors predicting the transition from acute to persistent pain in people with ‘sciatica’: the FORECAST longitudinal prognostic factor cohort study protocol. BMJ Open. Apr 5, 2023;13(4):e072832. [CrossRef] [Medline]
  13. Suvinen TI, Kemppainen P, Le Bell Y, Kauko T, Forssell H. Assessment of pain drawings and self-reported comorbid pains as part of the biopsychosocial profiling of temporomandibular disorder pain patients. J Oral Facial Pain Headache. Oct 2016;30(4):287-295. [CrossRef] [Medline]
  14. Unruh A, McGrath P, Cunningham JS, Humphreys P. Childrenʼs drawings of their pain. Pain. Dec 1983;17(4):385-392. [CrossRef] [Medline]
  15. Grunnesjö M, Bogefeldt J, Blomberg S, Delaney H, Svärdsudd K. The course of pain drawings during a 10-week treatment period in patients with acute and sub-acute low back pain. BMC Musculoskelet Disord. Dec 11, 2006;7:65. [CrossRef] [Medline]
  16. Margolis RB, Tait RC, Krause SJ. A rating system for use with patient pain drawings. Pain. Jan 1986;24(1):57-65. [CrossRef] [Medline]
  17. Barbero M, Moresi F, Leoni D, Gatti R, Egloff M, Falla D. Test-retest reliability of pain extent and pain location using a novel method for pain drawing analysis. Eur J Pain. Sep 2015;19(8):1129-1138. [CrossRef] [Medline]
  18. Balasch-Bernat M, Dueñas L, Aguilar-Rodríguez M, et al. The spatial extent of pain is associated with pain intensity, catastrophizing and some measures of central sensitization in people with frozen shoulder. J Clin Med. Dec 28, 2021;11(1):154. [CrossRef] [Medline]
  19. Ginzburg BM, Merskey H, Lau CL. The relationship between pain drawings and the psychological state. Pain. Nov 1988;35(2):141-146. [CrossRef] [Medline]
  20. Türp JC, Kowalski CJ, O’Leary N, Stohler CS. Pain maps from facial pain patients indicate a broad pain geography. J Dent Res. Jun 1998;77(6):1465-1472. [CrossRef] [Medline]
  21. Ali SM, Lau WJ, McBeth J, Dixon WG, van der Veer SN. Digital manikins to self-report pain on a smartphone: a systematic review of mobile apps. Eur J Pain. Feb 2021;25(2):327-338. [CrossRef] [Medline]
  22. Boudreau SA. Visualizing and quantifying spatial and qualitative pain sensations. Scand J Pain. Oct 26, 2022;22(4):681-683. [CrossRef] [Medline]
  23. Kanellopoulos AK, Kanellopoulos EK, Dimitriadis Z, et al. Novel software for pain drawing analysis. Cureus. Dec 2021;13(12):e20422. [CrossRef] [Medline]
  24. Neubert TA, Dusch M, Karst M, Beissner F. Designing a tablet-based software app for mapping bodily symptoms: usability evaluation and reproducibility analysis. JMIR Mhealth Uhealth. May 30, 2018;6(5):e127. [CrossRef] [Medline]
  25. Shaballout N, Aloumar A, Neubert TA, Dusch M, Beissner F. Digital pain drawings can improve doctors’ understanding of acute pain patients: survey and pain drawing analysis. JMIR Preprints. Preprint posted online on Jun 27, 2018. [CrossRef]
  26. Dixit A, Lee M. Quantification of digital body maps for pain: development and application of an algorithm for generating pain frequency maps. JMIR Form Res. Jun 24, 2022;6(6):e36687. [CrossRef] [Medline]
  27. Galve Villa M, Palsson TS, Cid Royo A, Bjarkam CR, Boudreau SA. Digital pain mapping and tracking in patients with chronic pain: longitudinal study. J Med Internet Res. Oct 26, 2020;22(10):e21475. [CrossRef] [Medline]
  28. Egsgaard LL, Christensen TS, Petersen IM, Brønnum DS, Boudreau SA. Do gender-specific and high-resolution three dimensional body charts facilitate the communication of pain for women? A quantitative and qualitative study. JMIR Hum Factors. Jul 20, 2016;3(2):e19. [CrossRef] [Medline]
  29. Campbell N, Ali F, Finlay AY, Salek SS. Equivalence of electronic and paper-based patient-reported outcome measures. Qual Life Res. Aug 2015;24(8):1949-1961. [CrossRef] [Medline]
  30. Noyes JM, Garland KJ. Computer- vs. paper-based tasks: are they equivalent? Ergonomics. Sep 2008;51(9):1352-1375. [CrossRef] [Medline]
  31. Touvier M, Méjean C, Kesse-Guyot E, et al. Comparison between web-based and paper versions of a self-administered anthropometric questionnaire. Eur J Epidemiol. May 2010;25(5):287-296. [CrossRef] [Medline]
  32. SYP Dashboard. SUPSI - Sketch Your Pain. 2020. URL: https://syp.spslab.ch [Accessed 2024-08-19]
  33. Luque-Suarez A, Falla D, Barbero M, et al. Digital pain extent is associated with pain intensity but not with pain-related cognitions and disability in people with chronic musculoskeletal pain: a cross-sectional study. BMC Musculoskelet Disord. Jul 30, 2022;23(1):727. [CrossRef] [Medline]
  34. Sketch Your Pain—tablet. SUPSI - Sketch Your Pain. 2020. URL: https://syp.spslab.ch/tablet [Accessed 2024-08-19]
  35. Canny J. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell. Nov 1986;8(6):679-698. [Medline]
  36. Barbero M, Cescon C, Schneebeli A, et al. Reliability of the pen-on-paper pain drawing analysis using different scanning procedures. J Pain Symptom Manage. Feb 2024;67(2):e129-e136. [CrossRef] [Medline]
  37. Abichandani D, Barbero M, Cescon C, et al. Can people with chronic neck pain recognize their own digital pain drawing? Pain Physician. Mar 2020;23(2):E231-E240. [Medline]
  38. Leoni D, Falla D, Heitz C, et al. Test-retest reliability in reporting the pain induced by a pain provocation test: further validation of a novel approach for pain drawing acquisition and analysis. Pain Pract. Feb 2017;17(2):176-184. [CrossRef] [Medline]


ICC: intraclass correlation coefficient
PD: pain drawing
RGB: red, green, and blue
SEM: standard error of measurement


Edited by Lorraine Buis; submitted 26.09.23; peer-reviewed by Mohamed Estai, Parisa Gazerani; final revised version received 22.04.24; accepted 13.05.24; published 27.08.24.

Copyright

© Corrado Cescon, Giuseppe Landolfi, Niko Bonomi, Marco Derboni, Vincenzo Giuffrida, Andrea Emilio Rizzoli, Paolo Maino, Eva Koetsier, Marco Barbero. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 27.8.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.