
Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/41551.
Predictive Dispatch of Volunteer First Responders: Algorithm Development and Validation

Original Paper

1Department of Management, Hadassah Academic College, Jerusalem, Israel

2School of Public Health, Drexel University, Philadelphia, PA, United States

3School of Information, University of Michigan, Ann Arbor, MI, United States

4The Graduate School of Business Administration, Bar-Ilan University, Ramat Gan, Israel

Corresponding Author:

Michael Khalemsky, PhD

Department of Management

Hadassah Academic College

ha-Neviim 37

Jerusalem, 9101001

Israel

Phone: 972 26291941

Email: michaelkh@hac.ac.il


Background: Smartphone-based emergency response apps are increasingly being used to identify and dispatch volunteer first responders (VFRs) to medical emergencies to provide faster first aid, which is associated with better prognoses. Volunteers’ availability and willingness to respond are uncertain, leading in recent studies to response rates of 17% to 47%. Dispatch algorithms that select volunteers based on their estimated time of arrival (ETA) without considering the likelihood of response may be suboptimal due to a large percentage of alerts wasted on VFRs with shorter ETA but a low likelihood of response, resulting in delays until a volunteer who will actually respond can be dispatched.

Objective: This study aims to improve the decision-making process of human emergency medical services dispatchers and autonomous dispatch algorithms by presenting a novel approach for predicting whether a VFR will respond to or ignore a given alert.

Methods: We developed and compared 4 analytical models to predict VFRs’ response behaviors based on emergency event characteristics, volunteers’ demographic data and previous experience, and condition-specific parameters. We tested these 4 models using 4 different algorithms applied on actual demographic and response data from a 12-month study of 112 VFRs who received 993 alerts to respond to 188 opioid overdose emergencies. Model 4 used an additional dynamically updated synthetic dichotomous variable, frequent responder, which reflects the responder’s previous behavior.

Results: The highest accuracy (260/329, 79.1%) of prediction that a VFR will ignore an alert was achieved by 2 models that used events data, VFRs’ demographic data, and their previous response experience, with slightly better overall accuracy (248/329, 75.4%) for model 4, which used the frequent responder indicator. Another model that used events data and VFRs’ previous experience but did not use demographic data provided a high-accuracy prediction (277/329, 84.2%) of ignored alerts but a low-accuracy prediction (153/329, 46.5%) of responded alerts. The accuracy of the model that used events data only was unacceptably low. The J48 decision tree algorithm provided the best accuracy.

Conclusions: VFR dispatch has evolved in the last decades, thanks to technological advances and a better understanding of VFR management. The dispatch of substitute responders is a common approach in VFR systems. Predicting the response behavior of candidate responders in advance of dispatch can allow any VFR system to choose the best possible response candidates based not only on ETA but also on the probability of actual response. The integration of the probability to respond into the dispatch algorithm constitutes a new generation of individual dispatch, making this one of the first studies to harness the power of predictive analytics for VFR dispatch. Our findings can help VFR network administrators in their continual efforts to improve the response times of their networks and to save lives.

JMIR Mhealth Uhealth 2023;11:e41551

doi:10.2196/41551

Keywords



Introduction

Background

Emergency response apps, commonly smartphone based, are increasingly being used to identify and dispatch volunteer first responders (VFRs) to the location of a medical emergency [1]. Automated dispatch algorithms generally rely on a simple estimated time of arrival (ETA) calculation based on the locations of the VFRs and the incident as well as the known modes of transport. A key aspect lacking in these algorithms is a consideration of the likelihood of response; for instance, given a set of potential VFRs with equivalent ETAs, which subset should be alerted to maximize the likelihood of response? The automated dispatch of VFRs to medical emergencies is suboptimal owing to a large percentage of alerts wasted on VFRs with shorter ETA but a low likelihood of response. This results in delays until a volunteer who will actually respond can be identified and dispatched. Using actual demographic and response data taken from a 12-month study of 112 VFRs alerted to respond to opioid overdose emergencies, we applied a series of analytical methods and advanced classification models to learn and predict volunteer response behaviors. Our findings can be used to improve dispatch algorithms in VFR networks to optimize dispatch decisions and increase the likelihood of timely emergency responses.

Medical Emergencies

A medical emergency is an acute injury or illness that can result in death or long-term health complications [2]. Some common medical emergencies include out-of-hospital cardiac arrest (OHCA), severe trauma, opioid overdose, and anaphylaxis. OHCA is a leading cause of death worldwide [3], with a poor survival rate (only 5.6% in adults) [4]. Major trauma is the sixth leading cause of death worldwide [5,6]. Opioid overdose is a severe public health problem that has been consistently rising for the past 20 years and in the United States is the leading cause of accidental death [7]. The incidence of anaphylaxis ranges from 1.5 to 7.9 per 100,000 population per year in Europe [8,9].

Networks of VFRs

The immediate provision of first aid is crucial in lowering mortality and improving long-term prognosis, particularly in regard to OHCA [10-12] and opioid overdose events [13]. Emergency medical services (EMS) are the primary first aid provider [14,15], but EMS response times vary significantly among countries and geographies [16,17]. Interventions to achieve faster response times include the deployment of automatic external defibrillators (AEDs) in public places [18-21] and the establishment of local networks of VFRs [22-30]. Recently, there was a concerted effort to use smartphone apps for faster emergency response, such as PulsePoint, HelpAround, Heartrunner, and UnityPhilly. An extensive review of emergency response apps can be found in the study by Gaziel-Yablowitz and Schwartz [31].

An emergency response community (ERC) [32], a subtype of a VFR network, is a social network of patients who are prescribed to carry life-saving medication for themselves and can potentially help other patients who are without their medication in a medical emergency. Two projects that apply the ERC approach are the subjects of recent field studies: EPIMADA, which focused on patients at risk of anaphylaxis and their parents [33]; and UnityPhilly, which focuses on people who have experienced an opioid overdose [34].

Willingness to Respond, Barriers, and Facilitators

Once a person becomes a volunteer, they are expected to respond if available when a relevant event occurs. However, the actual rates of response to emergency alerts are far from 100%. Brooks et al [35] reported a response rate of 23% among PulsePoint volunteers. In a recent study, the willingness of cardiopulmonary resuscitation (CPR)–trained bystanders to respond to an OHCA event was 46.6% [36]. Another study analyzed barriers to receiving notifications and reported that 32% of the responders who were sent notifications did not receive the notification because, for example, they were away from their device (21%), their device was switched off (8%), or their device was out of network range (4%) [35]. Stress levels among responders varied for different medical conditions, different locations, and different demographic groups [37]. Younger age, higher education level, shorter time since the last CPR training, and cardiac arrest event in a public location were good predictors of bystanders’ greater willingness to perform CPR. The main reasons for not performing CPR were panic, the perception of bystanders that they are not able to perform CPR correctly, and a fear of hurting the patient [38]. Familial experiences of receiving CPR were associated with an increase in responders’ willingness to perform CPR [39]. The UnityPhilly study, which established a network of volunteers to provide naloxone to those experiencing an opioid overdose, reported that 17% of the alerted volunteers accepted the alert, and 11.9% of the alerted volunteers arrived at the scene [34].

Dispatch Algorithms and Decision-Making

Complexity of VFR Dispatch and Decision-Making

The complexity of VFR dispatch stems from 2 sources: unknown resource location and uncertain response. Emergency response services that try to optimize their own resources to maximize their effectiveness can determine the allocation of their resources, such as ambulance dispatch stations or police patrol districts, subject to constraints (eg, budgets) [40-42]. The administrators of a VFR network are unable to plan and control the location of their resources because VFRs perform their daily activities until called to action: they can be anywhere, enter and exit the area that the network covers, switch on and off their mobile phones, and so on. In addition, although ambulance staff or a police patrol are expected to respond to any event that they are dispatched to, VFRs decide for themselves whether to respond to a specific event.

Usual Location–Based Dispatch Approach Using Pagers and SMS Text Messages

In a typical location-based approach, VFRs are alerted based on their usual location (eg, home or work address) and not their actual location at the moment of the alert. VFRs may not provide any feedback to the system regarding their availability to respond to the specific event and just show up on the scene if they can; for example, this approach was used by Zijlstra et al [43] who sent SMS text messages to volunteers living within a 1000-meter radius of an OHCA event.

Current Location–Based Dispatch Approach

A current location–based dispatch approach is based on a smartphone app that continuously sends VFRs’ locations (eg, geospatial coordinates) to a central server. When an emergency event is registered in the system, the dispatch algorithm selects volunteers based on their distance from the scene or, in a more advanced version, based on their ETA [44]. Such apps can also allow VFRs to set their availability status to control for their commitment, which was found to be an important factor of VFRs’ willingness to volunteer [45]. Location-based dispatch is widely used in VFR networks [34,36,46,47]. Usually, location-based algorithms dispatch >1 volunteer, if available, but still limit the number of volunteers who are dispatched to prevent burnout and a decrease in self-efficacy. Sending a large number of responders to each event can lead to the “diffusion of responsibility” phenomenon and reduce willingness to respond [48].
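To make the selection step concrete, the following is a minimal Python sketch of distance-based candidate selection. The straight-line distance, the fixed walking speed, and the cap on the number of alerted volunteers are simplifying assumptions for illustration; production systems typically rely on routing services and per-volunteer transport modes.

```python
# A minimal sketch of current location-based candidate selection, assuming
# straight-line distance and a fixed walking speed (illustrative only).
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Volunteer:
    volunteer_id: str
    lat: float
    lon: float
    available: bool = True  # self-reported availability status

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two coordinates."""
    r = 6_371_000  # Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def select_by_eta(volunteers, event_lat, event_lon,
                  walking_speed_mps=1.4, max_dispatched=3):
    """Rank available volunteers by estimated time of arrival (ETA) and
    return at most max_dispatched candidates to alert."""
    candidates = []
    for v in volunteers:
        if not v.available:
            continue
        eta_s = haversine_m(v.lat, v.lon, event_lat, event_lon) / walking_speed_mps
        candidates.append((eta_s, v))
    candidates.sort(key=lambda pair: pair[0])
    return [v for _, v in candidates[:max_dispatched]]
```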

Autonomous Dispatch Versus EMS-Mediated Dispatch

Some VFR networks are managed by EMS and are integrated into their business processes. In this case, the dispatch of VFRs is at the discretion of a human dispatcher, and the VFR system serves as a decision support system that provides the dispatcher with the necessary information, such as location and ETA, of volunteers that can be compared with the location and ETA of an ambulance. Once alerts are sent, the system constantly updates its recommendations based on the feedback from the alerted volunteers. This approach is used by the Life Guardians project managed by Israeli National EMS [46] and in several AED and CPR projects [36].

An alternative approach is autonomous dispatch, where VFRs are selected and alerted by an autonomous system according to a predefined business logic. The system can dispatch additional volunteers if the alerted volunteers ignore the alert, refuse to respond, or linger on the way. This approach was used by the UnityPhilly project [34] and the PulsePoint project [35].

Both approaches can be either registered (usual or expected) location based or current (dynamic) location based; for example, UnityPhilly uses a current location–based autonomous dispatch approach.

Integration of Volunteers’ Feedback Into Dispatch Algorithms

Many smartphone apps for VFR networks allow alerted responders to accept or decline the alert. Such feedback reduces the dispatcher's uncertainty: if a volunteer declines an alert, the dispatch algorithm can reconsider its selection of responders and send additional alerts to substitute volunteers, that is, volunteers who were not initially selected by the algorithm (eg, because they had a longer ETA) but who can be dispatched to achieve the target number of responders if ≥1 of the initially selected volunteers decline or ignore the alert. If an alerted volunteer ignores the alert and does not provide any feedback, the system waits for a set period of time, treats the nonresponse as a “no,” and acts accordingly. Figure 1 depicts this process.

Figure 1. The dispatch process and feedback from alerted volunteers.
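The following Python sketch illustrates the feedback loop described above; it is not the UnityPhilly implementation. The status names, the 2-minute timeout, the polling interval, and the target number of responders are illustrative assumptions.

```python
# A simplified sketch of the dispatch-and-feedback loop: alerted volunteers may
# accept, decline, or stay silent; silence beyond a timeout is treated as a "no"
# and a substitute is alerted. All names and constants are illustrative.
import time

ACCEPTED, DECLINED, NO_ANSWER = "accepted", "declined", "no_answer"

def dispatch_with_feedback(ranked_candidate_ids, send_alert, get_status,
                           target_responders=2, timeout_s=120, poll_s=5):
    """ranked_candidate_ids: volunteer IDs ordered by ETA (closest first).
    send_alert(vid) pushes an alert; get_status(vid) returns the latest feedback."""
    queue = list(ranked_candidate_ids)
    alerted = {}  # volunteer ID -> time the alert was sent
    while True:
        now = time.time()
        accepted = [v for v in alerted if get_status(v) == ACCEPTED]
        if len(accepted) >= target_responders:
            return accepted
        # Volunteers who are still silent but whose timeout has not yet expired.
        pending = [v for v, t in alerted.items()
                   if get_status(v) == NO_ANSWER and now - t < timeout_s]
        # DECLINED alerts and expired silences free slots for substitutes.
        open_slots = target_responders - len(accepted) - len(pending)
        while open_slots > 0 and queue:
            substitute = queue.pop(0)
            send_alert(substitute)
            alerted[substitute] = now
            pending.append(substitute)  # newly alerted volunteers start as silent
            open_slots -= 1
        if not queue and not pending:
            return accepted  # no further feedback can change the outcome
        time.sleep(poll_s)
```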

Profiling

Profiling is “the process of generating profiles from obtained data, associated to one or multiple subjects” [49]. Profiling of people is widely used in several areas, such as targeted advertising [50], donation solicitation [51], and volunteer recruitment [52]. Elsner et al [22,49] proposed to use the profiling of volunteers in dispatch algorithms to enhance the prediction of the volunteers’ position, trajectory, and constraints. In this study, we used classification techniques to generate different behavioral profiles of volunteers that serve as independent variables for predicting responses to alerts.

The Purpose of the Study

The challenge of improving volunteer dispatch speed and response rates is recognized in fields ranging from food rescue operations [53] to OHCA response, in which the optimization of the responder network is now taking center stage [54,55]. Studies such as the one by Gregers et al [56] have attempted to determine the optimal number of responders to dispatch, yet such studies base response viability solely on current ETA with no consideration of responder history or other characteristics that could improve responsiveness. Currently used dispatch algorithms that select volunteers based on their ETA without considering the likelihood of response may be suboptimal owing to a large percentage of alerts wasted on VFRs with shorter ETA but a low likelihood of response. We build on prior work on VFR optimization by presenting a novel approach for predicting whether a VFR will respond to, or ignore, a given alert. As such, the enhanced algorithm reduces the time that the system unnecessarily spends waiting for a response from volunteers who are likely to ignore the alert. The amount of time wasted depends on the specific dispatch algorithm; for example, in UnityPhilly trials, the system waited 2 minutes before dispatching a substitute volunteer. A faster dispatch of substitute volunteers has the potential to reduce the response time of the VFR network as a whole and improve its effectiveness. However, dispatching more VFRs than necessary to secure an effective emergency response (overdispatch) can have a negative impact on future willingness to respond [48].


Methods

Data

We used data from the UnityPhilly study, which piloted a smartphone-based app for requesting and providing ERC assistance to those suspected of experiencing an opioid overdose in the Kensington neighborhood of Philadelphia, PA, over 12 months from March 1, 2019, to February 28, 2020. Kensington has Philadelphia’s highest concentration of overdose deaths and is also home to Prevention Point Philadelphia, which is a city-sanctioned syringe exchange program that also distributes naloxone and provides naloxone training. Recruitment occurred via face-to-face screening at Prevention Point’s drop-in center, Prevention Point’s substance use disorder treatment van, street intercepts, and chain referrals from enrolled participants. The inclusion criteria for participants were that they lived, worked, or used drugs within 4 zip codes around the Kensington neighborhood (19122, 19125, 19133, and 19134); possessed a smartphone with a data plan; were willing to have location and movements tracked via an app; were willing to carry naloxone; and were aged ≥18 years. Sampling purposefully targeted a mix of members of the Kensington community who used opioids nonmedically in the past 30 days and those who reported no nonmedical opioid use in the past 30 days. The study recruited 112 volunteers who were almost equally divided between people who reported opioid use in the past 30 days at baseline (n=57, 50.9%) and community members, that is, people who reported no opioid use at baseline (n=55, 49.1%).

At a research storefront in Kensington, the study enrollment procedure included obtaining written informed consent, the recording of contact information, structured baseline interviews, app installation and training, and naloxone distribution and training. During the informed consent procedure, participants agreed to participate in a baseline interview, monthly follow-up interviews, and brief surveys after overdose incidents. Project staff installed the app on the participant’s smartphone and provided app training, which included watching an animated training video explaining app use and practicing using the app to send and receive alerts with project staff. Naloxone training included recognizing the signs of opioid overdose, practicing rescue breathing on a CPR dummy, and demonstrating how to administer intranasal naloxone. All participants received a kit containing 2 doses of intranasal naloxone. The UnityPhilly app enabled them to report opioid overdose events and to receive notifications about opioid overdose events reported by other members in their proximity. Participants received US $25 in cash for the baseline interview and US $5 for each completed follow-up monthly interview or incident survey. No compensation was offered or given for the use of the app to signal or respond to overdose incidents. More details about the study are available in prior publications [34].

The data used for this analysis consist of 4 components (Textbox 1 and Figure 2).

Of the 112 volunteers recruited to UnityPhilly, 27 (24.1%) were completely inactive as either signaler or responder (ie, they did not send or respond to a single alert). Of the remaining 85 volunteers, 80 (94%) received at least 1 alert and were defined as responders, and 52 (61%) who signaled at least 1 event were defined as signalers (many volunteers served in both roles). Figure 3 presents the distribution of responders and signalers.

Events that were canceled by the signaler for any reason were considered false alarms. For this analysis, we excluded these events because we were not able to distinguish between alarms ignored by the responder and alarms that were canceled before the responder had a chance to respond. Figure 4 describes the sample.

We used alerts as a unit of analysis.

Textbox 1. The 4 components of the data used for analysis.

Event

  • This refers to an opioid overdose event. An event’s characteristics are true or false alarm, signaler, weekday or weekend, and day or night.

Signaler

  • This refers to a UnityPhilly user who witnesses an event and reports it to the system using the UnityPhilly app. A signaler’s characteristics are age, gender, housing status, employment status, naloxone carriage adherence before joining the UnityPhilly community, opioid overdose witnessing experience before joining the UnityPhilly community, and experience in administering naloxone to a person experiencing an overdose before joining the UnityPhilly community.

Responder

  • This refers to a UnityPhilly member who is selected by the UnityPhilly system (based on their location and estimated time of arrival) and notified in their UnityPhilly app about an event. The responder’s characteristics are the same as those of the signaler.

Alert

  • This refers to a notification sent to a specific responder about a specific event. The UnityPhilly app enables the responder to accept or decline an alert. However, many alerts are ignored, that is, neither accepted nor declined. An alert’s characteristics are distance between the potential responder and the event scene at the moment of the alert, the number of previous alerts received by the responder since joining UnityPhilly, the number of previous false alerts received by the responder since joining, the number of previous alerts received by the responder since joining that were initiated by the same signaler, the number of previous false alerts received by the responder since joining that were initiated by the same signaler, the number of previous responses by the responder since joining, the number of previous responses to false alerts received by the responder since joining, and the number of previous responses to false alerts initiated by the same signaler that were received by the responder since joining.
Figure 2. Entities in the UnityPhilly data set. M: many.
Figure 3. Distribution of responders and signalers in the UnityPhilly data set.
Figure 4. Sample used for this study. M: many.

Analytical Approach

We used multiple analytical methods to classify the behavior of each volunteer identified as being in the proximity of an overdose event. We integrated data on specific volunteers and events into the dispatch algorithm in such a way that for each dispatched volunteer who is most likely to ignore the alert, an additional volunteer is dispatched right away (if available), until the maximum number of volunteers to be dispatched is reached, or no more volunteers are available. Volunteers for whom the algorithm predicts a low probability of response are still dispatched and thus are given the chance to respond. Figure 5 depicts this process.
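As an illustration of this augmentation, the following Python sketch shows how a predicted probability of response could be folded into an ETA-ranked candidate list. The probability threshold, the cap on alerts, and the function names are assumptions for the sketch, not parameters of the study.

```python
# A minimal sketch of prediction-aware dispatch: predict_response_proba is any
# fitted classifier's probability estimate; the 0.5 threshold and max_alerts
# cap are illustrative choices.
def predictive_dispatch(eta_ranked_ids, predict_response_proba,
                        target_responders=2, max_alerts=6, threshold=0.5):
    """Alert the ETA-ranked candidates; whenever an alerted candidate is
    predicted likely to ignore the alert, immediately alert the next candidate
    as well, until max_alerts is reached or no candidates remain."""
    to_alert = []
    needed = target_responders
    for volunteer_id in eta_ranked_ids:
        if needed <= 0 or len(to_alert) >= max_alerts:
            break
        to_alert.append(volunteer_id)  # likely ignorers are still given the chance
        if predict_response_proba(volunteer_id) >= threshold:
            needed -= 1  # counted as an expected responder
        # otherwise: do not decrement, so the loop pulls in a substitute right away
    return to_alert
```

For example, with a target of 2 responders and the 3 closest candidates predicted to respond with probabilities 0.8, 0.2, and 0.7, all 3 would be alerted immediately rather than waiting for the second candidate's timeout to expire.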

We tested 4 models, based on different configurations of variables, to predict whether a given responder is likely to respond to a given event (Textbox 2 and Table 1).

Figure 5. Integration of the probability to respond into the dispatch algorithm. ETA: estimated time of arrival.
Textbox 2. The 4 models tested in this study.

Model 1

  • This model is based solely on historic events and alerts data, incorporating no other data related to the potential responders.

Model 2

  • This model is based on the events and alerts data, but it also integrates data on the responders’ patterns of behavior through their previous experience in the volunteer first responder network, including previous alerts and false alerts, and previous responses, including responses to false alerts.

Model 3

  • This model is based on the events and alerts data, as well as responders’ personal and demographic data, and ignores their previous experience in the network.

Model 4

  • This model is based on the events and alerts data, as well as responders’ personal and demographic data, and dynamically calculates the frequent responder indicator that represents the responder’s experience in the community before a specific alert (a minimal illustrative sketch of this rule follows this textbox). This indicator was calculated as follows:
    • <6 alerts: no
    • 6-10 alerts and response rate ≥50%: yes
    • 11-20 alerts and response rate ≥40%: yes
    • 21-30 alerts and response rate ≥30%: yes
    • ≥31 alerts and response rate ≥25%: yes
    • Otherwise: no
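The rule above can be written compactly; the following Python sketch is an illustrative implementation, with function and argument names of our choosing rather than from the study's software.

```python
# An illustrative implementation of the frequent responder rule listed in
# Textbox 2 (model 4); recalculated before each new alert.
def is_frequent_responder(previous_alerts: int, previous_responses: int) -> bool:
    """Return True ("yes") if the volunteer's alert count and response rate
    meet the thresholds of the frequent responder rule."""
    if previous_alerts < 6:
        return False
    response_rate = previous_responses / previous_alerts
    if previous_alerts <= 10:
        return response_rate >= 0.50
    if previous_alerts <= 20:
        return response_rate >= 0.40
    if previous_alerts <= 30:
        return response_rate >= 0.30
    return response_rate >= 0.25  # 31 or more previous alerts
```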
Table 1. Data used in each model.

Events and alerts data (weekday or weekend, day or night, and distance [m]): models 1, 2, 3, and 4

Responder’s previous experience in UnityPhilly (previous alerts, previous false alerts, previous alerts by the same signaler, previous false alerts by the same signaler, previous responses, previous responses to false alerts, and previous responses to false alerts by the same signaler): model 2

Responders’ demographic data (age, gender, housing status, and employment status): models 3 and 4

Responders’ condition-specific characteristics (naloxone carriage adherence, history of witnessing opioid overdoses before joining UnityPhilly, and history of administering naloxone before joining UnityPhilly): models 3 and 4

Frequent responder indicator (recalculated after each alarm): model 4


Classification

The classification analysis for all models was conducted using 4 classification algorithms suitable for binary classification: (1) the J48 decision tree algorithm, an extension of the C4.5 algorithm implemented in the Weka software (University of Waikato) used in this research; (2) random forest; (3) a neural network (multilayer perceptron); and (4) logistic regression. The J48 algorithm creates univariate decision trees for classification and provides an effective alternative to other classification methods. The choice of the best classification model was based on a combination of evaluation metrics, with the main interest being to identify the model that correctly classifies both answer classes (responded and ignored).
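As an illustration only, the following Python sketch sets up roughly analogous scikit-learn estimators; it approximates rather than reproduces the Weka setup used in the study. In particular, DecisionTreeClassifier with the entropy criterion stands in for J48/C4.5, and the hyperparameters shown are illustrative defaults.

```python
# Approximate scikit-learn analogs of the 4 algorithms (illustrative sketch).
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

classifiers = {
    "decision_tree_c45_like": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "multilayer_perceptron": MLPClassifier(max_iter=1000, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}
```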

We used 4 evaluation metrics: accuracy, F-score, precision, and recall. Accuracy is the overall percentage of correctly classified instances. The F-score is the harmonic mean of precision and recall and ranges from 0 (none of the instances were correctly classified) to 1 (all instances were correctly classified). Precision is the percentage of true positive classifications out of all positively classified instances. Recall is the percentage of correctly classified positive instances out of all positive instances. The trends found in this analysis are best explained by the differences in the recall metric among the classification algorithms and among the classes.

Because of the relatively small overall number of cases in the data set, we did not use a percentage split for the training and test sets for models 1 to 3; instead, we used 10-fold cross-validation. Model 4 includes the additional synthetic dichotomous variable, frequent responder, which reflects the previous behavior of the responder. The variable is dynamically updated; therefore, a responder can change their behavior several times throughout the research period, from being active to inactive or vice versa. Because the values of the frequent responder variable form a temporal sequence whose order, and the behavioral patterns it encodes, must be preserved, they cannot be treated as independent, and cross-validation could not be used for this classification analysis. For this reason, we split the data set into a training set with 66.9% (664/993) of the data and a test set with 33.1% (329/993) of the data.
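The following Python sketch illustrates the two evaluation protocols with scikit-learn. The random placeholder arrays merely stand in for the study's alert-level feature matrices and responded/ignored labels, and the J48-like tree is again approximated by a DecisionTreeClassifier.

```python
# A sketch of the two evaluation protocols, with random placeholder data
# standing in for the real alert-level features (993 alerts) and labels.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(993, 10))    # placeholder feature matrix
y = rng.integers(0, 2, size=993)  # placeholder labels: 1 = responded, 0 = ignored

def report(y_true, y_pred, label):
    print(f"{label}: accuracy={accuracy_score(y_true, y_pred):.3f}, "
          f"precision={precision_score(y_true, y_pred, zero_division=0):.3f}, "
          f"recall={recall_score(y_true, y_pred, zero_division=0):.3f}, "
          f"F-score={f1_score(y_true, y_pred, zero_division=0):.3f}")

clf = DecisionTreeClassifier(criterion="entropy", random_state=0)  # J48-like tree

# Models 1 to 3: 10-fold cross-validation over all alerts.
y_pred_cv = cross_val_predict(clf, X, y, cv=StratifiedKFold(n_splits=10))
report(y, y_pred_cv, "10-fold cross-validation")

# Model 4: chronological 66.9%/33.1% split, because the frequent responder
# indicator is recalculated after each alert and its temporal order must be kept.
split = int(len(y) * 0.669)
clf.fit(X[:split], y[:split])
report(y[split:], clf.predict(X[split:]), "chronological holdout")
```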

All 4 algorithms were used for a binary classification task in a baseline analysis that included only the events and alerts data (model 1 in Table 1). The results obtained provide the baseline for comparison with the models that incorporate additional responder-related data (models 2, 3, and 4 in Table 1). We claim that a model that considers the responders’ behavioral characteristics can improve the dispatch algorithm. In this kind of analysis, accurate prediction of nonresponse is more important than accurate prediction of response: misclassifying a volunteer who will ignore the alert as a responder delays the dispatch of a substitute responder, whereas misclassifying a volunteer who will respond as a nonresponder merely results in the dispatch of too many volunteers.

The comparison between all classification techniques and all evaluation metrics for the 4 models is presented in Multimedia Appendix 1.

Ethical Considerations

All study procedures were approved by the Drexel University Institutional Review Board and registered with ClinicalTrials.gov (NCT03305497). Study enrollment included written informed consent. All data used for this research were deidentified. Participants received US $25 in cash for the baseline interview and US $5 for each completed follow-up monthly interview or incident survey. No compensation was offered or given for use of the app to signal or respond to overdose incidents.


Results

The results of this study are derived from an analysis of emergency events, volunteer participants’ demographics, and behavior patterns.

Description of the Sample

Table 2 presents the characteristics of overdose events.

Table 3 presents the distribution and correlation of the responders’ characteristics. Cramér V was used for categorical variables, and Spearman ρ was used for ordinal variables. ANOVA tests for age differences among the different subgroups of categorical or ordinal variables did not reveal any significant differences at the 5% significance level.

Table 2. Description of overdose event characteristics (n=188).

Weekdays and weekends, n (%)a
  Weekday: 136 (72.3)
  Weekend: 52 (27.7)

Days and nights, n (%)a
  Day: 133 (70.7)
  Night: 55 (29.3)

Distance (meters; n=162b), mean (SD); median (IQR)c
  3326 (2784); 2595 (955.09-5567.75)

aCramér correlation between weekday/weekend and day/night is 0.006.

bFor 26 (13.8%) of the 188 overdose events, distance data were not available.

cDistance during weekdays: mean 3611 (SD 2871) meters; distance during weekends: mean 2537 (SD 2384) meters; P=.03; distance during the day: mean 3507 (SD 2724) meters; distance during the night: mean 2870 (SD 2910) meters; P=.19.

Table 3. Distribution and correlation of responders’ characteristics (n=80). For each characteristic, the correlation coefficient (r) and P value are given against, in order: gender, naloxone carriage adherence, homelessness, employment, history of witnessing an opioid overdose, history of administering naloxone, and age.

Agea
  r: 0.07, 0.18, 0.13, 0.14, −0.08, −0.07, 1
  P value: .54, .12, .26, .22, .52, .54, N/Ab

Gender
  Values, n (%): Male 35 (44); Female 44 (55); Intersex 1 (1)
  r: 1, 0.25, 0.42, 0.18, 0.19, 0.14, 0.07
  P value: N/Ab, .27, <.001, .25, .35, .58, .54

Naloxone carriage adherence
  Values, n (%): All the time 36 (45); Often 22 (28); Sometimes 10 (13); Seldom 2 (3); Never 10 (13)
  r: 0.25, 1, 0.37, 0.27, −0.18, −0.21, 0.18
  P value: .27, N/Ab, .02, .16, .05, .05, .12

Homelessness
  Values, n (%): Homeless 22 (28); Not homeless 58 (73)
  r: 0.42, 0.37, 1, 0.37, 0.22, 0.09, 0.13
  P value: <.001, .02, N/Ab, .004, .16, .46, .26

Employment
  Values, n (%): Part time 11 (14); Full time 18 (23); Unemployed 51 (64)
  r: 0.18, 0.27, 0.37, 1, 0.15, 0.14, 0.14
  P value: .25, .16, .004, N/Ab, .16, .58, .22

History of witnessing an opioid overdose (number of times)
  Values, n (%): ≤20 48 (60); 21-40 20 (25); >40 12 (15)
  r: 0.19, −0.18, 0.22, 0.15, 1, 0.63, −0.08
  P value: .35, .05, .16, .16, N/Ab, <.001, .52

History of administering naloxone (number of times)
  Values, n (%): ≤20 61 (81); 21-40 7 (9); >40 7 (9)
  r: 0.14, −0.21, 0.09, 0.14, 0.63, 1, −0.07
  P value: .58, .05, .46, .58, <.001, N/Ab, .54

aAge (y): mean 40.31 (SD 10.41); median 39.5 (IQR 32-47.75).

bNot applicable.

Significant correlations were found between gender and homelessness (P<.001) as well as between history of witnessing an opioid overdose and history of administering naloxone (P<.001).

Response Patterns

Textbox 3 and Figure 6 present how the alerted volunteers responded (true alarms only; n=993). Responders could change their decision.

Textbox 3. Volunteers’ response patterns.

No answer

  • Responder ignored the alert. This was the final status in 60.3% (599/993) of the alerts.

No go

  • Responder notified the system that they are not able to respond. This was the final status in 23% (228/993) of the alerts.

En route

  • Responder notified the system that they are on the way to the scene. This was the final status in 5.1% (51/993) of the alerts.

On scene

  • Responder notified the system that they are on the scene. This status can be set automatically by the system (based on the responder’s location) or manually by the responder. This was the final status in 2.6% (26/993) of the alerts.

Done

  • Responder performed the treatment. This was the final status in 7.9% (79/993) of the alerts.

Canceled dispatch

  • This was the final status in 1% (10/993) of the alerts.
Figure 6. Response patterns (refer to Textbox 3 for an explanation of the terms used in this figure).

Classification Analysis of Response Patterns

Figure 7 presents the ability of each model to predict the responder’s behavior. To compare model 4 with the other models, all models were tested using the test set of alerts (n=329).

For the test set, model 4 provided the best classification accuracy both overall and for answered alerts. Model 3 provided the same classification accuracy for ignored alerts, slightly lower accuracy overall, and lower accuracy for answered alerts. Model 2 provided the best classification accuracy for ignored alerts; however, its accuracy was lower overall and significantly lower for answered alerts. Model 1’s classification accuracy was the lowest.

Figure 8 presents the ability of models 1 to 3 to classify the responder’s behavior, using the full data set (n=993).

For the full set, model 3 provided the best classification accuracy. Model 2 had similar accuracy for ignored events and lower accuracy both overall and for answered events. Model 1’s classification accuracy was the lowest. Model 4 was not tested with the full set because the construction of the frequent responder variable requires training.

Figure 9 presents the J48 decision tree for model 4 for the test set.

Figure 7. Classification accuracy of models 1 to 4 using the test set (n=329).
Figure 8. Classification accuracy of models 1 to 3 using the full set (n=993).
Figure 9. J48 decision tree for model 4 for the test set.

The analysis of the classification tree reveals 5 possible routes to the response result: infrequent responders aged >54 years, frequent responders who administered naloxone <20 times, male frequent responders who administered naloxone 21 to 40 times, fully employed female frequent responders who administered naloxone 21 to 40 times, and unemployed female frequent responders who administered naloxone 21 to 40 times in situations where the distance to the scene was <272 meters.

It should be noted that the overall accuracy is not very high and that the classification output contains false-positive and false-negative statistical errors. A false-positive error occurs when an ignored alert is classified as a responded alert, and a false-negative error occurs when a responded alert is classified as an ignored alert.

Potential Time Savings

Substitute responders (responders who were not initially selected by the algorithm) were used in 73.4% (138/188) of the events. Substitute responders received 33.6% (334/993) of the alerts. Figure 10 presents the lengths of the delays (in min) before substitute responders were dispatched.

Figure 10. Time before substitute dispatch (n=334).

Factors Affecting Willingness to Respond to an Opioid Overdose Event

Table 4 presents the analysis of differences between alerts that were ignored and alerts that resulted in some responses (en route, no go, or on scene).

Significant differences between responded alerts and ignored alerts were found for the following variables: gender (higher response rate by male volunteers; P=.05), naloxone carriage adherence (P<.001), employment (higher response rate by volunteers who were unemployed; P<.001), age (slightly higher average age among volunteers who responded; P=.003), the number of previous alerts (higher among volunteers who responded; P=.003), previous false alerts (higher among volunteers who responded; P=.003), previous false alerts by the same signaler (lower among volunteers who responded; P=.02), previous responses (higher among volunteers who responded; P<.001), and previous responses to false alerts (higher among volunteers who responded; P<.001).

Table 4. Differences between responded alerts and ignored alerts (n=993). Values are given as responded alerts (n=394) / ignored alerts (n=599).

Weekdays and weekends, n (%): P=.49a
  Weekday: 289 (73.4) / 451 (75.3)
  Weekend: 105 (26.6) / 148 (24.7)

Days and nights, n (%): P=.44a
  Day: 282 (71.6) / 415 (69.3)
  Night: 112 (28.4) / 184 (30.7)

Sex, n (%): P=.05a
  Male: 182 (46.2) / 239 (39.9)
  Female: 212 (53.8) / 360 (60.1)

Naloxone carriage adherence, n (%): P<.001a,b
  All the time: 115 (29.2) / 241 (40.2)
  Most of the time: 174 (44.2) / 150 (25)
  Sometimes: 29 (7.4) / 59 (9.8)
  Seldom: 4 (1) / 17 (2.8)
  Never: 72 (18.3) / 132 (22)

Homelessness, n (%): P=.18a
  Yes: 68 (17.3) / 124 (20.7)
  No: 326 (82.7) / 475 (79.3)

Employment, n (%): P<.001a
  Part time: 18 (4.6) / 70 (11.7)
  Full time: 70 (17.8) / 110 (18.4)
  Unemployed: 306 (77.7) / 419 (69.9)

Age (y), mean (SD): 42.91 (11.86) / 40.47 (13.11); P=.003c
Previous alerts, mean (SD): 25.00 (20.96) / 21.09 (20.31); P=.003c
Previous alerts by the same signaler, mean (SD): 3.35 (5.12) / 3.52 (5.58); P=.63c
Previous false alerts, mean (SD): 7.70 (7.00) / 6.39 (6.46); P=.003c
Previous false alerts by the same signaler, mean (SD): 0.70 (1.35) / 0.96 (2.03); P=.018c
Previous responses, mean (SD): 14.27 (14.01) / 6.74 (10.40); P<.001c
Previous responses to false alerts, mean (SD): 1.39 (1.85) / 0.82 (1.59); P<.001c
Previous responses to false alerts by the same signaler, mean (SD): 0.13 (0.42) / 0.13 (0.50); P=.89c
Distance (m), mean (SD): 1947.24 (2290.24) / 1726.98 (2127.02); P=.16c

History of witnessed overdoses (number of times), n (%): P=.54a
  ≤20: 247 (62.7) / 382 (63.8)
  21-40: 80 (20.3) / 106 (17.7)
  >40: 67 (17) / 111 (18.5)

History of naloxone administration (number of times), n (%): P=.07a
  ≤20: 299 (86.4) / 453 (80.6)
  21-40: 14 (4) / 38 (6.8)
  >40: 33 (9.5) / 71 (12.6)

aP value for the chi-square test for the test of independence.

bP value for the Kendall τ test for ordinal variables.

cP value for the 2-tailed independent samples t test.


Discussion

Principal Findings

To the best of our knowledge, this is the first study that integrates the predictions of VFRs’ response behavior into dispatch algorithms. We found that volunteers’ past response behavior is the most influential predictor of future response behavior. Our findings suggest that the behavior-based approach can be applied to VFR dispatch to achieve a better response rate of the network as a whole.

Profiling of Responders and Personalization of VFR Dispatch

Model 1 used events and alerts data only and completely ignored any volunteer-related data. The ability of this model to predict volunteers’ behavior was extremely low—only approximately 0.9% (3/329) of the responded alerts were classified correctly. This model would lead to the dispatch of all available volunteers, resulting in burnout, a “diffusion of responsibility,” and low willingness to respond. We conclude that this model is unacceptable.

Model 2 assumed that the volunteers are completely anonymous and that the algorithm knows only their previous behavior in the VFR network. Using these data as well as events and alerts data, model 2 correctly predicted 84.2% (277/329) of the cases in the test set in which responders ignored an alert. Lower prediction accuracy for the full data set (738/993, 74.3%) can be explained by the inclusion of early events for which the model did not have enough data about previous behavior. The ability of model 2 to predict that a volunteer will respond to an event was lower (153/329, 46.5% for the test set and 570/993, 57.4% for the full data set). On the one hand, this model is expected to improve the response rate of the network, but, on the other hand, in half of the cases in which a volunteer responds, another volunteer would be dispatched unnecessarily. We conclude that this model should be used only if the responders are completely anonymous.

Model 3 used events and alerts data, responders’ demographics, their prior experience of witnessing an opioid overdose, their prior experience in administering naloxone, and their naloxone carriage adherence before joining the VFR network. This model’s ability to predict that a volunteer will ignore an alert was similar to that of model 2, but its ability to predict that a volunteer will respond to an event was higher (202/329, 61.4% in the test set and 698/993, 70.3% in the full data set). A closer look at the decision tree of this model reveals that the most influential variables were related to the volunteer’s experience before joining the network: naloxone carriage adherence and the provision of naloxone to those experiencing an overdose. We conclude that this model can be used if data about a volunteer’s app use behavior in the network are not currently available (eg, during the period between recruiting the volunteer and until they receive enough alerts).

Model 4 used all available data, including the frequency of events and alerts, volunteers’ demographics, their prior overdose witnessing and naloxone provision experience before joining the VFR network, and their response behavior in the VFR network (according to the frequent responder indicator recalculated after each event). The ability of this model to predict that a volunteer will ignore an alert was similar to that of model 3, but its ability to predict that a volunteer will respond to an event was higher (225/329, 68.4% in the test set). The decision tree presented in Figure 9 reveals that the frequent responder indicator was the most influential variable. We conclude that model 4 achieves the best prediction accuracy and should be preferred whenever the necessary data are available.

Generalizability of the Proposed Approach

The dispatch of substitute responders is relevant whenever the initially selected subset of the closest volunteers does not provide a response and is a common approach in VFR systems. However, valuable time is lost until nonresponse is identified. Although existing algorithms are based on technical variables such as distance from the scene and the ETA, this study introduces a completely new variable: volunteers’ behavior and their probability of responding to a specific alert. The demonstrated importance of considering multiple factors in volunteer demographics and behavioral characteristics and the insights from the models we have tested are applicable wherever volunteer dispatch optimization is important. Such challenges are found in areas as diverse as food rescue operations [53], OHCA response [54,56], and mass casualty events [57]. Following our approach, these domains and more may find value in testing different sets of demographic and behavioral factors. Predicting the response behavior of candidate responders in advance of dispatch can allow any VFR system to choose the best possible response candidates based not only on ETA or location but also on the probability of actual response. The potential time savings depend on the network-specific period of time until a nonresponsive volunteer is considered unavailable and a substitute responder is dispatched. The longer this time period, the greater the potential savings provided by a predictive dispatch algorithm.

The data used in our algorithm can be divided into 4 categories (Table 1): (1) event characteristics, (2) past responder behavior, (3) demographics, and (4) certain parameters specific to a medical condition relevant to the VFR network. The first 3 categories are directly generalizable because most responder mobilization apps collect and store these data, which, based on our findings, can be harvested for improved dispatch algorithms. The fourth data category includes factors that may differ depending on the medical condition relevant to the VFR network.

Factors Affecting Volunteers’ Decision to Respond

Herein, we provide a brief discussion of the factors that affect volunteers’ decisions to respond. A full analysis of these factors is beyond the scope of this research and should be pursued using a larger sample that may provide generalizability.

Experience, including the experience gained both before and after joining the VFR community, was found to be the most influential factor in volunteers’ willingness to respond. In model 3, naloxone carriage adherence and experience in the provision of naloxone were the most influential factors, whereas in model 4, the frequent responder indicator was the most influential factor. Significant differences were found between responded alerts and ignored alerts for the following variables: naloxone carriage adherence, previous alerts, previous false alerts, previous false alerts by the same signaler, previous responses, and previous responses to false alerts.

Part-time employment was associated with lower willingness to respond.

The average age of volunteers who responded to alerts was a little higher than the average age of volunteers who ignored alerts. Model 4 revealed that age is an important factor for volunteers who are not frequent responders: volunteers aged >54 years are expected to respond.

Male volunteers had a higher willingness to respond, but this difference had borderline significance (P=.05). The results of model 4 were consistent with this difference.

Comparison With Prior Work

VFR dispatch has evolved in the last decades, thanks to technological advances and a better understanding of VFR network management. In a pretechnology era, VFRs (eg, volunteer firefighters) were alerted by sirens or other means rather than individually dispatched. Once pagers and SMS text messaging technology became available, the first generation of individual dispatch based on usual location was implemented (eg, the study by Zijlstra et al [43]). Further technological advances, including smartphone apps and GPS, enabled the second generation of individual dispatch based on current location [34,36,46,47] and the integration of VFRs’ feedback into the algorithm [34]. The integration of the probability to respond based on event characteristics as well as VFRs’ demographic data and previous behavior into the dispatch algorithm constitutes the third generation of individual dispatch, making this one of the first studies to harness the power of predictive analytics for VFR dispatch.

Limitations

A relatively small sample for a specific condition (opioid overdose) and a specific emergency intervention (the provision of naloxone) was used. The sample has specific socioeconomic characteristics: it included a significant proportion of people experiencing homelessness and those who were unemployed (volunteers may have lower motivation to help owing to these destabilizing factors), as well as a significant proportion of people dependent on drugs (volunteers may have lower response rates when intoxicated). The setting was very specific: a large number of outdoor opioid overdoses within a relatively small geographic area. The responders were aware that there were many trained bystanders nearby, and this could have led to the “diffusion of responsibility” phenomenon and reduced the willingness to respond. No randomization or control group was used.

Future Research

The proposed approach should be tested with a larger sample and for different conditions and interventions. A randomized study comparing the outcomes of the proposed dispatch algorithm with those of a regular location-based dispatch algorithm should be considered.

Machine learning techniques should be considered to calculate the frequent responder indicator. Future studies should examine whether the probability that a specific responder will respond to a specific event can be used instead of a binary indicator.

Further research is necessary on whether the proposed approach may have implications for multisided networks dispatching nonemergency services, such as ride sharing and package delivery.

Conclusions

In this research, we proposed a way to improve dispatch algorithms in VFR networks based on the individual characteristics of the volunteers and their behavior. We have shown that even in a relatively small sample, a classification model can predict with fair accuracy whether a specific volunteer will respond to a specific event or ignore it. Such prediction may improve the dispatchers’ decision-making process and enable the dispatch of substitute responders without delay.

Our findings can help VFR network administrators in their continual efforts to improve the response rates and response times of their networks and to save lives.

Acknowledgments

This research was supported by a grant from the National Institute on Drug Abuse (R34DA044758).

Data Availability

The data sets analyzed during this study are not publicly available owing to Health Insurance Portability and Accountability Act (HIPAA) guidelines regarding patient privacy and the sensitivity of raw location data. Requests for data can be sent to the corresponding author.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Comparison of different algorithms for models 1 to 4.

DOCX File , 30 KB

  1. Matinrad N, Granberg TA, Vogel NE, Angelakis V. Optimal dispatch of volunteers to out-of-hospital cardiac arrest patients. In: Proceedings of the 52nd Hawaii International Conference on System Sciences. Presented at: HICSS '19; January 8-11, 2019:4088-4097; Grand Wailea, HI. URL: https://scholarspace.manoa.hawaii.edu/server/api/core/bitstreams/1c1e1ead-4a8d-4f19-a4d2-65d2aee5b105/content [CrossRef]
  2. Ramanayake RP, Ranasingha S, Lakmini S. Management of emergencies in general practice: role of general practitioners. J Family Med Prim Care. 2014;3(4):305-308. [FREE Full text] [CrossRef] [Medline]
  3. Myat A, Song KJ, Rea T. Out-of-hospital cardiac arrest: current concepts. Lancet. Mar 10, 2018;391(10124):970-979. [CrossRef] [Medline]
  4. Berdowski J, Berg RA, Tijssen JG, Koster RW. Global incidences of out-of-hospital cardiac arrest and survival rates: systematic review of 67 prospective studies. Resuscitation. Nov 2010;81(11):1479-1487. [FREE Full text] [CrossRef] [Medline]
  5. Alberdi F, García I, Atutxa L, Zabarte M, Trauma and Neurointensive Care Work Group of the SEMICYUC. Epidemiología del trauma grave [Epidemiology of severe trauma]. Med Intensiva. Dec 2014;38(9):580-588. [FREE Full text] [CrossRef] [Medline]
  6. Minei JP, Schmicker RH, Kerby JD, Stiell IG, Schreiber MA, Bulger E, et al. Resuscitation Outcome Consortium Investigators. Severe traumatic injury: regional variation in incidence and outcome. Ann Surg. Jul 2010;252(1):149-157. [FREE Full text] [CrossRef] [Medline]
  7. Drug overdose deaths. Centers for Disease Control and Prevention. URL: https://www.cdc.gov/drugoverdose/data/statedeaths.html [accessed 2020-12-20]
  8. Panesar SS, Javad S, de Silva D, Nwaru BI, Hickstein L, Muraro A, et al. EAACI Food Allergy and Anaphylaxis Group. The epidemiology of anaphylaxis in Europe: a systematic review. Allergy. Nov 2013;68(11):1353-1361. [FREE Full text] [CrossRef] [Medline]
  9. Turner PJ, Jerschow E, Umasunthar T, Lin R, Campbell DE, Boyle RJ. Fatal anaphylaxis: mortality rate and risk factors. J Allergy Clin Immunol Pract. Sep 2017;5(5):1169-1178. [FREE Full text] [CrossRef] [Medline]
  10. Hasselqvist-Ax I, Riva G, Herlitz J, Rosenqvist M, Hollenberg J, Nordberg P, et al. Early cardiopulmonary resuscitation in out-of-hospital cardiac arrest. N Engl J Med. Jun 11, 2015;372(24):2307-2315. [CrossRef] [Medline]
  11. Malta Hansen C, Kragholm K, Pearson DA, Tyson C, Monk L, Myers B, et al. Association of bystander and first-responder intervention with survival after out-of-hospital cardiac arrest in North Carolina, 2010-2013. JAMA. Jul 21, 2015;314(3):255-264. [CrossRef] [Medline]
  12. Sulovic LS, Pavlovic AP, Zivkovic JB, Zivkovic ZN, Filipovic-Danic SS, Trpkovic SV. Accidental drowning: the importance of early measures of resuscitation for a successful outcome. Case Rep Emerg Med. 2018;2018:7525313. [FREE Full text] [CrossRef] [Medline]
  13. Walley AY, Xuan Z, Hackman HH, Quinn E, Doe-Simkins M, Sorensen-Alawad A, et al. Opioid overdose rates and implementation of overdose education and nasal naloxone distribution in Massachusetts: interrupted time series analysis. BMJ. Jan 30, 2013;346:f174. [FREE Full text] [CrossRef] [Medline]
  14. Hanfling D, Altevogt BM, Viswanathan K, Gostin LO. Crisis Standards of Care: A Systems Framework for Catastrophic Disaster Response. Washington, DC. National Academies Press; 2012.
  15. Razzak JA, Kellermann AL. Emergency medical care in developing countries: is it worthwhile? Bull World Health Organ. 2002;80(11):900-905. [FREE Full text] [Medline]
  16. Chanta S, Mayorga ME, McLay LA. Improving emergency service in rural areas: a bi-objective covering location model for EMS systems. Ann Oper Res. Sep 27, 2011;221(1):133-159. [FREE Full text] [CrossRef]
  17. Roudsari BS, Nathens AB, Arreola-Risa C, Cameron P, Civil I, Grigoriou G, et al. Emergency Medical Service (EMS) systems in developed and developing countries. Injury. Sep 2007;38(9):1001-1013. [FREE Full text] [CrossRef] [Medline]
  18. Kitamura T, Kiyohara K, Sakai T, Matsuyama T, Hatakeyama T, Shimamoto T, et al. Public-access defibrillation and out-of-hospital cardiac arrest in Japan. N Engl J Med. Oct 27, 2016;375(17):1649-1659. [CrossRef] [Medline]
  19. Kiyohara K, Kitamura T, Sakai T, Nishiyama C, Nishiuchi T, Hayashi Y, et al. Public-access AED pad application and outcomes for out-of-hospital cardiac arrests in Osaka, Japan. Resuscitation. Sep 2016;106:70-75. [CrossRef] [Medline]
  20. Murakami Y, Iwami T, Kitamura T, Nishiyama C, Nishiuchi T, Hayashi Y, et al. Utstein Osaka Project. Outcomes of out-of-hospital cardiac arrest by public location in the public-access defibrillation era. J Am Heart Assoc. Apr 22, 2014;3(2):e000533. [FREE Full text] [CrossRef] [Medline]
  21. Nakahara S, Tomio J, Ichikawa M, Nakamura F, Nishida M, Takahashi H, et al. Association of bystander interventions with neurologically intact survival among patients with bystander-witnessed out-of-hospital cardiac arrest in Japan. JAMA. Jul 21, 2015;314(3):247-254. [CrossRef] [Medline]
  22. Elsner J, Meisen P, Thelen S, Schilberg D, Jeschke S. EMuRgency - a basic concept for an AI driven volunteer notification system for integrating laypersons into emergency medical services. Int J Adv Life Sci. 2013;5(3 & 4):223-236. [CrossRef]
  23. Berglund E, Claesson A, Nordberg P, Djärv T, Lundgren P, Folke F, et al. A smartphone application for dispatch of lay responders to out-of-hospital cardiac arrests. Resuscitation. May 2018;126:160-165. [FREE Full text] [CrossRef] [Medline]
  24. Folke F, Lippert FK, Nielsen SL, Gislason GH, Hansen ML, Schramm TK, et al. Location of cardiac arrest in a city center: strategic placement of automated external defibrillators in public locations. Circulation. Aug 11, 2009;120(6):510-517. [CrossRef] [Medline]
  25. Marshall AH, Cairns KJ, Kee F, Moore MJ, Hamilton AJ, Adgey AA. A Monte Carlo simulation model to assess volunteer response times in a public access defibrillation scheme in Northern Ireland. In: Proceedings of the 19th IEEE Symposium on Computer-Based Medical Systems. Presented at: CBMS '06; June 22-23, 2006:783-788; Salt Lake City, UT. URL: https://ieeexplore.ieee.org/document/1647666 [CrossRef]
  26. Pijls RW, Nelemans PJ, Rahel BM, Gorgels AP. A text message alert system for trained volunteers improves out-of-hospital cardiac arrest survival. Resuscitation. Aug 2016;105:182-187. [FREE Full text] [CrossRef] [Medline]
  27. Rea T, Blackwood J, Damon S, Phelps R, Eisenberg M. A link between emergency dispatch and public access AEDs: potential implications for early defibrillation. Resuscitation. Aug 2011;82(8):995-998. [CrossRef] [Medline]
  28. Sakai T, Iwami T, Kitamura T, Nishiyama C, Kawamura T, Kajino K, et al. Effectiveness of the new 'Mobile AED Map' to find and retrieve an AED: a randomised controlled trial. Resuscitation. Jan 2011;82(1):69-73. [CrossRef] [Medline]
  29. Smith CM, Wilson MH, Ghorbangholi A, Hartley-Sharpe C, Gwinnutt C, Dicker B, et al. The use of trained volunteers in the response to out-of-hospital cardiac arrest - the GoodSAM experience. Resuscitation. Dec 2017;121:123-126. [FREE Full text] [CrossRef] [Medline]
  30. Takei Y, Kamikura T, Nishi T, Maeda T, Sakagami S, Kubo M, et al. Recruitments of trained citizen volunteering for conventional cardiopulmonary resuscitation are necessary to improve the outcome after out-of-hospital cardiac arrests in remote time-distance area: a nationwide population-based study. Resuscitation. Aug 2016;105:100-108. [FREE Full text] [CrossRef] [Medline]
  31. Gaziel-Yablowitz M, Schwartz DG. A review and assessment framework for mobile-based emergency intervention apps. ACM Comput Surv. Jan 10, 2018;51(1):1-32. [FREE Full text] [CrossRef]
  32. Schwartz DG, Bellou A, Garcia-Castrillo L, Muraro A, Papadopoulos N. Exploring mHealth participation for emergency response communities. Health Inf Sci Syst. 2017;21:1378. [FREE Full text] [CrossRef]
  33. Khalemsky M, Schwartz DG, Silberg T, Khalemsky A, Jaffe E, Herbst R. Children's and parents' willingness to join a smartphone-based emergency response community for anaphylaxis: survey. JMIR Mhealth Uhealth. Aug 27, 2019;7(8):e13892. [FREE Full text] [CrossRef] [Medline]
  34. Schwartz DG, Ataiants J, Roth A, Marcu G, Yahav I, Cocchiaro B, et al. Layperson reversal of opioid overdose supported by smartphone alert: a prospective observational cohort study. EClinicalMedicine. Aug 2020;25:100474. [FREE Full text] [CrossRef] [Medline]
  35. Brooks SC, Simmons G, Worthington H, Bobrow BJ, Morrison LJ. The PulsePoint Respond mobile device application to crowdsource basic life support for patients with out-of-hospital cardiac arrest: challenges for optimal implementation. Resuscitation. Jan 2016;98:20-26. [FREE Full text] [CrossRef] [Medline]
  36. Andelius L, Malta Hansen C, Lippert FK, Karlsson L, Torp-Pedersen C, Kjær Ersbøll A, et al. Smartphone activation of citizen responders to facilitate defibrillation in out-of-hospital cardiac arrest. J Am Coll Cardiol. Jul 07, 2020;76(1):43-53. [FREE Full text] [CrossRef] [Medline]
  37. Riegel B, Mosesso VN, Birnbaum A, Bosken L, Evans LM, Feeny D, et al. Stress reactions and perceived difficulties of lay responders to a medical emergency. Resuscitation. Jul 2006;70(1):98-106. [CrossRef] [Medline]
  38. Swor R, Khan I, Domeier R, Honeycutt L, Chu K, Compton S. CPR training and CPR performance: do CPR-trained bystanders perform CPR? Acad Emerg Med. Jun 2006;13(6):596-601. [FREE Full text] [CrossRef] [Medline]
  39. Chew KS, Ahmad Razali S, Wong SS, Azizul A, Ismail NF, Robert SJ, et al. The influence of past experiences on future willingness to perform bystander cardiopulmonary resuscitation. Int J Emerg Med. Dec 12, 2019;12(1):40. [FREE Full text] [CrossRef] [Medline]
  40. Zaffar MA, Rajagopalan HK, Saydam C, Mayorga M, Sharer E. Coverage, survivability or response time: a comparative study of performance statistics used in ambulance location models via simulation–optimization. Oper Res Health Care. Dec 2016;11:1-12. [FREE Full text] [CrossRef]
  41. Başar A, Çatay B, Ünlüyurt T. A taxonomy for emergency service station location problem. Optim Lett. Aug 9, 2011;6(6):1147-1160. [FREE Full text] [CrossRef]
  42. Camacho-Collados M, Liberatore F. A decision support system for predictive police patrolling. Decis Support Syst. Jul 2015;75:25-37. [FREE Full text] [CrossRef]
  43. Zijlstra JA, Stieglis R, Riedijk F, Smeekes M, van der Worp WE, Koster RW. Local lay rescuers with AEDs, alerted by text messages, contribute to early defibrillation in a Dutch out-of-hospital cardiac arrest dispatch system. Resuscitation. Nov 2014;85(11):1444-1449. [FREE Full text] [CrossRef] [Medline]
  44. Rao G, Choudhury S, Lingras P, Savage D, Mago V. SURF: identifying and allocating resources during out-of-hospital cardiac arrest. BMC Med Inform Decis Mak. Dec 30, 2020;20(Suppl 11):313. [FREE Full text] [CrossRef] [Medline]
  45. Timmons S, Vernon-Evans A. Why do people volunteer for community first responder groups? Emerg Med J. Mar 2013;30(3):e13. [FREE Full text] [CrossRef] [Medline]
  46. Jaffe E, Dadon Z, Alpert EA. Wisdom of the crowd in saving lives: the life guardians app. Prehosp Disaster Med. Oct 2018;33(5):550-552. [CrossRef] [Medline]
  47. Metelmann C, Metelmann B, Kohnen D, Brinkrolf P, Andelius L, Böttiger BW, et al. Smartphone-based dispatch of community first responders to out-of-hospital cardiac arrest - statements from an international consensus conference. Scand J Trauma Resusc Emerg Med. Feb 01, 2021;29(1):29. [FREE Full text] [CrossRef] [Medline]
  48. Van de Velde S, Roex A, Vangronsveld K, Niezink L, Van Praet K, Heselmans A, et al. Can training improve laypersons helping behaviour in first aid? A randomised controlled deception trial. Emerg Med J. Apr 2013;30(4):292-297. [FREE Full text] [CrossRef] [Medline]
  49. Elsner J, Meisen P, Ewert D, Schilberg D, Jeschke S. Prescient profiling – AI driven volunteer selection within a volunteer notification system. In: Jeschke S, Isenhardt I, Hees F, Henning K, editors. Automation, Communication and Cybernetics in Science and Engineering. Cham, Switzerland. Springer; 2014;597-607.
  50. van Dam JW, van de Velden M. Online profiling and clustering of Facebook users. Decis Support Syst. Feb 2015;70:60-72. [FREE Full text] [CrossRef]
  51. Schetgen L, Bogaert M, Van den Poel D. Predicting donation behavior: acquisition modeling in the nonprofit sector using Facebook data. Decis Support Syst. Feb 2021;141:113446. [FREE Full text] [CrossRef]
  52. Ward AM, McKillop DG. Profiling: a strategy for successful volunteer recruitment in credit unions. Financ Account Manag. 2010;26(4):367-391. [FREE Full text] [CrossRef]
  53. Shi ZR, Yuan Y, Lo K, Lizarondo L, Fang F. Improving efficiency of volunteer-based food rescue operations. Proc AAAI Conf Artif Intell. Apr 03, 2020;34(08):13369-13375. [FREE Full text] [CrossRef]
  54. Bray JE, Smith CM, Nehme Z. Community volunteer responder programs in cardiac arrest: the horse has bolted, it's time to optimize. J Am Coll Cardiol. Jul 18, 2023;82(3):211-213. [CrossRef] [Medline]
  55. Jonsson M, Berglund E, Baldi E, Caputo ML, Auricchio A, Blom MT, et al. ESCAPE-NET Investigators. Dispatch of volunteer responders to out-of-hospital cardiac arrests. J Am Coll Cardiol. Jul 18, 2023;82(3):200-210. [CrossRef] [Medline]
  56. Gregers MC, Andelius L, Kjoelbye JS, Juul Grabmayr A, Jakobsen LK, Bo Christensen N, et al. Association between number of volunteer responders and interventions before ambulance arrival for cardiac arrest. J Am Coll Cardiol. Mar 21, 2023;81(7):668-680. [FREE Full text] [CrossRef] [Medline]
  57. Yafe E, Walker BB, Amram O, Schuurman N, Randall E, Friger M, et al. Volunteer first responders for optimizing management of mass casualty incidents. Disaster Med Public Health Prep. Apr 4, 2019;13(2):287-294. [CrossRef] [Medline]


AED: automatic external defibrillator
CPR: cardiopulmonary resuscitation
EMS: emergency medical services
ERC: emergency response community
ETA: estimated time of arrival
OHCA: out-of-hospital cardiac arrest
VFR: volunteer first responder


Edited by L Buis; submitted 30.07.22; peer-reviewed by A Mavragani, L Wu, X Chen; comments to author 12.04.23; revised version received 03.06.23; accepted 17.10.23; published 28.11.23.

Copyright

©Michael Khalemsky, Anna Khalemsky, Stephen Lankenau, Janna Ataiants, Alexis Roth, Gabriela Marcu, David G Schwartz. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 28.11.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.