Published on 28.09.17 in Vol 5, No 9 (2017): September

Tablet-Based Patient-Centered Decision Support for Minor Head Injury in the Emergency Department: Pilot Study

Original Paper

1Medical College of Georgia, AU/UGA Medical Partnership, Athens, GA, United States

2Department of Emergency Medicine, Yale School of Medicine, New Haven, CT, United States

3Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States

4Department of Research and Evaluation, Kaiser Permanente Southern California, Pasadena, CA, United States

5Department of Emergency Medicine, Los Angeles Medical Center, Kaiser Permanente Southern California, Los Angeles, CA, United States

6School of Visual Arts, New York, NY, United States

Corresponding Author:

Edward Melnick, MHS, MD

Department of Emergency Medicine

Yale School of Medicine

464 Congress Ave, Suite 260

New Haven, CT, 06519

United States

Phone: 1 203 737 6454

Fax: 1 203 785 4580

Email: edward.melnick@yale.edu


Background: The Concussion or Brain Bleed app is a clinician- and patient-facing electronic tool to guide decisions about head computed tomography (CT) use in patients presenting to the emergency department (ED) with minor head injury. This app integrates a patient decision aid and clinical decision support (using the Canadian CT Head Rule, CCHR) at the bedside on a tablet computer to promote conversations around individualized risk and patients’ specific concerns within the ED context.

Objective: The objective of this study was to describe the use of the Concussion or Brain Bleed app in a high-volume ED and to establish preliminary efficacy estimates on patient experience, clinician experience, health care utilization, and patient safety. These data will guide the planning of a larger multicenter trial testing the effectiveness of the Concussion or Brain Bleed app.

Methods: We conducted a prospective pilot study of adult (age 18-65 years) patients presenting to the ED after minor head injury who were identified by participating clinicians as low risk by the CCHR. The primary outcome was patient knowledge regarding the injury, risks, and CT use. Secondary outcomes included patient satisfaction, decisional conflict, trust in physician, clinician acceptability, system usability, Net Promoter scores, head CT rate, and patient safety at 7 days.

Results: We enrolled 41 patients cared for by 29 different clinicians. Patient knowledge increased after the use of the app (questions correct out of 9: pre-encounter, 3.3 vs postencounter, 4.7; mean difference 1.4, 95% CI 0.8-2.0). Patients reported a mean of 11.7 (SD 13.5) on the Decisional Conflict Scale and 92.5 (SD 12.0) in the Trust in Physician Scale (both scales range from 0 to 100). Most patients were satisfied with the app’s clarity of information (35, 85%), helpfulness of information (36, 88%), and amount of information (36, 88%). In the 41 encounters, most clinicians thought the information was somewhat or extremely helpful to the patient (35, 85%), would want to use something similar for other decisions (27, 66%), and would recommend the app to other providers (28, 68%). Clinicians reported a mean system usability score of 85.1 (SD 15; scale from 0 to 100 with 85 in the “excellent” acceptability range). The total Net Promoter Score was 36.6 (on a scale from –100 to 100). A total of 7 (17%) patients received a head CT in the ED. No patients had a missed clinically important brain injury at 7 days.

Conclusions: An app to help patients assess the utility of CT imaging after head injury in the ED increased patient knowledge. Nearly all clinicians reported the app to be helpful to patients. The high degree of patient satisfaction, clinician acceptability, and system usability support rigorous testing of the app in a larger multicenter trial.

JMIR Mhealth Uhealth 2017;5(9):e144

doi:10.2196/mhealth.8732

Introduction

One-third of patients with minor head injury receive head computed tomography (CT) that may not be clinically indicated [1-6]. These potentially avoidable CTs do not change management; they do, however, increase health care costs, exposure to ionizing radiation, and length of stay in the emergency department (ED) [7]. Recognizing this, the Choosing Wisely initiative of the American Board of Internal Medicine Foundation and the American College of Emergency Physicians recommends avoiding unnecessary head CTs in patients with minor head injury as the top national priority for addressing CT overuse in emergency care [1]. The Canadian CT Head Rule (CCHR) is a clinical decision rule, developed through a rigorous, evidence-based derivation and validation process, that identifies patients with minor head injury who are at risk for clinically important structural brain injury or need for neurosurgical intervention. It uses 7 history and physical examination criteria as indications for CT based on their association with these risks. The rule was designed to safely reduce head CT use in patients with minor head injury: it has been validated to be 100% sensitive in detecting patients needing neurosurgical intervention, and it outperforms other decision rules with the highest specificity in its class [8-11].

Implementing the CCHR with traditional computerized clinical decision support (CDS) has had a modest effect (5%-8%) on decreasing CT use in these patients [12,13]. Since one-third of CTs in minor head injury patients are potentially avoidable and traditional CDS has had limited effect on reducing these scans, it has been hypothesized that nonclinical factors (such as fear of litigation, physician personality, fear of missed diagnoses, financial incentives, paucity of information, and patient expectations) also contribute to CT overuse in these patients [14,15]. Qualitative research on this topic revealed that physician-based empathic factors such as establishing trust and engaging patients by identifying and addressing their concerns are essential to reduce CT overuse [15,16].

We previously developed a clinician- and patient-facing electronic tool to guide decisions about CT use in ED patients with minor head injury, called Concussion or Brain Bleed [17,18]. This app integrates a patient decision aid and CDS (using the CCHR) at the bedside on a tablet computer to promote conversations around individualized risk and patients’ specific concerns within the ED’s clinical constraints [19,20]. Although intended primarily for use in low-risk patients, the app includes pathways for moderate- and high-risk patients as well. Figure 1 presents the conceptual workflow of the app: (1) welcome screen, (2) injury evaluator (CDS portion), (3) risk visualization, (4) risk discussion with conversation prompts such as “You can’t see concussion on CT?,” (5) considerations, and (6) integration back to traditional workflow (the paper handout given to patients after using the intervention is publicly available [21]).

Figure 2 presents the risk visualization screen for the low-risk pathway. The long-term implementation goal for this patient-centered decision support tool is to safely and effectively reduce CT use for patients with minor head injury while simultaneously improving the patient experience. In the trial presented here, our objective was to describe the use of the Concussion or Brain Bleed app in a high-volume ED and to establish preliminary efficacy estimates on patient experience, clinician experience, health care utilization, and patient safety.

Figure 1. Conceptualization of the workflow and potential pathways for the Concussion or Brain Bleed app. CT: computed tomography; EHR: electronic health record.

Figure 2. Risk visualization screen shot for low-risk patients from the Concussion or Brain Bleed app. CT: computed tomography.

Methods

Study Design, Setting, and Population

We performed a prospective pilot study with a convenience sample of 41 ED patients with minor head injury. Patients were enrolled over a 6-week period (May 23 to July 3, 2017). Patients and clinicians who were eligible and willing to participate used the Concussion or Brain Bleed app and completed a survey to determine the app's baseline efficacy on patient experience, clinician experience, health care utilization, and patient safety. Participants were recruited from an urban, academic level I trauma center ED with 103,000 patient visits per year. Eligible patients were adults (age 18-65 years) presenting to the ED within 24 hours of a blunt head injury who were determined to be at low risk by the CCHR (see Figure 3) and were being considered for head CT imaging by the treating clinician. Patients who were pregnant, non-English speaking, in police custody, undergoing psychiatric evaluation, or found to have drug or alcohol intoxication were excluded. Eligible clinicians were attending physicians, fellows, residents, and midlevel providers caring for eligible patients; we recruited from the ED's 48 attending physician faculty, 58 resident physicians, and 47 midlevel providers. The study protocol was approved by our institutional review board (IRB), the Yale Human Investigation Committee.

Participant Identification, Recruitment, and Enrollment

A research assistant (RA; NS) reviewed an electronic patient tracking board at regular intervals to identify potentially eligible patients based on a chief complaint consistent with head trauma. The RA then worked with the clinician assigned to the patient's care team to determine whether the patient met inclusion criteria (either before or after the initial clinical evaluation). Next, the clinician and patient were informed about the study and asked whether they would be willing to participate. Participating clinicians and patients provided verbal consent as specified by the IRB-approved protocol. We collected all data using the Web-based Qualtrics Survey Tool (Qualtrics, LLC) on Yale's licensed platform, which is approved and certified for electronic protected health information. Clinicians were compensated for participation in the study with a US $10 gift card to a coffee shop.

Figure 3. Flow diagram for patient identification and enrollment in the flow of patient care with number of patients identified, enrolled, and receiving computed tomography (CT) in the emergency department (ED). GCS: Glasgow Coma Scale; RA: research assistant; TBI: traumatic brain injury.

Training

The RA gave participating clinicians a brief (<2 minute) tutorial on the Concussion or Brain Bleed app prior to their first use. This individualized, just-in-time training highlighted each section of the app and demonstrated its navigation. Clinicians could ask additional questions or repeat sections of the training until they felt comfortable with the app. Although RAs were available at the point of care to assist with technical issues or difficulty navigating the app on an as-needed basis, they refrained from interfering with its actual use in order to observe an accurate representation of use in routine care.

Patient and Clinician Characteristics

We collected patient demographics by self-report at the time of enrollment, including age, sex, race, ethnicity, highest level of education, insurance status, and household income. Patient literacy and numeracy were assessed immediately before use of the app using the validated Subjective Literacy Scale and Subjective Numeracy Scale [22-24]. The Subjective Literacy Scale comprises 3 items, each rated on a 5-point Likert scale and summed into a total score of 3 to 15. The Subjective Numeracy Scale comprises 8 items that assess comfort in working with numbers, each rated on a 6-point Likert scale, with an overall score ranging from 8 to 48.
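For concreteness, the summing just described can be expressed as a short sketch (Python; the function names and input format are ours for illustration, not part of either validated instrument):

```python
def subjective_literacy_score(items):
    """Sum the 3 Subjective Literacy Scale items (each rated 1-5); range 3-15."""
    assert len(items) == 3 and all(1 <= i <= 5 for i in items)
    return sum(items)


def subjective_numeracy_score(items):
    """Sum the 8 Subjective Numeracy Scale items (each rated 1-6); range 8-48."""
    assert len(items) == 8 and all(1 <= i <= 6 for i in items)
    return sum(items)
```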

We collected clinician characteristics by self-report following the clinician’s first use of the tool. Clinician characteristics that were collected included demographics, years practicing emergency medicine, medical degree or role, and details on personal technology use.

Outcome Measures

Outcome selection was informed by a similar study performed by Hess et al [25] using a paper-based, shared decision-making aid in a pediatric population to compare the decision aid’s effectiveness with usual care on (1) parent knowledge regarding their child’s risk, diagnostic options, and risks associated with CT, (2) parent engagement in the decision-making process, (3) degree of conflict parents experience related to feeling uninformed, (4) patient and clinician satisfaction, (5) rate of clinically important traumatic brain injury at 7 days, (6) proportion of patients in whom a CT scan was obtained, and (7) 7-day health care utilization [25]. That study selected outcomes based on input from key stakeholders, including patient representatives, practicing clinicians, researchers (including shared decision-making experts), and health policy decision makers. Patient knowledge was selected as the primary outcome for that study based on input from patient representatives. For our study reported here, we selected patient knowledge as the primary outcome and other secondary outcomes based on this precedent from the pediatric shared decision-making study, including the Decisional Conflict Scale, the Trust in Physician Scale, similar satisfaction, health care utilization, and patient safety outcomes [25].

Patient Outcomes

Patient Knowledge

We assessed patient knowledge using a pre- and postvisit survey administered immediately before and after the clinical encounter (Multimedia Appendix 1) [25]. In the survey, 9 questions assessed patients' knowledge regarding concussion, their individual risk of structural brain injury, the available diagnostic options, the risks related to radiation exposure from a head CT scan, the potential for a CT scan to identify incidental abnormalities that may require further investigation, and reasons to return to the ED for reevaluation should symptoms worsen after ED discharge. We calculated the number of knowledge questions answered correctly (out of 9) to determine the mean difference between knowledge scores before and after use of the intervention.
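As an illustration of how the pre-post mean difference and its 95% CI can be computed, here is a minimal sketch using a normal approximation to the paired-difference interval (function and variable names are ours):

```python
import math

def knowledge_change_ci(pre, post, z=1.96):
    """Mean pre-to-post change in knowledge score with an approximate 95% CI.

    pre, post: paired scores (questions correct out of 9) for the same
    patients, in the same order.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # standard error
    return mean, (mean - z * se, mean + z * se)
```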

Decisional Conflict

We measured the patient's degree of conflict with the decision of whether to get a CT scan using the validated Decisional Conflict Scale [25-28]. Each of the 16 items on this scale is scored from 0 to 4; the item scores are summed, divided by 16, and multiplied by 25. The resulting score ranges from 0 to 100, where higher scores reflect greater patient uncertainty about the choice.
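A minimal sketch of this scoring, assuming responses are already coded 0-4 per item:

```python
def decisional_conflict_score(items):
    """Score the 16-item Decisional Conflict Scale.

    items: 16 responses, each coded 0-4.
    Returns 0-100; higher scores reflect greater decisional uncertainty.
    """
    assert len(items) == 16 and all(0 <= i <= 4 for i in items)
    return sum(items) / 16 * 25  # equivalently, the mean item score times 25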

Trust in the Physician

We measured patients' trust in their clinician using the validated Trust in Physician Scale [25,28-30]. This scale has 10 items, each scored from 1 to 5; the item scores are summed and the total is linearly transformed to a scale of 0 to 100, where higher values reflect greater trust in the clinician.
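A sketch of this scoring, assuming the standard min-max linear transformation (the raw sum of 10 items scored 1-5 ranges from 10 to 50; the transformation choice is our assumption):

```python
def trust_in_physician_score(items):
    """Score the 10-item Trust in Physician Scale.

    items: 10 responses, each 1-5 (after any reverse coding).
    Returns 0-100; higher values reflect greater trust.
    """
    assert len(items) == 10 and all(1 <= i <= 5 for i in items)
    raw = sum(items)              # raw sum ranges from 10 to 50
    return (raw - 10) / 40 * 100  # min-max transform to 0-100
```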

Patient Satisfaction

We measured patients’ satisfaction with the way information was shared during the encounter by asking 5 questions using a 7-point Likert scale. For the analysis, we classified satisfaction into satisfied/very satisfied versus other responses.

Clinician Outcomes

Clinician Satisfaction

We assessed clinician satisfaction immediately after the patient encounter via a questionnaire regarding the helpfulness of the app and the clinician’s satisfaction with the way information was shared on a 7-point Likert scale. For the analysis, we classified satisfaction into satisfied/very satisfied versus other responses.

System Usability Scale

The System Usability Scale is a 10-item questionnaire, answered on a 5-point Likert scale, that gives a reliable assessment of usability [31]. Each item contributes a score of 0 to 4, with even-numbered items reverse coded. The item scores are summed and multiplied by 2.5, yielding a score from 0 to 100, where higher scores indicate higher usability.
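A sketch of the standard System Usability Scale scoring just described, assuming raw responses of 1-5 per item:

```python
def system_usability_score(responses):
    """Score the 10-item System Usability Scale.

    responses: 10 raw answers, each 1-5, in item order.
    Odd-numbered items contribute (response - 1); even-numbered items
    are reverse coded and contribute (5 - response). Returns 0-100.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5
```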

Net Promoter Score

The Net Promoter Score has been employed across industries to measure how willing a user is to recommend a product or service to others [32]. A higher score on this scale, which ranges from –100 to 100, can indicate a greater growth rate of the corresponding product or service. We determined the score by first asking the clinician user, on a scale from 0 to 10 (0=not likely at all, 10=extremely likely), "How likely are you to recommend the Concussion or Brain Bleed application to a colleague?" If a clinician answered 9 or 10, we categorized them as a "promoter" (someone who would enthusiastically recommend the app to others). If a clinician answered 6 or lower, we considered them a "detractor" (someone who might give a negative review to others). The Net Promoter Score is calculated by subtracting the percentage of detractors from the percentage of promoters. We calculated a total Net Promoter Score factoring in all encounters in which the app was used, as well as separate Net Promoter Scores for first-time and second-time users.
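The calculation reduces to a one-liner over the 0-10 ratings (a sketch; the function name is ours):

```python
def net_promoter_score(ratings):
    """Compute a Net Promoter Score from 0-10 likelihood ratings.

    Promoters rate 9-10 and detractors rate 0-6; ratings of 7-8 are
    passive and count only in the denominator. Returns -100 to 100.
    """
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n
```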

Fidelity Score

We assessed the fidelity with which the intervention was delivered and used as intended using a fidelity checklist of 8 intended actions (see Multimedia Appendix 2). The fidelity checklist has been used in the absence of the intervention to check for contamination in the usual-care arm of a trial [25].

Health Care Utilization and Patient Safety

CT scans were obtained at the ED clinicians’ discretion and interpreted by site faculty radiologists. The main health care utilization outcome was the proportion of patients for whom head CT was obtained in the ED. We also collected data at the time of the ED visit (and confirmed by chart review) on (1) whether the patient was admitted to the hospital, (2) acute findings on CT if obtained, and (3) whether the clinician reported that they would have made the same decision regarding CT imaging without using the app. The RA contacted enrolled patients by telephone or email starting at 7 days after the index ED visit to ensure no outcomes were missed. The 7-day follow-up was based on timing of delayed clinical deterioration and our previous work [8,25].

Analysis Plan

Results are reported using descriptive statistics and stratified by patient and clinician outcomes. The unit of analysis was the ED encounter. We defined change in patient knowledge as the mean difference in questions answered correctly pre- and postencounter. We performed analyses in Microsoft Excel (version 2016; Microsoft Corporation) on data exported from the Qualtrics Survey Tool. We made every effort to minimize missing data: we attempted to verify (or ascertain, if missing) items self-reported by patients at the 7-day follow-up by medical record review, and we report rates of missing data as well as known reasons for it. We conducted secondary exploratory analyses of variables predictive of the odds of CT imaging, patient knowledge, and trust in physician using univariate logistic and linear regression with SAS (version 9.3; SAS Institute).
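Although the study analyses were run in Excel and SAS, a single univariate logistic regression step of this kind could look like the following sketch (Python with pandas and statsmodels; the file and column names are hypothetical, not from the study dataset):

```python
import pandas as pd
import statsmodels.api as sm

# Univariate logistic regression of head CT use (0/1) on a single candidate
# predictor, mirroring the secondary analyses. Column names are illustrative.
df = pd.read_csv("encounters.csv")
X = sm.add_constant(df["concerns_elicited"])  # 0/1: clinician elicited concerns
model = sm.Logit(df["head_ct"], X).fit()
print(model.params)      # coefficients on the log-odds scale
print(model.conf_int())  # 95% CIs on the log-odds scale
```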


Results

Patient and Clinician Characteristics

We enrolled 41 of 43 identified patients (see Figure 3; recruitment rate 95%) in the 6-week study period, with a mean age of 34.9 years (range 18-59; see Table 1). The majority of patients were female (26, 63%), were not of Hispanic or Latino origin (31, 76%), and identified high school or general educational diploma or less as their highest level of education (24, 59%). The mean patient Subjective Literacy Scale score was 12.4 (SD 2.8), and the mean Subjective Numeracy Scale score was 30.4 (SD 8.5).

Of 33 eligible clinicians, 29 (recruitment rate 88%) caring for eligible patients agreed to participate. The mean clinician age was 34 years (range 24-51; see Table 2). The majority of clinicians were female (15, 52%), not of Hispanic or Latino origin (26, 90%), white (20, 69%), and physicians (MDs; 16, 55%). There were 11 (38%) clinicians with a Physician Assistant degree and 2 (7%) with an Advanced Practice Registered Nurse (nurse practitioner) degree. The mean (range) years of experience practicing emergency medicine (including residency) was 5.8 (0-24). All clinicians owned a personal smartphone (29, 100%), and most owned a personal tablet computer (21, 72%). The majority of clinicians (24, 83%) also indicated they spent over 30 hours a week on a computer, tablet, or smartphone.

Patient Experience

Mean (SD) knowledge assessment scores increased from 3.3 (1.9) out of 9 pre-encounter to 4.7 (2.1) postencounter (Multimedia Appendix 1), with a mean difference of 1.4 (95% CI 0.8-2.0; see Table 3). The mean (SD) patient decisional conflict score was 11.7 (13.5), and the mean (SD) trust in physician score was 92.5 (12.0); both scales range from 0 to 100. Patient satisfaction scores showed that a majority of patients were satisfied with the clarity of information (35, 85%), helpfulness of the information (36, 88%), and amount of information (36, 88%). The majority of patients also said that they would recommend the app to others (36, 88%) and would want to use something similar for other clinical decisions (26, 63%).

Clinician Experience

In the 41 encounters, most clinicians reported that the information was somewhat or extremely helpful to the patient (35, 85%), would want to use something similar for other decisions (27, 66%), and would recommend the app to other providers (28, 68%; see Table 4). The mean (SD) System Usability Scale score was 85.1 (15.0), and the total Net Promoter Score was 36.6 (first-time users, 31.0; second-time users, 50.0).

The mean (SD) fidelity score was 6.7 (1.6; see Table 5) out of the 8 intended actions that the app aimed to elicit. Clinicians most consistently described the different risk levels portrayed on the risk visualization pictograph (95%). Clinicians least frequently elicited the patient or caregiver's concerns (61%).

Health Care Utilization and Patient Safety

In the 41 encounters in which the app was used, 7 patients (17%; see Table 6) received a head CT in the ED. Because all enrolled patients were low risk, none of these 7 CT scans was recommended by the CCHR. The 3 most frequently cited reasons for obtaining CT were referring physician request (5/7, 71%), mechanism of injury (3/7, 43%), and headache (3/7, 43%). In all cases in which the app was used, clinicians reported that they would have made the same decision without the app. No patients were admitted to the hospital (0, 0%). Follow-up data were collected by phone call for 34 patients (83%), by email for 4 patients (10%), and by chart review for the remaining 3 patients (7%). At 7-day follow-up, 4 patients (10%) had returned to an ED, 14 patients (34%) had visited a physician's office or clinic, 1 patient (2%) had done both, and 22 patients (54%) had done neither. Further testing or procedures were obtained for 5 patients (12%) within 7 days of the encounter, and 2 patients (5%) underwent neuroimaging within 7 days. No patient had acute findings on CT in the ED or on follow-up imaging (0%).

Secondary Analyses

In secondary analyses of variables predictive of the odds of CT imaging, fidelity with the concerns portion of the intervention (odds ratio 0.19, 95% CI 0.03-1.15, P=.07), not having low literacy (odds ratio 0.23, 95% CI 0.04-1.26, P=.09), and an above-average system usability score (odds ratio 0.24, 95% CI 0.03-1.83, P=.17) trended toward significance, but none reached statistical significance. Analyses of patient knowledge and trust in the physician likewise yielded no statistically significant results. Variables that trended toward significance for change in patient knowledge from pre- to postencounter in univariate analysis were white patient race (coefficient 1.14, 95% CI –0.14 to 2.41, P=.08), fidelity with the concerns portion of the intervention (coefficient –0.83, 95% CI –2.13 to 0.47, P=.21), and not having low literacy or low numeracy (each with coefficient 0.64, 95% CI –0.66 to 1.94, P=.33). Variables that trended toward significance for trust in physician on univariate analysis were white patient race (coefficient 5.89, 95% CI –1.29 to 13.08, P=.11), fidelity with the concerns portion of the intervention (coefficient 3.59, 95% CI –3.74 to 10.91, P=.34), and not having low literacy (coefficient 3.83, 95% CI –4.02 to 11.68, P=.34).

Table 1. Patient characteristics.

Participants recruited, n: 43
Participants enrolled, n (%): 41 (95)
Age (years), mean (range): 34.9 (18-59)
Female, n (%): 26 (63)
Race, n (%)
  Black or African American: 15 (37)
  White: 17 (42)
  Asian: 1 (2)
  American Indian or Alaska Native: 1 (2)
  Other: 9 (22)
Ethnicity, n (%)
  Hispanic or Latino origin: 10 (24)
  Not of Hispanic or Latino origin: 31 (76)
Education, n (%)
  Some high school or less: 4 (10)
  High school graduate: 20 (49)
  Some college: 12 (29)
  College graduate or more: 5 (12)
Insurance, n (%)
  Private/HMOa: 21 (51)
  Medicaid only: 17 (42)
  Medicare only: 0 (0)
  Medicare + Medicaid: 1 (2)
  Uninsured: 2 (5)
Annual household income (US $), n (%)
  <20,000: 8 (20)
  20,000-29,999: 6 (15)
  30,000-39,999: 6 (15)
  40,000-59,999: 4 (10)
  60,000-79,999: 7 (17)
  80,000-99,999: 5 (12)
  ≥100,000: 5 (12)
Subjective Literacy Scale score, mean (SD): 12.4 (2.8)
Subjective Numeracy Scale score, mean (SD): 30.4 (8.5)

aHMO: health maintenance organization.

Table 2. Clinician characteristics.

Participants recruited, n: 33
Participants enrolled, n (%): 29 (88)
Age (years), mean (range): 34 (24-51)
Female, n (%): 15 (52)
Race, n (%)
  White: 20 (69)
  Asian: 8 (28)
  Other: 2 (7)
Ethnicity, n (%)
  Hispanic or Latino origin: 3 (10)
  Not of Hispanic or Latino origin: 26 (90)
Medical degree, n (%)
  Advanced Practice Registered Nurse: 2 (7)
  Physician Assistant: 11 (38)
  Physician (MD): 16 (55)
Experience (years), mean (range): 5.8 (0-24)
Technology use, n (%)
  Time (hours) spent on a computer, tablet, or smartphone per week
    <10: 0 (0)
    10-20: 2 (7)
    20-30: 3 (10)
    30-40: 8 (28)
    >40: 16 (55)
  Preferred method of contact
    Call on landline: 0 (0)
    Call on mobile phone: 9 (31)
    Email: 0 (0)
    Text: 20 (69)
    Other: 0 (0)
  Mobile technology use
    Personal tablet computer: 21 (72)
    Personal smartphone: 29 (100)

Table 3. Patient experience outcomes and results.

Patient knowledge
  Knowledge (no. questions correct out of 9), mean (SD)
    Pre-encounter: 3.3 (1.9)
    Postencounter: 4.7 (2.1)
    Mean difference (95% CI): 1.4 (0.8-2.0)
Decisional conflict and trust
  Decisional Conflict Scale (scale of 0-100), mean (SD): 11.7 (13.5)
  Trust in Physician Scale (scale of 0-100), mean (SD): 92.5 (12.0)
Patient satisfaction, n (%)
  Amount of information
    Too little (1-2): 0 (0)
    Just right (3-5): 36 (88)
    Too much (6-7): 5 (12)
  Clarity of information
    Satisfied (5-7): 35 (85)
    Unsatisfied (1-4): 6 (15)
  Helpfulness of information
    Satisfied (5-7): 36 (88)
    Unsatisfied (1-4): 5 (12)
  Would recommend to others
    Yes (1-3): 36 (88)
    Not sure/no (4-7): 5 (12)
  Would want to use for other decisions
    Yes (1-3): 26 (63)
    Not sure/no (4-7): 15 (37)

Table 4. Clinician experience outcomes and results.

System usability and Net Promoter scores
  System Usability Scale score (scale of 0-100), mean (SD): 85.1 (15.0)
  Total Net Promoter Score (scale of –100 to 100): 36.6
  First-time user Net Promoter Score: 31.0
  Second-time user Net Promoter Score: 50.0
Clinician acceptability, n (%)
  Helpfulness of the information
    Not helpful at all (1-2): 1 (2)
    Somewhat helpful (3-5): 16 (39)
    Extremely helpful (6-7): 24 (59)
  Would want to use for other decisions
    Yes (1-2): 27 (66)
    Not sure (3-5): 13 (32)
    No (6-7): 1 (2)
  Would recommend to others
    Yes (1-2): 28 (68)
    Not sure (3-5): 13 (32)
    No (6-7): 0 (0)

Table 5. Fidelity score and compliance with delivery of the intervention as intended, n (%).

Did the clinician describe how the severity of the injury was evaluated using the Canadian CTa Head Rule? 37 (90)
Did the clinician describe the risk as a natural frequency (eg, "of 100 people like you, 6 will...")? 37 (90)
Did the clinician describe the different risk levels portrayed on the risk visualization pictograph? 39 (95)
Did the clinician explain the difference between concussion and brain bleed? 31 (76)
Did the clinician explain what kinds of injuries can and cannot be seen on a CT scan? 33 (81)
Did the clinician elicit the patient and/or caregiver's concerns? 25 (61)
Did the clinician discuss the patient and/or caregiver's specific concerns? 35 (85)
Follow-up discussion
  (If no CT performed) Did the clinician discuss what to expect after leaving the ED? 36 (88)
  (If CT performed) Did the clinician discuss issues to consider before getting a CT scan? 0 (0)
Total score out of 8 possible, mean (SD): 6.7 (1.6)

aCT: computed tomography.

Table 6. Health care utilization and patient safety results, n (%).

Head CTa obtained in the EDb: 7 (17)
Clinician would make same decision without the app: 41 (100)
Admitted to the hospital: 0 (0)
Acute findings on CT in ED: 0 (0)
ED return visit within 7 days: 4 (10)
Physician office or clinic visit within 7 days: 14 (34)
Both ED return visit and physician office or clinic visit within 7 days: 1 (2)
Neither ED return visit nor physician office or clinic visit within 7 days: 22 (54)
Neuroimaging within 7 days: 2 (5)
Acute findings on neuroimaging within 7 days: 0 (0)

aCT: computed tomography.

bED: emergency department.


Discussion

Principal Findings

In this prospective interventional pilot study of ED patients with low-risk minor head injury who were being considered for head CT imaging, use of the Concussion or Brain Bleed app increased patient knowledge and was associated with a low rate of CT use, high trust in the physician, low patient decisional conflict, a high clinician Net Promoter Score, and a high system usability score, with no adverse events in patients. We found the app to be acceptable to both patients and clinicians.

Comparison With Other Studies

Our trial's setup was similar to those of other ED shared decision-making trials in adult patients with chest pain and pediatric patients with head injury [25,28]. The high trust in physician and low decisional conflict scores reported here establish baseline efficacy of the Concussion or Brain Bleed app. These scores are consistent with those of previous ED trials of paper-based decision aids for adult ED patients with chest pain (trust in physician: mean 89.5, SD 13.4 versus 92.5, SD 12.0 in this study; decisional conflict: mean 43.5, SD 11.3 versus 11.7, SD 13.5 in this study) and for parents of pediatric ED patients with head injury (results to be reported soon) [25,28]. Although those results have not yet been formally reported, our population had similar but slightly lower literacy and numeracy than the population of the trial of parents of pediatric ED patients with head injury described in the Outcome Measures section above [25].

Traditional implementation strategies have led to increased CT use in minor head injury [33]. Traditional CDS, meanwhile, has had only a modest effect (5%-8%) on decreasing the rate of CT overuse (35%) in these patients [2,5]. The 17% CT rate in our study is roughly half that overuse rate. Based on our previous qualitative work, we hypothesize that this additional decrease was due to the intervention's ability to engage patients and address nonclinical factors (eg, identifying and addressing patient concerns and increasing trust in the physician) [15]. However, this study enrolled only a limited convenience sample of patients.

The intervention's System Usability Scale and Net Promoter scores were also high. To put them in context, a system usability score of 85.1 corresponds to an adjective rating of "excellent" or a grade of A+ [34,35]; Amazon.com, a frequently used website, has been found to have a similar score [36]. Furthermore, the Net Promoter Score of 36.6 indicates that more users were promoters than detractors of the product and therefore suggests the product's growth potential [32].

Meaning of the Study

Overuse of CT in minor head injury is complex and multifactorial, including both clinical and nonclinical contributing factors [14,15]. Traditional implementation strategies such as CDS can address clinical factors such as a lack of awareness of the evidence [37]. However, these strategies have had limited success for this decision, likely because of nonclinical factors such as patients' concerns with their condition and care [12-15]. The findings of this study suggest that patients can be educated and engaged in the ED setting in decisions about CT imaging for low-risk minor head injury using a health information technology interface that supports the clinician-patient relationship rather than getting in its way [17,38,39]. Specifically, if these findings are confirmed in a larger effectiveness trial, they would imply that successful adoption of the Concussion or Brain Bleed app could help address nonclinical contributors to CT overuse in minor head injury that traditional implementation strategies and traditional CDS do not reach.

Strengths and Weaknesses of the Study

Unlike traditional implementation efforts, this intervention systematically aims to use technology at the bedside to engage, educate, and reassure patients. This pilot study took place at a single site, so the results may not be generalizable to other EDs. Similarly, unique infrastructure already in place in our ED (but not part of the intervention) could have contributed to the app's success. The study was conducted by a single RA responsible for enrollment, clinician training, and data collection; a single RA provides internal consistency but could introduce bias related to that RA's level of performance. Enrollment primarily occurred in the evenings, which is consistent with our previous findings on enrollment of head injury patients in the ED [40]. The patients enrolled were representative of the patient population seen in our ED, which serves an urban, underserved population with low literacy and numeracy.

This pilot study has shown that it is feasible to use an integrated decision aid with CDS on a tablet computer at the bedside in the ED to engage, educate, and reassure low-risk minor head injury patients about CT and concussion. This finding is promising but, without a control arm, a conclusion cannot be drawn regarding the intervention’s efficacy in reducing potentially avoidable CT scans. Although only 2 patients and 4 clinicians declined to participate, enrollment of a convenience sample may also have introduced a self-selection bias of clinicians and patients who were more amenable to this type of approach. For example, the clinicians were relatively young and tech savvy (average age 34 years and 69% using text messaging as their preferred method of personal communication). Since diffusion of innovations benefits from early adoption by a population that is likely to be receptive to change and technology, we believe this is a necessary first step to adoption [41].

One of the top priorities of the Concussion or Brain Bleed app is to have the clinician identify and address the patient's specific concerns. We were therefore troubled to note that eliciting the patient's concerns had the lowest fidelity score of the 8 intended actions the intervention aimed to elicit (Table 5). To address this, we revised the Risk Discussion screen as discussed in Multimedia Appendix 3.

Based on the secondary analyses, fidelity with identifying and addressing patients' concerns trended toward significance as a predictor of CT imaging rate (odds ratio 0.19, P=.07), change in patient knowledge (coefficient –0.83, P=.21), and trust in physician (coefficient 3.59, P=.34), but these results were not statistically significant. These findings are consistent with our qualitative research indicating that identifying and addressing patients' concerns influences overuse of CT in low-risk minor head injury patients [15]. They are also consistent with verbal feedback from users that time constraints made it difficult to both educate patients and address their concerns. This study was not powered to detect which variables were predictive of outcomes; however, these estimates give a sense of the direction of association that may exist. The results reported here will help determine the sample size for future effectiveness research comparing this intervention with usual care.

Unanswered Questions and Future Research

In this pilot study, research staff were available to coordinate use of the Concussion or Brain Bleed app in appropriate patients. Given the competing demands of the ED context, in the absence of research staff there would be multiple barriers to its use, adoption, and integration into routine ED care. Although clinicians reported in every use of the intervention that the app did not affect their clinical decision whether to obtain CT imaging, we maintain that the Concussion or Brain Bleed app has the potential to safely reduce CT imaging in low-risk minor head injury patients. Future research should therefore focus on assessing and optimizing the context for implementing the app in routine ED care. Identifying barriers and facilitators for embedding this complex innovation in routine care could optimize its reach, effectiveness, adoption, implementation, and maintenance [42,43]. For example, a qualitative analysis could explore why some physicians approved of the tool but would not recommend it to others. Once these factors are identified and optimized, our planned comparison of the app's effectiveness versus usual care could more fully determine its effects on patient experience, clinician experience, health care utilization, and patient safety. If the app is effective, our next goal would be to scale the intervention for dissemination and implementation at outside sites. At the time of this publication, the Concussion or Brain Bleed app is also being adapted for use in Canada, with plans to study it there in a comparative effectiveness trial as well. A further category of unanswered questions concerns the broader concept of patient-centered decision support: for example, which clinical decisions are appropriate for patient decision aids versus traditional CDS versus patient-centered integrated solutions like the one presented here.

Conclusion

An app to help patients assess the utility of CT imaging after head injury in the ED increased patient knowledge, was associated with a low rate of CT overuse, and was reported by most clinicians to be helpful to patients. The high degree of patient satisfaction, clinician acceptability, and system usability supports rigorous testing of the app in future research that could optimize its implementation in routine ED care and measure its effectiveness compared with usual care.

Acknowledgments

This project was supported by grant number K08HS021271 from the Agency for Healthcare Research and Quality. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality. The authors would like to express their gratitude to Kevin Lopez for computer programming of the intervention and Caitlin Johnson for facilitating the IRB process.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Patient Knowledge Assessment Questionnaire.

JPG File, 53KB

Multimedia Appendix 2

Fidelity checklist.

PDF File (Adobe PDF File), 20KB

Multimedia Appendix 3

Revisions to Risk Discussion screen based on pilot user feedback.

PDF File (Adobe PDF File), 247KB

  1. American College of Emergency Physicians. Choosing Wisely. Philadelphia, PA: ABIM Foundation; 2014. Ten things physicians and patients should question   URL: http://www.choosingwisely.org/societies/american-college-of-emergency-physicians/ [accessed 2017-09-19] [WebCite Cache]
  2. Melnick ER, Szlezak CM, Bentley SK, Dziura JD, Kotlyar S, Post LA. CT overuse for mild traumatic brain injury. Jt Comm J Qual Patient Saf 2012 Nov;38(11):483-489. [Medline]
  3. Korley FK, Morton MJ, Hill PM, Mundangepfupfu T, Zhou T, Mohareb AM, et al. Agreement between routine emergency department care and clinical decision support recommended care in patients evaluated for mild traumatic brain injury. Acad Emerg Med 2013 May;20(5):463-469 [FREE Full text] [CrossRef] [Medline]
  4. Parma C, Carney D, Grim R, Bell T, Shoff K, Ahuja V. Unnecessary head computed tomography scans: a level 1 trauma teaching experience. Am Surg 2014 Jul;80(7):664-668. [Medline]
  5. Sharp AL, Nagaraj G, Rippberger EJ, Shen E, Swap CJ, Silver MA, et al. Computed tomography use for adults with head injury: describing likely avoidable emergency department imaging based on the Canadian CT Head Rule. Acad Emerg Med 2017 Dec;24(1):22-30. [CrossRef] [Medline]
  6. Melnick ER. Big versus small data and the generalizability of the rate of computed tomography overuse in minor head injury. Acad Emerg Med 2017 Mar;24(3):391-392. [CrossRef] [Medline]
  7. Korley FK, Pham JC, Kirsch TD. Use of advanced radiology during visits to US emergency departments for injury-related conditions, 1998-2007. JAMA 2010 Oct 06;304(13):1465-1471. [CrossRef] [Medline]
  8. Stiell IG, Wells GA, Vandemheen K, Clement C, Lesiuk H, Laupacis A, et al. The Canadian CT Head Rule for patients with minor head injury. Lancet 2001 May 05;357(9266):1391-1396. [Medline]
  9. Stiell IG, Wells GA. Methodologic standards for the development of clinical decision rules in emergency medicine. Ann Emerg Med 1999 Apr;33(4):437-447. [Medline]
  10. Papa L, Stiell IG, Clement CM, Pawlowicz A, Wolfram A, Braga C, et al. Performance of the Canadian CT Head Rule and the New Orleans Criteria for predicting any traumatic intracranial injury on computed tomography in a United States level I trauma center. Acad Emerg Med 2012 Jan;19(1):2-10 [FREE Full text] [CrossRef] [Medline]
  11. Smits M, Dippel DWJ, Nederkoorn PJ, Dekker HM, Vos PE, Kool DR, et al. Minor head injury: CT-based strategies for management--a cost-effectiveness analysis. Radiology 2010 Feb;254(2):532-540. [CrossRef] [Medline]
  12. Sharp AL, Huang BZ, Tang T, Shen E, Melnick ER, Venkatesh AK, et al. Implementation of the Canadian CT Head Rule and its association with use of computed tomography among patients with head injury. Ann Emerg Med 2017 Jul 21;33:1505-1514. [CrossRef] [Medline]
  13. Ip IK, Raja AS, Gupta A, Andruchow J, Sodickson A, Khorasani R. Impact of clinical decision support on head computed tomography use in patients with mild traumatic brain injury in the ED. Am J Emerg Med 2015 Mar;33(3):320-325. [CrossRef] [Medline]
  14. Probst MA, Kanzaria HK, Schriger DL. A conceptual model of emergency physician decision making for head computed tomography in mild head injury. Am J Emerg Med 2014 Jun;32(6):645-650 [FREE Full text] [CrossRef] [Medline]
  15. Melnick ER, Shafer K, Rodulfo N, Shi J, Hess EP, Wears RL, et al. Understanding overuse of computed tomography for minor head injury in the emergency department: a triangulated qualitative study. Acad Emerg Med 2015 Dec;22(12):1474-1483 [FREE Full text] [CrossRef] [Medline]
  16. Melnick ER. How to make less more: empathy can fill the gap left by reducing unnecessary care. BMJ 2015 Nov 04;351:h5831. [Medline]
  17. Melnick ER, Lopez K, Hess EP, Abujarad F, Brandt CA, Shiffman RN, et al. Back to the bedside: developing a bedside aid for concussion and brain injury decisions in the emergency department. EGEMS (Wash DC) 2015;3(2):1136. [CrossRef] [Medline]
  18. Melnick ER, Hess EP, Guo G, Breslin M, Lopez K, Pavlo AJ, et al. Patient-centered decision support: formative usability evaluation of integrated clinical decision support with a patient decision aid for minor head injury in the emergency department. J Med Internet Res 2017 May 19;19(5):e174 [FREE Full text] [CrossRef] [Medline]
  19. Montori VM, Breslin M, Maleska M, Weymiller AJ. Creating a conversation: insights from the development of a decision aid. PLoS Med 2007 Aug;4(8):e233 [FREE Full text] [CrossRef] [Medline]
  20. Melnick ER, Probst MA, Schoenfeld E, Collins SP, Breslin M, Walsh C, et al. Development and testing of shared decision making interventions for use in emergency care: a research agenda. Acad Emerg Med 2016 Dec;23(12):1346-1353. [CrossRef] [Medline]
  21. Consumer Reports, Choosing Wisely, American College of Emergency Physicians. ConsumerHealthChoices. Yonkers, NY: Consumer Reports, Inc. Do you need a CT scan for a head injury?   URL: http://consumerhealthchoices.org/wp-content/uploads/2016/10/ChoosingWiselyHeadInjuryACEP-ER.pdf [accessed 2017-09-20] [WebCite Cache]
  22. Fagerlin A, Zikmund-Fisher BJ, Ubel PA, Jankovic A, Derry HA, Smith DM. Measuring numeracy without a math test: development of the Subjective Numeracy Scale. Med Decis Making 2007;27(5):672-680. [CrossRef] [Medline]
  23. Zikmund-Fisher BJ, Smith DM, Ubel PA, Fagerlin A. Validation of the Subjective Numeracy Scale: effects of low numeracy on comprehension of risk communications and utility elicitations. Med Decis Making 2007;27(5):663-671. [CrossRef] [Medline]
  24. McNaughton C, Wallston KA, Rothman RL, Marcovitz DE, Storrow AB. Short, subjective measures of numeracy and general health literacy in an adult emergency department. Acad Emerg Med 2011 Nov;18(11):1148-1155 [FREE Full text] [CrossRef] [Medline]
  25. Hess EP, Wyatt KD, Kharbanda AB, Louie JP, Dayan PS, Tzimenatos L, et al. Effectiveness of the head CT choice decision aid in parents of children with minor head trauma: study protocol for a multicenter randomized trial. Trials 2014 Jun 25;15:253 [FREE Full text] [CrossRef] [Medline]
  26. O'Connor AM. Validation of a decisional conflict scale. Med Decis Making 1995;15(1):25-30. [Medline]
  27. Koedoot N, Molenaar S, Oosterveld P, Bakker P, de Graeff A, Nooy M, et al. The decisional conflict scale: further validation in two samples of Dutch oncology patients. Patient Educ Couns 2001 Dec 01;45(3):187-193. [Medline]
  28. Hess EP, Hollander JE, Schaffer JT, Kline JA, Torres CA, Diercks DB, et al. Shared decision making in patients with low risk chest pain: prospective randomized pragmatic trial. BMJ 2016 Dec 05;355:i6165 [FREE Full text] [Medline]
  29. Anderson LA, Dedrick RF. Development of the Trust in Physician scale: a measure to assess interpersonal trust in patient-physician relationships. Psychol Rep 1990 Dec;67(3 Pt 2):1091-1100. [CrossRef] [Medline]
  30. Thom DH, Ribisl KM, Stewart AL, Luke DA. Further validation and reliability testing of the Trust in Physician Scale. The Stanford Trust Study Physicians. Med Care 1999 May;37(5):510-517. [Medline]
  31. Brooke J. SUS-a quick and dirty usability scale. Usability Eval Ind 1996;189(194):4-7.
  32. Reichheld F. The one number you need to grow. Harvard Bus Rev 2003;81(12):46-55.
  33. Stiell IG, Clement CM, Grimshaw JM, Brison RJ, Rowe BH, Lee JS, et al. A prospective cluster-randomized trial to implement the Canadian CT Head Rule in emergency departments. CMAJ 2010 Oct 05;182(14):1527-1532 [FREE Full text] [CrossRef] [Medline]
  34. Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud 2009;4(3):114-123.
  35. Sauro J, Lewis JR. Quantifying the User Experience: Practical Statistics for User Research. Waltham, MA: Morgan Kaufmann; 2012.
  36. Bangor A, Kortum P. Usability ratings for everyday products measured with the System Usability Scale. Int J Hum Comput Interact 2013;29(2):76.
  37. Glasziou P, Haynes B. The paths from research to improved health outcomes. Evid Based Nurs 2005 Apr;8(2):36-38 [FREE Full text] [Medline]
  38. Gellert G, Webster S, Gillean J, Melnick E, Kanzaria H. Should US doctors embrace electronic health records? BMJ 2017 Dec 24;356:j242. [Medline]
  39. Ratanawongsa N, Barton JL, Lyles CR, Wu M, Yelin EH, Martinez D, et al. Association between clinician computer use and communication with patients in safety-net clinics. JAMA Intern Med 2016 Jan;176(1):125-128 [FREE Full text] [CrossRef] [Medline]
  40. Mbachu S, Pieribone V, Bechtel KA, McCarthy ML, Melnick E. Optimizing recruitment and retention of adolescents in emergency department research: findings from concussion biomarker pilot. Am J Emerg Med 2017 Sep 14. [CrossRef] [Medline]
  41. Rogers E. Diffusion of Innovations. New York, NY: Free Press Glencoe; 1962.
  42. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health 1999 Sep;89(9):1322-1327. [Medline]
  43. Glasgow RE, Lichtenstein E, Marcus AC. Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health 2003 Aug;93(8):1261-1267. [Medline]


CCHR: Canadian CT Head Rule
CDS: clinical decision support
CT: computed tomography
ED: emergency department
IRB: institutional review board
RA: research assistant


Edited by G Eysenbach; submitted 14.08.17; peer-reviewed by T Agresta, F De Backere; comments to author 06.09.17; revised version received 07.09.17; accepted 08.09.17; published 28.09.17

Copyright

©Navdeep Singh, Erik Hess, George Guo, Adam Sharp, Brian Huang, Maggie Breslin, Edward Melnick. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 28.09.2017.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mhealth and uhealth, is properly cited. The complete bibliographic information, a link to the original publication on http://mhealth.jmir.org/, as well as this copyright and license information must be included.