Article

A Tool to Retrieve Alert Dwell Time from a Homegrown Computerized Physician Order Entry (CPOE) System of an Academic Medical Center: An Exploratory Analysis

1 Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
2 International Center for Health Information and Technology, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
3 Department of Medicine, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA 02115, USA
4 Big Data Institute, University of Oxford, Oxford OX3 7LF, UK
5 St. John’s College, University of Oxford, Oxford OX1 3JP, UK
6 Information Technology Office in Taipei Municipal Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
7 Department of Radiation Oncology, Taipei Municipal Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
8 Office of Public Affairs, Taipei Medical University, Taipei 110, Taiwan
9 Department of Dermatology, Taipei Municipal Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(24), 12004; https://doi.org/10.3390/app112412004
Submission received: 23 September 2021 / Revised: 6 December 2021 / Accepted: 13 December 2021 / Published: 16 December 2021
(This article belongs to the Topic eHealth and mHealth: Challenges and Prospects)

Abstract

Alert dwell time, defined as the time elapsed from the generation of an interruptive alert to its closure, has rarely been used to describe the time required by clinicians to respond to interruptive alerts. Our study aimed to develop a tool to retrieve alert dwell times from a homegrown CPOE (computerized physician order entry) system, and to conduct an exploratory analysis of the impact of various alert characteristics on alert dwell time. Additionally, we compared this impact between professional groups. With these aims, a dominant window detector was developed using the Golang programming language and was implemented to collect all alert dwell times from the homegrown CPOE system of a 726-bed Taiwanese academic medical center from December 2019 to February 2021. Overall, 3,737,697 interruptive alerts were collected. Correlation analysis was performed for alerts corresponding to the 100 most frequent alert categories. Our results showed a negative correlation (ρ = −0.244, p = 0.015) between the number of alerts and alert dwell times. Alert dwell times were strongly correlated between professional groups (physician vs. nurse, ρ = 0.739, p < 0.001). A tool that retrieves alert dwell times can provide important insights to hospitals attempting to improve clinical workflows.

1. Introduction

Interruptive alerts, delivered as pop-up windows that suspend the workflow until addressed, have been widely used in clinical decision support (CDS) and computerized physician order entry (CPOE) systems to prevent medical errors during diagnosis and treatment [1,2,3]. In contrast to soft-stop and passive alerts, interruptive alerts require the physician to take one or more actions to close the alert dialog box. Although several studies have shown that interruptive alerts effectively prevent dangerous prescriptions by ensuring that physicians are likely to see them, they can also lead to unintended consequences [4,5,6,7], such as additional cognitive burden, delays in the delivery of appropriate therapy [8,9,10], and general alert fatigue.
Alert fatigue, defined as “the mental state that is the result of too many alerts consuming time and mental energy [11,12]”, usually occurs when interruptive alert systems unnecessarily and frequently distract clinicians from their thought processes [13]. Consequently, both clinically important and unimportant alerts may be ignored. Causes of alert fatigue include, but are not limited to: (1) a large number of alerts [14]; (2) clinically irrelevant alerts or alerts of low clinical value [15,16]; and (3) miscommunication or lack of communication of the meaning of the alert [17].
Previous studies have primarily used alert override rates (total number of alerts that do not change the physician’s prescribing behavior divided by the total number of alerts presented to the physician) to evaluate alert efficiency [18,19,20]. However, using alert override rates to evaluate alert efficiency still assumes that users read alerts to determine their relevance [21]. In this study, we assess “alert dwell time” instead, defined as the time elapsed from an interruptive alert being generated to it being closed or overridden, which more broadly quantifies the time required for clinicians to respond to interruptive alerts, whether they read them or not [22].
To date, few studies have used alert dwell time to evaluate the efficiency of their alert system, with the majority focusing only on clinical alerts (e.g., prescribing and drug-drug interactions) [21,22,23]. However, CPOE systems like ours generate both clinical and administrative (low or no clinical relevance) alerts [24,25,26,27], with up to 80% being administrative as reported in our previous study [16]. Therefore, neglecting administrative alerts may limit the improvements that can be made to the alert system, especially in relation to alert fatigue.
Here, we studied alert dwell times for both administrative and clinical alerts. To do so, we developed a window retrieval technique, which detects how much time the user stays in every program window while operating the computer. Using data collected by the alert log collector (ALC, a method used to collect alerts in a homegrown CPOE system) [16], the specific alert dwell times for each alert could be identified. Additionally, we conducted an exploratory analysis of the impact of common alert characteristics (i.e., number of alerts and alert message length) on the alert dwell time, as well as potential differences between professional groups (physician, nurse, other).

2. Materials and Methods

2.1. The Implementation of Alert Log Collector and Dominant Window Detector

The implementation of the alert log collector was described in our previous study [16]. The Go (Golang) programming language (version 1.15.7) was used to build the dominant window detector, which operates independently from the hospital’s CPOE system; the database was maintained in MySQL. A demonstration of the dominant window detector program is available on GitHub: https://github.com/alanjian/DominantWindowDetector (accessed on 16 March 2021). The dominant window detector was first implemented in the Taipei Municipal Wan Fang Hospital (WFH) in December 2019, running continuously on each computer unless terminated by the user or by an unexpected system crash. The dominant window is defined as the window currently active at the forefront of the computer screen. The principal processes of the dominant window detector are described in the Supplementary Materials. Whenever the user changed the dominant window, the detector recorded the data relevant to this study and transmitted them to the database for further analysis.

2.2. Data Combination Process

We used an alert log collector alongside a dominant window detector to collect relevant data [16]. However, the alert log collector only logs the alerts without their respective alert dwell times; meanwhile, the dominant window detector records all alert dwell times without the contents of the alert messages (Figure 1). Therefore, we combined these data.
First, the data merging program checks for consistency in the computer’s IP address and dates between records of user-selected windows (collected by the dominant window detector) and alert log records (collected by the alert log collector) (Figure 2). If consistent, the time gap between the two records is calculated.
The process of calculating the time gaps between the respective ALC (Alert Log Collector) and DWD (Dominant Window Detector) records is shown in Figure 3. The smallest time gap between a dominant window record and an alert log record was interpreted to indicate that the two records had been derived from the same alert window (thicker line in Figure 3).
For each alert category, titles have corresponding message contents. For example, the message content corresponding to “Alternative medication remind” is “[Drug A] has been suspended. Do you want to use [Drug B] as an alternative option?” (Table S1). However, the content may name different drugs, so we used “(*)” as a regular-expression wildcard to indicate that this position can be any drug name. Finally, the data merging program checks whether the title of the dominant window and the alert content correspond to one of the predefined category pairs; there were 501 predefined pairs in this study. If the window title and alert content were validated, the records were merged successfully. The data merging program continues checking and merging until all records have been compared. For our study, the data combination process took about four days to complete.
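The title/content validation step can be illustrated in Go. The real system checks 501 predefined pairs; here a single pair is shown, with the paper’s “(*)” placeholder translated into a regular-expression wildcard. All identifiers (`pairs`, `mustPattern`, `matches`) are illustrative names, not the published code.

```go
package main

import (
	"fmt"
	"regexp"
)

// mustPattern escapes the literal template text and turns each "(*)"
// placeholder into a wildcard that matches any drug name.
func mustPattern(template string) *regexp.Regexp {
	escaped := regexp.QuoteMeta(template)
	// QuoteMeta turns "(*)" into `\(\*\)`; replace that with a lazy wildcard.
	wild := regexp.MustCompile(`\\\(\\\*\\\)`).ReplaceAllString(escaped, `.+?`)
	return regexp.MustCompile("^" + wild + "$")
}

// pairs maps a window title to its expected content pattern (one of the
// study's 501 predefined pairs, shown here as a single example).
var pairs = map[string]*regexp.Regexp{
	"Alternative medication remind": mustPattern(
		"[(*)] has been suspended. Do you want to use [(*)] as an alternative option?"),
}

// matches reports whether a (title, content) record fits a predefined pair.
func matches(title, content string) bool {
	re, ok := pairs[title]
	return ok && re.MatchString(content)
}

func main() {
	ok := matches("Alternative medication remind",
		"[Aspirin] has been suspended. Do you want to use [Clopidogrel] as an alternative option?")
	fmt.Println(ok)
}
```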

2.3. Data Analysis

The MySQL Relational Database Management System (RDBMS) was used to store and manage all data, including the outpatient clinic alerts and window-switching event records from WFH. We tested variables related to the different alert characteristics for normality using the Kolmogorov-Smirnov and Shapiro-Wilk tests, and assessed correlation using Spearman’s rho (ρ) coefficient. Ranges of absolute ρ values were used to define strong (0.70–0.99), moderate (0.30–0.69), and weak (0.00–0.29) correlation [28]. IBM SPSS Statistics 19 (IBM Corp., Armonk, NY, USA) was used to compute descriptive statistics, normality tests, and correlation analyses.

3. Results

3.1. Alert Distribution

We successfully developed a dominant window detector using the Golang programming language and retrieved alert dwell times from the homegrown CPOE system of an academic medical center in Taiwan. The alert log collector had been implemented in WFH on 11 November 2017; in total, 3,737,697 triggered alerts were collected during the 14-month study period (25 December 2019 to 20 February 2021). The alerts were highly concentrated in the top 100 most frequent categories, consistent with the results of our previous study [16]. In the data combination process, 462,463 alerts were first excluded due to missing alert dwell time values, which arose because some clinicians terminated the dominant window detector before starting their clinic sessions. These missing records constituted 12% of all data, so we judged that removing them would not materially affect our results. A further 28,763 alerts were excluded for falling outside the top 100 most frequent categories. Ultimately, 3,246,471 alerts in the top 100 most frequent categories were included in our study (Figure 4).

3.2. Normality Test and Descriptive Statistics

The normality tests showed that the alert dwell time, the number of alerts, the length of the alert message, and the alert dwell times for physicians, nurses, and other professional groups were not normally distributed (Kolmogorov-Smirnov test, p = 0.001; Shapiro-Wilk test, p = 0.001) (Table S2). Thus, the median was used as the average value for each alert characteristic.
The descriptive statistics are shown in Table 1. All alert characteristics were represented across the 100 unique alert categories. The median number of alerts per category was 5380 (range, 1201–760,690), and the mean was 32,465. To examine whether users spend more time reading alerts with longer messages, alert message length was calculated as the number of words in the message; only meaningful words were counted, excluding blanks, punctuation marks, and line breaks. The median alert message length was 15 words (range, 4–278 words), and the mean was 23 words. The median alert dwell time was 1.3 s (range, 0.03–3 s), and the mean was 1.3 s. The median alert dwell times for physicians, nurses, and other professional groups were 1.3 s (range, 0.03–2.3 s), 1.4 s (range, 0.03–3.1 s), and 1.3 s (range, 0.00–3.1 s), respectively; the corresponding means were 1.3 s, 1.5 s, and 1.3 s.
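The word-counting rule above can be sketched in Go. `messageLength` is a hypothetical helper, and the tokenizer (split on any non-alphanumeric rune) is our assumption, since the paper does not specify the exact rule.

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// messageLength counts the word-like tokens in an alert message, skipping
// blanks, punctuation marks, and line breaks, as in the study's definition
// of alert message length (the tokenizer itself is an assumption).
func messageLength(msg string) int {
	words := strings.FieldsFunc(msg, func(r rune) bool {
		return !unicode.IsLetter(r) && !unicode.IsNumber(r)
	})
	return len(words)
}

func main() {
	msg := "[Drug A] has been suspended.\nDo you want to use [Drug B] as an alternative option?"
	fmt.Println(messageLength(msg)) // counts only the word tokens
}
```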

3.3. Top 10 Most Frequent Alert Categories and Correlation Analysis

The top 10 most frequent alert categories are displayed in Table 2. Most of these alerts were triggered on physicians’ computers (8 out of 10), and most alert dwell times were between 1 s and 10 s. For the top 10 most frequent categories, the median alert dwell times ranged from 0.027 s to 2.635 s. However, the maximum alert dwell time reached 9186.9 s, a clear outlier caused by the user being away from the keyboard.
There was a weak negative correlation (ρ = −0.244, p = 0.015) between the number of alerts and alert dwell time, suggesting that clinicians tend to spend less time reading alerts that are triggered more frequently. Examples of less frequent alert categories with longer alert dwell times are given in Table S3, and examples of alert categories with higher clinical relevance in Table S4.
The impact of alert message length on alert dwell time was not statistically significant (p = 0.097). However, there were strong positive correlations between the alert dwell times of the different professional groups: physician vs. nurse (ρ = 0.739, p < 0.001), physician vs. other (ρ = 0.811, p < 0.001), and nurse vs. other (ρ = 0.733, p < 0.001). There was also a weak positive correlation between alert message length and the number of alerts (ρ = 0.277, p = 0.005).

4. Discussion

In this study, we successfully developed a dominant window detector through the Golang programming language, which was utilized to retrieve the alert dwell times within the CPOE system of an academic medical center in Taiwan. Few studies to date have reported alert dwell times [21,22,23]. We showed that alert dwell times can be used to assess which categories of administrative and clinical alerts are most time-consuming, and/or are being ignored. By looking for potentially resolvable problems of the CPOE alert system in this way, the clinical workflow can ultimately be improved.
Most CPOE systems use alerts to prevent clinical errors and to optimize the safety and quality of clinical decisions [29], for example by improving physician compliance with guidelines [30] and reducing unintentional duplicate orders [31]. Previous studies have shown that it takes physicians 8 to 10 s to read alerts completely. However, in our study, the most common alert dwell time was around one second, suggesting that our physicians had become victims of alert fatigue, or that they had learned to reflexively disregard messages of low or no clinical importance [32]. One indication of alert fatigue is clinicians spending less time per alert, sometimes not reading the alert at all [33]. The most frequent alerts in our CPOE alert system were administrative, highlighting the need to optimize these preferentially.
Intuitively, the most frequent alerts have the advantage of clinician familiarity. Our study showed that more frequent alerts have lower alert dwell times, which may be suggestive of alert fatigue in relation to our hospital’s current CPOE system. Previous studies have indicated that alert relevance (i.e., accuracy of automation) has no impact on alert dwell time [21]. Interestingly, we found that there was also no association between alert message length and alert dwell time. We hypothesize that this was because physicians did not read the alert message completely, regardless of its length. Since physicians frequently received clinically irrelevant alerts, they probably learned to override them expeditiously to optimize their clinical workflow.
Our results also showed that the alert dwell times among different professional groups remained consistent. One possible contributor to this observation is that clinical processes affecting different professional groups in the hospital are analogous to one another. Another potential explanation is that the different professional groups receive alerts with a similar distribution of clinical importance. By identifying alerts with low dwell times owing to negligible clinical importance, dwell times can provide additional support for changing or even removing them [34,35].
Numerous studies of physician behavior have employed a wide array of analytical methods, including questionnaires, interviews, and observations [36,37,38]. The eventual goal of these studies is to understand the features of alerts that contribute to alert fatigue, and thus help to optimize the overall clinical workflow. The dominant window detector that we developed for this study not only retrieves the alert dwell times, but also analyzes the time spent by clinicians in different program windows during the consultation. This opens up the opportunity to analyze various elements of the clinical workflow, such as the time spent on different programs, and the identification of the most time-consuming software. Furthermore, by visualizing how computers are being used during various digitized clinical processes, any bottlenecks in the computerized elements of clinical workflow potentially become easier to discover and improve. By using a dominant window detector like ours in studies of clinician behavior on CPOE systems, the medical prescribing process may be ameliorated by decreasing alert fatigue, with potentially beneficial consequences for patient safety [39,40,41].

Limitations

This study has several limitations. First, although the dominant window detector is convenient to implement, additional data combination procedures were needed to calculate the alert dwell times; an automated, AI-based alert classification method will be applied in the next version of the dominant window detector. Second, the present study only examined the impact of fundamental alert characteristics on alert dwell time; further variables such as medical specialty, individual characteristics (age, gender, clinical experience), alert category, and clinical importance should be explored. Third, we focused on alert dwell time; alert acceptance, another efficiency indicator of CPOE alert system performance, could not be recorded by our current program. Lastly, although we collected and analyzed a large number of alerts, our data came from a single academic medical center in Taiwan, which may limit the generalizability of our findings. To address this, datasets of alerts collected in other hospitals should be used for external validation.

5. Conclusions

We have successfully developed a novel method to retrieve the alert dwell times from a homegrown CPOE system, and demonstrated its ability to help analyze the impact of several alert characteristics on alert dwell time, including categories of alert content and alert message length. Additionally, our study suggested that alert fatigue might occur for higher-frequency alerts with lower alert dwell times, and that the alert dwell times remained consistent across different clinical professional groups. Our dominant window detector can potentially expand the functionality of the hospital’s homegrown CPOE system, providing important insights for system designers and hospital administrators looking to optimize clinical workflow.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/app112412004/s1, Figure S1: Framework of the dominant window detector (HWND = handle to the window), Table S1: An example of alert title and alert content in predefined pairs, Table S2: Normality test results, Table S3: Examples of less frequent alert categories with longer alert dwell times, Table S4: Examples of alert categories with high clinical relevance.

Author Contributions

S.-C.C. and Y.-C.L.: Conception or design of the work; S.-C.C., C.-Y.C. and C.-K.H.: Data collection; S.-C.C. and C.-H.C.: Data analysis and interpretation; S.-C.C., Y.-P.C., C.-H.Y. and Y.-C.L.: Final approval of the version to be published; Y.-P.C.: Critical revision of the article; Y.-P.C., C.-Y.C., C.-K.H. and C.-H.C.: Drafting the article; C.-H.Y.: Interpretation of results; C.-H.Y.: Revision of the article. All authors reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the Ministry of Science and Technology (grant number: MOST 110-2622-E-038-003-CC1 and MOST 110-2320-B-038-029-MY3).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of the Taipei Medical University Research Ethics Board (protocol code N202010047, approved on 27 October 2020).

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from Wan Fang Hospital Institutional Data Access, but restrictions apply: the data were used under license for the current study and are not publicly available. Data are, however, available from the authors upon reasonable request and with the permission of Wan Fang Hospital Institutional Data Access. Readers interested in the original dataset may contact [email protected] (Y.-Y.L.).

Acknowledgments

We want to thank Ruey-Jing Wu (Information Technology Office in Taipei Municipal Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan) for her administrative support and Hsuan-Chia Yang (Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan) for his help during the revision process.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Koppel, R.J.C.C. What do we know about medication errors made via a CPOE system versus those made via handwritten orders? Crit. Care 2005, 9, 427.
  2. Riedmann, D.; Jung, M.; Hackl, W.O.; Stühlinger, W.; van der Sijs, H.; Ammenwerth, E. Development of a context model to prioritize drug safety alerts in CPOE systems. BMC Med. Inform. Decis. Mak. 2011, 11, 35.
  3. Galanter, W.; Falck, S.; Burns, M.; Laragh, M.; Lambert, B.L. Indication-based prescribing prevents wrong-patient medication errors in computerized provider order entry (CPOE). J. Am. Med. Inform. Assoc. 2013, 20, 477–481.
  4. Abookire, S.A.; Teich, J.M.; Sandige, H.; Paterno, M.D.; Martin, M.T.; Kuperman, G.J.; Bates, D.W. Improving allergy alerting in a computerized physician order entry system. In Proceedings of the AMIA Symposium, Virtual, 14–18 November 2020; p. 2.
  5. Bates, D.W.; Gawande, A.A. Improving safety with information technology. N. Engl. J. Med. 2003, 348, 2526–2534.
  6. Bright, T.J.; Wong, A.; Dhurjati, R.; Bristow, E.; Bastian, L.; Coeytaux, R.R.; Samsa, G.; Hasselblad, V.; Williams, J.W.; Musty, M.D.; et al. Effect of clinical decision-support systems: A systematic review. Ann. Intern. Med. 2012, 157, 29–43.
  7. Jaspers, M.W.M.; Smeulers, M.; Vermeulen, H.; Peute, L. Effects of clinical decision-support systems on practitioner performance and patient outcomes: A synthesis of high-quality systematic review findings. J. Am. Med. Inform. Assoc. 2011, 18, 327–334.
  8. Phansalkar, S.; van der Sijs, H.; Tucker, A.D.; Desai, A.A.; Bell, D.S.; Teich, J.M.; Middleton, B.; Bates, D.W. Drug-Drug interactions that should be non-interruptive in order to reduce alert fatigue in electronic health records. J. Am. Med. Inform. Assoc. 2012, 20, 489–493.
  9. Horsky, J.; Phansalkar, S.; Desai, A.; Bell, D.; Middleton, B. Design of decision support interventions for medication prescribing. Int. J. Med. Inform. 2013, 82, 492–503.
  10. Blecker, S.; Austrian, J.S.; Horwitz, L.I.; Kuperman, G.; Shelley, D.; Ferrauiola, M.; Katz, S.D. Interrupting providers with clinical decision support to improve care for heart failure. Int. J. Med. Inform. 2019, 131, 103956.
  11. Carspecken, C.W.; Sharek, P.J.; Longhurst, C.; Pageler, N.M. A clinical case of electronic health record drug alert fatigue: Consequences for patient outcome. Pediatrics 2013, 131, e1970–e1973.
  12. Van Der Sijs, H.; Aarts, J.; Vulto, A.; Berg, M. Overriding of drug safety alerts in computerized physician order entry. J. Am. Med. Inform. Assoc. 2006, 13, 138–147.
  13. Ratwani, R.M.; Gregory Trafton, J. A generalized model for predicting postcompletion errors. Top. Cogn. Sci. 2010, 2, 154–167.
  14. Topaz, M.; Seger, D.L.; Lai, K.; Wickner, P.G.; Goss, F.; Dhopeshwarkar, N.; Chang, F.; Bates, D.W.; Zhou, L. High Override Rate for Opioid Drug-allergy Interaction Alerts: Current Trends and Recommendations for Future. Stud. Health Technol. Inform. 2015, 216, 242–246.
  15. Miller, A.M.; Boro, M.S.; Korman, N.E.; Davoren, J.B. Provider and pharmacist responses to warfarin drug-drug interaction alerts: A study of healthcare downstream of CPOE alerts. J. Am. Med. Inform. Assoc. 2011, 18, i45–i50.
  16. Chien, S.-C.; Chin, Y.-P.; Yoon, C.H.; Islam, M.M.; Jian, W.-S.; Hsu, C.-K.; Chen, C.-Y.; Chien, P.-H.; Li, Y.-C. A novel method to retrieve alerts from a homegrown Computerized Physician Order Entry (CPOE) system of an academic medical center: Comprehensive alert characteristic analysis. PLoS ONE 2021, 16, e0246597.
  17. Chused, A.E.; Kuperman, G.J.; Stetson, P.D. Alert Override Reasons: A Failure to Communicate. In Proceedings of the AMIA Annual Symposium, Washington, DC, USA, 8–12 November 2008; p. 111.
  18. Van Der Sijs, H.; Aarts, J.; Van Gelder, T.; Berg, M.; Vulto, A. Turning off frequently overridden drug alerts: Limited opportunities for doing it safely. J. Am. Med. Inform. Assoc. 2008, 15, 439–448.
  19. Isaac, T.; Weissman, J.S.; Davis, R.B.; Massagli, M.; Cyrulik, A.; Sands, D.Z.; Weingart, S.N. Overrides of medication alerts in ambulatory care. Arch. Intern. Med. 2009, 169, 305–311.
  20. Shah, N.R.; Seger, A.C.; Seger, D.L.; Fiskio, J.M.; Kuperman, G.J.; Blumenfeld, B.; Recklet, E.G.; Bates, D.W.; Gandhi, T.K. Improving Acceptance of Computerized Prescribing Alerts in Ambulatory Care. J. Am. Med. Inform. Assoc. 2006, 13, 5–11.
  21. Baysari, M.T.; Zheng, W.Y.; Tariq, A.; Heywood, M.; Scott, G.; Li, L.; Van Dort, B.A.; Rathnayake, K.; Day, R.; Westbrook, J.I. An experimental investigation of the impact of alert frequency and relevance on alert dwell time. Int. J. Med. Inform. 2019, 133, 104027.
  22. McDaniel, R.B.; Burlison, J.D.; Baker, D.K.; Hasan, M.R.M.; Robertson, J.; Hartford, C.; Howard, S.C.; Sablauer, A.; Hoffman, J.M. Alert dwell time: Introduction of a measure to evaluate interruptive clinical decision support alerts. J. Am. Med. Inform. Assoc. 2015, 23, e138–e141.
  23. Schreiber, R.; Gregoire, J.A.; Shaha, J.E.; Shaha, S.H. Think time: A novel approach to analysis of clinicians’ behavior after reduction of drug-drug interaction alerts. Int. J. Med. Inform. 2017, 97, 59–67.
  24. Balasuriya, L.; Vyles, D.; Bakerman, P.; Holton, V.; Vaidya, V.; Garcia-Filion, P.; Westdorp, J.; Sanchez, C.; Kurz, R. Computerized Dose Range Checking Using Hard and Soft Stop Alerts Reduces Prescribing Errors in a Pediatric Intensive Care Unit. J. Patient Saf. 2017, 13, 144–148.
  25. Simpao, A.F.; Ahumada, L.M.; Desai, B.R.; Bonafide, C.P.; Gálvez, J.A.; Rehman, M.A.; Jawad, A.F.; Palma, K.L.; Shelov, E.D. Optimization of drug-drug interaction alert rules in a pediatric hospital’s electronic health record system using a visual analytics dashboard. J. Am. Med. Inform. Assoc. 2015, 22, 361–369.
  26. Pirnejad, H.; Amiri, P.; Niazkhani, Z.; Shiva, A.; Makhdoomi, K.; Abkhiz, S.; van der Sijs, H.; Bal, R. Preventing potential drug-drug interactions through alerting decision support systems: A clinical context based methodology. Int. J. Med. Inform. 2019, 127, 18–26.
  27. Pfistermeister, B.; Sedlmayr, B.; Patapovas, A.; Suttner, G.; Tektas, O.; Tarkhov, A.; Kornhuber, J.; Fromm, M.F.; Bürkle, T.; Prokosch, H.-U.; et al. Development of a Standardized Rating Tool for Drug Alerts to Reduce Information Overload. Methods Inf. Med. 2016, 55, 507–515.
  28. Akoglu, H. User’s guide to correlation coefficients. Turk. J. Emerg. Med. 2018, 18, 91–93.
  29. Berner, E.S.; La Lande, T.J. Overview of Clinical Decision Support Systems. In Clinical Decision Support Systems; Springer: New York, NY, USA, 2007; pp. 3–22.
  30. Kuperman, G.J.; Gibson, R.F. Computer physician order entry: Benefits, costs, and issues. Ann. Intern. Med. 2003, 139, 31–39.
  31. Horng, S.; Joseph, J.W.; Calder, S.; Stevens, J.P.; O’Donoghue, A.L.; Safran, C.; Nathanson, L.A.; Leventhal, E.L. Assessment of Unintentional Duplicate Orders by Emergency Department Clinicians Before and After Implementation of a Visual Aid in the Electronic Health Record Ordering System. JAMA Netw. Open 2019, 2, e1916499.
  32. Khajouei, R.; Jaspers, M.J.M. The impact of CPOE medication systems’ design aspects on usability, workflow and medication orders. Methods Inf. Med. 2010, 49, 3–19.
  33. Ancker, J.S.; Edwards, A.; Nosal, S.; Hauser, D.; Mauer, E.; Kaushal, R. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med. Inform. Decis. Mak. 2017, 17, 36.
  34. Zimmerman, J.; deMayo, R.; Perry, S. Novel Cross-Care Continuum Medication Alerts Decrease Duplicate Medication Errors For Pediatric Care Transitions. Am. Acad. Pediatr. 2019, 144, 35.
  35. Ai, A.; Wong, A.; Amato, M.G.; Wright, A. Communication failure: Analysis of prescribers’ use of an internal free-text field on electronic prescriptions. J. Am. Med. Inform. Assoc. 2018, 25, 709–714.
  36. Smith, W.R. Evidence for the Effectiveness of Techniques To Change Physician Behavior. Chest 2000, 118, 8S–17S.
  37. Beckman, H.B.; Frankel, R.M. The effect of physician behavior on the collection of data. Ann. Intern. Med. 1984, 101, 692–696.
  38. McDonald, C.J. Use of a computer to detect and respond to clinical events: Its effect on clinician behavior. Ann. Intern. Med. 1976, 84, 162–167.
  39. Burgener, A.M. Enhancing communication to improve patient safety and to increase patient satisfaction. Health Care Manag. 2020, 39, 128–132.
  40. Lee, E.K.; Mejia, A.F.; Senior, T.; Jose, J. Improving Patient Safety through Medical Alert Management: An Automated Decision Tool to Reduce Alert Fatigue. In Proceedings of the AMIA Annual Symposium, Washington, DC, USA, 13–17 November 2010; pp. 417–421.
  41. Rush, J.L.; Ibrahim, J.; Saul, K.; Brodell, R.T. Improving Patient Safety by Combating Alert Fatigue. J. Grad. Med. Educ. 2016, 8, 620–621.
Figure 1. The process of DWD and ALC running on the computer.
Figure 2. The flowchart of the data merging process.
Figure 3. A schematic diagram of the time gap calculation. Rectangles with different colors, red circles, and blue dashed arrows represent dominant window records, alert log records, and time gaps, respectively. In this example, the time gaps were −118 s, −62 s, −27 s, +0.2 s, etc. In this case, the minimal time gap between the ALC and DWD records of the relevant alert is 0.2 s.
Figure 4. Schematic outline of inclusion and exclusion criteria in this study.
Table 1. The descriptive statistics for each alert characteristic (N = 100).

| Characteristic | Mean | SD (σ) | MIN | Q1 | MED | Q3 | MAX |
|---|---|---|---|---|---|---|---|
| Number of alerts | 32,465 | 111,836 | 1,201 | 2,420 | 5,380 | 11,927 | 760,690 |
| Alert message length | 23 | 20 | 4 | 10 | 15 | 28 | 278 |
| Alert dwell time (s), overall | 1.3 | 0.4 | 0.03 | 1.2 | 1.3 | 1.4 | 3.0 |
| Alert dwell time (s), physician | 1.3 | 0.3 | 0.03 | 1.2 | 1.3 | 1.4 | 2.3 |
| Alert dwell time (s), nurse | 1.5 | 0.5 | 0.03 | 1.2 | 1.4 | 1.7 | 3.1 |
| Alert dwell time (s), other | 1.3 | 0.4 | 0.00 | 1.2 | 1.3 | 1.4 | 3.1 |

Note: SD = Standard Deviation; MIN = Minimum; Q1 = First quartile; MED = Median; Q3 = Third quartile; MAX = Maximum; N = Categories.
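The per-category quartiles reported above (Q1, MED, Q3) can be computed with a small helper. This is a minimal sketch that assumes a linear-interpolation quantile convention; the paper does not state which convention was used, and the sample values in `main` are made up for illustration.

```go
package main

import (
	"fmt"
	"sort"
)

// quartiles returns Q1, the median, and Q3 of xs using linear
// interpolation between order statistics (one common convention
// among several; results can differ slightly between conventions).
func quartiles(xs []float64) (q1, med, q3 float64) {
	s := append([]float64(nil), xs...) // copy so the caller's slice is untouched
	sort.Float64s(s)
	q := func(p float64) float64 {
		pos := p * float64(len(s)-1)
		lo := int(pos)
		if lo == len(s)-1 {
			return s[lo]
		}
		frac := pos - float64(lo)
		return s[lo]*(1-frac) + s[lo+1]*frac
	}
	return q(0.25), q(0.5), q(0.75)
}

func main() {
	// Hypothetical per-category dwell times in seconds.
	dwell := []float64{0.8, 1.2, 1.3, 1.4, 3.0}
	q1, med, q3 := quartiles(dwell)
	fmt.Println(q1, med, q3) // prints 1.2 1.3 1.4
}
```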
Table 2. Different alert characteristics for the top 10 categories of alert message content.

| # | Alert title and message content | Total | PHY | NU | OTH | <1 s | 1~10 s | 10~100 s | >100 s | MED (s) | Mean (s) | MIN (s) | MAX (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Notification of online health insurance open function! "Failed to open the online medication record, please check the card reader!" | 760,690 | 516,099 (67.8%) | 123,857 (16.3%) | 120,734 (15.9%) | 231,505 (30.4%) | 516,160 (67.9%) | 11,122 (1.5%) | 1,903 (0.3%) | 1.169 | 2.40 | 0.01 | 76,283.24 |
| 2 | TOCC fill "Please fill in the patient's TOCC record indeed!" | 733,417 | 565,791 (77.1%) | 65,850 (9.0%) | 101,776 (13.9%) | 151,243 (20.6%) | 564,200 (76.9%) | 15,935 (2.2%) | 2,039 (0.3%) | 1.297 | 2.80 | 0.01 | 64,761.46 |
| 3 | Remind "Are you sure you want to cancel? It will not save while the health insurance card is removed." | 348,655 | 151,904 (43.6%) | 156,218 (44.8%) | 40,533 (11.6%) | 174,764 (50.1%) | 169,623 (48.7%) | 3,805 (1.1%) | 463 (0.1%) | 1.000 | 1.86 | 0.01 | 54,855.09 |
| 4 | Remind "The data has been saved on the health insurance card, now you may take it out." | 220,916 | 1,914 (0.9%) | 210,237 (95.2%) | 8,765 (4.0%) | 54,444 (24.6%) | 96,756 (43.8%) | 60,743 (27.5%) | 8,973 (4.1%) | 2.635 | 19.54 | 0.01 | 89,186.90 |
| 5 | Remind "This is a primary healthcare diagnosis!" | 127,627 | 98,305 (77.0%) | 5,561 (4.4%) | 23,761 (18.6%) | 41,501 (32.5%) | 83,814 (65.7%) | 2,049 (1.6%) | 263 (0.2%) | 1.160 | 2.29 | 0.01 | 61,038.24 |
| 6 | Alternative medication remind "[Drug A] has been suspended. Do you want to use [Drug B] as an alternative option?" | 96,084 | 82,349 (85.7%) | 3,539 (3.7%) | 10,196 (10.6%) | 15,466 (16.1%) | 79,086 (82.3%) | 1,467 (1.5%) | 65 (0.1%) | 1.391 | 2.19 | 0.02 | 726.14 |
| 7 | Remind! "Do you want to prescribe the drug at the patient's own expense?" | 87,561 | 72,612 (82.9%) | 3,303 (3.8%) | 11,646 (13.3%) | 7,744 (8.8%) | 77,912 (89.0%) | 1,867 (2.1%) | 38 (0.0%) | 1.476 | 2.42 | 0.02 | 1,284.53 |
| 8 | Remind! "Do you want to reprint the invoice?" | 78,554 | 51,956 (66.1%) | 15,264 (19.4%) | 11,334 (14.4%) | 76,384 (97.2%) | 2,159 (2.7%) | 11 (0.0%) | 0 (0.0%) | 0.027 | 0.12 | 0.01 | 28.49 |
| 9 | Notification of online health insurance open function! "The information of the patient and IC card did not match. Please confirm whether the IC card is his/her card?" | 59,919 | 46,078 (76.9%) | 3,910 (6.5%) | 9,931 (16.6%) | 7,888 (13.2%) | 49,321 (82.3%) | 2,534 (4.2%) | 176 (0.3%) | 1.445 | 3.58 | 0.01 | 83,284.53 |
| 10 | Remind! "[Drug A] was prescribed by another physician to [YYYYMMDD] and still has [N] days of medication remaining. Do you want to continue prescribing?" | 45,780 | 39,953 (87.3%) | 515 (1.1%) | 5,312 (11.6%) | 13,594 (29.7%) | 30,892 (67.5%) | 1,222 (2.7%) | 72 (0.2%) | 1.338 | 2.70 | 0.01 | 8,600.54 |

Note: PHY = Physician; NU = Nurse; OTH = Other; MIN = Minimum; MAX = Maximum; MED = Median. Counts are shown with the share of the category total in parentheses; the four dwell-time columns (<1 s to >100 s) partition each category's alerts by dwell time, and MED, Mean, MIN, and MAX are dwell-time statistics in seconds.