William Hyman: Learning from Nonactionable Clinical Alarms

One of the accepted challenges in better utilization of clinical alarm systems is the large number of nonactionable alarms and their adverse effect on overall patient care. Nonactionable alarms are those that summon a caregiver to the bedside, but do not result in clinical intervention once the caregiver is at the bedside and can assess the situation. Such alarms contribute to the excessive number of alarms and in turn alarm fatigue. They can also lead to the sometimes inappropriate manipulation of the system to reduce or eliminate alarm generation, or to just make the alarms inaudible. Additionally, excessive alarms can result in alarm sounds becoming background noise in which caregivers don’t register the alarm at all or do not register it as something that needs their attention.

While often called “false alarms,” nonactionable alarms are better understood if “false” is distinguished from “true but nonactionable.” A false alarm is one in which the specific data triggering the alarm is not correct, or the alarm processing generates spurious results. An example of bad data is when interference generates artifacts that are processed as if they were actual physiologic output, i.e., the system cannot distinguish between real data and non-data. Alarm triggering also can occur from internal processing glitches that create an alarm in the absence of data that meets the alarm criteria.

True but nonactionable alarms are those in which real data is captured and correctly processed to generate an alarm according to the predefined criteria, but for which it turns out no clinical or technical action is necessary. For example, this could be the result of a brief and inconsequential event (e.g., a cough or sudden movement) that affects the measured clinical parameter only momentarily, after which the physiology and the data return to normal.

One popular idea to address this type of alarm is to build in a delay between the onset of an alarm condition and the creation of the alarm. If the alarm condition self-corrects during the delay period, then no alarm sounds. Some care must be taken here with rapidly repeated events that self-correct each time yet still signal a genuine problem. Another approach is to adjust the alarm limits so that greater variation from a nominal set-point is allowed before an alarm is initiated. This type of adjustment can be patient specific, such that different patients have different limits and allowable ranges. Patient-specific limits require much more effort on the front end to assess each patient and determine appropriate alarm limits. Such adjustments are a core question in The Joint Commission’s Alarms National Patient Safety Goal, which requires that policies be created addressing who can adjust which alarms.
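The delay idea above can be sketched in a few lines. This is a minimal illustration, not any monitor's actual algorithm; the sample format, limits, and delay value are all hypothetical:

```python
def delayed_alarms(samples, low, high, delay):
    """Yield the onset time of each out-of-range run that persists
    longer than `delay` time units.

    samples: iterable of (time, value) pairs in time order.
    Runs that self-correct before the delay expires never alarm.
    """
    onset = None      # time the current out-of-range run began
    alarmed = False   # whether this run has already raised an alarm
    for t, v in samples:
        if low <= v <= high:
            onset, alarmed = None, False  # condition self-corrected
        else:
            if onset is None:
                onset = t
            if not alarmed and t - onset >= delay:
                alarmed = True
                yield onset

# A brief excursion (e.g., a cough at t=2) is suppressed; the
# sustained excursion starting at t=4 still alarms.
samples = [(0, 60), (1, 60), (2, 45), (3, 60),
           (4, 45), (5, 45), (6, 45), (7, 45)]
print(list(delayed_alarms(samples, 50, 70, 2)))  # [4]
```

Note the trade-off the text warns about: a longer delay suppresses more transient events but also postpones notification of every real event by that same amount.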

A more manual approach to nonactionable alarms that I once observed was a central station electrocardiogram (ECG) monitoring operation in which paper ECG strips were automatically printed based on predetermined criteria. In my naiveté at the time, I was surprised to see the monitoring tech tear off some strips and throw them away with no further action. The explanation was that some patients generated “abnormalities” on a regular basis and that there was no need to respond to, or even save, each one. In this case, there was a presumably appropriately trained human in the loop who intercepted the alarm and did not allow it to lead to an unnecessary call for bedside attention.

A more sophisticated approach to the challenge is to use data from multiple sensors in an integrated manner to obtain a better assessment of patient condition than is provided by any single parameter. Building and validating algorithms for this type of integrated approach can be a complex and challenging task, and it might shift responsibility for a missed important event to the manufacturer rather than the caregiver. Risk avoidance by manufacturers in general tends toward creating a greater, rather than lesser, number of alarms and letting the clinical staff deal—or not deal—with them. In this approach, any inaction on an alarm that should have led to intervention can be blamed on the caregiver rather than the technology. This manufacturer risk-avoidance approach is also why so many alarms make the same sound at the same volume.
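To make the integrated idea concrete, here is a deliberately toy corroboration rule: alarm only when two independent parameters are abnormal together. The thresholds and the two-parameter logic are entirely hypothetical illustrations, not any manufacturer's validated algorithm:

```python
SPO2_LOW = 90    # hypothetical oxygen-saturation limit (%)
HR_HIGH = 120    # hypothetical heart-rate limit (beats/min)

def single_param_alarm(spo2, hr):
    """Alarm if either parameter alone is out of range."""
    return spo2 < SPO2_LOW or hr > HR_HIGH

def integrated_alarm(spo2, hr):
    """Toy corroboration rule: alarm only when both parameters
    are abnormal, so a single-sensor artifact does not alarm."""
    return spo2 < SPO2_LOW and hr > HR_HIGH

# A probe artifact that drops SpO2 alone alarms under the
# single-parameter rule but not under the corroboration rule.
print(single_param_alarm(88, 80))   # True
print(integrated_alarm(88, 80))     # False
```

The risk-allocation point in the text is visible even in this toy: the corroboration rule suppresses artifacts but also suppresses a real desaturation that has not yet affected heart rate, and the choice of which rule to ship is exactly the liability decision the paragraph describes.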

One approach to a greater understanding of true but nonactionable alarms is to actively identify, collect, and study why each such alarm was triggered in order to determine what could be done to eliminate that type of alarm without incurring unacceptable risk. This is quite different than simply noting that there was a nonactionable alarm—or not even noting it but just accepting it as the normal course of events.

It first must be determined whether the alarm was true or false. If true, then why did the clinical facts create an alarm even though no intervention was required? Based on the answer to this question, what adjustment to the system could be made such that this alarm would not have occurred? If that adjustment were made, would an actionable event, if any, have been missed? The latter question could be facilitated by having continuous past data such that a “simulation” could be run on the patient’s actual monitored history. For example, if the limits for an arbitrary variable had been 40-70 instead of 50-60, what other alarms would not have occurred? If some would not have occurred, of what consequence would that have been?
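The retrospective “simulation” described above amounts to replaying the recorded history under candidate limits and counting the alarm episodes each setting would have produced. A minimal sketch, assuming the history is available as a simple sequence of samples (the values and limits below are hypothetical):

```python
def count_limit_alarms(history, low, high):
    """Count alarm episodes: runs of consecutive out-of-range samples
    in the recorded history each count as one alarm."""
    episodes, in_alarm = 0, False
    for v in history:
        out = not (low <= v <= high)
        if out and not in_alarm:
            episodes += 1   # a new out-of-range run begins
        in_alarm = out
    return episodes

def compare_limits(history, old_limits, new_limits):
    """Replay the same history under old and candidate limits."""
    return (count_limit_alarms(history, *old_limits),
            count_limit_alarms(history, *new_limits))

# With limits 50-60, three episodes would have alarmed; widening
# to 40-70 retrospectively eliminates two of them.
history = [55, 48, 55, 62, 55, 35, 55]
print(compare_limits(history, (50, 60), (40, 70)))  # (3, 1)
```

The replay identifies which alarms the wider limits would have suppressed; judging whether any of those suppressed alarms actually required intervention remains a clinical review of each episode, not a computation.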

From outside the clinical environment, it can be stated too easily that more alarms are better because you certainly don’t want to miss an opportunity for a positive intervention. This perspective ignores the clinical reality that an excessive number of alarms disrupts otherwise useful clinical care and leads to alarm fatigue and perhaps inappropriate alarm adjustments. The related reality is that the clinical area may be understaffed relative to the number and types of tasks the caregivers must perform. In this regard, it must be remembered that when assessing alarms per nurse per shift, the ratio can be lowered by fewer alarms (the numerator) or by more nurses (the denominator). It might also be remembered that caregivers do not have the luxury of being presented with alarms only at the average rate, i.e., the peak rate might be of greater concern. Similarly, finding new ways to transmit alarms to the nurses does not reduce the number of alarms that occur.

William Hyman, ScD, is professor emeritus of biomedical engineering at Texas A&M University. He now lives in New York where he is adjunct professor of biomedical engineering at The Cooper Union.

One thought on “William Hyman: Learning from Nonactionable Clinical Alarms”

  1. @WilliamHyman, Thanks for your excellent comments on the issue of alarm fatigue and your analysis. Your “simulation” suggestion is exactly the approach we have taken with our alarm consulting team at Philips. Having studied this for a number of years, it has become clear that there are some significant benefits to this approach as well as pitfalls that could potentially lead to false conclusions. The analysis of this “historical” data requires a deep knowledge of the alarm states and the underlying algorithms that trigger them and this will be significantly different between monitors and manufacturers. Overall, I completely agree with the use of these data to help reach conclusions but it must be done expertly and with significant knowledge of the devices under study.
