David Dickey: Proposing New MEMP Metrics

Is your medical equipment management program (MEMP) of high quality? How should we define this?

Perhaps we can start by establishing quality as “the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.” From there, we could determine the degree of excellence using a grade, which I propose should be based upon a mathematical scoring of various components of the MEMP.

These methods for defining and quantifying both the quality and the effectiveness of a MEMP should be tied to metrics and goals that strive to maximize the safe use, availability, and operation of patient care equipment, without causing or contributing to negative patient care outcomes or an increased length of stay.

In 2016, my organization developed a new corporate continuing excellence policy that outlines how we would define, measure, and report quantitative indicators of both the quality and the effectiveness of our MEMP. These new metrics allow us to quantify how our delivery and management of the MEMP affect patient care delivery, safety, and outcomes.

For each major defined element of the MEMP, our new assessments are based on asking how effective that component is (or was) at minimizing or preventing:

  • Patient injury or death due to malfunction of medical equipment managed by our MEMP?
  • Patient injury or death due to improper use of medical equipment managed by our MEMP?
  • An unforeseen extension of a patient’s length of stay (LOS) due to medical equipment use or operation?

Stated differently: How effective is your device repair program, scheduled inspection program, clinical staff equipment operation education program, and incoming inspection program for new equipment at minimizing patient injury, patient death, or any increase in patient LOS?

The data required to make these new program metric assessments comes from multiple sources, one being our Safety First reporting system, which alerts us to any patient incident (or near miss) that involved, or had a suspected contribution from, a medical device. We then have to evaluate each event and determine whether the device truly failed and, if so, whether the failure was the result of a maintenance event, a lack of proper maintenance, our alternative equipment maintenance (AEM) program, user error, a product defect, or a random failure.

This is a manual review process that takes a lot of time to perform, but short of reading all of the free-form text notes contained in each reportable event, I see no easier way to make the judgment call as to the equipment’s contribution to the cause of the event.

A few of our new metrics are based on goal targets that were arbitrarily chosen. For example, we set a goal that the number of devices (managed by our MEMP) that failed and were found to be the cause of a patient’s increased length of stay be one percent or less of the total device failures reported for the year. Why one percent? There’s no real reason other than that we had to start with something. While an ideal goal may be zero percent, this may not be realistic, and time will tell as we eventually tighten this goal in future years.
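To make the arithmetic concrete, here is a minimal sketch of the threshold check in Python; the counts and names below are hypothetical placeholders, not figures from our CMMS:

    # Minimal sketch of the LOS-impact goal check.
    # Counts are hypothetical placeholders, not actual CMMS data.
    total_device_failures = 412  # device failures reported for the year (hypothetical)
    los_impact_failures = 3      # failures found to have increased a patient's LOS (hypothetical)

    GOAL = 0.01  # the one percent starting target

    rate = los_impact_failures / total_device_failures
    print(f"LOS-impact failure rate: {rate:.2%} (goal: {GOAL:.0%} or less)")
    print("Goal met" if rate <= GOAL else "Goal missed")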

Obtaining the information for this metric (impact on LOS) has proven a bit challenging, as we had to train our call center staff to always ask this question. In addition, we asked our computerized maintenance management system (CMMS) vendor to modify their online work request software to make this a mandatory field for submitting a work request. We can extract these fields via a download in spreadsheet format for analysis. In addition, our clinical engineering (CE) and healthcare technology management (HTM) staff have access to our Safety First reporting system, where we can read the text submitted on each equipment-related reported event to determine whether an increase in patient LOS was reported.
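Once the work request data is downloaded to spreadsheet format, screening it can be scripted. The sketch below assumes a CSV export with a column named los_impact; the column and file names are my own illustrative inventions, not our vendor’s actual schema:

    # Hypothetical sketch: scan a CSV export of work requests for the
    # mandatory LOS-impact field. Column and file names are assumptions.
    import csv

    def los_impact_requests(path):
        """Return the work request rows whose LOS-impact field was answered 'Yes'."""
        with open(path, newline="") as f:
            return [row for row in csv.DictReader(f)
                    if row.get("los_impact", "").strip().lower() == "yes"]

    # Illustrative usage:
    # flagged = los_impact_requests("work_requests_export.csv")
    # print(f"{len(flagged)} work requests reported an LOS impact")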

The same situation holds true for the goal behind our metric requiring that completed device preventive maintenance (PM) procedures not cause a post-PM device failure on more than one percent of all PM procedures performed. We also developed a metric stating that the net number of device types showing an increase in user issues (i.e., issues that could be resolved by more user training) should be no more than five percent above the number of device types that had user issues the previous year. As with a few of the other new metrics, these goals were arbitrarily selected to determine whether they can be measured and met. Our intent is to maintain these targets for another year, then eventually raise the bar.
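Both goal checks reduce to simple ratios. As a hedged sketch (all figures below are hypothetical placeholders, chosen only to show the calculation):

    # Hypothetical figures for both goal checks.
    pm_procedures_completed = 9850  # PM procedures performed this year (hypothetical)
    post_pm_failures = 41           # failures attributed to a completed PM (hypothetical)

    device_types_with_issues_last_year = 60  # hypothetical
    device_types_with_issues_this_year = 62  # hypothetical

    post_pm_rate = post_pm_failures / pm_procedures_completed
    print(f"Post-PM failure rate: {post_pm_rate:.2%} (goal: 1% or less)")

    net_growth = (device_types_with_issues_this_year
                  - device_types_with_issues_last_year) / device_types_with_issues_last_year
    print(f"Growth in device types with user issues: {net_growth:.1%} (goal: 5% or less)")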

Getting at the data has required our CMMS vendor to develop custom structured query language (SQL) statements that generate the raw data, which is then manually reviewed and analyzed. While this does take some effort, I believe that a human review of the information is essential in order to fully determine what is going on in terms of cause and effect (i.e., to determine whether the device failure or patient impact can be traced, at least in part, to the implementation of the MEMP).
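For readers curious what such an extraction might look like, the sketch below runs an illustrative query against an in-memory SQLite database; the table and column names are invented for the example and are not our vendor’s actual schema or statements:

    # Purely illustrative: the schema below is invented to show the kind of
    # raw-data query involved; it is not our CMMS vendor's actual design.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE work_orders (
            id INTEGER PRIMARY KEY,
            device_type TEXT,
            failure_cause TEXT,   -- e.g., 'user error', 'product defect', 'random'
            los_impact INTEGER)   -- 1 if the event extended a patient's LOS
    """)
    conn.executemany(
        "INSERT INTO work_orders (device_type, failure_cause, los_impact) VALUES (?, ?, ?)",
        [("infusion pump", "user error", 0),
         ("ventilator", "product defect", 1),
         ("infusion pump", "random failure", 0)])

    # Pull the raw rows that get flagged for manual, human review.
    for device, cause in conn.execute(
            "SELECT device_type, failure_cause FROM work_orders WHERE los_impact = 1"):
        print(device, "-", cause)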

A few of my professional CE and HTM colleagues have suggested that “challenging the Centers for Medicare & Medicaid Services or the Joint Commission on the merits of PM completion metrics may not be wise.” Well, if that’s the case, then perhaps my retirement could come earlier than planned!

All joking aside, I truly believe that the HTM profession has historically not done a very good job of quantifying our impact on patient care. While counting the number of tasks we perform and their associated costs is important, these are not adequate indicators of our contribution to patient safety and outcomes.

I encourage all members of the HTM community to put some thought into what they are measuring and why. Perhaps CMS and other agencies will eventually catch on and start asking more pertinent questions about our contribution to healthcare. If not, then I assume the “sacred cow” metrics related to PM completions will survive until they are eventually put out to pasture.

David M. Dickey, MS, FACHE, CCE, CHTM, is vice president of McLaren Health Care Clinical Engineering Services.

One thought on “David Dickey: Proposing New MEMP Metrics”

  1. Setting one percent (or any number) as an acceptable rate has inherent traps. One is the question of “acceptable to whom,” which is unlikely to include the person (patient) directly affected. A second is the notion that less than 1% is “OK.” I note in this regard that only LOS and lack of PM were given a number here.
