Larry Fennigkoh: It’s Past Time for HTM to Turn Data into Knowledge

How do you scientifically determine the effect of something on something else?

The question (in whatever form it may take and regardless of the branch of science from which it originates) can often be answered through use of a well-established statistical technique referred to as regression analysis. It is through the application and appropriate use of such a method, along with many others, that we can derive value and make sense of mere data, answer questions, and—most importantly—gain new knowledge. Virtually all of the other learned disciplines make use of these tools. Why, then, have clinical engineering and the healthcare technology management (HTM) community yet to embrace and apply such powerful statistical tools to the masses of available data within our computerized maintenance management systems (CMMSs)? As a discipline, we have decades of accumulated data but, in essence, still very little knowledge of the effects of one thing on another.

For example, we really do not know—beyond conjecture and the anecdotal—the effects of ventilator preventive maintenance labor hours (or inspection volume) on unscheduled ventilator corrective maintenance costs; the effects of device manufacturer on device cost of ownership; whether there is any meaningful relationship between hospital bed size and needed HTM full-time equivalents; or whether the apparent upward trend in total work order volume is statistically significant. Such questions and more cannot be answered through the use of simple descriptive statistics (i.e., by just looking at sample averages and their associated standard deviations). Here, we need to apply inferential statistics, such as t-tests, ANOVA (analysis of variance), regression, and/or chi-square analysis. Contrary to what we may initially think, these techniques are not at the level of “rocket science” in complexity. We already have the data. It’s now just a matter of properly formatting and importing them into (preferably) one of many available statistical software packages (even Excel would do). As such, what are the community and, especially, many of our great CMMS vendors waiting for?

Again, we have the data. So, let’s start to seriously and appropriately mine and interrogate this stuff and see what—if anything—emerges; even findings of no difference, no effect, and no significance are meaningful. Such findings may not feel good, but they still represent and may provide new knowledge, which then often begets and prompts new and even more potentially revealing questions. This is the essence and goal of doing good science.

Even with the inherent “messiness” or variability associated with any given CMMS database, the beauty and elegance of these statistical techniques is that they will not only work in spite of it but also tell us the proportion of such variability that remains unexplained. That itself is a measure of just how messy the data are—again, new knowledge.

So, let’s start to move beyond merely reporting and describing the things we do and report the effects of what we did. Then and only then can we continue to evolve as a profession.

Larry Fennigkoh, PhD, is professor of biomedical engineering at the Milwaukee School of Engineering and a member of the BI&T Editorial Board for AAMI.


5 Comments on “Larry Fennigkoh: It’s Past Time for HTM to Turn Data into Knowledge”

  1. Larry Fennigkoh Says:

    As Dr. Ridgway correctly notes, any data mining expedition first starts with a well-defined question. It is then that we can select an available study period (e.g., 3–5 years), sort the data, and run the appropriate statistical tests. The power and flexibility associated with one such test – multiple regression – is that our independent or predictor variables can be on any measurement scale. So, in a study of infusion pumps, for example, such predictor variables could include PMs on a pass/fail scale, along with age of the device, manufacturer, location, etc. The response or dependent variable could be total corrective maintenance costs, labor hours, or service calls. The output from such an analysis would then tell us not only how much of the variability in the response variable is being explained by each of the predictor variables but also whether each effect is statistically significant. If there is nothing there, the results will tell us that as well.
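The multiple-regression approach described above can be sketched in a few lines of Python. Every cost, age, and manufacturer below is invented, and the ordinary-least-squares fit is solved directly via the normal equations, so no statistics package is needed:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Each row is [intercept, device age in years, manufacturer dummy (0 = A, 1 = B)];
# y is annual corrective-maintenance cost. All numbers are hypothetical.
X = [[1, 2, 0], [1, 4, 0], [1, 6, 0], [1, 3, 1], [1, 5, 1], [1, 7, 1]]
y = [300, 420, 560, 500, 640, 780]

# Normal equations: (X'X) beta = X'y.
p = len(X[0])
XtX = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(p)]
beta = solve(XtX, Xty)

# R-squared: proportion of cost variability explained by age + manufacturer.
fitted = [sum(b * x for b, x in zip(beta, row)) for row in X]
ybar = sum(y) / len(y)
ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
ss_tot = sum((yi - ybar) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot
print(f"age effect = {beta[1]:.1f} $/yr, manufacturer B effect = {beta[2]:.1f} $, R^2 = {r2:.3f}")
```

Here R² is exactly the “how much of the variability is explained” figure mentioned above; a full analysis would add a t-test on each coefficient to judge significance.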

    Regarding Dr. Hyman’s concerns: even within a single hospital system, such a study could still reveal any relationships between response and predictor variables. If needed, a meta-analysis across multiple hospitals using the same regression techniques could be used to answer the same questions.

    Dr. Makar’s interest in including other variables in such a study could also be done.

    Continued thanks for your thoughtful comments on this issue. The primary point I was hoping to share was this: let’s give some of these powerful inferential statistical tools a try and see what emerges. If there’s nothing there, we’ll at least be able to quantify the amount of the nothing.

    Reply

  2. Matt Baretich Says:

    The HTM community has tons of data but, as far as I know, it has never been analyzed using advanced statistical techniques or “big data” methodologies. We should take this opportunity to support Dr. Fennigkoh’s work!

    Reply

  3. Malcolm Ridgway Says:

    I would like to go on the record as supporting the very important points being made by both Dr. Fennigkoh and Dr. Hyman – and add my own observations about the richness of the untapped treasure that we should be able to mine from our maintenance data. My point is that you have to ask the right questions. If there is no “meat” in the data to start with, then no end of processing will produce any kind of treasure.

    Case in point: We are still struggling to answer questions about the value of PM, and it seems that we are no closer to an answer today than we were 20 or 30 years ago, or even further back, when we first debated this issue. Some members of AAMI’s Maintenance Practices Task Force are again trying to make the case for standardizing our maintenance reports. If we coded every repair call with a simple answer to the question – In your judgment, was this particular failure PM-preventable? – and we aggregated all of that data by manufacturer-model and PM procedure from a large number of organizations, then a relatively simple analysis could give us a fairly solid indication of the effectiveness of that particular PM procedure for that particular manufacturer-model version of that type of device.
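A minimal sketch of that aggregation, assuming each repair call has already been coded with the PM-preventable judgment (the model names and records below are invented for illustration):

```python
from collections import defaultdict

# Hypothetical coded repair calls: (manufacturer-model, PM-preventable flag).
repairs = [
    ("Acme Vent-100", True), ("Acme Vent-100", False), ("Acme Vent-100", True),
    ("Beta Pump-2", False), ("Beta Pump-2", False), ("Beta Pump-2", True),
]

# model -> [PM-preventable count, total repair count]
counts = defaultdict(lambda: [0, 0])
for model, preventable in repairs:
    counts[model][1] += 1
    if preventable:
        counts[model][0] += 1

for model, (prev, total) in sorted(counts.items()):
    print(f"{model}: {prev}/{total} failures judged PM-preventable ({prev / total:.0%})")
```

Pooled across many organizations, per-model proportions like these would give the “fairly solid indication” of PM effectiveness described above.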

    Similarly, we need to upgrade our reporting of what we find when we do PMs. At the moment, the near universal entry is simply Pass or Fail. Here is the “meat” that we need: If we restored any of the device’s non-durable parts, did we find that one or more of those parts was past or well past its optimum restoration point? If so, add details in a short narrative field. And if we did any performance or safety testing, did the device fail to meet any of its specifications? If so, add details in a short narrative field. A positive answer to either of these two questions indicates a hidden (latent) failure which should be tallied along with the number of overt failures that were diagnosed in the repair work orders.

    For more on this, visit the “Welcome Package” for data aggregators on the home page of the Task Force’s website at http://www.HTMCommunitydB.org.

    Reply

  4. Ellen Makar Says:

    I so agree! I think it would be interesting to then take this a step further and see if we can relate maintenance schedule and staffing to clinical staff satisfaction, patient satisfaction, and even clinical outcomes.

    We know that a hospital is a complex organization; the butterfly effect of correct staffing for BME and maintenance is not always apparent to those in decision-making roles.

    Having reliable equipment that works well when you need it is an important factor in keeping nurses and physicians. They might not even recognize this “fact,” but I know this from my years of hospital nursing leadership experience. Having the data to prove it would be awesome.

    Feel free to reach out to me to discuss further. This is one of the main reasons I am involved with AAMI: my belief that equipment, biomedical engineering, medical device choice, and EHR usability have an as-yet unquantified impact on the ability of the nurse to nurse effectively. To me, it’s a no-brainer.

    Dr Ellen Makar
    Makarelv@gmail.com

    Reply

  5. William Hyman Says:

    An inherent problem here is that any one institution has only its own data, which reflect only what it does. You can’t determine the effect of maintenance time on downtime if you only have one maintenance schedule. You can do a future experiment of less or more maintenance, but this is different from scrutinizing the data that you already have. You might then propose comparing between institutions. This would be possible if they had different maintenance plans and if you could account for the many other variables.

    What seems more feasible to me is to examine each field failure to determine if that failure had anything to do with the maintenance plan. This may be required anyway under an AEM plan, and sounds to me like good practice in any case.

    Reply
