Russell Furst: ‘Objective’ Technology Assessments Can Miss the Mark

Healthcare technology management (HTM) professionals usually encourage their organizations (or clients) to use objective criteria when prioritizing equipment replacement. This desire for objectivity commonly leads to some type of technology assessment using a scoring system based on criteria such as age, reliability, overall condition, utilization, state of the technology (from cutting edge to obsolete), useful life, service costs, serviceability, and OEM end-of-life. Some also add a score-weighting mechanism to give certain criteria more influence over the outcome than others. The theory behind this scoring exercise is that the resulting numerical value fairly represents replacement need. The reports generated from these efforts are substantive, comprehensive, allow for easy prioritization, and suggest a level of sophistication and objective analysis. They are also not very useful.
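
To make the mechanics concrete, here is a minimal sketch of the kind of weighted scoring matrix described above. The criteria names, weights, and 1–5 scales are hypothetical illustrations, not any published standard, and each input is itself a judgment call:

```python
# A minimal sketch of a weighted equipment-replacement scoring matrix.
# The criteria, weights, and 1-5 scales here are hypothetical, not a
# standard: every one of these inputs is a subjective judgment call.

CRITERIA_WEIGHTS = {
    "age": 1.0,
    "reliability": 1.5,
    "condition": 1.0,
    "utilization": 0.5,
    "obsolescence": 2.0,   # 1 = cutting edge ... 5 = obsolete
    "service_cost": 1.5,
}

def replacement_score(scores):
    """Sum each 1-5 criterion score multiplied by its weight."""
    return sum(CRITERIA_WEIGHTS[name] * value for name, value in scores.items())

infusion_pump = {
    "age": 4, "reliability": 2, "condition": 3,
    "utilization": 5, "obsolescence": 2, "service_cost": 3,
}
print(replacement_score(infusion_pump))  # 21.0
```

The single number at the end is exactly the kind of output in question: arithmetically exact, but only as objective as the scores fed into it.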

There are several weaknesses in this type of technology assessment and replacement planning tool. Although the process of scoring and tabulating has the appearance of objectivity, assigning values to variables such as useful life, condition, and degree of obsolescence is subjective. Weighting the scores adds another layer of subjectivity. Even if useful and objective criteria are included, such as whether a device can be repaired or whether a networked device meets the organization’s network security requirements, the inclusion of all the other specious indicators leaves the outcome suspect. What these tools often do is simply package our subjectivity into something that looks objective.

Assessment criteria are also often irrelevant or ill-defined. For instance, equipment age is almost always used as part of a replacement justification even though age is a poor predictor of replacement necessity. A device’s “useful life” is rarely defined in a meaningful way. The American Hospital Association’s (AHA) useful life guide is sometimes cited without acknowledging that the AHA guidelines are designed for scheduling depreciation, not as an indicator of how long a device might be useful to its owner or how long it may function as designed. The clearest indicator of the useful life of a device is whether it is still in use. Scores indicating equipment is in poor condition are the numerical equivalent of saying, “This is a bunch of junk.” That may be our opinion, but it is neither objective nor useful in determining replacement need.

Another weakness of equipment scoring matrices is that they represent the way we as HTM professionals think about replacement necessity. They do not account for the decision criteria that clinicians and administrators use, which can include such disparate issues as the recruitment or retention of a key physician, marketing opportunities, the competitive external environment, business growth or contraction, strategic initiatives, internal politics, patient experience, changes in reimbursement, and regulatory requirements.

Finally, I would suggest that most technology assessment tools are reductive and devalue our critical thinking skills. Capital funding decisions are largely value judgements made in the context of competing interests. Our contribution to equipment replacement decisions ought to be more than a number at the bottom of a spreadsheet.

Russell Furst is director of clinical technology assessment and planning with ISS Solutions—Geisinger Health System. He is a member of the Editorial Board for AAMI’s journal, BI&T.

5 Comments on “Russell Furst: ‘Objective’ Technology Assessments Can Miss the Mark”

  1. Anonymous Says:

    I have operated under several capital acquisition structures over the years. While scoring might seem reasonable and methodical, it rarely predicted the outcome. My efforts now are focused on creating real comparisons and real dialogue, so that the decision is a conscious choice and the best choice for the users, purchasers, and maintainers. For many items the end user must always be the focus, but they should be asked either to justify a preference or to state clearly that the choices are equivalent and that they can accept a choice based on other criteria.

  2. Ted Cohen Says:

    If numeric “scoring” is used for technology planning, it needs to be multidisciplinary, not just HTM folks doing the scoring. Although this scoring is certainly not absolute, adding in (simplified) scoring by clinical folks shortens the list of items that require further discussion. The ones near the top of the list are high priority for almost everyone; the ones at the bottom of the list are low priority for almost everyone; and the ones in the middle (the shorter list) can be opened for a more thorough discussion and evaluation. When there are long request lists from multiple, disparate requesters, this methodology can develop a consensus on what to fund and what not to fund or delay. Finance always has the largest “weight.”
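
    One rough way to sketch the triage Ted describes, with hypothetical requests, combined scores, and one-third cutoffs:

    ```python
    # A rough sketch of the triage described above: combine HTM and clinical
    # scores, fund the clear top, defer the clear bottom, discuss the middle.
    # The requests, scores, and one-third cutoffs are hypothetical.

    requests = {
        "CT scanner": 9.1, "telemetry monitors": 8.7, "ultrasound": 6.2,
        "infusion pumps": 5.8, "electrosurgical unit": 3.0, "exam tables": 2.4,
    }

    ranked = sorted(requests, key=requests.get, reverse=True)
    cut = len(ranked) // 3
    fund = ranked[:cut]          # high priority for almost everyone
    defer = ranked[-cut:]        # low priority for almost everyone
    discuss = ranked[cut:-cut]   # the shorter middle list gets the real discussion

    print("fund:", fund)
    print("discuss:", discuss)
    print("defer:", defer)
    ```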

    • Russell Furst Says:

      Ted, I’d like to expand on your comment, with which I agree, that finance always has the largest weight. What is often missed when HTM folks make recommendations for replacement is what is even financially possible. I suspect that in most organizations the capital required to replace all the equipment that “ought” to be replaced based on our technical assessments far exceeds the capital that is available. It seems reasonable, then, that financial feasibility should be a primary factor we consider when making replacement recommendations.

  3. William A Hyman Says:

    I hope this isn’t a plea for a lack of objectivity and less than rational thought. At a minimum, you have to be able to explain the basis for the decision made, presumably as a result of analyzing an enumerated set of criteria. However, I agree that scoring and adding can be fake rationality. Note that, among other weaknesses, many less important criteria can collectively outscore a few highly important criteria. I also think that lists of attributes should include “other” as a reminder that all such lists are inherently incomplete. The process should be viewed as a guide to analysis and a basis for discussion, not as an algorithm that produces a definitive answer. It can also help answer the retroactive post-failure and/or post-injury question: Why did you buy this one and not that one?
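
    A made-up numeric illustration of the outscoring weakness noted above (all weights and scores hypothetical):

    ```python
    # Eight minor criteria weighted 0.5 and scored 5 collectively outscore
    # two critical criteria weighted 3.0 and scored 3. All numbers are made up.
    many_minor = 8 * (0.5 * 5)     # -> 20.0
    few_major = 2 * (3.0 * 3)      # -> 18.0
    print(many_minor > few_major)  # True: the trivia collectively win
    ```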

    • Russell Furst Says:

      It is a plea for more rational thought. I am suggesting that traditional scoring systems generally minimize critical thinking and are less rational than conversations that include all the factors that the clinical/administrative/financial teams use to make replacement decisions, many of which are subjective.
