Lisa Simone and Daniel Rubery: A Tower of Babel with Medical Device Software Failures

Another set of health information is compromised, another medical device unexpectedly reboots, another set of patient data results is mixed up, another device fails to perform as intended. News stories bring us reports of medical device failures, and increasingly, these failures appear to be related to software quality. Use of software in healthcare continues to grow. The popularity of medical and health-related applications is exploding due to the ubiquity of portable devices and platforms, creating new business opportunities and expanding the diversity of software developers and manufacturers.

Hacking, privacy breaches, and stolen patient information are obvious new threats. However, analysis of failures over the last several decades shows that the same root causes of software failures continue to recur despite new tools and methods to improve software quality. What do all of these failures have in common? Despite a handful of retrospective studies, it's not clear, because no common language has been adopted across the industry to discuss and characterize failures. Knowledge learned is not shared or leveraged across the medical device sector to improve software quality. We aim to change that.

In 2011, the Institute of Medicine issued a report called Health IT and Patient Safety: Building Safer Systems for Patient Care. The report called for the development of measures to assess and monitor the safety of health IT, including a recommendation that “the ONC should work with the private and public sectors to make comparative user experiences across vendors publicly available.”

The report added: “Another area necessary for making health IT safer is the development of measures. Inasmuch as the committee’s charge is to recommend policies and practices that lead to safer use of health IT, the nation needs reliable means of assessing the current state and monitoring for improvement. Currently, no entity is developing such measures.”

In order to gather useful data on the causes of safety-related software failures, there needs to be a common way to identify and report the software defects that lead to the failures. Existing standards for classifying defects focus on capturing attributes of the defect such as priority, severity, probability of recurrence, and insertion activity. The values of these attributes are easily bounded. When attempting to describe the type of defect, however, the values are defined less clearly.
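The contrast between bounded attributes and an open-ended "type" field can be illustrated with a sketch of a defect record. This is a hypothetical schema, not one drawn from any existing standard: the enumerations and field names are assumptions chosen to show how priority, severity, and insertion activity lend themselves to fixed value sets, while the defect type today is typically free-form text.

```python
from dataclasses import dataclass
from enum import Enum

# Bounded attributes: each value comes from a small, well-defined set,
# so records from different manufacturers are directly comparable.
class Severity(Enum):
    MINOR = 1
    MODERATE = 2
    CRITICAL = 3

class InsertionActivity(Enum):
    REQUIREMENTS = "requirements"
    DESIGN = "design"
    CODING = "coding"
    MAINTENANCE = "maintenance"

@dataclass
class DefectRecord:
    severity: Severity
    priority: int                  # e.g. 1 (highest) through 5 (lowest)
    recurrence_probability: float  # 0.0 through 1.0
    insertion_activity: InsertionActivity
    # The defect *type* has no shared vocabulary: it is usually a
    # free-form string, so the same underlying defect may be labeled
    # differently by every company, defeating cross-industry analysis.
    defect_type: str

record = DefectRecord(
    severity=Severity.CRITICAL,
    priority=1,
    recurrence_probability=0.2,
    insertion_activity=InsertionActivity.CODING,
    defect_type="off-by-one in alarm threshold comparison",
)
```

A common classification language would, in effect, turn the free-form `defect_type` field into another bounded enumeration that all manufacturers share.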

Companies often have an enterprise-wide method to describe the defects that they find. However, how they identify and characterize a defect may vary from company to company. This makes it impossible to gather and analyze common defect and failure information in a broader context, and provides no way to tell whether the type and number of defects seen by a company is unusual. It may be that all companies are experiencing the same types of failures and that industry-wide training and tools could be developed to eliminate the problem. Without a way to gather the information, the strength and usefulness of such methods and tools cannot be fully realized.

To address this issue, AAMI has established a task force to identify a common language to describe software defects. This language should be able to capture the defects currently being identified and be flexible enough to allow for expansion as technology and development methods evolve. It must be methodology and programming-language neutral so that all manufacturers can use and share information.

Once individuals, companies, and communities use the same language, we can find means to aggregate the data in a way that benefits not only the manufacturers, but also the tool developers, consultants, trainers, and regulators. If done correctly, manufacturers will not fear that their data will be used against them. Instead, they can compare their data and methods with peers to identify where their processes can be improved. Companies can monitor the improvements across multiple projects and years in order to ensure these improvements are effective. The end result is higher quality and safer software for the users and patients of the systems that we develop.

If you would like to help define a common language for defect classification and how it might be used, not only within medical device companies but across the sector to improve public health, consider joining the AAMI Software Defect Classification Task Force. To do so, please contact AAMI Standards Director Wil Vargas at

Lisa Simone, a biomedical engineer with the U.S. Food and Drug Administration, and Daniel Rubery, senior principal engineer with NxStage Medical, Inc., are the co-chairs of the AAMI Software Defect Classification Task Force.
