I want to tell a story.
I grew up in Cleveland, Ohio, a city that, for most of my life, sat comfortably in the top two or three that everyone likes to poke fun at as a place where you do not want to live. Thankfully, it has gotten much better, and in 2016 we won a world championship in basketball, the first (and only) championship for a home team in my lifetime. I literally shed tears of joy.
Chief among the reasons Cleveland was the butt of many a standup comedy joke was the fact that, in 1969, Time magazine published a picture of the Cuyahoga River on fire. Yes, for you youngsters who are too young to remember, a fire burned on the river. This was truly a devastating event, one reputed to have led to the creation of the Clean Water Act and fueled the creation of the EPA. The article created a lot of public outcry, and, thankfully, the Cuyahoga River has remained inferno-free since.
However, a little research on this topic revealed a more interesting story. The picture in the Time article was not of the 1969 fire, which was put out before news photographers could capture it. Back then, nobody had camera phones. The picture in the article was from another Cuyahoga River fire 17 years earlier, in 1952.
So I know what you may be thinking. It is devastating enough that the river caught on fire at all, but for it to catch on fire twice is absolutely insane. I mean, what does it take for us to change our ways and learn?
Hold on; it gets better.
I did a little more research and discovered that the 1969 fire was not the second time the river had gone ablaze. In fact, it was the 13th time the Cuyahoga had burned, with the first recorded incident occurring in 1868! Keep in mind that we are talking about recorded incidents, and we all know that not all bad things make it to the public record.
So where am I going with this?
Everyone knew what we needed to do to stop the river from catching on fire. It was quite simple really: stop dumping pollutants into the water. This, however, seemed like a silly idea to at least 100 years of very productive industrialists. After all, according to this article on the Cleveland Historical website, “water pollution was viewed as a necessary consequence of the industry that had brought prosperity to the city.”
The truth is that Clevelanders did not seem very bothered with pollution. As long as everyone made money and could buy all the nice things they wanted, it was OK.
Fast forward to modern healthcare and the wonderful prosperity technology has created. Software companies are cashing in on the technology craze that has created lots of new possibilities in the world of healthcare. It seems like everything is now connected to a network, and the list of network connected devices and systems grows every day—and so does the pollution.
I am talking about software pollution, or, perhaps more appropriately, known software pollutants.
It may (or may not) surprise you to know that the FDA is still approving medical devices running legacy operating systems with known security vulnerabilities, such as Windows XP. Moreover, the agency simply has no authority to reject such submissions, and while some at the FDA may argue they can, they simply do not.
The creation of the EPA and the Clean Water Act forced organizations both to clean up pollution and to test for known pollutants going forward. Guidance on how to avoid pollution existed long before the first river fire occurred, and in the areas we cared about, we managed fairly well: while the local river may have been a cesspool, the mountain retreat remained clean. People knew what to do to keep things clean; whether they did came down to how prevalent industry was and how much those living in the environment valued hygiene.
So let’s bring this back to healthcare. The regulatory environment surrounding healthcare is a bit complicated and challenging, but it does exist. Perhaps the most complex thing of all is the disconnect between who regulates healthcare delivery organizations (HDOs), who regulates software manufacturers, and who regulates medical device manufacturers. On the HDO side, we have The Joint Commission and the American Hospital Association working in concert with HHS and state-level regulators. On the software front, we essentially have nobody regulating security (except where specific HIPAA privacy issues are at stake). On the medical device front, we have the FDA. I mention these three categories because they all represent areas where software pollution can introduce itself into an environment.
Today we have a situation where HDOs face enormous new challenges, not least the recent WannaCry and Petya attacks, yet still have no regulations requiring them to test for known and unknown vulnerabilities, or to patch them when discovered. The patching issue was made plain by the WannaCry attack, which exploited a known vulnerability for which Microsoft had already released a patch, on systems that are not regulated by the FDA (or anyone, for that matter). I have to tip my hat to Microsoft for having the patch out, but keep in mind that this does not represent the normal state of affairs when it comes to patching systems: most vendors are either unaware of security issues, or are aware and choose not to patch them.
On the FDA front, the agency has moved toward decreased regulation of software systems, mostly at the insistence of the software industry, which has effectively lobbied Congress to force the FDA to back off. A perfect example is how electronic health record systems are exempt from FDA regulation regardless of how tightly they integrate with medical devices (and many of them are tightly integrated), as is the deregulation of medical device data systems, which went from Class 3 devices prior to 2010, to Class 1 devices, to completely deregulated in 2015. Knowing this, is it a shock to anyone that the WannaCry and Petya attacks hit these very same systems?
Additionally, while the FDA claims it can enforce some of the tenets of the guidance it has released on medical device security, it simply has not done so in any meaningful way. In fact, the FDA, to this very day, still approves medical devices running Windows XP, a known vulnerable operating system, because the current regulatory scheme simply does not give it the authority to do otherwise. Couple that with a recent Synopsys-sponsored Ponemon report, Medical Device Security: An Industry Under Attack and Unprepared to Defend, which found that 43% of medical device manufacturers and 53% of HDOs either do not test for vulnerabilities or are unsure whether testing occurs, and we are left with a very disconcerting situation.
In my experience, organizations that have had to address software pollution head-on because of hacking incidents, bad press, or regulatory pressure (as rarely as that occurs) end up doing a very good job of managing security issues. Everyone else is hit or miss, with far more misses than I or anyone else should be comfortable with. What this tells me is that the tools and techniques needed to manage security adequately all exist; the willingness to use them simply does not.
That is where the regulators need to step in. Unless every entity involved in building out our vast technological capabilities in a connected healthcare world is literally forced to up the ante on security management, the sea of connected healthcare will grow more and more polluted until we simply cannot navigate it safely; many would argue we are already there.
WannaCry was a big “river fire” in the world of healthcare. Petya followed, and the river of polluted software remains very vulnerable. It will take a while to clean up even if everyone pulls together now and agrees on a way forward. Unless the regulatory authorities step in right now and force a cleanup, things will get progressively worse. It’s time for regulations to change.
Mike Ahmadi is global director of critical systems security with the Synopsys Software Integrity Group.