CyberMDX made news recently when its research and analysis team identified and responsibly disclosed two serious and hitherto unknown medical device vulnerabilities.
Because the vulnerabilities discovered affect so many facilities, and because both official ICS-CERT disclosures were issued so closely together, there's been a lot of interest — both in the specific vulnerabilities and in the responsible disclosure process in general.
In an effort to address that interest and to answer some of the most common questions that accompany it, I decided to put pen to paper — or finger to key, as the case so happens — and provide you with more information. To do that, I turned to Elad Luz, Head of Research at CyberMDX and the man chiefly responsible for the recent disclosures.
While I originally intended to distill his insights into an explainer article, after speaking with Elad, I knew that I would be remiss if I did not let you hear (or read) from him directly. With that in mind, I've reproduced for you below parts of our conversation — transcribed in interview form.
Behind the Scenes of a Zero-Day Vulnerability Disclosure
How did you discover these vulnerabilities?
Well it involves some trade secrets and proprietary techniques, so I can't really share the whole story — at least not if you plan on publishing it...
But, there's a lot that I can share with you. Take the Becton Dickinson (BD) device vulnerability, for example. As Head of Research, I am tasked with collecting general information and conducting analyses on the types of connected devices that are present in our customers' facilities. There are a number of approaches I take to this research. One of them is to crawl the internet and review references to the device. In the course of an exercise along this approach, I came upon a medical device forum where some information on BD's Alaris® TIVA Syringe Pump was shared.
To begin with, this is a problem since some of the information shared was of a sensitive nature and really ought not to be shared beyond the device's manufacturer, owners, and operators. In this specific case, the information included some specs describing the communication protocol used by the pump.
Even though what I found didn't outline the entirety of the communication protocol, it included enough information to constitute a serious problem. To a malicious and skilled actor, the available protocol elements serve as a key, capable of opening a door to the inner workings of your healthcare operation.
In examining the information found, I noticed that while the device was designed for direct administrative control, the communication protocol on which it runs supports remote control. From a structural point of view, this is asking for trouble. You're effectively introducing functionality that cannot be used without being abused. So as soon as someone can figure out how to open up a line of communication with the device, they can hijack it.
Building on the elements of the communication protocol I found, I was able to use my knowledge of medical device protocols in conjunction with my white hat hacking skills to satisfy the required connection parameters. That was kind of proof positive of the vulnerability — I as a third-party could send totally unauthorized and potentially fatal commands to the pump.
What does the process of reporting a vulnerability for responsible disclosure look like?
Since this sort of thing is obviously sensitive, strong governments are wise enough to insert themselves into the process and establish well-defined procedures for disclosure. In the US, this is overseen by ICS-CERT, the Industrial Control Systems Computer Emergency Response Team. ICS-CERT is a division of the National Cybersecurity and Communications Integration Center (NCCIC), which is itself a division of the Department of Homeland Security.
Established in 2009, ICS-CERT's stated goal is to reduce the risk of systemic cybersecurity and communications threats to the public.
ICS-CERT asks that when people discover vulnerabilities, they be formally reported to them. In hopes of avoiding an adversarial dynamic, I decided that it would be best for us to first reach out to the device manufacturers, informing them of the issue, and then with their support and collaboration bring the information to ICS-CERT.
So that's pretty much what we did. I contacted the vendors, discussed and verified the vulnerability with them, and secured their support in reporting the issue to ICS-CERT. I think approaching it in this way was important — first, so they wouldn't be caught off guard, and second, so they could get a head-start on fixing or patching the underlying problem and issuing an advisory to their customers.
Both vendors were extremely professional and collaborative in working through the responsible disclosure process to close the security gap.
So, I submitted the vulnerability report to ICS-CERT and waited for their reply. ICS-CERT sets a goal of issuing disclosures within 45 days after initial contact is made.
Correct me if I'm wrong, but ultimately, most if not all of the steps required to actually resolve a threat fall to the vendor. What type of enforcement mechanisms does ICS-CERT have in its arsenal to apply pressure to vendors and ensure that they behave responsibly and conscientiously?
You're not wrong. From where I'm standing, the most powerful tool in ICS-CERT's arsenal is its voice as a trusted authority. I don't know about enforcement measures or penalties, but they can make you look really really bad if they want to, which would doubtless affect your reputation and in turn your business.
The 45-day goal I mentioned before is actually not a hard and fast rule, but more of a carrot and stick instrument. On ICS-CERT's website, they write something along the lines of "when a vendor does not provide a reasonable timeframe for remediation, ICS-CERT may disclose vulnerabilities 45 days after initial contact, regardless of whether patches or customer advisories have been issued".
I don't remember the exact phrasing, but if you pay attention to the language, it's very clear that if a vendor does not behave in a serious and scrupulous manner, ICS-CERT will issue their disclosure with appropriate transparency and honesty and allow the natural consequences of the vendor's actions or inactions to take effect. On the flip side, it's also implied that if you work with them in good faith, they can be lenient with the timeframe to allow for everything to be handled in the best, least damaging way possible.
Since these vulnerabilities pertain to critical infrastructure points in hospitals, I have no doubt that other government bodies are also involved and communicating with ICS-CERT. I don't know the particulars of those interactions and how they work, but obviously the FDA, for example, is notified of and monitors vulnerabilities affecting approved devices. I'm sure there are also other government agencies in the mix — perhaps even some that we don't know about.
How does ICS-CERT go about verifying the vulnerability?
Well, after I submitted the vulnerability report through a website mailbox, they reviewed it and once it passed a smell test, they reached out to me and the manufacturer in order to collect more information. In effect, they look for confirmation from both sides — the discoverer and the manufacturer.
You submitted the vulnerability report through an online mailbox? Isn't the mere existence of such a mailbox a hacker's dream come true?
Yes and no. If a bad actor gets into that mailbox, there's no denying that it's going to be Christmas come early. He or she would find vulnerabilities for which patches don't yet exist and for which manufacturers haven't yet issued advisories. Though these don't technically constitute zero-day vulnerabilities (since the authorities, some cybersecurity experts, and the manufacturers know about them), for all intents and purposes they are indistinguishable from the real thing. Havoc can definitely be wrought.
At the same time, most of the vulnerabilities contained in the mailbox will have been submitted in the past — meaning that many will have already had their disclosures issued and others will be right around the corner, making the window for unmitigated mayhem quite small actually. Beyond this, some subset of submissions will fail to make the grade as genuine vulnerabilities that have not already been disclosed.
Hacking into a secure system, combing through all that information, making sense of it, and acting on it before a disclosure is published seems like an incredible amount of work for a very small window of "opportunity".
In going through this process, was there a sense of urgency that these parties (the vendor and ICS-CERT) need to get the information out there before a hacker finds what you found?
For sure. But that doesn't mean that they aren't also calm and controlled about it. This is very important work, but in order to work, it also needs to be very disciplined work.
I think it's important to understand that as the discoverer and the manufacturer, our experience of the situation is very different from that of the ICS-CERT professionals. For me, I discovered something dangerous — it's startling in a way — and I want to see it attended to as quickly as possible. For the manufacturer, their device is found to be in some way compromised. It's a threat to their business and a big disruption to their normal activities. They want, very eagerly, to put it behind them and move forward.
For ICS-CERT though, it's their everyday. My discoveries followed yesterday's scary discovery, which will give way to tomorrow's scary discovery. I'm not saying that ICS-CERT people are desensitized to cyber threats, but they are definitely a little less panicked about them. And I think that's a good thing.
At the same time, everything sort of depends on the particulars of the individual case. So with the Capsule Datacaptor Terminal Server disclosure, for example, we're talking about a specific opening that the device provides to the "Misfortune Cookie" vulnerability. The "Misfortune Cookie" vulnerability was discovered almost five years ago now. That's five years for hackers and malicious parties to write and refine code designed to seize on that vulnerability. That malicious code is now widely available, off-the-shelf, meaning it can be deployed nearly instantly and with very little effort.
The fact that there are elements of critical healthcare infrastructure vulnerable to that type of attack, still running without any relevant patches five years down the road, amounts to a much more imminent threat. This is what we refer to as a situation that can be "exploited in the wild", and in a situation like this there should be — and there is — a much greater sense of urgency.
In the case of the Datacaptor Terminal Server, we felt like we were operating on borrowed time, so to speak, and the sense of alacrity with which the case was handled was palpable.
Is there a principle in place that disclosures are withheld until patches or other remedies are in place?
Based on my impression, yes. Still, everything really is subject to consideration of the particular circumstances.
So like I said before, the 45-day release window is designed to be somewhat flexible in order to allow for all patches and workarounds to be put in place before the disclosure is publicly issued. But this too obviously has its limits and if a device manufacturer isn't acting responsibly and within reasonable timeframes, I don't think ICS-CERT has much choice but to go ahead with the disclosure.
Do you have any visibility into what type of internal processes this triggers within the vendor?
We had some email strings going back and forth, and I got to see a little of all of the different people and voices from the company involved. Manufacturers take this very seriously and have product security teams in place to handle vulnerabilities.
We worked closely with them as they studied the vulnerability and its implications on their end and determined what needed to be done to protect their customer base.
There was definitely a sort of internal assessment and review process initiated and I can say with certainty that the FDA was looped in on the situation early and often.
How is the severity score for the vulnerability determined?
The severity score is expressed in what's called a CVSS grade, which is essentially a vulnerability rubric. There are multiple versions of the CVSS, but the most popular is version 3, and that's what we used. There's an online calculator that breaks the CVSS down according to eight well-defined parameters.
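For readers curious what that calculator actually does under the hood: the CVSS v3.1 base score is a fixed formula over those eight parameters. Here is a minimal sketch of the scope-unchanged case only, using the metric weights published in the public CVSS v3.1 specification; the example vector at the end is purely illustrative and is not the score of either disclosure discussed in this interview.

```python
# Sketch of the CVSS v3.1 base-score arithmetic (scope-unchanged case only).
# Weights come from the CVSS v3.1 specification; the vector below is hypothetical.

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability

def roundup(x: float) -> float:
    """CVSS v3.1 'Roundup': round up to one decimal place."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for a scope-unchanged CVSS v3.1 vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Example: network-reachable, low attack complexity, no privileges or user
# interaction required, high impact across the board.
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
```

Because every input is a discrete, well-defined weight, two parties scoring the same vector independently will always land on the same number; the negotiation Elad describes is about which metric values best describe the vulnerability, not about the arithmetic.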
As the discoverer, I suggested values for each of those parameters. The manufacturer does the same. Then, with ICS-CERT acting as a moderator of sorts, we begin fleshing out the different arguments or interpretations that created daylight between our scores. The goal is to close all gaps and disagreements and move toward a trilateral consensus.
We went back and forth, discussing the nuances of our competing rationales for a while, and slowly moved into alignment. At the end of the day, ICS-CERT has the final say, but based on what I saw, they really do prefer for us to reach a consensus first.
The vendor must have a vested interest in downgrading the severity score, but you as the discoverer and the Head of Research for a medical device cybersecurity provider also have a vested interest in upgrading the severity. How are these competing and perhaps biased interests managed by ICS-CERT?
ICS-CERT asks for extensive background information from both sides. They also require substantive justifications — usually on the basis of precedent, comparison, or clearly delineated definitions — for all arguments put forth.
If you can't corroborate your point of view with evidence, it won't hold water, and the competing position will win out. And as I said before, CVSS parameters are fairly well-defined to avoid ambiguity and subjectivity in scoring.
Breaking Down the Players & Their Play
What percentage of these vulnerabilities are discovered by people like you working for cybersecurity companies?
This is nothing more than an educated guess, but I'd say probably somewhere in the area of 90%.
What percentage of these vulnerabilities are discovered by volunteer white hat hackers?
Well, I'd put the remainder in this category. So let's say 10%.
Is there a reward system in place for vulnerability discoveries by white hat hackers?
Mature vendors are moving much more in this direction and have teams that can be contacted by white hats, sometimes even offering rewards.
BD, for example, encourages third-parties to report potential security issues with its devices.
I've also heard of companies holding hackathons for their own products and services — open to the public.
Do medical device vendors have people whose job it is to search for these types of vulnerabilities?
Sometimes. It usually depends on the maturity of the company. Google actually has a whole team that operates independently from the rest of the company whose sole objective is to find zero-day vulnerabilities in Google products and services. When they find vulnerabilities, rather than keeping them discreet and resolving them internally, they report them to US-CERT (a sibling organization to ICS-CERT).
It might seem incredibly strange and in a way it is — paying people to attack you — but I consider this a best practice and really believe it's the future for all serious companies.
What percentage of vulnerabilities responsibly disclosed through ICS-CERT, would you estimate, are discovered by vendor personnel?
It's impossible to know.
As it currently stands, Google really is the exception to the rule. For obvious reasons, most companies that learn about problems with their products and services aren't going to publicize them. They fix them and we never find out about it.
A Little Perspective
After a disclosure is issued, can the matter be considered solved or do threats remain around that particular vulnerability?
Not by a long shot. We know that some facilities continue using vulnerable devices without implementing patches or other remedies. There's a whole lifecycle to vulnerabilities, and disclosure really only takes them to the second stage of that lifecycle.
In this respect, aren’t we just doing the work of black hat hackers for them — telling them what to target?
That's a very cynical question so I'm going to give it a very diplomatic answer: yes and no.
The truth is that the relevant market dynamics are not totally transparent and there's a lot we don't know when it comes to evaluating any given vulnerability. There will always be stragglers when it comes to adopting best practices and implementing the latest patches, so in this sense, we do in a way turn them into very low-hanging fruit for cyber criminals.
But at the same time, we put the "good guys" back in the position of control. If they want to run a tight ship, they can — and we make them safer. If they don't want to, the blame really lies with them.
Think about it like this: sometimes we tell the "bad guys" what to target, but we always tell the "good guys" how to stop the bad guys. Often, we’re telling users where to look when bad actors already know where to look.
You're right that there’s a risk involved, but it’s necessary in order to move forward and advance cybersecurity.
Responsible actors take this information and leverage it to improve their operations. If they ignore it, they put themselves in peril, but it is not really fair to blame the disclosure process for that. Sunlight really is the best disinfectant.
Do you think worldwide healthcare is safer the day before or the day after a zero-day vulnerability is disclosed?
The day after. No doubt. Patches are normally built into the disclosure. And taking a broader view, it definitely leads to smarter product development and security protocols going forward.
You have to understand that absence of proof is not proof of absence. So the fact that we never heard of an attack based on a given vulnerability in no way means that it never happened. We can't always know, let alone measure, the positive impact made by disclosures.
Do you think the responsible disclosure model could be improved by replacing public disclosures with direct-to-facility disclosures? (Security professionals like yourself would also be able to apply for discreet access to information after being properly vetted.)
No. I think the more transparency, the better. Like I said, sunlight is the best disinfectant. There are many advantages to public disclosure:
- It raises awareness and bolsters education efforts.
- It puts pressure on vendors to fix the vulnerability.
- It lets users of devices that were purchased second-hand remain informed. These users are much more likely to be omitted from a direct-to-facility disclosure model.
- It also forces hospital administrators to act more responsibly more quickly.
Of course, keeping the whole disclosure on the down-low wouldn't really solve the problem either because, in addition to forfeiting the benefits I just mentioned, direct-to-facility disclosures would be a prime target for hacking attacks.
So you'd have ignorant employees, lazy administrators, lazy device manufacturers, some users left totally in the dark, and a potentially rampaging zero-day vulnerability in the wild.
What has most surprised you about the responsible disclosure process?
I'd have to say the attitude of the vendors. I really didn't know what to expect from them because to be frank, this whole thing was a pretty big headache for them. But they really treated me as a partner. They were cordial, professional, and actually appreciative that it was being handled in a responsible manner.
What do you think people would find most interesting about the process?
The negotiations between the three sides (discoverer, manufacturer, and ICS-CERT) were really interesting, not just in terms of the particulars of the discussion but in terms of the overall atmosphere.
There was a real spirit of collaboration coming from all parties. After this experience, I'm left feeling pretty optimistic about the future of cybersecurity in medicine.
Do you think the “bad guys”, or at least some subset of them, can be persuaded to switch teams with the emergence of a lucrative white hat market?
It’s complicated. I think you need to first differentiate between a "bad guy" that interferes with medical device performance and a "bad guy" that just steals data. Of course, both are terrible. But that does not make them equal.
When it comes to someone who manipulates a pacemaker or a syringe pump, or holds a fleet of hospital computers hostage with a ransomware attack, it's not just a matter of incentive, it's a matter of values. A person who would do such a thing is beyond redemption. That person has no conscience, and even if enlightened self-interest put us both on the same side of an issue, I wouldn't trust him in a million years.
When it comes to stealing information, that act can be considered a little more benign, at least in the mind of the actor. In theory, someone like that could win my trust if he or she has a change of heart, mind, and circumstances. So to this extent, it's mostly a psychological and economic question. The psychology of the matter will obviously change with each case, depending on the individual. The economic question is much more suitable to a structural analysis.
Bear with me now as I work through it. The white hat market will only ever be as lucrative as it’s required to be in order to contend with the black hat market. If the black hats go away today, there’ll be no point for the white hats tomorrow morning. So there's a dependence there.
The costs associated with a catastrophic attack will always be higher than the amount of money invested in protecting against it. This is because as soon as the costs of defense approach those associated with a successful attack, the vast majority of decision makers would prefer to save their time and energy and assume a passive posture. Of course, prima facie, a 100% vulnerability does not translate to 100% occurrence of successful attacks. So stakeholders must also factor in the likelihood of actually being made to pay for their lax security.
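That acceptable-risk calculus boils down to a one-line expected-value comparison. A toy sketch, with entirely made-up numbers chosen purely for illustration:

```python
# Toy "acceptable risk" comparison. All figures are hypothetical, made up
# for illustration; they do not describe any real facility or incident.

attack_cost = 10_000_000   # assumed cost of one successful catastrophic attack ($)
attack_probability = 0.05  # assumed yearly likelihood of suffering such an attack
defense_cost = 400_000     # assumed yearly spend on defenses ($)

# Expected yearly loss from staying passive.
expected_loss = attack_cost * attack_probability

# A purely cost-driven stakeholder invests in defense only while defending
# stays cheaper than the expected loss it prevents.
invest_in_defense = defense_cost < expected_loss

print(expected_loss, invest_in_defense)  # → 500000.0 True
```

This is the structural point in the paragraph above: once defense spend creeps toward the expected loss (probability times catastrophic cost), a cost-driven decision maker goes passive, so the market can never price the black hats entirely out of existence.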
It basically boils down to the idea of acceptable risk. So, at least as I see it, it's not really possible for the white hat market to incentivize the black hat market out of existence.
All this being said, it’s not just a matter of incentivizing black hatters to switch sides, but also disincentivizing them from staying on team “bad guys”. If governments can develop ways to effectively identify and prosecute bad actors, or even just stop the flow of illicit money fueling medical cybercrime, we may be able to more permanently tip the balance in favor of the good guys.
Does that make sense?
It definitely sounded smart. But I'll be the one asking the questions... In your opinion, is the threat of “good guys” defecting to the other side realistic? Is it something we should be worried about?
Sure. It’s not something I spend a lot of time thinking about, but I’d have to imagine that it does happen and will probably continue to happen. I don’t know of any specific cases of it happening, and I don't think worrying about it is a very productive way to spend your time in any case. I'd rather spend my time fortifying security measures and making it harder for the "bad guys" to succeed.
Right now, who is winning the cyber battle for the future of healthcare — the good guys or the bad guys?
I'm sorry to say that, as things stand now, the bad guys are winning. The industry is far behind global standards for security best practices and defense. Clinical assets are vulnerable in practically every hospital that doesn’t have a devoted solution for their security, and the threat is truly horrifying. Imagine a compromised ventilator when you’re hospitalized!
Who do you expect to win in the long term?
The good guys, of course. Although the industry is in a bad position, there’s a really encouraging atmosphere emerging and decision makers are beginning to show a readiness and willingness to close the gap.
I look at the CyberMDX team and product and I am confident that we and others like us will be there to turn the tide.