
Anatomy of an ICS-CERT Advisory: How Zero-Days are Disclosed

CyberMDX made news when its research and analysis team identified and responsibly disclosed two serious and hitherto unknown medical device vulnerabilities.

Because the vulnerabilities discovered affect so many facilities and because both official ICS-CERT disclosures were issued so closely together, there's been a lot of interest generated as a result — both in terms of the specific vulnerabilities and in terms of the responsible disclosure process in general. 

In an effort to address that interest and to answer some of the most common questions that accompany it, I decided to put pen to paper — or finger to key, as the case so happens — and provide you with more information. To do that, I turned to Elad Luz, Head of Research at CyberMDX and the man chiefly responsible for the recent disclosures.

While I originally intended to distill his insights into an explainer article, after speaking with Elad, I knew that I would be remiss if I did not let you hear (or read) from him directly. With that in mind, I've reproduced for you below parts of our conversation — transcribed in interview form.

 

Behind the Scenes of a Zero-Day Vulnerability Disclosure

Jon:

How did you discover these vulnerabilities?

Elad:

Well it involves some trade secrets and proprietary techniques, so I can't really share the whole story — at least not if you plan on publishing it...

But, there's a lot that I can share with you. Take the Becton Dickinson (BD) device vulnerability, for example. As Head of Research, I am tasked with collecting general information and conducting analyses on the types of connected devices that are present in our customers' facilities. There are a number of approaches I take to this research. One of them is to crawl the internet and review references to the device. In the course of one such exercise, I came upon a medical device forum where some information on BD's Alaris® TIVA Syringe Pump was shared.

To begin with, this is a problem since some of the information shared was of a sensitive nature and really ought not to be shared beyond the device's manufacturer, owners, and operators. In this specific case, the information included some specs describing the communication protocol used by the pump.


Even though what I found didn't outline the entirety of the communication protocol, it included enough information to constitute a serious problem. To a malicious and skilled actor, the available protocol elements serve as a key, capable of opening a door to the inner workings of your healthcare operation.

In examining the information found, I noticed that while the device was designed for direct administrative control, the communication protocol on which it runs supports remote control. From a structural point of view, this is asking for trouble. You're effectively introducing functionality that cannot be used without being abused. So as soon as someone can figure out how to open up a line of communication with the device, they can hijack it. 

Building on the elements of the communication protocol I found, I was able to use my knowledge of medical device protocols in conjunction with my white hat hacking skills to satisfy the required connection parameters. That was proof positive of the vulnerability: as a third party, I could send totally unauthorized and potentially fatal commands to the pump.

 

ICS-CERT

Jon:

What does the process of reporting a vulnerability for responsible disclosure look like?

Elad:

Since this sort of thing is obviously sensitive, strong governments are wise enough to insert themselves into the process and establish well-defined procedures for disclosure. In the US, this is overseen by ICS-CERT, the Industrial Control Systems Cyber Emergency Response Team. ICS-CERT is a division of the National Cybersecurity and Communications Integration Center (NCCIC), which is itself a division of the Department of Homeland Security.

Illustration of NCCIC's History

ICS-CERT's stated goal is to reduce the risk of systemic cybersecurity and communications threats to the public.

ICS-CERT asks that when people discover vulnerabilities, they formally report them to ICS-CERT. In hopes of avoiding an adversarial dynamic, I decided that it would be best for us to first reach out to the device manufacturers, informing them of the issue, and then, with their support and collaboration, bring the information to ICS-CERT.

So that's pretty much what we did. I contacted the vendors, discussed and verified the vulnerability with them, and secured their support in reporting the issue to ICS-CERT. I think approaching it this way was important: first, so they wouldn't be caught off guard, and second, so they could get a head start on fixing or patching the underlying problem and issuing an advisory to their customers.

Both vendors were extremely professional and collaborative in working through the responsible disclosure process to close the security gap. 

So, I submitted the vulnerability report to ICS-CERT and waited for their reply. ICS-CERT sets a goal of issuing disclosures within 45 days after initial contact is made.

Jon:

Correct me if I'm wrong, but ultimately, most if not all of the steps required to actually resolve a threat fall to the vendor. What type of enforcement mechanisms does ICS-CERT have in its arsenal to apply pressure to vendors and ensure that they behave responsibly and conscientiously? 

Elad:

You're not wrong. From where I'm standing, the most powerful tool in ICS-CERT's arsenal is its voice as a trusted authority. I don't know about enforcement measures or penalties, but they can make you look really really bad if they want to, which would doubtless affect your reputation and in turn your business.

The 45-day goal I mentioned before is actually not a hard and fast rule, but more of a carrot and stick instrument. On ICS-CERT's website, they write something along the lines of "when a vendor does not provide a reasonable timeframe for remediation, ICS-CERT may disclose vulnerabilities 45 days after initial contact, regardless of whether patches or customer advisories have been issued".

I don't remember the exact phrasing, but if you pay attention to the language, it's very clear that if a vendor does not behave in a serious and scrupulous manner, an ICS-CERT advisory will be issued with appropriate transparency and honesty, allowing the natural consequences of the vendor's actions or inactions to take effect. On the flip side, it's also implied that if you work with them in good faith, they can be lenient with the time frame to allow for everything to be handled in the best, least damaging way possible.

Since these vulnerabilities pertain to critical infrastructure points in hospitals, I have no doubt that other government bodies are also involved and communicating with ICS-CERT. I don't know the particulars of those interactions and how they work, but obviously the FDA, for example, is notified of and monitors vulnerabilities affecting approved devices. I'm sure there are also other government agencies in the mix — perhaps even some that we don't know about.

 
Jon:

In going through the responsible disclosure process, was there a sense of urgency that these parties (the vendor and ICS-CERT) needed to get the information out there before a hacker found what you found?

Elad:

For sure. But that doesn't mean that they aren't also calm and controlled about it. This is very important work, but in order to work, it also needs to be very disciplined work.

I think it's important to understand that the discoverer's and the manufacturer's experience of the situation is very different from that of the ICS-CERT professionals. For me, I discovered something dangerous (it's startling, in a way) and I want to see it attended to as quickly as possible. For the manufacturer, their device is found to be in some way compromised. It's a threat to their business and a big disruption to their normal activities. They want, very eagerly, to put it behind them and move forward.

For ICS-CERT though, it's their everyday. My discoveries followed yesterday's scary discovery, which will give way to tomorrow's scary discovery. I'm not saying that ICS-CERT folk are desensitized to cyber threats, but they are definitely a little less panicked about them. And I think that's a good thing.

At the same time, everything sort of depends on the particulars of the individual case. So with the Capsule Datacaptor Terminal Server disclosure, for example, we're talking about a specific opening that the device provides to the "Misfortune Cookie" vulnerability. The "Misfortune Cookie" vulnerability was discovered almost five years ago now. That's five years for hackers and malicious parties to write and refine code designed to seize on that vulnerability. That malicious code is now widely available, off-the-shelf, meaning it can be deployed nearly instantly and with very little effort.

The fact that there are elements of critical healthcare infrastructure vulnerable to that type of attack, still running without any relevant patches five years down the road, amounts to a much more imminent threat. This is what we refer to as a vulnerability that can be "exploited in the wild", and in a situation like this there should be — and there is — a much greater sense of urgency.

In the case of the Datacaptor Terminal Server, we felt like we were operating on borrowed time, so to speak, and the sense of alacrity with which the case was handled was palpable.


Jon:

How is the severity score for the vulnerability determined?

Elad:

The severity score is expressed in what's called a CVSS grade, which is essentially a vulnerability rubric. There are multiple versions of the CVSS, but the most popular version is version 3, and that's what we used. There's an online calculator that breaks the CVSS down according to eight well-defined parameters.
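
For readers who want to see the rubric in action, here is a minimal sketch of the published CVSS v3 base-score formula in Python. It is illustrative only: it is not CyberMDX's or ICS-CERT's tooling, and the example vector at the end is hypothetical rather than the score assigned to either advisory.

```python
# Minimal sketch of the published CVSS v3 base-score formula (illustrative only).
# The eight base parameters are Attack Vector (AV), Attack Complexity (AC),
# Privileges Required (PR), User Interaction (UI), Scope (S), and the
# Confidentiality/Integrity/Availability impacts (C, I, A).

WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    # Privileges Required is weighted differently when Scope is Changed.
    "PR": {"U": {"N": 0.85, "L": 0.62, "H": 0.27},
           "C": {"N": 0.85, "L": 0.68, "H": 0.50}},
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(value: float) -> float:
    """Round up to one decimal place, the way the CVSS v3.1 spec defines it."""
    scaled = round(value * 100000)
    if scaled % 10000 == 0:
        return scaled / 100000
    return (scaled // 10000 + 1) / 10

def base_score(av, ac, pr, ui, s, c, i, a):
    """Compute the CVSS v3 base score (0.0 to 10.0) from the eight base metrics."""
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss if s == "U" else 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][s][pr] * WEIGHTS["UI"][ui])
    if impact <= 0:
        return 0.0
    raw = impact + exploitability if s == "U" else 1.08 * (impact + exploitability)
    return roundup(min(raw, 10))

# Hypothetical vector: remotely reachable, low complexity, no privileges or user
# interaction needed, scope unchanged, high integrity and availability impact.
print(base_score("N", "L", "N", "N", "U", "N", "H", "H"))  # -> 9.1
```

In practice, each party's assessment is usually communicated as a vector string such as CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:H, which makes it easy to see exactly which parameter judgments differ between the discoverer, the vendor, and ICS-CERT.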

As the discoverer, I suggested values for each of those parameters. The manufacturer did the same. Then, with ICS-CERT acting as a moderator of sorts, we began fleshing out the different arguments or interpretations that created daylight between our scores. The goal is to close all gaps and disagreements and move towards a trilateral consensus.

We went back and forth, discussing the nuances of our competing rationales for a while, and slowly moved into alignment. At the end of the day, ICS-CERT has the final say, but based on what I saw, they really do prefer for us to reach a consensus first.

Breaking Down the Players & Their Play

Jon:

What percentage of these vulnerabilities are discovered by people like you working for cybersecurity companies?

Elad:

This is nothing more than an educated guess, but I'd say probably somewhere in the area of 90%.

Jon:

What percentage of these vulnerabilities are discovered by volunteer white hat hackers?

Elad:

Well, I'd put the remainder in this category. So let's say 10%.

Jon:

Is there a reward system in place for vulnerability discoveries by white hat hackers?

Elad:

Mature vendors are moving much more in this direction and have teams that can be contacted by white hats, sometimes even offering rewards.

BD, for example, encourages third-parties to report potential security issues with its devices.

I've also heard of companies holding hackathons for their own products and services — open to the public.  

A Little Perspective

Jon:

After a disclosure is issued, can the matter be considered solved or do threats remain around that particular vulnerability?

Elad:

Not by a long shot. We know that some facilities continue using vulnerable devices without implementing patches or other remedies. There's a whole lifecycle to vulnerabilities, and disclosure really only takes them to the second stage of that lifecycle.


Jon:

In this respect, aren’t we just doing the work of black hat hackers for them, telling them what to target?

Elad:

That's a very cynical question so I'm going to give it a very diplomatic answer: yes and no.

The truth is that the relevant market dynamics are not totally transparent and there's a lot we don't know when it comes to evaluating any given vulnerability. There will always be stragglers when it comes to adopting best practices and implementing the latest patches, so in this sense, we do in a way turn them into very low-hanging fruit for cyber criminals.

But at the same time, we put the "good guys" back in the position of control. If they want to run a tight ship, they can — and we make them safer. If they don't want to, the blame really lies with them.

Think about it like this: sometimes we tell the "bad guys" what to target, but we always tell the "good guys" how to stop the bad guys. Often, we’re telling users where to look when bad actors already know where to look.

You're right that there’s a risk involved, but it’s necessary in order to move forward and advance cybersecurity.

Responsible actors take this information and leverage it to improve their operations. If they ignore it, they put themselves in peril, but it is not really fair to blame the disclosure process for that. Sunlight really is the best disinfectant.

Jon:

Do you think worldwide healthcare is safer the day before or the day after a zero-day vulnerability is disclosed?

Elad:

The day after. No doubt. Patches are normally built into the disclosure. And taking a broader view, it definitely leads to smarter product development and security protocols going forward.

You have to understand that absence of proof is not proof of absence. So the fact that we never heard of an attack based on a given vulnerability in no way means that it never happened. We can't always know, let alone measure, the positive impact made by disclosures.

Jon:

Do you think the responsible disclosure model could be improved by replacing public disclosures with direct-to-facility disclosures? Security professionals like yourself would also be able to apply for discreet access to information after being properly vetted.

Elad:

No. I think the more transparency, the better. Like I said, sunlight is the best disinfectant. There are many advantages to public disclosure:

  • It raises awareness and bolsters education efforts.
  • It puts pressure on vendors to fix the vulnerability.
  • It lets users of devices that were purchased second-hand remain informed. These users are much more likely to be omitted from a direct-to-facility disclosure model.
  • It also forces hospital administrators to act more responsibly, more quickly.

Of course, keeping the whole disclosure on the down-low wouldn’t really solve the problem either because, in addition to forfeiting the advantages I just mentioned, direct-to-facility disclosures would themselves be a prime target for hacking attacks.

So you'd have ignorant employees, lazy administrators, lazy device manufacturers, some users left totally in the dark, and a potentially rampaging zero-day vulnerability in the wild. 

Looking Forward

Jon:

Right now, who is winning the cyber battle for the future of healthcare: the good guys or the bad guys?

Elad:

I'm sorry to say it, but as things stand now, the bad guys are winning. The industry right now is so far behind global standards for security best practices and defense. Clinical assets are vulnerable in practically every hospital that doesn’t have a devoted solution for their security, and the threat is truly horrifying. Imagine a compromised ventilator when you’re hospitalized!

Jon:

Who do you expect to win in the long term?

Elad:

The good guys, of course. Although the industry is in a bad position, there’s a really encouraging atmosphere emerging and decision makers are beginning to show a readiness and willingness to close the gap.

I look at the CyberMDX team and product and I am confident that we and others like us will be there to turn the tide.



To learn more about how to protect your clinical assets and stay one step ahead of medical device vulnerabilities — known and unknown alike — click here >>
