Coordinated vulnerability disclosure (CVD) is the documented process a manufacturer uses to receive, triage, fix, and publicly disclose security vulnerabilities in a medical device. Under MDR Annex I Sections 17.2 and 17.4, and the interpretation given in MDCG 2019-16 Rev.1, a CVD policy is no longer optional for any connected or software-containing device. It is how a notified body sees that a manufacturer takes post-market cybersecurity seriously.

By Tibor Zechmeister and Felix Lenhard.

TL;DR

  • MDR Annex I Section 17.2 requires software to be developed and manufactured in accordance with the state of the art, taking into account information security, and Section 17.4 requires manufacturers to set out minimum requirements concerning IT security measures.
  • MDCG 2019-16 Rev.1 interprets these obligations and explicitly points to vulnerability handling and disclosure as part of the security lifecycle a manufacturer must operate.
  • EN IEC 81001-5-1:2022 names vulnerability disclosure and incident response as required activities in the health software security lifecycle.
  • A working CVD programme needs a policy, a reachable contact, a triage process, service level objectives for response, a patch and release path, and a public advisory.
  • A security.txt file and a single published security email address are the cheapest possible entry points and the ones notified bodies increasingly look for.
  • Tibor has seen a library CVE go unnoticed in a deployed device for several weeks because nobody on the team was watching. A CVD process exists to close that window.

Why coordinated vulnerability disclosure matters

There is a pattern Tibor has seen repeatedly at first notified body engagements. A startup ships a connected wearable, the device passes its pentest, the CE mark is issued, and then nothing. No inbox. No policy. No triage workflow. A security researcher finds a bug six months later, cannot locate a contact, and either posts it publicly or walks away. Both outcomes are bad. One becomes a vigilance case. The other leaves the device vulnerable in the field with nobody watching.

Coordinated vulnerability disclosure is the answer to both failure modes. It gives researchers a way in, gives the manufacturer a way to triage what comes through the door, and gives patients a predictable path to a patched device. It is not a marketing exercise. It is a required post-market cybersecurity activity for any device that contains software or connects to a network.

Felix has coached founders who believed CVD was "what big hospitals do". In reality, it is what any manufacturer of a software-containing medical device has to do if they expect to pass a post-market surveillance review under MDR. The cost is small. The absence is expensive.

What MDR actually says

MDR Annex I Section 17.2 requires that software be developed and manufactured in accordance with the state of the art, taking into account the principles of the development lifecycle, risk management including information security, verification and validation. MDR Annex I Section 17.4 further requires manufacturers to set out minimum requirements concerning hardware, IT network characteristics, and IT security measures including protection against unauthorised access.

MDR itself does not use the phrase "coordinated vulnerability disclosure". The phrase lives one layer up, in the authoritative interpretation. MDCG 2019-16 Rev.1 (December 2019, Rev.1 July 2020) is the MDCG guidance on cybersecurity for medical devices. It interprets the Annex I obligations, walks through the security lifecycle, and explicitly treats vulnerability handling and communication as part of what manufacturers must operate across the lifetime of the device.

EN IEC 81001-5-1:2022 is the current state-of-the-art standard for health software security activities in the product lifecycle. It defines lifecycle processes that map to the MDCG 2019-16 expectations. Vulnerability disclosure is one of them. A manufacturer who claims compliance with 81001-5-1 but has no disclosure channel is not compliant with the standard they cited.

Finally, MDR Article 83 requires every manufacturer to operate a post-market surveillance system. Article 87 requires reporting of serious incidents. A vulnerability that has led or might lead to serious harm crosses the bridge from cybersecurity into vigilance and becomes a reportable event under Articles 87 to 92. CVD is the process that detects these events early enough to handle them.

A worked example

A Class IIa wearable streams physiological data over Bluetooth Low Energy to a companion app. Eighteen months after CE marking, a university researcher notices that the pairing process accepts a default PIN and sends her findings to the manufacturer. Here is what a working CVD process does with that report.

Day 0, hour 0. The researcher visits zechmeister-solutions.com/.well-known/security.txt (the filename is illustrative) and finds a security email and a PGP key. She writes a structured report and encrypts it.

Day 0, hour 4. The person on security duty acknowledges receipt within four hours. The service level objective is published in the CVD policy. The researcher now knows she has been heard.

Day 1. A triage meeting runs the report through a simple rubric: is it reproducible, is it exploitable, what is the impact on patient safety, is patient data exposed, and how much of the device population is affected. The rubric produces a CVSS-style score and a safety impact classification. The bug scores high on exploitability and medium on safety impact.

Day 3. The bug enters the ISO 14971 risk file as a new hazard, anchored to the cybersecurity section, and is evaluated as a risk control change. EN 62304 change control procedures are triggered. A software change request opens in the configuration management system.

Day 14. A patch is developed, unit tested, and regression tested. The SBOM is checked for other instances of the affected library. An updated version is built.

Day 21. The notified body is informed of the change via the standard change notification route. Because the patch addresses a security issue with safety implications, the manufacturer evaluates whether Articles 87 to 92 vigilance reporting applies. In this case no incident has occurred in the field yet, so the evaluation is logged, the competent authority is informed proactively, and the patch ships.

Day 30. A public security advisory is published. It credits the researcher (with her permission), describes the class of vulnerability without publishing an exploit, gives the affected versions, and confirms the fixed version. The SBOM is updated. The PMS report picks up the event for the next PSUR.

Total elapsed time: 30 days from report to disclosure. No patient harm. The notified body sees a complete record at the next surveillance audit. The researcher tells her peers that the manufacturer is credible. Over time this reputation pays for itself.

The Subtract to Ship playbook

A CVD programme for a funded startup can be built in a week by anyone who treats it as a process rather than a product. Here is the minimum viable version, traceable at every step to MDR Annex I Sections 17.2 and 17.4, MDCG 2019-16 Rev.1, and EN IEC 81001-5-1:2022.

Step 1. Publish a reachable contact. A dedicated security email (for example security@yourcompany) that routes to more than one person. Add a PGP key if you can. Publish the address on your website footer and in the device IFU. This single step resolves the most common failure Tibor sees at audits: no contact, no inbound channel, no programme.

Step 2. Add a security.txt file. The security.txt convention (RFC 9116) is a simple text file at /.well-known/security.txt that lists contact, encryption, policy, and acknowledgements URLs. It costs nothing. Notified body auditors with a cybersecurity focus are beginning to look for it as a signal of intent.
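For reference, a minimal file following the RFC 9116 convention might look like this. The domain, addresses, and URLs are placeholders; Expires is a mandatory field under the RFC and should be kept no more than a year out.

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:00:00.000Z
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/security
Acknowledgments: https://example.com/security/acknowledgments
Preferred-Languages: en
```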

Step 3. Write a CVD policy. One page. Plain language. State scope (which products and versions are in scope), how to report, what the manufacturer will do on receipt, acknowledgement timing, triage timing, fix timing targets, coordinated disclosure expectations, and a safe-harbour clause for good-faith researchers. Publish it under /security or /vulnerability-disclosure.
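An illustrative one-page skeleton, covering the elements above, might look like this. The product names and timings are placeholder examples, not mandated values.

```text
Vulnerability Disclosure Policy -- ExampleMed GmbH (illustrative)

Scope:         Product X firmware 2.x and later; companion app 3.x
How to report: security@example.com (PGP key at /pgp-key.txt)
What we do:    acknowledge within 1 business day; triage within
               3 business days; fix targets by severity
               (e.g. critical: 30 days)
Disclosure:    coordinated; public advisory once a fix is available
Safe harbour:  good-faith research within this scope will not be
               pursued legally; credit given with reporter consent
```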

Step 4. Define the triage rubric. A simple spreadsheet is fine. Columns: reproducibility, exploitability, patient safety impact, data exposure, device population, CVSS score, decision. Every incoming report goes through this same rubric. The output drives whether the report becomes an engineering change, a risk file update, a vigilance evaluation, or all three.
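The rubric can be sketched in code as well as in a spreadsheet. This is a minimal illustration only: the scales, thresholds, and action names below are assumptions for the example, not values prescribed by MDCG 2019-16 Rev.1 or EN IEC 81001-5-1.

```python
from dataclasses import dataclass

# Illustrative triage rubric. Scales and thresholds are assumptions
# for this sketch; a real rubric must come from the manufacturer's
# own documented procedure.

@dataclass
class Report:
    reproducible: bool
    exploitability: int      # 0 (none) .. 3 (trivially exploitable)
    safety_impact: int       # 0 (none) .. 3 (serious harm possible)
    data_exposure: int       # 0 (none) .. 3 (patient data exposed)
    population_share: float  # fraction of fielded devices affected
    cvss: float              # CVSS base score, 0.0 .. 10.0

def triage(r: Report) -> list[str]:
    """Return the downstream actions a report triggers."""
    if not r.reproducible:
        return ["request more information from reporter"]
    # Every confirmed vulnerability feeds the regulated processes.
    actions = ["risk file update (ISO 14971)", "change request (EN 62304)"]
    # Meaningful safety impact triggers the vigilance bridge.
    if r.safety_impact >= 2:
        actions.append("vigilance evaluation (MDR Art. 87)")
    # Severe or widespread issues get an expedited patch target.
    if r.cvss >= 9.0 or (r.exploitability == 3 and r.population_share > 0.5):
        actions.append("expedited patch (critical SLO)")
    return actions
```

Run against the default-PIN bug from the worked example (high exploitability, medium safety impact, whole population affected), the function returns all three downstream actions plus the expedited patch target, which matches the timeline in the worked example.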

Step 5. Wire CVD into ISO 14971 and EN 62304. Every confirmed vulnerability becomes an entry in the risk file and a change request in the software configuration management system. This is where most startups fail: they triage the report but never feed it back into the regulated processes. Notified bodies will ask to see the thread from report to risk file update to patch release to PMS record.

Step 6. Define the vigilance bridge. Write down, in advance, the criteria that turn a cybersecurity incident into an MDR Article 87 serious incident. Who makes the call. What the timeline is. Who files the MIR. This is the only way a team under stress will not freeze when a real incident arrives.
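Writing the criteria down in advance can be as literal as a decision function the team agrees on before any incident occurs. The questions and escalation rule below are illustrative assumptions; the actual criteria belong in the manufacturer's own vigilance procedure.

```python
# Sketch of a pre-agreed vigilance decision aid for MDR Article 87.
# Inputs and outcomes are illustrative, not a substitute for the
# manufacturer's documented vigilance procedure.

def vigilance_decision(has_caused_serious_harm: bool,
                       might_lead_to_serious_harm: bool,
                       exploitable_in_the_field: bool) -> str:
    """Return the pre-agreed next step for a confirmed vulnerability."""
    if has_caused_serious_harm:
        return "file MIR now (serious incident has occurred)"
    if might_lead_to_serious_harm and exploitable_in_the_field:
        return "run formal vigilance evaluation and document the outcome"
    return "log the evaluation in the PMS record; no MIR"
```

The point is not the code itself but that the branches, the owner of the call, and the filing timeline are decided and written down before a team under stress has to make the call.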

Step 7. Test it. Run a tabletop exercise once a year. One person plays the researcher. The team follows the real process. Record where it breaks. Fix those places. A policy that has never been rehearsed is decorative.

Step 8. Publish advisories. When a vulnerability is fixed, publish a human-readable advisory. Short, honest, and specific to affected versions. The advisory is also part of the PMS evidence base under MDR Article 83.

Reality Check

Ask these questions of your own CVD programme. Honest answers surface the gap between policy and practice.

  1. Does a researcher who searches for your company's security contact find one within 60 seconds?
  2. If a report arrived this afternoon, who would acknowledge it by end of day, and is that person on holiday this week?
  3. Do you have a written triage rubric, or does every report get evaluated from scratch?
  4. When was the last time a vulnerability report resulted in a documented update to the ISO 14971 risk file?
  5. Is your SBOM current enough that you could search it for a newly announced CVE within one hour?
  6. Have you ever rehearsed the path from vulnerability report to MDR Article 87 serious incident report, even on paper?
  7. Does your notified body know that you operate a CVD programme, and have they seen evidence of it at a surveillance audit?
  8. If a patch shipped tomorrow, would the fielded device population actually receive it, and how would you know?

Frequently Asked Questions

Is a CVD policy required by MDR for a Class I software device? MDR itself does not name CVD by that label, but Annex I Sections 17.2 and 17.4 apply to any software-containing device regardless of class, and MDCG 2019-16 Rev.1 interprets those sections as requiring vulnerability handling and communication. In practice, a notified body looking at a Class IIa or higher software device will expect evidence of a CVD process. For Class I software, the expectation is lighter but growing.

Can a small startup outsource CVD? Partially. Triage and engineering response cannot be outsourced in full because they touch the regulated ISO 14971 and EN 62304 processes. What can be outsourced is the inbound channel: a managed disclosure service or a security partner who screens reports before handing them over. The manufacturer still owns the outcome.

What is security.txt and is it mandatory? security.txt is a plain text file at /.well-known/security.txt defined by RFC 9116. It is not legally mandatory under MDR. It is a low-cost signal that the manufacturer is reachable, and in Tibor's experience it is the first thing a cybersecurity-aware auditor checks on a manufacturer website.

How fast must a vulnerability be patched? There is no universal clock. The policy should state target timelines (for example, critical vulnerabilities within 30 days). What matters to a notified body is that the stated timeline is realistic, tracked, and met. A missed target with a documented reason is acceptable. A missed target with no record is not.

When does a vulnerability become a vigilance case? When it has caused or could cause serious harm, it falls under MDR Articles 87 to 92. The CVD triage rubric should include a safety impact column that triggers a formal vigilance evaluation for any high-impact finding. This is the bridge between the cybersecurity process and the regulated reporting obligations.

What about researcher compensation and bug bounties? Optional. A safe-harbour clause in the CVD policy is more important than a bounty. Good faith researchers want to be credited and not sued. Bounties are a nice-to-have that larger manufacturers add later.

Sources

  1. Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I Sections 17.2 and 17.4, Article 83, Articles 87 to 92.
  2. MDCG 2019-16 Rev.1, Guidance on Cybersecurity for Medical Devices (December 2019, Rev.1 July 2020).
  3. EN IEC 81001-5-1:2022, Health software and health IT systems safety, effectiveness and security, Part 5-1: Security, Activities in the product life cycle.
  4. EN 62304:2006+A1:2015, Medical device software, Software life cycle processes.
  5. EN ISO 14971:2019+A11:2021, Medical devices, Application of risk management to medical devices.