---
title: Cybersecurity Patch Management for Medical Devices
description: How cybersecurity patch management works under MDR, including change control, significant change, and when a patch triggers notified body notification.
authors: Tibor Zechmeister, Felix Lenhard
category: IVDR & In Vitro Diagnostics
primary_keyword: cybersecurity patch management medical device MDR
canonical_url: https://zechmeister-solutions.com/en/blog/cybersecurity-patch-management-mdr
source: zechmeister-solutions.com
license: All rights reserved. Content may be cited with attribution and a link to the canonical URL.
---

# Cybersecurity Patch Management for Medical Devices

*By Tibor Zechmeister (EU MDR Expert, Notified Body Lead Auditor) and Felix Lenhard.*

> **Cybersecurity patches on a medical device are not IT housekeeping. They are regulated software changes under EN 62304, controlled through the design change procedure required by EN ISO 13485 clause 7.3.9, and assessed against the significant-change rules tied to MDR Article 120 before they ship.**


## TL;DR
- Every cybersecurity patch on a CE-marked medical device is a software change and must run through the design change procedure under EN ISO 13485:2016+A11:2021 clause 7.3.9.
- EN 62304:2006+A1:2015 §6 (software maintenance process) and §8 (configuration management) govern how the patch is planned, analysed, verified, and released.
- EN IEC 81001-5-1:2022 §5.7 (security updates) and §9 (post-market plan) set the expected lifecycle behaviour for security patches.
- Whether a patch counts as a "significant change" in the sense of MDR Article 120(3a) determines if legacy-device status is preserved or lost.
- A purely corrective security patch that restores the original safety and performance of the device usually is not a significant change, but the assessment must be documented, signed, and kept in the technical documentation.
- The notified body does not need to approve every patch. It needs to see a patching process it can trust and evidence that individual patches followed it.

## Why cybersecurity patch management is a regulatory activity

Founders often treat security patches the way web teams treat them: ship fast, keep users safe, move on. On a medical device, that instinct is half right. Speed matters because an unpatched vulnerability in the field is a live risk to patients. But on a regulated product, the speed has to run on rails that a notified body auditor can inspect afterwards.

Tibor has seen this play out at surveillance audits many times. In one case, a software library used inside a medical device had a publicly exploited vulnerability. The manufacturer took several weeks to notice. They eventually patched it through a software change and change control. There was no patient harm, but the window was open long enough that something could have happened. The audit conversation afterwards was not about the patch itself. It was about why the process had not detected the CVE earlier and why the change control record did not contain a proper security risk reassessment.

That is the real pattern. Notified bodies rarely fault a startup for releasing a patch. They fault the startup for releasing a patch without the paper trail showing it was handled like a regulated change.

## What MDR and the harmonised standards actually say

MDR Article 10 places the general obligation on the manufacturer to put in place a quality management system and to ensure that serial production remains in conformity with the regulation. Annex I §17.2 requires software to be developed and manufactured in accordance with the state of the art, taking into account the principles of development lifecycle, risk management, including information security, verification, and validation. Annex I §17.4 adds that manufacturers shall set out minimum requirements concerning IT security measures, including protection against unauthorised access.

None of those clauses say "patch management" by name. They do not need to. The presumption of conformity comes from the harmonised standards the MDR references. That is where the operational detail lives.

**EN ISO 13485:2016+A11:2021 clause 7.3.9** requires that changes to design and development are identified, reviewed, verified, validated as appropriate, and approved before implementation. It also requires the review of changes to include evaluation of the effect of the changes on constituent parts and products already delivered. A security patch is a design change. Clause 7.3.9 is the door every patch walks through.

**EN 62304:2006+A1:2015 §6** defines the software maintenance process. It splits maintenance into problem and modification analysis (§6.2), modification implementation (§6.3), and release. Problem reports, including security problem reports, have to be evaluated, assigned to a change request, and traced through verification to release. §8 (configuration management) is what lets the manufacturer prove exactly which version of which component is in which device at which time. Without it, patch management collapses.

**EN IEC 81001-5-1:2022 §5.7** addresses security updates explicitly. It requires the manufacturer to have a process for security update management, including identification, development, verification, and distribution of security updates, and for communication to operators and users. §9 (post-market plan for security) requires monitoring for new vulnerabilities and timely response.

**MDR Article 120(3a)**, as amended by Regulation (EU) 2023/607, preserves legacy-device status under the former Directives only if there is no significant change in the design or intended purpose. The extended transitional provisions mean that many startups still carry devices under Article 120. A poorly classified cybersecurity patch can remove that protection.

## A worked example: CVE-2025-XXXX in a Class IIa connected device

Consider a Class IIa wearable ECG patch with a smartphone companion app. The app uses a third-party Bluetooth library. A CVE is published against that library with a CVSS score of 8.2. Exploit code is public. The fix is an upstream library version bump.

Here is how Tibor would run this through change control on a Felix-style lean team.

**Step 1: detection and problem report (same day).** The SBOM-driven CVE feed triggers an automatic alert. A software problem report is opened under the EN 62304 §6.2 process. Severity, exploitability, and relevance to the device are analysed. The library is used for data transfer only, not for authentication, so the effective exposure is narrower than the generic CVSS suggests. The analysis is documented. This step cannot be skipped even if the fix is obvious.
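For teams wiring up the detection step, the core of an SBOM-driven CVE feed is a plain match of components against advisories. Everything below is illustrative: real pipelines use CycloneDX or SPDX SBOMs, package URLs, and feeds such as NVD or OSV, and the component names and versions here are invented.

```python
# Minimal sketch of SBOM-against-feed CVE matching, with simplified data
# shapes (component name + version; advisory with affected versions).

def affected_components(sbom, advisories):
    """Return (component, advisory) pairs where an SBOM entry matches
    an advisory's component name and affected-version list."""
    hits = []
    for comp in sbom:
        for adv in advisories:
            if (comp["name"] == adv["component"]
                    and comp["version"] in adv["affected_versions"]):
                hits.append((comp, adv))
    return hits

# Hypothetical data mirroring the worked example above.
sbom = [
    {"name": "acme-ble", "version": "2.4.1"},  # third-party Bluetooth library
    {"name": "tls-lib", "version": "1.9.0"},
]
advisories = [
    {"cve": "CVE-2025-XXXX", "component": "acme-ble",
     "affected_versions": ["2.4.0", "2.4.1"], "cvss": 8.2},
]

for comp, adv in affected_components(sbom, advisories):
    # Each hit opens an EN 62304 §6.2 problem report; the analysis stays manual.
    print(f"open problem report: {adv['cve']} affects {comp['name']} {comp['version']}")
```

The automated part ends at "open problem report". The severity, exploitability, and relevance analysis that follows is human work.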

**Step 2: security risk reassessment.** The cybersecurity risk file, which was built under EN ISO 14971:2019+A11:2021 and EN IEC 81001-5-1 §5, is reopened. The threat model is re-run for the vulnerable function. The residual risk with the current version is recalculated. If it is above the acceptance criteria, a change is mandatory.
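The residual-risk gate at the end of this step can be expressed as a tiny check. The probability and severity scales and the acceptance threshold below are invented for illustration; a real EN ISO 14971 risk file uses the manufacturer's own scales and documented acceptance criteria.

```python
# Illustrative residual-risk gate: if the recalculated risk exceeds the
# acceptance criteria, the change stops being discretionary.

ACCEPTANCE_THRESHOLD = 6  # placeholder: P x S above this is unacceptable

def residual_risk(probability, severity):
    return probability * severity

def change_mandatory(probability, severity):
    """True when the recalculated residual risk exceeds the acceptance
    criteria, which makes the patch mandatory rather than discretionary."""
    return residual_risk(probability, severity) > ACCEPTANCE_THRESHOLD

# A public exploit raises the probability of occurrence for the affected path.
print(change_mandatory(probability=4, severity=2))  # 8 > 6: change is mandatory
```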

**Step 3: change classification against MDR Article 120.** The team asks the question the notified body will ask at the next audit: is this a significant change in design or intended purpose? For a pure security patch that restores the original safety and performance of the device, MDCG 2020-3 (Guidance on significant changes under Article 120) indicates that corrective actions, including cybersecurity corrections, typically do not constitute significant changes when the intended purpose, fundamental design, and performance remain unchanged. The classification is documented with a reference to the exact MDCG 2020-3 flowchart and sub-section used. If the classification is "not significant", the device keeps its legacy or current certificate scope.

**Step 4: modification implementation.** The library is updated. The affected software items are re-verified under EN 62304 §5.6 and §5.7. Integration and system tests that cover the Bluetooth data path are re-run. The SBOM is regenerated. Configuration management (§8) locks the new software version.

**Step 5: release and distribution.** The patch is released through EN 62304 §5.8. Under EN IEC 81001-5-1 §5.7, the user-facing communication is prepared: a security advisory that names the CVE, describes the risk in operator-readable terms, and tells the operator how to verify the patched version.

**Step 6: post-market documentation.** The change, the risk reassessment, the verification evidence, and the significant-change classification all land in the technical documentation and the post-market surveillance file.

Time on a disciplined lean team: roughly five to ten working days for a low-complexity patch like this one. The bottleneck is almost never the code. It is the risk reassessment and the paperwork trail.

## The Subtract to Ship playbook for patch management

Felix's advice to the 44 startups he has coached is consistent: do not invent a parallel patch process. Reuse the one the QMS already has.

**1. Make the patch process one branch of the software change process, not a separate system.** Clause 7.3.9 already describes design change. EN 62304 §6 already describes software maintenance. Write the patch management SOP as a specialisation of those, not as a new document. One process, one audit trail.

**2. Define a patch severity matrix upfront.** Three tiers is enough for most startups: critical (exploit in the wild, patient harm plausible), high (no known exploit, vulnerability in a safety-relevant path), and routine (hygiene, not safety-relevant). The matrix maps directly onto turnaround times. Critical patches are hours to days. Routine patches batch into the next release. Put the matrix in the SOP so the team is not deciding under pressure what "critical" means.
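A matrix like this can live as data next to the SOP so tooling and humans read the same thing. The tier names follow the bullet above; the turnaround numbers are placeholders for illustration, not recommendations.

```python
# Sketch of a three-tier patch severity matrix as data. The real numbers
# belong in the SOP and the post-market surveillance plan.

SEVERITY_MATRIX = {
    "critical": {  # exploit in the wild, patient harm plausible
        "max_days_to_release": 3,
        "batch_allowed": False,
    },
    "high": {      # no known exploit, vulnerability in a safety-relevant path
        "max_days_to_release": 10,
        "batch_allowed": False,
    },
    "routine": {   # hygiene, not safety-relevant
        "max_days_to_release": None,  # rides with the next planned release
        "batch_allowed": True,
    },
}

def turnaround_target(tier):
    """Look up the committed turnaround for a severity tier; reject unknowns
    so nobody invents a fourth tier under pressure."""
    if tier not in SEVERITY_MATRIX:
        raise ValueError(f"unknown severity tier: {tier}")
    return SEVERITY_MATRIX[tier]["max_days_to_release"]
```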

**3. Pre-wire the significant-change decision.** Build a one-page decision record. It lists the MDCG 2020-3 flowchart steps and the answers for this specific patch. This is the document the notified body will ask for at the next surveillance audit. If it exists and is signed before release, the conversation is easy. If it does not exist, the auditor has to reconstruct the reasoning on the spot, and that rarely goes well.
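The decision record can be checked for completeness automatically before release, even though the answers themselves are human work. The field names below are illustrative, not a prescribed MDCG 2020-3 layout; the substance of each answer comes from the flowchart steps.

```python
# Sketch of the one-page significant-change decision record as structured
# data, with a completeness gate that blocks release on an unsigned record.

REQUIRED_FIELDS = (
    "patch_id", "cve", "mdcg_2020_3_chart_section",
    "intended_purpose_changed", "design_changed", "performance_changed",
    "conclusion", "approver", "approval_date",
)

def is_release_ready(record):
    """Release-ready only when every field is filled, including the
    signature fields (approver + date), before the patch ships."""
    return all(record.get(f) not in (None, "") for f in REQUIRED_FIELDS)

record = {
    "patch_id": "PATCH-042",
    "cve": "CVE-2025-XXXX",
    "mdcg_2020_3_chart_section": "software chart, corrective security fix",
    "intended_purpose_changed": "no",
    "design_changed": "no",
    "performance_changed": "no",
    "conclusion": "not significant",
    "approver": None,       # unsigned: must block release
    "approval_date": None,
}
print(is_release_ready(record))  # unsigned record is not release-ready
```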

**4. Automate what is safe to automate.** CVE monitoring against the SBOM is an automated pipeline. Dependabot-style pull requests into the development branch are fine. What stays manual is the EN 62304 §6.2 analysis, the risk reassessment, the significant-change classification, and the release approval. The signatures that protect the patient are human signatures.

**5. Keep the communication template ready.** EN IEC 81001-5-1 §5.7 expects operator communication for security updates. Draft a security advisory template now, not in the middle of a CVE. Fields: CVE identifier, affected device versions, severity, recommended action, verification instructions, contact. A good template turns a scramble into a fill-in.
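A template of this kind is a few lines of code with a guard that refuses to render an incomplete advisory. The field names mirror the list above; the layout and wording are illustrative.

```python
# Sketch of a fill-in security advisory template with a completeness guard.

ADVISORY_FIELDS = ("cve", "affected_versions", "severity",
                   "recommended_action", "verification", "contact")

ADVISORY_TEMPLATE = """\
SECURITY ADVISORY
CVE identifier:     {cve}
Affected versions:  {affected_versions}
Severity:           {severity}
Recommended action: {recommended_action}
Verification:       {verification}
Contact:            {contact}
"""

def render_advisory(**fields):
    """Render the advisory, refusing to produce one with empty fields."""
    missing = [k for k in ADVISORY_FIELDS if not fields.get(k)]
    if missing:
        raise ValueError(f"advisory incomplete, missing: {missing}")
    return ADVISORY_TEMPLATE.format(**fields)
```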

**6. Track every patch in one register.** Date, CVE or problem report, classification, release version, significant-change decision, PMS linkage. The register is the single page that answers "show me your patching discipline" at an audit. Tibor has never seen a startup regret keeping one, and he has seen many regret not keeping one.
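The register itself can be as simple as a list of flat records plus the one query an auditor actually runs. The column names mirror the bullet above; the entries are invented for illustration.

```python
# Sketch of the patch register and its audit view: one line per patch,
# in chronological order, answering "show me your patching discipline".

register = [
    {"date": "2025-03-04", "cve": "CVE-2025-XXXX", "classification": "critical",
     "release_version": "3.2.1", "significant_change": "no", "pms_ref": "PMS-2025-07"},
    {"date": "2025-05-12", "cve": None, "classification": "routine",
     "release_version": "3.3.0", "significant_change": "no", "pms_ref": "PMS-2025-11"},
]

def audit_view(register):
    """One line per patch, sorted by date (ISO dates sort lexicographically)."""
    lines = []
    for r in sorted(register, key=lambda r: r["date"]):
        lines.append(f"{r['date']}  {r['cve'] or 'internal'}  "
                     f"{r['classification']}  v{r['release_version']}  "
                     f"significant={r['significant_change']}  {r['pms_ref']}")
    return "\n".join(lines)

print(audit_view(register))
```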

## Reality Check

1. Can you name every open CVE against the software components in your current release, today, without opening a ticket?
2. When was the last time your team ran a cybersecurity patch end-to-end through the EN ISO 13485 clause 7.3.9 design change process, with the record to show for it?
3. Do you have a one-page significant-change decision record template tied to MDCG 2020-3 for every patch release?
4. Is your SBOM regenerated automatically at build time and stored alongside the release artefact, or is it a manual export somebody might forget?
5. If a critical CVE dropped against a dependency tomorrow, how many days would elapse before the patch shipped with full documentation? Measure, do not estimate.
6. Does your post-market surveillance plan name cybersecurity vulnerability monitoring as a data source under MDR Article 83?
7. Can your release process distinguish a security patch from a feature release in the change history, and does the technical documentation reflect that distinction?

## Frequently Asked Questions

**Do we need notified body approval for every security patch?**
No. The notified body approves the QMS and the design change procedure, not each individual patch. What the notified body does inspect at surveillance audits is the evidence that the patching process was followed and that each patch was correctly classified against MDR Article 120 and MDCG 2020-3.

**When does a cybersecurity patch become a significant change?**
When it changes the intended purpose, the fundamental design, or the performance characteristics of the device. A corrective patch that restores the original performance and safety, without adding new features or new clinical claims, typically is not significant. The assessment must be documented under the significant-change guidance in MDCG 2020-3.

**How fast is "fast enough" for a critical patch?**
MDR does not specify hours or days. EN IEC 81001-5-1:2022 §9 requires a timely response. A reasonable internal target is days for a critical, exploitable vulnerability with known exposure, and the post-market surveillance plan should name an upper bound the team commits to. Notified bodies will ask for the target and then ask for the actual times.

**What if the patch fixes a vulnerability in SOUP we cannot verify end-to-end?**
EN 62304 §8.1.2 (software of unknown provenance) still applies. You verify what you can, document what you cannot, and add compensating controls. A pentest result after the patch is often the external evidence that closes the loop for the notified body.

**Do we have to notify users for every patch?**
For security patches with operator-relevant impact, yes. EN IEC 81001-5-1 §5.7 and §9 expect communication to operators and users. Hygiene patches with no operator impact can be communicated in batch release notes.

**Does patching a legacy Article 120 device risk its transitional status?**
Only if the patch is a significant change. Pure corrective security patches usually are not. The risk is not the patch itself. The risk is failing to document the significant-change decision, which leaves the question open at the next audit.

## Related reading
- [The Software Bill of Materials (SBOM) for Cybersecurity](/blog/sbom-cybersecurity-vulnerability-tracking) for the foundation that makes CVE detection possible at all.
- [Cybersecurity Post-Market Surveillance](/blog/cybersecurity-post-market-surveillance) for how CVE monitoring integrates into MDR Articles 83 to 86.
- [MDR Software Maintenance under EN 62304](/blog/mdr-software-maintenance-iec-62304) for the broader software maintenance process that patch management lives inside.
- [SBOM for Medical Devices under MDR](/blog/sbom-medical-devices-mdr) for the regulatory framing of SBOMs as a deliverable.
- [Post-Market Surveillance for AI Devices](/blog/post-market-surveillance-ai-devices) for adjacent PMS patterns when the device has AI components alongside security concerns.

## Sources
1. Regulation (EU) 2017/745 on medical devices, consolidated text. Article 10, Article 120, Annex I §17.2, §17.4.
2. Regulation (EU) 2023/607 amending the transitional provisions of Regulation (EU) 2017/745.
3. EN ISO 13485:2016+A11:2021, Medical devices, Quality management systems, clause 7.3.9.
4. EN 62304:2006+A1:2015, Medical device software, software lifecycle processes, §5.6, §5.7, §5.8, §6, §8.
5. EN IEC 81001-5-1:2022, Health software and health IT systems safety, effectiveness and security, §5.7, §9.
6. MDCG 2019-16 Rev.1, Guidance on cybersecurity for medical devices.
7. MDCG 2020-3, Guidance on significant changes regarding the transitional provision under Article 120 of the MDR.

---

*This post is part of the [IVDR & In Vitro Diagnostics](https://zechmeister-solutions.com/en/blog/category/ivdr) cluster in the [Subtract to Ship: MDR Blog](https://zechmeister-solutions.com/en/blog). For EU MDR certification consulting, see [zechmeister-solutions.com](https://zechmeister-solutions.com).*
