A cybersecurity incident response (IR) plan for a medical device manufacturer is the written procedure that takes the team from "something is wrong" to "the device is safe, patients are protected, the notified body is informed, and the lesson is in the risk file". Under MDR Annex I Sections 17.2 and 17.4 and MDCG 2019-16 Rev.1, an IR plan is part of the state of the art. When an incident causes or could cause serious harm, the plan also has to hand off cleanly to MDR vigilance reporting under Articles 87 to 92.
By Tibor Zechmeister and Felix Lenhard.
TL;DR
- MDR Annex I Sections 17.2 and 17.4 require software-containing devices to be designed and maintained in line with the state of the art for information security.
- MDCG 2019-16 Rev.1 interprets those obligations and treats incident handling as a required lifecycle activity, aligned with EN IEC 81001-5-1:2022.
- A minimum viable IR plan covers six phases: preparation, detection, containment, eradication, recovery, and lessons learned.
- When a cybersecurity incident causes or could cause serious harm or a serious public health threat, it becomes a serious incident under MDR Article 87 and must be reported under Articles 87 to 92 on the regulated timeline.
- Tibor has seen a library CVE sit undetected in a fielded device for several weeks. An IR plan is what shortens that detection window from weeks to hours.
- The IR plan is not a standalone document. It must be wired into the ISO 14971 risk file, the EN 62304 change control process, and the MDR post-market surveillance system.
Why this matters
Tibor has audited startups that could describe their cybersecurity architecture in detail and yet had no written answer to the question "what do you do when you get a security alert on a Sunday night?". That gap is where patients get hurt and notified bodies get nervous. A strong preventive posture without an incident response plan is half a programme.
Felix has seen the same pattern from the coaching side. Founders invest in pentesting, they buy a vulnerability scanner, they watch the dashboards, and then the first real event arrives and the team argues for three hours about who decides whether to pull the device from the field. An IR plan answers those questions in advance, when heads are cool and lawyers are not on the call.
The other reason this matters is regulatory. Under MDR a cybersecurity event is not a private technical matter. If it causes or could have caused serious harm, it has to be reported to the competent authority on a clock that runs regardless of whether the engineering team is ready. The IR plan is how a team meets that clock without losing the engineering response.
What MDR actually says
MDR Annex I Section 17.2 requires that software be developed and manufactured in accordance with the state of the art, taking into account the principles of the development life cycle, risk management (including information security), and verification and validation. MDR Annex I Section 17.4 requires manufacturers to set out minimum requirements concerning hardware, IT network characteristics, and IT security measures, including protection against unauthorised access.
The regulation does not use the phrase "incident response plan". The obligation is implicit: a lifecycle approach to information security cannot exist without a plan for what to do when security fails. MDCG 2019-16 Rev.1 (December 2019, Rev.1 July 2020) makes this explicit in its interpretation and links the obligation to EN IEC 81001-5-1, which in turn names incident response and vulnerability handling as required lifecycle activities.
The vigilance bridge lives in MDR Articles 87 to 92. Article 87 defines the obligation to report serious incidents and field safety corrective actions. A serious incident under MDR Article 2(65) is any incident that directly or indirectly led, might have led, or might lead to the death of a patient, user, or other person, serious deterioration in state of health, or a serious public health threat. A cybersecurity event that meets this definition is a reportable serious incident on the same timelines as any other. Article 88 governs trend reporting. Article 89 governs manufacturer analysis. The IR plan has to deliver the data these articles assume exists.
Finally, MDR Article 83 requires a post-market surveillance system that is proportionate to the risk class and appropriate for the type of device. The IR plan is one of the inputs to that system. Incidents feed the PMS data pool, which feeds the PSUR or PMS report, which feeds the risk file update cycle.
A worked example
A Class IIa connected infusion accessory ships with a cloud backend. On a Sunday afternoon, the cloud operations on-call receives an alert that an unexpected service account is reading patient data logs. Here is how a working IR plan handles the next 72 hours.
Hour 0. The alert reaches the on-call engineer. The IR plan names the on-call rotation, the escalation path, and the decision authority. The engineer opens an incident ticket with a fixed template.
Hour 1. The incident commander is paged. This is one of three named roles that every IR plan needs: incident commander, technical lead, and communications lead. The IC declares a P1 incident and starts a running log.
Hour 2. Containment. The rogue service account is disabled. Network isolation is applied to the affected environment. The IR plan specifies that containment actions must be documented in real time, not reconstructed afterwards.
Hour 6. Assessment. The technical lead confirms the scope: one environment, approximately 40 patient records accessed, no evidence of device-side impact, no evidence of clinical data modification. The clinical safety officer joins the call to make an initial serious-harm assessment.
Hour 10. Vigilance decision. The IR plan has a written rubric that maps incident characteristics to the MDR Article 2(65) serious incident definition. Read-only exposure of patient identifiers without clinical harm is evaluated against the rubric. In this hypothetical the rubric flags a possible serious public health threat because of the potential for patient tracking. The PRRC is called. A serious incident notification is prepared under MDR Article 87 on the 2-day clock that applies to serious public health threats.
Hour 24. Eradication. The compromised credential is rotated. Secrets rotation across the environment is completed. The root cause is traced to an over-permissive IAM role. A code-level fix enters the EN 62304 change control process.
Hour 36. Recovery. The affected environment is restored to normal operation under monitoring. Customer-facing communications go out under the comms lead.
Hour 48. Regulatory. The MDR Article 87 serious incident notification is submitted to the competent authority. The risk file is updated under ISO 14971. The PMS record is opened.
Day 14. Lessons learned. A blameless post-incident review is held. The IR plan is updated based on what broke in the response. Two new detections are added to the monitoring stack. The notified body is informed of the change as part of the next surveillance cycle.
The Subtract to Ship playbook
The IR plan is one document. One. A founder who tries to build a three-binder enterprise IR programme on day one will never finish it. The Subtract to Ship approach is: write the minimum viable plan, test it, and grow it from evidence.
Step 1. Define the six phases explicitly. Preparation (tooling, training, contacts), detection (monitoring, alerts, intake), containment (short term), eradication (remove the cause), recovery (return to normal), and lessons learned (blameless review). Each phase has an owner and a checklist. This structure is standard across EN IEC 81001-5-1 and most security frameworks. Use it because it is recognised.
Step 2. Name three roles. Incident commander, technical lead, and communications lead. Every active incident has these three people. Name primaries and backups. The plan must survive vacation schedules.
Step 3. Build the decision rubric. The hardest moment in a real incident is deciding whether it is a serious incident under MDR Article 2(65). Do not make that decision from scratch at 3 a.m. Build a rubric in advance that maps incident characteristics to the serious-incident definition. Have the PRRC review it. Update it after every real incident.
Step 4. Pre-stage the vigilance hand-off. The plan must specify who calls the PRRC, who drafts the MIR, who submits it, and on what clock. Under MDR Article 87 a serious incident must generally be reported without delay and no later than 15 days after awareness, with shorter clocks for serious public health threats and deaths. Write those clocks into the plan. Do not rely on memory.
Step 5. Wire it into ISO 14971 and EN 62304. Every confirmed incident becomes an input to the risk file. Every remedial code change goes through EN 62304 change control. These are not optional extras. A notified body will audit this trail.
Step 6. Define the PMS hand-off. Incidents, including cybersecurity incidents, feed the PMS system under MDR Article 83. The IR plan must specify how incident data flows into PMS records and how it informs the next PSUR or PMS report.
Step 7. Rehearse it. A tabletop exercise twice a year is the cheapest insurance available. The team runs through a realistic scenario. A neutral facilitator injects surprises. The output is a list of gaps. The gaps close before the next rehearsal. Tibor has never seen a first tabletop run smoothly, and that is the point.
Step 8. Keep a runbook library. For the three most likely incident classes (for example credential compromise, exposed secret in source control, third-party library CVE), keep a one-page runbook with exact commands, exact people to page, exact escalation thresholds. Runbooks are what actually save time at 3 a.m. The plan is the container. The runbooks are the tools.
Reality Check
Use these questions as a self-diagnostic. They are built from the failures Tibor has seen at real audits.
- Can you put your hands on your IR plan in under two minutes, and does it fit on fewer than 20 pages?
- If an alert fired right now, who is the incident commander, and is that person reachable this weekend?
- Do you have a written rubric for deciding whether a cybersecurity event is a serious incident under MDR Article 2(65)?
- Has your PRRC been walked through the vigilance hand-off path at least once?
- When was your last tabletop exercise, and what changed in the plan as a result?
- Is there a named monitoring source for CVEs affecting components in your SBOM?
- After your last real or simulated incident, did the risk file get updated within two weeks?
- Does your notified body know that you operate an IR programme, and have they seen the evidence?
Frequently Asked Questions
Does MDR require a separate incident response plan document? Not by name. What MDR requires, through Annex I Sections 17.2 and 17.4 and the MDCG 2019-16 Rev.1 interpretation, is that a manufacturer operates a lifecycle security process that includes incident handling. A separate document is the practical way to meet that obligation and to evidence it at an audit.
Is every cybersecurity incident reportable under MDR? No. Only incidents that meet the MDR Article 2(65) serious incident definition are reportable under Articles 87 to 92. Low-severity events that do not cause or could not have caused serious harm are handled inside PMS under Article 83. The rubric in the IR plan is what separates the two.
What is the reporting clock? Under MDR Article 87, a serious incident must generally be reported without delay after the manufacturer becomes aware of a causal or possible causal relationship, and in any event not later than 15 days. Shorter clocks apply to serious public health threats (2 days) and deaths or unanticipated serious deterioration (10 days). The IR plan must bake these clocks in.
Who owns the IR plan, the security team or regulatory affairs? Both. Ownership has to be joint. Security owns detection, containment, and engineering response. Regulatory affairs, via the PRRC, owns the vigilance hand-off and the MDR reporting clock. A plan owned by only one side will fail at the hand-off.
Can a small startup really run tabletop exercises? Yes. A useful tabletop takes 90 minutes with four people and a printed scenario. The value is not in the simulation polish. It is in forcing the team to walk the plan end to end before a real event does it to them.
What about ransomware? Ransomware affecting a device or a device backend is one of the highest-impact cybersecurity incidents and should have its own runbook. The runbook should spell out in advance what will be sacrificed: patient safety is protected first, data recovery comes second.
Related reading
- Cybersecurity Risk Management for Medical Devices Under MDR sets up the ISO 14971 integration that feeds the IR plan.
- Coordinated Vulnerability Disclosure for Medical Devices Under MDR is the companion process for externally reported vulnerabilities.
- MDR Articles 87 to 92: The Vigilance Framework defines the reporting obligations the IR plan hands off to.
- Serious Incidents Under MDR unpacks the Article 2(65) definition the IR rubric depends on.
Sources
- Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I Sections 17.2 and 17.4, Article 2(65), Article 83, Articles 87 to 92.
- MDCG 2019-16 Rev.1, Guidance on Cybersecurity for Medical Devices (December 2019, Rev.1 July 2020).
- EN IEC 81001-5-1:2022, Health software and health IT systems safety, effectiveness and security, Part 5-1: Security, Activities in the product life cycle.
- EN ISO 14971:2019+A11:2021, Medical devices, Application of risk management to medical devices.
- EN 62304:2006+A1:2015, Medical device software, Software life cycle processes.