Threat modeling is the structured process of finding, rating, and mitigating what could go wrong with a device from a security perspective, before an attacker finds it first. For MDR-regulated devices it is the bridge between EN IEC 81001-5-1:2022 and the ISO 14971 risk file, and it is the single activity that distinguishes a security-aware manufacturer from one running on optimism.
By Tibor Zechmeister and Felix Lenhard.
TL;DR
- Threat modeling is the cybersecurity equivalent of hazard identification under EN ISO 14971:2019+A11:2021, adapted for adversarial inputs rather than random failure.
- STRIDE gives a structured taxonomy (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege) that notified body auditors recognise.
- Attack trees complement STRIDE by modelling how multiple low-severity weaknesses combine into a single high-severity compromise.
- MDR Annex I §17.2 and §17.4 require manufacturers to design software with information security in mind. A documented threat model is the evidence that this was done, not assumed.
- Threat modeling outputs must flow back into the ISO 14971 risk management file. Cybersecurity risks that sit in a separate document outside the risk file are a predictable audit finding.
- Start early. Tibor has repeatedly seen startups discover threats at the pre-audit stage that would have cost a fraction to mitigate if they had been found during architecture.
Why this matters
Most medical device startups Tibor audits arrive at the notified body with a risk file built around the classical ISO 14971 model: electrical hazards, mechanical hazards, biocompatibility, user error. Cybersecurity gets a half-page appendix and a reference to "industry best practice". That is no longer acceptable, and arguably never was.
EN IEC 81001-5-1:2022 is now the reference standard for health software security, and MDCG 2019-16 Rev.1 is the MDR-specific interpretation. Both of these assume that manufacturers have a structured way of identifying threats. Not a penetration test at the end. A threat model, produced during architecture and maintained for the life of the product.
In the interviews that shaped this blog, Tibor described the gap bluntly. Wearables and smart wearables are growing fast. Encryption, data protection, and penetration testing are under-invested. Vulnerabilities are discovered by curious researchers, and occasionally by actors who are not so curious. The window between a discoverable weakness and an exploited one is the window a threat model is supposed to close.
What MDR actually says
MDR Annex I GSPR 17.2 states that software shall be developed and manufactured in accordance with the state of the art, taking into account the principles of development life cycle, risk management, including information security, verification and validation. GSPR 17.4 adds that manufacturers shall set out minimum requirements concerning hardware, IT networks characteristics and IT security measures, including protection against unauthorised access, necessary to run the software as intended.
Neither of these paragraphs uses the words "threat modeling". They do not need to. The obligation to consider information security during development, to document minimum IT security measures, and to treat software risk management as part of Annex I GSPR 1-9 collectively forces the question: how did the manufacturer identify the security risks in the first place?
MDCG 2019-16 Rev.1 closes that gap. It maps MDR expectations to EN IEC 81001-5-1 activities, and one of the core activities is threat modeling. The guidance is explicit that security risks are part of the overall risk management process required by EN ISO 14971:2019+A11:2021, not a parallel process. Two risk files are a finding. One risk file with security risks properly categorised is the expected state.
A worked example
Consider a connected insulin dosing aid: a handheld device with a microcontroller, Bluetooth Low Energy to a companion smartphone app, and a cloud backend that logs doses for clinician review. Class IIb under MDR Rule 11 if the dosing is software-driven.
The startup sits down for its first threat model. The data-flow diagram takes about two hours: handheld to phone over BLE, phone to cloud over TLS, clinician portal to cloud over HTTPS. Four components, three trust boundaries, one user and one clinician persona.
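A one-page diagram like this can also be captured as data and versioned alongside the code, so the threat model notices when the architecture drifts. The sketch below is illustrative only; the component and channel names are hypothetical stand-ins for the example device.

```python
# Illustrative sketch (hypothetical names): the one-page DFD for the
# dosing aid, captured as data so it can be versioned with the firmware.
from dataclasses import dataclass


@dataclass(frozen=True)
class TrustBoundary:
    source: str   # component on one side of the boundary
    target: str   # component on the other side
    channel: str  # transport the data crosses
    data: str     # what actually flows across

COMPONENTS = ["handheld", "phone_app", "cloud_backend", "clinician_portal"]

BOUNDARIES = [
    TrustBoundary("handheld", "phone_app", "BLE", "dose values"),
    TrustBoundary("phone_app", "cloud_backend", "TLS", "dose history"),
    TrustBoundary("clinician_portal", "cloud_backend", "HTTPS", "dose review"),
]

# Sanity check: every boundary endpoint must be a known component,
# otherwise the diagram and the architecture have diverged.
for b in BOUNDARIES:
    assert b.source in COMPONENTS and b.target in COMPONENTS
```

Four components and three trust boundaries, exactly as the diagram records them; a failing assertion is the cheap signal that the diagram has gone stale.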
STRIDE is applied at each trust boundary.
- Spoofing of the handheld to the phone. A rogue BLE peripheral advertises the same service UUID. The phone pairs with it. Threat.
- Tampering with dose history in transit. Modified values arrive in the clinician portal. Threat.
- Repudiation. A clinician denies having approved a dose change. No audit log tied to an authenticated identity. Threat.
- Information disclosure. The BLE payload is not encrypted at the application layer, only by the BLE link layer. A sniffer within range captures dose values. Threat.
- Denial of service. A crafted BLE packet crashes the handheld firmware, blocking dosing. Severity depends on how the clinical workflow handles a missing device. Threat.
- Elevation of privilege. A cloud API endpoint intended for the clinician portal is reachable with a patient token. Threat.
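The mechanics of the sweep above are simple enough to sketch: every trust boundary is crossed with every STRIDE category, and each pairing is a prompt the team either triages into a real threat or dismisses with a reason. The boundary labels below are illustrative placeholders for the example device.

```python
# Illustrative sketch: enumerate STRIDE candidates per trust boundary.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

# Hypothetical boundary labels for the dosing-aid example.
boundaries = [
    "handheld<->phone (BLE)",
    "phone<->cloud (TLS)",
    "portal<->cloud (HTTPS)",
]

# Cross product: 3 boundaries x 6 categories = 18 prompts to triage.
candidates = [(b, s) for b in boundaries for s in STRIDE]
```

Eighteen prompts is a list a human can review in an afternoon; running the same sweep per component instead of per boundary is what produces the noise Step 2 of the playbook warns about.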
The team then builds an attack tree for the worst of these, starting with the root goal "attacker administers an incorrect dose". Branches include spoofed peripheral, tampered dose record, compromised cloud. Each leaf gets a rough likelihood and a rough impact. Impact couples directly to the ISO 14971 harm taxonomy. The team now has 14 ranked security risks, each with an owner, a mitigation, and a verification plan.
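An attack tree of this shape is small enough to hold in a few lines of code. The sketch below assumes a simple convention, not mandated by any standard: leaves carry rough likelihood and impact on 1-5 scales, and an OR node inherits the riskiest path beneath it (highest likelihood x impact). The node names and scores are hypothetical.

```python
# Illustrative sketch: attack tree for the root goal
# "attacker administers an incorrect dose". Leaves hold rough
# likelihood/impact (1-5); an OR node reports its riskiest leaf.
from dataclasses import dataclass, field


@dataclass
class Node:
    goal: str
    likelihood: int = 0                     # leaves only, 1-5
    impact: int = 0                         # leaves only, 1-5
    children: list = field(default_factory=list)

    def worst_path(self):
        """(likelihood, impact) of the riskiest leaf under this node."""
        if not self.children:
            return (self.likelihood, self.impact)
        return max((c.worst_path() for c in self.children),
                   key=lambda li: li[0] * li[1])

root = Node("attacker administers an incorrect dose", children=[
    Node("spoofed BLE peripheral", likelihood=3, impact=5),
    Node("tampered dose record in transit", likelihood=2, impact=4),
    Node("compromised cloud backend", likelihood=2, impact=5),
])
```

Here `root.worst_path()` surfaces the spoofed-peripheral branch as the one that deserves the deepest mitigation work, which is exactly the prioritisation signal the exercise exists to produce.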
The entire exercise took one day with three engineers and a regulatory lead. Tibor has seen the same threats surface six months later during a pre-audit instead, costing weeks of architectural rework and a delayed certification. The work is not hard. The timing matters.
The Subtract to Ship playbook
Felix has coached 44 startups through regulatory work, and the pattern he sees with threat modeling is the same pattern he sees everywhere in MedTech: founders either skip it entirely or drown in a 60-page methodology that nobody on the team actually uses. The Subtract to Ship version cuts to what the notified body will ask for.
Step 1. Draw the data flow diagram once, keep it alive. One page. Every component, every trust boundary, every data type. Update it when the architecture changes. If the diagram is stale, the threat model is stale, and any mitigation based on it is a guess.
Step 2. Run STRIDE per trust boundary, not per component. Boundaries are where attacks cross. Running STRIDE on every component produces noise. Running it on every boundary produces a list of threats a human can actually review.
Step 3. Link every threat to an ISO 14971 hazardous situation. This is the step most startups miss. A threat in a separate cybersecurity register is a finding. A threat that has a row in the same risk table the notified body will review is evidence of an integrated process. EN IEC 81001-5-1 and MDCG 2019-16 both assume this integration.
Step 4. Score with two dimensions, not ten. Likelihood and impact. Five levels each. Keep it legible. A scoring rubric nobody can apply consistently produces a scoring rubric the auditor will challenge.
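A two-dimensional rubric of this kind fits in a dozen lines. The thresholds and band names below are an illustrative assumption, not a prescribed scheme; each manufacturer defines its own acceptability criteria in the risk management plan.

```python
# Illustrative sketch: 5x5 likelihood/impact scoring with hypothetical
# acceptability bands. Real thresholds come from the risk management plan.
def risk_score(likelihood: int, impact: int) -> str:
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    score = likelihood * impact
    if score >= 15:
        return "unacceptable"        # mitigate before release
    if score >= 8:
        return "reduce as far as possible"
    return "acceptable"
```

The point is legibility: any engineer on the team, handed the same threat, should land in the same cell. A rubric that needs a workshop to apply is the rubric the auditor will challenge.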
Step 5. Attack trees only for top risks. STRIDE finds breadth. Attack trees find depth. Building an attack tree for every threat wastes time. Building one for the three threats that could kill a patient is the whole point.
Step 6. Mitigations trace to design controls. Every mitigation needs a design input, a design output, a verification method, and a validation approach. Under EN 62304:2006+A1:2015 this is the software lifecycle. Threat model mitigations are not a side document. They are software requirements.
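One way to make that traceability checkable is to treat each mitigation as a record with mandatory fields, so an incomplete row fails loudly before the file reaches review. All identifiers below (threat IDs, SRS/SDS/TP numbers) are hypothetical examples, not a prescribed numbering scheme.

```python
# Illustrative sketch: a mitigation as a traceable requirement row
# linking the threat model into the EN 62304 lifecycle artefacts.
# All IDs are hypothetical placeholders.
mitigation = {
    "threat_id": "T-04",
    "hazardous_situation": "HS-12",  # row in the ISO 14971 risk file
    "design_input": "SRS-031: application-layer encryption of BLE payload",
    "design_output": "SDS-019: AES-GCM wrapper over BLE characteristic",
    "verification": "TP-044: protocol fuzzing plus sniffer capture review",
    "validation": "summative evaluation with encrypted link active",
}

REQUIRED_FIELDS = {
    "threat_id", "hazardous_situation", "design_input",
    "design_output", "verification", "validation",
}

# A mitigation missing any field is not ready for the risk file.
assert REQUIRED_FIELDS <= mitigation.keys()
```

The same check, run over the whole mitigation table in CI, is a cheap way to guarantee that no threat ships without its verification and validation hooks in place.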
Step 7. Re-run the threat model on every architectural change. Not every code change. Architectural changes only: new component, new trust boundary, new data type, new integration, new SOUP dependency. This is the cadence the lifecycle requirement of EN IEC 81001-5-1 actually asks for.
Step 8. Keep the evidence one click away from the risk file. The notified body auditor will ask. Tibor does ask. The right answer is a link to the threat model in the risk management file, not a tour through three different SharePoint folders.
Felix has a rule for founders who push back on the effort: the threat model you do not build in week 20 is the change control you will fund in week 80. The up-front cost is lower than the post-certification rework cost. Every time.
Reality Check
- Does your device have a current data flow diagram that includes every trust boundary, and can you produce it in under a minute?
- Have you run STRIDE against each trust boundary, and is the output stored in a file your notified body auditor could read and understand?
- Is every identified threat linked to a hazardous situation in your EN ISO 14971:2019+A11:2021 risk file, or is it parked in a separate cybersecurity register?
- For your top three security risks, do you have an attack tree or equivalent depth analysis?
- Does every mitigation in your threat model trace to a design input, design output, verification, and validation?
- When was your threat model last updated, and can you point to the architectural change that triggered the update?
- If a new CVE were published tomorrow against a SOUP component in your device, how quickly would your threat model and risk file be updated?
- Could you defend your threat modeling methodology to a notified body auditor without reaching for industry jargon?
If any of these answers feel soft, that is the signal. Tibor's experience is that auditors find the soft spots on day one.
Frequently Asked Questions
Is threat modeling required by MDR? MDR Annex I §17.2 and §17.4 require manufacturers to develop software considering information security and to define minimum IT security measures. MDCG 2019-16 Rev.1 interprets this against EN IEC 81001-5-1:2022, which explicitly includes threat modeling as a core security activity. The specific words "threat model" do not appear in the MDR, but the obligation to identify security risks in a structured way is unambiguous.
Do I need STRIDE specifically, or can I use another method? Any structured method that produces a reviewable, repeatable inventory of threats is acceptable. STRIDE is common because it is legible to auditors and pairs well with data flow diagrams. PASTA, LINDDUN, and OCTAVE are legitimate alternatives. Whatever the method, the evidence of application, not the brand name, is what matters.
How does threat modeling differ from penetration testing? Threat modeling is a design-time activity that asks "what could go wrong". Penetration testing is a verification activity that asks "can an attacker actually do it". Both are required. A pentest without a threat model is a spot check. A threat model without a pentest is a set of assumptions.
Can Class I software skip threat modeling? No. The obligation in Annex I §17.2 applies to all software devices regardless of class. The depth and formality scale with risk, but the activity itself does not go away for lower classes.
Who on a three-person startup should own the threat model? Typically the lead software engineer owns the artefact, with the regulatory lead owning the integration into the ISO 14971 risk file. The founder usually owns the prioritisation of which mitigations get funded. Nobody external owns it. Outsourced threat modeling without internal ownership produces a document, not a capability.
Related reading
- SBOM for medical devices under MDR. Every threat model depends on knowing what libraries are actually in the device.
- MDR Annex I GSPR explained. The parent requirement that makes security a legal obligation, not a preference.
- MDR software lifecycle and IEC 62304. The lifecycle that threat model mitigations have to fit into.
- SOUP and OTS software under MDR. The most common missed threat is an unpatched third-party library.
Sources
- Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I, GSPR 17.2 and 17.4.
- MDCG 2019-16 Rev.1 (December 2019, Rev.1 July 2020), Guidance on Cybersecurity for medical devices.
- EN IEC 81001-5-1:2022, Health software and health IT systems safety, effectiveness and security, Part 5-1: Security, Activities in the product life cycle.
- EN ISO 14971:2019+A11:2021, Medical devices, Application of risk management to medical devices.
- EN 62304:2006+A1:2015, Medical device software, Software life cycle processes.