---
title: "MDR Software Risk Analysis: FMEA and ISO 14971 for SaMD"
description: How software FMEA, EN ISO 14971 and EN 62304 fit together for SaMD under MDR, and why hardware-style FMEA alone is not enough.
authors: Tibor Zechmeister, Felix Lenhard
category: Quality Management Under MDR
primary_keyword: software risk analysis FMEA ISO 14971
canonical_url: https://zechmeister-solutions.com/en/blog/software-risk-analysis-fmea-iso-14971
source: zechmeister-solutions.com
license: All rights reserved. Content may be cited with attribution and a link to the canonical URL.
---

# MDR Software Risk Analysis: FMEA and ISO 14971 for SaMD

*By Tibor Zechmeister (EU MDR Expert, Notified Body Lead Auditor) and Felix Lenhard.*

> **FMEA is a useful tool inside the EN ISO 14971:2019+A11:2021 risk management process, but FMEA alone is not a software risk analysis. Software fails differently from hardware. It fails because a requirement was wrong, a state was not handled, or a library changed behaviour after an update. A compliant SaMD risk file uses FMEA where it fits, adds software-specific hazard analysis, and connects every control back to EN 62304 software items and the software safety class.**


## TL;DR
- EN ISO 14971:2019+A11:2021 is the risk management standard the MDR expects, and it applies to software as a medical device the same way it applies to any other device.
- FMEA is one technique inside the 14971 process. It is good at component-level failure modes and poor at systemic software failures caused by incorrect requirements, unhandled states, or integration effects.
- EN 62304:2006+A1:2015 requires the software safety class (A, B, or C) to be derived from the contribution of the software to a hazardous situation, which means the risk analysis must exist before the software architecture is frozen.
- MDR Annex VIII Rule 11 means most diagnostic and decision-support software lands in Class IIa or higher, and Rule 11 classification pulls the software safety class upward with it.
- A credible SaMD risk file combines a software hazard analysis, software FMEA where useful, and trace links from each hazard to the affected software item and its verification evidence.
- Tibor's rule of thumb: if a founder hands over a spreadsheet labelled FMEA and nothing else, that is not a software risk analysis, that is an Excel file.

## Why software risk analysis needs its own discussion

Tibor has audited SaMD risk files that were technically present but practically empty. A team would hand over a clean FMEA spreadsheet with component-level rows: database connector fails, authentication service fails, UI thread crashes. Each row had a severity, a probability, a risk priority number, and a mitigation. It looked like risk management. It was not.

The missing piece was always the same. None of these rows captured the software failures that actually put patients at risk in SaMD. None of them asked what happens when a threshold is set one digit off in the requirements document. None of them asked what happens when a machine learning model is trained on a population that does not include the user in front of the screen. Those are the software failures that cause harm, and they do not show up in a component FMEA.

Felix sees the same issue from the coaching side. Small teams adopt FMEA because it is the technique they learned from hardware or automotive backgrounds. It is a legitimate tool. When it is the only tool, the software risk file stops at the surface and never reaches the hazards that EN ISO 14971:2019+A11:2021 actually expects.

## What MDR actually says about software risk

MDR Annex I GSPR 1 requires devices to achieve the performance intended by the manufacturer while being safe and effective. GSPR 3 requires the manufacturer to establish, implement, document, and maintain a risk management system as a continuous iterative process throughout the entire lifecycle of the device, with regular systematic updating. GSPR 4 requires risks to be reduced as far as possible without adversely affecting the benefit-risk ratio.

For software specifically, MDR Annex I §17.1 requires devices that incorporate electronic programmable systems, including software, or software that is a device in itself, to be designed to ensure repeatability, reliability, and performance in line with their intended use. §17.2 requires software to be developed and manufactured in accordance with the state of the art taking into account the principles of development lifecycle, risk management, including information security, verification, and validation.

That paragraph is the regulatory hinge. Risk management is explicitly called out as one of the three pillars of software development under MDR, alongside lifecycle and verification/validation. The standard the EU harmonises for that pillar is EN ISO 14971:2019+A11:2021. The standard for the lifecycle pillar is EN 62304:2006+A1:2015. The two are designed to reference each other.

MDR Annex VIII Rule 11 pulls this into classification territory. Most software intended to provide information used to take decisions with diagnosis or therapeutic purposes lands in Class IIa, Class IIb, or Class III depending on the severity of the outcome. MDCG 2019-11 Rev.1 explains the classification in detail and confirms that almost no clinically useful SaMD remains Class I under MDR. Class IIa or higher means a notified body will review the risk file and the software documentation.

## Where FMEA fits and where it does not

FMEA, Failure Mode and Effects Analysis, is a bottom-up technique. It starts with a component or a function and asks what can fail, what the effects of that failure are, and how severe and likely the failure is. FMEA is useful because it forces a team to enumerate failure modes systematically instead of relying on intuition.

For hardware, FMEA fits cleanly. A resistor can open or short. A connector can lose contact. A battery can drop below a minimum voltage. Each of these has a physical mode and a measurable probability. You list them, you score them, you control them.

For software, FMEA breaks down in three predictable places.

First, software does not fail the way hardware fails. Software executes the instructions it was given. When a SaMD application displays the wrong patient risk score, it is almost never because a bit flipped in memory. It is because the requirement that drove the calculation was wrong, or the edge case was not specified, or the input validation did not reject a value outside the expected range. These are not component failures. They are systemic failures rooted upstream in requirements and design.

Second, FMEA probability scores are misleading for software. In hardware, probability is grounded in physics and historical failure data. In software, the same bug will trigger every time the same input arrives. The probability of the bug existing is either zero or one. What varies is the probability of the triggering input, which is a property of the usage environment, not the software itself.
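This distinction can be made concrete with a toy sketch. The function, its defect, and the populations below are entirely hypothetical illustrations: the same bug misbehaves on the same input every single time, and what differs between deployments is only how often the triggering input shows up.

```python
import random

def risk_score(age: int) -> str:
    """Toy scoring function with a deliberate defect: the boundary
    check uses > instead of >=, so age == 65 is scored 'low'.
    (Hypothetical example, not the article's device.)"""
    return "high" if age > 65 else "low"

# The defect is deterministic: the same input fails every time.
assert risk_score(65) == "low"   # wrong now
assert risk_score(65) == "low"   # and wrong on every later call

# What varies is how often the triggering input arrives, which is a
# property of the use population, not of the code.
random.seed(0)
population_a = [random.randint(20, 60) for _ in range(10_000)]  # never 65
population_b = [random.randint(60, 70) for _ in range(10_000)]  # often 65

hits_a = sum(1 for age in population_a if age == 65)
hits_b = sum(1 for age in population_b if age == 65)
print(hits_a, hits_b)  # same bug, very different observed failure rates
```

The "probability" column of a software FMEA row is therefore really a statement about the input distribution, which is why copying hardware probability scales across produces numbers that look precise and mean nothing.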

Third, FMEA is local. It looks at one component at a time. Many real SaMD hazards come from interactions: a correct calculation combined with a correct display running on a device that does not refresh when the underlying data updates, so the user sees stale information. No single component failed. The composition failed.

FMEA still has a place. It is useful for analysing interfaces where software meets hardware: sensor reads, actuator drives, network boundaries, database writes. It is useful for enumerating failure modes of specific SOUP components. It is useful as a structured review technique during architecture reviews. What it is not is a substitute for the software hazard analysis that EN ISO 14971:2019+A11:2021 and EN 62304:2006+A1:2015 together expect.

## A worked example: SaMD triage tool, Class IIa under Rule 11

Consider a small startup building a triage tool that takes patient-reported symptoms and suggests an urgency level: routine, urgent, or emergency. Under Rule 11, providing information to inform decisions with therapeutic or diagnostic purposes puts this firmly in Class IIa at minimum, and depending on how the team frames the severity of possible outcomes, it can move to Class IIb.

A pure component FMEA of this tool might list: login service fails, symptom form crashes, network request times out, database unavailable. Each row gets a severity and a mitigation. The spreadsheet is six pages long and looks thorough.

The software hazard analysis that EN ISO 14971:2019+A11:2021 expects asks different questions. What if the symptom taxonomy does not include a symptom the user is experiencing and the user picks the closest match? What if two symptoms together indicate a cardiac event but neither alone triggers the urgent branch? What if the user selects a language the model was not validated against? What if an update to an open-source NLP library used in preprocessing shifts the scoring threshold by a few percent? What if the model is systematically less accurate for users above 75 years old, a subgroup overrepresented in the real use population? Each of these is a plausible route from software to patient harm, and none of them are component failures.

A credible risk file for this device contains a software hazard list like the one above, a benefit-risk analysis that acknowledges where the tool is and is not reliable, and FMEA sections for the specific subsystems where component-level thinking adds value: the authentication layer, the database persistence layer, the integration with any external APIs. The software safety class is then derived from the contribution of each software item to the identified hazardous situations, as EN 62304:2006+A1:2015 clause 4.3 requires. Tibor's experience with similar tools: most end up Class B overall, with specific items Class C where the software directly determines the urgency output.

## The Subtract to Ship playbook for SaMD risk analysis

Felix coaches teams to build software risk analysis in a specific order so they do not have to redo it later.

Step 1. Write a short intended-use and user profile document before touching FMEA or hazard analysis. State who the user is, where they are, what device they are running it on, and what decisions the software informs. Every later hazard trace will reference this document, so it has to exist first.

Step 2. Run a structured software hazard analysis session. Multi-disciplinary, not just developers. Include clinical, product, and ideally an outside reviewer. Ask the systemic questions: what if a requirement is wrong, what if a state is unhandled, what if a subgroup is underrepresented, what if an external dependency shifts. Record every hazard with a plain-language description.

Step 3. Use FMEA as a second-pass technique on the interfaces and components where it adds value. Do not use it as the backbone. Do not let its RPN scores drive priorities without sanity checking against the hazard analysis from Step 2.

Step 4. Map each hazard to the affected software items and derive the software safety class per item as EN 62304:2006+A1:2015 clause 4.3 requires. Software items that do not contribute to a hazardous situation can be Class A. Items whose failure could contribute to non-serious injury are Class B. Items whose failure could contribute to serious injury or death are Class C.
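The Step 4 mapping is mechanical once the hazard list exists, which is why it belongs in a tool rather than a head. A minimal sketch of the derivation, with hypothetical hazard IDs, item names, and severity labels (the clause 4.3 rule itself is from EN 62304: no injury → A, non-serious injury → B, serious injury or death → C):

```python
from dataclasses import dataclass, field

# EN 62304 clause 4.3 logic: no injury -> A, non-serious injury -> B,
# serious injury or death -> C. Labels below are this sketch's own.
SEVERITY_TO_CLASS = {"none": "A", "non_serious": "B", "serious": "C"}
CLASS_ORDER = {"A": 0, "B": 1, "C": 2}

@dataclass
class Hazard:
    hazard_id: str
    description: str
    severity: str                       # "none" | "non_serious" | "serious"
    software_items: list = field(default_factory=list)

# Hypothetical hazards for the triage-tool example.
hazards = [
    Hazard("HAZ-01", "Urgency output understates a cardiac event",
           "serious", ["triage_engine"]),
    Hazard("HAZ-02", "Stale symptom data shown after an update",
           "non_serious", ["ui_layer", "cache"]),
    Hazard("HAZ-03", "Audit log entry lost", "none", ["logging"]),
]

def derive_safety_classes(hazards):
    """Each software item gets the highest class implied by any hazard
    it contributes to; items touching no hazard stay Class A."""
    classes = {}
    for h in hazards:
        implied = SEVERITY_TO_CLASS[h.severity]
        for item in h.software_items:
            current = classes.get(item, "A")
            if CLASS_ORDER[implied] > CLASS_ORDER[current]:
                classes[item] = implied
            else:
                classes[item] = current
    return classes

print(derive_safety_classes(hazards))
# triage_engine -> C, ui_layer and cache -> B, logging -> A
```

This is also why the order of the playbook matters: the derivation can only run after Step 2 produced the hazard list, and its output is exactly the per-item classification the architecture has to respect.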

Step 5. Apply the EN ISO 14971:2019+A11:2021 risk control hierarchy. Inherent safety by design first. Protective measures second. Information for safety last. Do not jump to a warning dialog when the fix is to remove the unsafe path.

Step 6. Close the loop. Link every control to a software requirement, every requirement to a test, and every residual risk to a documented acceptability justification. Tibor's audit experience: the teams that pass this step cleanly are the teams that use a single traceability tool rather than three disconnected spreadsheets.
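The Step 6 closure check is a query, not a judgment call, and a traceability tool should answer it on demand. A minimal sketch under the assumption that controls, requirements, and tests are stored as linked records (all IDs below are hypothetical):

```python
# Hypothetical traceability records: control -> requirement -> tests.
controls = {
    "RC-01": {"requirement": "REQ-12"},
    "RC-02": {"requirement": "REQ-15"},
    "RC-03": {"requirement": None},      # control with no requirement
}
requirement_tests = {
    "REQ-12": ["TC-101", "TC-102"],
    "REQ-15": [],                        # requirement with no test
}

def trace_gaps(controls, requirement_tests):
    """Return the controls whose chain down to a verifying test is broken."""
    gaps = []
    for control_id, link in controls.items():
        req = link["requirement"]
        if req is None:
            gaps.append((control_id, "no requirement"))
        elif not requirement_tests.get(req):
            gaps.append((control_id, f"{req} has no test"))
    return gaps

print(trace_gaps(controls, requirement_tests))
# -> [('RC-02', 'REQ-15 has no test'), ('RC-03', 'no requirement')]
```

Three disconnected spreadsheets cannot run this query; a single source of truth can, which is the practical content of Tibor's observation.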

Step 7. Keep it alive. EN ISO 14971:2019+A11:2021 explicitly requires the risk management process to continue through production and post-production. When a SOUP dependency publishes a CVE, when post-market data shows a subgroup performing worse than expected, when a new requirement lands, the risk file gets updated. Not every three years. Continuously.
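Keeping the file alive also reduces to a check that can run on every release: does any SOUP component have an advisory newer than its last recorded risk review? A minimal sketch, with hypothetical package names, dates, and a placeholder CVE identifier:

```python
from datetime import date

# Hypothetical SOUP register and advisory feed.
soup_list = [
    {"name": "nlp-preprocess", "last_risk_review": date(2024, 1, 10)},
    {"name": "pdf-renderer", "last_risk_review": date(2024, 6, 2)},
]
advisories = [
    {"package": "nlp-preprocess", "cve": "CVE-2024-99999",
     "published": date(2024, 3, 5)},
]

def needs_risk_update(soup_list, advisories):
    """SOUP items with an advisory published after their last risk review."""
    stale = []
    for item in soup_list:
        for adv in advisories:
            if (adv["package"] == item["name"]
                    and adv["published"] > item["last_risk_review"]):
                stale.append((item["name"], adv["cve"]))
    return stale

print(needs_risk_update(soup_list, advisories))
# -> [('nlp-preprocess', 'CVE-2024-99999')]
```

Wiring a check like this into the release pipeline is what turns "continuously" from an aspiration into a gate.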

## Reality Check
1. Does your risk file contain a software hazard analysis that covers wrong requirements, unhandled states, and subgroup performance, or only a component FMEA?
2. Can you trace every identified hazard to a specific software item, and does the software safety class of that item reflect the hazard's severity per EN 62304:2006+A1:2015 clause 4.3?
3. Did a multi-disciplinary team run your software hazard analysis, or did one engineer fill in a spreadsheet alone?
4. Are your FMEA probability scores based on something defensible for software, or copied from hardware templates that do not apply?
5. When your last SOUP component published a CVE, did the risk file get updated, or did the update happen only in the issue tracker?
6. If a notified body auditor opened your risk management report tomorrow, would it explain in plain language how the software can contribute to a hazardous situation?
7. Is your risk analysis ahead of your architecture freeze, or did the architecture come first and the risk file follow to justify it?

## Frequently Asked Questions

**Is FMEA required by MDR for software?**
No. MDR does not require any specific technique. It requires a risk management process that conforms to the state of the art. EN ISO 14971:2019+A11:2021 is the harmonised standard, and it presents FMEA as one technique among several.

**Can we use ISO 14971 alone without EN 62304 for SaMD?**
No. EN ISO 14971:2019+A11:2021 is the risk process. EN 62304:2006+A1:2015 is the software lifecycle. MDR Annex I §17.2 expects both, and the software safety class that EN 62304 requires is derived from the risk analysis EN ISO 14971 produces.

**How does Rule 11 affect the depth of our software risk analysis?**
Rule 11 drives notified body involvement, and notified body involvement drives depth of review. A Class IIa SaMD under Rule 11 will have its risk file audited against EN ISO 14971:2019+A11:2021 and its software documentation against EN 62304:2006+A1:2015.

**What about AI or machine learning components?**
ML introduces hazards FMEA is especially poor at capturing: drift, subgroup underperformance, adversarial inputs, training-data bias. These belong in the software hazard analysis, not in a component FMEA. EN ISO 14971:2019+A11:2021 still applies.

**How often should the software risk file be updated?**
Continuously. EN ISO 14971:2019+A11:2021 requires the process to run through the full lifecycle. Update at every significant release, at every SOUP dependency with a security implication, and at every post-market signal that changes probability or severity.

## Related reading
- [MDR software lifecycle: EN 62304 for startups](/blog/mdr-software-lifecycle-iec-62304) covers the lifecycle pillar that risk analysis feeds into.
- [Software safety classification under EN 62304](/blog/software-safety-classification-iec-62304) explains how Class A, B, and C are derived from the risk file.
- [MDR classification Rule 11 for software](/blog/mdr-classification-rule-11-software) shows why most SaMD lands in Class IIa or higher and what that means for the risk depth.
- [The EN ISO 14971 Annex Z trap](/blog/iso-14971-annex-z-trap) covers the MDR-specific deviations founders miss when they copy the standard verbatim.
- [Software traceability across design, tests, and risks](/blog/software-traceability-requirements-design-tests-risks) shows how to link hazards to items to tests without three disconnected spreadsheets.

## Sources
1. Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I GSPR 1, 3, 4; Annex I §17.1, §17.2; Annex VIII Rule 11.
2. EN ISO 14971:2019+A11:2021, Medical devices, Application of risk management to medical devices.
3. EN 62304:2006+A1:2015, Medical device software, Software life cycle processes.
4. MDCG 2019-11 Rev.1 (October 2019, Rev.1 June 2025), Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 and Regulation (EU) 2017/746.

---

*This post is part of the [Quality Management Under MDR](https://zechmeister-solutions.com/en/blog/category/quality-management) cluster in the [Subtract to Ship: MDR Blog](https://zechmeister-solutions.com/en/blog). For EU MDR certification consulting, see [zechmeister-solutions.com](https://zechmeister-solutions.com).*
