After auditing dozens of SaMD manufacturers, the same ten EN 62304 gaps come up in roughly the same order. None are exotic. All are preventable. If you fix these ten before your Stage 2 audit, you will remove the majority of findings notified bodies routinely write against software files.

By Tibor Zechmeister and Felix Lenhard.

TL;DR

  • The most common MDR software audit findings are not technical — they are evidence gaps against EN 62304:2006+A1:2015.
  • Incomplete traceability between requirements, design, risk, code, and tests is finding number one by a wide margin.
  • Untracked SOUP, missing problem-resolution records, and stale software development plans together account for most of the remaining findings.
  • The cybersecurity gap — no evidence of EN IEC 81001-5-1:2022 activities — is now a routine finding since MDCG 2019-16 Rev.1 entered standard auditor practice.
  • Every finding in this list has a short, non-bureaucratic fix. None require new tools.

Why this matters

Tibor has sat on both sides of the audit table: as a notified body lead auditor writing findings, and as a founder receiving them. The ten findings in this post are the ones that appear most often in SaMD audits under EN 62304 and MDR Annex I §17.2. They are not the findings founders expect. Most founders prepare for deep technical scrutiny of their algorithm. What they get instead is ten evidence-chain questions they have not thought about.

Every finding below is preventable with modest, proportional effort. The fix is almost never "add a new process." The fix is usually "document what you are already doing." This post is the pre-audit checklist Tibor wishes every founder would run a month before Stage 2.

What MDR actually says

MDR Annex I §17.2 requires that software be developed in accordance with the state of the art, taking into account the principles of development lifecycle, risk management (including information security), and verification and validation. MDR Annex II §6.1 requires the technical documentation to describe the design and manufacturing processes — for software, that means the EN 62304 lifecycle evidence. MDR Article 83 requires a post-market surveillance system, and Article 87 requires vigilance reporting; both rely on software-side traceability between complaints and code.

The operative standard throughout is EN 62304:2006+A1:2015, covering planning (clause 5.1), requirements (5.2), architecture (5.3), detailed design (5.4), implementation (5.5), integration and system testing (5.6–5.7), release (5.8), maintenance (clause 6), risk management (clause 7), configuration management (clause 8), and problem resolution (clause 9). Cybersecurity activities under EN IEC 81001-5-1:2022 and MDCG 2019-16 Rev.1 layer on top.

With that framing, here are the ten findings — in roughly the order they appear in audit reports.

The 10 most common findings

1. Broken traceability between requirements, risk, design, code, and tests

What the finding looks like: "The manufacturer could not demonstrate traceability between SRS-014 and verification test evidence."

Why it happens: Requirements in Word, tests in CI, risks in a spreadsheet — nothing links. Trace matrices get reconstructed by hand at audit time, and the reconstruction reveals gaps.

The fix: Adopt a single trace format from day one. A CSV or Markdown trace matrix in the repository, generated from IDs in requirements, risk entries, and test names. Review at every release per EN 62304 clause 5.8.
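An ID-based trace matrix can be generated in a few lines of Python. Everything below is illustrative — the SRS-*, RISK-*, and TC-* identifiers are hypothetical stand-ins; in a real repository the three input lists would be parsed from the requirements file, the risk register, and the test suite rather than hard-coded.

```python
import csv
import io

# Illustrative inputs: in a real repository these would be parsed from the
# requirements file, the risk register, and the test suite. The IDs below
# (SRS-*, RISK-*, TC-*) are hypothetical, not from any real project.
requirements = ["SRS-014", "SRS-015", "SRS-016"]
risk_links = {"SRS-014": "RISK-03", "SRS-016": "RISK-07"}
test_names = ["TC-001_SRS-014_login_lockout", "TC-002_SRS-016_dose_limit"]

def build_trace_matrix(reqs, risks, tests):
    """Link each requirement to its risk entry and tests; report gaps.

    Substring matching assumes fixed-width IDs (SRS-001 never collides
    with SRS-0011); switch to a regex if your ID scheme allows prefixes.
    """
    rows, gaps = [], []
    for req in reqs:
        linked = [t for t in tests if req in t]
        if not linked:
            gaps.append(req)
        rows.append({"requirement": req,
                     "risk": risks.get(req, ""),
                     "tests": ";".join(linked)})
    return rows, gaps

rows, gaps = build_trace_matrix(requirements, risk_links, test_names)

# Emit the matrix as CSV so it can live in the repository and be diffed.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["requirement", "risk", "tests"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
print("UNTRACED:", gaps)  # non-empty output should fail the release build
```

Run in CI at every release, the "UNTRACED" line surfaces exactly the gap the auditor would otherwise find by hand.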

2. Untracked or stale SOUP inventory

What the finding looks like: "The SOUP list does not reflect actual dependencies. Versions differ between the list and the SBOM."

Why it happens: Manual maintenance. A developer updates a library, CI builds happily, nobody touches the list.

The fix: Generate the SOUP list from the build via an SBOM tool (Syft, CycloneDX). For each item, keep a one-row record covering EN 62304 clauses 5.3.3, 7.1.3, and 8.1.2: identification, purpose, known anomalies, risk.
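The drift check between the generated SBOM and the documented SOUP list is a one-function script. The CycloneDX fragment and package names below are illustrative; in CI, `sbom_json` would be the JSON file the SBOM tool emits from the actual build.

```python
import json

# Illustrative data: 'sbom_json' stands in for the CycloneDX JSON an SBOM
# tool emits in CI; 'soup_list' stands in for the documented SOUP records.
# Package names and versions are hypothetical.
sbom_json = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "numpy", "version": "1.26.4"},
        {"name": "pydicom", "version": "2.4.4"},
    ],
})
soup_list = {"numpy": "1.24.0", "requests": "2.31.0"}

def soup_drift(sbom_text, documented):
    """Compare built dependencies against the documented SOUP list."""
    built = {c["name"]: c["version"]
             for c in json.loads(sbom_text)["components"]}
    undocumented = sorted(set(built) - set(documented))
    stale = sorted(n for n in built
                   if n in documented and documented[n] != built[n])
    orphaned = sorted(set(documented) - set(built))
    return undocumented, stale, orphaned

undocumented, stale, orphaned = soup_drift(sbom_json, soup_list)
print("not in SOUP list:", undocumented)  # dependency added silently
print("version mismatch:", stale)         # list says one version, build says another
print("documented, not built:", orphaned) # removed dependency, record never retired
```

Any non-empty result is exactly the finding quoted above, caught before the auditor sees it.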

3. Missing problem-resolution records

What the finding looks like: "No evidence of problem resolution records for issues identified between March and July."

Why it happens: Bugs are closed with a terse comment. No analysis, no evaluation against known risks, no decision rationale. EN 62304 clause 9 requires all of this.

The fix: A lightweight problem report template in the issue tracker. Three mandatory fields: description and reproduction, impact analysis (including risk control impact), resolution. The ticket is the clause 9 record — do not duplicate in Word.
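As a sketch, the three mandatory fields can live directly in the tracker's issue template. The section names below are suggestions, not wording prescribed by the standard:

```markdown
<!-- Problem report template (illustrative) — the three mandatory fields -->
## Description and reproduction
What happened, in which software version, and the exact steps to reproduce.

## Impact analysis
Does this affect a risk control or create a new hazardous situation?
Link the affected risk entries, or state "no risk impact" with rationale.

## Resolution
What was changed, in which release, and a link to the verifying test.
```

With this in place, closing a ticket produces the clause 9 record as a side effect of normal work.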

4. No documented software development plan, or a plan that does not match reality

What the finding looks like: "The software development plan references a V-model process, but the development history in the repository shows sprint-based iterative development without the documented reviews."

Why it happens: The SDP was written early, probably copied from a template, and never updated. The team moved to agile. The plan and the practice diverged.

The fix: Rewrite the SDP to match what the team actually does. EN 62304 clause 5.1 does not prescribe a methodology — it requires the methodology to be documented. Agile is fine. Iterative is fine. Say so, name the ceremonies, name the artefacts, and keep the plan alive through annual review.

5. Cybersecurity evidence absent or shallow

What the finding looks like: "No evidence of threat modelling, secure design activities, or vulnerability management per EN IEC 81001-5-1:2022."

Why it happens: Cybersecurity became routine in audits only recently. Files from before 2022 often have nothing on it. Founders assume a TLS certificate and a password field are enough.

The fix: Add a security risk management file covering the EN IEC 81001-5-1:2022 activities: threat modelling (STRIDE or equivalent), secure design controls, vulnerability management process, security verification. MDCG 2019-16 Rev.1 is the guidance framework. The file does not need to be large — a dozen pages is often sufficient for a Class IIa SaMD — but it must exist and be traceable to risk controls.

6. Software safety classification rationale missing or wrong

What the finding looks like: "The manufacturer classified the software as Class B under EN 62304 without documented rationale, and the hazard analysis suggests Class C is more appropriate."

Why it happens: Class B is the default hope. Teams want to avoid the additional clause 5.4 and 5.5 rigour of Class C. They classify based on preference, not evidence.

The fix: Write a one-page classification rationale. What is the worst-case harm if the software fails? Does a risk control external to the software reduce the worst case? If the answer to the second question is yes, document the external control. EN 62304 clause 4.3 is explicit: the class is determined by the severity of the potential harm, considering external risk controls.

7. Architecture documentation that does not match the code

What the finding looks like: "The software architecture document describes three modules; the repository contains seven services with different boundaries."

Why it happens: The architecture was documented once, early. The code evolved. Nobody updated the diagram.

The fix: Keep the architecture document short and at the right level of abstraction. One or two diagrams plus a table of components, their interfaces, and their safety relevance. Review on every major release. EN 62304 clause 5.3 does not ask for UML diagrams — it asks for an architecture that supports verification and risk control.

8. Verification evidence that does not cover the claimed requirements

What the finding looks like: "Test case TC-042 is marked as verifying SRS-018, but the test does not exercise the requirement as stated."

Why it happens: The trace matrix was built to make the numbers match, not to reflect actual coverage. A test that mentions the requirement ID in a comment gets counted as covering it, even if the test logic does not match.

The fix: Review trace coverage semantically, not syntactically. For each software requirement, the reviewer must be able to read the test and agree that the test verifies the requirement. This is a 30-minute exercise per 20 requirements, done by a second pair of eyes before release.

9. Change control without impact analysis

What the finding looks like: "Pull request #312 introduced a change to the risk-classified module without documented impact analysis against existing risks."

Why it happens: Changes are merged on code review alone. The reviewer checks the code, not the risk file. By audit time, nobody can say whether any of the last 50 changes affected the risk analysis.

The fix: Add a required checkbox to the pull request template: "This change affects / does not affect the risk analysis. If it affects: which hazards, which risk controls, link to updated risk entry." The checkbox is the clause 6.2 and clause 7.4 record. No separate document is needed.
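A minimal version of that section, as it might appear in a pull request template — the hazard and risk-control IDs are placeholders:

```markdown
<!-- Risk-impact section for the pull request template (illustrative) -->
## Risk impact
- [ ] This change does NOT affect the risk analysis (rationale below).
- [ ] This change affects the risk analysis:
  - Hazards affected: <!-- e.g. HAZ-07 -->
  - Risk controls affected: <!-- e.g. RC-12 -->
  - Link to updated risk entry: <!-- URL or document ID -->
```

Because the template is enforced at merge time, the impact analysis exists for every change by construction, not by discipline.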

10. No bridge from post-market data back into the software lifecycle

What the finding looks like: "The manufacturer has a PMS plan and receives customer complaints, but there is no documented mechanism by which complaints are analysed against the software risk file and fed into problem resolution."

Why it happens: PMS and software development are run by different people. The PMS system collects complaints. The development team fixes bugs. Nobody connects the two in writing.

The fix: Document the bridge in the PMS plan and in the SDP. Every complaint that relates to software behaviour triggers a problem report in the development issue tracker with a link back to the PMS record. The problem report is analysed against the risk file under EN 62304 clause 7.4 and clause 9. This bridge is also where MDR Article 83 (PMS) and Article 87 (vigilance) obligations meet the software lifecycle.

A worked example

A Class IIb SaMD manufacturer preparing for Stage 2 runs this list as a self-assessment. They find: trace matrix has three of thirty requirements with no linked test; SOUP list is manual and six months stale; problem resolution records exist only for the last two months; SDP is from 2023 and says V-model; no cybersecurity file; classification rationale is a single sentence.

Two weeks of focused work clears all six, in this order: SDP rewrite (half a day), SOUP automation in CI (one day), classification rationale (half a day), cybersecurity file (three days), trace matrix closure (one day), backfill of problem resolution records (two days). Stage 2 runs with zero software-side major findings.

The Subtract to Ship playbook

Run this list as a self-audit eight weeks before Stage 2. For each finding, ask: "Does this apply to us? If yes, what is the smallest fix that would withstand a lead auditor reading the evidence?" Do not over-engineer, do not buy tools, and do not hire consultants before running the self-audit yourself.

The order above is roughly the priority order. Traceability first, because it cascades. SOUP second, because automation is cheap. Problem resolution third, because backfilling is painful. Cybersecurity fifth, because it is the fastest-growing finding category in 2026.

Reality Check

  1. Can you produce a trace matrix linking every software requirement to its design element, risk control, implementation, and verification test — and would a second reviewer agree the links are semantically correct?
  2. Is your SOUP list generated from the actual build, not hand-maintained?
  3. Does every defect closed in the last 12 months have a problem resolution record that includes impact analysis against the risk file?
  4. Does your software development plan match the way the team actually works today, and was it reviewed in the last 12 months?
  5. Can you show a cybersecurity file with threat model, controls, and vulnerability management per EN IEC 81001-5-1:2022?
  6. Is your software safety classification supported by a written rationale referencing specific hazards and external risk controls?
  7. Is your architecture document aligned with the current codebase, or did it diverge after the last refactor?
  8. Does your pull request template require a risk-impact checkbox, and is it filled in for every change to risk-classified modules?
  9. Is there a documented, working bridge between PMS complaints and software problem resolution?
  10. If a notified body auditor asked you to produce the evidence for any of these ten findings in the next hour, would you succeed?

Frequently Asked Questions

Which of these findings is considered a major non-conformity? Broken traceability (finding 1), stale SOUP (finding 2), and missing cybersecurity evidence (finding 5) are most frequently written as majors because they represent systemic gaps. Single missing problem reports or a slightly stale SDP are usually minors. But severity depends on the auditor and the evidence.

How long do I have to close a major non-conformity after Stage 2? Typically 30 to 90 days depending on the notified body, but the CE certificate is not issued until the CAPA is accepted. Plan for delay.

If my software is Class A under EN 62304, do all these findings still apply? Most do, at reduced depth. Class A still requires clause 5.1 planning, clause 6 maintenance, clause 7 risk management, clause 8 configuration management, and clause 9 problem resolution. What changes is the depth of clauses 5.3–5.7.

Do I need a separate cybersecurity file, or can I merge it with my risk file? Merging is acceptable if the cybersecurity content is clearly identified and traces to EN IEC 81001-5-1:2022 activities. Many teams keep a separate annex under the same risk management process. Either works.

What is the single highest-leverage fix? Trace matrix discipline. If your traceability is clean, findings 1, 3, 6, 8, and 9 largely disappear, because the trace exposes gaps before the auditor finds them.

Can I use these ten findings as my pre-audit checklist? Yes — that is exactly the intent. Review each one, score yourself honestly, fix the failing ones in priority order before you schedule Stage 2.

Sources

  1. Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I §17.2; Annex II §6.1; Articles 83, 87.
  2. EN 62304:2006+A1:2015 — Medical device software — Software life cycle processes. Clauses 4.3, 5.1–5.8, 6, 7, 8, 9.
  3. EN ISO 14971:2019+A11:2021 — Medical devices — Application of risk management to medical devices.
  4. EN IEC 81001-5-1:2022 — Health software and health IT systems safety, effectiveness and security — Part 5-1: Security.
  5. MDCG 2019-16 Rev.1 (July 2020) — Guidance on cybersecurity for medical devices.