Software unit verification under EN 62304:2006+A1:2015 Section 5.5 is the activity where the manufacturer verifies that each software unit meets its detailed design and the coding standards the team has defined. For Class B and Class C software, unit implementation and verification are mandatory, with acceptance criteria documented and evidence captured. For Class A software, the full set of unit-level activities is not required by the standard, though the software still has to be implemented under the planning and requirements activities. Unit testing is not the only verification technique the standard accepts — code review and static analysis count — but it is the cheapest and most reproducible form for most modern teams. The MDR is the North Star. EN 62304:2006+A1:2015 is the tool that operationalises the unit-level verification obligation under MDR Annex I Section 17.2.
By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.
TL;DR
- EN 62304:2006+A1:2015 Section 5.5 defines software unit implementation and verification as the activity where each software unit is coded and then verified against its detailed design and acceptance criteria.
- The activity is mandatory for Class B and Class C software. For Class A, the standard does not require detailed design or unit-level verification — though the software still runs under planning, requirements, and system testing.
- Verification at the unit level can be performed by unit tests, code review, static analysis, or a combination. The standard does not mandate one technique; it mandates that the chosen techniques produce documented evidence that the unit meets its criteria.
- Acceptance criteria must be defined before the verification runs. A unit test with no pre-defined pass/fail threshold is not a verification activity — it is an observation.
- For Class C, the standard adds explicit acceptance criteria categories — correct operation, boundary conditions, error handling, and data/control flow — that the verification approach must address.
- Unit-level evidence is captured automatically from CI wherever possible. Screenshots, hand-written test logs, and reconstructed coverage reports do not survive audit scrutiny.
- Traceability runs from the detailed design to the unit to the verification case to the result. A gap in this chain is a gap in the file.
Why unit verification is where the lifecycle proves it is real
A Notified Body reviewer who wants to know whether a manufacturer's software lifecycle is real — not a paper exercise — goes to the unit-level evidence first. The test plans and traceability matrices at the system level can be polished in a few weeks before an audit. The unit-level evidence cannot. If unit verification has been running continuously from the start, the evidence shows it — commit histories with test runs attached, coverage trends over time, anomaly tickets tied to specific units, refactoring that was safe because the tests caught regressions. If unit verification was bolted on at the end, the evidence shows that too — a flurry of tests added in a narrow window, coverage numbers that appeared overnight, test names that do not match the units they supposedly cover.
Every startup we meet has an opinion on unit testing before they read the standard. Either they believe unit testing is central to good engineering and they already do it, or they believe it slows development and they do not. Both positions collide with EN 62304:2006+A1:2015 in the same way: the standard does not care about opinions. It requires that software units in Class B and Class C items be verified against documented acceptance criteria, with evidence. How you produce the evidence is your choice. That the evidence exists is not.
The good news for the teams who already write unit tests is that the standard accepts what they are doing, as long as the acceptance criteria are documented and the evidence is captured reproducibly. The work is formalising what already runs, not building a parallel activity.
EN 62304:2006+A1:2015 Section 5.5 — what the standard actually requires
Section 5.5 of EN 62304:2006+A1:2015 is the software unit implementation and verification clause. Read carefully, it has three parts.
The first part requires the manufacturer to implement each software unit. Implementation is the coding activity itself — turning the detailed design into source code that meets the design and the coding standards the team has defined. The standard does not prescribe a language, a style guide, or a framework; it requires that whatever coding standards the manufacturer chooses are documented and followed.
The second part requires the manufacturer to establish software unit verification procedures and acceptance criteria, and to verify that each software unit meets those criteria before it is integrated. The acceptance criteria are defined at the planning stage and refined during detailed design. The verification procedures are the techniques the team will use — unit tests, code review, static analysis, or a combination — to check the unit against the criteria.
The third part, which applies specifically to Class C software, adds explicit acceptance criteria categories the verification approach has to address: proper event sequence, data and control flow, planned resource allocation, fault handling including error definition and isolation, initialisation of variables, self-diagnostics, memory management and memory overflows, and boundary conditions. The standard does not require a specific test for each category — it requires that the verification approach address each one.
The activity sits inside the MDR chain through Annex I Section 17.2, which requires software to be developed in accordance with the state of the art taking into account the principles of development life cycle, risk management, verification, and validation. (Regulation (EU) 2017/745, Annex I, Section 17.2.) Unit verification is the lowest level at which that obligation lands on the code itself. The record of unit verification is part of the technical documentation described in MDR Annex II.
The approach — unit testing, code review, static analysis
The standard accepts multiple verification techniques at the unit level, and most startups end up using a combination. The three that matter are unit testing, code review, and static analysis.
Unit testing is the execution-based technique. The team writes tests that exercise the unit with defined inputs, capture the outputs, and compare them to expected values. The tests run in CI, produce logs with timestamps and version identifiers, and fail loudly when the unit drifts from its expected behaviour. This is the cheapest and most reproducible form of unit verification for software that can be exercised in isolation — which, for most modern medical software, is most of it.
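The shape of execution-based unit verification can be sketched in a few lines. The unit `compute_dose_ml`, the criterion identifier AC-DD-042, and the numeric limits below are all hypothetical — illustrations of the pattern, not content from the standard or any real detailed design:

```python
# Hypothetical unit under test. The acceptance criterion it implements is
# stated in the docstring and carries a stable ID (AC-DD-042) that the
# verification cases reference, closing the trace back to the design.

def compute_dose_ml(weight_kg: float, mg_per_kg: float, max_ml: float = 50.0) -> float:
    """AC-DD-042: dose = weight_kg * mg_per_kg / 10, capped at max_ml;
    non-positive weight must raise ValueError."""
    if weight_kg <= 0:
        raise ValueError("weight_kg must be positive")
    return min(weight_kg * mg_per_kg / 10.0, max_ml)


# Each verification case names the criterion it checks: a defined input,
# a captured output, and a pre-defined pass/fail comparison.
def test_ac_dd_042_nominal():
    assert compute_dose_ml(70.0, 5.0) == 35.0    # expected output for specified input

def test_ac_dd_042_boundary():
    assert compute_dose_ml(200.0, 5.0) == 50.0   # boundary condition: the cap applies

def test_ac_dd_042_error():
    try:
        compute_dose_ml(0.0, 5.0)
    except ValueError:
        return                                   # defined error raised as specified
    raise AssertionError("expected ValueError for non-positive weight")


test_ac_dd_042_nominal()
test_ac_dd_042_boundary()
test_ac_dd_042_error()
```

In CI, a runner such as pytest would collect and execute these cases on every push, producing the timestamped, versioned log that becomes the verification record.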
Code review is the inspection-based technique. A reviewer who did not write the unit reads it against the detailed design and the coding standards, raises questions, and records findings. Code review catches things unit tests cannot — readability issues, coding-standard violations, architectural drift, logic the tests do not exercise. For safety-critical units in Class C software, code review is often required on top of unit testing, not instead of it.
Static analysis is the tool-based technique. A static analyser reads the code without executing it and flags patterns associated with defects — uninitialised variables, null-pointer dereferences, buffer overruns, memory leaks, unreachable code, coding-standard violations. For safety-critical code, a static analyser that enforces the manufacturer's coding standards produces evidence that maps directly to several of the Class C acceptance criteria categories in Section 5.5 — initialisation of variables, memory management, fault handling.
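The principle — read the code without executing it, flag patterns the coding standard forbids — fits in a small sketch. The rule below (no bare `except:` clauses, which swallow errors and undermine fault handling) is an example rule chosen for illustration, not one the standard names; real projects would run a dedicated analyser rather than hand-rolled checks:

```python
import ast

# A minimal static check: parse the source into an AST without running it
# and report every bare "except:" clause with its file and line number.
def find_bare_excepts(source: str, filename: str = "<unit>"):
    """Return (filename, line) for each bare except clause in source."""
    violations = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        # A bare "except:" is an ExceptHandler whose exception type is None.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            violations.append((filename, node.lineno))
    return violations


clean = "try:\n    x = 1\nexcept ValueError:\n    pass\n"
dirty = "try:\n    x = 1\nexcept:\n    pass\n"

assert find_bare_excepts(clean) == []
assert find_bare_excepts(dirty) == [("<unit>", 3)]
```

Wired into CI, a non-empty violation list fails the build — which is exactly the merge-blocking behaviour described above, and each flagged line is a piece of evidence that the coding standard is enforced rather than aspirational.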
The lean approach combines the three. Unit tests for behavioural verification. Code review as a gate on every change that touches regulated code. Static analysis running automatically in CI and blocking merges on violations. Each technique produces its own evidence, each traces back to the acceptance criteria, and the three together cover more ground than any one of them alone.
Acceptance criteria — the part most teams skip
The single most common finding at audit in this area is that unit tests exist but acceptance criteria do not. A test that asserts `expect(result).toEqual(42)` is not, by itself, a verification against acceptance criteria. It is a test. The acceptance criterion is the statement of what the unit is supposed to do that the test is measuring against.
Acceptance criteria are defined before the verification runs. They come from the detailed design, and they answer questions like: what are the expected outputs for the specified inputs, what are the boundary conditions the unit has to handle, what errors must the unit raise under what conditions, what performance thresholds must the unit meet, what state must the unit leave the system in after execution. For Class C software, the criteria also cover the categories Section 5.5 calls out explicitly — event sequencing, data and control flow, resource allocation, fault handling, initialisation, self-diagnostics, memory management, boundary conditions.
The form of the acceptance criteria does not have to be heavy. A short block of structured text in the detailed design that lists the criteria for a unit, referenced from the unit test file by a stable identifier, is enough. What the Notified Body will look for is the link from the test to the criterion, and the link from the criterion to the detailed design. A test run with no acceptance criterion is an observation. An acceptance criterion with no test is an unverified claim. Both are findings.
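A lightweight form of that structured block can be sketched as data. The criterion IDs, unit names, and test names below are hypothetical; the point is the two-way link — every criterion referenced by at least one verification case, every case pointing at a criterion:

```python
from dataclasses import dataclass

# A sketch of acceptance criteria as lightweight structured records in the
# detailed design. IDs and wording are illustrative, not from the standard.
@dataclass(frozen=True)
class AcceptanceCriterion:
    id: str         # stable identifier referenced from the test file
    unit: str       # software unit the criterion applies to
    statement: str  # pre-defined pass/fail condition

CRITERIA = [
    AcceptanceCriterion("AC-DD-042", "dose_calculator",
                        "Dose equals weight * rate / 10, capped at 50 ml."),
    AcceptanceCriterion("AC-DD-043", "dose_calculator",
                        "Non-positive weight raises a defined error."),
]

# Verification cases declare which criterion they verify.
TEST_INDEX = {
    "test_nominal_dose":   "AC-DD-042",
    "test_weight_boundary": "AC-DD-042",
    "test_invalid_weight": "AC-DD-043",
}

# A criterion with no test is an unverified claim; surface it mechanically.
covered = set(TEST_INDEX.values())
uncovered = [c.id for c in CRITERIA if c.id not in covered]
assert uncovered == []
```

The design choice is that the criteria live next to the code and the link is queryable — which is what lets a reviewer follow test to criterion to design without a hand-maintained report.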
Scaling by safety class — A, B, and C
The depth of unit-level activity scales with the software safety class under EN 62304:2006+A1:2015. The scaling is not a style preference; it is written into the standard, and misreading it is expensive in both directions.
Class A. The standard does not require detailed design for Class A software, and it does not require unit implementation and verification as a distinct activity. Class A software is still implemented, and it is still verified at the system level under Section 5.7, but the unit-level evidence is not mandated by the standard. This does not mean a team cannot write unit tests for Class A software — most teams do, because unit tests are cheap insurance. It means the standard does not require the documentation and evidence capture at the unit level for Class A that it requires for B and C. The subtraction here is real: a legitimately Class A module does not need the unit-level file that a Class B module needs.
Class B. Class B software requires detailed design at the software-item level and unit implementation and verification against the acceptance criteria defined in the detailed design. Unit-level evidence is captured, traced to the criteria, and reviewed. The standard does not require the exhaustive Class C criteria categories, but it does require that the verification approach be documented and that the evidence survive audit review.
Class C. Class C software requires detailed design down to the unit level, and the full Class C acceptance criteria categories — event sequencing, data and control flow, resource allocation, fault handling, initialisation, self-diagnostics, memory management, boundary conditions — must be addressed by the verification approach. Class C is also where the independence of the verification starts to matter: the person who verifies a unit should ideally not be the person who wrote it, or the verification should be reviewed by someone other than the author. The standard does not mandate a specific independence structure, but it mandates that the verification activity is something more than the author running their own tests on their own code.
Assigning software items to the right class is the subtraction move that pays the most here. A team that assigns Class C to everything to be safe is paying the Class C cost on every unit in the system. A team that assigns Class A to everything to avoid the work is building a file that will not survive the first risk-analysis review by the Notified Body. The class is determined by the EN ISO 14971:2019+A11:2021 risk analysis, honestly applied at the item level.
Automated testing and CI evidence
The regulatory artefact of a unit verification is a reproducible record — input, execution, output, pass/fail judgement, software version, timestamp. A CI test runner produces this record natively. A hand-run test with a screenshot does not.
The practice that makes unit-level evidence cheap is wiring the verification into CI from the first commit that is intended to become regulated code. Unit tests run on every push. Static analysis runs on every push and blocks merges on violations. Code review is enforced by the pull-request workflow. Coverage is tracked over time and reported per commit. Every one of these is a regulatory artefact the moment the team decides to treat it as one — which means documenting the CI configuration as part of the software verification plan, storing the logs with the release, and tying the results to the acceptance criteria.
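What such a record might look like when assembled by a CI step, sketched under assumed field names — `software_version` would come from the commit hash the CI system supplies, and the per-case results from the test runner's output; none of these names are prescribed by the standard:

```python
import datetime
import json

# A sketch of a reproducible unit-verification record: per-case pass/fail
# judgements tied to criteria, plus version and timestamp, captured
# mechanically rather than by hand.
def build_record(results, version):
    return {
        "software_version": version,  # e.g. release tag + commit hash from CI
        "timestamp_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "cases": results,
        "verdict": "pass" if all(r["passed"] for r in results) else "fail",
    }


results = [
    {"case": "test_nominal_dose",   "criterion": "AC-DD-042", "passed": True},
    {"case": "test_invalid_weight", "criterion": "AC-DD-043", "passed": True},
]
record = build_record(results, version="1.4.2+abc1234")
print(json.dumps(record, indent=2))  # stored with the release as evidence
```

A hand-run test with a screenshot carries none of these fields; this record carries all of them and is regenerated identically on every run of the same version.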
Teams that set this up in sprint one pay almost nothing to maintain it. Teams that bolt it on before audit week pay heavily and end up with a record the auditor does not trust because it appeared suddenly. The cost curve rewards early investment more sharply in this activity than almost anywhere else in the lifecycle.
Traceability from detailed design to verification
Traceability at the unit level runs from the detailed design to the software unit to the acceptance criteria to the verification case to the verification result. Every link has to close. A unit with no detailed design entry is untraceable. An acceptance criterion with no verification case is an unverified claim. A verification case with no link to an acceptance criterion is either orphaned or covering unspecified behaviour.
The practice that keeps traceability cheap is storing the detailed design, the acceptance criteria, and the verification cases in the same repository as the code, with stable identifiers that cross-reference. Traceability then becomes a query over the repository, not a hand-maintained spreadsheet. A broken link becomes a CI failure, not an audit finding.
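Reduced to a sketch, the query is a scan for the stable identifiers on both sides of the link. The ID scheme (`AC-DD-###`) and the file contents below are illustrative stand-ins for the design text and test sources stored in the repository:

```python
import re

# Traceability as a query: collect criterion IDs from the design text and
# from the test sources, then diff the two sets. The ID scheme is assumed.
ID_PATTERN = re.compile(r"AC-DD-\d{3}")

design_text = """
AC-DD-042: dose equals weight * rate / 10, capped at 50 ml.
AC-DD-043: non-positive weight raises a defined error.
"""

test_text = """
def test_nominal_dose():
    # verifies AC-DD-042
    ...
def test_invalid_weight():
    # verifies AC-DD-043
    ...
"""

criteria = set(ID_PATTERN.findall(design_text))
referenced = set(ID_PATTERN.findall(test_text))

uncovered = criteria - referenced  # criterion with no verification case
orphaned = referenced - criteria   # verification case pointing at nothing

# In CI, either non-empty set fails the build — a broken link becomes a
# build failure today instead of an audit finding later.
assert not uncovered and not orphaned
```

In a real repository the two strings would be read from the design documents and test files on disk, but the mechanics are the same: two set operations close the chain or expose the gap.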
Common mistakes startups make with unit verification
- Writing unit tests without documented acceptance criteria. The tests may be technically correct, but without the criteria the auditor cannot tell what is being verified against what.
- Assigning Class A to items that the risk analysis would support as Class B, to avoid the unit-verification obligation. This is a misrepresentation the auditor will surface during the risk-file review.
- Assigning Class C to items that are legitimately Class B, to be safe. This pays the Class C cost on every unit and slows the team down for no compliance benefit.
- Capturing unit-test evidence by screenshot. Screenshots are not reproducible records. Use CI logs with version and timestamp.
- Treating code review as informal. For Class B and Class C code, code review is part of the verification activity and has to produce a record — who reviewed, when, what was found, what was resolved.
- Letting static analysis violations accumulate as warnings instead of treating them as merge blockers. A warning nobody fixes is a warning that erodes the credibility of the whole analysis.
- Running unit tests manually the week before release. Unit tests that do not run continuously in CI cannot produce the evidence trail the standard expects.
- Conflating unit tests with integration tests or system tests. Each level has a job. Unit tests verify units against the detailed design. Integration tests verify interfaces. System tests verify the integrated software against the requirements. Mixing the levels leaves gaps at each one.
The Subtract to Ship angle
Unit verification is the activity where the class-based scaling of EN 62304:2006+A1:2015 gives the biggest subtraction dividend if the classification is honest. The moves that work are these.
One — assign the software safety class at the item level, honestly, based on the risk file. Class A where Class A is defensible, Class B where Class B applies, Class C where the risk analysis demands it. No bulk assignment in either direction.
Two — automate the verification evidence from day one. Unit tests in CI, static analysis in CI, code review enforced by pull-request gates. The evidence is a byproduct of the engineering, not a separate artefact.
Three — document the acceptance criteria once, in the detailed design, and link them from the tests by stable identifiers. A short block of criteria per unit beats a long hand-written test report.
Four — do not build a parallel unit-test process for the regulatory file. The unit tests the team already runs are the unit tests in the file. The work is capturing them and linking them, not writing new ones.
Five — keep the coding standards short and enforceable. A twenty-page coding standard nobody follows is worse than a one-page standard the static analyser enforces on every commit.
Every activity still traces to a specific clause of EN 62304:2006+A1:2015 and, through the standard, to MDR Annex I Section 17.2. Activities that do not trace come out. For the broader framework, see post 065.
Reality Check — Is your software unit verification ready for a Notified Body review?
- Do you have a software verification plan that defines the unit-level techniques the team uses — unit tests, code review, static analysis — and the evidence each one produces?
- Are acceptance criteria documented in the detailed design for every Class B and Class C software unit, with stable identifiers that the verification cases reference?
- For Class C items, does the verification approach address event sequencing, data and control flow, resource allocation, fault handling, initialisation, self-diagnostics, memory management, and boundary conditions?
- Do your unit tests run automatically in CI on every change, with evidence captured in a reproducible form — inputs, outputs, version, timestamp?
- Is static analysis running in CI, with violations blocking merges rather than accumulating as warnings?
- Is code review a required gate on every change that touches Class B or Class C code, with a record of who reviewed and what was resolved?
- Does traceability close from the detailed design to the unit to the acceptance criteria to the verification case to the result, without gaps?
- Is the software safety class assigned at the item level based on the EN ISO 14971:2019+A11:2021 risk analysis, or is it bulk-assigned for convenience?
- Does the unit-verification documentation reflect what the engineering team actually does, or does it describe a process that exists only on paper?
Any question you cannot answer with a clear yes is a gap between your current practice and what the Notified Body will expect to see.
Frequently Asked Questions
Is unit testing mandatory under EN 62304:2006+A1:2015? Unit-level verification is mandatory for Class B and Class C software. The standard does not mandate unit testing specifically as the technique — code review and static analysis are also accepted — but it mandates that the chosen techniques produce evidence that each unit meets its acceptance criteria. Most teams use unit testing because it is the cheapest and most reproducible form for software that can be exercised in isolation.
Does Class A software need unit tests under the standard? No. The standard does not require detailed design or unit implementation and verification as a distinct activity for Class A software. Class A software is still implemented and still verified at the system level under Section 5.7, but the unit-level file required for Class B and Class C is not mandated for Class A. Most teams still write unit tests for Class A code because they are cheap insurance, but the regulatory obligation does not apply.
What are the Class C acceptance criteria categories? For Class C software, the verification approach must address proper event sequencing, data and control flow, planned resource allocation, fault handling including error definition and isolation, initialisation of variables, self-diagnostics, memory management and memory overflows, and boundary conditions. The standard does not prescribe specific tests for each category — it requires that the verification approach documents how each category is covered.
Can static analysis substitute for unit testing in EN 62304:2006+A1:2015? Partly. Static analysis covers several of the Class C categories — initialisation, memory management, some fault-handling patterns — that are hard to cover exhaustively with execution-based tests. For those categories static analysis is often the stronger technique. For behavioural verification — does the unit compute the right output for a given input — unit testing is usually needed. The two techniques complement each other rather than substitute.
Do I need independent verification at the unit level? For Class C software, the standard expects the verification activity to have some form of independence from the author of the unit — typically achieved through code review by a second engineer, or through verification owned by a team member other than the implementer. The standard does not mandate a specific organisational structure, but verification performed entirely by the author of the code, with no second pair of eyes, will draw more scepticism from a reviewer than verification checked by someone other than the author.
Where does unit-verification evidence live in the MDR technical file? In the software documentation section of the technical file required by MDR Annex II. The expected contents are the software verification plan, the detailed design with acceptance criteria, the verification cases, the verification results (CI logs, static analysis reports, code review records), and the traceability matrix. (Regulation (EU) 2017/745, Annex II.) The form can be structured markdown, PDFs, or a combination, as long as a reviewer can reproduce the activity from the record.
Related reading
- MDR Software Lifecycle Requirements: How IEC 62304 Helps You Demonstrate Conformity — the lifecycle overview that frames where unit verification fits.
- EN 62304 Software Safety Classification A, B, C — the classification that scales the unit-level activities.
- Software Development Planning Under EN 62304 — the planning activity where verification techniques are chosen.
- Software Architectural Design Under EN 62304 — the architecture activity that precedes detailed design.
- Software Detailed Design Under EN 62304 — the design activity that produces the acceptance criteria for unit verification.
- Software Requirements Analysis Under EN 62304 — the requirements activity that drives the verification chain.
- Software Integration Testing Under EN 62304 — the activity immediately after unit verification.
- MDR Software System Testing: Validating the Complete System via IEC 62304 — the system-level verification that unit testing feeds into.
- Software Configuration Management Under EN 62304 — the process that keeps unit evidence tied to specific versions.
- Software Problem Resolution Under EN 62304 — the process that handles anomalies surfaced during unit verification.
- The Subtract to Ship Framework for MDR Compliance — the methodology pillar this post applies to unit verification.
Sources
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Annex I Section 17.1 and Section 17.2; Annex II (Technical Documentation). Official Journal L 117, 5.5.2017.
- EN 62304:2006+A1:2015 — Medical device software — Software life-cycle processes (IEC 62304:2006 + IEC 62304:2006/A1:2015), Section 5.5 — Software unit implementation and verification. Harmonised standard referenced for the software lifecycle under MDR Annex I Section 17.2.
- EN ISO 14971:2019+A11:2021 — Medical devices — Application of risk management to medical devices. Harmonised standard referenced for risk management under MDR Annex I, integrated with EN 62304:2006+A1:2015 for the software safety class determination.
- EN ISO 13485:2016+A11:2021 — Medical devices — Quality management systems — Requirements for regulatory purposes. Harmonised standard referenced for the QMS that wraps the software lifecycle.
This post is a category-9 spoke in the Subtract to Ship: MDR blog, focused on software unit implementation and verification under EN 62304:2006+A1:2015 Section 5.5. Authored by Felix Lenhard and Tibor Zechmeister. The MDR is the North Star for every claim in this post — EN 62304:2006+A1:2015 is the harmonised tool that operationalises the unit-level verification obligation under MDR Annex I Section 17.2, not an independent authority. For startup-specific regulatory support on software safety classification, verification planning, and audit-ready evidence capture, Zechmeister Strategic Solutions is where this work is done in practice.