EN 62304:2006+A1:2015 Section 5.2 requires the manufacturer to establish, document, and maintain software requirements for each software system. The requirements must be derived from the system requirements, must cover functional and capability requirements, software system inputs and outputs, interfaces between the software and other systems, software-driven alarms and warnings, security requirements, usability-related requirements, data definitions and database requirements, installation and acceptance requirements, operation and maintenance requirements, and the risk control measures implemented in software. Each requirement must be uniquely identified, verifiable, consistent with the others, and traceable to the system requirements and to the risk-management file under EN ISO 14971:2019+A11:2021. The MDR is the North Star. EN 62304:2006+A1:2015 is the tool.
By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.
TL;DR
- EN 62304:2006+A1:2015 Section 5.2 is the clause that requires software requirements analysis and a documented software requirements specification for every medical device software project.
- Requirements must be derived from the system requirements, be verifiable, be uniquely identified, be internally consistent, and trace upward to the system requirements and downward to the design and verification activities.
- The clause enumerates the categories the requirements must cover — functional, interface, data, alarm, security, usability, installation, maintenance — and the manufacturer has to address each category that applies.
- Safety-related requirements that implement risk control measures are a distinct category. They have to be flagged, traced to the risk-management file under EN ISO 14971:2019+A11:2021, and verified with evidence proportional to the software safety class.
- Each requirement needs an acceptance criterion a tester can turn into a runnable pass-or-fail test. A requirement you cannot verify is a requirement the Notified Body will reject.
- The software requirements specification feeds directly into the technical documentation under MDR Annex II — the Notified Body reads it to understand what the software is supposed to do and how the manufacturer proved it.
- The most common gap is not missing requirements. It is requirements that are too vague to verify and too tangled with implementation to trace.
Why software requirements are where the lifecycle either holds together or falls apart
Every software team we work with hits the same wall when they start writing requirements for a medical device. The team has been shipping product for months, maybe years. Features exist. Users use them. Everybody on the team knows what the software does. And then somebody — the regulatory lead, the incoming QA manager, the Notified Body pre-assessment — asks for the software requirements specification, and the room goes quiet. The features exist in the code. The requirements exist in somebody's head, in a Notion doc, in the product backlog, in Slack threads from eighteen months ago. They do not exist in one traceable, verifiable, signed-off document.
That gap between "we know what it does" and "we have a requirements specification that survives audit" is where most Notified Body findings on medical software originate. The findings are rarely that the software is unsafe. The findings are that the manufacturer cannot show, from the requirements forward, that the software does what it is supposed to do and nothing it is not supposed to do. Without a requirements specification, the rest of the lifecycle has nothing to trace to. Architecture hangs in the air. Verification proves nothing in particular. Risk controls point at code that no requirement demanded. The file looks busy but the chain does not close.
EN 62304:2006+A1:2015 Section 5.2 is the clause that forces the chain to close. It requires the manufacturer to produce software requirements that are specific, verifiable, traceable, and complete across a defined set of categories. The clause is not decoration around the real engineering work. It is the spine of the regulatory file for every line of code that follows. This post walks through what Section 5.2 requires, how to write functional and non-functional requirements that survive audit, how to handle safety-related requirements, how to establish traceability to the system requirements and the risk file, how to define verification criteria, and the mistakes we see repeatedly when startups try to short-circuit the activity.
Section 5.2 scope — what the clause actually requires
Section 5.2 of EN 62304:2006+A1:2015 — Software requirements analysis — sits directly after the software development planning clause (Section 5.1) and before architectural design (Section 5.3). The sequence matters. Requirements come after the plan because the plan tells you how requirements will be written and managed. Requirements come before the architecture because the architecture is derived from what the software has to do, not the other way around.
The clause requires the manufacturer to define and document software requirements for each software system. The requirements are derived from the system requirements — the higher-level requirements that describe what the device as a whole does — and are specific to the software part of the system. Where the device is pure software (a standalone MDSW with no hardware), the system requirements and the software requirements may look similar, but the two layers still exist and still have to trace. Where the device has hardware and software, the software requirements are the portion of the system behaviour the software is responsible for.
The clause lists the categories the software requirements must cover. Functional and capability requirements — what the software does. Software system inputs and outputs — what data comes in and what goes out, in what format, at what rate, under what conditions. Interfaces between the software and other systems — APIs, protocols, hardware interfaces, third-party services. Software-driven alarms, warnings, and operator messages — which conditions trigger which alerts. Security requirements — authentication, authorisation, data protection, audit logging. Usability-related requirements that trace from the usability engineering file. Data definitions and database requirements. Installation and acceptance requirements. Operation and maintenance requirements. And — critically — the risk control measures that are implemented in software.
Each of those categories is addressed to the extent it applies to the specific software system. A standalone decision-support MDSW will have a large functional section and extensive data and interface requirements. A pacemaker firmware module will have a different emphasis. The standard does not prescribe volume. It prescribes that every applicable category is addressed, and that the requirements in each category are specific, verifiable, consistent, and traceable.
Section 5.2 also requires the manufacturer to include risk control measures identified under the software risk-management process (Section 7 of the standard) as software requirements. This is the bridge between the risk file and the requirements file — risk controls implemented in software become requirements, and those requirements are flagged so the verification activities downstream give them the rigour that the software safety class demands.
Finally, the clause requires the manufacturer to verify the software requirements before the next lifecycle activity begins. Verification of requirements is different from verification of code. It asks: are the requirements complete, consistent with each other, consistent with the system requirements, free of contradictions, verifiable, and traceable? If yes, the activity closes and design can start. If no, the requirements go back to the author.
Functional and non-functional requirements — writing them so a tester can verify them
Functional requirements describe what the software does — the inputs it accepts, the computations it performs, the outputs it produces, the states it can be in, the transitions between states. Non-functional requirements describe how the software does it — performance, reliability, availability, scalability, maintainability, portability, and behaviour under load.
A functional requirement has three properties that determine whether it survives audit. It is specific — a single, unambiguous statement of behaviour. It is verifiable — a tester can write a test that produces a pass or fail result against it. And it is atomic — one requirement, one behaviour. Compound requirements that bundle three behaviours into one sentence cannot be verified cleanly, because a partial pass is neither a pass nor a fail, and the traceability chain breaks.
A requirement like "the system shall process patient vital-sign data quickly" fails all three properties. It is not specific — what does "process" mean, and which vital signs? It is not verifiable — "quickly" has no threshold. It is not atomic — "process" hides an entire pipeline. Rewritten, the same intent becomes something like "the system shall compute the heart-rate variability metric from the incoming ECG signal within 2 seconds of receiving a complete 60-second ECG sample at 500 Hz." That version is specific (heart-rate variability, ECG, sampling rate), verifiable (2 seconds, measurable), and atomic (one computation, one timing constraint). A tester can write a test for it today.
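That rewritten requirement is concrete enough to drive an executable test. A minimal sketch of such a test harness in Python; the requirement identifier `REQ-SW-042` and the `compute_hrv` stub are hypothetical stand-ins for illustration, not anything the standard prescribes:

```python
import time
import statistics

SAMPLE_RATE_HZ = 500     # from the requirement: 500 Hz ECG
SAMPLE_SECONDS = 60      # complete 60-second sample
LATENCY_LIMIT_S = 2.0    # from the requirement: within 2 seconds

def compute_hrv(ecg_samples):
    """Hypothetical stand-in for the HRV computation under test.
    It derives a trivial variability figure so the timing harness
    has something real to measure."""
    diffs = [abs(b - a) for a, b in zip(ecg_samples, ecg_samples[1:])]
    return statistics.mean(diffs)

def test_req_sw_042_hrv_latency():
    """REQ-SW-042: HRV metric computed within 2 s of receiving a
    complete 60 s ECG sample at 500 Hz. Pass or fail is unambiguous."""
    ecg = [float(i % 7) for i in range(SAMPLE_RATE_HZ * SAMPLE_SECONDS)]
    start = time.monotonic()
    result = compute_hrv(ecg)
    elapsed = time.monotonic() - start
    assert elapsed < LATENCY_LIMIT_S, f"latency {elapsed:.2f}s exceeds limit"
    assert result >= 0.0
```

The point is not the stub. It is that every constant in the test maps one-to-one to a number in the requirement sentence, which is what makes the requirement verifiable.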
Non-functional requirements are often where teams cut corners, because they feel softer than functional requirements. This is exactly where audits hit hard. "The system shall be reliable" is not a requirement. "The system shall maintain an uptime of at least 99.5% measured over rolling 30-day windows, excluding scheduled maintenance windows of no more than 4 hours per month announced at least 24 hours in advance" is a requirement. The test is the same: can a tester write a verification procedure that produces a pass or fail answer? If yes, it is a requirement. If no, it is a wish.
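The uptime requirement is equally mechanical to check once the thresholds are explicit. A sketch of the measurement logic; the event format (hours of downtime plus a flag for scheduled, announced maintenance) is an assumption made for illustration:

```python
WINDOW_HOURS = 30 * 24            # rolling 30-day window from the requirement
UPTIME_THRESHOLD = 0.995          # 99.5% from the requirement
MAX_EXCLUDED_MAINT_HOURS = 4.0    # scheduled-maintenance cap per month

def uptime_in_window(downtime_events):
    """Each event: (hours_down, was_scheduled_and_announced).
    Scheduled, announced maintenance is excluded up to the 4 h cap;
    anything beyond the cap counts as ordinary downtime."""
    excluded = 0.0
    down = 0.0
    for hours, scheduled in downtime_events:
        if scheduled:
            allowed = min(hours, MAX_EXCLUDED_MAINT_HOURS - excluded)
            excluded += allowed
            down += hours - allowed
        else:
            down += hours
    return (WINDOW_HOURS - excluded - down) / (WINDOW_HOURS - excluded)
```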
The discipline that makes both functional and non-functional requirements work is writing each one alongside its verification procedure. You do not finalise the requirement until you can write the test that proves it. This single practice eliminates most of the vagueness that would otherwise ship into the design activity and blow up at system testing.
Safety-related requirements — the risk control bridge
Safety-related software requirements are the ones that implement risk controls identified under the software risk-management process. These are not a separate document. They are requirements in the main software requirements specification, flagged so the downstream activities treat them with the rigour their safety class demands.
The bridge works like this. The EN ISO 14971:2019+A11:2021 risk-management process identifies hazards and hazardous situations for the device. For each hazardous situation that the software contributes to, the team considers risk control options. Some controls are external to the software — a hardware interlock, a clinical procedure, a label warning. Some controls are implemented inside the software — an input validation check, an alarm condition, a rate-limiter, a redundancy check. The controls that are implemented inside the software become software requirements under Section 5.2.
Each such requirement carries a flag or a traceability link back to the risk-management file entry that demanded it. The flag signals to everyone downstream — architect, developer, tester, auditor — that this requirement is a risk control, and that failure to implement or verify it has safety consequences, not merely functional ones. In a Class C software item, safety-related requirements drive the deepest verification activities in the lifecycle. In a Class B item they still drive significant rigour. In a Class A item — which by definition cannot contribute to a hazardous situation resulting in unacceptable risk — safety-related requirements may not exist at all, though the risk analysis has to support that conclusion.
The traceability is bidirectional. From the risk file forward, every risk control implemented in software must appear as at least one requirement. From the requirements backward, every requirement flagged as safety-related must link to at least one risk file entry. A risk control with no corresponding requirement is a control that was never implemented. A safety-flagged requirement with no risk file entry is a flag placed by someone who did not understand the bridge. Both are findings waiting to happen.
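The bidirectional check is easy to automate once controls and requirements carry stable identifiers. A sketch, assuming a simple in-memory representation of both files; the field names are illustrative, not a prescribed schema:

```python
def traceability_findings(risk_controls, requirements):
    """risk_controls: {control_id: {"implemented_in_software": bool}}
    requirements: {req_id: {"safety_flag": bool, "risk_links": set of ids}}
    Returns one finding string per broken direction of the bridge."""
    findings = []
    # Forward: every software-implemented control needs >= 1 requirement.
    linked_controls = set()
    for req in requirements.values():
        linked_controls |= req["risk_links"]
    for cid, ctrl in risk_controls.items():
        if ctrl["implemented_in_software"] and cid not in linked_controls:
            findings.append(f"{cid}: software risk control with no requirement")
    # Backward: every safety-flagged requirement needs >= 1 risk entry.
    for rid, req in requirements.items():
        if req["safety_flag"] and not req["risk_links"]:
            findings.append(f"{rid}: safety flag with no risk-file link")
    return findings
```

Run as part of continuous integration, a check like this turns "findings waiting to happen" into a build failure the team sees the day the link breaks.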
Traceability upward to system requirements and downward to design
Traceability is the property of the requirements file that turns it from a list into a chain. Section 5.2 requires that the software requirements be traceable to the system requirements. Other clauses of the standard require that the software requirements trace downward to the architectural design, the detailed design, the source code, the verification activities, and the risk-management file.
A traceability record is not a spreadsheet bolted on at the end of the project. It is a living structure that grows as the requirements are written. The minimum structure is: every system requirement has zero or more software requirements that derive from it; every software requirement derives from one or more system requirements, or is explicitly marked as originating from a risk control in the risk file; every software requirement links forward to the architectural element that addresses it and to the verification activity that proves it.
For startup teams, the practical move is to keep the traceability in the same repository as the requirements themselves, not in a separate tool. Requirements written as structured markdown with stable identifiers, linked by those identifiers in the architecture document, the test plans, and the risk file, produce a traceability record that is queryable with ordinary repository tooling. The moment the traceability lives in a tool nobody looks at, the chain rots.
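With stable identifiers in plain-text documents, the downstream trace becomes queryable with a few lines of ordinary tooling. A sketch, assuming a hypothetical `REQ-SW-NNN` identifier scheme; any real repository would substitute its own ID convention and file layout:

```python
import re

REQ_ID = re.compile(r"\bREQ-SW-\d{3}\b")  # assumed identifier scheme

def ids_in(text):
    """All requirement identifiers mentioned in a document."""
    return set(REQ_ID.findall(text))

def untraced(requirements_md, architecture_md, test_plan_md):
    """Requirement IDs defined in the requirements document that never
    appear downstream, i.e. requirements with a broken forward trace."""
    reqs = ids_in(requirements_md)
    downstream = ids_in(architecture_md) | ids_in(test_plan_md)
    return sorted(reqs - downstream)
```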
The upward trace closes the question "why does this software requirement exist?" The downward trace closes the question "how do we know it was implemented and verified?" An auditor can start at either end. A requirements file that answers both questions is audit-ready. A file that answers only one of them is not.
Verification criteria — what it takes to close a requirement
A requirement closes when verification evidence exists that satisfies the acceptance criterion written into the requirement. The verification criterion is not separate from the requirement — it is part of it. A requirement without an acceptance criterion is an incomplete requirement, no matter how well-written the first sentence is.
Acceptance criteria can take several forms depending on what is being verified. For a computational requirement, the criterion is a set of test inputs and expected outputs. For a performance requirement, the criterion is a measurement procedure and a threshold. For a state-transition requirement, the criterion is a sequence of events and the expected intermediate and final states. For an interface requirement, the criterion is a protocol conformance test. For a risk control requirement, the criterion is a test that proves the control operates under the conditions the risk file specifies.
The rigour of the verification scales with the software safety class. A Class A requirement may be verified at the system-test level only. A Class B requirement typically requires integration-level verification. A Class C requirement — particularly a safety-flagged one — may require unit-level, integration-level, and system-level verification, all captured as reproducible evidence from an automated test run.
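One way to make that scaling enforceable is to encode it as data an evidence pipeline can check. A sketch; the mapping below is an assumed encoding of the scaling described above, and the exact levels a manufacturer applies are defined in its own development plan, not by this table:

```python
# Assumed class-to-level mapping for illustration; a manufacturer's
# software development plan is the authoritative source for these levels.
VERIFICATION_LEVELS = {
    "A": {"system"},
    "B": {"integration", "system"},
    "C": {"unit", "integration", "system"},
}

def missing_evidence(safety_class, evidence_levels):
    """Verification levels the plan demands for this safety class
    that no collected evidence covers yet."""
    return sorted(VERIFICATION_LEVELS[safety_class] - set(evidence_levels))
```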
The practical test of a verification criterion is the "tester reading cold" test. Hand the requirement and its acceptance criterion to an engineer who has not been part of the project. Can they write a test that produces an unambiguous pass or fail? If yes, the criterion works. If no, it needs to be rewritten before it reaches the design activity.
Common mistakes startups make writing software requirements
The mistakes cluster in the same places across the teams we have worked with, and they are all fixable before the audit if somebody is watching for them.
Requirements that describe implementation, not behaviour. "The system shall use a Kalman filter to smooth the signal" is an implementation choice, not a requirement. The requirement is what the smoothed signal has to look like. Implementation belongs in the design document. Mixing them collapses the architectural layer and makes every change to the implementation a change to the requirement.
Requirements that are not atomic. A single sentence that describes three behaviours chained together cannot be verified cleanly and cannot be traced cleanly. The fix is mechanical — split it into three requirements, give each one its own identifier, and link them if they need to share context.
Requirements written after the code. A requirement written to match code that already exists is not a requirement — it is a description. Descriptions do not survive audit because they have no independent authority. The fix is harder: the team has to actually rederive the requirements from the system requirements and the risk file, even if the code happens to already implement them. Where the rederived requirements and the existing code disagree, the code is wrong, not the requirement.
Missing non-functional requirements. Teams write the functional section and forget performance, reliability, security, maintainability, and installation. The Section 5.2 category list is the checklist. Run it against the document before the activity closes.
No link to the risk file. Safety-related requirements exist in the document but carry no traceability to the risk-management file, so the auditor cannot verify that every risk control has a requirement and every safety-flagged requirement has a risk entry. The fix is to make the link explicit and automated where possible.
Verification criteria written later. Criteria deferred to "when we write the tests" mean the requirement is incomplete when it reaches the design activity. The design is then built against an incomplete requirement, and the verification at the end tries to reverse-engineer criteria from whatever got built. The fix is to write the acceptance criterion alongside the requirement, not after the code.
The Subtract to Ship angle — the smallest requirements specification that closes the chain
The temptation with software requirements is to write more. More categories, more detail, more elaboration, more edge cases. Every addition feels like diligence. Most additions are noise that dilutes the chain the auditor is trying to read.
The Subtract to Ship move on software requirements is to write the smallest specification that covers every applicable Section 5.2 category, carries every safety-related requirement flagged and traced, meets the acceptance criteria test for every entry, and closes the traceability chain to the system requirements and the risk file. Everything beyond that is waste — not because it is wrong, but because it obscures the signal auditors look for.
The subtraction test on a requirements document is sharp. Can every requirement be traced to a system requirement or a risk control? If not, cut it or add the trace. Can every requirement be verified with a test a cold reader could write? If not, rewrite it or cut it. Is every applicable Section 5.2 category covered? If not, add the missing category. Does anything duplicate another requirement? If yes, merge. The document that remains is shorter than the template, harder to write, and much more likely to hold up when the Notified Body reads it as part of the MDR Annex II technical documentation. For the broader framework applied to MDR, see post 065.
Reality Check — Are your software requirements audit-ready?
- Does your software requirements specification cover every category enumerated in EN 62304:2006+A1:2015 Section 5.2 that applies to your software system?
- Is every requirement written as a specific, verifiable, atomic statement of behaviour or property?
- Does every requirement carry an acceptance criterion that a tester who has never seen the project could use to produce a pass or fail result?
- Is every safety-related requirement flagged and traced to a specific entry in the EN ISO 14971:2019+A11:2021 risk-management file?
- Does every software requirement trace upward to a system requirement, and does every system requirement have the software requirements it demands?
- Does every software requirement trace forward to the architectural element that addresses it and the verification activity that proves it?
- Are requirements distinct from implementation, or has the team written "how" where the standard asks for "what"?
- Would the requirements specification survive a skeptical Notified Body reviewer reading it as part of the technical documentation under MDR Annex II?
Any question you cannot answer with a clear yes is a gap between the document on file and the document the audit will expect.
Frequently Asked Questions
Are user stories acceptable as software requirements under EN 62304:2006+A1:2015? User stories can be part of how the team captures intent during planning, but the regulatory artefact is a software requirements specification that satisfies Section 5.2 — specific, verifiable, atomic, traceable, and complete across the required categories. A ticket that says "as a user I want to see my heart rate" is not a requirement in the Section 5.2 sense, because it has no acceptance criterion and no traceability to system requirements or the risk file. Teams that run agile processes can keep the stories in the tracker and maintain a parallel requirements specification that closes the regulatory chain.
How detailed do software requirements need to be for Class A software? The Section 5.2 categories apply to all software safety classes. What scales with the class is the depth of the downstream verification, not the completeness of the requirements themselves. A Class A software item still needs requirements that are specific, verifiable, and traceable, because the system test that proves the software does what it says it does reads from the requirements. The difference shows up in architecture and unit-level work downstream, not in the requirements clause.
What is the difference between a system requirement and a software requirement? System requirements describe what the device as a whole does — the clinical function, the inputs and outputs at the device boundary, the performance the device promises. Software requirements describe what the software portion of the device does to meet those system requirements. In a pure-software MDSW the two layers can look similar but are still distinct — the system layer defines the device-level obligations, the software layer defines the behaviour the code has to implement. EN 62304:2006+A1:2015 assumes both layers exist and requires the software requirements to trace upward to the system layer.
Do risk control measures always become software requirements? Only when the risk control is implemented in software. A risk control that is a hardware interlock, a clinical procedure, or a label warning does not become a software requirement — it lives in the risk-management file and wherever that control is actually implemented. A risk control that the team decides to implement in the code — an input range check, an alarm on a threshold, a redundant calculation — becomes a software requirement under Section 5.2 and is flagged as safety-related.
How do software requirements connect to the MDR technical documentation under Annex II? MDR Annex II lists the contents of the technical documentation the manufacturer has to assemble for the Notified Body. The software requirements specification is part of the design and manufacturing information that Annex II requires, and it is read by the Notified Body as evidence that the manufacturer has a disciplined development process under Annex I Section 17. A requirements specification that satisfies Section 5.2 of EN 62304:2006+A1:2015 is the form that evidence takes for the software portion of the technical file.
Can software requirements change after they are baselined? Yes — Section 5.2 and the configuration management and change control clauses assume requirements evolve as the software and its understanding evolve. What the standard requires is that changes are controlled: every change is documented, reviewed, traced to the reason for the change (often a risk file update, a problem report, or a system requirement change), and re-verified where the change affects verified behaviour. Uncontrolled changes break traceability and are one of the most common findings on software audits.
Related reading
- MDR Software Classification Under Rule 11 — post 371, the MDR device class that sits above the software safety class.
- MDR Software Lifecycle Requirements: How IEC 62304 Helps You Demonstrate Conformity — post 376, the lifecycle overview this requirements post sits inside.
- MDR Software Safety Classification: Understanding IEC 62304 Class A, B, and C — post 377, the class determination that scales the verification of the requirements.
- MDR Software Development Planning: Using IEC 62304 for the Software Development Plan — post 379, the planning activity that defines how requirements will be written and managed.
- Software Architectural Design Under EN 62304 — post 381, the next activity after requirements.
- Software Detailed Design Under EN 62304 — post 382, the design layer that implements the requirements.
- Software Verification and System Testing Under EN 62304 — post 383, the activity that closes the acceptance criteria written into each requirement.
- Software Configuration Management Under EN 62304 — post 386, the process that keeps the requirements and their changes under control.
- Tool Qualification and Control Under EN 62304 — post 417, the controls on requirements tools whose output enters the regulatory file.
- The Subtract to Ship Framework for MDR Compliance — post 065, the methodology pillar this post applies to software requirements analysis.
Sources
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Annex I Section 17; Annex II. Official Journal L 117, 5.5.2017.
- EN 62304:2006+A1:2015 — Medical device software — Software life-cycle processes (IEC 62304:2006 + IEC 62304:2006/A1:2015), Section 5.2 Software requirements analysis. Harmonised standard referenced for the software lifecycle under MDR Annex I Section 17.
- EN ISO 14971:2019+A11:2021 — Medical devices — Application of risk management to medical devices. Harmonised standard referenced for risk management under MDR Annex I.
- EN ISO 13485:2016+A11:2021 — Medical devices — Quality management systems — Requirements for regulatory purposes. Harmonised standard referenced for the QMS under MDR Article 10(9) and Annex IX.
This post is a category-9 spoke in the Subtract to Ship: MDR blog, focused on the software requirements analysis activity required by EN 62304:2006+A1:2015 Section 5.2. Authored by Felix Lenhard and Tibor Zechmeister. The MDR is the North Star for every claim in this post — EN 62304:2006+A1:2015 is the harmonised tool that provides presumption of conformity with the software-lifecycle aspects of MDR Annex I Section 17, and the software requirements specification is part of the technical documentation required by MDR Annex II, not an independent authority. For startup-specific support on writing software requirements that match a real engineering team, close the traceability chain, and survive Notified Body audit, Zechmeister Strategic Solutions is where this work is done in practice.