A risk control that has not been verified is a claim, not a control. EN ISO 14971:2019+A11:2021 clause 7.2 requires verification of both implementation and effectiveness for every risk control measure. MDR Annex I §3 requires risk management as a continuous, documented process. Notified body auditors expect a bidirectional trace from each hazard through its controls to specific verification evidence. Missing that trace is one of the most common findings on startup risk files.
By Tibor Zechmeister and Felix Lenhard.
TL;DR
- EN ISO 14971:2019+A11:2021 clause 7.2 requires two distinct verifications for each risk control: verification of implementation and verification of effectiveness.
- Implementation means the control is physically or functionally present in the device as intended.
- Effectiveness means the control actually reduces the risk it was selected to reduce.
- MDR Annex I §3 embeds this requirement into the General Safety and Performance Requirements as a continuous risk management process.
- Notified body auditors read the risk file with one question in mind: can I follow a single hazard from identification to verification evidence without gaps?
- A verification that is not linked to a specific control is effectively invisible to the auditor.
- The trace runs hazard → risk → control → verification evidence → residual risk. Every row must connect.
Why this matters
Tibor has seen the same audit finding more times than he can count. The risk file lists twenty-seven controls. The design verification report lists forty-one test protocols and their results. Neither document references the other. The auditor asks, for a specific hazard, which test proved that the control worked. The team scrambles. The test exists. The rationale exists. The link does not exist on paper.
That gap is the finding. Not the absence of verification work, but the absence of the trace that would let an auditor or a post-market reviewer confirm that the verification work covered the control it was supposed to cover. In practice, this is a documentation problem that becomes a compliance problem the moment someone from outside the team looks at the file.
The fix is structural. It has to be built into the risk management process from the first entry, not retrofitted in the week before the stage 1 audit. Retrofitting a trace across hundreds of hazards and dozens of verification reports is the kind of work that consumes months of a small team's time and still looks reactive when the auditor arrives.
What ISO 14971 and MDR actually say
EN ISO 14971:2019+A11:2021 clause 7.2 is titled Implementation of risk control measures and requires that:
- Each risk control measure is implemented.
- Implementation is verified.
- The effectiveness of the risk control measures is verified.
The two verifications are distinct. Verification of implementation asks whether the control exists in the device as designed. Verification of effectiveness asks whether, now that it exists, it actually reduces the risk it was selected to reduce. A locked enclosure is implemented if the lock is there and functions. It is effective if the lock actually prevents the hazardous access scenario the risk analysis identified.
Clause 7.2 also requires that the records of these verifications be maintained as part of the risk management file. Clause 7.3 then moves to the residual risk evaluation after the controls are verified, and clause 8 covers the evaluation of overall residual risk.
MDR Annex I §3 requires manufacturers to establish, implement, document, and maintain a risk management system. The text specifies that risk management shall be understood as a continuous iterative process throughout the entire lifecycle of a device. Continuous iteration means that verification is not a one-time pre-market event. It is an activity that is reopened whenever new information, new design changes, or post-market signals require it.
MDR Annex I §2 specifies that reducing risks as far as possible means reducing them without adversely affecting the benefit-risk ratio. This connects the verification of individual controls to the overall benefit-risk analysis in the technical documentation.
MDR Annex I §4 sets the priority order for risk controls: inherently safe design first, then protective measures, then information for safety. Verification must be able to show that the applied control is, first, at the highest feasible level of the hierarchy and, second, effective at the level at which it was chosen.
EN ISO 13485:2016+A11:2021 clause 7.3 on design and development carries the QMS-side obligations for verification and validation. The risk management verification lives inside this framework in practice. The design verification records and the risk control verification records are often the same documents, but they must be tagged to both purposes.
A worked example
A handheld diagnostic device includes a battery with a known thermal runaway failure mode. The risk file lists the hazard: runaway drives the case temperature above safe contact limits. The selected first-tier control is inherently safe design: a battery chemistry and pack configuration chosen specifically to prevent runaway conditions. The second-tier control is a protective measure: a thermal fuse that opens the circuit above a defined temperature. The third-tier control is information for safety: a user warning and a contraindication for operation in high ambient temperatures.
Verification of each control has to be specific.
Inherent design control. Implementation is verified by the design review record showing that the selected battery chemistry was specified, sourced, and incorporated into the final design. Effectiveness is verified by thermal abuse testing under the IEC 62133 series or equivalent, with test reports linked in the risk file.
Protective measure. Implementation is verified by the bill of materials, the schematic review, and incoming inspection records showing that the thermal fuse is installed with the specified trip temperature. Effectiveness is verified by a functional test that induces the condition and records that the fuse opens at or below the design temperature.
Information for safety. Implementation is verified by the label artwork approval and the IFU content review. Effectiveness is verified by a summative usability evaluation under EN 62366-1:2015+A1:2020 showing that intended users notice, read, and act on the warning.
The risk file row for the original hazard now contains references to six verification records: three for implementation, three for effectiveness. Each reference is a specific document ID with a revision. The residual risk is then evaluated under clause 7.3 with the benefit of this evidence, and the overall residual risk acceptability is evaluated under clause 8.
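One way to picture that completed row is as structured data. The sketch below is a minimal illustration, not a prescribed format; every field name and document ID (DR-045, TR-118, and so on) is invented for the example.

```python
# Hypothetical risk-file row for the battery thermal-runaway hazard.
# All document IDs and field names are invented for illustration.
risk_row = {
    "hazard_id": "HAZ-012",
    "hazard": "Battery thermal runaway; case temperature exceeds safe contact limits",
    "controls": [
        {
            "id": "RC-001",
            "type": "inherently safe design",
            "implementation_evidence": "DR-045 rev B",   # design review record
            "effectiveness_evidence": "TR-118 rev A",    # thermal abuse test report
        },
        {
            "id": "RC-002",
            "type": "protective measure",
            "implementation_evidence": "BOM-007 rev C",  # BOM + schematic review
            "effectiveness_evidence": "TR-119 rev A",    # fuse trip functional test
        },
        {
            "id": "RC-003",
            "type": "information for safety",
            "implementation_evidence": "LBL-003 rev B",  # label and IFU approval
            "effectiveness_evidence": "US-021 rev A",    # summative usability report
        },
    ],
    "residual_risk": "acceptable",
}

# Six verification references in total: three implementation, three effectiveness.
evidence = [c[k] for c in risk_row["controls"]
            for k in ("implementation_evidence", "effectiveness_evidence")]
assert len(evidence) == 6
```

The point of the structure is that every evidence field carries a document ID plus a revision, so the trace survives document updates.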
That is what a verified risk control looks like on paper. Not because the verifications are long, but because the trace is complete.
The Subtract to Ship playbook
Tibor's practical guidance for building this trace in a resource-constrained startup is structured as a small set of habits that apply from the first hazard entry.
Habit 1. Use a single risk file as the hub. Whether the tool is a spreadsheet, a QMS module, or a dedicated risk management platform, there is one authoritative file. Every hazard has a row. Every row has columns for controls and for verification references. Never split controls and verifications across different tools without a cross-reference column.
Habit 2. Assign unique control identifiers. Each control gets an ID like RC-001, RC-002, and so on. Every verification protocol and report references the relevant RC numbers in its test objective. The auditor then reads a test report and immediately knows which controls it covers.
Habit 3. Separate the two verification questions in the file. Create two columns or two fields. One for verification of implementation. One for verification of effectiveness. When a startup uses a single field labelled "verification," it tends to collect implementation evidence only and miss effectiveness evidence. The split forces the team to ask both questions.
Habit 4. Review the trace at every design change. When the design changes, the affected controls are flagged in the risk file. Their verifications are reviewed for continued applicability. Some will need to be repeated. This is part of the continuous iterative process required by MDR Annex I §3.
Habit 5. Generate the trace matrix on demand. The risk file should be able to output a trace matrix that an auditor can read in twenty minutes. Hazard, risk, control, implementation evidence, effectiveness evidence, residual risk. If the team cannot generate this matrix from the source documents quickly, the structure is not ready for audit.
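The five habits above can be exercised with a small script. This is a minimal sketch assuming the risk file can be exported to structured rows; the field names are hypothetical and should be adapted to your own spreadsheet or QMS export. It emits one matrix line per control and flags any control missing either of the two verification links.

```python
# Minimal trace-matrix generator sketch. Field names are hypothetical;
# adapt them to your own risk-file export.
def trace_matrix(rows):
    """Return (matrix, gaps): one tuple per control in the form
    (hazard, control, implementation evidence, effectiveness evidence,
    residual risk), plus a list of control IDs with missing evidence."""
    matrix, gaps = [], []
    for row in rows:
        for ctrl in row["controls"]:
            impl = ctrl.get("implementation_evidence") or "MISSING"
            eff = ctrl.get("effectiveness_evidence") or "MISSING"
            matrix.append((row["hazard_id"], ctrl["id"], impl, eff,
                           row.get("residual_risk", "MISSING")))
            if "MISSING" in (impl, eff):
                gaps.append(ctrl["id"])
    return matrix, gaps

rows = [{
    "hazard_id": "HAZ-012",
    "residual_risk": "acceptable",
    "controls": [
        {"id": "RC-001", "implementation_evidence": "DR-045 rev B",
         "effectiveness_evidence": "TR-118 rev A"},
        {"id": "RC-002", "implementation_evidence": "BOM-007 rev C"},
    ],
}]

matrix, gaps = trace_matrix(rows)
assert gaps == ["RC-002"]  # effectiveness evidence missing for the fuse
```

If `gaps` is non-empty, the file is not audit-ready; the matrix itself is the twenty-minute artefact the auditor reads.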
Felix's coaching experience adds a cultural point. The founders who pass risk verification audits cleanly treat the risk file as a living document owned by the whole engineering team, not as a compliance artefact owned by the quality lead alone. When engineers reference the risk file during design reviews and update it as they learn, the verification links build themselves. When the risk file is written once and then left alone until audit, every verification link is a last-minute reconstruction.
Reality Check
- Can you pick any hazard in your risk file and, in under two minutes, produce the specific implementation verification record and effectiveness verification record for each of its controls?
- Are your verification records tagged with the control identifiers they cover, or only with the design requirements they address?
- Does your risk file treat implementation and effectiveness as two distinct verification questions, or do they collapse into a single "verified" checkbox?
- When a design change affects a risk control, is there a defined process for revisiting the associated verification evidence?
- Does your trace matrix cover residual risk evaluation (clause 7.3) and overall residual risk acceptability (clause 8), not just control verification?
- For information-for-safety controls, do you have summative usability evidence that users actually notice and act on the warning?
- If a notified body auditor pulled a random hazard from your file and asked to see the effectiveness verification, would the answer be a document reference or a promise to follow up later?
Frequently Asked Questions
What is the difference between verification of implementation and verification of effectiveness? Implementation confirms the control exists in the device as specified. Effectiveness confirms that, now it exists, it actually reduces the risk. A thermal fuse that is installed but never tested under fault conditions is implemented but not verified for effectiveness.
Can design verification testing double as risk control verification? Yes, and in practice most verification evidence is dual-purpose. The requirement is that the evidence is explicitly linked to the risk control in the risk file, not that it is a separate test campaign. One test report can cover many controls if the links are recorded.
Do I need to re-verify controls after a minor design change? You need to evaluate whether the change affects the controls and their verification. Some minor changes will require re-verification. Others will not. The decision must be documented. MDR Annex I §3 treats risk management as continuous, which means the review is required; the re-verification is only required where the review concludes it is needed.
How does clause 7.2 relate to clause 7.3? Clause 7.2 covers verification of the controls themselves. Clause 7.3 covers the residual risk evaluation after the verified controls are in place. The two clauses run in sequence: residual risk cannot be evaluated meaningfully until the controls supporting it are verified.
What is the most common audit finding on risk control verification? Missing or broken traces between the risk file and the verification reports. The work usually exists. The links do not. That gap is enough to generate a nonconformity.
Can a verification be "by analysis" instead of by test? Yes, where justified. Clause 7.2 does not prescribe the verification method. Analysis, inspection, simulation, and test are all possible. The justification for the chosen method is part of the verification record.
Related reading
- The ISO 14971 Annex Z Trap. Why the EN version and its GSPR mapping matter for verification arguments.
- Information for Safety: Warnings and Training. The third-tier control that demands its own effectiveness evidence.
- Benefit-Risk Analysis in the Technical Documentation. Where verified residual risks feed the overall benefit-risk position.
- Design Verification under ISO 13485. The QMS-side verification process that provides most of the evidence.
- Software Traceability: Design, Tests, and Risks. How software teams build the bidirectional trace for IEC 62304 and ISO 14971.
Sources
- Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I §2, §3, §4, §8.
- EN ISO 14971:2019+A11:2021, Medical devices, Application of risk management to medical devices. Clauses 7.1, 7.2, 7.3, 8, 10.
- EN ISO 13485:2016+A11:2021, Medical devices, Quality management systems, Requirements for regulatory purposes. Clause 7.3.