System integration and verification testing is where subsystems become a device and where you prove, with documented evidence traced to every requirement, that the device does what you said it does. Under MDR Annex II §6.1(b) and EN ISO 13485:2016+A11:2021 clause 7.3.6, the auditor will want a plan, the results, the traceability, and a clean story for every failure you encountered along the way.

By Tibor Zechmeister and Felix Lenhard.

TL;DR

  • System integration testing brings hardware, firmware, software, and mechanical subsystems together and exercises them as one device against the full set of system-level requirements.
  • Verification under EN ISO 13485:2016+A11:2021 clause 7.3.6 asks a single question: do the design outputs meet the design inputs? Integration testing is the answer at the system boundary.
  • MDR Annex II §6.1(b) requires the technical documentation to contain the verification results that demonstrate conformity with the General Safety and Performance Requirements in Annex I.
  • Traceability from requirement to test case to test result is not bureaucracy. It is the only way an auditor can confirm that every requirement was actually tested and every test belongs to a requirement.
  • Failures during verification are not a problem. Undocumented failures, untraceable root cause, and missing re-test records are the problem.

Why this matters

The worst notified body finding Tibor sees in electromechanical devices is not a failed test. It is a device that was clearly tested, that probably works, and whose manufacturer cannot produce the evidence in a form an auditor can follow. The integration test report exists. The requirements exist. The traceability between them does not.

This is the moment where startup velocity collides with design controls. For months the team has been building and debugging at the bench. Firmware engineers have been running their own tests. The mechanical team has been running theirs. The electrical team has been running theirs. Everything "works." Then the notified body asks for the system verification evidence and the team has to reconstruct, from Slack messages and bench notebooks, something that should have been a single controlled document.

Integration and verification testing is where you either earn your CE mark cleanly or spend the next six months rebuilding records to match what you already did. It is also where most of the genuinely surprising problems in a device get discovered, because subsystem tests never exercise the interfaces the way the real device does.

What MDR actually says

MDR Annex II §6.1(b) requires the technical documentation to include "the results and critical analyses of all verifications and validation tests and/or studies undertaken to demonstrate conformity of the device with the requirements of this Regulation and in particular the applicable general safety and performance requirements." Annex I contains those requirements, and §17 addresses devices incorporating electronic programmable systems.

The MDR itself does not prescribe a verification methodology. It presumes one through harmonised standards. EN ISO 13485:2016+A11:2021 clause 7.3.6 requires documented procedures for design and development verification to ensure that design and development outputs meet the design and development input requirements. The clause requires plans including methods, acceptance criteria, statistical techniques where used, and a sample size rationale. Clause 7.3.10 requires design and development files containing the records that demonstrate compliance.

For electromedical devices, EN 60601-1 adds system-level verification expectations. Basic safety and essential performance must be demonstrated at the complete equipment level. Subsystem-level testing is not a substitute. This is precisely what system integration testing is for: bringing the subsystems together and exercising the device as a whole against the system requirements.

Risk control measures under EN ISO 14971:2019+A11:2021 must also be verified. Clause 7.2 of EN ISO 14971 requires verification of both the implementation and the effectiveness of each risk control. In practice, many of those verifications land inside the system integration test plan because they cannot be exercised at subsystem level.

A worked example

Consider a Class IIa point-of-care device: a reusable base unit with a disposable cartridge, a touchscreen UI, an internal pump, temperature control, a Bluetooth link to a companion app, and a rechargeable battery. The system requirements document has 214 numbered requirements. The design input side is done, the subsystems have each passed their own verification, and the team is ready to integrate.

The integration test plan is written as a controlled document under clause 7.3.6. It states the purpose, the device configuration under test (with exact firmware build numbers, hardware revisions, and cartridge lot numbers), the environment, the entry criteria, the exit criteria, the acceptance criteria, the test cases, and the traceability to requirements. For this device the team writes 168 test cases. Every system requirement is covered by at least one test case. Forty-six requirements are covered by multiple test cases because they apply in several operating modes. The traceability matrix is a simple table: requirement ID, test case ID, result, evidence reference.
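The two-way property of that matrix (every requirement covered, every test case anchored to a requirement) is easy to check mechanically. A minimal sketch in Python; the requirement and test case IDs are hypothetical, not taken from any real device file:

```python
# Minimal sketch of a two-way traceability coverage check.
# All IDs below are hypothetical examples.

requirements = {"SYS-001", "SYS-002", "SYS-003"}

# test case ID -> set of requirement IDs it verifies
test_cases = {
    "ITC-010": {"SYS-001"},
    "ITC-011": {"SYS-002", "SYS-003"},  # one case may cover several modes
}

covered = set().union(*test_cases.values())

# Every requirement must be covered by at least one test case...
untested = sorted(requirements - covered)
# ...and every test case must trace back to a real requirement.
orphaned = sorted(covered - requirements)

print("requirements with no test case:", untested)
print("test coverage tracing to unknown requirements:", orphaned)
```

Both lists must be empty before the plan is approved; anything in either list is a defect in the matrix, not a formality.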

During execution, 12 test cases fail on first run. Nine are firmware bugs: one temperature setpoint overshoot, two UI state transition errors, and six cosmetic text issues that the verification record must still capture. Two are genuine design problems: the pump draws slightly too much current when the battery is below 20 percent, and the resulting momentary voltage sag resets the Bluetooth module. One is a test setup error.

The team does not patch-and-retest in place. Each failure gets a nonconformity record. Root cause is analysed. A change request goes into the design change process. The fix is implemented in a new firmware build. The impacted test cases are re-run on the new build, and the re-test is recorded with the new build number. The test setup error results in a correction to the test case itself, which goes through document control. When the notified body looks at the verification file, they can follow every failure to its correction and every correction to its re-test. That is what "critical analyses of all verifications" in Annex II §6.1(b) looks like in practice.

The Subtract to Ship playbook

The goal is verification evidence an auditor can read in one sitting. Not more paperwork. Less ambiguity.

Write the integration test plan before you start integrating. Not after. The plan names the device configuration, the entry criteria (which subsystem tests must have passed), the exit criteria, and the acceptance criteria. If you are not willing to write down what "passed" means before you run the test, you are not ready to run the test.

Build the traceability matrix from day one of requirements. Every system requirement gets at least one verification method assigned at the moment it is written. Most will be "test." Some will be "inspection," "analysis," or "demonstration." When a requirement has no sensible verification method, that is a signal the requirement is badly written, not that verification is optional.
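That rule is also enforceable at the moment a requirement enters the baseline. A tiny sketch, again with hypothetical IDs:

```python
# Hypothetical gate: every requirement must name a verification method
# from the four recognised kinds before it enters the baseline.
ALLOWED_METHODS = {"test", "inspection", "analysis", "demonstration"}

requirements = {
    "SYS-101": "test",
    "SYS-102": "demonstration",
    "SYS-103": "",  # badly written: no sensible verification method
}

to_rewrite = sorted(req for req, method in requirements.items()
                    if method not in ALLOWED_METHODS)
print("requirements to rewrite:", to_rewrite)
```

A requirement that lands in `to_rewrite` goes back to its author, not into the test plan.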

Separate subsystem verification from system integration verification. Subsystem tests belong in subsystem verification reports. System integration tests exercise the integrated device against system-level requirements. Do not try to combine them. Notified bodies get confused and so do your own engineers six months later.

Control the device under test like a product. Firmware build number, hardware revision, mechanical revision, cartridge lot. If any of those change, the verification run is suspect and the traceability must show which test cases are affected. This is what clause 7.5.9 (traceability) will later require during production. Start early.
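Controlling the device under test "like a product" comes down to recording a configuration fingerprint with every run and flagging any run whose fingerprint drifts from the plan. A sketch with hypothetical revision identifiers:

```python
from dataclasses import dataclass, asdict

# Hypothetical device-under-test configuration record. Any field drift
# between the planned configuration and a given run makes that run suspect.

@dataclass(frozen=True)
class DutConfig:
    firmware_build: str
    hardware_rev: str
    mechanical_rev: str
    cartridge_lot: str

planned = DutConfig("FW 1.4.1", "HW C", "MECH B", "LOT 2317")

runs = {
    "RUN-07": DutConfig("FW 1.4.1", "HW C", "MECH B", "LOT 2317"),
    "RUN-08": DutConfig("FW 1.4.2", "HW C", "MECH B", "LOT 2317"),
}

# For each non-matching run, list exactly which fields drifted, so the
# traceability can show which test cases are affected.
suspect = {run: [field for field, value in asdict(cfg).items()
                 if value != asdict(planned)[field]]
           for run, cfg in runs.items()
           if cfg != planned}
print("suspect runs and drifted fields:", suspect)
```

In this sketch RUN-08 would be flagged for its firmware build, which is precisely the question an auditor asks: which results belong to which configuration.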

Handle failures as evidence, not as embarrassments. A verification run with zero failures on first execution is not a sign of a strong device. It is a sign of a weak test plan or a hidden failure. Auditors know this. Document every failure, the root cause, the corrective action, the re-test, and the final result. This is what "critical analyses" means in MDR Annex II §6.1(b).

Keep the evidence package together. The plan, the test cases, the raw results (screenshots, scope traces, log files, photos), the nonconformities, the re-tests, and the final summary report go into the design and development file required by EN ISO 13485 clause 7.3.10. Not in Slack. Not on someone's laptop. In the QMS.

Cross-reference the risk file. For every risk control measure that is verified at the system level, the integration test report or the risk management file should reference the test case that verified it. When your notified body compares your risk control measures against your verification evidence, the crosswalk should already be done.

Do not forget EN 60601-1 and EN 60601-1-2 system-level tests. For electromedical devices, system-level electrical safety and EMC testing happens at an accredited test house. Those reports are part of your verification evidence and must be referenced from the integration test summary. See our post on electrical safety testing for how to sequence that work against your in-house integration activity.

Reality Check

  1. Can you point to a single controlled document that is the system integration test plan, approved before execution started?
  2. Does every system-level requirement in your requirements document appear in your traceability matrix with at least one verification method assigned?
  3. For every test case that failed during verification, can you show the nonconformity, the root cause analysis, the corrective action, and the re-test on the corrected build?
  4. If an auditor asked for the exact firmware build, hardware revision, and consumable lot used during each verification run, could you answer in under a minute?
  5. Are the risk control measures from your EN ISO 14971 risk file traceable to specific verification test cases?
  6. Is your EN 60601-1 and EN 60601-1-2 test report from the accredited lab referenced from your internal verification summary?
  7. Are the verification records actually stored in your QMS, not on bench notebooks and personal drives?
  8. If you removed the most experienced engineer from the team tomorrow, could the remaining team reproduce the verification from the documentation alone?

Frequently Asked Questions

What is the difference between verification and validation? Verification asks: do the outputs meet the inputs? Validation asks: does the device meet user needs and intended use in the actual use environment? EN ISO 13485 clauses 7.3.6 and 7.3.7 keep them separate for a reason. Integration testing is mostly verification.

Do I need to test every requirement? Yes. If a requirement exists and is not verified, your design is not verified. If a requirement cannot be verified, rewrite it so that it can. There is no such thing as a valid unverified requirement.

Can I reuse subsystem verification for system verification? No. Subsystem verification proves the subsystem meets its own requirements. System verification proves the integrated device meets the system requirements. These are different requirements and different tests. Reuse is a red flag to an auditor.

How do I handle test failures without blowing up the timeline? Failures are expected. Plan for them. Budget re-test cycles into the schedule. A clean chain from failure to root cause to fix to re-test takes days, not weeks, if your change control is working.

Does the notified body want to see raw data or just the summary? Both. They want the summary report to start with, and they reserve the right to ask for raw evidence on any test case. Make sure the raw evidence is retrievable.

Where does usability verification fit in? Usability verification under EN 62366-1:2015+A1:2020 is separate from technical verification but often runs on the same integrated device. Plan the integration build so it can serve both purposes.

Sources

  1. Regulation (EU) 2017/745 on medical devices, consolidated text. Annex II §6.1(b), Annex I §17.
  2. EN ISO 13485:2016+A11:2021 — Medical devices — Quality management systems — Requirements for regulatory purposes. Clauses 7.3.6, 7.3.7, 7.3.10, 7.5.9.
  3. EN ISO 14971:2019+A11:2021 — Medical devices — Application of risk management to medical devices. Clause 7.2.
  4. EN 60601-1:2006+A1+A12+A2+A13:2024 — Medical electrical equipment — General requirements for basic safety and essential performance.