A good risk management tool in 2026 holds a live, versioned risk file that links hazards, hazardous situations, harms, controls, verification records, and post-market feedback, with full traceability and sign-off history. The category that matters is function, not brand. And under EN ISO 13485:2016+A11:2021 clause 4.1.6, any such tool counts as computer software used in the QMS, whose application must be validated before go-live and re-validated after changes.

By Tibor Zechmeister and Felix Lenhard.

TL;DR

  • The question is not "which brand is best" but "what does a good risk management tool actually do". Tool categories matter more than logos.
  • A credible tool maintains a living risk file with traceability from hazard to harm to control to verification to post-market feedback.
  • Any software tool used in the QMS, including a risk management tool, falls under EN ISO 13485:2016+A11:2021 clause 4.1.6 and must be validated for its intended use before deployment and after significant changes.
  • Excel can be validated, but the validation burden rises fast as the risk file grows. A purpose-built tool is usually cheaper past a certain device complexity.
  • The tool does not make the team compliant. Tibor has audited plenty of expensive tools holding unsafe risk files.

Why this matters

Tibor has seen two failure modes repeated at startups choosing risk management tools. First failure mode: the team buys an expensive purpose-built risk management module, drops the five percent Excel approach into it unchanged, and walks into the audit believing the tool validates the content. It does not. Second failure mode: the team refuses to leave Excel, the risk file grows to hundreds of rows, version control collapses, and the traceability from hazard to verification evidence exists only in one engineer's head.

Felix sees the same pattern in Subtract to Ship sprints from the founder angle. The tool conversation happens too early, before the team knows what a good risk file looks like. Or it happens too late, after six months of Excel sprawl, at which point migration is painful and the audit is four weeks away. Neither is good.

The 2026 tool landscape is wide. Purpose-built risk management modules inside eQMS platforms, standalone EN ISO 14971 tools, integrated ALM systems with risk plug-ins, and generic spreadsheet plus document control setups all exist. The right choice depends on device complexity, team size, and how fast the risk file will grow. This post gives the decision framework Tibor uses when asked.

What MDR actually says

The MDR does not name any specific tool and never will. It requires outcomes. Three anchors:

MDR Annex I, Chapter I. The GSPRs require manufacturers to establish, implement, document, and maintain a risk management system, and to keep it current across the device lifecycle. That implies a tool capable of maintaining a living file, not a point-in-time snapshot. A spreadsheet with no version control cannot credibly maintain a risk management system over a seven-year device lifetime.

MDR Article 10(9) and EN ISO 13485:2016+A11:2021 clause 4.1.6. Manufacturers must establish, document, implement, and maintain a QMS. Clause 4.1.6 of the harmonised standard requires the organisation to document procedures for the validation of the application of computer software used in the quality management system. Validation is proportionate to the risk associated with the use of the software. Validation and re-validation must occur before initial use and after significant changes. This applies to the risk management tool. It applies to the word processor used to export reports. It applies to Excel. The question is not "does clause 4.1.6 apply" but "what does proportionate validation look like for this tool".

EN ISO 14971:2019+A11:2021. The standard describes process, not tooling. The tool must enable the process. That includes risk analysis, risk evaluation, risk control, residual risk evaluation, overall residual risk evaluation, review, and production and post-production information. If a tool cannot support one of these steps with traceable records, the tool is not fit for purpose.

Plain-language translation: the MDR demands a process the team can execute and defend. The tool's job is to make that process cheap to run, cheap to audit, and cheap to keep alive.

A worked example

A seven-person MedTech startup building a Class IIb surgical accessory. At seed stage, one engineer owns risk in a spreadsheet. By series A, the risk file has 240 rows, the engineer leaves, and the team cannot trace which controls have been verified. Tibor is asked to review.

The audit finds three problems. First, no version history on the file beyond the last three Git commits. Second, hazards are written in free text with no controlled vocabulary, so the same hazard appears under five spellings. Third, the post-market feedback log lives in a separate ticketing system with no link back to the risk file. The residual risk evaluation has not been updated in eleven months.

The fix is not to throw money at a branded tool. The fix is to write down, on one page, what the team actually needs. The team lists the functional requirements:

  • A risk register with controlled vocabulary for hazard, hazardous situation, and harm.
  • Explicit linkage from each risk to the design input, design output, verification record, and validation record.
  • Version history on every row and every field.
  • A clean audit trail of who changed what and when.
  • An input channel for post-market feedback that updates probability and severity without losing the pre-change values.
  • A report generator that can produce the risk management report expected by the notified body.
  • Export of the full file to a human-readable format that survives vendor lock-in.
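What "version history on every row and every field" means in practice can be sketched in a few lines. This is an illustrative Python sketch of the data model behind the list above, not any vendor's schema; every class name, field name, and ID format is an assumption made for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Change:
    """One audit-trail entry: who changed which field, when, and both values."""
    author: str
    field_name: str
    old: object
    new: object
    at: datetime

@dataclass
class RiskEntry:
    """One row of the risk register, with explicit traceability links."""
    risk_id: str
    hazard: str                   # from a controlled vocabulary, not free text
    hazardous_situation: str
    harm: str
    probability: int              # e.g. a 1..5 scale
    severity: int
    design_input_id: str          # traceability links (requirement 2)
    design_output_id: str
    verification_record_id: str
    validation_record_id: str
    history: list = field(default_factory=list)   # requirements 3 and 4

    def update(self, author: str, field_name: str, new_value: object) -> None:
        """Apply a change while preserving the pre-change value (requirement 5)."""
        old = getattr(self, field_name)
        self.history.append(
            Change(author, field_name, old, new_value, datetime.now(timezone.utc))
        )
        setattr(self, field_name, new_value)

# Post-market feedback updates probability without losing the old value.
r = RiskEntry("R-042", "sharp edge", "operator handles device ungloved",
              "laceration", probability=2, severity=3,
              design_input_id="DI-7", design_output_id="DO-12",
              verification_record_id="VER-31", validation_record_id="VAL-9")
r.update("felix", "probability", 3)
print(r.probability, r.history[0].old)  # → 3 2
```

The point of the sketch is the shape, not the code: a row that cannot answer "who changed this field, when, and from what value" fails requirements 3, 4, and 5 no matter how the tool is branded.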

Then the team evaluates three categories of tool against that list. Category one, the eQMS risk module they already pay for. Category two, a standalone EN ISO 14971 tool. Category three, their existing spreadsheet with a document control wrapper. The standalone tool wins on traceability and the report generator. The eQMS module wins on integration with document control. The spreadsheet loses on version history and audit trail.
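One way to run that comparison is a plain scoring matrix: rate each category against each of the seven requirements and compare totals and gaps. A sketch only; the 0-to-2 scores below are invented for illustration and are not ratings of real products.

```python
# Hypothetical scores (0 = missing, 1 = partial, 2 = strong) per requirement.
requirements = ["controlled vocabulary", "traceability links", "version history",
                "audit trail", "post-market channel", "report generator", "export"]

scores = {
    "eQMS risk module":   [2, 1, 2, 2, 1, 1, 1],
    "standalone 14971":   [2, 2, 2, 2, 2, 2, 1],
    "spreadsheet + wrap": [1, 1, 0, 0, 1, 0, 2],
}

for tool, s in sorted(scores.items(), key=lambda kv: -sum(kv[1])):
    gaps = [req for req, v in zip(requirements, s) if v == 0]
    print(f"{tool}: {sum(s)}/14, gaps: {gaps or 'none'}")
```

With these invented numbers the standalone tool leads and the spreadsheet shows hard gaps on version history and audit trail, mirroring the outcome above. The matrix is not the decision; it is a record of the decision an auditor can read.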

The team picks the standalone tool, writes a validation plan under EN ISO 13485:2016+A11:2021 clause 4.1.6, executes the plan, migrates the 240 rows with a cleaning pass, and closes the old file as a controlled record. The tool change itself adds six weeks to the timeline, but it saves an estimated three months of audit preparation downstream. The tool did not make the team compliant. The team writing down what they actually needed made the tool useful.

The Subtract to Ship playbook

Felix runs this playbook with founders weekly. It is tool-agnostic on purpose.

1. Write the functional requirements first. One page. No brand names. List what the tool must do for your specific device and team. Use the seven items from the example above as a starting point and add anything specific to your device class or software stack.

2. Write the validation plan before picking the tool. EN ISO 13485:2016+A11:2021 clause 4.1.6 is not optional. A proportionate validation plan for a risk management tool includes the intended use statement, the risk assessment of tool failure, the acceptance criteria, the test cases, the records of execution, the approval, and the re-validation trigger. If the plan is impossible to write for a candidate tool, that tool is wrong for you.
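The seven elements of such a plan lend themselves to a completeness check before anyone signs it. A minimal sketch, assuming the draft plan is kept as a simple mapping; the element names are paraphrased from the list above.

```python
# The seven elements of a proportionate validation plan, per step 2 above.
REQUIRED_ELEMENTS = {
    "intended_use", "tool_failure_risk_assessment", "acceptance_criteria",
    "test_cases", "execution_records", "approval", "revalidation_trigger",
}

def plan_gaps(plan: dict) -> set:
    """Return the required elements that are missing or empty in a draft plan."""
    return {e for e in REQUIRED_ELEMENTS if not plan.get(e)}

draft = {
    "intended_use": "Maintain the Class IIb risk file per EN ISO 14971",
    "acceptance_criteria": ["audit trail on every field", "lossless export"],
    "test_cases": ["TC-01 audit trail", "TC-02 backup and restore"],
}
print(sorted(plan_gaps(draft)))
# → ['approval', 'execution_records', 'revalidation_trigger',
#    'tool_failure_risk_assessment']
```

A draft that cannot pass this check for a candidate tool is the early warning the step describes: if the plan is impossible to complete, the tool is wrong for you.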

3. Evaluate by category, not brand. The categories that matter in 2026:

  • Integrated eQMS risk modules. Pro: single source of truth with document control. Con: risk features often lag standalone tools on depth.
  • Standalone EN ISO 14971 tools. Pro: deepest risk-specific functionality, cleanest traceability. Con: integration cost with the rest of the QMS.
  • ALM tools with risk plug-ins. Pro: developer-friendly for software-heavy devices. Con: governance model often looser than auditors want.
  • Spreadsheet plus document control wrapper. Pro: cheap, familiar, validates easily at tiny scale. Con: breaks at approximately 100 to 150 risk rows or when more than one person needs to edit concurrently.

4. Run a one-week trial on real risks. Do not trial on demo data. Load ten real hazards from your current file and rehearse one full risk management cycle: analysis, evaluation, control, residual risk, verification link, post-market update. The tool that handles this smoothly wins. The tool that handles this painfully will be a daily tax for the lifetime of the device.
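The trial cycle can be captured as an explicit checklist so the outcome of the week is a record rather than an impression. A hedged sketch; the step names mirror the cycle above and the trial outcome is invented.

```python
# One full risk management cycle, as rehearsed during the one-week trial.
CYCLE = ["analysis", "evaluation", "control", "residual risk",
         "verification link", "post-market update"]

def trial_report(results: dict) -> list:
    """Return the cycle steps the candidate tool failed during the trial.
    `results` maps step name -> True if the tool handled it with traceable records."""
    return [step for step in CYCLE if not results.get(step, False)]

# Hypothetical outcome for one candidate tool on ten real hazards.
outcome = {step: True for step in CYCLE}
outcome["post-market update"] = False    # e.g. no feedback channel into the file
print(trial_report(outcome))  # → ['post-market update']
```

A failed step here is exactly the "daily tax" the playbook warns about: whatever the tool cannot do in a rehearsed week, it will not do under deadline pressure either.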

5. Validate proportionally. The validation burden should match the tool's role. A risk management tool holding the Class IIb risk file has a higher validation burden than a checklist tool used for internal training sign-offs. Write a targeted test protocol. Cover intended use, data integrity, access control, audit trail, backup and restore, and vendor change management. File the validation report as a controlled record.
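Two of those protocol areas, data integrity and backup and restore, can be expressed as executable test cases even against a stand-in. A minimal sketch; the in-memory `store`, the checksum approach, and the test case IDs are assumptions for illustration, not any tool's real interface.

```python
import copy
import hashlib
import json

# A tiny in-memory stand-in for the tool under validation.
store = {"R-001": {"severity": 3, "probability": 2}}

def checksum(data) -> str:
    """Data integrity check: a stable hash of the full risk file."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

# TC-01 data integrity: a full export must match the live file exactly.
exported = copy.deepcopy(store)
assert checksum(exported) == checksum(store)

# TC-02 backup and restore: mutate, restore, verify the pre-change state returns.
backup = copy.deepcopy(store)
store["R-001"]["severity"] = 5
store = backup
assert store["R-001"]["severity"] == 3

print("TC-01, TC-02 passed")
```

The executed output of such a protocol, together with who ran it and when, is the validation report that gets filed as a controlled record.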

6. Plan for migration risk. Any tool is temporary. Confirm on day one that you can export the full file, including history and attachments, in a format your next tool can import. A tool that traps data is a tool that fails at the worst possible moment.

7. Accept that the tool is a force multiplier, not a substitute. Tibor's core lesson: a perfect tool with a weak team produces an unsafe risk file in a pretty format. A weak tool with a trained team can produce a defensible risk file. Train the team first, pick the tool second.

Reality Check

  1. Have you written a one-page functional requirements document for your risk management tool, or did you pick the tool first and rationalise afterwards?
  2. Do you hold a validation plan and validation report for your current risk management tool under EN ISO 13485:2016+A11:2021 clause 4.1.6?
  3. Does your tool maintain a full audit trail on every field and every row, including who changed what and when?
  4. Can you trace any single risk in your file to its design input, its verification record, and the relevant post-market feedback in under two minutes?
  5. If your current tool vendor disappeared tomorrow, could you export the complete file, history included, in a format a replacement tool could consume?
  6. When did you last re-validate the tool, and was the re-validation triggered by an actual change or by a calendar reminder?
  7. If an auditor asked "why this tool, not another", could the team answer without saying the vendor's name?

Frequently Asked Questions

Can we just use Excel as our risk management tool? Yes, up to a point. Excel plus disciplined document control can be validated under clause 4.1.6, and Tibor has audited clean Excel-based risk files at small Class I manufacturers. Past roughly 100 to 150 risk rows or more than one concurrent editor, the validation effort to keep Excel defensible usually exceeds the cost of a purpose-built tool.

Do we have to validate a commercial off-the-shelf tool ourselves? Yes. Vendor certificates do not substitute for your validation. You are responsible for the tool's use in your QMS. Vendor documentation is useful input, but the validation against your intended use is yours to do and yours to record.

How often do we need to re-validate? On every significant change to the tool or to your intended use. Significant is a judgement call, and vendor release notes are the trigger for making it: a UI update with no data model change usually warrants a smoke test, while a change to how the tool handles traceability or audit trails warrants full re-validation.

Can we validate a tool using the vendor's test scripts? You can leverage vendor test evidence as input, but the final protocol must cover your specific intended use. Blindly running vendor scripts is not validation of the tool in your QMS.

Is an eQMS risk module always better than a standalone tool? No. The question is depth versus integration. If your risk file is large and complex, a standalone tool typically wins on depth. If your QMS needs tight coupling between risk and document control, an integrated eQMS module wins.

Does the tool choice affect what a notified body expects? The tool does not change the expectation, but a weak tool choice can make compliance harder to demonstrate. Auditors do not grade tools. They grade outcomes, audit trails, and traceability.

Sources

  1. Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I Chapter I (General Safety and Performance Requirements), Article 10(9).
  2. EN ISO 14971:2019+A11:2021, Medical devices. Application of risk management to medical devices.
  3. EN ISO 13485:2016+A11:2021, Medical devices. Quality management systems. Requirements for regulatory purposes. Clause 4.1.6 (Validation of the application of computer software used in the quality management system).