MDR does not use the phrase "algorithmic transparency." It uses "information supplied with the device." Annex I Chapter III §23 tells you what must appear in the label and instructions for use. For Class III and implantable devices, Article 32 adds the Summary of Safety and Clinical Performance. Trade secrets and model internals are not required — residual risks, intended purpose, performance, limitations, and contraindications are.
By Tibor Zechmeister and Felix Lenhard.
TL;DR
- MDR frames transparency through information obligations, not "explain the algorithm" requirements.
- Annex I Chapter III §23 is the master list of what must appear in labelling and IFU — this is your disclosure surface.
- For Class III and implantable devices, Article 32 requires a public Summary of Safety and Clinical Performance (SSCP).
- You must disclose intended purpose, residual risks, contraindications, warnings, performance characteristics, and user qualifications.
- You may protect model weights, training data specifics, and proprietary algorithm internals — none of those are required disclosures under MDR.
- The IFU is the primary transparency vehicle. It is not marketing. Write it for the user who must decide whether to trust an output.
Why this matters
A founder building a triage algorithm for emergency departments once asked us, halfway through a technical file review: "Do we have to publish our model architecture?" The honest answer is no — and that answer surprises people. The cultural conversation around AI transparency has spilled over into MedTech and created the impression that MDR demands algorithmic explainability in a technical sense. It does not.
What MDR demands is something more specific and, for a well-run startup, more manageable: the user of your device must have enough information to use it safely and correctly. That information lives in the label and the instructions for use. For high-risk devices, part of it also lives in a public summary document. The rest — your model, your weights, your training pipeline — is yours to protect as trade secret if you choose.
This post lays out precisely what you must disclose, where, and what you may keep confidential.
What MDR actually says
Annex I Chapter III §23 — Information supplied by the manufacturer. This section lists what information must accompany every medical device. It distinguishes between information on the label (§23.2) and information in the instructions for use (§23.4). For software-driven devices, the relevant items typically include:
- The intended purpose, including intended users and patient population.
- Intended clinical benefits.
- Residual risks, contraindications, warnings and precautions.
- Performance characteristics of the device.
- Any specific training or qualifications required of users.
- Where relevant, information to allow the user to verify whether the device operates correctly.
- For software: minimum hardware requirements, IT environment, IT security measures, and any known incompatibilities.
- Warning about serious incidents and how to report them.
This list is the floor. For AI/ML devices, notified bodies read it with an expanded lens: "performance characteristics" means real performance across the intended population, including subgroups where performance differs. "Residual risks" include the risk of a wrong output. "Precautions" include the conditions under which the output should not be relied upon.
Article 32 — Summary of Safety and Clinical Performance. For implantable devices and Class III devices (with limited exceptions), the manufacturer must draw up an SSCP. The SSCP is validated by the notified body and made publicly available through Eudamed. It includes a section written for patients in lay language and a section for intended users.
The SSCP content requirements in Article 32(2) include, among others: device identification, intended purpose, description, possible alternatives, harmonised standards and common specifications applied, summary of clinical evaluation and PMCF, suggested profile and training for users, information on residual risks and undesirable effects, and warnings and precautions.
Annex I §17 (electronic programmable systems). Requires software to be developed and manufactured in accordance with the state of the art, taking into account the principles of development life cycle, risk management (including information security), verification and validation.
MDCG 2019-16 Rev.1 addresses cybersecurity and is referenced here because IT security information is part of your transparency obligation for connected and software devices.
What is not in the MDR: no requirement to publicly disclose training data composition, no requirement to publish model architecture, no requirement to make inference code public. That does not mean these are irrelevant — they belong in your internal technical documentation and may be reviewed by the notified body. But they are not required public disclosures.
A worked example
A startup has built a Class IIb AI-based lesion segmentation tool for radiology. Intended users are qualified radiologists. Training data came from three European hospital cohorts. The model is a proprietary architecture the team considers core IP.
Here is what they must disclose, and where.
In the IFU (Annex I §23.4):
- Intended purpose: "Software intended to assist qualified radiologists in the segmentation of suspected lesions in contrast-enhanced CT scans of the liver in adult patients."
- Intended users: board-certified radiologists with experience in abdominal imaging.
- Performance characteristics: sensitivity, specificity, Dice coefficient on the validation set, reported with confidence intervals. Performance stratified by lesion size band, patient BMI, and scanner vendor, where performance differs meaningfully.
- Residual risks: risk of false negatives and false positives, risk of automation bias, conditions under which performance degrades (e.g. certain contrast protocols, motion artefact, pediatric patients — out of scope).
- Contraindications and limitations: pediatric patients, non-contrast studies, non-liver anatomy.
- Warnings: the tool is assistive; final diagnostic responsibility rests with the radiologist.
- IT requirements: PACS integration protocol, minimum hardware, network security assumptions.
- Incident reporting route.
In the technical file (seen by the notified body only):
- Training data provenance, sample sizes, inclusion/exclusion criteria, bias analysis under EN ISO 14971:2019+A11:2021.
- Model architecture, training procedure, hyperparameters.
- Full validation study, not just headline numbers.
- Change control for model updates.
What stays confidential:
- Specific hyperparameters, model weights, inference code.
- Proprietary data preprocessing.
- Commercial details of training data licensing.
Because this device is Class IIb (not Class III or implantable), Article 32 does not apply. No SSCP is required. If the same tool were reclassified to Class III because of a higher-risk indication, the team would need to publish an SSCP with the lay-person section and the intended-user section.
The mistake we have seen repeatedly: startups that either over-disclose (publishing model internals in the IFU and exposing trade secrets unnecessarily) or under-disclose (glossing over subgroup performance differences and residual risk). Both patterns trigger audit findings. The clean path is the Annex I §23 checklist, written honestly.
The Subtract to Ship playbook
1. Treat Annex I §23 as your disclosure checklist. Print it. Go line by line. For each item, decide: what does the user need to know to use this device safely? Draft the IFU around that question.
2. Write performance characteristics honestly. Headline numbers are not enough for AI/ML devices under state-of-the-art expectations. Report performance for the intended population as a whole and for relevant subgroups. If a subgroup performs worse, say so. Your IFU is not marketing — a notified body will compare it line-for-line with your clinical evaluation report.
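Step 2 can be made concrete with a small script. This is a minimal sketch, not a validated statistics pipeline: the subgroup names and counts are entirely hypothetical, and it uses the Wilson score interval as one reasonable choice for binomial confidence intervals.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

# Hypothetical validation counts per subgroup: (true positives, false negatives).
# A worse-performing subgroup is reported, not hidden.
subgroups = {
    "all lesions":      (412, 38),
    "lesions < 10 mm":  (61, 19),   # sensitivity drops for small lesions
    "scanner vendor B": (128, 22),
}

for name, (tp, fn) in subgroups.items():
    n = tp + fn
    lo, hi = wilson_ci(tp, n)
    print(f"{name}: sensitivity {tp/n:.1%} (95% CI {lo:.1%}-{hi:.1%}, n={n})")
```

The point of the sketch is the output shape: a per-subgroup sensitivity with an interval and a sample size is the level of detail a notified body will compare against the clinical evaluation report.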
3. Build a residual risk table that maps to the IFU. Each residual risk accepted in the risk management file should have a corresponding disclosure in the IFU. EN ISO 14971:2019+A11:2021 requires this alignment under its disclosure of residual risk clauses. Auditors trace this mapping.
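The traceability check in step 3 is easy to automate. A minimal sketch, with all risk IDs and disclosure entries hypothetical:

```python
# Hypothetical residual risks accepted in the risk management file.
residual_risks = {
    "RR-01": "False negative on lesions < 10 mm",
    "RR-02": "Automation bias in high-volume reading",
    "RR-03": "Performance degradation with motion artefact",
}

# Hypothetical IFU disclosures, each tracing back to one risk ID.
ifu_disclosures = {
    "IFU-W-04": {"risk_id": "RR-01", "section": "Warnings"},
    "IFU-W-05": {"risk_id": "RR-03", "section": "Precautions"},
}

# Every accepted residual risk must be mirrored by an IFU disclosure.
covered = {d["risk_id"] for d in ifu_disclosures.values()}
unmapped = sorted(set(residual_risks) - covered)
for rid in unmapped:
    print(f"MISSING IFU disclosure for {rid}: {residual_risks[rid]}")
```

Running a check like this in CI (or before each IFU release) gives you the same trace an auditor will perform manually.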
4. Separate "what the user needs" from "what we protect." Create a simple two-column document internally: column one lists every piece of information relevant to the device, column two marks it as IFU, tech file only, or confidential. This makes IFU reviews fast and defensible.
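The two-column document in step 4 is really a three-way classification, which can be kept as a machine-readable register. A minimal sketch with hypothetical entries for the lesion-segmentation example:

```python
from enum import Enum

class Destination(Enum):
    IFU = "instructions for use"
    TECH_FILE = "technical file (notified body only)"
    CONFIDENTIAL = "trade secret, not disclosed"

# Hypothetical register: every piece of device information gets exactly
# one destination, so IFU reviews become a filter, not a debate.
register = {
    "intended purpose":         Destination.IFU,
    "subgroup performance":     Destination.IFU,
    "training data provenance": Destination.TECH_FILE,
    "model architecture":       Destination.TECH_FILE,
    "model weights":            Destination.CONFIDENTIAL,
    "inference code":           Destination.CONFIDENTIAL,
}

for item, dest in register.items():
    print(f"{item:26s} -> {dest.value}")
```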
5. For Class III and implantable devices, treat the SSCP as a deliverable, not an afterthought. It is validated by the notified body and published on Eudamed. Draft the intended-user section first (it is closer to your existing technical content), then adapt it into lay language for the patient section. Test the lay section with a non-medical reader.
6. Coordinate with the AI Act. For high-risk AI systems within the meaning of the EU AI Act, transparency obligations in Article 13 of the AI Act will stack on top of MDR Annex I §23. The content overlaps substantially. Draft once, cover both.
7. Update the IFU when the model updates. A significant change to an AI/ML device often changes performance characteristics, which means the IFU claims are no longer accurate. Build IFU review into your change control procedure — do not let it drift.
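The drift check in step 7 can be wired into change control. A minimal sketch, assuming hypothetical IFU claims expressed as intervals and hypothetical metrics measured on the updated model:

```python
# Hypothetical performance claims currently printed in the IFU,
# expressed as (lower, upper) bounds of the reported 95% CI.
ifu_claims = {
    "sensitivity": (0.89, 0.94),
    "specificity": (0.91, 0.95),
}

# Hypothetical metrics measured for the candidate model update.
candidate_metrics = {
    "sensitivity": 0.955,  # improved after retraining -> IFU claim is stale
    "specificity": 0.93,
}

# Any metric outside its claimed interval makes the IFU inaccurate,
# even when the change is an improvement.
stale = [m for m, v in candidate_metrics.items()
         if not (ifu_claims[m][0] <= v <= ifu_claims[m][1])]
for metric in stale:
    print(f"IFU claim for {metric} no longer covers measured value "
          f"{candidate_metrics[metric]:.3f} -> trigger IFU review")
```

Note that the check fires on improvements too: a better sensitivity still means the printed claim is wrong, and the IFU review step decides what the updated claim should say.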
Reality Check
- Can a user read your IFU and understand when your device should not be used?
- Does your IFU report performance for the intended population, not just headline aggregate numbers?
- Is every residual risk in your risk management file mirrored in an IFU disclosure?
- If your model were updated tomorrow, would the IFU still be accurate — and do you have a process to catch that?
- For Class III/implantable: does your SSCP lay-person section pass a read-aloud test with a non-medical reader?
- Can your regulatory lead list, from memory, what Annex I §23 requires in the IFU for your device class?
- Have you documented what you treat as trade secret and why it is not required disclosure?
- Does your IFU tell the user how to report a serious incident?
Frequently Asked Questions
Does MDR require me to publish my model architecture? No. MDR does not require disclosure of model architecture, weights, or training code. It requires disclosure of intended purpose, performance, risks, limitations, and the information a user needs for safe use.
Where is "algorithmic transparency" actually defined in MDR? It is not. MDR speaks of "information supplied by the manufacturer" in Annex I Chapter III §23. That is the operative obligation.
What about the EU AI Act — does it require more disclosure? The AI Act adds transparency obligations for high-risk AI systems, several of which overlap with MDR Annex I §23 (intended purpose, performance, limitations, user instructions). Because the content overlaps substantially, you can write one set of documents covering both regimes.
My device is Class IIa. Do I need an SSCP? No. Article 32 SSCP is required for implantable and Class III devices, with narrow exceptions defined in that article.
Can I refuse to disclose subgroup performance if it hurts commercially? No. If subgroup performance differs meaningfully and the difference affects safe use, it is a residual risk disclosure under Annex I §23 and EN ISO 14971:2019+A11:2021. Hiding it is a regulatory and ethical problem.
Is the IFU enough, or do I also need a separate transparency statement? For MDR purposes, the IFU plus the risk management file plus (if applicable) the SSCP are the transparency vehicles. A separate voluntary "model card" is useful internally and for customer dialogue but does not replace the IFU.
Related reading
- Explainability requirements for AI medical devices — the companion post on what notified bodies actually ask for.
- Writing a compliant IFU under MDR — structural guide to the document that carries your disclosures.
- SSCP under Article 32 — when and how to write the public summary.
- Annex I Chapter III information supplied with the device — the full requirements list.
- AI medical devices under MDR — landscape overview for AI-based devices.
Sources
- Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I Chapter III §23, Article 32, Annex I §17.
- MDCG 2019-16 Rev.1 — Guidance on Cybersecurity for medical devices (December 2019, Rev.1 July 2020).
- EN ISO 14971:2019+A11:2021 — Medical devices — Application of risk management to medical devices.
- EN 62366-1:2015+A1:2020 — Medical devices — Application of usability engineering to medical devices.