A clinical investigation under MDR Articles 62 to 82 is one possible source of clinical evidence, not the only one. Annex XIV Part A of Regulation (EU) 2017/745 explicitly recognises other sources that, taken together, can be sufficient to demonstrate conformity with the general safety and performance requirements for many devices: scientific literature and equivalence data under MDCG 2020-5, non-interventional performance studies, usability data generated under EN 62366-1:2015+A1:2020, registry data, retrospective clinical analyses, and bench and pre-clinical testing. For implantable and Class III devices, Article 61(4) sets a default requirement for a new investigation, but Article 61(4) to (6) and MDCG 2023-7 describe the specific exemption routes. For every other class, the question is not "investigation or nothing" but which combination of sources genuinely answers the clinical claims in the intended purpose.
By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.
TL;DR
- MDR Article 61 and Annex XIV Part A recognise multiple sources of clinical evidence. A new clinical investigation is one source among several.
- Non-interventional performance studies, where the device is used within its intended purpose and the study does not change clinical management, can generate usable clinical data at a fraction of the cost of an interventional trial.
- Usability data generated under EN 62366-1:2015+A1:2020 is clinical evidence for specific claims about safety in the hands of the intended user. It is not a substitute for efficacy data, but it covers a real regulatory question.
- Registry data, where a high-quality registry exists for the indication, can feed the clinical evaluation directly and is the preferred evidence source for several device categories.
- Retrospective analyses of routine clinical data, including real-world data collected during post-market surveillance of predicate devices, can contribute to the evidence base where the methodology is defensible.
- For implantable and Class III devices, Article 61(4) sets the default to "new investigation required." Article 61(4) to (6) and MDCG 2023-7 describe the four specific exemption routes. For non-implantable, non-Class III devices, the alternatives are directly available.
Why the alternative-sources question matters for your startup
Felix has sat in too many planning meetings where the founder walks in assuming "clinical evidence" means "clinical investigation" and walks out with a EUR 1.5 million plan the company will never be able to execute. The assumption is almost always wrong. The MDR does not say every device needs an interventional trial. It says every device needs sufficient clinical evidence for its intended purpose, and Annex XIV Part A lists the sources from which that evidence can come.
The difference is not semantic. For a Class IIa device whose clinical claims are well-supported by literature, equivalence to a legacy device, and usability data from a formative and summative evaluation under EN 62366-1:2015+A1:2020, the clinical evidence package may not require any new interventional work at all. For a Class IIb device with a novel measurement method, a non-interventional performance study at one clinical partner site may produce everything the Notified Body needs. The founder who defaults to "we need a trial" is frequently the founder who runs out of runway before the trial starts.
This post maps the alternatives, names what each one can and cannot do, and ends with the subtraction rule: you do not pick the easiest alternative — you pick the combination that actually answers the clinical claims your intended purpose makes.
The alternative-sources question — what the MDR text actually says
The anchor is Article 61 of Regulation (EU) 2017/745, with the procedural detail in Annex XIV Part A.
"Confirmation of conformity with relevant general safety and performance requirements set out in Annex I under the normal conditions of the intended use of the device, and the evaluation of the undesirable side-effects and of the acceptability of the benefit-risk-ratio referred to in Sections 1 and 8 of Annex I, shall be based on clinical data providing sufficient clinical evidence..." — Regulation (EU) 2017/745, Article 61, paragraph 1.
Article 2(48) of the Regulation defines clinical data as information concerning safety or performance that is generated from the use of a device, and lists the admissible sources: clinical investigations of the device concerned, clinical investigations or other studies reported in scientific literature of a device for which equivalence can be demonstrated, reports published in peer-reviewed scientific literature on other clinical experience of either the device concerned or a device for which equivalence can be demonstrated, and clinically relevant information coming from post-market surveillance, in particular the post-market clinical follow-up.
Annex XIV Part A then sets out the clinical evaluation procedure: identification of available clinical data, appraisal of the data for suitability and quality, analysis of whether the data demonstrates conformity with the relevant general safety and performance requirements, and planning of additional clinical data generation where gaps remain.
Article 61(4) sets the default rule that clinical investigations shall be performed for implantable devices and Class III devices. Article 61(4), (5), and (6) then describe the specific exemption routes, and MDCG 2023-7 (December 2023) clarifies the four cases in which implantable and Class III devices can be exempted from mandatory pre-market clinical investigations, along with the "sufficient levels of access to data" requirement for equivalence claims.
For every device that is not implantable and not Class III, the alternatives described below are directly available without invoking an Article 61(4) exemption at all.
Performance studies as non-interventional clinical data
A non-interventional performance study is a study in which the device is used within its intended purpose, on patients who would have been treated with the device in routine clinical practice anyway, and the study protocol does not alter clinical management beyond what would have happened without the study. It is not an interventional clinical investigation under Articles 62 to 82 because the device is CE-marked or is being used within an established care pathway where its use is not determined by the study protocol. Article 74 and Article 82 set out specific rules for certain post-market investigations and for studies not conducted for conformity assessment purposes.
The evidence value of a well-designed non-interventional study can be substantial. Real patients, real clinicians, real workflows, real outcomes — data that an interventional trial sometimes cannot produce because its protocol constraints remove the real-world variability the Notified Body is actually asking about. A non-interventional study typically avoids the full Annex XV clinical investigation plan, the Article 70 competent authority authorisation route that applies to pre-market investigations for conformity assessment, and the full operational overhead of a Good Clinical Practice trial, although ethics committee approval and national law obligations still apply and must be checked for every specific case.
Where this lands for a startup: non-interventional performance studies are not a loophole and they are not unregulated. They are a legitimate evidence source under Annex XIV Part A when the clinical question does not require an interventional trial. A competent clinical advisor and a local ethics contact are non-negotiable before the first patient is enrolled.
Usability data under EN 62366-1:2015+A1:2020
Usability is clinical evidence for a specific class of claims. MDR Annex I Sections 5 and 22 require that devices are designed and manufactured in such a way that use errors are reduced as far as possible, and that the ergonomic features and environment of intended use are taken into account. The harmonised standard that provides presumption of conformity for these requirements is EN 62366-1:2015+A1:2020 — Medical devices — Part 1: Application of usability engineering to medical devices.
A properly conducted usability engineering process generates two kinds of evidence. Formative evaluations produce iterative design data that feed the risk management file under EN ISO 14971:2019+A11:2021 and the design history documentation. Summative evaluations, conducted on the final device with representative users in a realistic use environment, produce the validation evidence that the device can be used safely by the intended user for the intended purpose.
Usability data is clinical evidence for questions like "can the intended user operate this device without a use error that leads to harm" and "is the user interface compatible with the clinical workflow it is designed to fit." It is not clinical evidence for questions like "does this device treat the disease better than the comparator." The trap to avoid is either extreme: dismissing usability as "not real clinical data" (wrong — it is data about safety in use, which is part of the clinical evaluation), or presenting a usability study as if it answered efficacy questions it does not answer (wrong — the Notified Body will catch this immediately and ask for the efficacy data separately).
For many software as a medical device products and many low-to-moderate-risk hardware devices, a thorough usability engineering file under EN 62366-1:2015+A1:2020 plus literature plus equivalence can cover a large fraction of the clinical evidence package, with a small non-interventional study or targeted post-market clinical follow-up covering the residual questions.
Registry data as a clinical evidence source
A clinical registry is a structured, ongoing data collection on patients treated with a class of devices, a category of procedures, or a specific condition. Where a high-quality registry exists for the indication, registry data can feed the clinical evaluation directly. Joint replacement registries are the textbook example, but registries exist or are being built for cardiovascular devices, spinal implants, diabetes technology, and an expanding list of categories.
Registry data has three strong properties as clinical evidence. It is real-world — the patients, the clinicians, the follow-up periods, and the adverse event rates reflect actual clinical practice. It is longitudinal — follow-up periods of years or decades are possible in ways no startup-funded investigation can match. It is large — sample sizes that would be prohibitively expensive to recruit in a trial already exist in the registry.
The limit is quality control. A registry is only as good as its data dictionary, its capture rate, its adjudication process, and its governance. Before relying on a registry for clinical evidence under Annex XIV Part A, the manufacturer must appraise the registry's methodology against the suitability and quality criteria Annex XIV requires — the same appraisal that would be done for any literature-based evidence. A well-run national registry with published peer-reviewed outcomes is a strong source. A vendor-operated database with no external governance is not.
Startups with devices in registry-covered indications should map the registry ecosystem for their device category at the start of clinical strategy planning, not during the Notified Body review.
Retrospective analyses of routine clinical data
A retrospective analysis uses data that were already collected for routine clinical care or for prior studies, and analyses them for a specific clinical question relevant to the device under evaluation. The data may come from electronic health records, hospital information systems, post-market surveillance data on predicate or equivalent devices, or prior clinical studies that are being re-analysed for a new question.
Under Annex XIV Part A, retrospective data can contribute to the clinical evidence package where the methodology is defensible and the data quality is adequate. The appraisal criteria are the same as for any clinical data source: relevance to the device and the intended purpose, methodological quality, generalisability to the target population, and the adequacy of the statistical analysis. Retrospective analyses are particularly useful during the identification and appraisal phases of the clinical evaluation, because they can rule out or confirm hypotheses before any new data generation is scoped.
The common mistake is treating retrospective data as a cheap substitute for prospective data without being honest about the limitations. Selection bias, missing data, unmeasured confounders, and the absence of a pre-specified analysis plan are real constraints. An honest retrospective analysis names these constraints up front and lets them inform what else the clinical evidence package needs.
Bench and pre-clinical testing as part of the evidence stack
Bench testing, simulated-use testing, animal studies where ethically and scientifically justified, and pre-clinical testing against harmonised standards are not "clinical" evidence in the Article 61(1) sense, but they are directly referenced in Annex XIV Part A as part of the broader evidence package that supports the clinical evaluation. The 600 to 3,000 bench and simulated-use tests that a disciplined startup runs before the first patient, referenced in post 144, are exactly the evidence that can reduce the scope of what the clinical investigation needs to answer — and sometimes remove the need for a new investigation entirely.
A separate post (147) covers the role of bench and non-clinical evidence in detail. The key point here is that bench and pre-clinical data are not an alternative to clinical evidence in the strict Article 61 sense — they are the foundation on which the clinical evidence sits, and a strong pre-clinical package directly shrinks the clinical data gap that remains.
When each alternative fits
There is no universal ranking. The right combination depends on the device, the intended purpose, and the risk class. A rough map for a startup planning its clinical evidence strategy:
- Literature plus equivalence plus usability: fits many Class I, Class IIa, and some Class IIb devices where the clinical question is well-covered by existing evidence and the new device sits within an established category. Check MDCG 2020-5 for the equivalence criteria.
- Literature plus non-interventional performance study: fits devices where the clinical question is narrow, the device is in routine use or can be evaluated within a routine care pathway, and the interventional trial route would be disproportionate.
- Registry data plus targeted post-market clinical follow-up: fits devices in indications where a high-quality registry exists and the residual questions can be answered through post-market clinical follow-up planned under Annex XIV Part B.
- Retrospective analysis plus non-interventional study: fits devices where historical data exists but needs prospective confirmation on one or two specific endpoints.
- Full clinical investigation: fits implantable and Class III devices that do not meet the Article 61(4) to (6) exemption routes, and any device where the claims cannot be supported by any other combination of sources.
The rule that ties all of these together is the one in Annex XIV Part A itself: identify, appraise, analyse, and plan. The clinical evaluation plan is the document where the combination is chosen and defended. The Notified Body will assess that plan, not the founder's preference for one source over another.
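The rough map above is, in effect, a small decision table. As a purely illustrative sketch — the profile keys and package names below are invented for this example and carry no regulatory weight; the clinical evaluation plan makes the real call — it could look like this:

```python
# Hypothetical decision map from device profile to candidate evidence
# combination. Illustrative only: the keys and package contents are
# invented examples, not a rule engine to rely on.
EVIDENCE_MAP = {
    ("class_IIa", "established_category"): ["literature", "equivalence", "usability"],
    ("class_IIb", "novel_measurement"):    ["literature", "non_interventional_study"],
    ("class_IIb", "registry_covered"):     ["registry_data", "targeted_PMCF"],
    ("class_III", "no_exemption"):         ["full_clinical_investigation"],
}

def candidate_package(risk_class: str, situation: str) -> list[str]:
    """Return the candidate evidence combination for a device profile,
    defaulting to a full investigation when nothing lighter is mapped."""
    return EVIDENCE_MAP.get((risk_class, situation),
                            ["full_clinical_investigation"])

print(candidate_package("class_IIa", "established_category"))
```

The point of the sketch is the default branch: in the absence of a defensible lighter combination, the fallback is the full investigation — which is exactly why the mapping exercise is worth doing before the budget is set.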
Common mistakes startups make with alternative evidence sources
- Treating "no investigation" as "no work." The alternatives are rigorous. A well-run literature review plus usability engineering file plus equivalence appraisal is substantial work, just of a different kind than a trial. Skipping steps because "we do not need a trial" produces a clinical evaluation the Notified Body rejects.
- Using usability data to answer efficacy questions. EN 62366-1 covers safety in use. It does not cover clinical effectiveness. The two data types answer different questions under Annex XIV Part A.
- Citing a registry without appraising it. A registry is a source, not a pass. Its methodology must be evaluated against the Annex XIV suitability and quality criteria before its data counts toward the clinical evidence package.
- Assuming Article 61(10) means "no clinical data needed." Article 61(10) applies where demonstration of conformity based on clinical data is not deemed appropriate for the device; the manufacturer must then document an adequate justification, grounded in risk management and the results of non-clinical testing. It does not mean zero evidence — it means the evidence may come from non-clinical sources where this is justified in the clinical evaluation and accepted by the Notified Body. The justification is substantial.
- Running a non-interventional study without local regulatory advice. National ethics and research law still apply, and "non-interventional" does not mean "no ethics approval needed." Check every jurisdiction in which the study will run.
- Defaulting to a trial when the alternatives would have worked. The expensive mistake in the opposite direction. A EUR 1.5 million trial for a Class IIa device whose evidence package could have been built from literature, equivalence, and a summative usability study is money the startup cannot get back.
The Subtract to Ship angle
Subtract to Ship applied to clinical evidence sources is the same discipline as Subtract to Ship applied to anything else in the MDR: the Regulation sets a floor, the founder's instinct adds layers above the floor, and the subtraction removes the layers that do not trace back to a specific clinical claim or a specific Annex XIV Part A requirement.
Concretely: start from the intended purpose and the specific clinical claims it makes. For each claim, ask which source in Annex XIV Part A actually answers the question behind the claim. Literature may answer it. Equivalence may answer it. A usability engineering file may answer it. A registry may answer it. A retrospective analysis may answer it. A non-interventional study may answer it. A full clinical investigation may be the only thing that answers it. The answer will rarely be "one source covers everything" and will rarely be "we need a full interventional trial for every claim." It will usually be a combination, and the combination that costs the least while genuinely answering every claim is the combination to run.
The subtraction is not in the evidence quality. The subtraction is in the choice of source. A EUR 200,000 clinical evidence package that answers the claims is better regulatory evidence than a EUR 1.5 million clinical evidence package that answers the same claims plus two questions nobody asked.
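The selection discipline described above is, structurally, a covering problem: find the cheapest combination of sources that jointly answers every claim. A toy sketch makes the point — every claim, source, and cost figure below is invented for illustration, and a real plan would weigh evidence quality, not just cost:

```python
from itertools import combinations

# Hypothetical claims and candidate sources -- all names and EUR costs
# are invented for illustration, not taken from any real device file.
claims = {"measures_X_accurately", "safe_for_intended_user", "fits_clinical_workflow"}

# source -> (claims it can answer, rough cost in EUR)
sources = {
    "literature_plus_equivalence": ({"measures_X_accurately"}, 40_000),
    "summative_usability_study":   ({"safe_for_intended_user", "fits_clinical_workflow"}, 60_000),
    "non_interventional_study":    ({"measures_X_accurately", "fits_clinical_workflow"}, 150_000),
    "full_clinical_investigation": (set(claims), 1_500_000),
}

def cheapest_covering_combination(claims, sources):
    """Exhaustively search for the lowest-cost set of sources that
    jointly answers every claim. Brute force is fine for the handful
    of sources a clinical evaluation plan realistically weighs."""
    best = None
    names = list(sources)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            covered = set().union(*(sources[n][0] for n in combo))
            cost = sum(sources[n][1] for n in combo)
            if claims <= covered and (best is None or cost < best[1]):
                best = (combo, cost)
    return best

combo, cost = cheapest_covering_combination(claims, sources)
print(combo, cost)  # literature + usability covers all three claims at EUR 100,000
```

In this toy example, literature plus equivalence plus a summative usability study covers all three claims at EUR 100,000, and the EUR 1.5 million investigation is never selected — which is the subtraction rule in one line of output.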
Reality Check — Where do you stand?
- Have you listed your clinical claims from the intended purpose, one by one, and mapped each one to a specific Annex XIV Part A source that could answer it?
- Have you checked whether your device falls under Article 61(4) for implantable or Class III devices, and if so, whether any of the Article 61(4) to (6) exemption routes and MDCG 2023-7 cases apply?
- Have you evaluated equivalence per MDCG 2020-5 for every plausibly equivalent device already on the market?
- Do you have a usability engineering file under EN 62366-1:2015+A1:2020, and have you been honest about which claims it answers and which it does not?
- Have you mapped the registry ecosystem for your indication, and appraised any relevant registry against the Annex XIV suitability and quality criteria?
- Is your clinical evaluation plan explicit about which source answers which claim, with gaps identified and a plan to close them?
- If your current plan is "run a full clinical investigation," can you defend that plan against a review that asks "which specific claims could not have been answered by any combination of the alternatives?"
Frequently Asked Questions
Can a Class IIa device get CE-marked under MDR without a new clinical investigation? Yes, frequently. Class IIa devices are not subject to the Article 61(4) default rule that applies to implantable and Class III devices. For many Class IIa devices, a clinical evidence package built from literature, equivalence under MDCG 2020-5, a usability engineering file under EN 62366-1:2015+A1:2020, and post-market clinical follow-up planned under Annex XIV Part B can be sufficient, provided the clinical evaluation under Annex XIV Part A demonstrates that the available data answers the clinical claims in the intended purpose. The Notified Body makes the final call based on the clinical evaluation plan and report.
Is a non-interventional performance study the same as post-market clinical follow-up? They overlap but are not identical. Post-market clinical follow-up under Annex XIV Part B is a continuous process after CE marking that collects clinical data to confirm safety and performance throughout the expected lifetime of the device. A non-interventional performance study is a specific study, which may be conducted before or after CE marking, with a defined protocol, endpoints, and duration. A post-market clinical follow-up plan can include one or more non-interventional studies. The key distinction is that post-market clinical follow-up is a continuous obligation, while a non-interventional study is a bounded project.
Does Article 61(10) let me skip clinical data entirely? No, not in the general case. Article 61(10) allows, for certain devices, the clinical evaluation to be based on non-clinical evidence where this is duly justified. The justification must be documented in the clinical evaluation, accepted by the Notified Body where a Notified Body is involved, and based on the device's specific characteristics. It is not a blanket exemption and it is not a startup shortcut — it is a narrow provision for specific device categories where clinical data in the Article 61(1) sense would add nothing the non-clinical evidence does not already demonstrate.
Can a registry alone provide sufficient clinical evidence? For some device categories, yes — particularly in fields with mature, peer-reviewed registries and a device that fits within the registry's scope. For most startup devices, no — the registry will usually need to be combined with device-specific evidence because the registry was designed around predicate or equivalent devices, not the new device. The clinical evaluation plan is where the combination is specified.
What counts as "sufficient levels of access to data" for an equivalence claim under MDCG 2023-7? MDCG 2023-7 addresses the Article 61(5) requirement that, for an implantable or Class III device claiming equivalence to a device marketed by another manufacturer, the manufacturer of the device under evaluation must have a contract in place allowing full access, on an ongoing basis, to the technical documentation of the equivalent device. The guidance clarifies the level of data access that contract must provide to substantiate the equivalence claim. For non-implantable, non-Class III devices, this specific requirement does not apply, although the general data-appraisal requirements of Annex XIV Part A still do.
Related reading
- What Is Clinical Evaluation Under MDR? — the hub post for the clinical evaluation cluster and the framework every alternative source fits into.
- Equivalence Under MDR — the alternative source most frequently underused by startups.
- Clinical Data Sources Under MDR — the broader catalogue of sources Annex XIV Part A recognises.
- Sufficient Clinical Evidence for Class I Devices — the lower bound of the clinical evidence question.
- What Is a Clinical Investigation Under MDR? — the definitional companion for when an investigation is genuinely required.
- How to Run a Lean Clinical Investigation as a Startup with Limited Budget — the procedure for the cases where a new investigation is the only path.
- Bench and Non-Clinical Evidence in the Clinical Evaluation — the pre-clinical evidence layer that sits underneath every alternative source described here.
- Real-World Evidence Under MDR — the broader real-world data question the alternative sources sit inside.
- PMCF Surveys and Registries for Startups — the post-market side of the registry and survey evidence routes.
- The Subtract to Ship Framework for MDR Compliance — the methodology behind choosing the lightest evidence package that genuinely answers the claims.
Sources
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 61 (clinical evaluation), Article 61(4) to (6) (clinical investigation requirements for implantable and Class III devices and exemption routes), Article 61(10) (cases where clinical data may not be deemed necessary), Annex XIV Part A (clinical evaluation procedure), Annex II (technical documentation), Annex III (post-market surveillance technical documentation). Official Journal L 117, 5.5.2017.
- MDCG 2020-5 — Clinical Evaluation — Equivalence: A guide for manufacturers and notified bodies, April 2020.
- MDCG 2023-7 — Guidance on exemptions from the requirement to perform clinical investigations pursuant to Article 61(4)-(6) MDR and on 'sufficient levels of access' to data needed to justify claims of equivalence, December 2023.
- EN ISO 14155:2020+A11:2024 — Clinical investigation of medical devices for human subjects — Good clinical practice.
- EN 62366-1:2015+A1:2020 — Medical devices — Part 1: Application of usability engineering to medical devices.
- EN ISO 14971:2019+A11:2021 — Medical devices — Application of risk management to medical devices.
This post is part of the Clinical Evaluation & Clinical Investigations series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. If your clinical evidence plan has defaulted to "we need a full clinical investigation" without working through the alternatives Annex XIV Part A actually recognises, Zechmeister Strategic Solutions works with founders on exactly this question — which combination of sources answers the claims in the intended purpose at the smallest defensible scope.