MDR Article 61 and Annex XIV Part A recognise three legitimate sources of clinical data for clinical evaluation: scientific literature on the device or its underlying technology, clinical data from an equivalent device already on the market, and clinical investigation of the device itself. A well-built clinical evaluation plan draws on a deliberate combination of these three, chosen in order of cost and feasibility, not defaulted to the most expensive pathway first.
By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.
TL;DR
- The three clinical data sources under MDR are scientific literature, equivalence to an existing device, and clinical investigation of the device itself. Annex XIV Part A structures the clinical evaluation around these sources.
- Literature is usually the cheapest source. Equivalence is legally stricter under MDR than under the former Directives and is tightly scoped by MDCG 2020-5 (April 2020). Clinical investigation is the most expensive and most time-consuming source and must follow EN ISO 14155:2020+A11:2024.
- The sources are not mutually exclusive. Most clinical evaluation reports combine two or three of them, with each source answering a defined subset of the clinical questions.
- The choice of sources must be pre-specified in the clinical evaluation plan (CEP) together with appraisal criteria. Reverse-engineering the sources around the data you happen to have is the fastest way to fail a Notified Body review.
- A Graz-based company saved EUR 400,000-500,000 and 1-1.5 years by recognising that their established measurement methods were covered by existing literature and harmonised standards, removing the need for pre-market clinical investigations on those aspects.
Why this matters for your startup
The founder of a startup building a Class IIa diagnostic device sits down with a clinical contract research organisation and is quoted EUR 600,000 for a pre-market clinical investigation. The founder has been told, more than once, that MDR "requires clinical evidence." The founder assumes the CRO quote is the price of entry. The spreadsheet gets rewritten. The next funding round becomes urgent.
This scene plays out every week across European MedTech. The assumption behind it is wrong in a specific, expensive way. MDR requires clinical evidence. It does not require that clinical evidence come from a new clinical investigation. The Regulation names three sources. Literature. Equivalence. Clinical investigation. The choice between them drives most of the clinical evaluation budget, and for many devices, the cheapest legitimate source is not the one the CRO quoted.
Knowing the three sources and how they combine is not a nice-to-have. For a resource-constrained startup, it is the difference between a clinical evaluation that fits the runway and a clinical evaluation that ends the runway.
Source 1 — Scientific literature
Literature is the first source named in Annex XIV Part A and, for most startup devices, the most under-used. A literature-based contribution to clinical evaluation is built on a structured, documented, reproducible search of peer-reviewed publications relevant to the device, its underlying technology, the clinical condition, the target population, or the established measurement methods the device relies on.
The literature review is not a collection of papers the founder happens to have read. It is a formal process with a pre-specified search strategy, pre-specified inclusion and exclusion criteria, pre-specified appraisal criteria, and a documented audit trail from search query to final evidence table. MEDDEV 2.7/1 revision 4 (June 2016) remains the widely referenced structural guide for how this process is organised, and the MDR text takes precedence where the two diverge.
Literature is the cheapest source when the literature actually exists. It is also the source where startups leave the most money on the table, because the instinct is to assume clinical evidence means new clinical data. For devices built on well-established physical principles, well-characterised materials, or measurement methods with decades of published performance data, the literature carries most of the evidence burden and the clinical investigation — if any — closes only the genuinely novel gaps.
The limitation of literature is specificity. A literature body on the general measurement principle does not automatically substantiate the specific clinical claim your labelling makes. The CEP must be honest about what the literature answers and what it does not. Padding a CER with loosely relevant references is worse than useless — Notified Body reviewers read the abstracts and notice.
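The audit trail from search query to evidence table described above can be kept as structured data rather than loose prose. The sketch below is purely illustrative — the field names are our own shorthand, not prescribed by MDR or MEDDEV 2.7/1 — but it shows the shape of a reproducible search record in which every hit is accounted for:

```python
from dataclasses import dataclass, field

@dataclass
class LiteratureSearch:
    """Illustrative record of one pre-specified literature search.

    Field names are our own convention, not mandated by MDR or
    MEDDEV 2.7/1 rev 4; the point is that the query, dates, criteria,
    and appraisal outcome are fixed and auditable.
    """
    question_id: str            # clinical question from the CEP this search serves
    databases: list             # e.g. ["PubMed", "Embase"]
    query: str                  # exact search string, recorded verbatim
    search_date: str            # ISO date the search was executed
    inclusion_criteria: list
    exclusion_criteria: list
    hits: int = 0               # raw result count on the search date
    included: list = field(default_factory=list)  # references kept after appraisal
    excluded: list = field(default_factory=list)  # (reference, reason) pairs

    def audit_trail(self) -> dict:
        """One evidence-table row: every hit must be included or excluded."""
        return {
            "question": self.question_id,
            "query": self.query,
            "date": self.search_date,
            "hits": self.hits,
            "included": len(self.included),
            "excluded": len(self.excluded),
            "unaccounted": self.hits - len(self.included) - len(self.excluded),
        }
```

A non-zero "unaccounted" value is exactly the gap a reviewer will probe: papers that appeared in the search but were neither appraised in nor appraised out.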
Source 2 — Equivalence to an existing device
The second source is clinical data from a device already on the market that the manufacturer claims is equivalent to the device under evaluation. Equivalence is a legal pathway defined in Annex XIV Part A paragraph 3 and interpreted by MDCG 2020-5 (Clinical Evaluation — Equivalence: A guide for manufacturers and notified bodies, April 2020).
Equivalence under MDR is stricter than equivalence under the former Directives. The equivalence claim must be demonstrated across three dimensions simultaneously: technical characteristics, biological characteristics, and clinical characteristics. MDCG 2020-5 details what each dimension covers and how differences are to be assessed. The threshold is not "similar enough." The threshold is that any differences between the devices do not adversely affect the clinical safety and performance of the device under evaluation, and that this is demonstrated with scientific justification.
For implantable and Class III devices, equivalence is further constrained. The manufacturer must have "sufficient levels of access" to the clinical data and technical documentation of the equivalent device, typically via an ongoing contractual relationship with the other manufacturer. This requirement makes same-manufacturer equivalence (a new version of your own previous device) realistic and cross-manufacturer equivalence (claiming equivalence to a competitor) very difficult in practice.
For non-implantable, non-Class III devices, equivalence is possible without the contract requirement, but the three-dimensional demonstration still applies. Startups frequently assume equivalence is a shortcut. Under MDR, it is a legitimate pathway with a high documentation burden, and Notified Bodies scrutinise equivalence claims closely. The right question is not "can we claim equivalence" but "do we actually have access to the level of detail needed to defend the three-dimensional claim."
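The three-dimensional demonstration can be drafted as a structured comparison table before any narrative is written. The sketch below is a minimal illustration under our own assumptions: the three dimension names come from Annex XIV Part A and MDCG 2020-5, but the row layout and the acceptance logic are our simplification, not the guidance itself:

```python
# Illustrative check of an equivalence comparison table.
# Dimension names are from Annex XIV Part A / MDCG 2020-5; the row
# layout and pass/fail logic are our own simplified sketch.

DIMENSIONS = ("technical", "biological", "clinical")

def assess_equivalence(rows):
    """Each row: (dimension, characteristic, own_device, equivalent_device,
    difference_identified, justification).

    The claim is defensible in this simplified model only if all three
    dimensions are actually covered and every identified difference
    carries a scientific justification that it does not adversely
    affect clinical safety and performance.
    """
    covered = {dim: False for dim in DIMENSIONS}
    unjustified = []
    for dim, characteristic, own, other, differs, justification in rows:
        covered[dim] = True
        if differs and not justification:
            unjustified.append((dim, characteristic))
    missing = [d for d in DIMENSIONS if not covered[d]]
    return {
        "claim_defensible": not unjustified and not missing,
        "unjustified_differences": unjustified,
        "missing_dimensions": missing,
    }
```

The useful failure modes are the two return fields: a dimension with no rows at all, and a difference asserted without a justification. Both are findings a Notified Body reviewer raises routinely.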
Source 3 — Clinical investigation of the device itself
The third source is clinical data generated by a clinical investigation on the device itself. A clinical investigation is a study conducted on human subjects under a protocol, with ethics approval, following the good clinical practice requirements of EN ISO 14155:2020+A11:2024 and the MDR provisions in Articles 62 to 82 and Annex XV.
Clinical investigation is the most expensive source, the most time-consuming source, and the most logistically demanding source. A pre-market clinical investigation for a Class IIb or Class III device typically takes 12 to 24 months from protocol to final report, costs six figures at a minimum and frequently seven, and requires a dedicated clinical operations capability the startup usually does not have in-house.
Clinical investigation is also sometimes unavoidable. For genuinely novel implantable and Class III devices that do not fit any of the exemption cases in MDR Article 61(4)-(6), the general rule is that pre-market clinical investigations are required. For non-implantable, non-Class III devices, clinical investigation may still be the only source that can answer a specific question about clinical performance or a specific clinical claim in the intended purpose.
The right framing is not "should we run a clinical investigation." The right framing is "for which specific parts of the clinical evidence burden is clinical investigation the only source that can answer the question, and how do we design the investigation to answer exactly those questions and nothing more." A clinical investigation that tries to substantiate every claim in the CEP is a clinical investigation that costs three times what it needs to.
When to combine the sources
The three sources are not an either/or choice. A well-built clinical evaluation combines them deliberately, with each source carrying the specific part of the evidence burden it is best suited to.
A typical combination looks like this. Literature carries the evidence for the underlying technology, the measurement principle, the general clinical context, and the established components. Equivalence — when the access conditions are met — carries the evidence for the clinical performance of similar clinical applications, filling gaps that literature cannot address directly. Clinical investigation closes the remaining gaps: specific clinical claims unique to the new device, novel aspects of the intended purpose, and any performance characteristic that literature and equivalence cannot substantiate to the level required by the risk class.
The CEP specifies the combination before any data is collected. For each clinical question derived from the intended purpose and the applicable general safety and performance requirements, the plan names the source that will answer it, the appraisal criteria, and the acceptance threshold. This pre-specification is not bureaucratic overhead. It is the mechanism that prevents the evidence pack from being assembled backwards to fit whatever data happened to be easiest to find.
The source hierarchy — cost, time, and strength of evidence
There is no official MDR hierarchy among the three sources. All three are legitimate, and the Regulation does not rank them. In practice, two informal hierarchies matter.
The first is cost and time. Literature is cheapest and fastest when it exists. Equivalence is moderate in cost, higher in documentation burden, and dependent on access to the equivalent device's data. Clinical investigation is the most expensive and slowest. A startup working under a runway constraint reads this hierarchy upwards: exhaust the cheaper sources before committing to the more expensive ones.
The second is strength of evidence for a specific question. For a question about the clinical performance of the specific device under specific conditions, a well-designed clinical investigation on that device is the strongest source. For a question about the underlying scientific principle or the safety of the material class, a body of peer-reviewed literature may be stronger than a single investigation. For a question about the clinical behaviour of a very similar device across a population, equivalence data may carry more weight than a short literature review. The choice depends on what question is being asked.
The two hierarchies sometimes conflict. The cheapest source is not always the strongest for a given question. The CEP is where this trade-off is made explicit: which questions are answered by which sources, why, and what the residual uncertainty looks like. A plan that uses literature where only clinical investigation can answer the question will fail Notified Body review. A plan that defaults to clinical investigation where literature already answers the question wastes runway the startup cannot afford to waste.
How to document the mix in the CEP and CER
The clinical evaluation plan lists every clinical question, every source used to answer it, and every appraisal criterion. The clinical evaluation report then presents the evidence from each source, assessed against those criteria, and shows how the combination of sources closes the total evidence burden.
For literature, the CER includes the search strategy, the databases searched, the search dates, the inclusion and exclusion criteria, the appraisal results, and the evidence tables. For equivalence, the CER includes the three-dimensional comparison table (technical, biological, clinical), the justification for each claimed equivalence point, the assessment of differences, the access arrangements to the equivalent device's data, and the conclusion on whether the equivalence claim holds. For clinical investigation, the CER includes the protocol reference, the investigation report, the statistical analysis, and the conclusions relative to the pre-specified endpoints.
The final section of the CER ties the sources together. It restates each clinical question from the CEP, shows which source or combination of sources addressed it, and concludes whether the evidence is sufficient to demonstrate conformity with the relevant general safety and performance requirements. This traceability — CEP question, source used, appraisal outcome, conformity conclusion — is what a Notified Body reviewer looks for first. Missing traceability is the single most common reason a CER is sent back for rework.
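That CEP-question-to-conformity-conclusion chain can be checked mechanically before submission. A minimal sketch, assuming a simple dictionary layout of our own invention (MDR prescribes the traceability, not the data format):

```python
def check_traceability(cep_questions, cer_entries):
    """Check the CEP -> CER chain the reviewer looks for first.

    cep_questions: list of question ids pre-specified in the CEP.
    cer_entries: dict mapping question id -> dict with the keys
    'source', 'appraisal_outcome', and 'conformity_conclusion'
    (an assumed layout, not an MDR-prescribed one).

    Returns the questions a reviewer would flag: absent from the
    CER entirely, or present with an incomplete chain.
    """
    required = ("source", "appraisal_outcome", "conformity_conclusion")
    missing = [q for q in cep_questions if q not in cer_entries]
    incomplete = [
        q for q in cep_questions
        if q in cer_entries and any(not cer_entries[q].get(k) for k in required)
    ]
    return {
        "missing": missing,
        "incomplete": incomplete,
        "traceable": not missing and not incomplete,
    }
```

Running this over the draft CER is a cheap pre-submission gate: an empty "missing" and "incomplete" list is the one-sitting traceability answer the reviewer asks for.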
Common mistakes startups make
- Defaulting to clinical investigation as the only "real" source and skipping the literature and equivalence evaluation entirely. This is where the biggest savings are left on the table.
- Treating equivalence under MDR as if it were equivalence under the former Directives. The three-dimensional demonstration and the access requirements are stricter, and MDCG 2020-5 is the authoritative reference.
- Assembling the literature review without a pre-specified search strategy, then reverse-engineering the appraisal criteria to match the papers found. Reviewers notice immediately.
- Claiming equivalence to a device without access to the technical and clinical data of that device. Marketing brochures and public summaries are not sufficient for an MDR equivalence demonstration.
- Designing a clinical investigation to answer every question in the CEP instead of only the questions the first two sources cannot answer. The investigation becomes larger, more expensive, and slower than it needs to be.
- Writing the CER before the CEP. The sources and appraisal criteria must be fixed before the data is collected, not after.
The Subtract to Ship angle — the Evidence Pass
The Subtract to Ship framework runs clinical evaluation through the Evidence Pass. The Evidence Pass asks one question: what is the minimum clinical evidence needed to demonstrate conformity, and what is the cheapest legitimate combination of sources that produces it?
The pass runs in a specific order. Define the intended purpose tightly. Derive the clinical questions from the intended purpose and the applicable Annex I requirements. For each question, evaluate the three sources in order of cost: literature first, equivalence second, clinical investigation last. Assemble the CEP around the cheapest legitimate combination. Only then design the clinical investigation activities needed to close the residual gaps.
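The cost-ordered evaluation in the pass can be expressed as a simple selection loop. This is a sketch under our own assumptions: whether a source can legitimately answer a question is documented expert judgement, modelled here as an input mapping rather than something code can decide:

```python
# Cost-ordered source selection for the Evidence Pass (illustrative sketch).
# The ordering reflects the informal cost hierarchy discussed above.

SOURCES_BY_COST = ("literature", "equivalence", "clinical_investigation")

def plan_sources(clinical_questions, can_answer):
    """Pick the cheapest legitimate source per clinical question.

    clinical_questions: iterable of question ids from the CEP.
    can_answer: dict mapping (question_id, source) -> bool, the
    documented expert judgement of whether that source can
    legitimately answer that question.

    Returns (plan, unanswerable): plan maps each question to its
    cheapest legitimate source; questions no source can answer go
    to 'unanswerable' for escalation in the CEP.
    """
    plan, unanswerable = {}, []
    for q in clinical_questions:
        for source in SOURCES_BY_COST:
            if can_answer.get((q, source), False):
                plan[q] = source
                break
        else:  # no source judged capable of answering this question
            unanswerable.append(q)
    return plan, unanswerable
```

The design choice mirrors the pass itself: clinical investigation is only ever selected when the two cheaper sources have been evaluated and found insufficient for that specific question.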
A Graz-based company we worked with is the textbook example. The initial plan assumed two to three pre-market clinical investigations, budgeted at EUR 400,000-500,000, over 1-1.5 years. The Evidence Pass asked whether the underlying measurement methods were already covered by literature and recognised standards. They were. The clinical evaluation was rebuilt around a literature-and-standards-dominated combination, with clinical investigation reserved only for the genuinely novel aspects of the device. The Notified Body accepted the approach. The savings were real, and they traced back to a disciplined reading of Annex XIV Part A and the three sources it recognises.
The Evidence Pass does not cut required work. Every obligation in MDR Article 61 still holds. The pass cuts the default assumption that clinical investigation is the pathway, and replaces it with a deliberate choice among the three sources based on what the Regulation actually recognises.
Reality Check — Where do you stand?
- Can you list the specific clinical questions that your clinical evaluation needs to answer, derived from your intended purpose and the applicable Annex I requirements?
- For each question, have you evaluated all three sources — literature, equivalence under MDCG 2020-5, and clinical investigation — in order of cost before committing to one?
- Is your literature search strategy pre-specified, documented, and reproducible, or does it exist only as a folder of papers someone on the team found useful?
- If you are planning to claim equivalence, do you have the level of access to the equivalent device's technical and clinical data that MDCG 2020-5 requires, or are you relying on public information?
- If you are planning a clinical investigation, is it designed to answer only the specific questions that literature and equivalence cannot answer, or does it try to substantiate every clinical claim in the CEP?
- Did your CEP name the source for every clinical question, together with the appraisal criteria and the acceptance threshold, before any data was collected?
- When the Notified Body asks "show me which source answered which question," can you produce that traceability from the CER in one sitting?
Frequently Asked Questions
What are the three clinical data sources under MDR? Annex XIV Part A of MDR recognises scientific literature, clinical data from an equivalent device, and clinical investigation of the device itself as the three legitimate sources for clinical evaluation. A clinical evaluation plan typically draws on a combination of these, with each source answering a defined subset of the clinical questions.
Is literature alone enough for MDR clinical evaluation? For some devices, yes. Devices built on well-established measurement principles, well-characterised materials, or technologies with substantial published clinical experience can build sufficient clinical evidence from a structured literature review, particularly when the literature aligns tightly with the intended purpose and the applicable general safety and performance requirements. The feasibility is device-specific and must be justified in the clinical evaluation plan.
How strict is equivalence under MDR compared with the former Directives? Stricter. MDR Annex XIV Part A paragraph 3 requires equivalence to be demonstrated across technical, biological, and clinical characteristics simultaneously, and MDCG 2020-5 (April 2020) is the authoritative guidance on how the three-dimensional demonstration is structured. For implantable and Class III devices, the manufacturer must additionally have sufficient levels of access to the equivalent device's data, which typically requires an ongoing contract between manufacturers.
When is a clinical investigation unavoidable under MDR? For most genuinely novel implantable and Class III devices that do not fit the exemption cases in Article 61(4)-(6), clinical investigation is the general rule. For non-implantable, non-Class III devices, clinical investigation is unavoidable whenever a specific clinical question cannot be answered by literature or equivalence and the question is necessary to demonstrate conformity with an applicable general safety and performance requirement.
Can the three sources be combined in a single clinical evaluation? Yes, and in practice most well-built clinical evaluations do combine them. The clinical evaluation plan specifies which source answers which clinical question before any data is collected, and the clinical evaluation report shows the combined evidence against each question. Combining sources is the norm, not the exception.
Does MEDDEV 2.7/1 Rev 4 still apply under MDR? MEDDEV 2.7/1 revision 4 (June 2016) is legacy guidance issued under the former Directives. It is still widely referenced as a structural guide for organising clinical evaluation, but the MDR text and MDR-era MDCG guidance take precedence wherever they diverge. For equivalence specifically, MDCG 2020-5 is the current authoritative reference.
Related reading
- What Is Clinical Evaluation Under MDR? — the pillar post for the Clinical Evaluation cluster and the foundation this post builds on.
- Sufficient Clinical Evidence Under MDR — how to decide when the combination of sources meets the conformity threshold.
- Equivalence Under MDR — the dedicated deep-dive on the equivalence pathway and MDCG 2020-5.
- Literature Review for Clinical Evaluation — the methodology for building a defensible literature-based contribution to the CER.
- Clinical Investigation Planning Under MDR — when clinical investigation is the right source and how to scope it.
- The Clinical Evaluation Plan (CEP) — the document where the source combination is pre-specified.
- The Clinical Evaluation Report (CER) — the document that presents the evidence from all three sources against the pre-specified questions.
- MDR Annex XIV Part A Explained — the annex walkthrough that underpins the three-source framework.
- PMCF and the Clinical Evidence Lifecycle — how post-market data feeds back into the clinical evaluation.
- The Subtract to Ship Framework for MDR Compliance — the methodology pillar, including the Evidence Pass applied in this post.
Sources
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 61 (clinical evaluation), Annex XIV Part A (clinical evaluation process and data sources). Official Journal L 117, 5.5.2017.
- MDCG 2020-5 — Clinical Evaluation — Equivalence: A guide for manufacturers and notified bodies, April 2020.
- MEDDEV 2.7/1 revision 4 — Clinical Evaluation: A Guide for Manufacturers and Notified Bodies under Directives 93/42/EEC and 90/385/EEC, June 2016 (legacy guidance; MDR text takes precedence where they diverge).
- EN ISO 14155:2020 + A11:2024 — Clinical investigation of medical devices for human subjects — Good clinical practice.
This post is part of the Clinical Evaluation & Clinical Investigations cluster in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. Choosing the right combination of clinical data sources is where a disciplined clinical evaluation strategy saves the most runway — and where a good sparring partner earns their keep.