---
title: "Literature Search Protocols: Building a Reproducible Search Strategy"
description: A reproducible literature search protocol is a CER prerequisite. Here is how to write one that an auditor can verify and a successor can re-run.
authors: Tibor Zechmeister, Felix Lenhard
category: Clinical Evaluation & Investigations
primary_keyword: literature search protocol clinical evaluation MDR
canonical_url: https://zechmeister-solutions.com/en/blog/literature-search-protocols-clinical-evaluation
source: zechmeister-solutions.com
license: All rights reserved. Content may be cited with attribution and a link to the canonical URL.
---

# Literature Search Protocols: Building a Reproducible Search Strategy

*By Tibor Zechmeister (EU MDR Expert, Notified Body Lead Auditor) and Felix Lenhard.*

> **A literature search protocol for MDR clinical evaluation is the written, pre-specified document that tells anyone reading it exactly how the search was conducted: which clinical questions it answers, which databases were queried, what search strings were run, what dates and languages were in scope, which inclusion and exclusion criteria were applied, and how every number in the PRISMA-style flow was produced. It is a prerequisite of the clinical evaluation plan under MDR Annex XIV Part A Section 1, not an optional add-on, and it is the single document most often missing, half-written, or retrofitted after the search at Notified Body review. The test of a good protocol is simple: a competent successor, reading only the protocol, can re-run the search and arrive at a comparable set of included records. If they cannot, the protocol is not a protocol. It is a description.**

**Last updated 10 April 2026.**

---

## TL;DR

- A literature search protocol under MDR is the pre-specified, written document describing how the search for clinical data will be conducted, before any search is actually run.
- It is required as part of the clinical evaluation plan under MDR Annex XIV Part A Section 1, which specifies under Section 1.1 that the clinical evaluation follow a defined and methodologically sound procedure based on identification of available clinical data.
- MDCG 2020-5 (April 2020) governs how the protocol treats equivalence claims and the device-strand search. MEDDEV 2.7/1 Rev 4 (June 2016) remains the legacy structural reference for the four-stage methodology; the MDR text takes precedence where they diverge.
- The protocol fixes the clinical questions, the databases and interfaces, the verbatim search strings, the date range, the languages, the inclusion and exclusion criteria, the appraisal framework, and the reviewer and disagreement process. All before the first search is run.
- The single test of a good protocol is reproducibility: a competent successor reading only the protocol can re-run the search and arrive at a comparable set of included records.
- Retrofitting a protocol after a search has already been run is one of the most common and most damaging findings at Notified Body review, because it inverts the causal order the Regulation requires.

---

## Why the protocol exists before the search, not after

The order matters. The MDR clinical evaluation plan under Annex XIV Part A Section 1 is written before the clinical data is gathered. Section 1.1 of that Part specifies that the clinical evaluation must follow a defined and methodologically sound procedure based on the identification of available clinical data, the appraisal of that data, and the analysis of the evidence against the safety and performance claims of the device. The literature search protocol is the subsection of that plan that operationalises "defined and methodologically sound" for the identification stage.

Written in that order, the protocol forces the author to commit to what they will accept as evidence before they know what the evidence says. That commitment is the whole point. It is what stops the inclusion criteria from mysteriously aligning with the favourable studies. It is what stops the search strings from being tuned until the results look reassuring. It is what gives a Notified Body reviewer a document they can actually verify, because there is a plan that predates the results.

Written in the wrong order, with the search run first and the protocol reverse-engineered afterwards to match, the document loses its function. A reviewer with experience can usually tell within an hour. The appraisal criteria fit the data too well. The exclusion reasons line up too neatly with the uncomfortable studies. The dates on the plan and the search do not match, or the plan has no date at all. Every one of these is a finding. Every finding is rework. Rework on a clinical evaluation report is measured in months, not days.

This post walks through what a reproducible literature search protocol looks like, step by step, in a form that an auditor can verify and a successor can re-run.

## Step 1. Define the clinical questions

A search protocol begins with the clinical questions the evaluation must answer. Those questions come directly from the clinical evaluation plan: the intended purpose under Article 2(12), the target population, the intended clinical benefits with their outcome parameters, the specific general safety and performance requirements that need clinical data to support them, and the state of the art against which the benefit-risk ratio will be assessed.

Each question is written in structured form. PICO (Population, Intervention, Comparator, Outcome) works for therapeutic and diagnostic questions. A simpler "what adverse events have been reported for this technology in this population" works for safety questions. What must not happen is a vague "find the literature on the device" framing. A vague question produces a search that cannot be judged complete or incomplete because there is no standard against which to judge it.

Two parallel search strands are standard practice. The first targets the subject device, or the equivalent device if equivalence is being claimed under MDCG 2020-5. The second targets the clinical condition, the underlying technology, and the state of the art. The protocol declares both strands explicitly, with separate clinical questions for each.
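
To make the framing concrete, here is a minimal sketch of how the two strands might be written down as structured, PICO-framed questions. The code is illustrative only: the condition, device, and outcome names are invented placeholders, not taken from any real evaluation, and nothing in the MDR requires the questions to be machine-readable.

```python
from dataclasses import dataclass, field

@dataclass
class ClinicalQuestion:
    """One pre-specified question from the clinical evaluation plan (PICO framing)."""
    strand: str                 # "device" or "state_of_the_art"
    population: str             # P
    intervention: str           # I
    comparator: str             # C
    outcomes: list[str] = field(default_factory=list)  # O

# Placeholder content only; every name below is hypothetical.
questions = [
    ClinicalQuestion(
        strand="device",
        population="adults with chronic condition X",
        intervention="subject device (or the claimed equivalent under MDCG 2020-5)",
        comparator="current standard of care",
        outcomes=["symptom score at 12 months", "device-related adverse events"],
    ),
    ClinicalQuestion(
        strand="state_of_the_art",
        population="adults with chronic condition X",
        intervention="any device in the same technology class",
        comparator="alternative treatment options",
        outcomes=["effectiveness benchmarks", "known hazards of the technology"],
    ),
]
```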

## Step 2. Choose the databases and the interfaces

No single database covers the full published medical literature, so a systematic search under MDR queries at least two. PubMed / MEDLINE is always included. Embase is almost always included for broader European coverage. Cochrane Library and CENTRAL are added for therapeutic effectiveness questions. Clinical trial registries (ClinicalTrials.gov, the EU Clinical Trials Register, and relevant sections of Eudamed as they become available) are added when unpublished or ongoing investigations could affect the conclusions. Grey literature, including the FDA MAUDE database for post-market adverse events, is added when the indexed databases alone cannot answer the question.

The protocol names the specific databases and the specific interfaces through which they are accessed. "We searched PubMed" is not a protocol entry; "PubMed / MEDLINE via the NCBI interface, accessed on {date}" is. Reproducibility depends on that level of specificity because the same database accessed through different interfaces can return different results, and a successor re-running the search needs to be querying the same source in the same way to arrive at the same result.
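
As a sketch of that level of specificity, a protocol can keep a simple source register in which every entry already reads like the protocol sentence. The interfaces and access dates below are illustrative placeholders, not a recommended set for any particular device.

```python
# Illustrative source register; interfaces and access dates are placeholders.
sources = [
    {"database": "PubMed / MEDLINE", "interface": "NCBI (pubmed.ncbi.nlm.nih.gov)", "accessed": "2026-04-10"},
    {"database": "Embase",           "interface": "Elsevier (embase.com)",          "accessed": "2026-04-10"},
    {"database": "Cochrane CENTRAL", "interface": "Cochrane Library",               "accessed": "2026-04-10"},
]

for s in sources:
    # Each line is the sentence the protocol should contain for that source.
    print(f'{s["database"]} via the {s["interface"]} interface, accessed on {s["accessed"]}')
```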

## Step 3. Write the search string verbatim

The search string is the most technical part of the protocol and the part most often described rather than recorded. Description is not enough. The string must appear in the protocol in the exact form it will be pasted into the database, with every Boolean operator, every field tag, every filter, and every parenthesis visible.

A defensible string has three layers. Controlled vocabulary (MeSH in PubMed, Emtree in Embase) captures records indexed under a concept regardless of which words the authors used. Free-text terms, including synonyms, abbreviations, trade names, and British and American spellings, catch the recent records that have not yet been indexed. Boolean logic combines concepts: (device OR technology OR equivalent-device terms) AND (clinical condition OR population terms) AND (outcome OR safety terms), with language and publication-type filters applied afterward.
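
As an illustration of those three layers, here is a hypothetical PubMed-style string held verbatim as a constant rather than paraphrased. Every term, MeSH heading, and filter below is a placeholder and has not been validated against any real device or database; the point is the shape, not the content.

```python
# Hypothetical PubMed-style string; every term, field tag, and filter is a placeholder.
SEARCH_STRING_PUBMED = (
    '("Condition X"[MeSH] OR "condition X"[tiab] OR "disease X"[tiab]) '         # condition / population terms
    'AND ("Device Y"[tiab] OR "TradeName Y"[tiab] OR "technology Z"[MeSH]) '      # device / technology, incl. trade names
    'AND ("Treatment Outcome"[MeSH] OR safety[tiab] OR "adverse events"[tiab]) '  # outcome / safety terms
    'AND ("2016/01/01"[dp] : "2026/04/10"[dp]) '                                  # date range fixed in the protocol
    'AND (english[la] OR german[la])'                                             # languages fixed in the protocol
)

print(SEARCH_STRING_PUBMED)  # this exact text is what gets pasted into the database
```

The protocol would contain one such frozen string per database, because Embase field tags and Emtree headings differ from PubMed's and the string cannot simply be reused across interfaces.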

Every change to the string produces a different search. Iterating is legitimate, provided it happens before the protocol is finalised, and each iteration is recorded with a date and a short rationale. Once the protocol is signed, the string is frozen. A mid-search edit without a documented amendment invalidates the reproducibility claim.
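
If the pre-finalisation iterations are logged, a minimal sketch of such a log, with dates and rationales invented purely for illustration, looks like this:

```python
# Hypothetical iteration log; all entries pre-date the protocol signature.
string_iterations = [
    {"version": 1, "date": "2026-03-02", "rationale": "Initial draft of the three-layer string."},
    {"version": 2, "date": "2026-03-09", "rationale": "Added trade-name synonyms missing from the free-text layer."},
    {"version": 3, "date": "2026-03-20", "rationale": "Frozen with the protocol signature; later edits need a documented amendment."},
]
```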

## Step 4. Set the inclusion and exclusion criteria

Inclusion and exclusion criteria are the filter that decides which retrieved records enter appraisal and which do not. They are written before the search runs, applied consistently by every reviewer, and never modified mid-review without a documented amendment.

Inclusion criteria cover the population (is the target group studied?), the intervention (is it the device, an equivalent device under MDCG 2020-5, or the state-of-the-art comparator?), the outcomes (are the outcomes specified in the clinical evaluation plan reported?), the study design (can this design answer the question?), and the language and date range defined in the protocol. Exclusion criteria mirror these and add specific rejection reasons: case reports lacking methodological detail, duplicate publications, letters and editorials without original data, studies on clearly non-comparable devices.

The discipline that matters most here is the Annex XIV Part A Section 1(a) requirement that favourable and unfavourable data both be retained. Inclusion rules that filter out unfavourable findings under the guise of methodological quality are a direct finding at Notified Body review. Quality-based exclusion is legitimate: a methodologically weak study can be excluded, but only if the same rule is applied to favourable and unfavourable records alike. The protocol states this equal-treatment requirement in writing.
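
One way to make the equal-treatment rule concrete is to write the screening decision so that the direction of a study's result is simply not an input. The sketch below is illustrative, with hypothetical field names, and is not a validated screening tool.

```python
def screen(record: dict) -> tuple[bool, str]:
    """Apply the pre-specified criteria; the direction of the result is deliberately never read."""
    if not record["population_in_scope"]:
        return False, "Population outside the intended purpose"
    if not record["device_equivalent_or_state_of_the_art"]:
        return False, "Neither subject device, claimed equivalent, nor state-of-the-art comparator"
    if not record["reports_planned_outcomes"]:
        return False, "Does not report outcomes specified in the clinical evaluation plan"
    if record["design"] in {"letter", "editorial"} and not record["original_data"]:
        return False, "Letter or editorial without original data"
    # record["result_favourable"] is never consulted: favourable and unfavourable
    # records pass or fail on exactly the same rules.
    return True, "Included"
```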

## Step 5. Document the dates and the results

Dates are not decoration. The protocol records the date range the search covers (the earliest and latest publication dates included), the date of the actual search run (when the database was queried), and the date the protocol itself was signed and frozen. Three dates, not one. A reviewer who cannot see the relationship between these three dates cannot verify that the search was run against a protocol that existed beforehand.

The results of the search are recorded at each stage: the raw hit count from each database, the count after de-duplication, the count after title-and-abstract screening with exclusion reasons, the count after full-text screening with exclusion reasons, and the final set of included records. These numbers become the PRISMA-style flow diagram, and they have to add up. A flow diagram in which 412 records are screened and 278 are excluded but only 120 proceed to full-text review has an unexplained gap of 14 records, and that gap is the kind of detail a reviewer will ask about.
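
The arithmetic is easy to check mechanically. A minimal sketch using the counts from the example above:

```python
# Counts from the example above: 412 screened, 278 excluded at title/abstract, 120 at full text.
screened = 412
excluded_title_abstract = 278
full_text = 120

expected_full_text = screened - excluded_title_abstract   # 134
gap = expected_full_text - full_text                      # 14 records unaccounted for

print(f"Expected {expected_full_text} records at full text, flow shows {full_text}: gap of {gap}.")
```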

The protocol also specifies the update cadence. Under MDR Article 61(11), the clinical evaluation must be updated throughout the life of the device, and for Class III and implantable devices the updates must happen at least annually. The protocol states how often the search will be re-run, which aligns with the post-market surveillance and post-market clinical follow-up plans.

## Step 6. Build the audit trail

An audit trail is what turns a search that was run once into a search that can be verified by a stranger years later. The trail contains: the signed and dated protocol, the exported search results from each database at the time of the search (not just the counts, the records themselves), the screening decisions at title-abstract and full-text level with the reviewer name and date, the disagreement resolution records where two reviewers disagreed, the appraisal scores for every included record, and the final PRISMA-style flow diagram.
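
What the trail looks like on disk is up to the manufacturer. A minimal sketch of one possible manifest, with purely illustrative file names and numbering, is:

```python
# Illustrative manifest; file names and layout are conventions, not requirements.
audit_trail = {
    "protocol":      "01_search_protocol_v2_signed_2026-03-20.pdf",
    "raw_exports":   ["02_pubmed_export_2026-04-10.ris", "02_embase_export_2026-04-10.ris"],
    "screening_log": "03_title_abstract_screening_reviewerA_reviewerB.xlsx",
    "full_text_log": "04_full_text_screening_with_exclusion_reasons.xlsx",
    "disagreements": "05_disagreement_resolution_log.xlsx",
    "appraisal":     "06_appraisal_scores_per_included_record.xlsx",
    "prisma_flow":   "07_prisma_flow_diagram.pdf",
}
```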

The point of the audit trail is that the search becomes portable. If the original reviewer leaves the company, the successor can pick up the file and know exactly what was done, why, and where the evidence sits. This is not only a compliance concern. It is an operational concern. A clinical evaluation that only one person understands is a clinical evaluation one resignation away from collapse.

The trail is stored in the technical documentation under Annex II, linked to the clinical evaluation report, and re-read at every update cycle. A trail that is stored somewhere but never re-opened is a trail that will have decayed by the time it is needed.

## Step 7. Common mistakes in the MDR literature search protocol

Across many reviews of clinical evaluation reports, the same protocol mistakes keep appearing, and all of them are preventable:

- **Protocol written after the search.** The single most damaging mistake. The search ran, the results came in, the protocol was written to match, and the dates on the plan and the search do not line up or are missing. A reviewer usually catches this in the first hour.
- **Search string described, not recorded.** The protocol says "terms related to the device and the condition" rather than showing the exact string. A successor cannot reproduce it. A reviewer cannot verify it.
- **Only one database searched.** A PubMed-only review is not considered systematic. The protocol must name at least a second database with a stated reason for the combination.
- **No date on the protocol.** Without a signed date that pre-dates the first search, the pre-specification claim cannot be verified. This is the easiest possible fix and one of the most common omissions.
- **No reviewer process.** The protocol names one reviewer, no second check, no disagreement process. Single-reviewer screening is harder to defend and gives a Notified Body reviewer a reason to scrutinise individual inclusion decisions.
- **No update cadence.** The protocol covers the initial search only and is silent on how the review will be re-run. The first post-market update cycle then finds the literature base out of date and scrambles to rebuild.
- **No link to the risk file.** The protocol stops at the records flow and never specifies how unfavourable findings will feed into the risk management file under EN ISO 14971:2019+A11:2021. The link is missing before the search even starts, and the resulting review cannot show traceability.

Each of these is cheap to prevent at the protocol stage and expensive to repair after a finding.

## The Subtract to Ship angle on the literature search protocol

Subtract to Ship applied to the literature search protocol does not mean a shorter protocol. It means a tighter scope of claim that needs to be supported, so that the search is focused on the questions that actually matter and the record-handling work stays proportionate to the claim. The Evidence Pass of the framework is explicit about this order: define the intended purpose tightly under Article 2(12), identify the specific general safety and performance requirements that need literature support, then write the protocol against those questions only.

Subtraction happens at the clinical questions, not at rigour. Do not search for evidence on claims the device does not actually make, because every extra claim multiplies the search burden. Do not add exploratory questions "in case we want to extend later," because they balloon the inclusion set and dilute the focus. A review built from honestly scoped questions is smaller, faster, and more defensible than one built from a wish list.

What cannot be subtracted is the pre-specification, the verbatim search string, the equal treatment of favourable and unfavourable data, the reviewer process, the PRISMA-style flow, the audit trail, or the link into the risk file. These are the pillars that Annex XIV Part A Section 1 and Section 1.1 put in place, and they are what let a literature-dominated clinical evaluation legitimately carry the weight of the evidence base for established technologies.

## Reality Check. Where do you stand on your literature search protocol?

1. Is your literature search protocol written, signed, and dated before the first search was run, and is the date on the protocol earlier than the date on the search?
2. Does the protocol name the specific clinical questions from the clinical evaluation plan that each strand of the search is intended to answer?
3. Are the search strings recorded verbatim, in the exact form they were pasted into each database, with all Boolean logic and filters visible?
4. Does the protocol name the specific databases and the specific interfaces used to access them, not just the database families?
5. Are the inclusion and exclusion criteria written in a form that applies equally to favourable and unfavourable records, as Annex XIV Part A Section 1(a) requires?
6. Can a competent successor read only your protocol and re-run the search, arriving at a comparable set of included records?
7. Does the protocol specify the update cadence and tie it to the post-market surveillance and post-market clinical follow-up plans?
8. Is there a documented audit trail (exported results, screening decisions, reviewer names, disagreement resolutions, appraisal scores) that sits alongside the protocol in the technical documentation?
9. Is there a stated, documented path from unfavourable findings in the search into the risk management file under EN ISO 14971:2019+A11:2021?
10. When was the last time you changed the protocol because the clinical questions in the evaluation plan changed, rather than letting the protocol drift out of alignment with the plan?

## Frequently Asked Questions

**Is a written literature search protocol legally required under MDR?**
MDR Article 61(3) requires the clinical evaluation to follow a defined and methodologically sound procedure, and Annex XIV Part A Section 1 and Section 1.1 require that procedure to be documented in the clinical evaluation plan, covering identification, appraisal, and analysis of clinical data. In practice, any manufacturer relying on literature as a clinical data source needs a written literature search protocol to meet this obligation credibly, and Notified Body reviewers treat the absence of a written protocol as a direct finding.

**Can I use MDCG 2020-5 as my literature search methodology?**
MDCG 2020-5 (April 2020) is guidance on clinical evaluation equivalence under MDR Article 61 and Annex XIV Part A. It is not a standalone literature search methodology, but it governs how the device strand of a search treats equivalent-device data and constrains what equivalence claims the search can support. Use it as the binding guidance on the equivalence aspects of the search, together with the MDR text itself for the broader methodology.

**Can I use MEDDEV 2.7/1 Rev 4 for my search protocol?**
MEDDEV 2.7/1 Rev 4 (June 2016) remains a useful legacy structural reference for the four-stage clinical evaluation process and for literature search methodology. Where it diverges from the MDR text or from MDCG 2020-5, particularly on equivalence, the MDR text and the MDCG guidance take precedence. Treat MEDDEV 2.7/1 Rev 4 as a source of structure, not as current binding interpretation.

**How many databases does the protocol need to specify?**
The MDR does not specify a number. Current practice expects at least two independent biomedical databases, typically PubMed / MEDLINE and Embase, with additional sources added when the clinical question requires them. A single-database protocol is not considered systematic and will attract a finding.

**How often should the protocol be re-run after the initial search?**
The protocol states the update cadence, which aligns with the clinical evaluation update obligation in Article 61(11). For Class III and implantable devices, this means at least annually. For lower-risk devices, it is proportionate to the risk and the novelty of the device, as specified in the post-market surveillance and post-market clinical follow-up plans.

**What happens if the clinical questions change between protocol versions?**
The protocol is amended with a documented version history, and the amended protocol is dated and signed before the new search is run. The old protocol and its results are retained in the audit trail so the history is visible. Silent changes to a protocol after results are known are the fastest possible route to a finding.

## Related reading

- [What Is Clinical Evaluation Under MDR?](/blog/what-is-clinical-evaluation-under-mdr) – the pillar post for the Clinical Evaluation cluster and the starting point for the whole topic.
- [MDR Article 61 Clinical Evaluation Requirements](/blog/mdr-article-61-clinical-evaluation-requirements) – the article-by-article walkthrough of the legal backbone.
- [MDR Annex XIV Part A: Clinical Evaluation Requirements](/blog/mdr-annex-xiv-part-a-clinical-evaluation) – the annex that governs the clinical evaluation plan and the search methodology.
- [The Clinical Evaluation Plan Under MDR](/blog/clinical-evaluation-plan-mdr) – the plan that contains the search protocol as one of its subsections.
- [How to Conduct a Systematic Literature Review for Clinical Evaluation](/blog/systematic-literature-review-clinical-evaluation) – the full eight-step review that this search protocol sits inside.
- [Appraisal of Clinical Data Under MDR](/blog/appraisal-clinical-data-mdr) – the stage that consumes the protocol's output and scores each retrieved record.
- [Clinical Evaluation Report Structure Under MDR](/blog/clinical-evaluation-report-structure-mdr) – how the search protocol and its audit trail are documented in the CER.
- [Equivalence Under MDR](/blog/equivalence-under-mdr) – how MDCG 2020-5 constrains the device-strand search in the protocol.
- [Literature Search Strategy Templates for MDR](/blog/literature-search-strategy-templates-mdr) – reusable search-string building blocks that can be dropped into a protocol.
- [The Subtract to Ship Framework for MDR Compliance](/blog/subtract-to-ship-framework-mdr) – the methodology pillar behind the Evidence Pass that governs how the protocol is scoped.

## Sources

1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 61 (clinical evaluation), Annex XIV Part A Section 1 (clinical evaluation plan), Annex XIV Part A Section 1.1 (defined and methodologically sound procedure). Official Journal L 117, 5.5.2017.
2. MDCG 2020-5. Clinical Evaluation. Equivalence: A guide for manufacturers and notified bodies, April 2020.
3. MEDDEV 2.7/1 revision 4. Clinical Evaluation: A Guide for Manufacturers and Notified Bodies under Directives 93/42/EEC and 90/385/EEC, June 2016 (legacy guidance, still referenced for the four-stage methodology; MDR text and MDCG 2020-5 take precedence where they diverge).
4. EN ISO 14155:2020 + A11:2024. Clinical investigation of medical devices for human subjects. Good clinical practice (referenced where the protocol covers clinical investigation records).
5. EN ISO 14971:2019 + A11:2021. Medical devices. Application of risk management to medical devices (destination for unfavourable findings identified through the search).

---

*This post is part of the Clinical Evaluation & Clinical Investigations cluster in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. A literature search protocol is the cheapest insurance policy a clinical evaluation can carry, and the hour spent writing it before the first search is the hour that keeps the whole evaluation defensible years later.*

---

*This post is part of the [Clinical Evaluation & Investigations](https://zechmeister-solutions.com/en/blog/category/clinical-evaluation) cluster in the [Subtract to Ship: MDR Blog](https://zechmeister-solutions.com/en/blog). For EU MDR certification consulting, see [zechmeister-solutions.com](https://zechmeister-solutions.com).*
