PMS tools and software for MDR startups fall into four categories — complaint management, literature monitoring, similar-device and registry tracking, and analytics — and none of them are required by Regulation (EU) 2017/745. Article 83 requires a PMS system proportionate to the risk class, not a platform. The efficient stack for a small MedTech company is the smallest combination of tools that makes every Annex III element traceable, every review reproducible, and every finding routable into the risk file. For most Class I and Class IIa startups, that stack is built, not bought, and expansion follows documented pressure rather than vendor pitches.
By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.
TL;DR
- MDR Article 83 and Annex III specify what a PMS system must produce. Neither specifies a tool, a platform, or a vendor. The tooling choice is downstream of the obligation, not upstream.
- PMS tooling groups into four functional categories: complaint management, literature monitoring, similar-device and registry tracking, and analytics. A compliant stack covers all four, at whatever level of automation the volume demands.
- MDCG 2025-10 (December 2025) confirms that traceability and reproducibility of the PMS activities are the compliance signals a notified body looks for, not the sophistication of the toolchain.
- The build-vs-buy decision is a risk-and-volume calculation. Low volume and low complexity favour building on tools the team already owns. High volume and cross-jurisdictional reporting favour buying.
- A lean PMS stack for a three-person startup can run on a shared tracker, a PubMed search string, a safety-notice feed reader, and a monthly calendar review — with total direct tool cost close to zero.
Why tool choice is the wrong first question
The first question most founders ask is "which PMS software should we buy?" That question is downstream of three more important ones — what does Article 83 actually require, what does Annex III actually specify, and what volume of data will your device generate in the first eighteen months on the market. Until those three answers exist in writing, any vendor comparison is an expensive detour.
Regulation (EU) 2017/745, Article 83(1) requires every manufacturer to plan, establish, document, implement, maintain, and update a post-market surveillance system proportionate to the risk class and appropriate for the type of device. The word "proportionate" is doing real work in that sentence. A Class I non-invasive device generating a handful of complaints a year is not the same system as a Class IIb implantable generating hundreds of field reports a quarter. The same sentence justifies a shared tracker for one company and a full enterprise platform for another, and notified bodies routinely accept both when the system behind the tool is honest.
MDCG 2025-10, published in December 2025, is the current operational guidance notified bodies apply when assessing a PMS system. It is explicit that the PMS must interact with clinical evaluation, risk management, and vigilance as a set of linked processes — and that linkage is a process requirement, not a software requirement. A team that understands the linkage can build it on Google Sheets. A team that does not understand it will break the linkage on any platform.
For the framework this tooling sits inside, the prerequisite reading is how to build a PMS system on a startup budget and the PMS pillar post. This post is the tooling layer of that framework.
The four categories of PMS tooling
Every PMS activity Article 83 and Annex III require maps into one of four tool categories. The stack is the combination of tools that covers all four, at a level proportionate to the class and volume.
Complaint management
Complaint management is the reactive backbone of PMS. Every complaint has to enter the system with a timestamp, an owner, a category, and a status, and every complaint has to be traceable through investigation, corrective action, and closure. Annex III, Section 1.1, Element C requires the plan to describe the tools used to investigate complaints and analyse market-related experience, and Element G requires a procedure for corrective actions that actually closes the loop.
The category of tools that do this ranges from a shared tracker with version control at the low end, through lightweight ticket tools and help-desk platforms in the middle, to dedicated eQMS complaint modules at the high end. The compliance signal is not which tier a company picked — it is traceability, owner accountability, and an audit trail that cannot be silently edited. A notified body will ask to see the last thirty days of complaint intake and the investigation records for a sample. If those records are clean and consistent, the tool is fine.
For the startup stack, complaint management is the one place where investing slightly beyond the minimum pays off quickly. Even a free tier of a ticket tool usually beats a raw spreadsheet, because tickets force an owner, a status, and an immutable trail by default. For the intake workflow this tool supports, see complaint handling under MDR for startups.
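To make "immutable trail" concrete, the data shape a compliant complaint record needs is small: an identifier, an owner, a category drawn from the risk file, a status, and an append-only event log. A minimal Python sketch, with invented field names and categories:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class ComplaintEvent:
    """One immutable entry in a complaint's audit trail."""
    timestamp: str
    actor: str
    action: str  # e.g. "intake", "investigation", "corrective_action", "closure"
    note: str

@dataclass
class Complaint:
    complaint_id: str
    owner: str
    category: str  # matches a harm category in the risk file
    status: str = "open"
    trail: List[ComplaintEvent] = field(default_factory=list)

    def log(self, actor: str, action: str, note: str) -> None:
        # Events are appended, never edited or deleted.
        self.trail.append(ComplaintEvent(
            timestamp=datetime.now(timezone.utc).isoformat(),
            actor=actor, action=action, note=note))
        if action == "closure":
            self.status = "closed"

c = Complaint("C-2026-014", owner="qa.lead", category="labelling")
c.log("qa.lead", "intake", "User reports IFU step 4 is ambiguous.")
c.log("qa.lead", "closure", "IFU revision raised; corrective action CA-007 opened.")
```

Whether the trail lives in a ticket tool, an eQMS, or a version-controlled sheet, this is the shape the auditor samples against.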
Literature monitoring
Literature monitoring is the proactive backbone of PMS. Annex III, Element A requires the plan to cover collection of publicly available information, and literature is the main source. A defensible literature monitoring activity has a documented search string, a documented cadence, a documented database (PubMed at minimum, plus any specialty databases relevant to the device), an owner, and an archive of the results with the dates of the searches.
The tool category runs from a manual PubMed search with a saved string and a shared folder, through automated alerting services and RSS-based aggregators, to specialist literature-monitoring platforms built for MedTech PMS. Most startups do not need anything beyond the manual tier — what they need is the discipline to run the search on schedule, log the results, and triage them against the intended purpose and the risk file. The tool does not make that discipline exist. The calendar does.
Where tooling starts to matter is when the volume of literature returned by the search grows past what a single owner can triage in a monthly window. At that point an alerting service that pre-filters the stream, or a lightweight automation that deduplicates and tags the hits, earns its place. Before that point, the tool is a distraction from the activity.
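For concreteness, the whole manual-tier activity fits in a few lines against NCBI's free E-utilities API. Run on the monthly calendar date, it produces the dated, archived result set the plan requires. The search string and folder layout below are illustrative assumptions, not the plan's real values:

```python
import datetime
import json
import pathlib

import requests

# Illustrative search string -- the real one is documented in the PMS plan.
SEARCH = '("device name"[tiab] OR "generic device type"[tiab]) AND (adverse OR safety)'
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

resp = requests.get(EUTILS, params={
    "db": "pubmed",
    "term": SEARCH,
    "retmode": "json",
    "datetype": "edat",  # Entrez date: when PubMed indexed the record
    "reldate": 31,       # roughly one monthly cycle
    "retmax": 200,
}, timeout=30)
resp.raise_for_status()
pmids = resp.json()["esearchresult"]["idlist"]

# Archive the dated result set next to the exact string that produced it.
today = datetime.date.today().isoformat()
folder = pathlib.Path("literature") / today[:7]  # one subfolder per month
folder.mkdir(parents=True, exist_ok=True)
(folder / f"pubmed-{today}.json").write_text(json.dumps(
    {"search_string": SEARCH, "run_date": today, "pmids": pmids}, indent=2))
print(f"{len(pmids)} results archived for triage")
```

The triage note against the intended purpose and the risk file still has to be written by a person; the script only guarantees the search ran and the results were kept.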
Similar-device and registry tracking
Annex III, Element A also requires monitoring of information concerning similar devices on the market. This is where founders most often underbuild their tooling. The sources are concrete — national competent authority safety-notice feeds, the FDA MAUDE database for comparable devices on the US market, manufacturer field safety notices, recall databases, and any voluntary or mandatory device registries relevant to the device category.
The tool category is mostly about feed subscriptions and folder structure, not about software. The lean version is a shared folder per source, a monthly review with a signed record, and a cross-reference to the risk file. For larger companies, specialist platforms consolidate these feeds into one dashboard — but for a startup, the dashboard is usually a spreadsheet with one row per source, one column per month, and a signature column proving the review happened. MDCG 2025-10 is clear that the content of the review matters more than the form of the dashboard.
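A minimal sketch of that spreadsheet-as-dashboard, kept as a CSV with one row per source and one column per month; the source list and reviewer name are illustrative placeholders:

```python
import csv
import datetime
import pathlib

# Illustrative source list and reviewer -- the real ones live in the PMS plan.
SOURCES = [
    "BfArM field safety notices",
    "MHRA device safety information",
    "FDA MAUDE (comparable devices)",
    "Swissmedic recalls",
]
REVIEWER = "qa.lead"

dashboard = pathlib.Path("similar-device-review.csv")
month = datetime.date.today().strftime("%Y-%m")

# Load the existing dashboard: one row per source, one column per month.
rows = {source: {} for source in SOURCES}
if dashboard.exists():
    with dashboard.open() as f:
        for row in csv.DictReader(f):
            rows[row.pop("source")] = row

# The cell records who reviewed the source this month -- the signature column.
for source in SOURCES:
    rows.setdefault(source, {})[month] = f"reviewed by {REVIEWER}"

months = sorted({m for cells in rows.values() for m in cells})
with dashboard.open("w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["source"] + months)
    writer.writeheader()
    for source, cells in rows.items():
        writer.writerow({"source": source, **cells})
```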
Analytics and trend reporting
Element D of Annex III requires protocols for trend reporting under Article 88. Trend reporting is the statistical layer of the system — the explicit rule the company uses to detect when a non-serious incident pattern crosses into something that has to be reported to the competent authority. The rule has to be written down, and the data the rule runs against has to be tracked.
For a small manufacturer with low volume, the analytics layer is a simple count-based rule — "any three complaints of the same category within a rolling thirty-day window trigger a formal trend assessment" — implemented in the same tracker that logs the complaints. For a manufacturer with high volume, the rule becomes a statistical test with control limits, and a tool that can run that test on the complaint stream earns its place. Neither tier is better than the other — they are proportionate to different volumes. For the article behind this element, see trend reporting under MDR Article 88.
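A minimal sketch of the count-based tier, assuming the tracker can export complaints as dated, categorised rows; the threshold, window, and categories are the illustrative values from the example rule above:

```python
from collections import defaultdict
from datetime import date, timedelta

# The written rule from the PMS plan (illustrative values): three complaints
# of the same category within a rolling thirty-day window trigger a formal
# trend assessment.
THRESHOLD = 3
WINDOW = timedelta(days=30)

def trend_triggers(complaints):
    """complaints: iterable of (date, category) rows exported from the tracker."""
    by_category = defaultdict(list)
    for day, category in sorted(complaints):
        by_category[category].append(day)
    triggered = set()
    for category, days in by_category.items():
        for i in range(len(days) - THRESHOLD + 1):
            if days[i + THRESHOLD - 1] - days[i] <= WINDOW:
                triggered.add(category)
                break
    return triggered

log = [
    (date(2026, 3, 2), "connector failure"),
    (date(2026, 3, 9), "connector failure"),
    (date(2026, 3, 15), "labelling"),
    (date(2026, 3, 28), "connector failure"),  # third within thirty days
]
print(trend_triggers(log))  # {'connector failure'}
```

The value of writing the rule as code, even for a spreadsheet-based stack, is that the trigger condition stops being a matter of judgment in the monthly review and becomes a repeatable check.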
What MDR requires regardless of the tool
Across all four categories, the compliance requirements are tool-independent. Any PMS stack — lean or luxurious — has to produce the same outputs and the same traceability. The checklist is short and strict.
- Every activity in the PMS plan has a named owner and a cadence, and the executed activity produces a dated, signed record.
- Every complaint has an immutable audit trail from intake to closure.
- Every literature search has an archived result set and a documented search string.
- Every similar-device review has a signed monthly or quarterly record.
- Every trend assessment runs against a written statistical rule.
- Every finding that changes the risk profile flows into the risk management file under EN ISO 14971:2019 + A11:2021 and back out into design, labelling, or IFU changes.
- Every required output — the PMS Report under Article 85 for Class I, or the PSUR under Article 86 for Class IIa and above — is produced on the required cycle.
The tools that support this checklist can be free or expensive. The checklist itself is the compliance line, and it is the same line for every stack.
Test — the build-vs-buy decision
The build-vs-buy decision is a two-dimensional calculation, not a brand preference. The axes are volume and complexity, and the decision changes as either axis grows.
Volume is the number of complaints, literature hits, and similar-device signals per month. Complexity is the number of device variants, markets, reporting jurisdictions, and regulatory frameworks the same PMS system has to serve. A single-device Class I startup with one CE market and one complaint a month has low volume and low complexity — and for that team, a built stack using tools already in the company runs faster, cheaper, and with fewer failure modes than any bought platform.
A multi-device company selling in the EU, UK, US, and Switzerland has high complexity regardless of volume, and a bought platform that handles multi-jurisdictional reporting can earn its place even at moderate volume because the cost of getting multi-jurisdictional reporting wrong is high. A single-device company with a Class IIb implantable generating hundreds of complaints a month has high volume regardless of complexity, and a bought complaint management tool with proper audit trails and statistical analytics can earn its place.
The test is not "what do other companies buy." The test is a documented calculation: here is our monthly volume today, here is our projected volume twelve months out, here is our complexity across markets and devices, here is the Annex III element set we have to cover, and here is the stack that covers it at the lowest total cost of ownership. Cost of ownership is not just licence fees — it is setup time, training, maintenance, audit preparation time, and the risk cost of the tool silently failing. A free tool that the team runs diligently usually has lower total cost of ownership than a paid tool that the team half-uses.
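That calculation is worth writing down as arithmetic, not prose. A sketch of the comparison, where every figure is an assumed placeholder for the company's own numbers:

```python
# Illustrative total-cost-of-ownership comparison. Every number below is an
# assumption to be replaced with the company's own figures.
HOURLY_RATE = 80  # loaded cost of the person running the system, EUR/hour

def annual_tco(licence_eur, setup_hours, monthly_run_hours,
               audit_prep_hours, failure_risk_eur):
    """Annual cost: fees, plus time, plus the expected cost of silent failure."""
    hours = setup_hours + 12 * monthly_run_hours + audit_prep_hours
    return licence_eur + hours * HOURLY_RATE + failure_risk_eur

built = annual_tco(licence_eur=0, setup_hours=16, monthly_run_hours=4,
                   audit_prep_hours=8, failure_risk_eur=2_000)
bought = annual_tco(licence_eur=6_000, setup_hours=40, monthly_run_hours=2,
                    audit_prep_hours=4, failure_risk_eur=500)
print(f"built: {built} EUR/year, bought: {bought} EUR/year")
```

With these placeholder numbers the built stack wins; grow the monthly volume enough and the run hours flip the answer. The point is that the comparison is arithmetic the file can show an auditor, not a preference.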
For the underlying decision framework, see the Subtract to Ship framework for MDR compliance. The Subtract to Ship test applies directly: any tool that does not trace to a specific Annex III element or Article obligation is not a tool the budget owes.
Ship — the lean PMS stack playbook
The lean stack for a Class I or Class IIa startup covers every Annex III element with minimum friction. One sensible build looks like this.
Complaint management runs on a free or low-tier ticket tool with an immutable audit trail, integrated into a shared email intake address that every person in the company knows. Every ticket has an owner, a status, and a category, and the category list matches the harm categories in the risk file so that complaint-to-risk-file mapping is one lookup, not a translation exercise.
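A minimal sketch of that one-lookup mapping, with invented categories and risk-file identifiers:

```python
# Invented categories and risk-file identifiers, for illustration only.
COMPLAINT_TO_HARM = {
    "connector failure": "RF-03 loss of therapy delivery",
    "labelling":         "RF-11 use error from unclear IFU",
    "software crash":    "RF-07 delayed measurement",
}

def risk_file_entry(ticket_category: str) -> str:
    # Intake categories use the same vocabulary as the risk file,
    # so mapping a complaint to its harm entry is one lookup.
    return COMPLAINT_TO_HARM[ticket_category]
```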
Literature monitoring runs on a saved PubMed search string, a monthly calendar invite, and a shared folder with one subfolder per month. The search string is documented in the PMS plan, and changes to the string are logged. An optional free PubMed alert email adds push notification without adding a platform.
Similar-device and registry tracking runs on a list of feed sources in the PMS plan, a monthly calendar invite covering all of them, and a signed review record in the same folder structure as literature. National competent authority safety-notice pages and FDA MAUDE searches are bookmarked. For the EU side, see using EUDAMED data for PMS.
Analytics and trend reporting runs on a simple count-based rule in the same tracker as complaints, with the rule written into the PMS plan as Element D. Any trigger produces an escalation to a formal trend assessment, which in turn produces a signed record and, if warranted, a trend report to the competent authority under Article 88.
The monthly review is a thirty-minute calendar invite with the quality lead, a product owner, and a signed minute. The minute covers complaint trends, literature results, similar-device findings, trend-rule status, and any corrective actions. The review is the heartbeat of the stack. Without it, none of the tools matter.
Governance lives in the QMS under EN ISO 13485:2016 + A11:2021. The PMS procedure references the tools by category and by function, not by vendor name, so that a tool swap does not require a QMS revision. For cloud-hosted tools, see cloud-based QMS tools for MDR compliance, and for PMS automation specifically see using AI in PMS automation.
Reality Check — where do you stand?
- Can you list every PMS tool in your stack and map each one to a specific Annex III element or Article obligation?
- Does your complaint management tool produce an immutable audit trail from intake to closure, or can entries be silently edited?
- Is your literature search string documented in the PMS plan, and are the monthly results archived with dates and an owner signature?
- Do you monitor at least one national competent authority safety-notice feed and one equivalent-device recall source on a defined cadence?
- Is your trend-reporting rule written down as a specific statistical or count-based protocol, not a general "we will review trends" statement?
- Does every tool in your stack contribute to a signed monthly review record, or are some tools running without ever producing an artefact the notified body will see?
- Have you done a documented build-vs-buy calculation for each category in the last twelve months, or is the current stack inherited from whoever set it up first?
Frequently Asked Questions
Does MDR require dedicated PMS software?
No. Neither Regulation (EU) 2017/745 nor MDCG 2025-10 requires a specific software product. Article 83 requires a system proportionate to the risk class, and Annex III fixes the content of the plan. The obligation is to operate the system traceably, not to buy a platform. A stack built on tools the company already owns can fully satisfy Article 83 for most Class I and Class IIa devices.
What is the minimum PMS tool stack for a Class I startup?
A complaint tracker with an immutable audit trail, a saved PubMed search string with a monthly review cadence, a list of safety-notice feeds with a signed monthly review, a simple count-based trend rule, and a monthly review meeting with a signed minute. The total direct tool cost can be close to zero. The real cost is the time the team spends running the monthly cycle honestly.
When should a startup upgrade from a built PMS stack to a bought platform?
When documented pressure from volume, complexity, or multi-jurisdictional reporting exceeds what the current stack can cleanly handle, and when the upgrade traces to a specific obligation the current stack is struggling to meet. Upgrading earlier is a cost the budget does not need to carry. Upgrading later than the pressure demands creates audit risk.
Can we use a general-purpose ticket tool for MDR complaint management?
Yes, provided it gives an immutable audit trail, an owner per ticket, a status field, and a category scheme that maps to the risk file. A free or low-tier ticket tool often beats a dedicated eQMS module for a small team because it is simpler to learn and harder to misuse. The compliance test is traceability, not product category.
How do we handle literature monitoring without a paid database subscription?
PubMed is free and is the primary biomedical database for MedTech literature work. A documented PubMed search string, a monthly calendar cadence, an archived result set, and a triage note against the intended purpose and the risk file is a fully compliant literature-monitoring activity. Paid databases add value when the device category requires specialty coverage PubMed does not provide.
Does MDCG 2025-10 change the tooling requirements?
No. MDCG 2025-10 confirms that the PMS system must interact with clinical evaluation, risk management, and vigilance as linked processes, and that the activities must be traceable and reproducible. It does not prescribe tools. It does, however, raise the bar on evidence of execution — a notified body applying MDCG 2025-10 will want to see that the stack actually produces signed records on the documented cadence, whatever the tool stack looks like.
Is a spreadsheet acceptable for PMS complaint tracking at audit?
Yes, for a small manufacturer with low volume, provided the spreadsheet has version control, access restrictions that prevent silent edits, and a traceable history of changes. Many notified bodies have accepted spreadsheet-based complaint logs for Class I and Class IIa startups. The moment the spreadsheet cannot cleanly support volume or trend analysis, it is time to upgrade — but not before.
Related reading
- How to build a PMS system on a startup budget — the seven-step build this tooling layer sits inside.
- Cloud-based QMS tools for MDR compliance — the governance layer the PMS stack plugs into.
- Using AI in PMS automation — how automation earns its place in literature and complaint triage.
- Complaint handling under MDR for startups — the intake workflow the complaint tool supports.
- Trend reporting under MDR Article 88 — the article behind the analytics layer.
- The Subtract to Ship framework for MDR compliance — the build-vs-buy test.
Sources
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 83 (PMS system of the manufacturer), Article 84 (PMS plan), Article 85 (PMS Report), Article 86 (PSUR), Article 88 (trend reporting), and Annex III (technical documentation on post-market surveillance). Official Journal L 117, 5.5.2017.
- MDCG 2025-10 — Guidance on post-market surveillance of medical devices and in vitro diagnostic medical devices. Medical Device Coordination Group, December 2025.
- EN ISO 13485:2016 + A11:2021 — Medical devices — Quality management systems — Requirements for regulatory purposes.
- EN ISO 14971:2019 + A11:2021 — Medical devices — Application of risk management to medical devices.
This post is part of the Post-Market Surveillance & Vigilance series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. The right PMS stack is the smallest one that makes every Annex III element traceable and every monthly review reproducible. Tool choice is downstream of the obligation, not upstream of it.