A PMS system on a startup budget is built by starting from the Article 83 obligations, sizing every activity to the risk class under the proportionality rule, and wiring the system into the tools the team already uses rather than buying a dedicated platform. Regulation (EU) 2017/745 requires every manufacturer to plan, establish, document, implement, maintain, and update a post-market surveillance system proportionate to the risk class and appropriate for the type of device. It does not require expensive software, a dedicated PMS manager, or a twenty-page plan. It requires a system that actually runs and produces the outputs Annex III and Article 84 specify. This post walks through the seven-step build that a three-person startup can execute in a week and then run on a sustainable monthly cadence.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • MDR Article 83 requires a PMS system proportionate to the risk class of the device. Proportionality is the budget lever — a Class I system is not a Class III system, and the Regulation says so directly.
  • The lean build has seven steps: define proportionality to class, set up data sources, write the lean PMS plan, set the cadence, link to the risk file, run the first cycle, and expand only when needed.
  • Annex III, Section 1.1 of Regulation (EU) 2017/745 fixes the mandatory content of the PMS plan. The budget version addresses every element without adding volume.
  • MDCG 2025-10 (December 2025) is the current operational guidance that notified bodies apply when reviewing a PMS system, including at small manufacturers.
  • The feedback loop into the risk management file under EN ISO 14971:2019 + A11:2021 is not optional. It is the reason the system exists.
  • The system is built on tools the team already uses — a shared drive, a simple tracker, a recurring calendar review — not on a dedicated PMS platform bought on day one.

Why a lean PMS is a real PMS, not a cheap one

The phrase "PMS on a startup budget" sounds like a compromise. It is not. Article 83(1) of Regulation (EU) 2017/745 requires the PMS system to be proportionate to the risk class and appropriate for the type of device. Proportionality is written into the Regulation itself. A Class I reusable instrument is not required to run the PMS architecture of a Class III implantable, and a small manufacturer is not required to run the PMS architecture of a multinational. The Regulation sets the floor for each class. The floor for a Class I or Class IIa startup is substantially lower than the ceiling, and building to the floor deliberately is not a shortcut — it is the correct reading of Article 83.

The sleep-monitoring arm strap from the PMS pillar post is the reference story for this post too. The manufacturer was small. The PMS system that caught the skin-irritation pattern was not elaborate. It was a logged complaint intake, a monthly review with a named owner, a trend check against the risk file, and a corrective action path that closed the loop. That system cost almost nothing to run, and it did the one thing a PMS system exists to do — it caught a real-world signal that lab testing had missed, and it caught it early enough to fix the material specification before more patients were affected. A lean PMS that runs beats a luxurious PMS that does not.

This post shows the seven-step build. It assumes you have read what is post-market surveillance under MDR, MDR Articles 83 to 86 — the PMS framework explained, and the PMS plan under MDR Annex III. Those three set the framework. This post builds the system.

Step 1 — Define proportionality to your class

Before anything else, write down what proportionality means for your specific device. Article 83(1) says the PMS system must be proportionate to the risk class and appropriate for the type of device. That sentence is the budget lever, and using it correctly starts with a written answer to five questions.

  • What is the device class under Annex VIII — the specific rule, not just "Class I" or "Class IIa"?
  • What is the expected annual volume of devices placed on the market — ten, a hundred, ten thousand?
  • What is the clinical risk profile — non-invasive, surface contact, short-term invasive, implantable, and so on?
  • What is the expected incident rate, based on pre-market evidence and similar devices?
  • What cadence of PMS activity is defensible against that combination?

These answers determine the entire build. A Class I non-invasive device with fifty units a year and no pre-market incident signals runs a quarterly review cycle with a small complaint stream. A Class IIa measuring device with several thousand units a year runs a monthly cycle with trend thresholds. A Class IIb or Class III device runs continuous monitoring and shorter analysis windows. The Regulation does not name specific numbers, but it does require the answer to be written down and defensible. MDCG 2025-10 (December 2025) is the current operational guidance that notified bodies use to assess whether a given proportionality call is reasonable.

This step is the cheapest one in the build and the one most often skipped. A written proportionality statement — one page — is the anchor every later decision hangs from. Without it, every "how often" and "how deep" question becomes a negotiation with yourself.
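The one-page statement can live as a structured record that the rest of the build references. A minimal sketch in Python, where every field name and example value is illustrative, not prescribed by Article 83:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProportionalityStatement:
    """One-page proportionality statement per Step 1 (illustrative fields)."""
    device_class: str               # e.g. "Class IIa"
    annex_viii_rule: str            # the specific classification rule relied on
    annual_volume: int              # expected units placed on the market per year
    clinical_risk_profile: str      # e.g. "non-invasive, prolonged skin contact"
    expected_incident_rate: float   # per-unit estimate from pre-market evidence
    review_cadence: str             # the cadence this combination defends

# Hypothetical example for a sleep-monitoring arm strap.
statement = ProportionalityStatement(
    device_class="Class IIa",
    annex_viii_rule="Annex VIII, Rule 10 (hypothetical)",
    annual_volume=3000,
    clinical_risk_profile="non-invasive, prolonged skin contact",
    expected_incident_rate=0.002,
    review_cadence="monthly",
)
```

Every later decision in the build, from cadence to trend thresholds, reads its inputs from this one record, which is what makes the proportionality call auditable.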

Step 2 — Set up data sources using tools you already own

Element A of Annex III, Section 1.1 requires the plan to describe the processes for collecting and using the available information — complaints, user reports, similar-device data, publicly available information, and serious-incident data. Most startups assume this means buying a platform. It does not. A lean build uses the tools the team already owns.

Reactive sources first. Complaint intake needs one channel everyone in the company knows about — a shared email inbox, a form on the website, or a support-desk entry point. Every complaint is logged in a single tracker with a timestamp, an owner, a category, and a status. A shared spreadsheet with version control works. A lightweight ticket tool works. A Notion database works. The compliance signal is not which tool you picked — it is that every complaint has a timestamp, an owner, and a status, and that the team can produce the full intake history on demand.
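The intake log needs nothing more than an append-only file. A minimal sketch in Python, where the file name and field set are illustrative, not mandated anywhere in the Regulation:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("complaint_log.csv")  # hypothetical file name on the shared drive
FIELDS = ["timestamp", "owner", "category", "status", "description"]

def log_complaint(owner: str, category: str, description: str) -> dict:
    """Append one complaint to the tracker. Entries are only ever appended,
    never edited in place, so the full intake history stays reproducible."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "owner": owner,
        "category": category,
        "status": "open",
        "description": description,
    }
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(entry)
    return entry

entry = log_complaint("quality-lead", "skin irritation", "redness after overnight wear")
```

The design choice that matters is the append-only discipline: the traceability test is whether the history can be produced on demand, and a file nobody can silently rewrite passes it.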

Proactive sources next. Literature monitoring runs against a defined PubMed search string on a defined cadence, with the string and the results archived. Similar-device monitoring runs against the national competent authority safety notice feeds and any relevant recall databases. Both activities live in a calendar invite with a named owner. Neither requires software beyond a browser and a shared folder.

The budget version does not add a dashboard. It does not add a dedicated PMS platform. It does add the minimum tooling required for traceability: a tracker for complaints, a folder for literature results, a folder for similar-device reviews, and a place where every review is logged with a date and a signature. That is the data layer of the system. For the complaint intake specifics, see complaint handling under MDR for startups.

Step 3 — Write the lean PMS plan

Article 84 of Regulation (EU) 2017/745 requires the PMS plan, and Annex III, Section 1.1 fixes its mandatory content. The lean build writes the plan as a table, not as prose. One row per Annex III element. Each row names the data source or method, the owner, the cadence, and the document reference that proves the activity runs.

The elements to address in the table:

  • Element A — data collection sources.
  • Element B — assessment methods.
  • Element C — tools to investigate complaints and analyse field experience.
  • Element D — trend-reporting protocols under Article 88.
  • Element E — communication channels.
  • Element F — reference to the procedures that fulfil Articles 83, 84, and 86.
  • Element G — systematic procedures for corrective actions.
  • Element H — traceability tools.
  • Element I — the PMCF plan, or a written justification for why PMCF is not applicable.

The lean version of this plan is three to five pages. Not twenty. The volume is not the compliance signal — the coverage of every element is. A three-page table that addresses all nine elements beats a twenty-page prose plan that skips Element D or Element I. For the full walkthrough of the Annex III elements, see the PMS plan under MDR Annex III.
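Coverage of all nine elements can even be checked mechanically. A minimal sketch in Python, where the row contents are placeholders and only the element letters come from Annex III, Section 1.1:

```python
# One row per Annex III, Section 1.1 element; the contents here are placeholders.
plan_rows = [
    {"element": "A", "method": "complaint tracker + literature search",
     "owner": "quality lead", "cadence": "continuous", "doc_ref": "SOP-PMS-01"},
    {"element": "B", "method": "monthly review meeting",
     "owner": "quality lead", "cadence": "monthly", "doc_ref": "TPL-REV-01"},
    # ... rows C to I follow the same shape ...
]

REQUIRED_ELEMENTS = set("ABCDEFGHI")

def missing_elements(rows: list[dict]) -> list[str]:
    """Return the Annex III elements the plan table does not yet address."""
    return sorted(REQUIRED_ELEMENTS - {row["element"] for row in rows})
```

With only the two placeholder rows above, `missing_elements(plan_rows)` returns the seven elements still to write, which is exactly the check that catches a skipped Element D or Element I before an auditor does.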

The two elements most often missed in first-draft plans are Element D (the explicit statistical rule for trend reporting) and Element I (the PMCF plan or written justification). Write them in the first draft — skipping them and promising to add them later is how startups end up with a notified-body finding at the first audit.
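For Element D, Article 88 requires a protocol for detecting statistically significant increases, but it does not prescribe the statistics. One simple, defensible shape is a Poisson-style threshold on the baseline complaint rate. A sketch, where the two-sigma threshold and the example numbers are assumptions the plan itself would have to fix:

```python
import math

def trend_breach(complaints: int, units_in_field: int,
                 baseline_rate: float, sigmas: float = 2.0) -> bool:
    """Flag a potential Article 88 trend: observed complaints exceed the
    baseline expectation by more than `sigmas` standard deviations under
    a Poisson approximation. The rule and thresholds are plan choices,
    not MDR requirements."""
    expected = baseline_rate * units_in_field
    return complaints > expected + sigmas * math.sqrt(expected)

# 12 complaints against 3000 units with a 0.2% baseline (expected: 6).
print(trend_breach(12, 3000, 0.002))   # → True, the threshold is ~10.9
print(trend_breach(8, 3000, 0.002))    # → False, within expected variation
```

Whatever rule the plan adopts, the compliance signal is that the rule is written down with explicit numbers, so that "is this a trend" is a calculation, not a debate.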

Step 4 — Set the cadence

A PMS system without a cadence is not a system. It is a folder. Step 4 assigns specific frequencies to every activity in the plan, consistent with the proportionality statement from Step 1.

For a small Class I or Class IIa manufacturer, a realistic budget cadence looks like this. Complaint intake is continuous — every complaint is logged when it arrives. Complaint review and classification runs weekly or biweekly, owned by the quality lead. A full PMS review cycle runs monthly, with a thirty-minute meeting and a signed record, covering complaint trends, literature results, similar-device findings, and any open corrective actions. A quarterly deeper review checks alignment between the product as sold and the authorised intended purpose, and reviews the feedback into the risk file. An annual review produces the PMS Report under Article 85 for Class I devices or aligns with the PSUR cycle under Article 86 for higher-class devices.

This cadence is sustainable because it fits into calendar time the team actually has. Monthly reviews become a recurring calendar invite. Quarterly reviews become the QMS management review cadence. Annual outputs land on the same cycle as the clinical evaluation refresh. The cadence is the bridge between the plan on paper and the system running in reality.
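The cadence is small enough to encode next to the plan. A sketch in Python, where the activities, intervals, and owners are the illustrative Class I / Class IIa values from this section, not MDR minimums:

```python
from datetime import date, timedelta

# Activity -> (interval, owner); values mirror the illustrative cadence above.
CADENCE = {
    "complaint triage": (timedelta(weeks=1), "quality lead"),
    "full PMS review": (timedelta(days=31), "quality lead"),
    "deep review (intended purpose, risk file)": (timedelta(days=92), "founder"),
    "annual report (Article 85 / PSUR cycle)": (timedelta(days=366), "quality lead"),
}

def overdue(last_done: dict[str, date], today: date) -> list[str]:
    """List every activity whose interval has elapsed since it last ran;
    an activity that has never run is always overdue."""
    return [name for name, (interval, _owner) in CADENCE.items()
            if today - last_done.get(name, date.min) > interval]
```

Running `overdue({"full PMS review": date(2026, 1, 5)}, date(2026, 3, 1))` flags the monthly review as overdue, along with every activity that has no logged run at all, which is the same check a recurring calendar invite performs, just in a form the review record can cite.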

Step 5 — Link to the risk file

The feedback loop into the risk management file is the element that separates a real PMS from a decorative one. EN ISO 14971:2019 + A11:2021 establishes risk management as a lifecycle activity — the risk file is a living document, not a pre-market artefact. The PMS plan has to describe how findings flow from the data sources into the risk file and back out into design, labelling, or IFU changes.

The lean version of this link is a single paragraph in the plan and a single row in the monthly review template. The paragraph says: findings from complaint trending, literature monitoring, and similar-device review are assessed against the hazards identified in the risk file. If a finding represents a hazard not previously identified, or an occurrence rate higher than the pre-market estimate, or a severity higher than estimated, the risk file is updated under the change-control process, new control measures are evaluated, and the updated file flows back into the design, IFU, or labelling. The monthly review template has a row that asks, for every finding, "does this trigger a risk-file update, yes or no, and if yes, which hazard." That row is the hinge the whole system swings on.
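The yes/no row in the review template reduces to exactly three conditions. A sketch, where the field names and severity scale are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One PMS finding assessed against the risk file (fields illustrative)."""
    hazard_in_risk_file: bool     # was this hazard previously identified?
    observed_occurrence: float    # rate seen in the field
    estimated_occurrence: float   # pre-market estimate in the risk file
    observed_severity: int        # severity on the plan's own ranking scale
    estimated_severity: int       # severity currently in the risk file

def risk_file_update_triggered(f: Finding) -> bool:
    """The three triggers: a hazard not previously identified, occurrence
    above the pre-market estimate, or severity above the estimate."""
    return (not f.hazard_in_risk_file
            or f.observed_occurrence > f.estimated_occurrence
            or f.observed_severity > f.estimated_severity)

# The arm-strap pattern: a known hazard, but field occurrence above the estimate.
strap = Finding(hazard_in_risk_file=True,
                observed_occurrence=0.004, estimated_occurrence=0.001,
                observed_severity=2, estimated_severity=2)
print(risk_file_update_triggered(strap))  # → True
```

A "yes" answer routes the finding into the change-control process; the function is only the decision, and the signed review record remains the evidence that the decision was made.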

The sleep arm strap story is this hinge working correctly. The complaint signal arrived, the monthly review identified the pattern, the risk team updated the hazard estimate for prolonged skin contact under perspiration, the material specification was updated, the labelling was refreshed, and the loop closed. No dashboard was required. A shared folder, a signed review record, and a risk-file update note were enough. For the risk-file mechanics, see risk management file updates driven by PMS.

Step 6 — Run the first cycle

The first monthly cycle is where the plan becomes a system. The first cycle does three things that define whether the rest of the year works.

It surfaces the gaps in the data sources. You discover that the complaint inbox nobody checked in two weeks is the wrong inbox. You discover that the literature search string returns three hundred results and the owner has no time to triage. You discover that the similar-device feed you planned to monitor requires a subscription. Every gap found in the first cycle is a gap fixed before the second cycle, not after the first audit.

It produces the first signed review record. This is the document the notified body will ask for when they want evidence the system runs. A signed, dated record of a review that happened, with attendees, findings, and actions, is worth more than any platform receipt. Keep the record in the folder, link it from the plan, and version it.

It tests whether the cadence is realistic. If the monthly review took two hours and felt tight, keep it monthly. If it took thirty minutes and felt thin, the cadence might be too frequent for the volume — and the proportionality statement from Step 1 may need a revision. MDCG 2025-10 supports calibrating cadence against actual data volume and risk profile, as long as the calibration is documented.

The first cycle is also the moment to confirm that every row of the plan has an owner who actually executed it. Unowned rows are rows that will not run. Fix them now.

Step 7 — Expand only when needed

The budget build deliberately starts small. It uses the tools the team already owns, it sets the cadence at the lightest defensible rhythm for the class, and it produces the outputs Annex III and Article 84 require. The temptation after the first few cycles is to add — a platform, a dashboard, a formal trend-analysis tool, more proactive sources.

The test from the Subtract to Ship framework for MDR applies directly. For every proposed addition, ask: which specific Annex III element, Article obligation, or MDCG 2025-10 guidance section does this addition serve, and which current activity would it replace or reinforce? If the addition traces to a specific obligation and the current activity cannot meet it, expand. If the addition is "it would be nice to have a dashboard," wait.

The triggers for real expansion are concrete. Complaint volume grows past what a spreadsheet can cleanly handle — move to a ticket tool. Trend analysis needs a statistical rule the spreadsheet cannot compute cleanly — add a lightweight stats tool. Literature volume grows past what a single owner can triage — distribute or automate the search. A new device class in the portfolio changes the proportionality statement — rewrite the plan. Every expansion is a response to a specific pressure, not a general instinct that "we should probably have more."

This step is the discipline that keeps the PMS system affordable over the three to five years before the company either scales or does not. For the PMCF side of the build, see PMCF under MDR — a guide for startups, and for the vigilance interface see what is vigilance under MDR.

The Subtract to Ship angle

The Subtract to Ship framework applied to PMS produces a single rule: build the smallest system that satisfies every Article 83 obligation for your device class, make every component run, and expand only against documented pressure. The budget is the friend of the framework. Resource constraints force the discipline that unconstrained teams often lose — every activity has to justify itself, every tool has to earn its place, every review has to produce a real record, and every finding has to close a real loop. A startup that builds a PMS on a budget and runs it honestly usually has a better system, at audit, than a well-funded company that bought a platform and assumed the platform was the system.

The lean PMS build is not a compromise against quality. It is the correct reading of Article 83's proportionality clause, the correct reading of Annex III's content-over-format principle, and the correct reading of MDCG 2025-10's guidance that the system must work in practice. Everything else is subtraction bait.

Reality Check — where do you stand?

  1. Do you have a written proportionality statement for your device — class, volume, risk profile, cadence — or is proportionality an assumption nobody has documented?
  2. Is your complaint intake channel known to every person in the company, and can you produce the last thirty days of intake records on demand?
  3. Does your PMS plan fit on three to five pages as a table, or is it a twenty-page prose document most of the team has never opened?
  4. Is there a specific calendar invite for the monthly PMS review, with a named owner and a signed record template?
  5. Does the plan describe, in one paragraph or one row, how PMS findings flow into the risk file and back out into design, IFU, or labelling changes?
  6. Have you run at least one full cycle against the plan, and does a signed review record exist for it?
  7. For every activity in the plan, is there a named owner who actually executed it in the last cycle?
  8. Has the plan been updated since MDCG 2025-10 was published in December 2025?
  9. For every tool or addition you are considering, can you trace it to a specific Annex III element or Article obligation, or is it "nice to have"?

Frequently Asked Questions

How much does it cost to build a PMS system for a small MedTech startup?

The direct tool cost for a Class I or Class IIa lean PMS build can be close to zero if the team already uses a shared drive, a tracker, and a calendar. The real cost is staff time — typically one to two days to build the plan, half a day for the first cycle, and a few hours per monthly cycle after that. A platform is not required by MDR. The obligation is to run the system, not to buy software.

Can a three-person startup run a compliant PMS system?

Yes, for Class I and most Class IIa devices, provided the plan is proportionate to the class under Article 83(1), every Annex III element is addressed, the cadence is sustainable, and the feedback loops into the risk file and clinical evaluation are documented. Higher-class devices typically require more capacity, especially for PMCF, but even a small team can run a compliant system with external support where depth is needed.

Do we need dedicated PMS software to satisfy MDR?

No. Neither Regulation (EU) 2017/745 nor MDCG 2025-10 requires a specific tool. The obligation is to operate a PMS system that actually collects, analyses, and acts on data, with traceable records. A shared tracker, a folder structure, and a calendar-driven review rhythm can satisfy Article 83 and Annex III for a small manufacturer. Software becomes useful when volume outgrows manual tools, not before.

How often should a startup run PMS reviews?

The Regulation does not mandate a specific frequency — it requires proportionality. For a small Class I or Class IIa manufacturer, continuous complaint intake, weekly or biweekly triage, monthly full reviews, quarterly deeper reviews including intended-purpose alignment, and annual reporting is a defensible cadence. Higher-class devices run shorter cycles. The cadence must be documented in the plan and actually executed.

What is the minimum viable PMS plan for a Class I device?

A three-to-five-page table addressing every element of Annex III, Section 1.1 — data sources, assessment methods, investigation tools, trend-reporting protocol under Article 88, communication channels, reference to Article 83, 84, and 86 procedures, corrective-action procedures, traceability tools, and a PMCF plan or written justification. Every row has a named owner, a cadence, and a document reference. That is the minimum viable plan.

Can we use the same tools for PMS that we use for general operations?

Yes, provided they give you traceability, version control, and restricted access where needed. A shared tracker used for PMS complaints is fine if every entry has a timestamp, an owner, and a status, and if nobody can delete entries without a trace. The compliance test is traceability, not tool choice.

When should we upgrade from a lean PMS to a platform?

When complaint volume, literature volume, or trend-analysis complexity exceeds what manual tools can cleanly support, and when the pressure is documented in a signed review record. Upgrading earlier is a cost the budget does not need to carry. Upgrading later than the pressure demands creates audit risk.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 83 (post-market surveillance system of the manufacturer), Article 84 (post-market surveillance plan), and Annex III (technical documentation on post-market surveillance, Section 1.1). Official Journal L 117, 5.5.2017.
  2. MDCG 2025-10 — Guidance on post-market surveillance of medical devices and in vitro diagnostic medical devices. Medical Device Coordination Group, December 2025.
  3. EN ISO 14971:2019 + A11:2021 — Medical devices — Application of risk management to medical devices.

This post is part of the Post-Market Surveillance & Vigilance series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. A PMS system built on a startup budget and run honestly, cycle after cycle, is not a compromise — it is the correct reading of Article 83's proportionality clause and the kind of system that actually catches real-world signals when they arrive.