---
title: Customer Complaint Handling Under MDR: A Process Guide for Startups
description: Complaint handling is where most PMS systems break down. Here is the process under MDR and ISO 13485 that actually catches safety signals early.
authors: Tibor Zechmeister, Felix Lenhard
category: Post-Market Surveillance & Vigilance
primary_keyword: customer complaint handling MDR
canonical_url: https://zechmeister-solutions.com/en/blog/customer-complaint-handling-mdr
source: zechmeister-solutions.com
license: All rights reserved. Content may be cited with attribution and a link to the canonical URL.
---

# Customer Complaint Handling Under MDR: A Process Guide for Startups

*By Tibor Zechmeister (EU MDR Expert, Notified Body Lead Auditor) and Felix Lenhard.*

> **Customer complaint handling under MDR is the documented process by which a manufacturer receives, logs, assesses, investigates, and acts on every piece of feedback that suggests a device may not be performing as intended. It sits inside the post-market surveillance system required by Articles 83 to 86 of Regulation (EU) 2017/745, and it is the operational entry point for the vigilance obligations under Articles 87 to 92. The quality system requirements are set out in EN ISO 13485:2016+A11:2021, specifically clause 8.2.2 on customer feedback and clause 8.5.2 on corrective action. A complaint process that runs properly catches safety signals early; one that does not is where most PMS systems quietly break down.**

**Last updated 10 April 2026.**

---

## TL;DR

- Customer complaint handling is required by EN ISO 13485:2016+A11:2021 clause 8.2.2 and feeds directly into the PMS system required by MDR Articles 83 to 86 and Annex III.
- Every complaint must be logged, assessed for severity, triaged for potential serious incident reporting under Article 87, and either closed out or escalated into a full investigation and corrective action.
- Serious incident reporting timelines under Article 87 range from immediate for serious public health threats to 15 days for other serious incidents. MDCG 2023-3 Rev.2 is the current operational guidance.
- Trend analysis against defined metrics is required by Article 88 and is the mechanism that catches clusters of non-serious events before they become serious ones.
- Complaint findings must feed back into the clinical evaluation report and the risk management file. A complaint system that does not close these loops is broken.

---

## Why complaint handling is where PMS systems break down

A company I worked with had a sleep-monitoring device with an upper-arm sensor strap. Biocompatibility was tested per EN ISO 10993-1 before launch. The notified body accepted the file. The product shipped. Then, weeks in, skin irritation complaints started arriving through the support inbox — two the first week, five the next, then more. Most were mild. A few were not.

The pattern was caught and the device fixed before the irritations escalated into a field safety corrective action not because the complaint system was elaborate, but because the complaint system existed, every message was logged, the intake owner flagged the textile-polymer interface as a recurring theme, and the trend triggered an investigation that updated the material specification. The whole thing worked because someone had built the boring parts first.

The boring parts are where most startup complaint systems fail. Complaints arrive through Intercom, Zendesk, email, phone calls, the app store, sales calls, and the CEO's personal inbox — and they die in every one of those channels because no single process captures them, assesses them against a severity scale, and pulls the ones that matter into the regulatory workflow. The founder reads a bad review, forwards it to engineering, engineering files it in Jira, and the regulatory file never hears about it. Six months later the auditor asks for the complaint log and it does not exist.

This post walks through the process that actually works, step by step, so you can build the lean version before your notified body asks for it.

## Step 1 — Receive and log every complaint through one channel

Clause 8.2.2 of EN ISO 13485:2016+A11:2021 requires the manufacturer to document procedures for a feedback process that provides for collection of data from post-production activities. MDR Annex III paragraph 1.1(a) reinforces this by requiring the PMS plan to describe processes for collecting and using information, in particular complaints and reports from healthcare professionals, patients, and users on their experience with the device.

The practical rule: every touchpoint where a user can reach your company must funnel into one complaint log. Support email, in-app feedback, phone support, sales calls, clinical advisory board notes, distributor reports, social media mentions — all of it. You do not need a dedicated complaint-management platform for this. You need a spreadsheet or a database with the following fields for each entry: unique ID, date of receipt, source channel, reporter details, device identification including UDI where available, lot or serial number, a verbatim description of what the user reported, the initial severity classification, the owner assigned to assess it, and the current status.
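As a sketch, that minimal log entry can be captured in a few lines of code. Everything here (the `ComplaintRecord` name, the field names, the example values) is illustrative rather than a prescribed schema; the Regulation cares that the fields exist and are filled in, not how you store them.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ComplaintRecord:
    """One row in the complaint log; field names are illustrative."""
    complaint_id: str             # unique ID, e.g. "C-2026-0042"
    received: date                # date of receipt
    channel: str                  # support email, phone, app store, ...
    reporter: str                 # reporter details
    device_id: str                # device identification, UDI where available
    lot_or_serial: Optional[str]  # lot or serial number, if known
    description: str              # verbatim description, not paraphrased
    severity: str = "unassessed"  # initial classification, set in step 2
    owner: str = ""               # assessor assigned to the record
    status: str = "open"          # open / under investigation / closed

entry = ComplaintRecord(
    complaint_id="C-2026-0042",
    received=date(2026, 4, 2),
    channel="support email",
    reporter="end user via support ticket #8123",
    device_id="UDI-DI 01234567890123",
    lot_or_serial="LOT-2319",
    description="Strap caused red, itchy skin after two nights of wear.",
)
print(entry.status)  # every entry starts open and unassessed
```

The defaults encode the intake rule: a new entry is always open and always unassessed, because classification belongs to step 2, not to the inbox.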

The non-negotiable part is the word "every." Not "complaints that look serious." Not "the ones engineering thinks are bugs." Every piece of feedback that touches device performance, safety, labelling, or intended use is logged. The assessment of whether it rises to a complaint in the regulatory sense happens in the next step — not at the inbox.

Train everyone in the company who might receive user feedback on the rule: if a user says anything about how the device worked or did not work, open a ticket in the complaint log. Sales does not get to triage. Support does not get to decide it is a feature request. The log catches it, then the process filters it.

## Step 2 — Assess severity and classify the complaint

Once a complaint is logged, the next step is an initial severity assessment by a qualified reviewer. This is where MDCG 2023-3 Rev.2 becomes operational. The guidance walks through the distinction between an incident, a serious incident, a user or use error, and a complaint that does not rise to either.

The initial assessment asks four questions:

1. Does the report describe a malfunction or deterioration in the characteristics or performance of the device, an inadequacy in the information supplied by the manufacturer, a user or use error, or an undesirable side effect?
2. If yes, did the event lead to, or could it have led to, the death of a patient, user, or other person, or a serious deterioration in their state of health, or a serious public health threat?
3. Is there a plausible link between the device and the reported outcome?
4. What is the timeline starting now — when does the manufacturer's awareness clock under Article 87 start running?

The answers determine the routing. A complaint that describes normal use and a feature request gets closed as "no device-related issue." A complaint that describes a minor performance deviation with no health consequence gets routed into the investigation and trending workflow. A complaint that describes a potential serious incident gets immediately triaged to vigilance — see step 3.
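The four questions and the routing they drive can be sketched as a small decision function. The function name and the routing labels are illustrative; the real decision tree lives in your SOP, and ambiguous cases should escalate conservatively toward vigilance rather than wait for certainty.

```python
def route_complaint(device_related: bool,
                    serious_outcome_possible: bool,
                    plausible_device_link: bool) -> str:
    """Sketch of the step-2 routing; names and labels are illustrative.

    device_related: malfunction, performance deterioration, labelling
        inadequacy, use error, or undesirable side effect? (question 1)
    serious_outcome_possible: did or could the event lead to death, serious
        deterioration in health, or a public health threat? (question 2)
    plausible_device_link: plausible link between device and outcome,
        including cases where a link cannot be excluded? (question 3)
    """
    if not device_related:
        return "close: no device-related issue"
    if serious_outcome_possible and plausible_device_link:
        # Question 4: the Article 87 awareness clock is already running.
        return "vigilance triage (step 3): Article 87 clock running"
    return "investigation and trending workflow (steps 4 to 6)"

# A pure feature request closes; a plausible serious-incident signal escalates.
print(route_complaint(False, False, False))
print(route_complaint(True, True, True))
```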

The reviewer performing this assessment must be competent to do so. EN ISO 13485:2016+A11:2021 clause 6.2 requires personnel performing work affecting product quality to be competent on the basis of appropriate education, training, skills, and experience. Whoever classifies complaints must understand the MDR vigilance definitions and the device well enough to recognise the difference between an annoying bug and an early safety signal. For most startups, this is the regulatory affairs lead or a trained support lead with a defined escalation path.

Every assessment is documented in the complaint record with the name of the reviewer, the date, the classification, and the reasoning. "Looks fine" is not a classification.

## Step 3 — Triage to vigilance if the event is a serious incident

If the severity assessment concludes that the event is or may be a serious incident as defined in MDR Article 2(65), the complaint immediately enters the vigilance workflow under Articles 87 to 92. This is the single most time-critical step in the entire process, because the reporting clock starts the moment the manufacturer becomes aware of the event.

The reporting deadlines fixed by Article 87(3) to (5) are:

- **Immediately, and not later than 2 days** after the manufacturer becomes aware of a serious public health threat.
- **Not later than 10 days** after the manufacturer becomes aware of a serious incident that resulted in or might have resulted in the death of a patient, user, or other person, or an unanticipated serious deterioration in a person's state of health.
- **Not later than 15 days** after the manufacturer becomes aware of any other serious incident.

Field safety corrective actions are reported under Article 87(1)(b), without undue delay and in advance of the FSCA being undertaken except in cases of urgency, with the analysis and field safety notice obligations set out in Article 89. Periodic summary reports are handled under Article 87(9) where the competent authority has agreed. Trend reporting under Article 88 is triggered by a statistically significant increase in the frequency or severity of incidents that are not themselves serious incidents — the mechanism that catches a cluster before it becomes a vigilance event.

The practical implication for a startup: the triage decision in step 2 cannot take a week. If there is any plausible signal that a complaint describes a serious incident, the vigilance clock is already running, and the team needs to confirm the classification fast, assemble the initial report, and submit it through the competent authority's channel within the deadline. For the detail on serious incident definitions and reporting mechanics, see the companion post on [serious incidents under MDR](/blog/serious-incidents-mdr). For the full walkthrough of the vigilance framework, see [what is vigilance under MDR](/blog/what-is-vigilance-mdr).

A complaint that is triaged to vigilance does not leave the complaint process. It continues to be tracked in the complaint log, with cross-references to the vigilance case file, and the investigation and corrective action steps still run in parallel.

## Step 4 — Investigate the root cause

Every complaint that is not closed out at initial assessment enters investigation. The depth of investigation scales with the severity and the potential regulatory impact, but the structure is the same in every case.

Investigation asks: what happened, why did it happen, how did the device contribute, and is there a broader population at risk? The toolkit includes review of device history records, analysis of returned devices where available, review of lot or batch data, check against similar prior complaints, literature review for analogous events on similar devices, and engineering analysis of the failure mode. For software devices, log analysis and telemetry review replace physical device return. For devices with clinical components, clinician interviews may be required.

The investigation record must be traceable. Who did the analysis, what data they reviewed, what they concluded, and how they reached the conclusion. An auditor reading the file six months later should be able to reconstruct the reasoning without calling the original investigator.

The critical decision at the end of investigation is the root-cause determination. "User error" is not a root cause unless the investigation actually demonstrates the user was trained, the IFU was clear, and the device performed as specified — otherwise "user error" is often a disguise for inadequate usability engineering or an incomplete IFU. A notified body auditor who sees a pattern of complaints closed as "user error" without supporting analysis will press on it. Be honest about what the investigation shows.

## Step 5 — Drive corrective action under ISO 13485 clause 8.5.2

Where the investigation identifies a nonconformity — whether in the device, the process, the documentation, or the labelling — corrective action is required under EN ISO 13485:2016+A11:2021 clause 8.5.2. Corrective action is not the same as a quick fix. Clause 8.5.2 requires the manufacturer to review the nonconformity, determine its causes, evaluate the need for action, determine and implement the action, verify that the action does not adversely affect the ability to meet applicable regulatory requirements or the safety and performance of the medical device, and review the effectiveness of the corrective action taken.

In practice, this is the CAPA process. The complaint record links to the CAPA record. The CAPA captures the root cause, the chosen action, the verification plan, the effectiveness check, and the sign-off. Where the corrective action changes the device, the risk file under EN ISO 14971:2019 + A11:2021 is updated. Where the action changes labelling or the IFU, the technical documentation is updated. Where the action affects devices already on the market, a field safety corrective action under Article 89 may be triggered, which reopens the vigilance workflow.

A common startup mistake is closing a complaint with a temporary fix and calling it done. The corrective action is not done when the immediate problem stops; it is done when the root cause is eliminated and effectiveness has been verified. Effectiveness verification typically means monitoring the metric that represented the original nonconformity across a defined window to confirm the issue does not recur.
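Effectiveness verification lends itself to the same discipline. A minimal sketch, in which the metric, the threshold, and the length of the window are assumptions standing in for whatever the CAPA record actually defines:

```python
def effectiveness_verified(post_fix_weekly_counts: list[int],
                           threshold_per_week: float) -> bool:
    """Illustrative check: the metric behind the original nonconformity
    stays at or below the predefined threshold for the whole verification
    window. Both parameters come from the CAPA record, not from this code."""
    return all(count <= threshold_per_week for count in post_fix_weekly_counts)

# Hypothetical eight-week window after a material change.
print(effectiveness_verified([1, 0, 0, 1, 0, 0, 0, 0], threshold_per_week=1))
print(effectiveness_verified([1, 0, 3, 0, 0, 0, 0, 0], threshold_per_week=1))
```

The point of writing it down, even this crudely, is that the verification criterion is fixed before monitoring starts, so "effective" is a measurement rather than an opinion.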

## Step 6 — Run trend analysis against defined metrics

Article 88 requires manufacturers to report any statistically significant increase in the frequency or severity of incidents that are not serious incidents, or of expected undesirable side-effects, that could have a significant impact on the benefit-risk analysis. This is not optional, and it is not something a manufacturer can perform by eyeballing the complaint log at quarter-end.

Trend analysis requires predefined metrics and predefined thresholds. The PMS plan under Annex III specifies both. For each complaint category — for example, skin irritation reports, battery performance issues, connectivity dropouts, IFU misinterpretation events — the plan sets an expected baseline frequency, a method for detecting a statistically significant deviation, and the action to take when the threshold is crossed. The method does not have to be elaborate. For a low-volume device, a control chart with a simple rule (for example, any week above the 95th percentile of historical weekly reports) may be enough. For a higher-volume device, the method may need to be more sophisticated.
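The simple rule from the example (any week above the 95th percentile of historical weekly reports) can be written down directly. The percentile method below is Python's default exclusive method and the baseline data is hypothetical; whichever method and threshold you choose must be the ones written into the PMS plan before the data arrives.

```python
import statistics

def week_exceeds_baseline(weekly_history: list[int], this_week: int) -> bool:
    """Flag any week above the 95th percentile of historical weekly
    complaint counts for a single complaint category."""
    p95 = statistics.quantiles(weekly_history, n=20)[-1]  # 95th percentile
    return this_week > p95

history = [0, 1, 0, 2, 1, 0, 1, 3, 0, 1, 2, 1]  # hypothetical baseline
print(week_exceeds_baseline(history, 5))  # well above the historical band
```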

The discipline is that the method exists in writing before the data arrives, not after. A trend analysis invented retrospectively to explain why a cluster was not reported is a trend analysis that will not survive an audit. MDCG 2025-10 (December 2025) is the current PMS guidance and the document to read for the operational expectations.

When a trend crosses the threshold, two things happen. First, the trend is reported under Article 88. Second, the underlying complaints are re-assessed together — a cluster that is non-serious individually may represent a serious signal collectively, and the re-assessment may promote the cluster to a full vigilance event.

## Step 7 — Close the loop into the CER and the risk file

The final step of every complaint cycle is the feedback loop into two other documents that a notified body will look at alongside the complaint log: the clinical evaluation report and the risk management file.

MDR Article 61(11) requires the clinical evaluation to be updated throughout the lifetime of the device with data from the PMCF plan and from the PMS system. Complaint data — specifically complaints about clinical performance, clinical safety, undesirable side-effects, or benefit-risk — feeds into the CER update cycle. A CER that was updated without reference to the complaint data from the same period is a CER that failed the loop.

EN ISO 14971:2019 + A11:2021 establishes risk management as a lifecycle activity. Complaint findings are a primary input to the risk file update. Each complaint that reveals a new hazard, a higher occurrence rate than assumed, or a more severe consequence than estimated triggers a risk file review. The residual risk is re-evaluated, new risk controls may be introduced, and the benefit-risk determination is refreshed. Article 83(3)(a) explicitly names the update of the benefit-risk determination as a required use of PMS findings.

The complaint system that does not close these two loops is a system that files complaints in a drawer. The complaint system that closes them is the system that catches an arm-strap skin irritation cluster, traces it to the textile-polymer interface, updates the material specification, refreshes the risk file, updates the CER, and keeps the device on the market safely.

## Common mistakes startups make

- **One channel per complaint, no central log.** Support, sales, and engineering each have their own tracking, nothing reconciles, and the complaint that mattered lives in a Slack DM.
- **Severity assessed by whoever picked up the ticket.** The person classifying complaints is untrained on MDR vigilance definitions, and the serious-incident triage fails because nobody recognises the signal.
- **"User error" as a default closure.** Any complaint that does not have an obvious device fault gets closed as user error without investigation, and the pattern of inadequate IFU or poor usability engineering is never identified.
- **Trend analysis invented at audit time.** The PMS plan mentions trend analysis, but no predefined metrics or thresholds exist, and no analysis has actually been performed since launch.
- **Corrective action closed without effectiveness verification.** A temporary fix is applied, the complaint is marked resolved, and the root cause is never confirmed eliminated.
- **No feedback into the CER or the risk file.** The complaint log exists, but the annual CER update and the risk file review never reference it, breaking the loop that Article 61 and Article 83 both require.
- **Vigilance clock missed because triage took too long.** A potential serious incident sits in the intake queue for four days before anyone looks at it, and the 10-day or 15-day clock under Article 87 is already compromised.

## The Subtract to Ship angle — the minimum complaint system that runs

The [Subtract to Ship framework for MDR](/blog/subtract-to-ship-framework-mdr) applied to complaint handling produces a clear rule: build the smallest complaint process that satisfies every obligation under Articles 83 to 92 and EN ISO 13485:2016+A11:2021 clauses 8.2.2 and 8.5.2, and make sure every step actually runs. Everything beyond that is waste. Everything less is a nonconformity.

Concretely, for a three-person startup on a Class IIa device, the minimum that runs looks like this. One intake form that every channel feeds into. One spreadsheet or lightweight database as the complaint log, with versioning. One trained reviewer — usually the regulatory lead or a trained founder — who performs the initial severity assessment within a defined service-level target, for example within 24 hours of receipt. One decision tree that routes to vigilance, investigation, or closure. One CAPA log linked to the complaint log. One trend analysis run on a defined cadence, for example monthly, against metrics defined in the PMS plan. One quarterly review that closes the loop into the CER and the risk file. And one annual audit of the complaint process itself against the PMS plan.

That is a lean complaint system. It is not a token complaint system — every element traces to a specific article, annex, or ISO clause. But it can run inside a small team without swallowing anyone's week, and it will survive an audit. For a broader walkthrough of the minimum-viable PMS pattern, see [post-market surveillance under MDR](/blog/what-is-post-market-surveillance-mdr).

What the minimum does not include: a six-figure complaint-management platform nobody uses, a monthly dashboard nobody reads, a CAPA queue that tracks five hundred open items none of which anyone remembers opening, or a complaint categorisation scheme so detailed that classification takes longer than investigation. Every one of those is subtraction bait.

## Reality Check — where do you stand?

1. Does every complaint channel in your company — support, sales, phone, email, social media, distributor reports — funnel into one logged complaint record with a unique ID? If any channel is uncaptured, the log is incomplete.
2. Can you name the person who performs initial severity assessment, the training they have received on MDR vigilance terms, and the service-level target from receipt to classification?
3. If a potential serious incident arrived in your inbox right now, how long until the vigilance clock under Article 87 is acknowledged and the initial report is drafted?
4. Does your PMS plan define trend analysis metrics and thresholds in writing, and has trend analysis actually been performed on the cadence the plan specifies?
5. When a complaint closes with "user error," is there a documented investigation that demonstrates the user was trained, the IFU was clear, and the device performed as specified?
6. Is every CAPA linked to the originating complaint, with documented root cause, action, and effectiveness verification?
7. Does your most recent clinical evaluation update reference the complaint data from the same period, and does the risk file show updates driven by PMS findings where applicable?
8. Have you read MDCG 2023-3 Rev.2 and MDCG 2025-10 end to end, or have you only skimmed them?

## Frequently Asked Questions

**Is customer complaint handling the same as vigilance?**

No. Complaint handling is the broader quality-system process required by EN ISO 13485:2016+A11:2021 clause 8.2.2 for managing all customer feedback related to device performance. Vigilance, under MDR Articles 87 to 92, is the subset of reporting obligations triggered when a complaint describes a serious incident or a field safety corrective action. Every vigilance case starts as a complaint; not every complaint becomes a vigilance case.

**Do I need to log every complaint, even minor ones?**

Yes. The complaint log must capture every piece of feedback that could reflect on device performance, safety, labelling, or intended use. The initial severity assessment decides what happens next, but the log is the gate. Filtering at intake rather than at assessment is how complaint systems lose signals.

**Who is qualified to perform the initial severity assessment?**

Per EN ISO 13485:2016+A11:2021 clause 6.2, the reviewer must be competent on the basis of education, training, skills, and experience. In practice for a startup this is the regulatory affairs lead or a trained support lead who understands MDR vigilance definitions, knows the device, and has a defined escalation path for ambiguous cases.

**What are the reporting timelines for a serious incident under Article 87?**

Immediately and not later than 2 days for a serious public health threat; not later than 10 days for a serious incident that resulted in or might have resulted in death or unanticipated serious deterioration in a person's state of health; not later than 15 days for any other serious incident. The clock starts when the manufacturer becomes aware of the event, as clarified by MDCG 2023-3 Rev.2.

**When does a cluster of non-serious complaints become reportable?**

Under MDR Article 88, any statistically significant increase in the frequency or severity of non-serious incidents or expected undesirable side-effects that could have a significant impact on the benefit-risk analysis must be reported. The threshold for "statistically significant" must be defined in the PMS plan before the data arrives, not invented afterward.

**How does complaint handling connect to the CER and the risk file?**

Complaint findings are a primary input to both. MDR Article 61(11) requires the clinical evaluation to be updated with PMS data throughout the device lifetime, and EN ISO 14971:2019 + A11:2021 establishes risk management as a lifecycle activity fed by post-market information. A complaint system that does not drive updates to the CER and the risk file where warranted is not closing the loops the Regulation requires.

## Related reading

- [What is post-market surveillance under MDR?](/blog/what-is-post-market-surveillance-mdr) — the pillar on the PMS system that complaint handling feeds into.
- [MDR Articles 83 to 86 — the PMS framework explained](/blog/mdr-articles-83-86-pms-framework) — the article-by-article walkthrough of the PMS obligations.
- [The PMS plan under MDR Annex III](/blog/pms-plan-mdr-annex-iii) — where complaint handling is documented as part of the PMS plan.
- [The PMS Report for Class I devices under Article 85](/blog/pms-report-class-i-devices) — how complaint findings are summarised for Class I.
- [What is vigilance under MDR?](/blog/what-is-vigilance-mdr) — the vigilance framework that complaint handling triages into.
- [Serious incidents under MDR — definition and reporting](/blog/serious-incidents-mdr) — the definitions and timelines triggered by complaint triage.
- [Field safety corrective actions under MDR](/blog/field-safety-corrective-actions-mdr) — what happens when a complaint investigation leads to an FSCA.
- [Trend reporting under MDR Article 88](/blog/trend-reporting-mdr-article-88) — the statistical trigger mechanism for non-serious event clusters.
- [CAPA under ISO 13485 for MedTech startups](/blog/capa-iso-13485-startups) — the corrective action process complaint findings feed into.
- [The Subtract to Ship framework for MDR compliance](/blog/subtract-to-ship-framework-mdr) — the methodology behind the lean complaint system.

## Sources

1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Articles 83 to 86 (post-market surveillance system, plan, PMS Report, and PSUR), Articles 87 to 92 (vigilance, serious incident reporting, FSCAs, trend reporting, analysis of serious incidents and FSCAs), and Annex III (technical documentation on post-market surveillance). Official Journal L 117, 5.5.2017.
2. MDCG 2025-10 — Guidance on post-market surveillance of medical devices and in vitro diagnostic medical devices. Medical Device Coordination Group, December 2025.
3. MDCG 2023-3 Rev.2 — Questions and Answers on vigilance terms and concepts as outlined in Regulation (EU) 2017/745 and Regulation (EU) 2017/746. Medical Device Coordination Group, first publication February 2023; Revision 2, January 2025.
4. EN ISO 13485:2016+A11:2021 — Medical devices — Quality management systems — Requirements for regulatory purposes. Clause 8.2.2 (Feedback and complaint handling) and clause 8.5.2 (Corrective action).
5. EN ISO 14971:2019 + A11:2021 — Medical devices — Application of risk management to medical devices.

---

*This post is part of the Post-Market Surveillance & Vigilance series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. The complaint process described here is the lean version that runs inside a small team without collapsing under its own weight — and that is the only version worth building, because the one that does not run is the one that misses the signal.*

---

*Find more in the [Post-Market Surveillance & Vigilance](https://zechmeister-solutions.com/en/blog/category/pms-vigilance) cluster of the [Subtract to Ship: MDR Blog](https://zechmeister-solutions.com/en/blog). For EU MDR certification consulting, see [zechmeister-solutions.com](https://zechmeister-solutions.com).*
