A Predetermined Change Control Plan (PCCP) is a mechanism in the FDA framework that lets an AI/ML medical device evolve after clearance within boundaries the manufacturer and the agency agreed to in advance. Instead of submitting a new marketing application each time the model is retrained or updated, the manufacturer describes upfront which changes are anticipated, how those changes will be implemented, and how their safety and effectiveness will be verified. The MDR has no identical instrument. Under Regulation (EU) 2017/745 significant changes to an AI/ML device generally trigger reassessment by a Notified Body, and the "predetermined change" logic has to be reconstructed through careful intended purpose scoping, risk management, and software lifecycle discipline. This post explains the PCCP at the general framing level and compares it to how an EU startup has to think about AI/ML change control under the MDR.
By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.
TL;DR
- The FDA's Predetermined Change Control Plan is a way to describe, in advance, the AI/ML changes a manufacturer expects to make after clearance and the controls that will govern them.
- When accepted as part of a marketing submission, the PCCP lets the manufacturer implement those pre-agreed changes without filing a new submission each time the model is retrained or re-tuned.
- The MDR does not contain an equivalent named instrument. Under Regulation (EU) 2017/745 significant changes typically require a fresh conformity assessment step with the Notified Body.
- EU startups working in AI/ML still have to plan change control carefully — through the intended purpose under Article 2(12), the risk management file under EN ISO 14971, the software lifecycle under EN 62304, and the PMS system under Article 83.
- Even for EU-first startups, understanding the PCCP concept is useful because it clarifies the kind of discipline any AI/ML device needs to survive an audit on either side of the Atlantic.
Why this matters for AI/ML startups
Founders building AI/ML medical devices hit the same brick wall in their first strategy meeting. The product they are building is designed to get better over time. The model will be retrained as more data comes in. The thresholds will be tuned. The feature set will evolve. That is not a bug in the product vision — it is the entire point of using machine learning in the first place. A frozen model, shipped once and never updated, throws away the core advantage of the technology.
Then the regulatory reality lands. Every meaningful change to medical device software touches the conformity basis of the device. Under the MDR, a "significant change" to a CE-marked device can trigger a fresh Notified Body step. Under the FDA framework, a change beyond the scope of the original clearance can trigger a new marketing submission. Neither outcome is compatible with the weekly or monthly retraining cadence the data science team had in mind.
The Predetermined Change Control Plan is the FDA's structured answer to this tension. It is worth understanding even for founders who will only ever sell in the EU, because the discipline it forces on the manufacturer is the same discipline the MDR expects — just organised differently.
What the PCCP is, at a general framing
At a general framing level — and this post stays at that level deliberately, because FDA execution belongs to US specialists — the PCCP is a document included in an FDA marketing submission for an AI/ML-enabled device that describes three things.
First, the planned modifications. Which specific changes does the manufacturer anticipate making after clearance? Examples in the public FDA discussion of the concept include retraining on new data, changes to input features, changes to performance thresholds, or updates that target specific subpopulations. The planned modifications are described specifically enough that both the manufacturer and the agency know what is in scope and what is out of scope.
Second, the modification protocol. How will those changes be implemented in a controlled way? This covers data management, retraining methodology, verification and validation approach, performance evaluation, and the decision criteria under which a change will or will not be released to users.
Third, the impact assessment. How will the manufacturer demonstrate that each planned change preserves the safety and effectiveness of the device? This ties back to risk management and performance monitoring, and it is the part that gives the agency confidence that the pre-agreed flexibility will not be abused.
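To make the "decision criteria" idea in the modification protocol concrete, here is a minimal sketch of what a pre-agreed release gate for a retrained model could look like in code. Everything in it — the metric names, the thresholds, the no-regression rule — is an illustrative assumption for this post, not taken from any FDA guidance or real PCCP.

```python
# Illustrative sketch only: metric names, thresholds, and the
# no-regression rule are invented assumptions, not FDA requirements.
from dataclasses import dataclass


@dataclass
class ReleaseCriteria:
    """Pre-agreed acceptance thresholds a retrained model must meet."""
    min_sensitivity: float
    min_specificity: float
    max_subgroup_gap: float  # largest allowed sensitivity gap across subgroups


def may_release(candidate: dict, baseline: dict, criteria: ReleaseCriteria) -> bool:
    """Decide whether a retrained model stays inside the pre-agreed envelope.

    `candidate` and `baseline` map metric names to values measured on the
    same held-out validation set; `baseline` is the currently released model.
    """
    if candidate["sensitivity"] < criteria.min_sensitivity:
        return False
    if candidate["specificity"] < criteria.min_specificity:
        return False
    # No regression against the currently released model.
    if candidate["sensitivity"] < baseline["sensitivity"]:
        return False
    if candidate["subgroup_gap"] > criteria.max_subgroup_gap:
        return False
    return True
```

The point of the sketch is not the specific checks but the shape: a release decision that is mechanical, pre-agreed, and auditable, rather than a judgment call made under release pressure.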
When the FDA accepts a PCCP as part of the marketing submission, changes that fall inside the agreed scope can be implemented under the existing clearance. Changes outside the scope still require the normal change control process, which may include a new submission. The PCCP does not give a blanket permission to modify the device. It gives a structured, bounded permission to modify it in the specific ways the manufacturer described and the agency reviewed.
The important framing point for this post is that the PCCP is part of the FDA framework. It is not recognised as a standalone instrument under the MDR, and a manufacturer cannot reference an FDA-cleared PCCP to modify a CE-marked device without engaging the Notified Body under the MDR change control process. The two systems are separate. See FDA regulation of medical devices: a primer for EU startups for the overall FDA orientation.
How the PCCP changes AI/ML regulation in practice
The PCCP changes the cadence problem in one specific way. Before PCCPs existed as a structured concept, an AI/ML manufacturer facing a significant model update essentially had two bad options: ship the change and risk being outside the scope of the clearance, or file a new submission for every meaningful update and watch the release cadence collapse under administrative weight.
The PCCP creates a third option. The manufacturer negotiates a scope of anticipated changes upfront, locks the controls that will govern those changes, and then operates inside that envelope for the life of the clearance. The release cadence becomes a function of the manufacturer's own change control discipline rather than a function of the submission queue.
The trade-off is front-loaded rigour. To earn the PCCP, the manufacturer has to think clearly about what the model will need to change, why, how, and under which safety envelope — before the first clearance lands, not after. The kind of hand-waving that sometimes survives an initial submission ("we will figure out retraining later") does not survive a PCCP review, because the PCCP is the plan for the retraining.
For founders, this matters because the PCCP work rewards product teams that already have real machine learning operations discipline, and it punishes teams that have been treating the model as a black box. The best-prepared AI/ML teams we see are the ones that could describe their retraining cadence, their data drift monitoring, and their acceptance criteria in one whiteboard session before anyone mentioned regulation.
Comparison with the MDR-adjacent approach
The MDR does not use the term "Predetermined Change Control Plan." It does not contain a mechanism by which a Notified Body pre-approves a set of model changes that the manufacturer can then implement without further involvement. Any assertion to the contrary has to be checked carefully against the current consolidated text of Regulation (EU) 2017/745, and at the time of writing no such instrument exists in the MDR text.
What the MDR does contain is a set of obligations that together force the manufacturer to think about changes in structured ways that are adjacent to the PCCP logic, even if the instrument is not the same.
The intended purpose under MDR Article 2(12) is the anchor. A change that stays inside the intended purpose and inside the technical characteristics documented in the technical file is handled through the manufacturer's QMS change control. A change that alters the intended purpose or materially changes the risk profile is a different device and typically requires Notified Body involvement. Scoping the intended purpose carefully at the outset is how EU startups create breathing room for later updates — not through a named instrument, but through the width of the purpose statement they defend to the Notified Body.
The risk management file under EN ISO 14971:2019+A11:2021 is where anticipated model behaviour and the controls on that behaviour are documented. For AI/ML devices, this file has to cover not only the risks of the current model but the risks that specific kinds of change would introduce. If a retraining event would change the risk profile, the risk file has to account for it.
The software lifecycle under EN 62304:2006+A1:2015 is where the discipline of how changes are implemented is documented. The software safety class, the problem resolution process, the verification and validation approach, and the release criteria all live here. For AI/ML work, this is also the standard that struggles most with ML-specific concepts, which is why many manufacturers also reference emerging AI-specific guidance and internal ML quality processes on top of EN 62304.
The cybersecurity lifecycle under EN IEC 81001-5-1:2022 and MDCG 2019-16 Rev.1 covers the security-relevant side of updates — including model updates that could introduce new attack surfaces or affect the security posture of the device.
Classification under MDR Annex VIII Rule 11, with the guidance in MDCG 2019-11 Rev.1, determines the class of the software as a medical device and therefore the rigour of the conformity assessment and the appetite of the Notified Body for change scope discussions.
The post-market surveillance system under MDR Article 83 closes the loop. The PMS plan has to describe how the manufacturer will monitor the performance of the device in the field, including — for AI/ML devices — the specific monitoring needed to detect data drift, performance degradation, or subgroup effects.
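As one concrete illustration of what "detecting data drift" in a PMS plan could mean in practice, here is a minimal sketch using the population stability index (PSI), a common drift metric. The bin edges, the conventional 0.2 alert threshold, and all names are illustrative assumptions, not regulatory requirements.

```python
# Minimal data drift sketch using the population stability index (PSI).
# Bin edges and the 0.2 alert threshold are illustrative assumptions.
import math


def population_stability_index(expected: list[float], observed: list[float],
                               bin_edges: list[float]) -> float:
    """PSI between a reference (validation-time) sample and a field sample.

    PSI = sum over bins of (obs_frac - exp_frac) * ln(obs_frac / exp_frac).
    """
    def fractions(values: list[float]) -> list[float]:
        counts = [0] * (len(bin_edges) - 1)
        for v in values:
            for i in range(len(bin_edges) - 1):
                if bin_edges[i] <= v < bin_edges[i + 1]:
                    counts[i] += 1
                    break
        total = max(len(values), 1)
        # Small floor avoids log(0) for empty bins.
        return [max(c / total, 1e-6) for c in counts]

    exp_f = fractions(expected)
    obs_f = fractions(observed)
    return sum((o - e) * math.log(o / e) for e, o in zip(exp_f, obs_f))


def drift_alert(psi: float, threshold: float = 0.2) -> bool:
    """A PSI above roughly 0.2 is conventionally treated as significant drift."""
    return psi > threshold
```

In a real PMS system this check would run continuously on incoming field data for each model input, with the alert feeding the trend reporting and CAPA processes rather than a boolean.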
None of these is a "PCCP." Together, they force a startup working on AI/ML under MDR to build the same kind of disciplined change envelope that a PCCP would formalise under the FDA framework, with the difference that the EU envelope is constructed from multiple obligations rather than assembled into a single named document and pre-approved by the regulator in advance.
For the deeper discussion of locked versus adaptive algorithms under the MDR see Locked vs adaptive AI algorithms under MDR. For the continuous learning discussion see Continuous learning AI under MDR in 2026. For the change management side see AI/ML change management: retraining and assessment. The dedicated post on the PCCP concept in the AI-device context lives at Predetermined change control plan for AI devices.
When to consider a PCCP
This section is framed carefully because the decision to pursue a PCCP belongs to an FDA specialist working with the manufacturer on a specific device. At the orientation level, however, a handful of conditions tend to make a PCCP worth considering.
The device is AI/ML-enabled and the manufacturer genuinely intends to update the model after clearance. If the model is going to be frozen at release and never touched, there is nothing to pre-agree and no cadence problem to solve.
The anticipated changes are describable in advance with enough specificity that the agency can reason about them. "We will retrain the model on more data with the same architecture, the same input features, and the same acceptance criteria" is describable. "We will improve the model" is not.
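One way to test whether an anticipated change is "describable" in this sense is to try writing it as structured data rather than prose. The sketch below is hypothetical — the field names and the scoping rule are inventions for this post, not an FDA template — but the exercise is revealing: a vague change cannot be expressed this way at all.

```python
# Hypothetical sketch: writing an anticipated change as structured data.
# Field names and the scoping rule are illustrative, not an FDA template.
from dataclasses import dataclass


@dataclass(frozen=True)
class AnticipatedChange:
    description: str           # e.g. "retrain on additional site data"
    architecture_fixed: bool   # True = same model architecture
    input_features_fixed: bool # True = same input features
    acceptance_criteria: str   # reference to pre-agreed release criteria


def is_describable(change: AnticipatedChange) -> bool:
    """In this sketch, a change is in scope only if it is pinned to a fixed
    architecture, fixed input features, and named acceptance criteria."""
    return (change.architecture_fixed
            and change.input_features_fixed
            and bool(change.acceptance_criteria))
```

The retraining example from the paragraph above passes this test; "we will improve the model" cannot even be filled into the structure.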
The change control, risk management, and performance monitoring processes are real, documented, and followed. A PCCP is only as credible as the quality system behind it. A team whose MLOps maturity is a slide deck will not pass a PCCP review.
The business case for post-clearance flexibility is substantial enough to justify the upfront investment. PCCPs are not free to prepare, and the sensible trade only exists if the flexibility they buy is worth more than the preparation cost.
Whether a specific device meets these conditions is a conversation to have with an FDA specialist, not with a blog post. The orientation here is simply: the PCCP exists, it is useful in the right situations, and it is worth understanding before the product architecture is frozen.
How EU startups should think about it
For an EU-first startup with US ambition later, the PCCP is a useful organising concept even though it will not be a live FDA artefact until the US submission is on the table.
Treat the PCCP as a forcing function for disciplined thinking. Write down, for your AI/ML device, the list of changes you expect to make after launch. Write down the controls you will apply to each one. Write down how you will know the changes preserved safety and performance. If you can answer those questions crisply, you are in better shape for both the MDR technical documentation and any future FDA submission. If you cannot, the model itself may not be as well-understood as the team assumes.
Treat the PCCP as a sanity check on the intended purpose. If the list of anticipated changes pushes outside the intended purpose you defended to the Notified Body, either the intended purpose was scoped too narrowly for the real product, or the product roadmap is silently drifting into unauthorised territory. Both are problems, and both are better caught early.
Treat the PCCP as a mental model for MDR Article 83 PMS planning. A lot of AI/ML PMS plans we see are thin because the team has not thought through what performance monitoring in the field actually looks like for a model that evolves. The PCCP concept forces that thinking to happen.
What the PCCP cannot do for an EU startup is substitute for the MDR obligations themselves. There is no mechanism by which a Notified Body will recognise a PCCP as a pre-approval for MDR changes. The MDR change control process still applies, and substantial changes still flow through the Notified Body under the conformity assessment route appropriate to the device class.
See FDA software guidance and the EU comparison for the broader FDA-vs-EU software framing, and FDA 510(k) process and clearance for the submission context in which a PCCP typically lives.
Common mistakes startups make
- Treating the PCCP as a replacement for real MLOps discipline. The PCCP documents the discipline — it does not create it. A team without real change control, versioning, and validation cannot fake it into a PCCP.
- Assuming a PCCP exists under the MDR. It does not. The MDR obligations that approximate the discipline are spread across intended purpose scoping, EN ISO 14971 risk management, EN 62304 software lifecycle, Article 83 PMS, and the change control clauses of EN ISO 13485.
- Writing the anticipated changes list too broadly. "Any improvement to the model" is not a scope. A PCCP built on vague change descriptions will not survive review, and an MDR change envelope built on the same vagueness will not survive a Notified Body audit.
- Ignoring data drift monitoring. An AI/ML device without data drift monitoring has no basis for claiming that its post-deployment performance matches its pre-clearance performance. This is a gap that shows up on both sides of the Atlantic.
- Pursuing a PCCP without an FDA specialist. The PCCP is an FDA instrument and the execution belongs to people who practice inside the FDA system daily. Generalist consultants who claim to handle it from Europe without deep FDA experience are not the right partners.
The Subtract to Ship angle
Subtract to Ship applied to AI/ML change control is not about reducing the regulatory work. It is about not doing the wrong regulatory work. For AI/ML devices the wrong work tends to look like this: elaborate documentation that freezes a model architecture the product team already knows will change; risk analyses that ignore the retraining cadence; PMS plans that talk about vigilance in the abstract without any specific provision for model performance monitoring; change control procedures copied from a hardware company and grafted onto a software team that ships weekly.
The subtraction in this area is the fantasy work — the compliance theatre that assumes the model will behave after clearance the way it behaved in validation, and then invents paperwork to keep the illusion alive. Subtract that. Keep the real work: a tight intended purpose, a risk file that acknowledges the cadence the product team actually runs, a software lifecycle that is honest about ML, a PMS plan that genuinely monitors what the model is doing in the field, and change control procedures that match the way the team works. See The Subtract to Ship framework for MDR for the foundational methodology this post applies.
The PCCP concept — even when it will not be a live FDA artefact for a long time — fits into this because it forces the team to do the honest version of the thinking. If the team cannot describe its anticipated changes clearly, the team does not understand its own model well enough to ship it safely, PCCP or not.
Reality Check — Where do you stand on AI/ML change control?
- Can you describe, in one paragraph, the specific changes you expect to make to your AI/ML model in the first two years after launch?
- For each of those anticipated changes, can you name the controls you will apply, the validation you will run, and the release criteria you will use?
- Is your intended purpose wide enough that the changes you anticipate stay inside it, or are you silently planning to drift outside it?
- Does your risk management file address the risks specific to retraining and model evolution, or only the risks of the frozen release?
- Does your PMS plan include specific monitoring of model performance in the field, including data drift and subgroup performance?
- Does your software lifecycle documentation acknowledge the ML-specific aspects of your development, or does it read like a standard waterfall software project?
- If you want the US market later, have you at least sketched what a PCCP for your device would look like, so your product architecture does not accidentally foreclose the option?
If you cannot answer at least four of these clearly, the change control side of your AI/ML strategy is not yet ready for either regulator.
Frequently Asked Questions
Does the MDR have an equivalent of the FDA PCCP? No. Regulation (EU) 2017/745 does not contain a named Predetermined Change Control Plan instrument. Under the MDR, the discipline of controlling AI/ML changes is constructed from intended purpose scoping under Article 2(12), risk management under EN ISO 14971, software lifecycle under EN 62304, cybersecurity lifecycle under EN IEC 81001-5-1, classification under Annex VIII Rule 11 with MDCG 2019-11 Rev.1, and post-market surveillance under Article 83. These obligations together approximate the PCCP logic without being a single pre-approved document.
Can an FDA-accepted PCCP help me with a Notified Body? Not directly. A PCCP is a mechanism of the FDA framework and has no legal standing under the MDR. A Notified Body will still assess your device under the MDR change control process. The work you did to prepare the PCCP can inform your MDR technical documentation and your QMS change control, but it does not substitute for the MDR assessment.
Does every AI/ML device need a PCCP? No. A PCCP is useful when the manufacturer genuinely intends to make describable changes after clearance and has the quality system to support them. For a model that will be frozen at release, there is nothing to pre-agree. For a team whose MLOps discipline is immature, a PCCP will not survive review.
How does a PCCP interact with the 510(k) process? At the orientation level, a PCCP is included in the marketing submission for an AI/ML device and, when accepted, defines the scope of post-clearance changes that can be implemented without a new submission. The specifics of how the PCCP is presented, negotiated, and reviewed belong to an FDA specialist and are beyond the scope of this post. See FDA 510(k) process and clearance for the 510(k) orientation.
What is the biggest risk with AI/ML change control, regardless of the regulator? The biggest risk is the gap between the model the team validated and the model the users are actually seeing, combined with no monitoring that would detect the gap. This failure mode is regulator-agnostic. Both the FDA and the Notified Body will come down hard on it if it shows up in an inspection or an audit. The PCCP concept, and the MDR obligations that sit adjacent to it, exist to prevent this failure mode from reaching patients.
Related reading
- AI medical devices under the MDR: the regulatory landscape — the MDR framing for AI devices.
- Locked vs adaptive AI algorithms under MDR — the MDR-side discussion of frozen and evolving models.
- Continuous learning AI under MDR in 2026 — where the MDR currently sits on post-deployment learning.
- AI/ML change management: retraining and assessment — the practical change control discussion under MDR.
- Predetermined change control plan for AI devices — the dedicated PCCP-concept post.
- FDA regulation of medical devices: a primer for EU startups — the overall FDA orientation.
- FDA 510(k) process and clearance — the submission context for most PCCPs.
- FDA software guidance and the EU comparison — the broader FDA-vs-EU software framing.
- The Subtract to Ship framework for MDR — the methodology behind this blog.
Sources
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices (MDR), Article 2(1), Article 2(12), Article 10, Article 83, Annex I Section 17 (software), Annex VIII Rule 11 (software classification). Official Journal L 117, 5.5.2017.
- MDCG 2019-11 Rev.1 — Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 - MDR and Regulation (EU) 2017/746 - IVDR (October 2019, Rev.1 June 2025).
- EN 62304:2006 + A1:2015 — Medical device software — Software life-cycle processes.
- EN ISO 14971:2019 + A11:2021 — Medical devices — Application of risk management to medical devices.
- EN IEC 81001-5-1:2022 — Health software and health IT systems safety, effectiveness and security — Part 5-1: Security — Activities in the product life cycle.
- U.S. Food and Drug Administration — Center for Devices and Radiological Health (CDRH), public materials on Predetermined Change Control Plans for AI/ML-enabled device software functions. Referenced at the general framing level only. https://www.fda.gov/medical-devices
This post is part of the FDA & International Market Access series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. A note on the limits of our expertise: Tibor's authority is in the EU MDR and the Notified Body system. The FDA Predetermined Change Control Plan is discussed here at the orientation level only, so EU founders can understand the concept and how it compares to the MDR-adjacent approach. For any specific question about preparing, negotiating, or executing a PCCP for your device, work with a US regulatory specialist who practices inside the FDA system daily. This primer is the mental model. The specialist is the execution.