The DIY versus experts question is not a single choice. It is a series of scoped decisions — one per task, one per deliverable, one per irreversible commitment. The framework is to split regulatory work into four buckets (strategic decisions, drafting, review, audit preparation), score each bucket against your team's real competence and the cost of getting it wrong, and allocate accordingly. Strategic decisions almost always need expert input. Drafting almost never does. Review and audit prep are where the hybrid arrangements live. Run the framework task by task and you stop wasting money on help you do not need and stop gambling on decisions you cannot afford to get wrong.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • The DIY vs experts regulatory decision framework is not binary. It is a scoped decision made separately for every task in your regulatory plan.
  • The four task buckets are strategic decisions, drafting, review, and audit preparation. Each one has a different default answer.
  • Strategic decisions (intended purpose, classification, clinical evidence pathway, Notified Body selection, PRRC arrangement under MDR Article 15) almost always benefit from expert input because the cost of getting them wrong is an order of magnitude larger than the cost of help.
  • Drafting (SOPs, training records, vendor files, PMS data collection infrastructure) can usually be done in-house by a disciplined founder with good reading skills.
  • Review and audit preparation are the hybrid zones — the work is done in-house, the expert sits next to it and catches the errors nobody inside the company would see.
  • The framework fails the moment a founder says "we have a dedicated expert handling regulatory" about a person whose competence has never been independently verified.

Why this post exists

Founders ask the question wrong. They ask "should we hire a regulatory consultant?" as if the answer is yes or no for the whole project. It is not. A regulatory project is fifty or a hundred distinct tasks, and the right answer for each one is different. Bundling them into a single decision produces two failure modes: founders who pay a senior rate to have someone draft their training SOP, and founders who save money on the intended purpose conversation and then rebuild the entire technical file eighteen months later.

The framework in this post is the one we actually use when a startup asks us where to spend their regulatory budget. It is the same framework whether the expert on the other side is us, a competitor, or an internal hire. Use it to evaluate us. Use it to evaluate anyone.

This post is the decision framework. The companion posts are DIY vs. Hiring an MDR Regulatory Consultant, which covers the evaluation of consultants themselves, and When to Bring in a Regulatory Consultant, which covers the timing signals. This one covers the task-level allocation.

The decision is not binary

Every startup we have worked with that got the DIY versus experts question badly wrong had the same underlying mistake: they treated it as one decision. One budget line. One vendor. One signature.

The correct frame is that regulatory work is a portfolio of tasks, each with its own answer. Some of those tasks have a clear DIY default. Some have a clear expert default. Some sit in between and need a hybrid arrangement where the work is done in-house with external supervision at named checkpoints.

Treating the question as binary produces two symmetrical mistakes. The founder who decides "we will do everything ourselves" ends up with an intended purpose that drifts between documents, a classification rationale that points at the wrong Annex VIII rule, and a first Notified Body interaction that burns months of queue time. The founder who decides "we will outsource everything" ends up paying a senior consultant rate to write documents their own team could have drafted in a weekend, and still has to do the work of reviewing and owning the output because a consultant cannot run your QMS for you.

The framework fixes both mistakes by forcing the decision down to the task level.

The four task buckets

Split every regulatory activity into one of four buckets. The bucket determines the default answer. The default can be overridden for a specific situation, but only deliberately and with a reason.

Bucket 1 — Strategic decisions. These are the decisions where a wrong answer compounds into months of rework or, in the worst case, a file that will not hold up. Intended purpose definition, classification under Annex VIII, clinical evidence pathway under Article 61 and Annex XIV, Notified Body selection and first engagement, the PRRC arrangement under MDR Article 15. The default for this bucket is expert input, every time, regardless of how confident the founding team feels.

Bucket 2 — Drafting. The work of turning a decision into a document. Writing the SOPs that describe how your team actually operates. Writing the training materials for your own processes. Populating vendor files. Capturing risk management documentation under a framework that has already been decided. Drafting the first pass of literature-based clinical evaluation input. Building the PMS data collection infrastructure. The default for this bucket is DIY, with an expert available for questions that arise during the drafting.

Bucket 3 — Review. The work of checking whether the drafted documents actually say what they need to say, whether the classification rationale holds up, whether the risk file is internally consistent, whether the clinical evaluation strategy will survive a Notified Body reading. The default for this bucket is hybrid. The draft is produced in-house; an expert reviews it at named checkpoints and catches the errors nobody inside the company would see, because the inside view is the wrong vantage point for catching them.

Bucket 4 — Audit preparation. The work of getting a technical file ready for Notified Body review or an internal audit ready for an external certification audit. The default for this bucket is hybrid, leaning toward expert-led. A good consultant who has been through many of these audits can predict the reviewer questions and the weak spots in a file. A founding team that has never been through one cannot.

Each bucket has a default. Each task can override the default if there is a specific reason. The framework is the discipline of asking the question for every task, not the answer to the question.

What DIY works for

A disciplined founding team with reading skills and prior exposure to quality management can handle more than most consultants will admit. DIY works best when three conditions hold: the rules are written down clearly, the work is maintenance-heavy rather than decision-heavy, and the cost of a small error is a correction rather than a catastrophe.

Tasks that almost always sit safely in the DIY bucket:

  • Procedural QMS documents once a competent baseline structure exists — document control, training, supplier evaluation, corrective action.
  • Training records and internal training delivery. Nobody knows your processes better than your own team.
  • Internal audit execution at the level of "are we doing what we said we would do?"
  • Vendor files, purchasing records, and supplier assessments for routine suppliers.
  • Basic risk management documentation under competent supervision.
  • First drafts of literature searches for clinical evaluation input.
  • Post-market surveillance data collection infrastructure, once the PMS plan has been correctly designed upstream.

DIY does not mean "do it alone, once, and never revisit." It means the work lives inside the company and is done by people who will still be there in two years. That is the correct long-term shape of a competent MedTech organisation. No external party can run your QMS for you forever, and if one tries to, that is a signal to walk away. See Hiring Regulatory Affairs in a Startup for how to structure this internally.

What requires experts

Some work is the opposite. The rules are written down, but reading them correctly requires judgment built across many devices, many audits, and many failure modes. A small error at the start compounds into a large error at the end. These are the tasks where DIY is a false economy for almost every startup.

  • Intended purpose definition. The single most leveraged sentence in your entire regulatory file. Every downstream document is built on it.
  • Classification under Annex VIII. Not a lookup table. The rules involve interpretation, and the interpretation depends on experience with how Notified Bodies actually read them.
  • Clinical evidence strategy. The difference between literature, equivalence, and full clinical investigation is hundreds of thousands of euros and a year or more.
  • Notified Body selection and first engagement. The Notified Body market is tight, queue times are real, and the first impression shapes every audit that follows.
  • PRRC arrangement under MDR Article 15. The Person Responsible for Regulatory Compliance is a legal role with specific qualification criteria under Article 15(1). Micro and small enterprises can use an external PRRC arrangement under Article 15(2). Getting the structure wrong is legal exposure, not just regulatory exposure. See PRRC and MDR Article 15, PRRC Options for Startups, and PRRC: Hiring, Outsourcing, and Training in a Startup.
  • Technical file architecture. Annex II tells you what must be in the file. Experience tells you how to organise it so a specific Notified Body will read it in a specific way.

The underlying rule: if the decision is hard to reverse, get expert input before you make it.

Hybrid arrangements

Most startups should not be choosing between pure DIY and full outsourcing. They should be designing a hybrid. The hybrid is where the framework earns its keep — it allocates each task to the cheapest competent party, keeps ownership inside the company, and spends expert time where the leverage is highest.

Three hybrid patterns we see working:

Pattern 1 — Internal work, expert review at checkpoints. The founding team drafts everything. An expert reviews the intended purpose, the classification rationale, the clinical evidence strategy, and the technical file architecture at named milestones. The team owns the work; the expert catches the errors. Costs are bounded because the expert hours are scoped to review, not production.

Pattern 2 — Expert-led strategic phase, internal execution phase. The expert sits with the founders through the strategic decisions — Bucket 1 — and then steps back. The internal team handles drafting, maintenance, and operations. The expert comes back for audit preparation and the first Notified Body engagement. This pattern fits founders who have a capable internal hire who needs a senior sparring partner for the high-stakes moments.

Pattern 3 — External PRRC plus internal operations. The PRRC role is covered externally under MDR Article 15(2) by a competent external person who reads the file seriously and flags problems early. The rest of the work runs in-house. This is a common and defensible structure for micro and small enterprises and is often the best fit for an early-stage Class I or Class IIa startup. The trap is an Article 15(2) arrangement that exists only on paper — a name without engagement. See Building a QA/RA Quality Team in a Startup.

In every hybrid, the ownership of the work stays inside the company. The expert is a sparring partner, not a substitute.

The evaluation matrix

For each task in your regulatory plan, score two dimensions on a 1–5 scale and read off the bucket.

Dimension A — Internal competence. How confident are you that the person inside the company assigned to this task can do it correctly? A 5 means a co-founder or hire who has done this exact work on a previous device that reached market. A 1 means nobody on the team has ever seen this task done before.

Dimension B — Cost of getting it wrong. What is the downstream cost of an error on this specific task? A 5 means months of rework, a failed Notified Body submission, or worse. A 1 means a correction in the next version of a document.

Read the matrix as follows. If Dimension A is 4 or 5 and Dimension B is 1 or 2, DIY. If Dimension A is 1 or 2 and Dimension B is 4 or 5, expert. Everything else is hybrid — the work gets done inside with external review at named checkpoints, or the expert is on call for questions during the drafting.
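The reading rule is mechanical enough to write down as a tiny function. A minimal sketch in Python — the function name and score names are our own, not part of any formal method:

```python
def allocate(competence: int, error_cost: int) -> str:
    """Map the two 1-5 matrix scores to a bucket default.

    competence -- Dimension A: internal competence
                  (1 = nobody has seen this task done, 5 = done before on a marketed device)
    error_cost -- Dimension B: cost of getting it wrong
                  (1 = a correction in the next document version, 5 = months of rework or worse)
    """
    if not (1 <= competence <= 5 and 1 <= error_cost <= 5):
        raise ValueError("scores must be on the 1-5 scale")
    if competence >= 4 and error_cost <= 2:
        return "DIY"
    if competence <= 2 and error_cost >= 4:
        return "expert"
    # Everything else: in-house work with external review at named checkpoints,
    # or an expert on call during drafting.
    return "hybrid"

# A routine training SOP drafted by an experienced hire:
print(allocate(5, 1))  # DIY
# First-time classification under Annex VIII:
print(allocate(1, 5))  # expert
```

Note that the rule is deliberately asymmetric: high competence alone is not enough for DIY when the error cost is high, which is exactly why Bucket 1 defaults to expert input even for confident teams.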

The matrix is crude on purpose. Sophistication is not the point; the point is forcing founders to actually score the tasks instead of bundling them into one gut decision.

Common mistakes

A few patterns we have seen repeatedly.

Treating the decision as binary. The single biggest mistake. The question is not "do we hire a consultant?" but "which tasks need one?"

The fake expert sentence. There is an Austrian company we came across where the founders had been telling every investor and every incoming auditor the same sentence: "We have a dedicated expert handling regulatory." The sentence did its job. Investors relaxed. Partners relaxed. Nobody independently verified what the expert was actually producing. The person turned out to be beginner-level — fluent in the vocabulary, short on the regulation. QMS documents looked right on the outside. Under the surface, intended purpose was inconsistent between documents, classification rationale pointed at the wrong Annex VIII rule, the clinical evaluation strategy would not have survived a serious review, and the risk file was cosmetic. Nobody caught it until a real audit hit. Rebuilding cost a multiple of what getting it right would have cost, and they were lucky the gap was found before it reached a patient. The framework in this post exists because of cases like that one.

Outsourcing decisions, owning drafting. Superficially the right allocation, fatally missing the ownership. Founders who pay a consultant to make the strategic decisions for them, rather than with them, end up with documents that describe decisions the founders do not actually understand. When the auditor asks why, nobody in the room can answer.

Owning decisions, outsourcing drafting. The inverse of the right allocation. The founders make the strategic decisions with no expert input, and then pay a senior consultant rate to type them up. The expensive work is the wrong work.

Hybrid on paper only. Naming an expert who reviews nothing, or retaining a consultant who sends invoices but never looks at the actual file. The arrangement looks real from the outside. It is not.

No scoring. Allocating tasks by gut rather than by the matrix. Gut allocation tends to put everything in whichever bucket feels safest at the moment — usually DIY when money is tight and outsourced when a deadline is close. Neither is the right answer at the task level.

The Subtract to Ship angle

The Subtract to Ship framework applies to the DIY vs experts decision the same way it applies to every other MDR decision. Start with the full list of regulatory tasks between here and your next milestone. For each one, score Dimension A and Dimension B. Then ask the Subtract to Ship question: does this task trace to a specific MDR article, annex, or harmonised standard? If not, cut it before allocating it at all.

What remains after subtraction is the work that has to get done. That is the list you run the evaluation matrix on. The matrix then tells you where the money should go. The result is an allocation where DIY covers the drafting, experts cover the strategic decisions, and hybrid arrangements cover everything else — and nothing is paid for that should not exist in the first place.
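The subtract-then-allocate flow can be sketched the same way. A self-contained illustration — the `Task` fields and the traceability check are our own simplification, not a prescribed data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    traces_to: Optional[str]  # MDR article, annex, or harmonised standard; None if untraceable
    competence: int           # Dimension A, 1-5
    error_cost: int           # Dimension B, 1-5

def allocate(competence: int, error_cost: int) -> str:
    """The same matrix rule as above: DIY, expert, or hybrid."""
    if competence >= 4 and error_cost <= 2:
        return "DIY"
    if competence <= 2 and error_cost >= 4:
        return "expert"
    return "hybrid"

def plan(tasks: list[Task]) -> dict[str, str]:
    # Subtract first: a task that traces to no requirement is cut, not allocated.
    kept = [t for t in tasks if t.traces_to]
    return {t.name: allocate(t.competence, t.error_cost) for t in kept}

budget = plan([
    Task("training SOP", "MDR Annex IX QMS requirements", 5, 1),
    Task("nice-to-have process doc", None, 3, 3),           # cut before allocation
    Task("classification rationale", "MDR Annex VIII", 1, 5),
])
print(budget)  # {'training SOP': 'DIY', 'classification rationale': 'expert'}
```

The order matters: cutting before scoring means you never spend matrix time, or money, on work that should not exist at all.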

Subtraction and task-level allocation together produce a regulatory budget that is smaller than the default, better spent than the default, and more defensible than the default.

Reality Check — Where do you stand?

  1. Have you actually listed every regulatory task between now and your next milestone, or do you have a vague sense of "regulatory work"?
  2. For each task, can you score Dimension A (internal competence) honestly, without inflating the number to feel better?
  3. For each task, can you score Dimension B (cost of getting it wrong) honestly, without deflating the number to save money?
  4. How many of the Bucket 1 strategic decisions — intended purpose, classification, clinical evidence, Notified Body, PRRC — have already been made without expert input?
  5. Is anyone on your team telling investors "we have a dedicated expert handling regulatory" about a person whose output has never been independently reviewed?
  6. If your current consultant disappeared tomorrow, could you list the specific decisions they have made and explain the rationale for each one?
  7. Does your regulatory budget match the matrix, or does it reflect whichever arrangement felt safest when the contract was signed?

Frequently Asked Questions

Is the DIY vs experts regulatory decision framework the same as choosing a consultant? No. Choosing a consultant is a downstream decision. The framework comes first — it tells you which tasks need a consultant at all. Only then do you pick one. See DIY vs. Hiring an MDR Regulatory Consultant for the consultant evaluation side.

Can a startup really do the drafting bucket in-house without regulatory experience? With reading skills, discipline, and at least one honest review pass from an experienced reader, yes. Drafting is a repeatable activity governed by rules you can read. The parts that are not repeatable — the strategic decisions behind the drafts — are what should be in the expert bucket.

What if my team has no one who can handle any of the four buckets competently? Then the first expert engagement is not a consultant. It is a hire — either a senior regulatory person who joins the team, or an external PRRC under MDR Article 15(2) who is engaged enough to catch errors. A startup with zero competent regulatory presence cannot run the framework at all.

How often should I rerun the framework? At every major milestone. The task list changes as the project moves, the competence of the team changes as people gain experience, and the cost of errors changes as the file matures. Static allocation is the enemy of good allocation.

Does this framework work for Class III devices? Yes, but the expert bucket is larger. For a Class III device, Buckets 1 and 4 expand, Bucket 2 shrinks, and Bucket 3 becomes effectively continuous rather than checkpoint-based. The framework still applies; the defaults shift with the risk class.

Is this post an ad for Zechmeister Strategic Solutions? No. The framework is meant to be used against any regulatory partner, including us. If the right answer for your specific situation is to keep more work in-house, or to hire a different consultant, we would rather you reach that answer well than hire us badly.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 15 (person responsible for regulatory compliance), Article 15(1) (qualification requirements), Article 15(2) (external PRRC arrangements for micro and small enterprises). Official Journal L 117, 5.5.2017.

This post is part of the Startup Strategy & PMF series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. The framework is meant to be used against any regulatory partner, including us. If it helps you allocate regulatory work well and spend less on help you do not need, it has done its job.