The seven most common cybersecurity mistakes MedTech startups make under MDR are: treating "not connected" as a complete defense, skipping the SBOM, running a one-time pentest instead of a lifecycle program, ignoring the GDPR overlap, not monitoring CVEs after launch, having no incident response plan, and shipping without a secure update mechanism. Each one has a cheap fix if caught early and an expensive fix if caught by a notified body.

By Tibor Zechmeister and Felix Lenhard.

TL;DR

  • MDR Annex I Sections 17.2 and 17.4 make IT security a General Safety and Performance Requirement, not an optional add-on.
  • MDCG 2019-16 Rev.1 and EN IEC 81001-5-1:2022 define the expected state of practice for secure medical device software.
  • The most common mistakes are lifecycle blind spots: teams ship a secure snapshot, then stop investing.
  • Every mistake below has been observed repeatedly across audits Tibor has led with startups bringing their first device through certification.
  • The fixes are all cheaper than the notified body findings that result from ignoring them.
  • Startups that treat cybersecurity as a continuous process, not a launch checklist, pass audits with fewer surprises.

Why these mistakes matter (Hook)

Cybersecurity is the area where Tibor sees the widest gap between what startups think they have done and what a notified body expects to see. Teams build smart wearables, connected diagnostics, and home-health devices with impressive clinical functionality, and then walk into a Stage 2 audit with a thin risk assessment, no software bill of materials, and a penetration test from eighteen months ago.

The pattern is remarkably consistent. Across the startups Tibor has audited over fifteen years, the same seven mistakes appear in the same form. Sometimes all seven in the same technical file. This post lists them in order of how often they trip founders up, with the fixes that Tibor and Felix walk teams through when they engage early enough to still have options.

This is the summary post for the Subtract to Ship cybersecurity cluster. Every mistake below is expanded in its own post. The goal here is pattern recognition: read the list, check the ones that apply to your own product, and then dig deeper where you need to.

What MDR actually says (Surface)

Two MDR provisions anchor everything that follows.

Annex I Section 17.2 requires that software be developed and manufactured in accordance with the state of the art, taking into account the principles of development lifecycle, risk management, including information security, verification, and validation.

Annex I Section 17.4 requires that manufacturers set out minimum requirements concerning hardware, IT network characteristics, and IT security measures, including protection against unauthorised access, necessary to run the software as intended.

Two documents translate those provisions into practice:

  • MDCG 2019-16 Rev.1 (December 2019, revised July 2020) on cybersecurity for medical devices. The authoritative guidance document interpreting the GSPRs for information security.
  • EN IEC 81001-5-1:2022 on health software security activities across the product lifecycle. The standard notified bodies reference when they want to see evidence that a secure development lifecycle is in place.

Risk integration happens through EN ISO 14971:2019+A11:2021. Cybersecurity risk is medical device risk. It belongs in the same risk management file as the electrical, mechanical, and usability hazards.

The seven mistakes (Test)

Mistake 1: Treating "not connected" as a complete defense

"It's not connected so we don't need cybersecurity" is the sentence Tibor hears most often. The claim holds only if the device has no wireless capability whatsoever. The moment the microcontroller supports Bluetooth or Wi-Fi, even if the radio is disabled in firmware, the manufacturer has to prove that the radio cannot be re-enabled by an attacker.

The fix. Draw a clean boundary. If the device genuinely has no wireless hardware, document it and move on. If the device has the hardware but the radio is disabled, run a threat model that covers the "attacker re-enables the radio" path. Either outcome is acceptable. Waving the question away is not.

Mistake 2: Skipping the software bill of materials

The SBOM is the document most commonly missing at first notified body engagement. Founders treat it as a separate artefact to be produced later. In reality, a good SBOM follows naturally from the configuration item list required by EN 62304. Libraries, SOUP components, versions, all tracked. Updated on every software change.

The fix. Do not build the SBOM as a standalone deliverable. Build it as a view of the 62304 configuration management record that the team already maintains. One source of truth. Regenerated on every release.
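One way to make "a view of the configuration record" concrete: generate the SBOM from the configuration-item list at release time instead of editing it by hand. The sketch below assumes a hypothetical in-memory config record and emits a minimal CycloneDX-style JSON document; the field names of the config items are illustrative, not an EN 62304 requirement.

```python
import json

# Hypothetical EN 62304 configuration-item records (fields are illustrative).
config_items = [
    {"name": "mbedtls", "version": "3.5.2", "type": "SOUP"},
    {"name": "freertos-kernel", "version": "10.6.1", "type": "SOUP"},
    {"name": "device-app", "version": "1.4.0", "type": "in-house"},
]

def sbom_from_config_items(items, release):
    """Render the config record as a minimal CycloneDX-style SBOM.

    Because the SBOM is derived, not maintained, it cannot drift from
    the configuration record: regenerate it on every release.
    """
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "metadata": {"component": {"name": "device-firmware", "version": release}},
        "components": [
            {"type": "library", "name": i["name"], "version": i["version"]}
            for i in items
        ],
    }

sbom = sbom_from_config_items(config_items, release="1.4.0")
print(json.dumps(sbom, indent=2))
```

In practice a build-system plugin or a tool such as a CycloneDX generator does this job; the point is the direction of the data flow, from the configuration record to the SBOM, never the reverse.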

Mistake 3: One-time penetration test instead of a lifecycle program

Startups with a limited budget often do one penetration test the month before submission, hand the report to the notified body, and consider cybersecurity "done." The test itself is valuable. A one-time test as the entire cybersecurity story is not.

Tibor has seen credible pentest reports save startups whose internal processes were thin; pentesting is rarely wasted money, because it provides external proof where internal evidence is weak. But notified bodies want to see that the activity is scheduled to repeat, tied to the post-market surveillance plan, and triggered by significant software changes.

The fix. Put penetration testing on a schedule. Annual at minimum, plus re-test on significant software change, plus re-test after any high-severity CVE affecting a component in the SBOM. Write the trigger logic into the PMS plan so the notified body can see it.
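The trigger logic described above is simple enough to state as a function. This is an illustrative sketch of what a PMS plan might encode, with the three triggers from the paragraph; the annual cadence and the input flags are assumptions, not prescribed values.

```python
from datetime import date, timedelta

def pentest_due(last_test: date, today: date,
                significant_change: bool,
                high_severity_cve_in_sbom: bool) -> bool:
    """Illustrative re-test trigger logic: annual cadence, significant
    software change, or a high-severity CVE in an SBOM component."""
    annual_due = today - last_test >= timedelta(days=365)
    return annual_due or significant_change or high_severity_cve_in_sbom

# A test done 14 months ago is overdue on cadence alone.
print(pentest_due(date(2024, 1, 10), date(2025, 3, 15), False, False))
```

Writing the logic this explicitly, even in prose, is what lets the notified body verify that the triggers exist rather than taking the team's word for it.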

Mistake 4: Forgetting the GDPR overlap

Founders design software that processes personally identifiable information, then forget that the GDPR applies alongside MDR Annex I. Tibor frames it less as a conflict than as an oversight: the personal data is missed in the privacy policy, and missed again in the cybersecurity risk file as "data worth protecting." The fix is awareness, not legal complexity.

The fix. Map every data flow once. Classify each data element as "personal data under GDPR," "clinical data under MDR," or both. Feed the classification into the threat model so that the confidentiality, integrity, and availability analysis covers the GDPR-relevant flows explicitly. One spreadsheet, one afternoon, finding closed.
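The "one spreadsheet" can literally be a one-pass classification. The sketch below shows the idea with invented example data elements; the element names and the two boolean flags are illustrative, not a prescribed taxonomy.

```python
# Illustrative one-pass data-flow classification. Each element is tagged
# as personal data (GDPR), clinical data (MDR), or both, and the result
# feeds the confidentiality/integrity/availability analysis in the
# threat model. Example data only.
data_flows = [
    {"element": "patient name", "personal": True, "clinical": False},
    {"element": "ECG waveform", "personal": True, "clinical": True},
    {"element": "device serial number", "personal": False, "clinical": False},
]

def classify(flow):
    if flow["personal"] and flow["clinical"]:
        return "GDPR + MDR"
    if flow["personal"]:
        return "GDPR"
    if flow["clinical"]:
        return "MDR"
    return "unclassified"

classified = {f["element"]: classify(f) for f in data_flows}
print(classified)
```

Anything left "unclassified" is worth a second look: it is either genuinely out of scope or a flow nobody has thought about yet.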

Mistake 5: No post-market CVE monitoring

A device that is secure at launch can become insecure three months later when a library in the SBOM publishes a CVE. Tibor recalls a case where a library used in a medical device had a publicly exploited vulnerability, and it took the manufacturer several weeks to notice. No patient harm resulted in that case, but the window was open long enough that something could have happened.

The fix. Subscribe to vulnerability feeds for every component in the SBOM. GitHub Dependabot, the NVD feed, vendor mailing lists, whatever fits the stack. Set a service level: time from CVE publication to triage, time from triage to decision, time from decision to patch. Write those numbers into the PMS plan. The notified body will read them.
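The service levels above are just clocks that start at CVE publication. A minimal sketch, assuming placeholder SLA values (the 24-hour/72-hour/30-day numbers are illustrative, not figures from MDR or any guidance document):

```python
from datetime import datetime, timedelta

# Illustrative SLA stages, each measured from CVE publication time.
# The durations are placeholders a PMS plan would define for itself.
SLA = {
    "triage": timedelta(hours=24),     # publication -> triage complete
    "decision": timedelta(hours=72),   # publication -> patch/no-patch decision
    "patch": timedelta(days=30),       # publication -> patch released
}

def sla_deadlines(published: datetime) -> dict:
    """Compute the calendar deadline for each SLA stage."""
    return {stage: published + delta for stage, delta in SLA.items()}

deadlines = sla_deadlines(datetime(2025, 6, 1, 9, 0))
print(deadlines["triage"])   # 2025-06-02 09:00:00
```

Whatever numbers the team commits to, the point is that they are written down in the PMS plan and measurable after the fact, not reconstructed from memory during an audit.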

Mistake 6: No incident response plan

If a vulnerability is discovered, a patient is harmed, or a journalist calls, what happens in the first twenty-four hours? Most startups cannot answer. They assume the team will figure it out in the moment. MDR Article 87 vigilance reporting deadlines do not care whether the team figured it out in the moment.

The fix. Write the plan before the incident. Name the decision-maker. Document the escalation path. Pre-draft the templates for notifying users, patients, and the competent authority. Practice the plan at least once a year. Tibor treats an unpractised incident response plan as equivalent to no plan at all.

Mistake 7: No secure update mechanism

Shipping a connected medical device without a way to deliver signed, verified, rollback-capable updates is shipping a device that cannot respond to Mistake 5. When the CVE hits, the team has no way to close the window.

The fix. Design the update mechanism into the architecture from day one. Signed firmware. Verified on boot. Rollback on failure. Tied to the change control process so that every update is traceable to the configuration management record and the risk file. Retrofitting this capability after certification is a change control nightmare with notified body re-engagement.
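The three properties named above, signature verification, anti-rollback, and revert-on-failure, fit in a short sketch. Everything here is illustrative: HMAC-SHA256 stands in for the asymmetric firmware signature a real device would verify against a public key anchored in the bootloader, and the class and field names are invented for the example.

```python
import hashlib
import hmac

# Stand-in signing key. A production device would verify an asymmetric
# signature (e.g. ECDSA or Ed25519) against a key it cannot modify.
SIGNING_KEY = b"demo-key-not-for-production"

def sign(firmware: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, firmware, hashlib.sha256).digest()

class Device:
    def __init__(self, firmware: bytes, version: int):
        self.firmware, self.version = firmware, version

    def apply_update(self, blob: bytes, signature: bytes, new_version: int) -> bool:
        if not hmac.compare_digest(sign(blob), signature):
            return False                # reject tampered or unsigned images
        if new_version <= self.version:
            return False                # anti-rollback: version must increase
        previous = (self.firmware, self.version)
        self.firmware, self.version = blob, new_version
        if not self.self_test():
            # Revert to the known-good image if the new one fails its checks.
            self.firmware, self.version = previous
            return False
        return True

    def self_test(self) -> bool:
        return bool(self.firmware)      # placeholder post-update health check

dev = Device(b"fw-v1", 1)
ok = dev.apply_update(b"fw-v2", sign(b"fw-v2"), 2)
print(ok, dev.version)   # True 2
```

Note that the anti-rollback check and the revert-on-failure path pull in opposite directions: reverting restores an older image deliberately, under the device's own control, which is exactly why both need to be designed together rather than bolted on later.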

The Subtract to Ship cybersecurity playbook (Ship)

Felix frames the fix set as a single Subtract to Ship principle: subtract the documents you do not need, ship the evidence your notified body actually reads. For cybersecurity that means six artefacts, all of them living:

  1. Threat model. One document, updated on every architectural change, traceable to the ISO 14971 risk file.
  2. SBOM. A view of the EN 62304 configuration management record, regenerated on every release.
  3. Security requirements specification. Aligned with EN IEC 81001-5-1:2022 clause structure so a notified body can check it in an afternoon.
  4. Verification evidence. Unit, integration, and system-level security tests in the same framework as functional tests. Plus the pentest report, dated within the last year.
  5. Post-market cybersecurity plan. CVE monitoring, patch service levels, update cadence, all tied to MDR Article 83 PMS and MDR Article 87 vigilance.
  6. Incident response plan. Practised, dated, signed.

If the team builds these six artefacts as a natural output of the development process, the notified body audit for cybersecurity becomes an evidence review rather than an archaeology dig.

Reality Check

Answer honestly. Every "no" is a finding waiting to happen.

  1. Does your threat model cover every wireless capability the hardware supports, including disabled radios?
  2. Is your SBOM generated from the same source as your EN 62304 configuration management record, or is it a separate document that drifts?
  3. When was your last penetration test, and when is the next one scheduled?
  4. Does your cybersecurity risk file explicitly list personal data under GDPR as an asset to be protected?
  5. If a high-severity CVE dropped today on a library in your SBOM, how long until it is triaged, decided, and patched?
  6. Is your incident response plan written down, and when was it last practised?
  7. Does your device support signed, verified, rollback-capable updates delivered through a controlled channel?

If any answer is "no" or "we will figure that out later," that item is the cheapest cybersecurity investment available to the startup right now.

Frequently Asked Questions

We are a Class I self-certified device. Do these mistakes still apply? Yes. Annex I Section 17 applies to all device classes that incorporate programmable electronic systems. Self-certification does not remove the obligation. It removes the notified body review of whether the obligation was met, which means the competent authority may find the gap later during market surveillance.

Is a SOC 2 report enough to cover MDR cybersecurity? No. SOC 2 is an organisational control framework. It is useful as input, but it does not replace device-specific threat modelling, the SBOM, the security lifecycle per EN IEC 81001-5-1:2022, or the integration into the MDR risk file.

How often should we update the threat model? On every architectural change, every significant software change, every new connectivity pathway, and at least once per year as part of the PMS review. Tibor treats a threat model older than twelve months as a red flag during audits.

Does the US FDA cybersecurity framework cover EU requirements? Partially. The frameworks overlap heavily, but MDCG 2019-16 Rev.1 and FDA Section 524B diverge on specific documentation expectations. A dual-market startup needs to map both and build to the superset.

What happens if the notified body finds one of these mistakes at Stage 2? It becomes a major or minor nonconformity depending on severity. A missing SBOM is usually a major. A stale pentest is usually a minor. A missing incident response plan for a connected device can be a major. All of them delay certification.

Sources

  1. Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I Section 17.2, Section 17.4. Articles 83, 87.
  2. MDCG 2019-16 Rev.1, Guidance on Cybersecurity for medical devices (December 2019, revised July 2020).
  3. EN IEC 81001-5-1:2022, Health software and health IT systems safety, effectiveness and security. Part 5-1: Security. Activities in the product life cycle.
  4. EN 62304:2006+A1:2015, Medical device software. Software life cycle processes.
  5. EN ISO 14971:2019+A11:2021, Medical devices. Application of risk management to medical devices.
  6. Regulation (EU) 2016/679, General Data Protection Regulation.