Electrical safety testing for medical devices is the test campaign that demonstrates your device meets the basic safety and essential performance requirements of EN 60601-1:2006+A1+A12+A2+A13:2024, which in turn gives you presumption of conformity with MDR Annex I Sections 14 and 17. The testing itself is done at a certified test lab, not in your office. But the work that decides whether the campaign passes or fails is the work you do before the lab ever sees the device: the risk file, the essential performance definition, the standards scoping, the pre-compliance sweep, and the documentation pack you bring with you. Startups that treat the test lab as a black box routinely burn two to three times the budget of startups that walk in prepared.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • Electrical safety testing is done at an accredited test lab against EN 60601-1:2006+A1+A12+A2+A13:2024 and whatever collaterals and particular standards apply to your device.
  • The lab executes the test plan. The lab does not write the test plan for you, does not define your essential performance, and does not debug your design.
  • The readiness work before the lab (risk file, standards scoping, essential performance in writing, pre-compliance sweep, documentation pack) is what determines whether the campaign passes or turns into a multi-iteration spend.
  • A realistic test sequence covers electrical, thermal, mechanical, and single-fault categories, with cross-references out to the EMC collateral and any applicable particular standard.
  • Costs and timelines scale with the number of iterations. One clean iteration after honest pre-compliance is far cheaper than three iterations at an accredited facility.
  • Design changes after the test report can trigger partial or full retest. The change control process in your QMS, driven by the risk management file, decides the scope.

Why the test lab is the wrong place to start

Every experienced test lab has a version of the same story. A founder calls, asks for "a 60601 test," books a week of lab time, and arrives with a device the engineers have never seen written up. No risk file open on screen. No essential performance defined. No idea which collaterals apply. No pre-compliance data. By the end of day one, the lab is quoting a redesign. By the end of week one, the founder is rebooking the lab for three months later and calling the board to explain.

The lesson is not "test labs are hard." The lesson is that electrical safety testing is the output of a design and documentation process, not the input. The lab can only verify what you bring. If what you bring is incomplete, the lab cannot make it complete for you. And it is not the lab's job to try.

This post is the practical overview of what a startup should know before the lab is booked. It sits downstream of the hub post on MDR electrical safety requirements and the post on basic safety and essential performance, and upstream of the deeper posts on specific failure modes, cost structure, and test documentation. Read this one to get oriented, then drop into the spokes for the details.

What the test lab actually tests

A certified electrical safety test lab executes the clauses of EN 60601-1:2006+A1+A12+A2+A13:2024 that apply to your device, plus the clauses of any collateral standards in scope (EMC, alarm systems, home healthcare environment, and others depending on the device), plus any particular standard for your device category. The lab does not decide which of those apply. You decide that, in writing, before you arrive.

The test categories the general standard covers include protection against electric shock under normal and single-fault conditions, protection against mechanical hazards, protection against unwanted or excessive radiation, protection against excessive temperatures and other hazards, accuracy of controls and instruments, hazardous situations and fault conditions, programmable electrical medical systems, construction of ME equipment, and requirements for ME systems. Each category has its own set of clauses, its own measurement setups, and its own acceptance criteria.

The thing the lab does not test is your interpretation. If your documentation says the device is a Type BF applied part, the lab tests it as Type BF. If your documentation does not define essential performance, the lab cannot produce a valid pass or fail decision on any test whose acceptance criterion depends on essential performance. The lab executes; you specify.

Pre-test readiness. What has to be true before you book

A clean test lab visit is the output of a readiness checklist, not a hope. The minimum pre-test readiness state for a startup looks like this.

The risk management file under EN ISO 14971:2019+A11:2021 is open and live. Hazards are identified, risk controls are in place, residual risks are evaluated, and the file is internally consistent with the design that will be tested. The standard references the risk file repeatedly; without one, the lab cannot close out many clauses.

Essential performance is defined in writing. The specific clinical functions, the numerical limits, and the pass/fail thresholds are on paper, tied to the risk file, and specific enough that a lab engineer can verify them without asking you questions. Post 504 walks through what this means in practice.

The applicable standards are scoped and justified. A written list names the general standard, every collateral in scope and why, every particular standard in scope and why, and every collateral or particular explicitly excluded with a justification. The Notified Body will want this list in the technical file. The lab will want it in the test plan.

Pre-compliance testing has been done. Not a full accredited campaign. A cheap screening at a pre-compliance facility or in-house, covering the highest-risk categories. Typically basic dielectric, leakage, and a radiated emissions pre-scan. The goal is to find the big failures while they are cheap to fix.

The device under test is frozen. The hardware revision, the firmware build, and the configuration you bring to the lab are the same as what will ship. Changes after the test campaign trigger retest. Changes during the campaign are worse.

The documentation pack is assembled. Technical documentation, schematics, layouts, risk file extracts, applied part classification, essential performance definition, standards scoping, prior pre-compliance data. Assembled, printed or on a tablet, ready to hand the lab engineer on day one.
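
Two of the checklist items above, the standards scoping and the essential performance definition, are easier to keep internally consistent if they live as structured data rather than scattered prose, because the test plan is derived from them. A minimal sketch of that idea in Python; the standard names, limits, and reference IDs are illustrative examples, not a template from the standard:

```python
from dataclasses import dataclass

@dataclass
class StandardScope:
    """One entry in the standards scoping document."""
    standard: str
    in_scope: bool
    justification: str  # required for inclusions AND exclusions

@dataclass
class EssentialPerformance:
    """One essential performance claim with a lab-verifiable limit."""
    function: str
    limit: str          # numerical, verifiable without asking the manufacturer
    risk_file_ref: str  # traceability back to the EN ISO 14971 risk file

scoping = [
    StandardScope("EN 60601-1 (general)", True, "Active mains-powered device"),
    StandardScope("EN 60601-1-2 (EMC collateral)", True, "Electrical device, clinical environment"),
    StandardScope("EN 60601-1-8 (alarms)", False, "Device has no alarm system"),
]

ep = [EssentialPerformance(
    function="Infusion rate accuracy",            # hypothetical example device
    limit="within 5 % of set rate, 1-999 ml/h",   # hypothetical limit
    risk_file_ref="RMF-HAZ-012",                  # hypothetical risk file ID
)]

# Every scoping entry, included or excluded, must carry a justification.
missing = [s.standard for s in scoping if not s.justification]
assert not missing, f"Unjustified scoping decisions: {missing}"
```

The point of the structure is the invariant at the end: a Notified Body reviewer looks for exactly that property, a justification on every inclusion and every exclusion.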

The test sequence. What the week looks like

Test sequencing varies by lab and by device, but a realistic campaign for a Class IIa or Class IIb active device with a mains connection follows a recognisable shape.

Day one. Intake and inspection. The lab engineer reviews the documentation pack, confirms the applied part classification and the Means of Protection scheme, inspects the device physically for construction-level clauses (creepage, clearance, markings, labelling, mains connection integrity), and confirms the essential performance definition. Any ambiguity here stops the clock.

Dielectric and insulation tests. Hipot tests at the specified voltages across each isolation barrier, insulation resistance, protective earth continuity. These are relatively fast if the design holds and slow if it does not.
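
The hipot voltage for each barrier follows from the working voltage and the Means of Protection scheme. The figures below are the commonly cited values for mains working voltage around 250 V and are illustrative only; the binding numbers come from the dielectric strength tables in the standard itself:

```python
# Illustrative hipot test voltages (V rms) per isolation barrier, keyed
# by Means of Protection scheme. Commonly cited figures for ~250 V
# working voltage -- always take the real values from the standard's
# dielectric strength tables for your actual working voltage.
HIPOT_V_RMS = {
    "1 MOOP": 1500,
    "2 MOOP": 3000,
    "1 MOPP": 1500,
    "2 MOPP": 4000,  # e.g. mains to a Type BF applied part
}

def required_test_voltage(scheme: str) -> int:
    """Look up the illustrative dielectric test voltage for a barrier."""
    return HIPOT_V_RMS[scheme]

# A mains-to-applied-part barrier on a patient-connected device is
# typically specified as 2 MOPP.
print(required_test_voltage("2 MOPP"))  # → 4000
```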

Leakage current measurements. Earth leakage, touch current, patient leakage current under normal and single-fault conditions, at 110 percent of rated mains voltage, in every applicable polarity and supply condition.
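
Because every leakage type is measured in every combination of polarity, supply condition, and fault state, the measurement matrix multiplies quickly. A sketch of how the conditions combine; the limits shown are the widely cited Type BF patient leakage figures (AC), included for illustration, and the real limits must come from the standard's tables:

```python
from itertools import product

# Illustrative leakage limits in microamps for a Type BF applied part
# (AC patient leakage). Widely cited figures; verify against the
# standard's tables for your device and leakage type.
LIMIT_UA = {"normal": 100, "single_fault": 500}

polarities = ["normal", "reversed"]
supply = ["110%_rated_mains"]  # measured at 110 % of rated mains voltage
faults = ["none", "earth_open", "one_supply_conductor_open"]

# One measurement per combination, each with its own pass/fail limit.
plan = [
    {
        "polarity": pol,
        "supply": sup,
        "fault": fault,
        "limit_uA": LIMIT_UA["normal"] if fault == "none"
                    else LIMIT_UA["single_fault"],
    }
    for pol, sup, fault in product(polarities, supply, faults)
]
print(len(plan))  # → 6 conditions, per leakage type, per applied part
```

Multiply those six conditions by the number of leakage types and applied parts and the reason leakage takes real bench time becomes obvious.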

Thermal measurements. Temperature rise on enclosure surfaces, applied parts, operator-accessible surfaces, and internal components, under the worst-case operating mode defined by the manufacturer. Thermal measurement runs long because steady state takes time to reach.
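
Why thermal runs long: a surface approaches its final temperature roughly as a first-order system, so the wait scales with the thermal time constant, and the lab must hold each worst-case mode until the rise has settled. A rough sketch of the arithmetic; the 20-minute time constant is a made-up illustrative value:

```python
import math

def settle_time(tau_min: float, fraction: float = 0.99) -> float:
    """Minutes until a first-order temperature rise reaches `fraction`
    of its steady-state value: T(t)/T_ss = 1 - exp(-t/tau)."""
    return -tau_min * math.log(1.0 - fraction)

# With an illustrative 20-minute thermal time constant, reaching 99 %
# of the final rise takes about 92 minutes -- per operating mode,
# per measured surface.
print(round(settle_time(20.0)))  # → 92
```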

Mechanical tests. Enclosure strength, impact, stability for equipment on stands or trolleys, drop tests for portable equipment, moving-parts protection where relevant. Some of these tests damage the device permanently, which is why test labs often ask for two samples.

Single-fault and abnormal operation tests. The device is subjected to each single fault the risk file and the standard require: one insulation barrier shorted, one protective earth opened, one component failed. The lab verifies that basic safety is maintained under each fault.

Particular and collateral tests. If a particular standard applies (infusion, ECG, dialysis, ventilation, and so on) its additional clauses are executed. If the alarm system collateral IEC 60601-1-8 applies, alarm signalling, priorities, and reset behaviour are tested.

EMC testing under EN 60601-1-2:2015+A1:2021 is almost always a separate booking at a separate facility, because EMC labs need specific anechoic chambers and instrumentation. Post 511 covers the EMC campaign as its own topic. The general-standard lab and the EMC lab are sometimes co-located and sometimes not.

Cost and timeline. Realistic ranges for a startup

Test lab cost is not a single number. It depends on the device, the applicable standards, the number of samples, the complexity of the setups, the number of iterations, and whether the EMC campaign is bundled or separate. That said, here is the shape of the cost curve.

A single clean iteration for a small Class IIa active device (general standard, EMC collateral, one or no particular standard) at an accredited European lab runs in the low to mid tens of thousands of euros for the general-standard work, plus a separate similar-scale bill for EMC. A device that needs a particular standard adds lab days. A device that fails and has to re-test doubles or triples the bill per affected category.

Timelines track the same pattern. A clean campaign for a well-prepared device can run two to four weeks at the general-standard lab plus a similar window at the EMC lab. An unprepared campaign with multiple failures and redesign iterations can stretch over six months or more, because each iteration requires redesign, rework, new samples, and a new lab slot. Lab slots are not always available next week.

The cost post (506) gives deeper ranges and breaks down where the money actually goes. The rule of thumb for a startup: budget for one clean iteration, plus a contingency for partial retest on one or two categories. If you find yourself budgeting for two full iterations, your pre-compliance work is too shallow.
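
The iteration math behind that rule of thumb is simple enough to write down. The euro figures below are placeholders for illustration, not lab quotes:

```python
def campaign_cost(base_eur: float, iterations: int,
                  retest_fraction: float = 1.0) -> float:
    """Rough cost model: first iteration at full price, each further
    iteration re-buys the affected fraction of the campaign.
    All figures are illustrative placeholders, not lab quotes."""
    return base_eur + base_eur * retest_fraction * (iterations - 1)

base = 40_000  # placeholder for general-standard work at an accredited lab

print(campaign_cost(base, 1))        # one clean iteration    → 40000.0
print(campaign_cost(base, 2, 0.25))  # partial retest (~25 %) → 50000.0
print(campaign_cost(base, 3))        # two full re-runs       → 120000.0
```

The model ignores the redesign, rework, and new-sample costs between iterations, which in practice push the real multiplier higher still.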

Design changes that trigger retest

Once the test report is issued, it covers only the specific hardware revision, firmware build, and configuration that was tested. A change to the device can trigger a partial or full retest, and the QMS change control process, driven by the risk management file, is what decides the scope. A rough guide to what typically triggers what:

Changes that typically trigger a partial retest. A firmware change affecting alarm behaviour, dosing control, motor control, safety monitoring, or any function whose output is part of the essential performance definition. A component substitution on the mains side or in an isolation barrier. A mechanical change that affects enclosure strength, stability, or thermal path. A labelling change that affects the classification or the applied part marking.

Changes that typically do not trigger a retest. A pure cosmetic change that does not affect any tested clause. A firmware change to a non-safety, non-essential-performance function, for example a log format or a UI translation, where the change control justification clearly shows no test clause is affected.

Changes that trigger a full retest. A change to the applied part classification. A change to the isolation scheme or the Means of Protection count. A change to the power supply that alters the leakage or dielectric profile. A change that introduces a new hazard category not previously in scope.

The principle is simple. Every change goes through change control. Change control asks the risk file what the change affects. The risk file tells you which clauses of the standard depend on the affected function. Those clauses are the scope of the retest. Skipping this analysis and assuming "it was only a small change" is how teams end up with invalid test reports at Notified Body review.
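
The change-control lookup described above can be sketched directly: the change names the affected functions, the risk file maps functions to dependent clauses, and the union of those clauses is the retest scope. Everything here (function names, clause labels, the fallback behaviour) is illustrative; the real trace lives in your own risk file:

```python
# Illustrative risk-file trace: which test areas depend on which
# device function. Real clause references come from your own scoping
# and risk file, not from this sketch.
CLAUSES_BY_FUNCTION = {
    "alarm_signalling": {"60601-1-8 alarm clauses"},
    "dose_control": {"PEMS clauses", "essential performance verification"},
    "mains_isolation": {"dielectric", "leakage", "creepage/clearance"},
    "ui_translation": set(),  # non-safety, non-EP: no clause depends on it
}

def retest_scope(changed_functions: list[str]) -> set[str]:
    """Union of test areas depending on any changed function.
    An unknown function is treated conservatively: full retest."""
    scope: set[str] = set()
    for fn in changed_functions:
        scope |= CLAUSES_BY_FUNCTION.get(fn, {"full retest: unmapped function"})
    return scope

print(retest_scope(["ui_translation"]))   # empty set -> no retest needed
print(retest_scope(["mains_isolation"]))  # dielectric, leakage, creepage
```

Note the conservative default: a function the risk file has never mapped cannot be argued out of retest, which is exactly the failure mode of "it was only a small change."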

What to bring to the lab

A practical list. Bring more than you think you need. A printed copy and a digital copy.

  • Two or three samples of the device under test, all the same revision, all with the same firmware build, all configured identically.
  • The risk management file, or at least the extracts the lab will need (hazard list, essential performance derivation, single-fault analysis).
  • The essential performance definition, in writing, with numerical limits.
  • The standards scoping document. General standard, collaterals, particulars, exclusions with justification.
  • Schematics and PCB layouts, for the lab engineer's inspection of creepage, clearance, and insulation.
  • The applied part classification and the Means of Protection derivation.
  • Prior pre-compliance test data.
  • Declarations of conformity for critical components (power supply, transformer, isolation components).
  • The instructions for use and labelling as they will ship.
  • A spare set of fuses, a spare cable set, a programming adapter, and whatever else the device needs to be brought up on the bench.
  • A competent engineer from your team, on site, for at least the first day. Not a subcontractor, not a message group, a real human who can answer questions in the room.

Common failures and how they show up

Post 534 on common IEC 60601-1 test failures covers the top eight failure patterns in detail. At the overview level, the pattern is unvarying: every common failure is a design or documentation omission that survived to the test bench because no one on the team had read the relevant clauses early enough. Leakage exceeded, dielectric inadequate, creepage violation, EMC immunity crash, EMC emissions over the limit, thermal surface over the limit, mechanical strength failure, and alarm system non-conformity. Each one is cheaper to prevent at schematic stage than to fix at the lab.

The Subtract to Ship angle on electrical safety testing

The subtractive move here is not to test less than the Regulation requires. It is to test exactly the clauses that apply to your actual device, with a clean documentation pack and a clean pre-compliance sweep, so that one accredited-lab iteration is enough. Every additive mistake in this process (a collateral added "to be safe" that does not apply, a particular pulled in late without a scoping decision, a test plan written by the lab because the team did not write one) adds real money and real weeks. Every subtractive mistake (a missing essential performance definition, a skipped pre-compliance sweep, a risk file that was not open before the design was frozen) adds even more.

The MDR is the North Star. EN 60601-1:2006+A1+A12+A2+A13:2024 is the harmonised route. The certified test lab is the place where the route is verified. Do the readiness work so the verification is fast, and the cost curve stays on the cheap side of the page. For the broader methodology on how this thinking applies to the whole certification path, see post 065 on the Subtract to Ship framework for MDR compliance.

Reality Check. Where do you stand?

  1. Is your risk management file under EN ISO 14971:2019+A11:2021 open and live, and does it cover every hazard the device can produce?
  2. Do you have essential performance defined in writing, with numerical limits a lab engineer can verify without asking you questions?
  3. Do you have a written standards scoping document naming every standard in scope, every standard excluded, and the justification for each decision?
  4. Have you run pre-compliance testing on the highest-risk categories before booking the accredited lab?
  5. Are the hardware revision and firmware build you plan to bring to the lab the same revision and build that will ship, or do you still expect changes?
  6. Is your documentation pack assembled, printed, and ready for day one intake at the lab?
  7. If a change is made after the test report is issued, do you have a change control process that can tell you within a day which clauses need to be retested?

Frequently Asked Questions

Can I do electrical safety testing in my own office? No, not for the formal test report that feeds the technical file. The formal report must come from an accredited test lab whose accreditation covers the clauses tested. In-house work is valuable as pre-compliance screening. Dielectric pre-checks, leakage sanity checks, radiated emissions pre-scans. But it does not replace the accredited lab report that the Notified Body will expect to see.

How long does an electrical safety test campaign take for a startup device? For a well-prepared Class IIa active device with the general standard, EMC collateral, and at most one particular standard, a clean campaign typically runs two to four weeks at the general-standard lab plus a similar window at the EMC lab. Unprepared campaigns with multiple iterations can stretch over six months or more. Pre-compliance readiness is the single biggest factor in how long the campaign takes.

Do I need to be present at the test lab during the campaign? Yes, at least for intake on day one and for any day where ambiguity in the documentation is likely to come up. A competent engineer from your team on site answers questions in real time and prevents the clock from stopping while the lab waits for an email reply. Labs almost always run cheaper and faster with a manufacturer engineer in the room.

What is the difference between pre-compliance testing and accredited lab testing? Pre-compliance testing is the screening you run at a cheaper facility (or in-house where feasible) to find the big failures before the accredited lab campaign. It does not produce a formal test report and does not feed the technical file. Accredited lab testing is the formal campaign at a lab whose accreditation covers the clauses in scope, producing the test report that becomes evidence against MDR Annex I Sections 14 and 17.

Does a firmware update always trigger retest? No. A firmware update triggers retest only if the change affects clauses that were covered in the original report. A cosmetic change to a non-safety, non-essential-performance function typically does not trigger retest, provided the change control documentation shows the analysis. A change that affects alarms, dosing, motor control, or any essential performance parameter typically does trigger partial retest on the affected clauses.

Who writes the test plan, the lab or the manufacturer? The manufacturer writes the test plan, derived from the standards scoping document, the risk file, and the essential performance definition. The lab can review the plan, point out gaps, and suggest sequencing improvements. The lab does not write the plan from scratch, because the plan depends on decisions the manufacturer owns: applied part classification, essential performance definition, standards scoping, and single-fault analysis.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Annex I Chapter II, Section 14 (construction of devices and interaction with their environment) and Section 17 (electronic programmable systems). Official Journal L 117, 5.5.2017.
  2. EN 60601-1:2006+A1+A12+A2+A13:2024. Medical electrical equipment. Part 1: General requirements for basic safety and essential performance.

This post is part of the Electrical Safety & Systems Engineering Under MDR series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. The MDR is the North Star. EN 60601-1:2006+A1+A12+A2+A13:2024 is the harmonised route. The certified test lab is where the route is verified. And the readiness work you do before you arrive is what decides whether the verification is fast and cheap or slow and expensive.