---
title: Software Anomaly Management: Handling Known Bugs in Released Medical Software
description: EN 62304 lets you release medical software with known anomalies if they are documented and risk-evaluated. Here is how to handle them.
authors: Tibor Zechmeister, Felix Lenhard
category: Software as a Medical Device
primary_keyword: software anomaly management medical software
canonical_url: https://zechmeister-solutions.com/en/blog/software-anomaly-management
source: zechmeister-solutions.com
license: All rights reserved. Content may be cited with attribution and a link to the canonical URL.
---

# Software Anomaly Management: Handling Known Bugs in Released Medical Software

*By Tibor Zechmeister (EU MDR Expert, Notified Body Lead Auditor) and Felix Lenhard.*

> **EN 62304:2006+A1:2015 permits a manufacturer to release medical device software with known anomalies — residual bugs, unexpected behaviours, limitations — provided each anomaly is documented, evaluated against the risk management file, and shown not to compromise safety. Section 5.8 of the standard makes this explicit for the release activity, and Section 9 defines the problem resolution process that governs how anomalies are captured, investigated, and closed across the life of the product. MDR Annex I Section 17.2 is the regulatory obligation the whole process traces back to. The goal is not zero bugs at release — that is not achievable for any non-trivial software. The goal is a controlled, documented, risk-evaluated set of known anomalies that the manufacturer can defend to the Notified Body and communicate honestly to the users.**

**Last updated 10 April 2026.**

---

## TL;DR

- EN 62304:2006+A1:2015 Section 5.8 permits release of medical device software with known residual anomalies, subject to documentation and risk evaluation.
- Section 9 of the standard defines the software problem resolution process that captures, classifies, investigates, and closes anomalies across the life of the product.
- Each known anomaly must be evaluated against the EN ISO 14971:2019+A11:2021 risk file before release, and the evaluation recorded in the release documentation.
- MDR Annex I Section 17.2 is the regulatory obligation behind the whole activity — software must be developed and maintained under the state of the art, which includes a controlled approach to residual anomalies.
- The anomaly list is a living record. It grows with every new report from field use, shrinks when fixes ship, and is reviewed at every release gate.
- Communication to users — through the IFU, release notes, or known-issues pages — is not optional for anomalies that can affect how the device is used.
- The most common failure mode is treating bugs as "internal engineering concerns" instead of regulatory artefacts. Every anomaly on a released product is both.

---

## Why the known-anomaly conversation matters

There is a myth in MedTech software that a compliant release has no bugs. The myth is destructive because it pushes teams to choose between two equally bad options — delay the release until the backlog is empty, which never happens, or pretend the backlog does not exist, which breaks the moment an auditor asks to see it. Both options end the same way. The release either never happens or happens without the records to defend it.

EN 62304:2006+A1:2015 was written by people who understood how software actually works. A non-trivial code base always has residual anomalies at release. Some are cosmetic. Some are edge cases that will never fire in clinical use. Some are real limitations that the user needs to know about. The standard does not demand zero — it demands control. You are allowed to ship software with known bugs. You are required to know which bugs, to have evaluated each one against the risk file, to have documented the evaluation, and to have communicated what the user needs to know. Control is the ask. Perfection is not.

The startups that run this well ship on time with a clean set of records. The startups that do not run it well either miss their release window trying to reach zero, or ship in a rush and then cannot produce the anomaly list the Notified Body wants to see at the next surveillance audit. The difference is not engineering talent. The difference is whether anomaly management was treated as a lifecycle activity from day one or as an afterthought bolted on at the release gate.

MDR Annex I Section 17.2 requires that software be developed and manufactured in accordance with the state of the art, taking into account the principles of development life cycle, risk management, including information security, verification, and validation. (Regulation (EU) 2017/745, Annex I, Section 17.2.) Anomaly management sits inside the lifecycle and inside risk management. It is not a side activity.

## What counts as an anomaly under EN 62304:2006+A1:2015

The standard uses the term anomaly deliberately. An anomaly is any condition that deviates from expectations based on requirements, design specifications, reference documents, standards, or someone's reasonable expectation of how the software should behave. The definition is wider than "bug" and narrower than "anything a user dislikes." It captures functional defects, performance issues, usability problems that deviate from the specified behaviour, unexpected interactions with the operating environment, and deviations identified during verification or in the field.

What is not an anomaly: a feature the user wishes existed but was never specified, a change request that is genuinely new scope, a preference about colour or layout that has no specified requirement behind it. These are feature requests and they run through the development or maintenance process, not the anomaly process. Conflating the two clogs the anomaly list with noise and hides the real issues.

The practical test the standard implies: is there a requirement, a specification, a reference, or a documented expectation that the observed behaviour violates? If yes, it is an anomaly and it enters the problem resolution process under Section 9. If no, it is a feature request and it enters the maintenance planning process under clause 6. The two paths run side by side and share the same change control discipline, but the regulatory artefacts they produce are different.
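The routing test above can be sketched as a small helper. This is an illustration, not anything the standard prescribes: the field name `violated_reference` and the return labels are assumptions made for the sketch.

```python
def route_report(report: dict) -> str:
    """Apply the practical test: does the observed behaviour violate a
    requirement, specification, reference, or documented expectation?"""
    if report.get("violated_reference"):
        # A documented expectation is violated: this is an anomaly, and it
        # enters the Section 9 problem resolution process.
        return "problem-resolution"
    # No violated reference: this is a feature request, and it enters the
    # clause 6 maintenance planning process instead.
    return "maintenance-planning"
```

A report citing a violated requirement (`{"violated_reference": "SRS-102"}`) routes to problem resolution; a layout preference with no specification behind it routes to maintenance planning.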

## Release with known anomalies — what Section 5.8 actually allows

Section 5.8 of EN 62304:2006+A1:2015 is the software release activity. It is the gate between "software we are working on" and "software that is available for clinical use." The activity requires that before release, the manufacturer confirm that all planned activities have been performed, that the software has been verified against its requirements, that the configuration is identified and reproducible, and that known residual anomalies have been documented and evaluated.

The crucial phrase is "documented and evaluated." Section 5.8 does not require zero anomalies. It requires that any anomaly still present in the released software has been captured in the problem resolution records under Section 9, that the risk of leaving it unfixed has been assessed against the risk management file, and that the assessment concludes the anomaly does not compromise safety or the software's ability to meet its intended use. The assessment is documented. The release is authorised on the basis of the assessment. The anomaly ships with the software, under control.

This is what lets a medical software team actually release something. Without Section 5.8's explicit allowance, every open ticket would be a release blocker, and releases would never happen. With it, the team can make a reasoned decision for each anomaly — fix now, fix later, accept with documentation, accept with user communication — and ship when the decisions are made and recorded. The standard respects the reality that software is never done; it is just released or not released.

The release record for a version carries the list of known anomalies, each with its identifier, description, affected components, risk evaluation, decision, and — where relevant — the planned fix version. The list is part of the release documentation the Notified Body reviews at audit. Missing this list is one of the sharper findings in a software audit because it reveals that the team either does not know what anomalies exist or does not consider them part of the regulatory record.
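A gate check over that list can be sketched as follows. The field names (`id`, `risk_evaluation`, `decision`, and so on) are illustrative assumptions; Section 5.8 requires the substance of the record, not this format.

```python
# Mandatory fields for each known-anomaly entry in the release record
# (illustrative names, not prescribed by EN 62304).
REQUIRED_FIELDS = ("id", "description", "affected_versions",
                   "risk_evaluation", "decision")

def release_blockers(anomalies: list[dict]) -> list[str]:
    """Return the reasons the Section 5.8 gate cannot yet be passed."""
    blockers = []
    for anomaly in anomalies:
        missing = [f for f in REQUIRED_FIELDS if not anomaly.get(f)]
        if missing:
            # An anomaly without its documentation set blocks the release.
            blockers.append(
                f"{anomaly.get('id', '<no id>')}: missing {', '.join(missing)}")
        elif anomaly["decision"] == "fix-now":
            # A "fix now" decision means the anomaly must be resolved
            # before this version can be authorised.
            blockers.append(f"{anomaly['id']}: decided fix-now but still open")
    return blockers
```

An empty blocker list does not by itself authorise the release — the named authority does — but a non-empty list tells the team exactly which records are incomplete.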

## The documentation set for each anomaly

For every known anomaly at release, the documentation set has a small number of required elements. The exact format is flexible — the standard does not prescribe a template — but each element must be present and traceable.

**Identifier and description.** A stable ID that does not change across the life of the anomaly, and a description that a later reviewer can understand without access to the original reporter. "Bug 47" is not enough. "Date parsing fails for 29 February in non-Gregorian locales when the year is entered as two digits" is.

**Affected components and versions.** Which software items are involved, and which released versions carry the anomaly. If the anomaly exists in version 2.4 but was introduced in 2.2 and fixed in 2.5, all three facts belong in the record.

**Reproduction conditions.** What inputs, configurations, or sequences cause the anomaly to manifest. If the reproduction is intermittent, the record says so and describes the conditions under which it has been observed.

**Impact analysis.** What the anomaly does to the software's behaviour and, through the behaviour, to the user and the patient. This is where the link to the risk file is made explicit. Every anomaly is analysed for its contribution to identified hazards and for whether it could introduce a hazard not previously considered.

**Risk evaluation.** The formal output of pushing the anomaly through the EN ISO 14971:2019+A11:2021 risk management process. The evaluation concludes either that the residual risk is acceptable, that a risk control is required, or that the anomaly must be fixed before release.

**Decision and rationale.** Fix now, fix in a later version, accept with user communication, or accept without communication. The rationale is recorded and signed by the authority named in the software development plan.

**Planned resolution and target version, where applicable.** If the decision is to fix later, the record says in which version and why that timeline is acceptable.

The seven elements above, recorded consistently for every anomaly, are what the Notified Body reads at audit. A spreadsheet is enough if the fields are there and kept current. A full issue tracker is better if the workflow supports the required fields. What is not enough is a free-text list of bugs with no risk evaluation, because that is the exact artefact Section 5.8 exists to prevent.
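The seven elements can be held in a structure as simple as the sketch below. The class and field names are assumptions for illustration; the standard prescribes the content of the record, not a schema.

```python
from dataclasses import dataclass

@dataclass
class AnomalyRecord:
    """One known anomaly with the seven elements described above."""
    anomaly_id: str          # stable ID, unchanged across the life of the anomaly
    description: str         # understandable without the original reporter
    affected_items: list     # software items and released versions involved
    reproduction: str        # conditions under which the anomaly manifests
    impact_analysis: str     # effect on behaviour, user, and patient
    risk_evaluation: str     # outcome of the ISO 14971 evaluation
    decision: str            # "fix-now" / "fix-later" / "accept" / "accept-with-communication"
    target_version: str = ""  # required only when the decision is "fix-later"

    def is_release_ready(self) -> bool:
        """True when every mandatory element is filled in and a deferred
        fix names its target version."""
        mandatory = (self.anomaly_id, self.description, self.reproduction,
                     self.impact_analysis, self.risk_evaluation, self.decision)
        if not all(mandatory) or not self.affected_items:
            return False
        if self.decision == "fix-later" and not self.target_version:
            return False
        return True
```

Whether this lives in a dataclass, a tracker workflow, or a spreadsheet row is immaterial; what matters is that `is_release_ready`-style completeness is checked before the gate, not discovered at audit.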

## Risk evaluation — the link to EN ISO 14971:2019+A11:2021

The risk evaluation of a known anomaly is not a separate risk assessment that lives on its own. It runs through the same EN ISO 14971:2019+A11:2021 process the manufacturer already operates for the device, and its output updates the same risk management file. The anomaly is treated as a potential contributor to a hazardous situation, analysed for its probability and severity in the context of the device's intended use, and evaluated against the residual risk acceptability criteria the manufacturer has defined in the risk management plan.

Three outcomes are possible. First, the anomaly contributes to a hazardous situation already in the risk file and the residual risk remains within the acceptability criteria — the anomaly can be accepted and the risk file records the evaluation. Second, the anomaly contributes to an existing hazardous situation but pushes the residual risk above acceptability — a risk control must be added, which might be a software fix, a procedural control in the IFU, or both. Third, the anomaly reveals a new hazardous situation that was not previously in the risk file — the risk file is updated to include the new situation, and the analysis runs from there.

The process is the same process that governs any risk-relevant change to the software. What matters is that it is actually run for each known anomaly before the release is authorised, and that the output is linked to the anomaly record in the problem resolution system. Anomalies without a documented risk evaluation are the single most common finding in this category.
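The three outcomes can be expressed as a small decision map. The inputs are the *conclusions* of the ISO 14971 analysis — this helper does not replace the analysis, and the function and label names are assumptions for the sketch.

```python
def evaluation_outcome(in_risk_file: bool, residual_risk_acceptable: bool) -> str:
    """Map the conclusions of the ISO 14971 analysis for one anomaly onto
    the three possible outcomes described above."""
    if not in_risk_file:
        # Outcome three: a hazardous situation not previously in the risk
        # file. Extend the file first, then run the analysis from there.
        return "update-risk-file-and-re-analyse"
    if residual_risk_acceptable:
        # Outcome one: accept the anomaly; the risk file records the evaluation.
        return "accept-and-record"
    # Outcome two: residual risk above the acceptability criteria, so a
    # risk control is required (a software fix, an IFU control, or both).
    return "add-risk-control"
```

The point of writing it down this flatly is that every anomaly record can name which branch it took, and the auditor can follow the branch back into the risk file.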

## Communication to users — IFU, release notes, known issues

Not every anomaly needs to be communicated to users. The ones that do are the ones where a user's awareness of the anomaly can affect how they use the device safely or effectively. A cosmetic misalignment in a settings screen that nobody has to interact with during clinical use does not need a user notice. A date-parsing edge case that could lead a clinician to misread a timestamp does.

The communication channel depends on the nature of the anomaly and the regulatory context. Anomalies that affect safe use of the device during clinical operation belong in the instructions for use — the IFU is the regulatory document the user is expected to consult before use, and MDR Annex I Chapter III governs what it must contain. Anomalies that affect workflow or performance but not safe clinical use can live in release notes or a known-issues page that the user can access. Anomalies that are internal to the software and invisible to the user can stay in the internal problem resolution records.

The decision about which channel applies is part of the risk evaluation and part of the release authorisation. It is documented. The startups that run this well integrate the user communication decision into the release record — every anomaly has a field for "user communication required: yes/no, channel, content" — so the auditor can see that the decision was made deliberately and that the communication was produced.
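The channel logic above can be sketched as a routing rule. This is an illustration of the decision structure, not an automated substitute for the documented risk evaluation; the inputs and labels are assumptions.

```python
def communication_channel(affects_safe_clinical_use: bool,
                          affects_workflow_or_performance: bool) -> str:
    """Select the user-communication channel for a known anomaly,
    following the decision logic described above."""
    if affects_safe_clinical_use:
        # Safe-use impact belongs in the instructions for use
        # (MDR Annex I Chapter III governs the IFU content).
        return "IFU"
    if affects_workflow_or_performance:
        # Visible but not safety-relevant: release notes or a
        # known-issues page the user can access.
        return "release-notes"
    # Invisible to the user: internal problem resolution records only.
    return "internal-only"
```

Recording which branch was taken, and why, is exactly the "user communication required: yes/no, channel, content" field the release record carries.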

When the decision is to communicate, the content has to be honest. Hedging language that obscures the anomaly does not protect anyone. A clear statement of the observed behaviour, the conditions under which it manifests, and the workaround (if any) is what the user needs and what a Notified Body will read without finding fault.

## The link to the maintenance process

Every known anomaly at release is an input to the software maintenance process under clause 6 of EN 62304:2006+A1:2015. The anomaly list is not a snapshot that expires with the release. It carries forward into the next maintenance cycle, where each open anomaly is re-evaluated in the context of new information from the field, new risk data from post-market surveillance, and new code that has been written since.

The problem resolution process under Section 9 of the standard is the continuous engine that feeds this. Section 9 requires the manufacturer to prepare problem reports, investigate problems, assess their effect on safety, advise relevant parties, implement corrective actions, and maintain records. The maintenance process under clause 6 picks up the output of problem resolution and turns approved changes into new software versions. Each new release runs its own Section 5.8 gate, with an updated known-anomaly list that reflects what was fixed, what was introduced, and what remains open.
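The carry-forward step can be sketched as follows. The dictionary fields and the `risk_evaluation_current` flag are assumptions made for the illustration; the substance is that nothing carried forward inherits its old evaluation unexamined.

```python
def carry_forward(open_anomalies: list[dict], fixed_ids: set) -> list[dict]:
    """Build the starting known-anomaly list for the next release gate:
    drop what was fixed, and mark everything else for re-evaluation."""
    next_list = []
    for anomaly in open_anomalies:
        if anomaly["id"] in fixed_ids:
            # Fixed in this version; closure still needs verification
            # evidence in the problem resolution record.
            continue
        # Carried-forward anomalies must be re-evaluated at the new gate
        # against field data, PMS input, and code changes since.
        next_list.append(dict(anomaly, risk_evaluation_current=False))
    return next_list
```

Each new gate then re-runs the evaluation for every entry whose flag is stale, which is the behaviour the standard's per-release Section 5.8 check demands.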

The regulatory picture to hold in mind: anomaly management is not a release-day activity. It is a continuous process that runs from first release until the software is decommissioned, with release gates as the moments where the current state of the list is frozen, evaluated, and authorised. For the full maintenance process that surrounds the anomaly flow, see post 388. For the release process under Section 5.8, see post 387. For the Section 9 problem resolution process in detail, see post 390.

## Common mistakes startups make

- Treating open bug tickets as engineering-only artefacts. The ticket is also a regulatory record from the moment the software is released.
- Running the release gate without a known-anomaly list. Section 5.8 requires the list. Its absence is a finding.
- Skipping the risk evaluation for "small" anomalies. Small to the engineer is not the same as small to the patient. The evaluation is required regardless of perceived size.
- Writing impact analyses that do not touch the risk file. An impact analysis that does not update or reference EN ISO 14971:2019+A11:2021 is not a risk evaluation — it is an engineer's opinion.
- Letting the user communication decision default to "no communication." The default should be explicit, not implicit, and documented with reasoning.
- Carrying the same anomaly forward across five releases without re-evaluating. Each release is a new gate, and each gate requires a current evaluation.
- Conflating feature requests with anomalies and flooding the problem resolution system with noise that hides the real issues.
- Closing anomalies without recording the verification that they are actually fixed. A closed anomaly without a verification record is an anomaly that might still be there.

## The Subtract to Ship angle

Anomaly management is a place where Subtract to Ship earns its keep in both directions. Subtract the noise from the problem resolution system — feature requests, duplicate reports, cosmetic preferences dressed up as defects — so the real anomalies are visible and the release gate is informative. Subtract the bureaucracy from the anomaly record so engineers actually use it — the seven required elements are enough, nothing more. Subtract the temptation to chase zero bugs before release — the standard does not require zero, and the pursuit of zero produces either missed release windows or pretended records. Keep what earns its place — the documented list, the risk evaluation, the authorised decisions, the user communication where it matters — and remove everything else. For the broader framework applied to MDR, see post 065.

## Reality Check — Is your anomaly management audit-ready?

1. Do you have a current list of known anomalies for every released version of your software, with stable identifiers and versioned scope?
2. Is each anomaly linked to a risk evaluation that runs through your EN ISO 14971:2019+A11:2021 risk management process?
3. Does your release authorisation under EN 62304:2006+A1:2015 Section 5.8 require confirmation that the known-anomaly list is current and risk-evaluated?
4. Does each anomaly record name a decision — fix now, fix later, accept — and the authority that made the decision?
5. Is the user communication decision captured for each anomaly, with the channel and content recorded where communication is required?
6. Does your IFU include the anomaly content for anomalies that affect safe clinical use of the device?
7. Are anomalies carried forward across releases re-evaluated at each release gate, rather than inheriting their previous evaluation unchanged?
8. Can you produce, for any released version, the full known-anomaly list that was current at the time of release, with the risk evaluations and release decisions attached?
9. Is your problem resolution process under Section 9 of EN 62304:2006+A1:2015 capturing anomalies from field reports, post-market surveillance, internal testing, and security monitoring in a single stream?
10. When an anomaly is closed, is the closure backed by verification evidence that the fix actually resolves the reported behaviour?

Any question you cannot answer with a clear yes is a gap the next audit will find. Closing the gap while the anomaly volume is small is cheaper than closing it across an accumulated backlog.

## Frequently Asked Questions

**Can I legally release medical device software with known bugs under the MDR?**
Yes, provided the known anomalies are documented, risk-evaluated, and shown not to compromise safety or intended use. EN 62304:2006+A1:2015 Section 5.8 explicitly permits release with known residual anomalies on those conditions, and the standard is the harmonised reference for the software lifecycle obligations in MDR Annex I Section 17.2. Zero bugs is not the legal standard. A controlled, documented, risk-evaluated set of residual anomalies is.

**What must the known-anomaly record contain?**
At minimum: a stable identifier, a description, the affected software items and versions, the reproduction conditions, an impact analysis linked to the risk management file, a risk evaluation under EN ISO 14971:2019+A11:2021, a release decision and rationale, and the planned resolution version where applicable. The format is flexible — a tracker or a structured file both work — but all the elements must be present and the record must be traceable from the release authorisation.

**When do I have to tell users about a known bug?**
When the user's awareness of the bug affects how they use the device safely or effectively. Cosmetic issues that do not touch clinical use do not require user communication. Issues that could lead to misuse, misinterpretation of output, or workflow errors during clinical use do — and the communication lives in the IFU if it affects safe use, or in release notes or a known-issues page for issues that affect performance without affecting safe clinical use. The decision is part of the risk evaluation and is recorded.

**Does the anomaly evaluation have to be re-done at every release?**
Yes. Each release is a separate Section 5.8 gate, and each gate requires a current evaluation of the known anomalies for the version being released. Anomalies carried forward from previous releases are re-evaluated in light of any new information from field use, post-market surveillance, or changes elsewhere in the software. Inheriting a previous evaluation unchanged without review is not an acceptable substitute for current evaluation.

**How does anomaly management connect to post-market surveillance?**
Post-market surveillance under MDR Articles 83 to 86 feeds the problem resolution process under Section 9 of EN 62304:2006+A1:2015 with real-world anomaly data from field use, complaints, vigilance reports, and trend data. The anomalies identified through PMS run through the same evaluation and decision process as anomalies identified during development or internal testing. The two processes are coupled — PMS without anomaly management produces reports nobody acts on; anomaly management without PMS is blind to how the software actually performs in the field.

**Is a bug the same as a non-conformity?**
No. A bug is an anomaly under EN 62304:2006+A1:2015 — a deviation from specified behaviour that is managed through the problem resolution process. A non-conformity is a QMS concept under EN ISO 13485:2016+A11:2021 — a failure to meet a specified requirement in the QMS itself. A single event can be both — for example, a released anomaly that should have been caught by a planned verification activity that was skipped is an anomaly and also a QMS non-conformity. The two records link but are not the same, and both processes have to run.

## Related reading

- [MDR Software Lifecycle Requirements: How IEC 62304 Helps You Demonstrate Conformity](/blog/mdr-software-lifecycle-iec-62304) — the full lifecycle context the anomaly process sits inside.
- [Software Unit and Integration Testing Under EN 62304](/blog/software-unit-integration-testing-iec-62304) — the verification activities that catch anomalies before release.
- [Software Release Process Under EN 62304](/blog/software-release-process-iec-62304) — the Section 5.8 release activity where the known-anomaly list is confirmed.
- [MDR Software Maintenance: Managing Updates and Bug Fixes via IEC 62304](/blog/mdr-software-maintenance-iec-62304) — the clause 6 maintenance process that anomalies feed into.
- [MDR Significant Change Assessment for Software](/blog/mdr-significant-change-assessment-software) — the higher-level decision that runs alongside anomaly-driven changes.
- [Software Problem Resolution Under EN 62304 Clause 9](/blog/software-problem-resolution-iec-62304-clause-9) — the continuous problem resolution process that captures anomalies across the life of the product.
- [Software Configuration Management Under EN 62304](/blog/software-configuration-management-iec-62304) — the configuration management records that anomalies link to.
- [Post-Market Surveillance for Medical Device Software](/blog/post-market-surveillance-medical-device-software) — the PMS inputs that feed the anomaly process.
- [MDR Software Compliance Checklist for Startups](/blog/mdr-software-compliance-checklist-startups) — the practical pre-audit checklist that covers anomaly records.
- [The Subtract to Ship Framework for MDR Compliance](/blog/subtract-to-ship-framework-mdr) — the methodology pillar this post applies to anomaly management.

## Sources

1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Annex I, Section 17. Official Journal L 117, 5.5.2017.
2. EN 62304:2006+A1:2015 — Medical device software — Software life-cycle processes (IEC 62304:2006 + IEC 62304:2006/A1:2015), Section 5.8 Software release, and Section 9 Software problem resolution process. Harmonised standard referenced for the software lifecycle under MDR Annex I Section 17.2.
3. EN ISO 14971:2019+A11:2021 — Medical devices — Application of risk management to medical devices. Harmonised standard referenced for risk management under MDR Annex I.

---

*This post is a category-9 spoke in the Subtract to Ship: MDR blog, focused on how known anomalies are managed in released medical device software under EN 62304:2006+A1:2015 Section 5.8 and Section 9. Authored by Felix Lenhard and Tibor Zechmeister. The MDR is the North Star for every claim in this post — EN 62304:2006+A1:2015 is the harmonised tool that operationalises the anomaly management obligation, not an independent authority. For startup-specific regulatory support on anomaly management, release decisions, and risk evaluation of residual bugs, Zechmeister Strategic Solutions is where this work is done in practice.*

---

*This post is part of the [Software as a Medical Device](https://zechmeister-solutions.com/en/blog/category/samd) cluster in the [Subtract to Ship: MDR Blog](https://zechmeister-solutions.com/en/blog). For EU MDR certification consulting, see [zechmeister-solutions.com](https://zechmeister-solutions.com).*
