I’ve worked on CE marking for software-driven devices long enough to have had the same conversation with three different notified bodies, two contract manufacturers, and one over-caffeinated product manager. The theory on paper is tidy: software is a medical device if its intended purpose falls under the Article 2 definition; classify it per Annex VIII (Rule 11); design to IEC 62304 and manage risk per ISO 14971; and document everything in Annex II. To be fair, those are the right touchpoints. In practice, it means a decade-old development model bumping into a regulation built for traceability, auditability and, crucially, clinical evidence.
Where the gap shows up
A few recurring gaps I keep seeing:
- Classification ambiguity. Rule 11 sounds straightforward, but in practice whether a function provides “information used to take decisions with diagnosis or therapeutic purposes” makes the difference between Class I and Class IIa/IIb. Notified bodies interpret borderline functions differently, and that translates to rework.
- Clinical evidence expectations. MDR Article 61 and Annex XIV are clear that evidence of clinical performance is required. For SaMD this often means a notified body asking for performance validation or retrospective real-world data that the development team never planned for.
- Lifecycle vs. continuous delivery. Agile teams push updates frequently; IEC 62304 expects software lifecycle processes and configuration management. Notified bodies want change-control records and evidence that risk, validation, and documentation accompany each release.
- Cybersecurity and real-world performance. Regulators expect post-market monitoring of vulnerabilities and real-world performance metrics, but many companies have a developer-centric patch workflow, not a regulated post-market plan.
- Traceability and impact analysis. Auditors want to see links: requirement → hazard analysis → verification → clinical data → post-market actions. Too often these links are implicit, scattered across tools, or missing entirely.
Why this matters (beyond paperwork)
Treating the gap as mere bureaucracy misses the point. SaMD updates change clinical behaviour: how clinicians interpret an output, how a workflow runs, how an alarm looks. If you can’t show you considered the risk and validated performance, a notified body will either slow you down or require post-market studies you’re not prepared for. I’ve watched teams face months of delay because a routine UI tweak was classified as a change requiring additional clinical evidence.
Practical adjustments that actually work
These are the things I insist on early, before a design review or a CE submission:
- Map intended purpose at the function level. Don’t stop at “diagnostic support”; list each algorithmic output, who uses it, and the clinical decision it influences. This is the single clearest way to resolve Rule 11 ambiguity.
- Perform software-specific risk analysis (ISO 14971 + IEC 62304). Include use-related hazards and consider failure modes for updated algorithms. In practice this means a software hazard table tied to requirements.
- Predetermine change-control plans. Define categories of change (e.g., security patch vs algorithm weight update) and the required evidence per category: unit tests, integration tests, clinical re-validation, PMCF entry. This mirrors the “predetermined change control” approach auditors like to see; a minimal sketch of such a plan follows this list.
- Build traceability early. Link requirements → design → verification/validation → clinical evidence → release notes. If you use an eQMS, native workflow integration that shows these links saves hours in an audit.
- Design PMCF and performance monitoring into each release. For SaMD, plan telemetry, usage metrics, false-positive/negative logging, and a dashboard that feeds your PSUR/PMCF analysis.
- Talk to your notified body early. Share your function map and change categories. You’ll get different answers; capture them and treat them as part of your risk acceptance/justification.
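To make the change-category idea concrete, here’s a minimal sketch of how a predetermined change-control plan could be encoded so it can be queried at release time. The category names, evidence labels, and the required_evidence helper are illustrative assumptions on my part, not a prescribed taxonomy; your own categories should fall out of your risk analysis and notified-body discussions.

```python
from enum import Enum

class ChangeCategory(Enum):
    # Illustrative categories only; derive your own from risk analysis.
    SECURITY_PATCH = "security_patch"
    UI_TWEAK = "ui_tweak"
    ALGORITHM_WEIGHT_UPDATE = "algorithm_weight_update"
    NEW_CLINICAL_FUNCTION = "new_clinical_function"

# Evidence expected per category, agreed up front and kept under change control.
REQUIRED_EVIDENCE = {
    ChangeCategory.SECURITY_PATCH: [
        "unit_tests", "regression_tests", "vulnerability_assessment"],
    ChangeCategory.UI_TWEAK: [
        "unit_tests", "usability_impact_review"],
    ChangeCategory.ALGORITHM_WEIGHT_UPDATE: [
        "unit_tests", "integration_tests",
        "clinical_performance_revalidation", "pmcf_entry"],
    ChangeCategory.NEW_CLINICAL_FUNCTION: [
        "full_verification", "clinical_evaluation_update",
        "classification_review", "pmcf_entry"],
}

def required_evidence(category: ChangeCategory) -> list[str]:
    """Return the evidence artefacts that must exist before this change ships."""
    return REQUIRED_EVIDENCE[category]

if __name__ == "__main__":
    for artefact in required_evidence(ChangeCategory.ALGORITHM_WEIGHT_UPDATE):
        print(artefact)
```

The value isn’t the code; it’s that the evidence list per category is written down before the change happens, so nobody decides at release time that a weight update is “just a patch”.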
A small checklist for your next sprint
- Have you defined the intended purpose at function level?
- Is each function mapped to a classification rationale under Rule 11?
- Do you have software hazard analysis and traceability to verification?
- Is there a predetermined change-control plan for software updates?
- Are telemetry and clinical performance metrics specified and collected?
- Can you demonstrate how a patch or algorithm change would flow through your QMS (change → risk assessment → validation → release)?
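For that last item, here’s a minimal sketch of what “flowing through your QMS” could look like as an enforced gate rather than a convention. The record fields and the missing_steps check are hypothetical names I’ve made up for illustration; the point is simply that a change cannot reach release until the risk assessment, validation, and release documentation links actually exist.

```python
from dataclasses import dataclass

@dataclass
class ChangeRecord:
    """One software change moving through the QMS; field names are illustrative."""
    change_id: str
    description: str
    risk_assessment_id: str | None = None    # link to the updated hazard analysis
    validation_report_id: str | None = None  # link to V&V evidence for this change
    release_note_id: str | None = None       # link to release documentation

def missing_steps(change: ChangeRecord) -> list[str]:
    """Return the QMS steps that still lack a linked record."""
    gaps = []
    if change.risk_assessment_id is None:
        gaps.append("risk assessment")
    if change.validation_report_id is None:
        gaps.append("validation")
    if change.release_note_id is None:
        gaps.append("release documentation")
    return gaps

def ready_to_release(change: ChangeRecord) -> bool:
    return not missing_steps(change)

if __name__ == "__main__":
    patch = ChangeRecord("CHG-042", "Update alarm threshold weights",
                         risk_assessment_id="RA-017")
    print(ready_to_release(patch))  # False
    print(missing_steps(patch))     # ['validation', 'release documentation']
```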
If you use an eQMS, look for features that make these concrete: automatic traceability, change-impact mapping, connected workflow for CAPAs and changes, and built-in artefacts for PMCF/PSUR. Automated CAPAs and AI-guided assistance are useful — but only if the outputs are reviewable and traceable. Controlled assistance, not magic, is what passes audits.
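As a rough illustration of what I mean by automatic traceability, here’s a minimal sketch of a link-completeness check over requirement → hazard → verification → clinical-evidence links. The trace data and IDs are invented for the example; in a real eQMS this export comes from the tool, not from hand-maintained dictionaries.

```python
# Invented trace export: requirement ID -> linked hazard, verification, clinical records.
TRACE = {
    "REQ-001": {"hazards": ["HAZ-003"], "verification": ["TEST-101"], "clinical": ["CER-2"]},
    "REQ-002": {"hazards": [], "verification": ["TEST-102"], "clinical": []},
    "REQ-003": {"hazards": ["HAZ-007"], "verification": [], "clinical": ["PMCF-1"]},
}

def trace_gaps(trace: dict) -> dict[str, list[str]]:
    """Report requirements whose hazard, verification, or clinical links are missing."""
    gaps = {}
    for req_id, links in trace.items():
        missing = [kind for kind in ("hazards", "verification", "clinical") if not links[kind]]
        if missing:
            gaps[req_id] = missing
    return gaps

if __name__ == "__main__":
    for req_id, missing in trace_gaps(TRACE).items():
        print(f"{req_id}: missing {', '.join(missing)}")
```

Running a check like this before an audit is how implicit links become explicit ones.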
Final note — on notified bodies and reality
Notified bodies want to protect patients; the variability comes from translating new software realities into a regulatory framework. To be fair, the guidance is catching up (IMDRF principles, MDCG documents on software classification), but the practical work remains on manufacturers: be explicit, be auditable, and treat updates as regulated events. Like choosing the right route before you set off on a steep alpine climb, choosing the right documentation strategy before your next major software release saves a lot of backtracking.
What’s the single biggest friction you face when trying to align your software release cadence with MDR expectations?