James Whitfield

How we built PMS reports that actually survive a notified-body audit

I’ve owned post-market surveillance (PMS) for Class II devices through two notified-body audits. The audits weren’t hostile — they were meticulous. The difference between a report that passes and one that generates two follow-up requests is almost entirely in how you show the decision-making trail: what data you looked at, how you evaluated signals, how risk-management and clinical evaluation fed back into product actions.

Below are the practical things that helped our PMS reports pass without surprise findings. This is grounded in our Class II setup (ISO 13485 + MDR awareness, working with a mid-size notified body); YMMV for higher-risk device classes and other device types.

What notified bodies actually check

In my experience, auditors look for a few consistent things:

  • Evidence of an active PMS system: documented plan, data sources, frequency, responsibilities.
  • Traceability from raw data (complaints, service reports, registries, literature) to findings and decisions.
  • Risk-based signal detection and documented rationale for actions (or for no action).
  • Links to other QMS processes: complaints, vigilance, CAPA, change control, risk management, clinical evaluation.
  • Timeliness: periodic reviews done on schedule and follow-through on identified actions.

They’re not just checking boxes — they want to see the logic. A one-page summary saying “no signals” isn't convincing unless you can show what you looked at and why that’s sufficient.

How we prepared PMS reports that made audits easy

We treated each PMS report like a mini-audit trail. Concrete steps we used:

  1. Start with a clear scope and sources

    • Define the time window and product variants covered.
    • List all sources (complaints DB, service records, distributor feedback, registries, literature searches, social media monitoring if used, complaint samples).
    • For each source, document the owner and refresh cadence.
  2. Executive summary + decision log

    • One-page executive summary that states: conclusion, highest risks identified, and actions taken.
    • A decision log that records each signal considered, evaluation outcome, and rationale (e.g., “no action because root cause outside device control; monitoring only”).
  3. Signal evaluation methodology

    • Spell out how signals are detected (trend thresholds, qualitative triggers).
    • Describe any statistical tests or sampling approaches — auditors don’t need to be statisticians, but they need to see that you didn’t eyeball it (a minimal sketch of a simple trend check follows this list).
  4. Link to risk management and clinical evaluation

    • For every identified issue, reference the risk-management file and any updates to residual risk, mitigations, or warnings.
    • Show how the PMS findings affected the clinical evaluation (and vice versa).
  5. Documented follow-up and verification

    • For actions (CAPA, labeling, supplier change), include closure evidence and outcomes monitoring.
    • If you decided not to act, show monitoring steps and review dates.
  6. Keep the report navigable

    • Use a table of contents, hyperlinks to evidence, and a simple traceability map: Data source → Finding → Decision → Action → Verification (sketched as typed records below).
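
To make the decision log and signal-detection items concrete, here is a minimal sketch of the kind of check we mean. It assumes a complaints export as CSV with `period` and `complaint_count` columns; the column names, the mean-plus-two-standard-deviations rule, and the `DecisionLogEntry` fields are illustrative, not a prescription.

```python
import csv
import statistics
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DecisionLogEntry:
    """One row of the PMS decision log: signal, evaluation outcome, rationale."""
    signal: str
    period: str
    outcome: str      # e.g. "action", "monitor", "no action"
    rationale: str
    evaluated_on: date = field(default_factory=date.today)


def evaluate_complaint_trend(csv_path: str, baseline_periods: int = 12, k: float = 2.0):
    """Flag periods whose complaint count exceeds mean + k * stdev of the rolling baseline."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))  # expects columns: period, complaint_count

    counts = [int(r["complaint_count"]) for r in rows]
    entries = []
    for i in range(baseline_periods, len(rows)):
        baseline = counts[i - baseline_periods:i]
        threshold = statistics.mean(baseline) + k * statistics.stdev(baseline)
        if counts[i] > threshold:
            entries.append(DecisionLogEntry(
                signal=f"Complaint count {counts[i]} above threshold {threshold:.1f}",
                period=rows[i]["period"],
                outcome="evaluate",
                rationale="Exceeded rolling baseline; route to multidisciplinary PMS review",
            ))
    return entries
```

Whatever rule you pick, write the threshold and the baseline window into the PMS plan so an auditor can reproduce the flag from the raw export.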

We used our eQMS to maintain most artifacts (PMS plan, raw data exports, CAPA records). The eQMS made it easier to produce an “audit pack” — but the key was the content and traceability, not the tool.
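
On the traceability map from step 6: keeping it as machine-readable records rather than prose alone made cross-checking much easier. A minimal sketch, where the record IDs and field names are purely illustrative:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TraceLink:
    """Data source -> Finding -> Decision -> Action -> Verification, as one auditable row."""
    data_source: str              # e.g. "Complaints DB export, 2024-Q2" (hypothetical)
    finding: str                  # e.g. "Connector fatigue trend"
    decision: str                 # e.g. "Open CAPA" or "No action; continue monitoring"
    action: Optional[str]         # e.g. a CAPA record ID; None if monitoring only
    verification: Optional[str]   # e.g. closure / effectiveness-check reference


# Hypothetical example rows for illustration only
links = [
    TraceLink("Complaints DB export, 2024-Q2", "Connector fatigue trend",
              "Open CAPA", "CAPA-123", "Effectiveness check closed"),
    TraceLink("Literature search, 2024-Q2", "No new relevant findings",
              "No action; continue monitoring", None, None),
]
```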

Audit-day evidence pack (what I prepare, and what auditors ask for)

Bring more than the PMS report itself. We shipped a digital pack containing:

  • The current PMS Plan and last approved version.
  • The PMS report (signed, dated) and its change history.
  • Raw data extracts for any data source referenced (complaints, service logs, literature search outputs).
  • Signal evaluation worksheets / decision log.
  • Relevant Risk Management File excerpts (with cross-references).
  • CAPA records triggered by PMS findings and closure evidence.
  • Minutes of multidisciplinary PMS review meetings.
  • Training records for staff who ran the evaluations.

If you can show a single source-of-truth system where each item is traceably linked, audits go much faster.

Common pitfalls that trigger nonconformities

  • Vague methodology: “we looked at complaints” without saying how or how many.
  • Missing rationale for inaction: auditors expect documented decisions, not silence.
  • Poor linkage to risk management / clinical evaluation.
  • Incomplete data provenance: where did the complaint counts come from, and who pulled them?
  • No evidence of periodic review cadence or missed reviews without documented reason.

Small automation wins we relied on

I’m engineering-brained, so we automated where it reduced audit friction:

  • Scheduled exports from the complaints system to a controlled folder, with hash/versioning for provenance (see the sketch after this list).
  • Dashboards for key metrics (complaints over time, open CAPAs linked to PMS findings) that we could snapshot and export into the report.
  • Webhook alerts for spikes that feed into the signal evaluation queue.
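
For the first and third bullets, here is a minimal sketch of what the export-with-provenance step and a spike alert can look like. The folder layout, webhook URL, and the 1.5× spike rule are hypothetical placeholders, not our production values:

```python
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

import requests  # third-party; used only for the webhook alert

WEBHOOK_URL = "https://example.internal/pms-signal-queue"  # hypothetical endpoint


def archive_export(src: Path, controlled_dir: Path) -> dict:
    """Copy a data export into the controlled folder and record its SHA-256 for provenance."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = controlled_dir / f"{src.stem}_{stamp}{src.suffix}"
    shutil.copy2(src, dest)
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    entry = {"file": dest.name, "sha256": digest, "archived_at": stamp}
    with open(controlled_dir / "manifest.jsonl", "a") as manifest:
        manifest.write(json.dumps(entry) + "\n")
    return entry


def alert_if_spike(current_count: int, baseline_mean: float, factor: float = 1.5) -> None:
    """Post an alert into the signal-evaluation queue when complaints exceed a multiple of baseline."""
    if current_count > factor * baseline_mean:
        requests.post(WEBHOOK_URL, json={
            "signal": "complaint_spike",
            "current": current_count,
            "baseline_mean": baseline_mean,
        }, timeout=10)
```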

You don’t need full-blown ML to pass an audit — auditors care more that your pipeline is reproducible and reviewable than that it’s fully automated.

Final practical checklist before you submit the report

  • Does the exec summary state concrete conclusions and next steps?
  • Can you point to the raw data behind every conclusion?
  • Is each conclusion linked to risk-management/clinical-eval files?
  • Are actions documented, assigned, and tracked to closure?
  • Is the report versioned, signed, and stored in your QMS?

If you can answer “yes” to those five, you’re in a good place.

I’m curious: how are other teams balancing automated signal detection with the need for human-reviewed, auditable rationale? Are you keeping automated flags in a separate queue, or integrating them directly into the formal PMS decision log?
