Closing a CAPA ticket is easy. Demonstrating that the corrective action prevented recurrence, reduced risk, and is sustainable is where you earn your audit points — and where many teams stumble.
I’ve been responsible for CAPA programmes on Class II devices long enough to watch good root-cause work undone by weak effectiveness checks. Notified bodies consistently ask for more than a signed “completed” checkbox; per ISO 13485 clause 8.5.2 and FDA 21 CFR 820.100 you must verify, validate where appropriate, and document evidence that the action was effective. In practice this means planning the effectiveness check at the CAPA creation stage, not as an afterthought.
Start with a clear objective, then choose the metric
Too often the effectiveness step reads “monitor” or “review in 30 days.” That’s not an objective. An effectiveness check needs a measurable criterion.
- Objective: what specific undesirable outcome are we preventing? (e.g., “incoming inspection rejects for component X”)
- Metric: how will you measure that outcome? (e.g., “reject rate per 1,000 parts” or “number of field complaints related to symptom Y”)
- Threshold: what level counts as effective? (e.g., “reject rate reduced to <0.5% and sustained for three consecutive months”)
- Data source: where does the evidence come from? (incoming inspection logs, complaint database, production SPC charts)
Granted, not every CAPA lends itself to a numeric KPI. For software or training actions you may be looking at audit non-conformances or observed operator errors instead. Still: name the evidence and the acceptance criteria.
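As an illustrative sketch, the reject-rate criterion above ("reject rate reduced to <0.5% and sustained for three consecutive months") can be checked mechanically. The monthly data, column layout, and threshold here are my assumptions for the example, not taken from any specific QMS:

```python
THRESHOLD = 0.005        # acceptance criterion: reject rate below 0.5%
REQUIRED_STREAK = 3      # must hold for three consecutive months

# (month, rejects, parts_inspected) — hypothetical incoming-inspection data
monthly = [
    ("2024-01", 9, 1000),
    ("2024-02", 4, 1200),
    ("2024-03", 5, 1100),
    ("2024-04", 3, 1000),
]

def effectiveness_met(records, threshold=THRESHOLD, streak=REQUIRED_STREAK):
    """True if the reject rate stayed below `threshold` for `streak` consecutive months."""
    run = 0
    for _, rejects, inspected in records:
        rate = rejects / inspected
        run = run + 1 if rate < threshold else 0
        if run >= streak:
            return True
    return False

print(effectiveness_met(monthly))  # prints: True
```

The point of the sketch is that the criterion is explicit enough to be computed; if you cannot write the check as a rule like this, the acceptance criteria are probably too vague.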
Plan the check when you open the CAPA
Notified-body audit checklists and auditors alike expect to see the verification/validation plan as part of the CAPA file. I write the effectiveness-check row in the CAPA form the same day I write the root-cause hypothesis.
In practice this means the CAPA record includes:
- planned metric, acceptance criteria, data source
- planned sampling method and size (if sampling is needed)
- timeframe for evaluation
- reviewer (usually someone independent of the CAPA owner)
- link to the change control or corrective procedure (traceability)
This avoids the common post-hoc rationalisation where the CAPA owner selects convenient data instead of representative evidence.
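A minimal sketch of such a record as a data structure (the field names and the example IDs are my assumptions for illustration, not a regulatory schema):

```python
from dataclasses import dataclass, field

@dataclass
class EffectivenessPlan:
    """Illustrative CAPA effectiveness-check plan, written at CAPA creation.

    Field names are assumptions for this sketch, not a standard schema.
    """
    metric: str                # e.g. "reject rate per 1,000 parts"
    acceptance_criteria: str   # e.g. "<0.5% sustained for three consecutive months"
    data_source: str           # e.g. "incoming inspection logs"
    sampling: str              # planned method and size, if sampling is needed
    evaluation_window: str     # timeframe for evaluation
    reviewer: str              # someone independent of the CAPA owner
    linked_records: list = field(default_factory=list)  # change controls, SCARs, etc.

plan = EffectivenessPlan(
    metric="incoming inspection reject rate for component X",
    acceptance_criteria="<0.5% sustained for three consecutive months",
    data_source="incoming inspection log (ERP export)",
    sampling="100% of received lots",
    evaluation_window="6 months from implementation",
    reviewer="QA Manager (not CAPA owner)",
    linked_records=["CC-2024-017", "SCAR-2024-003"],  # hypothetical record IDs
)
```

Whether this lives in a dedicated eQMS field or a form row matters less than the fact that every field is filled in before the action is implemented.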
Distinguish verification from validation
Verification: did we implement the fix as intended? (e.g., supplier changed the inspection jig; the jig now exists and meets drawings)
Validation: did the fix actually reduce risk or recurrence in production and the field?
Auditors want to see both where relevant. For example, a design change should be verified by design outputs and validated by production/process data or clinical feedback. For procedural fixes (training, work instructions), verification may be training records; validation may be observed performance or a drop in related non-conformances.
Link effectiveness criteria to a CAPA-driven risk assessment
Tie the effectiveness criteria to residual risk. If the root cause removal alters risk, make that explicit in the CAPA file and in the risk management file (ISO 14971 linkage). Show the risk acceptability decision and evidence that residual risk controls are in place.
CAPA-driven risk assessment makes the CAPA more defensible with notified bodies and clarifies when you need longer-term monitoring versus a short check.
Sampling, duration, and independence matter
Two traps I repeatedly see:
- Tiny sample sizes that don’t represent production variability.
- Only short-term checks that miss recurrence.
Decide sampling and duration based on risk and process variability. High-risk or low-frequency events often need longer monitoring. Also ensure the effectiveness review is performed or witnessed by someone not directly responsible for implementing the CAPA; independent review is a favourite audit theme.
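For low-frequency events, a quick sanity check on monitoring scope is the standard "at least one occurrence" calculation: if the true per-unit event probability is p, then to have a 95% chance of observing at least one event you need n units such that (1-p)^n ≤ 0.05, i.e. n ≥ ln(0.05)/ln(1-p). A sketch (the rates are illustrative assumptions):

```python
import math

def units_to_observe_one_event(p, confidence=0.95):
    """Minimum number of units to observe so that
    P(at least one event) >= confidence, assuming independent units
    with per-unit event probability p."""
    if not 0 < p < 1:
        raise ValueError("p must be in (0, 1)")
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# Illustrative: a 0.5% defect rate needs ~598 units for 95% confidence
print(units_to_observe_one_event(0.005))  # prints: 598
```

If that number dwarfs your planned sample or your monthly volume, the effectiveness window is too short for the event rate you are trying to detect.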
Capture the data in a connected workflow
If your QMS is siloed, CAPA evidence ends up scattered across spreadsheets, work instructions, and emails. A connected workflow, one place where change, CAPA, risk, and document control are linked, saves time during evidence collection and audit requests. Automated CAPA workflows and AI-assisted tagging can help surface related documents, but the controls and reviewer decisions must remain explicit and traceable.
Practical tip: include hyperlinks or UDI references in the CAPA record to the Technical File sections, change controls, and supplier corrective actions. Traceability speaks louder than narrative.
Trend and close the loop — not just close a ticket
An effectiveness check is not a single pass/fail. Where possible, show trend data:
- Before/after charts for the metric you set
- Comparison to control lines or historical baselines
- Any unintended consequences (did the fix introduce a new failure mode?)
If the metric improves but shows signs of drifting back, escalate to further actions rather than closing. Closure should include a planned re-check or transfer into routine monitoring when stability is proven.
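A simple drift flag on the after-action data, checking both the acceptance threshold and a worsening recent trend, can be sketched like this (the monthly rates and threshold are illustrative):

```python
def drifting_back(rates, threshold, window=3):
    """Flag possible drift: the last `window` monthly rates are each worse
    than the one before, or any recent rate breaches the acceptance threshold."""
    recent = rates[-window:]
    rising = len(recent) == window and all(
        later > earlier for earlier, later in zip(recent, recent[1:])
    )
    breached = any(r >= threshold for r in recent)
    return rising or breached

# Illustrative monthly reject rates after the corrective action:
# still under the 0.5% threshold, but climbing for three straight months
rates = [0.0033, 0.0020, 0.0024, 0.0031, 0.0042]
print(drifting_back(rates, threshold=0.005))  # prints: True
```

A flag like this is an escalation trigger, not a verdict; the point is to define in advance what "drifting back" means so the closure decision is not left to gut feel.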
Document decisions clearly
Auditors read CAPA records for three things: what you thought, what you did, and how you proved it worked. Keep the language specific:
- “Root cause = supplier plating variability leading to corrosion”
- “Action = incoming inspection acceptance criterion tightened and supplier corrective action implemented”
- “Effectiveness metric = corrosion-related field complaints; target = fewer than 1 complaint over the 6-month evaluation window; reviewer = QA Manager (not CAPA owner)”
This level of explicitness makes the story auditable.
Final thought
I’ve seen CAPAs that looked robust on paper but collapsed under audit because the effectiveness proof was vague or absent. Conversely, CAPAs with modest actions but strong, well-planned effectiveness checks survive scrutiny and actually reduce risk.
How do you decide the acceptance criteria and monitoring duration for CAPA effectiveness on high‑risk issues in your organisation?