I spent the past six weeks evaluating electronic QMS platforms for our Class IIa device line. Different vendors, different demos, different sales pitches. Every tool I tested failed the same specific test, and at this point I am genuinely curious whether I am asking for something that does not yet exist, or whether I am simply poor at reading the feature matrix before agreeing to the call.
The problem
Concrete example from two weeks ago. Our supplier notifies us of a polymer grade substitution on a reusable Class IIa instrument. The change is properly recorded in our current system within two days: signed, dated, training records filed. Then I walk through what actually needs revisiting.
The risk analysis per ISO 14971. The biocompatibility rationale. The IFU. The labelling per Annex I. The PMCF plan. The PSUR draft I am mid-authoring. The CER. Eleven Annex II documents in total.
At no point did the matrix surface that list for me. I walked the links one hop at a time, flagging each document manually, and only caught the biocompatibility rationale on the second pass because my colleague happened to mention it in standup.
That is not a process failure. That is a tool failure. The system recorded the change. It did not evaluate it.
What I tested
I sat through demos of seven platforms: Greenlight Guru, MasterControl, Qualio, ETQ, Veeva Vault, a smaller European vendor I will leave unnamed for now, and qmsWrapper — the last of which markets itself explicitly on a live change-impact matrix ("Wrapper Mapper").
Of the seven, qmsWrapper was the closest to what I was looking for. Its matrix does propagate certain changes automatically in a way the others do not — when I edit a design input, some of the downstream rows flag themselves. Granted, it also had gaps. The supplier-side change I described above still required a manual handover from supplier management into the event system before the matrix noticed. "Closest" is a meaningful compliment. It is also not "solved."
The other six are the ones I have a genuine complaint about. Their traceability matrices are technically comprehensive but fundamentally static. You store the relation once and the relation sits there, frozen. If the upstream item changes, the matrix does not know. You re-query, or re-export, or — let us be honest — you walk the links manually again.
What I actually need
Three specific behaviours:
Event-driven forward propagation. A supplier change or a design input revision surfaces a list of affected downstream elements automatically. Not after I run a report. At the point of change.
Backward traceability with temporal depth. A field complaint today should be able to re-open changes from twelve or eighteen months ago, walking backward from the affected component to the decisions that touched it. In practice this is where most of my post-market pain lives.
The matrix as a living artefact, not a quarterly regen. A Technical File view that reflects current state, not a static export that ages out the minute I close the document.
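To make the first two behaviours concrete, here is a minimal sketch of what I mean by a traceability matrix that evaluates rather than records. This is a toy model, not any vendor's implementation: the document names, the edge-dated event model, and the class itself are all hypothetical. Forward propagation fires at the point of change; the backward walk filters by how recently each link was touched.

```python
from collections import defaultdict, deque
from datetime import date

class TraceMatrix:
    """Toy traceability graph. Nodes are document IDs; edges point
    downstream. Each edge carries the date it was last touched, so
    backward walks can be limited by age. All names are hypothetical."""

    def __init__(self):
        self.downstream = defaultdict(set)  # doc -> docs depending on it
        self.upstream = defaultdict(set)    # reverse direction
        self.touched = {}                   # (src, dst) -> date last changed

    def link(self, src, dst, when):
        self.downstream[src].add(dst)
        self.upstream[dst].add(src)
        self.touched[(src, dst)] = when

    def impact_of_change(self, changed):
        """Behaviour 1: event-driven forward propagation.
        Called at the point of change; returns every downstream document."""
        affected, queue = set(), deque([changed])
        while queue:
            node = queue.popleft()
            for nxt in self.downstream[node]:
                if nxt not in affected:
                    affected.add(nxt)
                    queue.append(nxt)
        return affected

    def decisions_behind(self, component, since):
        """Behaviour 2: backward traceability with temporal depth.
        Walks upstream, keeping only links touched on or after `since`."""
        found, queue, seen = [], deque([component]), {component}
        while queue:
            node = queue.popleft()
            for prev in self.upstream[node]:
                if self.touched[(prev, node)] >= since:
                    found.append(prev)
                if prev not in seen:
                    seen.add(prev)
                    queue.append(prev)
        return found

m = TraceMatrix()
m.link("supplier_polymer_spec", "risk_analysis_14971", date(2024, 5, 1))
m.link("supplier_polymer_spec", "biocompat_rationale", date(2024, 5, 1))
m.link("risk_analysis_14971", "CER", date(2023, 11, 1))

# Supplier change event -> full downstream list, no manual hop-walking
print(sorted(m.impact_of_change("supplier_polymer_spec")))
# Field complaint on the CER -> upstream decisions touched since Jan 2024
print(m.decisions_behind("CER", date(2024, 1, 1)))
```

The third behaviour falls out of the data structure: because the graph is the system of record rather than an export of it, any "Technical File view" is just a query over current state. The hard part, which this sketch waves away, is wiring every module (supplier management, design control, post-market) to emit events into the same graph.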
This is not an unusual list. Anyone who has actually authored a Technical File under MDR will recognise the need. What I have not yet seen is a tool that delivers all three in combination, and delivers them without requiring a full-time QA engineer to massage the event model.
The ask
Genuine question to RA and QA practitioners here: have you actually deployed an eQMS where the traceability matrix behaves this way, or have you solved it via workflow discipline outside the tool?
I am trying to decide whether to keep hunting or to accept that this is a procedural problem the tool cannot fix. I am also happy to take the conversation to specifics in the comments — Swiss-notified-body context adds its own complications and I am not accounting for them here.
Either way: tell me what actually works for you.