If you run anomaly detection or DLP correlation on Microsoft Purview audit events sourced from Dataverse, your rules went silent on May 1, 2026.
The events still arrive. The row counts in Purview are the same. The activity dashboards look identical. Every audit envelope continues to ship the metadata you'd expect — actor, timestamp, table, action type, record ID. The only thing missing is the part most security teams were actually using.
The before-and-after field values are gone.
This is Microsoft 365 Message Center post MC1239891: "Information regarding removal of field-level value changes in audit events sent to Microsoft Purview". Effective May 1, 2026. Field-level OldValue / NewValue payloads are stripped from Dataverse audit events as they cross into the Purview unified pipeline.
Microsoft frames it as a privacy improvement, and they're not wrong — Purview-side consumers (SIEM forwarders, third-party connectors, e-discovery tooling) were aggregating sensitive PII into a place where the access controls weren't necessarily as tight as the originating Dataverse environment. Stripping the field values at the boundary closes that.
The cost of that improvement, though, lands on every team whose detection logic was built on top of those values.
What's Still There vs. What's Gone
What still flows to Purview after May 1:
- Audit row metadata — `RecordType`, `Operation`, `UserId`, `ObjectId`, `CreationTime`, `Workload`, `OrganizationId`
- Table and record identity — entity name, record GUID, what was acted on
- Action type — `Update`, `Create`, `Delete`, `Access`
- Field name list — which attributes changed (in many cases)
What's gone:
- `OldValue` — the value of each changed field before the update
- `NewValue` — the value after
- Any nested change-detail payload that previously contained those values, for Dataverse audit events specifically
Existing audit rows from before May 1 retain their original payload. Only events created after the cutover have the stripped shape. That makes the regression invisible to retrospective queries — any test you run against a "known good" audit record from April still passes. The new audits don't.
Why This Looks Like Nothing Changed
The reason this is a textbook silent failure has three parts.
1. Row counts don't move. Audit volume in Purview is unchanged — every action that produced an event before still produces one. SIEMs that alert on "audit gap" or "logging stopped" don't trigger.
2. Dashboards still populate. Microsoft Sentinel, Splunk, Chronicle, and the rest get their hourly Purview pull. The records arrive. The widgets refresh. The "audit activity" panels show the same line. There's no failure indicator at the platform level.
3. The query language doesn't error. Detection rules in KQL, SPL, etc. that referenced `OldValue` or `NewValue` don't return errors when the field is missing — they return no rows. Empty result sets read as "no anomalies detected," which is exactly the value those queries return when the world is fine. A query saying "alert when OldValue == 'Active' AND NewValue == 'Inactive' for AccountStatus" stops firing because the where-clause never matches anything. Nobody knows the rule went mute. All they know is the alerts stopped, and "the alerts stopped" has a thousand benign-looking causes.
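The failure mode is easy to demonstrate outside any query language. Here's a minimal Python sketch of the article's status-flip rule; the event dicts are illustrative stand-ins for Purview payloads, not real schema:

```python
# The same transition rule run over old-shape and new-shape audit events.
# Field names mirror the article's example; payloads are illustrative.

def status_flip_alerts(events):
    """Fire on AccountStatus transitions Active -> Inactive."""
    return [
        e for e in events
        if e.get("OldValue") == "Active" and e.get("NewValue") == "Inactive"
    ]

# Before May 1: values present, the rule fires.
april_events = [{"Field": "AccountStatus", "OldValue": "Active", "NewValue": "Inactive"}]

# After May 1: same change, values stripped. No error -- just no rows.
may_events = [{"Field": "AccountStatus"}]

print(len(status_flip_alerts(april_events)))  # 1
print(len(status_flip_alerts(may_events)))    # 0 -- reads as "no anomalies"
```

The second call is indistinguishable from a quiet day, which is the whole problem.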
If your compliance program is built on these correlations — privileged-field changes, status flips, balance adjustments, role escalations, recipient-redirections — the rules went silent five days before this article was written, and it's likely nobody has noticed yet.
Concrete Detections That Just Stopped Working
A non-exhaustive list of rule patterns that depend on field-level deltas in Purview-sourced Dataverse events:
- Status-flip detection. Account flipping from `Active` → `Inactive` outside business hours. Lead flipped from `Disqualified` → `Qualified` without an approval workflow. Sales-stage regression. All of these are "where OldValue == X AND NewValue == Y" rules.
- Privileged-field watch. Role membership changes, privilege grants, security-group membership flips on Dataverse identity records. The list of which fields changed still reaches Purview; the values themselves don't.
- Financial guardrails. Watching for credit-limit increases, discount-percentage bumps, payment-term extensions beyond a threshold. The delta — "increased from 30 to 90 days" — was the rule. The new event reports "PaymentTerm changed" with no values.
- PII redirection alerts. A common pattern: alert when the email address on a contact record changes to a different domain than the prior value (used to catch impersonation / account-takeover). Both the prior and new domains were the rule. Both are gone.
- Bulk-edit anomaly. "User edited 50 contact records in 5 minutes, where 80% of the changes set Status = 'Disqualified'" — required reading the resulting NewValue. Now the rule sees 50 changes; can't see what they were.
- Record-merge correlation. Reconstructing what was kept and what was lost in a merge required the before/after on each field. Audit still shows the merge happened. The contents of the merge are no longer in Purview.
- Compliance-comparison reporting. Quarterly reports that compute "X% of contact records had their consent-flag flipped from Granted → Revoked" — these rolled up OldValue/NewValue from Purview. Reports go to zero.
These are all rules that pre-date May 1, 2026 and were green on April 30. They'll stay green after May 1 because empty result sets don't generate alerts.
Why CI / Validation Didn't Catch It
This is the same shape of problem we keep seeing with every silent breaking change.
Detection rules don't have unit tests. Most security-rule frameworks let you author KQL/SPL/Sigma but don't ship a way to assert "this rule produces N alerts when given this fixture." If you have such a test harness — congratulations, you're in the top 1% — but it's almost certainly using fixtures recorded before May 1, when OldValue still existed. The tests pass against the old shape; production sees the new shape.
Audit volume is the canary, and the canary is fine. Most teams monitor Purview ingestion volume. Volume didn't drop. The canary keeps singing.
Microsoft's communication channel. MC1239891 went into the Microsoft 365 Message Center. If your security architect doesn't read MC posts that look like they're for the Power Platform admin (because the dependency chain to Purview-side detection isn't visible from the post's title), the change lands without anyone hearing it. The post mentions Purview, but it's filed under Power Platform.
Organizational seam. Dataverse changes are owned by the Power Platform admin / D365 team. Purview detection rules are owned by the security team. The fact that a Power Platform configuration change blanks out a security team's detection logic crosses a domain boundary that no playbook usually covers.
The Real Migration Path
Microsoft's recommended workaround is correct, but it requires moving where your detections live.
Option A: Pull from Dataverse Web API directly. The before/after values are still stored in Dataverse — they didn't go anywhere. They are accessible via the RetrieveAuditDetails API on the audit table. The flow is:
Audit row (in Dataverse) → `RetrieveAuditDetailsRequest` → `AttributeAuditDetail.OldValue` / `.NewValue`
You build a connector that polls Dataverse audit on a schedule, pulls the audit-detail rows for changes you care about, and pushes the enriched events into your SIEM as a separate stream. This is more work than what existed before (Purview was doing the heavy lifting) and the data isn't in the same query store as the rest of your Purview data, so cross-table correlations get harder.
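A hedged sketch of what such a connector's core looks like. The `RetrieveAuditDetails` bound-function URL and the `AuditDetail`/`OldValue`/`NewValue` response shape are paraphrased from the Dataverse Web API and should be verified against your environment's API version; authentication and scheduling are omitted:

```python
from urllib.parse import urljoin

# Sketch of an Option A poller core: build the per-audit-row request URL and
# flatten the detail response into (field, old, new) rows for the SIEM.
# URL shape and payload keys are assumptions -- verify against your org.

API_VERSION = "v9.2"

def audit_details_url(org_url: str, audit_id: str) -> str:
    """Bound-function URL for one audit row's detail payload."""
    return urljoin(
        org_url,
        f"/api/data/{API_VERSION}/audits({audit_id})"
        "/Microsoft.Dynamics.CRM.RetrieveAuditDetails",
    )

def extract_deltas(detail_response: dict) -> list[dict]:
    """Flatten an AuditDetail payload: OldValue/NewValue are assumed to be
    entities whose attributes are keyed by logical field name."""
    detail = detail_response.get("AuditDetail", {})
    old = detail.get("OldValue") or {}
    new = detail.get("NewValue") or {}
    fields = set(old) | set(new)
    return [
        {"field": f, "old": old.get(f), "new": new.get(f)}
        for f in sorted(fields)
        if not f.startswith("@")  # drop OData annotations
    ]

# Fixture mimicking one Update audit row's detail response (illustrative).
fixture = {
    "AuditDetail": {
        "OldValue": {"paymentterm": 30, "@odata.type": "#Microsoft.Dynamics.CRM.account"},
        "NewValue": {"paymentterm": 90, "@odata.type": "#Microsoft.Dynamics.CRM.account"},
    }
}
print(extract_deltas(fixture))  # [{'field': 'paymentterm', 'old': 30, 'new': 90}]
```

Keeping the URL building and response parsing separate from the HTTP call makes the connector testable against recorded fixtures — which matters, given that fixture drift is how this regression hid in the first place.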
The audit-detail API has a few footguns worth knowing in advance:
- Large field values are truncated at 5 KB and the response shows an ellipsis. Long-text fields (description, comments) can't be fully reconstructed.
- The user calling the API needs `prvReadAuditSummary`. A service principal that worked for the Purview pull may not have this privilege on the Dataverse side.
- Audit data isn't accessible via the TDS / SQL endpoint. You can't join it to other Dataverse tables through the SQL surface — it has to go through the Web API or the Organization Service.
Option B: Switch to Dataverse-native detection. Run the rules against the Dataverse audit table itself, via Power Automate flows or scheduled functions, and only forward the result (a fired alert) to your SIEM. This keeps the field values inside the Dataverse compliance boundary, which is also Microsoft's preference — it's the privacy-preserving path. Trade-off: you lose centralization. Each Dataverse environment becomes its own detection point.
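The core of an Option B rule engine can be small. A hedged sketch, with an illustrative rule table and event shape — the point is that only the fired alert (no field values) ever leaves the boundary:

```python
# Option B sketch: evaluate transition rules next to the data and forward
# only the fired alert to the SIEM. Rule table and shapes are illustrative.

TRANSITION_RULES = [
    {"field": "accountstatus", "old": "Active", "new": "Inactive",
     "alert": "account-deactivated"},
]

def evaluate(deltas: list[dict]) -> list[dict]:
    """Return SIEM-safe alerts; note the payload carries no field values."""
    alerts = []
    for d in deltas:
        for r in TRANSITION_RULES:
            if (d["field"] == r["field"]
                    and d["old"] == r["old"] and d["new"] == r["new"]):
                alerts.append({"rule": r["alert"], "record": d.get("record_id")})
    return alerts

deltas = [{"field": "accountstatus", "old": "Active", "new": "Inactive",
           "record_id": "0000-demo"}]
print(evaluate(deltas))  # [{'rule': 'account-deactivated', 'record': '0000-demo'}]
```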
Option C: Accept the loss. For a subset of rules where the fact that a field changed is enough, drop the value-comparison clause and alert on any change to the watched field. This widens the alert volume considerably. It's a fallback, not a replacement.
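What the Option C rewrite looks like in practice, as a sketch (field names illustrative):

```python
# Option C: drop the value comparison and alert on any change to a watched
# field, accepting the wider alert volume. Field names are illustrative.

WATCHED_FIELDS = {"accountstatus", "creditlimit", "paymentterm"}

def any_change_alerts(events: list[dict]) -> list[dict]:
    # No value comparison is possible -- the stripped event only names the field.
    return [e for e in events if e.get("Field", "").lower() in WATCHED_FIELDS]

stripped = [{"Field": "PaymentTerm"}, {"Field": "Description"}]
print(len(any_change_alerts(stripped)))  # 1
```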
Whichever path you pick, the migration sequence that actually holds is:
- Inventory. Grep your detection content (Sentinel rules, Sigma packs, Splunk content, custom KQL) for `OldValue` and `NewValue`. Anything that came out of `Audit.General` or Dataverse-sourced workloads is in scope.
- Triage. Sort by criticality (privileged-field rules first), by event volume (low-volume rules first to reduce blast radius), and by whether the rule is "any change" (still works) vs. "specific value transition" (broken).
- Replace. Build the Dataverse-side pull or rewrite as Dataverse-native rules. Validate end-to-end against a known test transition.
- Run both paths in parallel for a sprint. Compare alert counts pre- and post-migration. Resolve gaps before tearing the old (now empty) rules out.
- Update playbooks. SOC runbooks that say "pivot to OldValue/NewValue in the Audit row" need to be rewritten to "pivot to the Dataverse `audit_audit_details` link in the source environment."
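The inventory step is mechanical enough to script. A minimal sketch that walks a rule directory and reports which files and lines reference the removed fields — real detection repos vary, so treat this as a starting point:

```python
import re
import tempfile
from pathlib import Path

# Step 1 (inventory) as a script: find every rule file that references the
# now-empty OldValue/NewValue fields, with line numbers for triage.

PATTERN = re.compile(r"\b(OldValue|NewValue)\b")

def inventory(rule_dir: Path) -> dict[str, list[int]]:
    """Map each rule file to the line numbers referencing the gone fields."""
    hits = {}
    for path in sorted(rule_dir.rglob("*")):
        if not path.is_file():
            continue
        lines = path.read_text(errors="ignore").splitlines()
        matched = [i + 1 for i, line in enumerate(lines) if PATTERN.search(line)]
        if matched:
            hits[str(path.relative_to(rule_dir))] = matched
    return hits

# Demo against a throwaway rule file.
with tempfile.TemporaryDirectory() as d:
    rule = Path(d) / "status_flip.kql"
    rule.write_text('AuditLogs | where OldValue == "Active" and NewValue == "Inactive"\n')
    print(inventory(Path(d)))  # {'status_flip.kql': [1]}
```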
How To See The Next One
This pattern — the data still flows, the shape just changed — is not unique to Microsoft. It's the most common breaking-change pattern this year, by a good margin. Stripe's Dahlia release reshaped decimal fields in the SDK. GitHub silently retired seven org-security fields. Power Platform stripped two attributes from a deeply nested payload. The HTTP envelope didn't move. The endpoint didn't 404. The thing inside changed.
The defense is to watch the shape of every external response your detections, integrations, and consumers depend on. Not the volume, not the latency, not the status code — the shape. The presence and type of every field, on every payload you read.
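Shape-watching doesn't require much machinery to start. A minimal sketch: fingerprint the field set and types of a known-good event, then diff every live event against that baseline. A real implementation needs nested payloads, sampling, and tolerance for legitimately optional fields:

```python
# Minimal shape-watch: record a baseline fingerprint (field -> type name)
# and flag disappearances and type changes on live events.

def shape(event: dict) -> dict[str, str]:
    return {k: type(v).__name__ for k, v in sorted(event.items())}

def shape_drift(baseline: dict[str, str], event: dict) -> list[str]:
    current = shape(event)
    missing = [f"field gone: {k}" for k in baseline if k not in current]
    retyped = [f"type changed: {k} {baseline[k]} -> {current[k]}"
               for k in baseline if k in current and current[k] != baseline[k]]
    return missing + retyped

baseline = shape({"Operation": "Update", "OldValue": "Active", "NewValue": "Inactive"})
stripped = {"Operation": "Update"}  # the post-May 1 shape
print(shape_drift(baseline, stripped))
# ['field gone: NewValue', 'field gone: OldValue']
```

Run against the first post-cutover event, this fires immediately — the opposite of a rule that goes quietly empty.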
That's the gap FlareCanary plugs. Point it at the endpoints and event streams you depend on, and it learns the response structure, then alerts on field disappearances, type changes, and shape drifts. For a Purview/Dataverse setup this would have caught OldValue / NewValue going to null (or being absent entirely) on the first event after May 1, before the silence reached the alerts.
You don't strictly need a tool. You need a habit. Watch the runtime shape of every external response you depend on. Detection rules built on top of fields are only as durable as the fields. The ones in MC1239891 vanished cleanly, with no error, on a quiet Friday in May. The next ones will too.
If your security tooling depends on Dataverse audit field values — or you've been bitten by any silent shape-strip on a payload — drop a note. The "audit volume looks fine, the rules just stopped firing" failures are the exact kind we're tracking.