Every AI deployment now carries audit risk.
For high-risk AI in Europe, logging and documentation are legally mandated. In the United States, states are building their own rules with no consistency. In parts of Asia, certain high-impact AI systems face mandatory risk assessments and pre-deployment scrutiny.
Non-compliance blocks sales, delays integrations, and forces engineers to rebuild systems under pressure.
The only sustainable path is making compliance part of the design. Build systems that record evidence as they run.
Three Layers of Evidence
Despite regional differences, every regulatory framework asks for the same three proofs:
What was shipped: models, configurations, schemas, and artifacts.
Who changed it: access logs, RBAC/SSO identities, approvals, version history.
How changes were governed: linting results, governance checks, signed configs, CI-based controls.
These layers clarify what auditors mean when they ask for "documentation" or "technical files."
EU AI Act
The EU AI Act entered into force on August 1, 2024. Key obligations for high-risk AI systems phase in over roughly two to three years, with most provider duties applying from 2026-2027.
Once obligations begin, high-risk AI providers must:
Generate and retain logs for at least six months, longer where necessary based on the system's purpose or other EU/national laws.
Keep technical documentation and compliance records available for 10 years after the system is placed on the market or put into service.
Fines can reach up to €35 million or seven percent of global annual turnover, whichever is higher.
US State Approaches
With no comprehensive federal AI law, states have moved to fill the gap.
In the 2025 session, lawmakers in all 50 U.S. states considered at least one AI-related bill or resolution.
Colorado regulates "high-risk" systems with focus on algorithmic discrimination and transparency, requiring impact assessments, consumer disclosures, and notification to the Attorney General within 90 days when discrimination risks are discovered.
California targets frontier models rather than use cases. SB 53 requires large frontier-model developers to publish safety/transparency disclosures and report certain critical safety incidents within 15 days, or within 24 hours when there is imminent risk of death or serious injury.
New York maps high-risk directly to employment. The proposed Workforce Stabilization Act would require AI impact assessments before workplace deployment and impose tax surcharges when AI displaces workers.
Texas pursued both youth safety and AI governance. The Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026, establishes statewide consumer protections, defines prohibited AI uses, and creates enforcement mechanisms.
Utah requires disclosure when consumers interact with generative AI and mandates stricter upfront disclosure for licensed professions and designated 'high-risk' AI interactions.
Each state treats 'high-risk' differently: employment decisions, youth safety, frontier models, discrimination, transparency. Engineering teams design for multiple compliance targets with no federal standard to unify them.
Asia Pre-Deployment Review
China mandates lawful training data, consent for personal information, and clear labeling for synthetic content.
India initially proposed stricter draft guidance pointing toward mandatory government approval for some AI systems in early 2024, then revised its advisory and removed the explicit government-permission language while continuing to emphasize transparency, consent, and content safeguards.
South Korea's AI Basic Act, effective in 2026, will add mandatory risk assessments and local representation for high-impact systems.
Compliance Costs
Building Audit-Ready Infrastructure
Identity-Linked Event Trails
Audit-ready logs provide immutable, identity-linked events that allow teams to replay a decision path. If you cannot reconstruct what happened, you cannot satisfy traceability requirements.
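A minimal sketch of what identity-linked, tamper-evident logging can look like, assuming a simple hash chain where each event commits to the one before it. The class and field names here are illustrative, not from any specific product:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only event log. Each entry includes the hash of the
    previous entry, so altering or removing any event breaks the chain."""

    def __init__(self):
        self.events = []
        self._last_hash = "0" * 64  # genesis marker

    def record(self, actor: str, action: str, target: str) -> dict:
        event = {
            "actor": actor,    # verified identity, e.g. an SSO subject
            "action": action,
            "target": target,
            "ts": datetime.now(timezone.utc).isoformat(),
            "prev": self._last_hash,
        }
        payload = json.dumps(event, sort_keys=True).encode()
        event["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = event["hash"]
        self.events.append(event)
        return event

    def verify(self) -> bool:
        """Replay the chain to confirm no event was altered or dropped."""
        prev = "0" * 64
        for e in self.events:
            if e["prev"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The point of the chain is replayability: an auditor can re-verify the whole history from the events alone, which is exactly the traceability property regulators ask for.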
Policy Enforcement in CI/CD
Policy enforcement in CI prevents misconfigured models or insecure schemas from entering production. Every blocked change becomes an approval record.
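A CI policy gate can be as simple as a function that returns violations and fails the pipeline when any exist. The specific rules and thresholds below are illustrative assumptions, not drawn from any regulation:

```python
def check_model_config(config: dict) -> list[str]:
    """Minimal policy gate for a model configuration.
    Returns a list of violations; an empty list means the change passes."""
    violations = []
    if not config.get("owner"):
        violations.append("missing owner: every model needs an accountable identity")
    if config.get("logging_retention_days", 0) < 180:
        violations.append("log retention below the 180-day minimum")
    if config.get("pii_allowed") and not config.get("dpia_completed"):
        violations.append("PII processing without a completed impact assessment")
    return violations
```

Run in CI, a non-empty result blocks the merge, and the stored result doubles as the approval record the article describes.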
Access Governance
Access governance connects each action to a verified identity through SSO, SCIM, and RBAC. This creates a chain of accountability that enforcement agencies can verify.
Versioning
Versioning links prompts, models, and configurations to their exact commit or revision. This establishes a reproducible audit history for every component.
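A sketch of pinning an artifact to both its content and its commit, assuming a content-addressed record (the function name and fields are hypothetical):

```python
import hashlib

def pin_artifact(name: str, content: str, commit: str) -> dict:
    """Links an artifact (prompt, model config, schema) to a content hash
    and the commit that produced it, so an audit can reproduce it exactly."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    return {"name": name, "sha256": digest, "commit": commit}
```

Because the hash is derived from the content itself, two records match only if the artifact is byte-for-byte identical, which is the reproducibility guarantee auditors look for.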
Compliance Maturity Model
Audit Readiness Checklist
Step One: Map Evidence Gaps
Compare existing logs, approvals, and version history against regional requirements. Identify where traceability breaks or where evidence is missing.
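The gap analysis can be mechanized as a set difference between what a region requires and what your systems can already produce. The requirement sets below are simplified placeholders, not legal checklists:

```python
# Illustrative, simplified evidence requirements per region.
REQUIRED_EVIDENCE = {
    "eu_ai_act": {"event_logs", "technical_docs", "version_history", "approvals"},
    "colorado": {"impact_assessment", "consumer_disclosures", "event_logs"},
}

def evidence_gaps(region: str, available: set[str]) -> set[str]:
    """Returns the evidence types a region requires that we cannot yet produce."""
    return REQUIRED_EVIDENCE.get(region, set()) - available
```

Running this per region turns "where does traceability break?" into a concrete, reviewable list.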
Step Two: Stabilize the Basics
Ensure six-month log retention, CI policy checks, and identity-based approvals. These controls form the minimum reliable audit trail.
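The retention floor is easy to monitor continuously. A minimal sketch, assuming the log store exposes its oldest retained timestamp:

```python
from datetime import datetime, timedelta, timezone

def retention_ok(oldest_log: datetime, now: datetime,
                 minimum_days: int = 180) -> bool:
    """True if the log store's retained history spans at least the
    minimum window (180 days matches the EU AI Act's six-month floor)."""
    return now - oldest_log >= timedelta(days=minimum_days)
```

A system younger than six months will fail this check by construction, so a real monitor would also account for the deployment date.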
Step Three: Automate and Operationalize
Add inference-level tracing, compliance coverage KPIs, and region-specific audit bundles. This turns compliance from reactive work into automated evidence generation.
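A region-specific audit bundle can be a thin assembly step over evidence you already collect. The region keys and field names below are illustrative; map them to your real stores:

```python
import json

# Illustrative mapping of regions to the evidence they request.
BUNDLE_CONTENTS = {
    "eu": ["event_logs", "technical_docs", "version_history"],
    "colorado": ["impact_assessment", "event_logs"],
}

def build_audit_bundle(region: str, evidence: dict) -> str:
    """Assembles the evidence a region requests into one JSON bundle."""
    wanted = BUNDLE_CONTENTS.get(region, list(evidence))
    bundle = {k: evidence[k] for k in wanted if k in evidence}
    return json.dumps({"region": region, "evidence": bundle}, indent=2)
```

When this runs on a schedule, answering an audit request becomes retrieving the latest bundle rather than assembling one under deadline.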
When proof is generated automatically, regulation stops blocking delivery. Teams that build evidence into their systems answer audit requests in hours. Teams without it spend weeks reconstructing logs, defending gaps, and explaining why critical evidence doesn't exist.
Adapted from the original article on the WunderGraph blog.