On March 18, a US senator released a discussion draft for federal AI legislation — the TRUMP AMERICA AI Act. It proposes mandatory duty-of-care obligations, bias audits for high-risk systems, and training data transparency requirements. Three days earlier, the EU Council agreed to delay its own AI Act's high-risk rules by over a year.
Two continents. Two frameworks. Neither is finalized.
If you're building AI agents right now, this might feel like a reason to wait. Don't know which rules will apply? Don't build governance yet.
That instinct is wrong. Here's why.
What's actually happening
In the US, there is no federal AI law. What exists is a discussion draft — a proposal from Senator Blackburn that hasn't been formally introduced, needs bipartisan support, and faces opposition from both tech companies (too much regulation) and consumer groups (too much preemption of state laws). It's ambitious: mandatory risk assessments, FTC enforcement authority, expanded liability for AI developers, required bias audits for systems affecting health, safety, employment, education, law enforcement, or critical infrastructure.
But the bill provides almost no guidance on how to classify a system as "high-risk." That determination is left to organizations themselves, with significant liability if they get it wrong.
What IS already operational: a December 2025 executive order establishing a DOJ task force to challenge state AI laws in court, and an FTC directive on state bias-mitigation requirements. The federal government is actively trying to preempt state-level AI regulation, even without a federal replacement in place.
Meanwhile, 38 states enacted AI-related laws in 2025. Colorado's AI Act is live. Illinois amended its Human Rights Act to cover AI discrimination. California has transparency requirements for frontier models.
The practical result: if you deploy agents in the US today, you face a patchwork of state laws, an executive branch actively challenging those laws, and a proposed federal framework that may or may not replace them.
In the EU, the AI Act is enacted law — but the rules that matter most aren't enforceable yet. The Commission missed its own deadline for high-risk classification guidance. Two standardization bodies missed their deadline for technical standards. The "Digital Omnibus" package pushes high-risk system deadlines from August 2026 to December 2027 (standalone systems) or August 2028 (product-embedded systems).
But here's the catch: the Omnibus hasn't passed yet. If Parliament and Council don't agree before August 2026, the original deadlines technically apply — even though nobody has the guidance or standards to comply with them. The EU could enter a period where obligations are legally active but practically impossible to meet.
Meanwhile, parts of the Act are already live: bans on prohibited AI practices (social scoring, subliminal manipulation) have applied since February 2025, transparency obligations for AI-generated content take effect August 2026 regardless of the Omnibus, and GPAI model rules are already in force.
What's converging
The regulatory text is diverging — the US and EU disagree on approach, scope, enforcement mechanisms, and timeline. But read past the policy language and look at what both frameworks actually require organizations to produce. The infrastructure requirements are remarkably similar:
Logging. The EU AI Act (Article 12) requires automatic logging of AI system operations. The US bill requires risk assessments and documentation of algorithmic systems. Both want a record of what your system did and why.
Transparency. The EU (Article 13, Article 50) requires disclosure of AI involvement and labeling of AI-generated content. The US bill requires training data use records and inference data use records. Both want visibility into how AI systems process data.
Data provenance. The EU requires operators to document data sources, processing locations, and jurisdictional context. The US bill creates liability for using copyrighted or personal data without consent in AI training. Both want you to know — and prove — where your data came from.
Quality assurance. The EU requires conformity assessments, accuracy standards, and human oversight protocols for high-risk systems. The US bill requires bias audits and participation in evaluation programs. Both want evidence that your system produces reliable outputs.
Audit trails. Both frameworks assume that organizations can produce, on demand, documentation showing what their AI system did, what data it used, whether it was reliable, and whether appropriate oversight was in place.
This convergence isn't coincidental. These are the basic requirements of accountable software. Logging, transparency, provenance, and quality assurance aren't regulatory inventions — they're engineering practices that regulated industries have used for decades. The AI frameworks are adapting them for a new context, but the underlying infrastructure is the same.
What this means if you're building agents
AI agents have a specific version of this problem. An agent that calls external tools at runtime — looking up company data, validating tax numbers, screening sanctions lists, checking compliance — creates a chain of data dependencies that neither framework can ignore.
Every external data call your agent makes is potentially auditable: What data source did it use? Was the source reliable at the time of the call? Was AI involved in processing the response? What jurisdiction was the data processed in? How long is the data retained?
If your agent stack doesn't capture this information today, adding it later is expensive. You'd need to instrument every integration point, build a logging layer, create provenance metadata, design quality monitoring — and do it retroactively across an architecture that wasn't designed for it.
The practical argument isn't "comply with regulation X by date Y." It's that the infrastructure for accountable agent operations — logging, provenance, quality signals, audit trails — is the same regardless of which regulatory text ends up applying. Building it now costs less than retrofitting it later, and it works no matter what happens in Washington or Brussels.
The quality signal gap
There's one dimension where agent infrastructure is further behind than most teams realize: quality signals for external tools.
When a developer hardcodes an API integration, they test it. They know when it breaks. An agent discovering and calling tools at runtime has no equivalent — it trusts whatever comes back. If the API returns stale data, the agent doesn't know. If the response schema changed, the agent's output degrades silently.
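A minimal sketch of the check a hardcoded integration gets for free and a runtime-discovered tool doesn't. The required field set and the `as_of` freshness timestamp are illustrative assumptions, not any tool's actual contract:

```python
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"company_id", "name", "status"}  # hypothetical response schema
MAX_STALENESS = timedelta(hours=24)                 # arbitrary freshness budget

def validate_tool_response(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the response is trustworthy."""
    problems = []
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        problems.append(f"schema drift: missing fields {sorted(missing)}")
    ts = payload.get("as_of")
    if ts is None:
        problems.append("no freshness timestamp")
    else:
        age = datetime.now(timezone.utc) - datetime.fromisoformat(ts)
        if age > MAX_STALENESS:
            problems.append(f"stale data: response is {age} old")
    return problems
```

An agent that runs this before consuming a response at least knows *that* something degraded, even if it can't fix it.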
Regulators in both jurisdictions are starting to notice this gap. The EU AI Act's accuracy and robustness requirements (Article 15) apply to the entire system, including external data dependencies. The US bill's duty-of-care obligation covers "algorithmic systems and data practices." Neither framework will accept "we called an API and it returned JSON" as evidence of quality assurance.
Agent builders who treat external tool quality as someone else's problem are accumulating regulatory risk on both sides of the Atlantic — even before anyone agrees on which specific rules apply.
What to build now
If you're deploying agents that interact with external data, here's what both regulatory trajectories suggest you should have in place:
Per-call audit records. Every external data call should produce a structured log: what was called, what data source was used, what was returned, how long it took. Not for compliance theater — for debugging and accountability when something goes wrong.
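One way to sketch such a record in Python. The record shape, field names, and the wrapper are our own illustration, not a prescribed format:

```python
import hashlib
import json
import time
import uuid
from dataclasses import asdict, dataclass, field

@dataclass
class AuditRecord:
    """One structured record per external data call (illustrative shape)."""
    tool: str              # which tool/capability was called
    source: str            # underlying data source
    started_at: float      # unix timestamp of the call
    duration_ms: float
    status: str            # "ok" or "error"
    response_digest: str   # hash of the raw response, not the payload itself
    call_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def record_call(tool: str, source: str, fn, *args, **kwargs):
    """Wrap any tool call so it always emits an audit record."""
    start = time.time()
    try:
        result = fn(*args, **kwargs)
        status, raw = "ok", json.dumps(result, sort_keys=True, default=str)
    except Exception as exc:
        result, status, raw = None, "error", repr(exc)
    rec = AuditRecord(
        tool=tool, source=source, started_at=start,
        duration_ms=(time.time() - start) * 1000, status=status,
        response_digest=hashlib.sha256(raw.encode()).hexdigest(),
    )
    print(json.dumps(asdict(rec)))  # in production: append to a durable log sink
    return result, rec
```

Hashing the response instead of storing it keeps the log useful for tamper-evidence without turning it into a second copy of every dataset you touch.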
Provenance metadata. Each data source should have a documented chain: where the data comes from, how fresh it is, whether AI was involved in processing it, what jurisdiction it was processed in. This is the information that both the EU and US frameworks will eventually require, and it's useful for debugging long before any regulator asks for it.
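A provenance chain can start as a static record per data source. The field names and the example registry entry below are illustrative, not a schema either framework prescribes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    """Provenance metadata attached to one data source (illustrative fields)."""
    origin: str          # upstream source, e.g. a national business register
    retrieved_at: str    # ISO 8601 timestamp of the last fetch
    jurisdiction: str    # where the data was processed, e.g. "EU"
    ai_processed: bool   # whether an AI model transformed the response
    retention_days: int  # how long the raw response is kept

# Hypothetical registry mapping each tool to its provenance record.
PROVENANCE_REGISTRY = {
    "company_lookup": Provenance(
        origin="handelsregister.de",
        retrieved_at="2026-02-01T09:30:00+00:00",
        jurisdiction="EU",
        ai_processed=False,
        retention_days=30,
    ),
}
```

The point isn't the exact fields; it's that each source answers the auditor's questions in a machine-readable place instead of in someone's memory.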
Quality monitoring. External tools should have measurable quality signals — success rates, schema stability, data freshness. Your agents should be able to check these signals before trusting a response, not after.
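A rolling success rate is one minimal form of such a signal; the window size and threshold below are arbitrary assumptions:

```python
from collections import deque

class QualitySignal:
    """Rolling quality signal for one external tool."""
    def __init__(self, window: int = 100, min_success_rate: float = 0.95):
        self.outcomes = deque(maxlen=window)  # True = success, False = failure
        self.min_success_rate = min_success_rate

    def record(self, ok: bool) -> None:
        self.outcomes.append(ok)

    @property
    def success_rate(self) -> float:
        if not self.outcomes:
            return 1.0  # no evidence yet; assume healthy
        return sum(self.outcomes) / len(self.outcomes)

    def trustworthy(self) -> bool:
        """Agents consult this *before* calling the tool, not after."""
        return self.success_rate >= self.min_success_rate
```

Schema stability and data freshness can feed the same gate; success rate is just the cheapest signal to start with.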
Transparency markers. If AI was involved in generating or processing a response, that should be visible in the output. Both frameworks require this. It's also just good practice — downstream consumers of your agent's output deserve to know what was AI-generated.
None of this requires you to pick a regulatory jurisdiction. It's infrastructure that works everywhere because it's based on engineering principles, not legal text.
How we think about this at Strale
Strale is a capability marketplace for AI agents — 225+ data capabilities accessible via a single API call. But the part that's relevant to this discussion is what happens underneath: every call through the platform automatically generates a structured audit record with data provenance, quality scores, transparency markers, and regulatory cross-references.
We didn't build that layer because of the EU AI Act or the US bill. We built it because agents calling external data sources without quality signals and audit trails is an engineering problem, and engineering problems get worse when you ignore them.
The regulatory frameworks are catching up to what good agent infrastructure already requires. The developers who build governance into their stack now — whether through Strale or through their own instrumentation — won't need to scramble when the rules finally land.
Full methodology: strale.dev/trust
Try five capabilities free, no signup: strale.dev
Strale gives AI agents access to 225+ quality-scored capabilities via MCP, REST API, or SDK. Every capability is independently tested and scored. Get started free — €2 credit, no card required.