This week, the Trump administration released the first comprehensive National AI Policy Framework of its second term. The detail matters. The framework carries four load-bearing decisions for every team shipping AI features in the United States.
One. Federal preemption of state-level AI laws.
Two. Regulation through existing agencies, not a new one.
Three. Heavy emphasis on child-online-safety obligations.
Four. Copyright rules for AI training data.
Each of these changes your compliance surface in a different direction. Here is how.
## From 50 patchworks to one federal regime
Until this week, any AI product with US users had to navigate a patchwork of state laws. California SB-1047-style bills, Colorado's AI Act, Illinois's biometric and training-data statutes, Texas's employment-AI rules, and smaller state provisions in between. Each with its own disclosure requirements, audit triggers, and enforcement regime. A single SaaS launch meant lawyers mapping one product to fifty policies.
```mermaid
flowchart TD
    subgraph OLD["Before: 50 state patchworks"]
        CA[California<br/>SB-1047 era rules] --> APP1[Your AI app]
        CO[Colorado AI Act] --> APP1
        IL[Illinois BIPA<br/>+ training-data law] --> APP1
        TX[Texas employment rules] --> APP1
        DOTS[+ 46 more state regimes] --> APP1
    end
    subgraph NEW["After: federal preemption"]
        FED[Federal framework<br/>one regime] --> APP2[Your AI app]
        EXIST[Enforced through<br/>FTC / FDA / FCC / USPTO / SEC] --> APP2
    end
```
Federal preemption means the federal framework sets both the floor and the ceiling: states cannot impose stricter or divergent rules on the same ground. Legally this is contested territory. Preemption will be litigated, almost certainly reaching the Supreme Court within 18 months. But the framework's intent is clear: one federal regime, no state add-ons.
For compliance engineering this is the biggest structural change. One regulatory-attestation pipeline instead of fifty. One audit path. One set of required disclosures. If you were dreading the compliance matrix, it just shrank.
## Regulation through existing agencies
The framework explicitly rejects the "create a new AI super-agency" path that the EU and UK have moved toward. Instead, it delegates AI regulation to existing federal bodies, each enforcing AI rules in its own domain.
```mermaid
flowchart LR
    FRAME[Federal AI Framework] --> FTC[FTC<br/>Consumer protection<br/>deceptive AI claims]
    FRAME --> FDA[FDA<br/>AI in healthcare<br/>medical device software]
    FRAME --> FCC[FCC<br/>AI-generated calls<br/>robocall rules]
    FRAME --> USPTO[USPTO<br/>Copyright + training data]
    FRAME --> SEC[SEC<br/>AI disclosure in filings]
    FRAME --> COPPA[FTC/COPPA<br/>Child online safety]
```
The practical translation: the agency your team already deals with is the one that will write your AI rules. If your product is consumer-facing SaaS, FTC. If it touches health data, FDA. If it makes phone calls, FCC. If it trains on third-party content, USPTO. If you are a public company, SEC.
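That routing question can be made concrete in code. A minimal sketch of a first-pass lookup, where the surface names and the mapping are illustrative (drawn from the examples above), not an official taxonomy:

```python
# Illustrative mapping from product surface to the agency most likely to
# write its AI rules under the framework's delegation model. The surface
# names and the mapping are assumptions for this sketch.
AGENCY_BY_SURFACE = {
    "consumer_saas": "FTC",
    "health_data": "FDA",
    "voice_calls": "FCC",
    "third_party_training_data": "USPTO",
    "public_company_filings": "SEC",
    "minor_accessible": "FTC (COPPA)",
}

def enforcement_agencies(surfaces: list[str]) -> set[str]:
    """Return every agency with a plausible enforcement claim on the product."""
    return {AGENCY_BY_SURFACE[s] for s in surfaces if s in AGENCY_BY_SURFACE}
```

A consumer chatbot that also trains on scraped third-party content, for instance, maps to both the FTC and the USPTO, which is exactly the multi-agency exposure worth knowing about before the sub-rules land.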
This is pragmatic. It is also slower than a single agency would be, because each body has to issue its own sub-rules. Expect the first batch of agency-specific AI guidance within 90-180 days, rolling out across 2026-2027.
## Child online safety is the enforcement priority
Among the four framework pillars, child protection is where the teeth are. The framework directs the FTC to expand COPPA-adjacent enforcement into AI products — age verification, parental consent, minor-specific data handling rules for training data and inference logs.
If your product has any path where a minor can reach your AI feature, expect:
- Mandatory age-assurance mechanisms (not just self-attested birth dates)
- Explicit compliance attestations on training data: did minors' data end up in your training corpus?
- Stricter rules on persuasive or emotionally engaging AI in minor-accessible products
- Documentation requirements on safety-filtering efficacy, auditable by the FTC
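The last bullet, documenting safety-filtering efficacy in an auditable form, is mostly an engineering problem. A minimal sketch of an auditable filter-decision log; the field names are this article's assumptions, not an FTC schema, and hashing the content keeps the record verifiable without retaining a minor's data:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_safety_filter_event(log: list, *, session_is_minor_scoped: bool,
                               filter_name: str, triggered: bool,
                               content: str) -> dict:
    """Append one auditable safety-filter decision to a JSONL-style log.

    Stores a SHA-256 of the evaluated content rather than the content
    itself, so the record can prove which input was checked without
    keeping user data around. Field names are illustrative assumptions.
    """
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "minor_scoped": session_is_minor_scoped,
        "filter": filter_name,
        "triggered": triggered,
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
    }
    log.append(json.dumps(event, sort_keys=True))
    return event
```

In production the list would be an append-only sink (object storage, a WORM bucket), but the shape of the record is the part that matters for an audit.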
The teams affected first are consumer AI products — chatbots, companion apps, educational tools, any social-media integration. But the definition of "minor-accessible" is broad. If your B2B product is embedded in a platform that has minor users downstream, the obligations cascade.
## Training-data copyright: the seismic one
The most consequential and most unsettled element is the framework's direction on copyright and AI training data. The framework proposes, in broad strokes, a clearer path for commercial AI training on copyrighted material — subject to disclosure, rights-holder notification, and a licensing regime that is still being defined.
For context, a partial ongoing docket: New York Times v. OpenAI, Universal Music v. Anthropic, the GitHub Copilot class action, Getty Images v. Stability AI, and multiple author-collective suits against Meta. The framework's direction here effectively signals which way the federal government wants these to resolve.
For engineers, three things shift:
Training-data provenance becomes a tracked asset. If your team fine-tunes models on third-party data, you will need an audit trail. Where it came from, when it was acquired, what license governs it, what redactions or filters were applied. Today most teams handle this informally. Within 12 months it will be part of the compliance stack, alongside SOC 2 and data-residency documentation.
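A provenance record is small. A sketch of what one entry in that audit trail could look like; the schema is this article's assumption, since the framework has not published one:

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass(frozen=True)
class TrainingDataRecord:
    """One entry in a training-data provenance trail.

    Field names are illustrative assumptions, not a mandated schema.
    """
    source: str           # where the data came from (URL, vendor, internal)
    acquired_on: date     # when it was acquired
    license: str          # the license that governs it
    filters_applied: list[str] = field(default_factory=list)  # redactions, filters

    def to_manifest_entry(self) -> dict:
        """Serialize for a publishable training-data manifest."""
        entry = asdict(self)
        entry["acquired_on"] = self.acquired_on.isoformat()
        return entry
```

The win is not the dataclass; it is that every fine-tuning run gets a list of these written down at the time the data goes in, not reconstructed from memory when an auditor asks.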
Rights-holder notification may become mandatory. The exact threshold is unclear in the framework, but expect rules that require AI vendors to publish training-data manifests or to notify specific rights-holders when their works are used in training. If you run your own models, this obligation sits on you, not your provider.
Inference-time attribution becomes a feature, not an afterthought. Models that can cite the training-data source for a given claim will have a compliance advantage over models that cannot. Watch for inference-side changes from OpenAI, Anthropic, and Google that expose attribution metadata in API responses.
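If attribution metadata is coming, it is worth isolating the extraction behind a single seam now. The `attributions` field below is hypothetical; no provider's API exposes anything like it today, and the shape is purely this sketch's assumption:

```python
def extract_attributions(response: dict) -> list[dict]:
    """Pull attribution metadata out of a chat-completion-style response.

    The "attributions" field and its shape are hypothetical: a single
    place to update when providers start shipping real attribution
    metadata in API responses.
    """
    attributions = []
    for choice in response.get("choices", []):
        for attr in choice.get("attributions", []):
            attributions.append({
                "source": attr.get("source"),
                "license": attr.get("license"),
                "confidence": attr.get("confidence"),
            })
    return attributions
```

When the real field ships, only this function changes; everything downstream that logs or stores attributions stays put.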
## What to actually do this quarter
Not panic. The framework is a direction, not a law yet. Most of its provisions will take effect via agency rulemaking over the next 6-18 months. Your immediate actions are boring:
- Map your product to agencies. If you do not already know which existing federal agency has the strongest enforcement claim on your AI feature, figure it out. That is where the rules will come from first.
- Start a training-data inventory. If you fine-tune, write down what data went in, with dates and licensing status. This is cheap now and expensive later.
- Audit your minor-reachability surface. Be honest about whether your product can be used by minors or is embedded in platforms where it can. If yes, expect compliance work.
- Watch for agency-specific guidance. FTC and FDA will move first. Subscribe to their rulemaking dockets.
## If this was useful
Compliance without observability is a paperwork exercise. Compliance with observability is an auditable attestation. The book covers the trace-level attributes (`app.feature.owner`, `gen_ai.prompt.version`, retention policies, PII redaction in span attributes) that turn "we run an AI feature" into "we can answer an FTC request in an afternoon." Chapter 18 covers the role assignments: who owns what when a regulator shows up.
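For a taste of the redaction side, a minimal sketch of scrubbing PII from span attributes before export. The attribute names follow the ones above; the regex covers only emails and is deliberately a small illustrative subset, not a complete PII detector:

```python
import re

# Only prompt/completion payloads get scrubbed; ownership and version
# attributes stay intact so audit queries keep working.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
REDACTABLE_KEYS = {"gen_ai.prompt", "gen_ai.completion"}

def redact_span_attributes(attrs: dict) -> dict:
    """Redact email-shaped PII from span attributes before export."""
    out = {}
    for key, value in attrs.items():
        if key in REDACTABLE_KEYS and isinstance(value, str):
            out[key] = EMAIL_RE.sub("[REDACTED_EMAIL]", value)
        else:
            out[key] = value
    return out
```

In a real pipeline this runs as a span processor before the exporter, so raw PII never leaves the process.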
- Book: Observability for LLM Applications — paperback and hardcover on Amazon · Ebook from Apr 22.
- Also by me: Thinking in Go — Book 1: Go Programming + Book 2: Hexagonal Architecture
- Hermes IDE: hermes-ide.com — an IDE for developers who ship with Claude Code and other AI coding tools.
- Me: xgabriel.com · github.com/gabrielanhaia.

