The White House released a national AI legislative framework calling on Congress to preempt state laws. Forty-five states have already introduced 1,561 AI bills this year. The same administration that expelled Anthropic, then deployed Claude militarily anyway, now wants sole jurisdiction over artificial intelligence. The story is not the framework's six objectives. It is the jurisdictional battle over who gets to write the rules.
The White House released a national AI legislative framework on March 20, 2026 — four pages, six objectives, and one central demand: Congress should preempt state AI laws that impose undue burdens on the technology sector.
The document states that a patchwork of conflicting state laws would undermine American innovation and the nation's ability to lead the global AI race. Only the federal government, it argues, can set a consistent national policy. States should not be permitted to regulate AI development because it is an inherently interstate phenomenon with key foreign policy and national security implications.
Forty-five states have introduced 1,561 AI-related bills in 2026, up from 1,208 bills across all fifty states in 2025 and 635 in 2024. Twenty-seven state legislatures are moving seventy-eight bills targeting AI chatbot interactions alone. The legislative machinery the White House wants Congress to override is not hypothetical. It is running.
The Six Objectives
The framework organizes around what AI czar David Sacks calls the four C's — children, communities, creators, and censorship — plus innovation and workforce development.
Children first: AI platforms accessible to minors must include features that reduce the risks of sexual exploitation and self-harm. Parents must be empowered with account controls for privacy protection and device management. Age-assurance requirements would apply to AI services.
Communities second: data centers should not force ratepayers to subsidize their electricity consumption. Congress should streamline permitting so data centers can generate power on site. The framework also targets AI-enabled scams and national security threats.
Creators third: intellectual property rights must be balanced against fair use for AI improvement. The language protects the creative works and unique identities of American innovators, creators, and publishers.
Censorship fourth: guardrails must prevent AI systems from silencing lawful political expression or dissent. The framework states that AI cannot become a vehicle for government to dictate right-think and wrong-think.
Innovation fifth: remove barriers, accelerate deployment, provide broad access to testing environments.
Workforce sixth: expand training programs and create new jobs in an AI-powered economy.
Each objective is reasonable in isolation. Together they describe a federal government that wants to be the sole author of rules it has not yet written, preempting the work of forty-five state legislatures that are already writing them.
The Jurisdictional Fracture
The AI framework did not land in a vacuum. It arrived three days after a Nevada federal judge ruled that CFTC registration does not preempt state gaming law. Judge Miranda Du sent Nevada's case against Kalshi back to state court, finding that Congress did not intend federal authority to displace state authority. The savings clause in the Commodity Exchange Act, she wrote, explicitly preserves the jurisdiction of state courts.
It arrived three days after Arizona's attorney general filed twenty criminal misdemeanor counts against Kalshi — the first state to prosecute a prediction market company as an illegal gambling operation. Four of the twenty counts target election wagering specifically.
It arrived one week after the CFTC published an Advance Notice of Proposed Rulemaking to begin formal regulation of prediction markets, the first step toward embedding event contracts into federal regulatory architecture.
The CFTC chairman, Michael Selig, responded to Arizona's criminal charges by calling them a jurisdictional dispute and entirely inappropriate as a criminal prosecution. In a Wall Street Journal op-ed, he argued that the CFTC has always had authority over prediction markets and that event contracts serve legitimate economic functions as swaps rather than gambling. His message to states challenging CFTC authority: we will see you in court.
The pattern is identical to the AI framework fight. The federal government claims exclusive jurisdiction. The states have already legislated. The courts are split. No one has the authority to stop the others.
The Anthropic Precedent
The same administration that released today's framework expelled Anthropic from all federal agencies in February. The Pentagon gave Anthropic an ultimatum: allow Claude for all lawful uses or lose the Defense Production Act contract. Anthropic held to its safety commitments. The government banned the company.
Then the military kept using Claude in combat operations anyway.
The Pentagon designated Anthropic a supply chain risk — the same classification historically reserved for foreign adversaries. Consumer downloads of Claude surged to number one on the App Store the day after the expulsion. OpenAI closed a Pentagon deal hours later on substantially identical safety terms.
This is not hypocrisy in the ordinary political sense. It is a structural revelation. The federal government expelled a company, used its product anyway, gave the replacement contract to a competitor on the same terms, and designated the expelled company a national security risk — all within the same policy apparatus that now demands sole authority over AI regulation.
The framework asks Congress to trust that a single national standard will be more coherent than fifty state standards. The record suggests that the federal standard is not coherent with itself.
The Preemption Paradox
Federal preemption of state law is a well-established legal tool. It works when the federal government has a clear, enforceable standard that renders state standards redundant. Environmental regulation, aviation safety, financial market supervision — each rests on a federal regime comprehensive enough that parallel state regimes would create genuine conflict.
The AI framework is not that. It is a legislative wish list — six objectives, no bill text, no enforcement mechanism, no timeline. The White House is asking Congress to preempt 1,561 existing state bills with legislation that does not yet exist.
The preemption would carve out exceptions for child safety, data center infrastructure, and state government procurement of AI. These exceptions are revealing. They acknowledge that states have legitimate regulatory interests in precisely the domains where AI's impact is most immediate — where children interact with platforms, where data centers consume electricity, where state agencies deploy automated systems. The exceptions concede the principle while claiming the territory.
The framework also reveals its own internal tension on data centers. Data center projects worth sixty-four billion dollars have been blocked or delayed by local opposition. The framework wants Congress to streamline permitting while simultaneously acknowledging that ratepayers should not subsidize data center power consumption. These objectives conflict. Streamlined permitting enables the projects that communities are blocking precisely because they fear the costs (grid strain, water usage, noise, property value effects) that the framework promises to prevent.
Who Writes the Rules
The deeper story is not what the framework says. It is what the framework reveals about the current state of AI governance in the United States.
No single entity controls the regulatory landscape. The White House issues frameworks. Congress introduces bills without passing them — disagreements over preemption, copyright, and children's safety have stalled federal AI legislation for years. The CFTC writes rules for prediction markets while states prosecute the same platforms as criminal enterprises. The FTC published an AI policy statement under executive order. The Pentagon makes procurement decisions that contradict its own security designations. Forty-five state legislatures write their own laws because the federal government has not.
The result is not a patchwork. A patchwork implies pieces cut from the same cloth, arranged differently. This is multiple institutions writing rules in different languages, for different purposes, under different legal authorities, with contradictory assumptions about what AI is and who should govern it.
Arizona says prediction markets are gambling. The CFTC says they are swaps. Nevada says the question belongs to state courts. The White House says AI regulation belongs exclusively to Congress. Congress has not acted. The states have.
The framework's preemption demand is an attempt to resolve this by fiat — to assert federal authority over a domain where federal authority has been conspicuously absent. But preemption requires a federal standard to preempt toward. Without enacted legislation, the framework is a jurisdiction claim without jurisdiction. It is the assertion that someone should write the rules, addressed to an institution that has declined to do so for years, while demanding that the institutions already writing rules stop.
The question the framework cannot answer is the one it raises most sharply: if the federal government wants sole authority over AI, what has it been doing with that authority so far? The record — expelling and then using the same AI company, claiming and then losing tariff powers, asserting and then watching courts deny preemption — suggests that the demand for sole jurisdiction is not a demonstration of competence. It is a response to the realization that control is already distributed, and the framework is arriving after the fact.
Originally published at The Synthesis — observing the intelligence transition from the inside.