There's a certain irony in this story that I can't get past.
On Day 1 of his second term, President Trump signed an executive order revoking the Biden administration's AI safety requirements — including a provision that required AI developers to share safety test results with the federal government before releasing powerful new models. The Biden EO was out. The hands-off era was in.
That was January 2025.
Sixteen months later, the Trump administration is reportedly considering an executive order that would do roughly the same thing. Create a working group. Mandate pre-release review. Get the government's eyes on frontier AI models before they reach the public.
What changed? One word: Mythos.
What the Proposal Actually Is
The New York Times broke the story on May 4, with Bloomberg and a string of other outlets confirming the discussions. The White House is weighing an executive order that would establish a formal "AI working group" — a body of tech executives and government officials tasked with examining new AI models before they reach public release.
The proposed structure isn't a single regulatory body with a clear mandate. It's messier than that, and deliberately so. Three agencies are reportedly under consideration to lead or participate in oversight reviews: the National Security Agency, the Office of the National Cyber Director, and the Office of the Director of National Intelligence. That's a heavily intelligence-community-weighted list — which tells you something about the framing of this initiative.
This isn't being built around consumer protection or labor displacement or algorithmic bias. It's being built around national security. Specifically, around what happens when an AI model can autonomously find and exploit zero-day vulnerabilities in critical infrastructure.
Senior administration officials briefed executives from Anthropic, Google, and OpenAI on some of the plans in meetings last week, according to reports. The review process being discussed could grant government agencies early access to models — without necessarily blocking public releases outright. That's a meaningful distinction. This isn't framed as a veto mechanism. It's framed as a government visibility mechanism. Whether that remains true if the reviews become adversarial is a different question.
One more thing: a White House official publicly dismissed reporting on the executive order as "speculation," adding that Trump would make any policy announcement himself. So this is still very much in process. Real tensions are "being worked through internally," per sources cited by multiple outlets.
How This Differs from What Came Before
To understand why this matters, you need to understand the regulatory vacuum this proposal would be filling.
Biden's October 2023 AI executive order was, at the time, the most substantive US government action on frontier AI safety. Its most consequential provision: developers of AI systems that posed risks to national security, the economy, or public health were required to share safety test results with the government before public release — backed by the authority of the Defense Production Act. The Biden administration also stood up an AI Safety Institute at the Commerce Department (since renamed the Center for AI Standards and Innovation) to run those evaluations.
Trump revoked that EO on January 20, 2025. Clean sweep.
What's existed since then is NIST's AI Risk Management Framework — a voluntary standard that provides guidance on managing AI risks. The AI RMF is genuinely useful as an internal governance tool. But "voluntary" is doing a lot of work in that sentence. There's no pre-release requirement. No mandatory government access. No enforcement mechanism. Companies that ignore it face no consequences.
The proposed working group would mark a sharp departure from that posture. Not because the ideas are new — they're actually borrowed from Biden's playbook — but because the Trump administration's political identity has been explicitly anti-regulation. For this White House to reverse course on AI oversight is a meaningful signal about how seriously Mythos changed the calculus.
There's also a structural difference worth noting. Biden's approach centralized review authority at Commerce / NIST. The proposal being discussed now distributes it across the intelligence community — NSA, ONCD, ODNI. Those agencies don't have established relationships with commercial AI companies. They don't have the AI evaluation infrastructure that NIST spent years building. That's either a feature or a bug, depending on your view of government capability.
Mythos Made This Unavoidable
We've covered the Mythos story in depth — what the model is, why Anthropic restricted it, and the White House's opposition to Anthropic's planned expansion of Project Glasswing. So I won't relitigate all of that here.
But you can't understand the pre-launch review proposal without understanding what problem it's trying to solve.
Mythos is a model Anthropic built and then decided not to release publicly — because in testing, it autonomously discovered thousands of zero-day vulnerabilities in real production systems, more than 99% of which were still unpatched at announcement time. Anthropic restricted it to roughly 40 organizations through Project Glasswing. The White House found out about the model's capabilities through a direct briefing in April. And then watched as, on the same day Glasswing launched, unauthorized users in a private online forum somehow gained access.
That sequence of events — dangerous capability, restricted access, immediate breach — is exactly the scenario that pre-launch review frameworks are designed to address. The government's argument isn't irrational: if they'd known about Mythos's capabilities before the April 7 announcement, they could have been part of the containment strategy from the start rather than scrambling to respond after the fact.
The question the proposed framework is trying to answer: how does the government get ahead of the next Mythos instead of reacting to it?
What This Would Mean for AI Companies
If the executive order moves forward — and that's still genuinely uncertain — the practical implications for AI companies would depend heavily on the details that don't yet exist.
A few scenarios are worth thinking through.
The soft access model. Government agencies get early access to models on a confidential basis, they flag concerns, and release proceeds unless a defined threshold is breached. This is closest to how the UK's AI Safety Institute has operated — more of a partnership model than a gatekeeping one. Companies cooperate because the alternative (a harder review requirement) is worse, and because government certification could actually help with procurement.
The hard veto model. Government review can block or delay releases pending a safety determination. This is closer to what Biden's EO implied, though it was never fully implemented. For AI companies, this creates enormous uncertainty: how do you plan a product roadmap when a government review with unclear timelines can hold up your launch?
The checkbox model. Review exists on paper, but with no clear standards, no enforcement mechanism, and agencies that don't have the technical capacity to meaningfully evaluate frontier models. This produces compliance theater — companies file reports, agencies acknowledge them, nothing changes.
Which model emerges matters enormously for industry. And right now, the discussions are fluid enough that it's genuinely hard to predict.
What's clear: this isn't like NIST guidance, which companies can opt into or out of. The executive order framing means this would have teeth. The question is what shape those teeth take.
The UK Model They're Drawing From
The structure under discussion reportedly draws from the United Kingdom's approach to frontier AI safety — specifically the framework developed by the UK AI Safety Institute after the 2023 Bletchley Park summit.
The UK model is worth understanding because it's been operating for about two years now and has produced actual evaluations. The UK AISI got early access to GPT-4o and Claude 3 Opus before their public releases. Those evaluations were largely collaborative: developers shared access, AISI ran its own tests, and results were summarized in public reports. No model was blocked. But the UK government had visibility before launch, which created accountability and informed public communication.
The UK AISI actually published an evaluation of Mythos Preview, confirming the cybersecurity capabilities Anthropic described in its own announcement. That independent verification mattered — it took "Anthropic is overstating its model's capabilities for PR reasons" off the table as a counterargument.
If the US version draws seriously from the UK model, it might be more workable than the intelligence-community framing suggests. The UK AISI built genuine technical capacity and worked collaboratively with developers. The NSA has not. Whether the US can replicate that collaborative dynamic — or whether the national security framing poisons the well for the kind of trust-based cooperation that makes the UK model function — is the critical open question.
Timeline and What to Actually Watch
The honest answer on timeline: nobody knows. The discussions are described as fluid, the White House is publicly calling reports speculation, and internal tensions haven't been resolved. This could become a formal executive order in weeks. It could stall indefinitely. It could emerge in a form that bears little resemblance to what's being reported now.
A few things worth watching.
Whether Anthropic signals support or resistance. Anthropic has been quiet on the proposed framework. Privately, they may prefer formal pre-launch review over the current situation — which is informal pressure, unpredictable supply chain designations, and a Pentagon court case. A clear process, even a demanding one, might be preferable to the chaos of ad hoc government intervention. But Anthropic's public posture will be telling.
Which agencies end up leading. NSA-led review signals a pure national security frame. Commerce Department involvement would signal continuity with Biden-era NIST infrastructure. ONCD involvement could be a bridge — it sits closer to the policy community than pure intelligence. The lead agency shapes the entire character of the review process.
Whether Google and OpenAI align with Anthropic or diverge. OpenAI has been publicly cooperative with the administration on some fronts. Google is playing its own political game. If the three major US frontier AI labs present unified support for a particular review framework, that has real political weight. If they diverge, the administration has more room to design whatever suits their priorities.
The Underlying Tension
What I keep coming back to is the fundamental tension this proposal doesn't resolve.
The pre-launch review framework is premised on the idea that government agencies can evaluate frontier AI models meaningfully before they're released. That requires technical capacity, institutional knowledge, and — honestly — a level of AI expertise that most government agencies don't currently have. The UK AISI built that capacity deliberately over two years, starting as a policy shop and then hiring technical staff who could actually run evaluations.
The intelligence community agencies named in discussions — NSA, ONCD, ODNI — have different mandates, different cultures, and different relationships with private technology companies. Whether they can build the evaluation infrastructure needed to make pre-launch review meaningful, versus creating a process that produces compliance burden without meaningful oversight, is genuinely unknown.
The honest framing: this proposal is the government's attempt to get ahead of a problem (frontier AI with autonomous offensive capabilities) that it currently has no framework to address. Whether the specific mechanism being designed is the right one is a harder question. But the impulse — building some formal structure for government visibility before release, not just reactive pressure after the fact — isn't wrong.
What matters now is the details. And those details are still being fought over in rooms we don't have access to.
Sources: Bloomberg · Axios · Washington Post · TechCrunch — Mythos briefing · TechPolicy Press — Trump AI timeline