Key Takeaways
- Representative April McClain Delaney co-led the GUARDRAILS Act in March 2026, a bill designed to repeal a December 2025 Trump executive order that sought to impose a moratorium on state-level AI regulation.
- The GUARDRAILS Act directly challenges the federal administration’s push for consolidated AI regulatory authority, representing a legislative effort to restore states’ ability to establish their own AI safeguards.
- The debate over federal versus state control in AI governance is intensifying — with the prospect of fragmented, state-by-state regulation raising real compliance challenges for AI developers and deployers operating nationally.

A bill introduced in Congress last month could determine whether American states retain the power to regulate artificial intelligence — or whether that authority belongs exclusively to Washington. The GUARDRAILS Act, co-led by Representative April McClain Delaney, takes direct aim at a December 2025 Trump executive order that sought to freeze state-level AI policymaking in its tracks. What happens next will shape the compliance landscape for every organisation building or deploying AI in the United States.
The New Push for State-Level AI Authority
The executive order at issue — titled “Ensuring a National Policy Framework for Artificial Intelligence” — directed federal agencies to prioritise a “minimally burdensome” regulatory environment for AI, effectively signalling that state-level rules were unwelcome. The GUARDRAILS Act, whose full name is the Guaranteeing and Upholding Americans’ Right to Decide Responsible AI Laws and Standards Act, would repeal that order and explicitly prohibit it from taking effect. Co-sponsors include Representatives Beyer, Matsui, Lieu, and Jacobs, with a companion Senate bill introduced by Senator Brian Schatz.
Proponents argue that states need the flexibility to respond quickly to AI-related harms affecting their residents, without waiting for federal consensus that may never arrive. Critics counter that allowing 50 different regulatory regimes to develop independently could fragment the market and place an unsustainable compliance burden on the industry. Both concerns are legitimate — and understanding them requires working through the specific challenges that state-level AI regulation presents.
Defining AI and Its Scope for State Law
Before states can regulate AI, they have to define it — and that turns out to be harder than it sounds. AI encompasses everything from basic automated decision tools to large-scale generative models, and the technology evolves faster than most legislative cycles. Definitions that are too narrow risk obsolescence within years; definitions that are too broad risk capturing low-risk software that poses no meaningful harm, adding compliance costs without corresponding public benefit.
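To make that tradeoff concrete, here is a minimal Python sketch of two statutory-style scope tests. The fields, the "broad" and "narrow" definitions, and the example systems are all invented for illustration; they are not drawn from any actual bill.

```python
from dataclasses import dataclass

@dataclass
class System:
    name: str
    automates_decisions: bool  # produces outputs humans act on
    learns_from_data: bool     # behaviour derived from training data
    consequential: bool        # affects employment, credit, housing, etc.

def covered_broad(s: System) -> bool:
    """A broad definition: any automated decision tool is in scope."""
    return s.automates_decisions

def covered_narrow(s: System) -> bool:
    """A narrow definition: only learned systems making consequential calls."""
    return s.learns_from_data and s.consequential

spreadsheet_macro = System("payroll macro", True, False, False)
resume_screener = System("resume screener", True, True, True)

# The broad test sweeps in the low-risk macro; the narrow test would miss
# a rule-based but consequential tool that appears tomorrow.
print(covered_broad(spreadsheet_macro), covered_narrow(spreadsheet_macro))  # True False
print(covered_broad(resume_screener), covered_narrow(resume_screener))      # True True
```

Neither test is wrong in isolation; the drafting question is which failure mode a state can better tolerate, and how quickly the definition can be revised when the answer changes.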
Getting this right requires sustained engagement with technical experts, industry, and consumer advocates — not just at the point of drafting, but on an ongoing basis. States that treat their AI definitions as living frameworks, subject to regular review, are better positioned to remain relevant as the technology develops. Those that don’t may find their laws outpaced before enforcement even begins.
Navigating Jurisdictional Conflicts and Federal Preemption
The GUARDRAILS Act does not resolve the underlying tension between federal and state authority; it simply shifts the battlefield. Even if the bill passes, state AI laws remain vulnerable to legal challenge on grounds including federal supremacy and the dormant Commerce Clause, which limits states' ability to enact rules that unduly burden interstate commerce. The Trump administration's executive order relied on funding conditions and agency guidance rather than direct statutory preemption, but those levers still exerted significant pressure on state policymaking.
For businesses, the risk of a genuinely fragmented landscape is not theoretical. A company deploying an AI hiring tool nationally could, in principle, face distinct transparency requirements, audit obligations, and liability standards in each state where it operates. That kind of regulatory divergence has precedent in data privacy law — where the gap between California’s framework and other states’ approaches has created persistent compliance complexity — and AI is a considerably broader domain. This is a tension that courts are already beginning to navigate in related AI governance disputes.
Resource Constraints for Effective Enforcement
Passing an AI law and enforcing it are two very different things. Most state regulatory agencies were built to oversee traditional industries and lack the technical staff needed to audit algorithmic systems, assess model behaviour, or investigate AI-related complaints with any depth. Hiring data scientists and AI specialists into government roles — at salaries competitive with the private sector — is a challenge that even well-resourced federal agencies struggle with.
The consequence is predictable: laws that exist on paper but have limited real-world effect. Reactive enforcement, triggered only by complaints or high-profile failures, is unlikely to catch systemic harms embedded in AI systems that few people understand and fewer still can audit. States serious about AI regulation will need to invest in the institutional capacity to match their legislative ambitions — through dedicated technical units, partnerships with universities, or formal cooperation with federal bodies. Without that investment, the protection offered to citizens may be more symbolic than substantive.
Balancing Innovation with Public Protection
The innovation-versus-protection debate is often framed as a binary choice, but the more useful question is whether regulation is proportionate to risk. High-stakes AI applications — those making decisions about bail, benefits, employment, or medical treatment — warrant rigorous oversight, including mandatory impact assessments, human review requirements, and meaningful redress mechanisms. Lower-risk applications, such as recommendation engines or productivity tools, generally do not require the same level of scrutiny.
Risk-tiered frameworks are increasingly the international standard, reflected in approaches like the EU AI Act. States that adopt similar structures — targeting their regulatory resources where harm is most likely — are better positioned to protect residents without creating a hostile environment for AI development. Regulatory sandboxes and pilot programmes can also provide a middle path, allowing new applications to be tested under controlled conditions before full compliance obligations apply.
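As a rough illustration of how a risk-tiered framework might be structured, the following Python sketch maps application domains to tiers and attaches obligations per tier. The domain list, tier names, and obligations are hypothetical, loosely echoing the EU AI Act's categories rather than reproducing any statute.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"        # consequential decisions: bail, benefits, hiring, medicine
    LIMITED = "limited"  # user-facing tools warranting disclosure
    MINIMAL = "minimal"  # recommendation engines, productivity tools

# Hypothetical high-stakes domains; a real statute would define these precisely.
HIGH_STAKES_DOMAINS = {"bail", "benefits", "employment", "medical_treatment"}

# Obligations attach per tier, so oversight scales with potential harm.
OBLIGATIONS = {
    RiskTier.HIGH: ["impact_assessment", "human_review", "redress_mechanism"],
    RiskTier.LIMITED: ["transparency_disclosure"],
    RiskTier.MINIMAL: [],
}

def classify(domain: str, user_facing: bool) -> RiskTier:
    """Assign a risk tier from an application's domain and user exposure."""
    if domain in HIGH_STAKES_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.LIMITED if user_facing else RiskTier.MINIMAL

tier = classify("employment", user_facing=False)
print(tier, OBLIGATIONS[tier])
# RiskTier.HIGH ['impact_assessment', 'human_review', 'redress_mechanism']
```

The design point is that regulatory effort concentrates where the tier classifier says harm is most likely, rather than applying uniform obligations to every system.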
Implications for Interstate Commerce
The commercial implications of a state-by-state regulatory patchwork deserve direct attention. An AI developer building a product for national deployment faces a straightforward question: which state’s rules govern, and what happens when they conflict? Requirements around bias testing, data handling, transparency disclosures, and accountability mechanisms could vary significantly across jurisdictions, forcing companies to either build multiple product variants or accept that they cannot operate in certain markets.
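A minimal sketch of why that divergence compounds: assuming hypothetical states and requirement names (none drawn from actual statutes), a single nationwide build effectively has to satisfy the union of every state's obligations.

```python
# Hypothetical per-state obligations for one AI hiring tool; the states
# and requirement names are illustrative only.
STATE_REQUIREMENTS = {
    "CA": {"bias_audit", "pre_use_notice", "annual_reporting"},
    "NY": {"bias_audit", "candidate_notice"},
    "CO": {"impact_assessment", "pre_use_notice"},
}

def national_baseline(requirements: dict[str, set[str]]) -> set[str]:
    """Union of all state obligations: what one nationwide build must satisfy."""
    combined: set[str] = set()
    for obligations in requirements.values():
        combined |= obligations
    return combined

print(sorted(national_baseline(STATE_REQUIREMENTS)))
# ['annual_reporting', 'bias_audit', 'candidate_notice',
#  'impact_assessment', 'pre_use_notice']
```

Each new state adds to the baseline rather than replacing it, and the alternative to building for the union is maintaining per-state product variants or exiting markets.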
This burden falls hardest on smaller companies and startups, which lack the legal and compliance infrastructure to navigate complex multi-state requirements. Larger incumbents, by contrast, can absorb those costs — potentially turning regulatory complexity into a competitive moat. If states want to encourage a diverse and competitive AI ecosystem rather than consolidate market power among a handful of large players, some degree of coordination on common standards is not just pragmatic; it may be necessary. The cost of poorly coordinated AI governance is already well documented at the institutional level.
Aligning with Existing Data Privacy and Security Frameworks
Most states considering AI regulation already have data privacy laws on the books. California’s CCPA framework, along with sector-specific security requirements in finance and healthcare, creates an existing compliance architecture that new AI rules must fit alongside — or risk creating contradictions. AI systems are heavily data-dependent, and questions about how existing privacy rights apply to algorithmic decision-making and model training remain genuinely unsettled.
For example, a right to deletion under a state privacy law may conflict with the technical realities of how a trained model retains information. Transparency requirements designed to give individuals insight into automated decisions may clash with trade secret protections or, in some contexts, national security considerations. States drafting AI legislation need to map these interactions carefully, ideally in consultation with the agencies already responsible for privacy enforcement, to avoid creating a compliance landscape where organisations face irreconcilable obligations.
Ethical Considerations and Bias Mitigation at the State Level
States have historically been active in civil rights and consumer protection — areas directly implicated by AI systems that embed or amplify discriminatory patterns. Algorithmic bias in employment screening, credit decisions, housing, and criminal justice is well documented, and affected individuals often have limited visibility into how those decisions were made or how to challenge them. This is precisely the terrain where state-level regulation can add genuine value, particularly where federal civil rights enforcement is constrained or inconsistent.
The challenge lies in translating principles into enforceable rules. Defining “fairness” in statistical terms is contested even among researchers; mandating algorithmic transparency without exposing proprietary systems requires careful drafting; and assigning legal accountability for AI-driven harm involves questions of causation that existing tort law was not designed to address. State laws that require bias audits, algorithmic impact assessments, and clear redress pathways for affected individuals represent a meaningful step — provided they are written with enough precision to be enforceable and enough flexibility to remain relevant as AI capabilities continue to evolve.
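To see why operationalising fairness is hard, consider the “four-fifths rule” from US employment-selection guidance: a common but debated screening heuristic for disparate impact, and only one of several competing statistical definitions. The Python sketch below uses invented numbers purely for illustration.

```python
# A minimal disparate-impact check. A group whose selection rate falls
# below 80% of the best-performing group's rate is flagged for review.
selections = {
    # group: (candidates screened, candidates advanced by the AI tool)
    "group_a": (200, 90),
    "group_b": (180, 54),
}

rates = {g: passed / total for g, (total, passed) in selections.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

Here group_b's impact ratio of roughly 0.67 trips the threshold. But a statute that mandates this particular test has implicitly chosen one fairness definition over others, which is exactly the kind of drafting decision the paragraph above describes.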
Originally published at https://autonainews.com/seven-key-challenges-for-states-shaping-ai-policy-in-2026/