Federal Preemption in AI Governance: What the Expected Executive Order Means for Your State Compliance Strategy — AI: The Washington Report
Main Points:
- According to multiple media outlets, the Trump administration is circulating a draft Executive Order titled “Eliminating State Law Obstruction of National AI Policy.” If signed, the EO would represent a significant escalation in federal efforts to override state-level AI regulations.
- The draft EO apparently references California’s Transparency in Frontier Artificial Intelligence Act (S.B. 53) and Colorado’s Artificial Intelligence Act (S.B. 24-205), framing these statutes as examples of a “patchwork” regulatory landscape that the federal government seeks to harmonize under a national framework.
- The draft EO directs the Federal Trade Commission (FTC) to evaluate state AI-output requirements under 15 U.S.C. § 45 and proposes a DOJ-led AI Litigation Task Force to challenge state laws in federal court, signaling a potential new enforcement mechanism to advance federal preemption of AI regulations.
- Major tech industry associations have publicly supported federal preemption to promote uniform standards and limit regulatory fragmentation, while hundreds of civil society, labor, and consumer protection organizations have voiced opposition, warning that preemption could undermine transparency, accountability, and civil rights protections in AI deployment.
- If implemented, the EO would centralize AI governance and oversight at the federal level, leveraging executive branch litigation and spending levers to discourage state lawmaking, and potentially establish federal standards intended to supersede state-level AI laws. These actions would narrow the scope of state authority and curtail states’ recent efforts in areas such as algorithmic transparency, bias mitigation, and the regulation of high-risk AI applications.
On November 19, multiple news outlets reported that President Trump is considering an Executive Order (EO) aimed at challenging state efforts in AI governance. The reported draft EO follows a summer of mounting tension between state policymakers and the administration’s federal-first, deregulatory approach. If issued (potentially as soon as today, according to several sources), the EO would mark a significant federal intervention in the ongoing debate over the allocation of authority between the federal and state governments in AI governance.
Overview of the Reported Draft Executive Order
The draft EO reportedly contends that the proliferation of state AI laws, with “over 1,000 AI bills introduced by state legislatures,” poses risks to US competitiveness. It asserts that “American AI companies must be free to innovate without cumbersome regulation” and grounds federal preemption in the policy goal of advancing “America’s global AI dominance through a minimally burdensome, uniform national policy framework for AI.”
Under this policy directive, the reported draft EO includes the following operative elements:
- DOJ AI Litigation Task Force: The draft EO would establish an “AI Litigation Task Force” under Attorney General Pam Bondi to challenge state AI laws in federal courts, “including on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing federal regulations, or are otherwise unlawful.” The draft EO directs several White House advisors, including AI and crypto czar David Sacks, to identify state laws that may warrant litigation.
- Funding leverage: The draft EO directs the Commerce Department to assess whether federal funds under the Broadband Equity, Access, and Deployment (BEAD) program should be withheld from states whose AI laws are deemed inconsistent with the EO’s policy directive. This approach mirrors language from the previously failed Senate proposal for a 10-year moratorium on state AI regulations.
- Agency actions:
- Commerce Department: Within 90 days of the EO’s issuance, the Commerce Department is to identify state laws that do not comply with the EO and refer them to the AI Litigation Task Force. Reinforcing President Trump’s July Executive Order on “Preventing Woke AI in the Federal Government,” the draft EO calls for reviewing state AI laws that “require AI models to alter truthful outputs.”
- Federal agencies: Agencies are “to assess their discretionary grant programs” to determine whether grants should be conditioned on state non-enforcement of conflicting AI laws.
- The Federal Communications Commission (FCC) Chairman and David Sacks are to explore a federal reporting or disclosure standard for AI models that would preempt state requirements.
- The Federal Trade Commission (FTC) and David Sacks are tasked with issuing a policy statement on how the FTC Act’s prohibition on unfair or deceptive practices (15 U.S.C. § 45) applies to AI models. It further directs the FTC to “explain the circumstances under which State laws that require alterations to the truthful outputs of AI models” would be preempted by the FTC Act’s prohibition on unfair or deceptive practices.
- Broader legislative framework: Lastly, the EO reportedly directs David Sacks and the Director of the Office of Legislative Affairs to develop a legislative recommendation for a federal AI framework designed to preempt state laws in areas covered by the EO.
State Targets and Divergence from Federal Posture
The draft EO specifically references recently enacted state AI statutes, including California’s Transparency in Frontier Artificial Intelligence Act (S.B. 53) and Colorado’s Artificial Intelligence Act (S.B. 24-205), which it characterizes as contributing to a “patchwork” regulatory landscape, as we’ve previously covered. These state laws impose requirements on large frontier developers and high-risk AI system deployers, including requirements for transparency reporting, model-risk disclosures, and guardrails for high-risk decision systems used in employment, housing, health care, and education. The draft EO appears designed to curb regulatory divergence by limiting states’ ability to enforce such obligations, in line with the administration’s broader deregulatory approach to AI.
Both California’s and Colorado’s statutes diverge from the deregulatory stance favored by the Trump administration, which emphasizes minimal oversight of AI technologies to facilitate national AI competitiveness and infrastructure. This draft EO aligns with the White House’s AI Action Plan released this summer, which prioritizes advancing US leadership in AI through deregulation, as we’ve previously reported. On November 18, President Trump posted on Truth Social that China “could easily catch us” in the global AI race absent a uniform, nationwide AI framework, signaling the potential issuance of the EO. By seeking to reduce state-level regulatory barriers, the draft EO reinforces the administration’s objectives of accelerating AI innovation to promote US competitiveness.
Intersection with Industry Positions
Press reports note that several provisions in the draft EO align with positions advanced by major technology industry associations. Over the past year, these groups have expressed concern about regulatory fragmentation and have advocated for a single federal standard, a more narrowly defined interpretation of “unfair or deceptive” AI practices, and clearer limits on developer liability.
The draft EO’s emphasis on federal primacy, the FTC’s role in reviewing state output-related requirements, and the potential conditioning of federal funding on state non-enforcement reflect areas of convergence with these industry priorities. The policy effects, if implemented, would likely shift authority toward federal institutions and reduce the discretion of individual states.
At the same time, hundreds of organizations, including tech-worker unions, labor groups, AI-safety and consumer protection nonprofits, and academic institutions, submitted letters to Congress this week opposing efforts to preempt state AI regulations and cautioning against weakening existing AI safeguards. While the letters were directed at proposed federal preemption language in the FY2026 NDAA, their arguments are relevant to the draft EO. The coalition contends that “federal preemption would invalidate key state laws that protect against ‘high impact’ AI” and that it would be “virtually impossible to achieve a level of transparency into the AI system necessary for state regulators to even enforce laws of general applicability, such as tort or antidiscrimination law.”
FTC, the AI Action Plan, and Federal AI Governance Architecture
The draft EO builds on the administration’s AI Action Plan, which calls for ensuring AI outputs remain “viewpoint neutral” and rejects state-imposed standards tied to bias mitigation, civil rights protections, and content moderation norms. The directive to the FTC in the draft EO is particularly notable, as it would ask the agency to apply the FTC Act’s prohibition on unfair or deceptive acts or practices under 15 U.S.C. § 45 to AI models, an expansion of the agency’s historically corporation-focused enforcement role.
Under this framing, state AI laws such as those in California and Colorado, which impose obligations related to fairness, bias mitigation, non-discrimination, or content moderation, may be characterized as imposing ideological conditions on AI model outputs. The draft EO calls for reviewing state AI laws that “require AI models to alter truthful outputs” and furthers President Trump’s July Executive Order on “Preventing Woke AI in the Federal Government.” This represents a departure from traditional FTC practice, which has focused on regulating commercial entities rather than evaluating the legality of state regulatory mandates. If adopted, the FTC policy contemplated in the draft EO could support litigation by the DOJ’s AI Task Force by framing state output-related requirements as conflicting with federal consumer protection law. The EO’s structure suggests that the FTC’s interpretive role is intended to serve as a predicate for broader federal preemption efforts.
The draft EO also aligns with prior actions by the administration, including E.O. 14179, “Removing Barriers to American Leadership in AI,” which directs agencies to prioritize innovation and calls upon the FTC to review state laws insofar as they may constitute deceptive acts or practices affecting commerce. Similarly, the AI Action Plan directs the FTC to reassess investigations initiated under the Biden administration to “ensure that they do not advance theories of liability that unduly burden AI innovation.” Taken together, these directives underscore the administration’s emphasis on reducing regulatory constraints in favor of advancing national AI competitiveness.
Stakeholder perspectives on these federal preemption measures vary widely. The American Enterprise Institute (AEI) expressed its support for a federal AI framework in an October 28 letter, noting that “the single biggest risk to AI innovation isn’t from foreign competitors. It’s from poorly-designed state laws that undermine the innovation they claim to protect.” Several banking sector groups have also expressed support for the administration’s AI Action Plan and its push for federal preemption to establish a uniform national standard. By contrast, a coalition of more than 30 civil rights, consumer protection, and civil society organizations has voiced opposition to removing or weakening regulatory safeguards, particularly at the federal level, citing concerns regarding transparency, privacy, and the reliability and safety of AI systems.
Prior Preemption Efforts and Political Landscape
The draft EO follows a failed legislative effort led by Senator Ted Cruz (R-TX) this summer to impose a 10-year moratorium on state AI laws through the One Big Beautiful Bill, as we’ve previously covered. That proposal collapsed in a 99-1 vote, reflecting deep divisions, both across parties and within the GOP, over the balance between federal preemption and the role of states in shaping AI policy. Some supporters of the 10-year moratorium have since attempted to include it in the final Defense Authorization (NDAA) conference report, but given bipartisan concerns over the moratorium, that effort seems unlikely to succeed.
Following that legislative defeat, the administration now appears to be pivoting to executive action.
The question of federal preemption in AI governance has divided Republicans into two main camps: those who hold to the party’s traditional support for states’ rights, and those focused on “winning the AI race” who view federal preemption as the best path forward. The draft EO warns against allowing states like California and Colorado to become “the most restrictive states [who will] dictate national AI policy at the expense of America’s domination of this new frontier.”
Within the House Subcommittee on Judiciary Courts, IP, AI, and the Internet, Ranking Member Rep. Hank Johnson (D-GA) has argued that “state preemption would take away citizens’ common law right of action,” while Chairman Rep. Darrell Issa (R-CA) has argued that federal preemption would help prevent a patchwork of parochial AI regulations. Lawmakers who opposed the 10-year moratorium, including Sen. Ron Johnson (R-WI) and Sen. Josh Hawley (R-MO), cited the potential economic harms of a uniform federal framework and its disregard for states’ rights as reasons for their opposition. Governor Ron DeSantis (R-FL) also weighed in this week on X, stating that “denying the people the ability to channel these technologies in a productive way via self-government constitutes federal government overreach and lets technology companies run wild.” The draft EO’s main backers appear to be President Trump, his administration, and Silicon Valley technology companies with stakes in the AI regulatory landscape.
NDAA AI Provisions and Their Relevance to the Draft EO
The Senate’s FY2026 National Defense Authorization Act (NDAA), advanced on October 9, includes several AI-related provisions as well as the Guaranteeing Access and Innovation for National Artificial Intelligence Act of 2026 (GAIN Act). Together these provisions prioritize domestic compute capacity, supply chain security, and federal coordination of AI development — areas that broadly align with the administration’s stated federal-first approach to AI policy.
Although both chambers have passed their respective NDAA bills, a final conference report has not yet been issued; late December remains the expected timeline. The Senate and House Armed Services Committees pushed this week to wrap up negotiations, and action on the final report is likely over the next few weeks.
Against this backdrop, the draft EO, aimed at curbing “state law obstruction” of national AI policy, may be viewed as a broader federal consolidation strategy. Although some state legislators and members of Congress have pushed back on federal preemption, the Senate’s FY2026 NDAA signals that Congress may be moving toward a more nationally coordinated AI framework by prioritizing federal infrastructure, security, and compute capacity for AI. In effect, while Congress advances nationwide AI capabilities through the NDAA, the administration is simultaneously pursuing its own path to consolidate regulatory control and limit the emergence of conflicting state AI regimes.
Implications of Federal Preemption vs. State Authority
If implemented as reported, the EO would mark a sweeping shift toward federal preemption in AI governance, centralizing oversight at the federal level, using executive branch litigation and spending levers to discourage state lawmaking, and potentially establishing federal standards intended to supersede state-level AI laws. The result would be a narrower scope of state authority and a curtailment of states’ recent efforts in areas such as algorithmic transparency, bias mitigation, and the regulation of high-risk AI applications.
State AI statutes may continue to operate in areas not addressed by federal action, but the EO as drafted appears designed to limit state discretion where federal objectives relating to national competitiveness and AI innovation are implicated. The potential result would be increased federal consolidation and heightened uncertainty for states pursuing comprehensive AI regulatory regimes.
As of now, the document remains a draft EO as published by news outlets, and its provisions may change prior to issuance. We will update our reporting once a final version is released. Please feel free to contact us if you have questions about current practices or how to proceed.
Authors
Bruce D. Sokler
Member / Co-chair, Antitrust Practice
Alexander Hecht
ML Strategies - Executive Vice President & Director of Operations
Erek L. Barron
Member / Chair, Crisis Management and Strategic Response Practice
Christian Tamotsu Fjeld
Senior Vice President