Federal Takeover of AI Governance? Breaking Down the White House’s New Executive Order — AI: The Washington Report
Main Points
- On December 11, President Trump signed an Executive Order (EO) titled “Ensuring a National Policy Framework for Artificial Intelligence.” The EO is designed to preempt much of the states’ authority over AI governance and constrain recent state-level efforts to regulate AI. Its issuance follows several failed attempts to pass AI federal preemption through federal legislation this year.
- The signed EO reflects revisions from an earlier draft that was leaked and reported by several news outlets on November 19, which we’ve previously covered. The changes in the signed version appear to attempt to “preempt” some of the grounds for opposition raised against both the failed legislation and the leaked earlier draft, while preserving the administration’s core objective of limiting what it views as a fragmented and burdensome state regulatory landscape for AI.
- The EO represents a potentially unprecedented use of executive authority to preempt state-level AI regulations even before any substantive federal AI legislation has been proposed, let alone become law. We would expect legal and political challenges to its validity and to its implementation across several agencies in the coming months.
- The EO asserts that “[i]t is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” The EO does not have the force of legislation, nor does it itself preempt state laws. Instead, it attempts to centralize AI governance and oversight at the federal level by leveraging executive branch litigation authority and spending levers to discourage state lawmaking, and potentially by establishing federal standards intended to supersede state-level AI laws. These actions, if they affect state behavior either directly or indirectly, would narrow the scope of state authority and curtail states’ recent efforts in areas such as algorithmic transparency, bias mitigation, and the regulation of high-risk AI applications.
- It should be emphasized that while the EO signals a federal effort to preempt state AI laws, it does not suspend or invalidate existing regulations. And though the out-of-bounds lines remain unclear, the EO suggests, in an apparent attempt to blunt opposition, that states will retain authority, particularly in the areas the EO carves out, such as child safety, infrastructure, and procurement. Businesses should continue to comply with current state requirements while we monitor how courts, agencies, and state governments respond to the EO’s implementation.
On December 11, President Trump signed an Executive Order (EO) titled “Ensuring a National Policy Framework for Artificial Intelligence.” The EO is designed to preempt much of the states’ authority over AI governance and to constrain recent state-level efforts to regulate AI. The EO represents a doubling down in the administration’s push toward a federal-first approach to AI policy, following unsuccessful legislative efforts earlier this year to enact broad federal preemption of state AI laws. Last summer, the main attempt, which would have included a 10-year moratorium on state AI regulation in the One Big Beautiful Bill, failed by a 99-1 vote in the US Senate.
The final EO reflects revisions from a leaked draft that was reported by several news outlets on November 19, which we’ve previously covered. Issuance of the order was delayed in the weeks following those reports, reportedly due to opposition from members of Congress, including some within the president’s own party. The signed version appears calibrated in part to “preempt” some of those concerns, while retaining the administration’s core objective of limiting what it views as a fragmented and burdensome state regulatory landscape for AI.
Overview and Policy Orientation of the Executive Order
The EO states that “[i]t is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” The EO asserts that to carry out this policy, “United States AI companies must be free to innovate without cumbersome regulation. But excessive State regulation thwarts this imperative.” It advances the view that a fragmented regulatory landscape, described as “50 discordant State” approaches, impedes innovation and economic growth, and argues instead for a “minimally burdensome national standard.” Within this framework, the EO characterizes certain state laws as imposing requirements that compel AI models to alter truthful outputs or embed ideological bias. The EO cites a single concrete example, Colorado’s Artificial Intelligence Act (S.B. 24-205), as a law that, in the administration’s view, departs from its preferred deregulatory approach to AI oversight. (Perhaps notably, the final EO omits the California-specific S.B. 53 reference included in the leaked initial draft.)
The final signed EO elaborates on the operational elements previewed in the draft and incorporates an explicit policy statement that had been anticipated in earlier reporting. In our prior coverage of the initial draft, we outlined the EO’s key operative elements, most of which are retained in the final version, albeit with revisions in language, scope, and emphasis.
Key Points in the EO:
- DOJ AI Litigation Task Force: Before January 15, 2026, the Attorney General must establish an AI Litigation Task Force (the Task Force) whose “sole responsibility” is to challenge state AI laws that conflict with the federal policy of a minimally burdensome national AI framework. The Task Force can bring challenges on grounds such as interstate commerce clause violations, preemption by existing federal regulations, or other grounds deemed “unlawful in the Attorney General’s judgment.” It must consult periodically with key White House advisors, including the Special Advisor for AI and Crypto (David Sacks), the Assistant to the President for Science and Technology, the Assistant to the President for Economic Policy, and the Counsel to the President regarding which state laws warrant litigation challenges.
- Funding leverage: Before March 16, 2026, the Commerce Department is directed to assess whether federal funds under the Broadband Equity Access and Deployment (BEAD) program should be withheld from states whose AI laws are deemed inconsistent with the EO’s policy directive. This approach mirrors language from the previously failed Senate proposal for a 10-year moratorium on state AI regulations. Moreover, federal agencies and executive departments are “to assess their discretionary grant programs in consultation with the Special Advisor for AI and Crypto” to determine whether those grants may be conditioned on states either not enacting conflicting AI laws or entering into agreements not to enforce conflicting laws during the grant period.
- Agency actions:
- Commerce Department: Before March 16, 2026, the Commerce Department is to identify “onerous” state laws that don’t comply with the EO and refer those to the AI Litigation Task Force. Reinforcing President Trump’s July Executive Order titled “Preventing Woke AI in the Federal Government,” the EO calls for reviewing state AI laws that “require AI models to alter truthful outputs.” The Commerce Department may also, at its discretion, identify state laws that promote AI innovation consistent with the EO’s policy of a “minimally burdensome national policy framework for AI.” This “good laws” provision at the end of Section 4 did not appear in the leaked version of the EO. Compared to the initial draft, the signed EO also adds explicit language clarifying the types of state laws that are priorities for identification and referral to the Task Force, such as laws “that require AI models to alter their truthful outputs or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution.”
- The Federal Communications Commission (FCC) Chairman and the Special Advisor for AI and Crypto (David Sacks) are to explore a federal reporting or disclosure standard for AI models that would preempt state requirements, before March 16, 2026.
- The Federal Trade Commission (FTC) and David Sacks are tasked with issuing a policy statement before March 16, 2026 on how the FTC Act’s prohibition on unfair or deceptive practices (15 U.S.C. § 45) applies to AI models. It further directs the FTC to “explain the circumstances under which State laws that require alterations to the truthful outputs of AI models” would be preempted by the FTC Act’s prohibition on unfair or deceptive practices.
- Legislation: Lastly, the EO directs David Sacks and the Assistant to the President for Science and Technology to prepare a legislative recommendation for a federal AI framework designed to preempt state laws in areas covered by the EO.
- Critically, while the leaked draft previewed a legislative push for a federal AI framework, the final EO adds explicit carve-outs under Section 8(b) specifying categories of state law that should not be proposed for preemption, including child safety protections, AI compute and data center infrastructure (except general permitting), state government procurement and use of AI, and other topics as shall be determined. These carve-outs were not clearly outlined in the initial draft EO as reported and are significant substantive additions in the signed EO.
Prior Preemption and Federal Efforts
As mentioned, the EO follows a failed legislative effort led by Sen. Ted Cruz (R-TX) this summer to impose a 10-year moratorium on state AI laws through the One Big Beautiful Bill, as we’ve previously covered. That proposal collapsed in a 99-1 vote, reflecting deep divisions, both across parties and within the GOP, over the balance between federal preemption and the role of states in shaping AI policy.
The EO’s issuance also came nearly immediately after the release of the conferenced National Defense Authorization Act (NDAA) text by the House and Senate on November 30, following a months-long reconciliation process. Some supporters of the 10-year moratorium had attempted to include state AI preemption language in the NDAA, but bipartisan concerns over the measure prevented its inclusion in the final conference report. In response to these legislative setbacks, the administration has appeared to pivot to executive action through the EO.
The EO aligns in part with Trump administration initiatives, including the January Executive Order 14179, “Removing Barriers to American Leadership in AI,” which directed agencies to prioritize innovation and called upon the FTC to review state laws insofar as they may constitute deceptive acts or practices affecting commerce. Similarly, the White House AI Action Plan, released in July, directed the FTC to reassess investigations initiated under the prior administration to ensure they “do not advance theories of liability that unduly burden AI innovation.” The AI Action Plan also identified funding leverage as a mechanism to influence states with “onerous” AI regulations.
The rapid sequence of developments surrounding the EO underscores the intensifying federal-state conflict over AI authority. Just two days prior to the EO’s signing, the bipartisan State Attorneys General AI Task Force, with a founding focus on “anti-preemption,” sent a letter to Big Tech companies “urging them to implement safeguards on artificial intelligence (AI) chatbots to protect children and vulnerable people,” emphasizing states’ continued interest in regulating AI. (For additional background on the State Attorneys General AI Task Force, see our newsletter last week.) State activities in this space are not limited to so-called “Blue States,” as we discuss below the notable example of state-level activity in Florida.
Together these developments highlight the growing tension between federal and state approaches to AI regulation and signal that the EO represents a strategic pivot by the administration to assert national control over AI policy in the absence of congressional consensus.
Implications of Federal Preemption vs. State Authority
The EO represents a seismic shift toward federal preemption in AI governance through executive action. It centralizes AI governance and oversight at the federal level, leveraging executive branch litigation and spending levers to discourage state lawmaking, and potentially establish federal standards through legislation intended to supersede state-level AI laws. These actions would narrow the scope of state authority and curtail states’ recent efforts in areas such as algorithmic transparency, bias mitigation, and the regulation of high-risk AI applications.
Florida Governor Ron DeSantis signaled that President Trump’s EO on AI will not deter Florida from advancing its own AI policies, particularly in areas such as child safety and consumer protection. In his first public comments on the EO, Governor DeSantis said that Florida’s proposed measures, including an AI “bill of rights,” disclosure requirements for AI interactions, limits on AI-driven insurance claim denials, and restrictions on the use of AI in mental health services, would remain “very consistent” with federal guidance, even under a broad reading of the order. Emphasizing state authority, DeSantis asserted that Florida “has a right to do this,” expressing confidence that any state actions would withstand potential federal challenges. His remarks underscore how states may rely on the EO’s carve-outs, especially for child safety and infrastructure-related measures, to justify continued state-level AI regulation despite the administration’s broader push toward federal preemption.
The EO’s reliance on federal agencies to establish standards that would effectively preempt state AI laws raises significant legal and constitutional questions. Unlike traditional preemption efforts grounded in explicit congressional authorization, the EO largely depends on agency action, such as FTC policy statements, potential FCC rulemaking, and discretionary grant conditions, to displace state regulatory authority. This approach invites scrutiny over whether existing statutes provide sufficient authority for agencies to assert preemptive effect, particularly in light of the Supreme Court’s application of the major questions doctrine and increasing skepticism toward expansive agency interpretations absent clear congressional intent.[1]
The EO also raises broader separation-of-powers concerns, as courts may question whether the executive branch is using litigation, funding leverage, and regulatory guidance to accomplish a level of nationwide preemption that Congress itself has not enacted — and which the Senate overwhelmingly rejected just months ago. As a result, the EO’s implementation will likely face legal challenges testing the limits of administrative federal preemption, the Spending Clause, and the executive branch’s role in reshaping the federal-state balance in emerging AI regulation.
The EO does not by itself preempt any state actions. State AI statutes remain in effect and may continue to operate in areas not addressed by federal action, but the EO appears designed to limit state discretion where federal objectives relating to national competitiveness and AI innovation are implicated. While the EO signals the Trump administration’s effort to preempt state AI laws, it does not suspend or invalidate existing regulations, and states retain authority, particularly in the areas the EO carves out, such as child safety, infrastructure, and procurement. Businesses should continue to comply with current state requirements while we monitor how courts, agencies, and state governments respond to the EO’s implementation.
[1] In West Virginia v. EPA (2022) and related precedents, the Supreme Court emphasized that “[i]f an agency seeks to decide an issue of major national significance, a general delegation of authority may not be enough; instead, the action must be supported by clear statutory authorization.”
Authors
Bruce D. Sokler
Member / Co-chair, Antitrust Practice
Alexander Hecht
ML Strategies - Executive Vice President & Director of Operations
Erek L. Barron
Member / Chair, Crisis Management and Strategic Response Practice
Christian Tamotsu Fjeld
Senior Vice President