NIST Expands Federal AI Capacity; Florida Proposes AI Bill of Rights — AI: The Washington Report
Main Points
- On December 22, 2025, the US Department of Commerce’s National Institute of Standards and Technology (NIST) announced the establishment of two AI centers aimed at “strengthening U.S. manufacturing and cybersecurity for critical infrastructure.” Through an expanded partnership with nonprofit MITRE Corporation, NIST is investing $20 million to establish:
- the AI Economic Security Center for US Manufacturing Productivity and
- the AI Economic Security Center to Secure US Critical Infrastructure from Cyberthreats.
- The launch of these AI centers furthers Pillars I and II of the White House AI Action Plan, which emphasize accelerating AI innovation and building American AI infrastructure, particularly through investments in data centers and manufacturing ecosystems.
- Meanwhile, in Florida, on December 5, 2025, Gov. Ron DeSantis (R-FL) announced a sweeping AI “Bill of Rights” for consumers and a proposal regulating AI data centers, together intended to establish comprehensive state-level AI regulations and consumer protections. Both have been filed in the state Senate and House for consideration during the legislative session beginning on January 13.
- The Florida proposals aim to protect consumers and local communities from perceived harms associated with AI and the physical infrastructure that underpins it. They arrive amid a growing debate over state authority versus federal preemption, especially following President Trump’s Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence,” signed on December 11, 2025. We’ve written on how the EO seeks to limit state-level AI regulation in favor of a uniform national policy.
- Following President Trump’s EO, Governor DeSantis, who had unveiled Florida’s AI Bill of Rights days earlier, responded by asserting: “Even reading it very broadly, I think the stuff we’re doing is going to be very consistent.” He added: “But irrespective, clearly, we have a right to do this.”
NIST Launches Centers for AI in Manufacturing and Critical Infrastructure
On December 22, 2025, the US Department of Commerce’s National Institute of Standards and Technology (NIST) announced the establishment of two AI centers aimed at “strengthening U.S. manufacturing and cybersecurity for critical infrastructure.” Through an expanded partnership with nonprofit MITRE Corporation, NIST is investing $20 million to establish:
- the AI Economic Security Center for US Manufacturing Productivity and
- the AI Economic Security Center to Secure US Critical Infrastructure from Cyberthreats.
Both centers are designed to accelerate deployment of AI-driven tools and solutions in their respective domains by advancing applied research, standards, and public-private collaboration. MITRE will operate the centers in coordination with NIST experts, industry partners, and academic stakeholders.
This initiative is a key component of NIST’s Strategy for American Technology Leadership in the 21st Century, which aims to speed critical and emerging technologies from development to broader adoption. Pillars I and II of the White House AI Action Plan emphasize accelerating AI innovation and building American AI infrastructure, particularly through investments in data centers and manufacturing ecosystems. As highlighted in its press release, NIST’s approach reflects a direct commitment to these priorities, translating strategic recommendations into actionable programs that serve the administration’s stated goals of AI innovation and US competitiveness.
The new centers expand NIST’s broader AI ecosystem, including the Center for AI Standards and Innovation (CAISI), which aims to define best practices and measurement standards for both US and adversarial AI systems. In addition, NIST plans to award funding for an AI for Resilient Manufacturing Institute under the Manufacturing USA program. This forthcoming institute is expected to receive up to $70 million in federal funding over five years, matched by at least that amount from nonfederal sources, to embed AI across supply chains and production systems to enhance resilience and competitiveness.
Together, the $20 million and $70 million investments signal a multi-year federal commitment to scaling AI adoption in industrial contexts beyond experimental pilots and toward demonstrated value in real-world applications.
Positioning Within the US AI Regulatory Landscape
The launch of these AI centers comes amid broader federal efforts to shape a coordinated national AI policy framework. In late 2025, an Executive Order establishing the Genesis Mission underscored the administration’s ambition to harness AI for scientific discovery and economic growth across sectors, including advanced manufacturing and critical infrastructure, as we’ve previously covered.
In this context, NIST’s centers function as non-regulatory, capacity-building institutions, complementing federal standards designed to promote secure AI systems. By providing technical foundations and practical tools, these centers support the administration’s federal AI policy goals while avoiding the fragmentation created by divergent state AI laws and regulatory regimes.
NIST’s launch of the new AI centers is part of the Trump administration’s plan to prioritize federal primacy in AI governance and a coordinated approach to AI, in this case through capacity-building, over a compliance-first approach that varies from state to state.
Florida’s AI Bill of Rights
While the federal government rolls out a national AI strategy through initiatives like NIST’s newly established AI centers and federal standards efforts, states are increasingly asserting their own, more regulatory visions. In Florida, on December 5, 2025, Gov. Ron DeSantis (R-FL) announced a sweeping AI “Bill of Rights” for consumers and a proposal regulating AI data centers, together intended to establish comprehensive state-level AI regulations and consumer protections. Both have been filed in the state Senate and House for consideration during the legislative session beginning on January 13.
The Florida proposals aim to protect consumers and local communities from perceived harms associated with AI and the physical infrastructure that underpins it. They arrive amid a growing debate over state authority versus federal preemption, especially following President Trump’s Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence,” signed on December 11, 2025. We’ve written on how the EO seeks to limit state-level AI regulation in favor of a uniform national policy.
Florida’s proposal includes two legislative packages:
- AI Bill of Rights: 15 major provisions addressing consumer protection, child safety, and industry accountability.
- Data Centers Regulation: 12 provisions centered on preventing cost-shifting and protecting local control.
The draft legislation includes provisions aimed at bolstering privacy and transparency in AI interactions. Key measures include:
- Prohibiting AI from using a person’s name, image, or likeness without consent.
- Requiring explicit notices when users interact with AI chatbots.
- Enabling parental oversight of child-chatbot conversations and mandating alerts for concerning behavior.
- Banning AI “therapy” without human oversight.
- Reinforcing bans on deepfakes, particularly involving minors.
- Blocking state or local agencies from using Chinese-origin AI tools such as DeepSeek.
- Restricting insurers from relying solely on AI for claims adjudication, with mandated transparency and regulatory oversight.
The companion data-center proposal restricts development of hyperscale AI data centers and reflects mounting concerns across the state over the costs and local impacts of AI infrastructure, particularly electricity, water usage, and environmental effects in host communities.
Intersection with Federal AI Policy
Following President Trump’s signing of the “Ensuring a National Policy Framework for Artificial Intelligence” EO, which aims to preempt much of states’ authority over AI governance and curb recent state-level regulatory efforts, Gov. DeSantis, who had unveiled Florida’s AI Bill of Rights days earlier, responded directly. The Republican governor asserted: “Even reading it very broadly, I think the stuff we’re doing is going to be very consistent.” He added: “But irrespective, clearly, we have a right to do this.”
Where Trump’s EO seeks to preempt or neutralize state laws deemed overly regulatory, Florida’s proposal asserts a state right to regulate AI in areas like privacy, child welfare, and infrastructure-related impacts. DeSantis explicitly framed state-level legislation as necessary, stating Florida would not allow the federal government to strip away its ability to protect Floridians.
The EO does, however, exempt child safety measures from preemption, one of the core themes of Florida’s package (parental controls, deepfake bans, bot transparency). This carve-out could help Florida defend parts of its AI Bill of Rights if challenged under the EO. Indeed, the Florida legislative package and the newly signed EO share an emphasis on child safety and on consumer notification when interacting with AI. They diverge, though, in their assertions of authority: where Florida seeks to assert local authority over AI data centers, Trump’s EO threatens to cut federal funds to states that enact what it defines as “onerous” AI restrictions, which may include Florida’s data-center rules if they pass the state legislature.
The interplay between these approaches may shape where regulatory authority ultimately resides. Florida’s AI Bill of Rights represents a counterpoint to Washington’s drive for a national AI regime. It aligns with Trump’s EO on child protection but diverges sharply by advocating state-level consumer and infrastructure protections that may conflict with the EO’s push to minimize regulatory fragmentation. The carve-out for child safety may preserve some provisions, but Florida’s data-center oversight and broader privacy rules are likely to be the focal point of federal-state tension under Trump’s enforcement regime.
We will continue to monitor, analyze, and issue reports on these developments.
Authors
Bruce D. Sokler
Member / Co-chair, Antitrust Practice
Alexander Hecht
ML Strategies - Executive Vice President & Director of Operations
Erek L. Barron
Member / Chair, Crisis Management and Strategic Response Practice
Christian Tamotsu Fjeld
Senior Vice President