
Federal Agencies Take Sweeping Action on AI in Accordance with AI EO — AI: The Washington Report (Part 1 of 2)

  1. President Joe Biden’s October 2023 Executive Order on AI directed agencies to institute a significant number of actions on AI. On April 29, 2024, the White House announced that federal agencies had completed “all of the 180-day actions in the E.O. on schedule, following their recent successes completing each 90-day, 120-day, and 150-day action on time.”
  2. The 180-day actions involved agencies that spanned the executive branch and touched on a wide range of topics, including health care, national security, labor standards, and grid modernization.
  3. In this newsletter, we discuss three recently completed actions: a draft risk management framework on generative AI, guidance on the deployment of AI in the workplace, and recent actions taken by the Department of Energy on AI.

President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO) set in motion a series of rulemakings, studies, and convenings on AI across the executive branch. As covered in our AI EO timeline, April 27, 2024, marked the deadline for all actions ordered to take place 180 days after the signing of the executive order.

On April 29, 2024, the White House reported that federal agencies had completed all of the 180-day actions in the AI EO on schedule, with some actions finished ahead of schedule. In this newsletter, we will cover three of these actions in detail.[1] In next week’s newsletter, we will provide a summary of the remaining actions.

NIST Generative AI Risk Management Framework

In late April 2024, the National Institute of Standards and Technology (NIST) released a draft of a risk management framework for generative AI. NIST is inviting interested stakeholders to submit comments on this draft through the web portal before the deadline, which is 11:59 pm EDT on June 2, 2024.

Pursuant to the National Artificial Intelligence Initiative Act of 2020, NIST published the Artificial Intelligence Risk Management Framework (AI RMF) in January 2023. This document seeks to provide AI developers with “approaches that increase the trustworthiness of AI systems…to help foster the responsible design, development, deployment, and use of AI systems over time.” Since its publication, the AI RMF has received bipartisan and cross-sectoral approbation.

However, rapid advances in generative AI technology that have occurred since the release of the AI RMF have led some to worry that the document is already out of date. To address this, in June 2023, NIST announced the launch of a Public Working Group on Generative AI to “develop key guidance to help organizations address the special risks associated with generative AI technologies.”

Building off of the efforts of this working group, President Biden’s AI EO tasked NIST with “developing a companion resource to the AI Risk Management Framework…for generative AI.” On April 29, 2024, NIST released a draft of this document, entitled “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile.”

The document consists of two primary sections. The first concerns 12 risk factors unique to or exacerbated by generative AI (GAI). These risks include:

  • Facilitating CBRN Access: Lowering barriers to entry to “materially nefarious information related to chemical, biological, radiological, or nuclear (CBRN) weapons, or other dangerous biological materials.”
  • Environmental Degradation: “Impacts due to high resource utilization in training GAI models, and related outcomes that may result in damage to ecosystems.”
  • Intellectual Property Infringement: “Eased production of alleged copyrighted, trademarked, or licensed content used without authorization and/or in an infringing manner; eased exposure to trade secrets; or plagiarism or replication with related economic or ethical impacts.”

The second part of the document consists of a matrix of actions that organizations deploying AI can take to mitigate these risks. These actions are arranged in a manner that corresponds with the AI RMF, allowing organizations already utilizing that framework to easily adopt elements from this GAI-centered update.

Stakeholders interested in utilizing the GAI RMF are encouraged to review the draft and contribute comments.

Guidance on the Deployment of AI in the Workplace

Pursuant to the AI EO, the Department of Labor (DOL) has published a guide for federal contractors and subcontractors on avoiding discrimination in “hiring involving AI and other technology-based hiring systems.” Recognizing that some federal contractors may use AI systems to “increase productivity and efficiency in their employment decision-making,” the report warns that “the use of AI systems also has the potential to perpetuate unlawful bias and automate unlawful discrimination, among other harmful outcomes.”

To mitigate these potential harms, the guidance provides answers to common questions that employers may have about the use of AI in hiring decisions. After defining basic terms like “AI” and “automated systems,” the guidance outlines federal contractors’ Equal Employment Opportunity obligations related to the use of AI in employment decisions. These include:

  • Maintaining records and ensuring the confidentiality of records consistent with all Office of Federal Contract Compliance Programs (OFCCP) regulatory requirements.
  • Cooperating with the OFCCP by providing requested information on AI systems being deployed.
  • Making reasonable accommodations “to the known physical or mental limitations of an otherwise qualified applicant or employee with a disability as defined in OFCCP’s regulations, unless the federal contractor can demonstrate that the accommodation would impose an undue hardship on the operation of its business.”

Following the discussion of contractors’ legal obligations, the guidance covers best practices that contractors can adopt “to help avoid potential harm to workers and promote trustworthy development and use of AI.” These include:

  • Providing advance notice and due disclosure to applicants, employees, and their representatives if the contractor intends to deploy AI tools in the hiring process.
  • Engaging with employees in the design and deployment of AI systems used in employment-related decisions.
  • Not relying solely on AI systems in making employment-related decisions.
  • Ensuring that AI systems used to make hiring decisions are generally accessible to people with disabilities.

DOE Announces a Suite of Actions on AI

As the agency housing the National Laboratories, the Department of Energy (DOE) has long been at the forefront of global AI research and development. To capitalize on this expertise and equip the US grid for the challenges and opportunities of the AI age, President Biden’s AI EO directs the DOE to take a number of actions on AI. In advance of the 180-day deadline, the DOE has fulfilled many of these requirements, as indicated by an April 29, 2024 agency press release.

In this press release, the DOE announced the launch of two AI-powered tools. The first is VoltAIc, a $13 million initiative intended to leverage AI to streamline siting and permitting at the local, state, and federal levels. The second is PolicyAI, a large language model developed by the Pacific Northwest National Laboratory that is trained on over 50 years of data on federal projects, including environmental review documents, environmental assessments, and environmental impact statements. The DOE intends for this tool to “augment the efforts and expertise of federal agencies to improve and expedite environmental and permitting reviews.”

The DOE also announced that it will be “convening energy stakeholders and technical experts over the coming months to collaboratively assess potential risks that the unintentional failure, intentional compromise, or malicious use of AI could pose to the grid, as well as ways in which AI could potentially strengthen grid resilience and our ability to respond to threats.” These meetings, held pursuant to the AI EO, will build on the DOE’s recently released initial risk assessment on the implications of novel AI tools for critical energy infrastructure.

In addition to anticipating the opportunities and risks posed by AI for critical energy infrastructure, the DOE has been considering the potential strain on the grid posed by the data centers that support AI development. To support this effort, the DOE has established a new Working Group on Powering AI and Data Center Infrastructure. The working group anticipates that by June of 2024, it will make recommendations on “meeting energy demand for AI and data center infrastructure.”


This report covers only a subset of the actions recently completed by federal agencies pursuant to President Biden’s AI EO; those actions span fields ranging from national security to federal talent acquisition and cybersecurity. In next week’s newsletter, we will discuss and contextualize the remaining actions.

As we cover in our timeline of the AI EO, there are many more actions due over the coming months and into 2025. These include a NIST-led plan for “global engagement on promoting and developing AI standards,” standards from the Department of Education on AI and education, and an AI in Global Development Playbook published by the Department of State.

We will continue to monitor, analyze, and issue reports on these developments. Please feel free to contact us if you have questions as to current practices or how to proceed.



[1] To view a brief summary of all of the 180-day actions, reference this White House press release.




Bruce D. Sokler

Member / Co-chair, Antitrust Practice

Bruce D. Sokler is a Mintz antitrust attorney. His antitrust experience includes litigation, class actions, government merger reviews and investigations, and cartel-related issues. Bruce focuses on the health care, communications, and retail industries, from start-ups to Fortune 100 companies.

Alexander Hecht

ML Strategies - Executive Vice President & Director of Operations

Alexander Hecht is Executive Vice President & Director of Operations of ML Strategies, Washington, DC. He's an attorney with over a decade of senior-level experience in Congress and trade associations. Alex helps clients with regulatory and legislative issues, including health care and technology.

Christian Tamotsu Fjeld

Senior Vice President

Christian Tamotsu Fjeld is a Senior Vice President of ML Strategies in the firm’s Washington, DC office. He assists a variety of clients in their interactions with the federal government.

Raj Gambhir

Raj Gambhir is a Project Analyst in the firm’s Washington DC office.