
Federal Agencies Take Sweeping Action on AI in Accordance with AI EO — AI: The Washington Report (Part 2 of 2)

  1. President Joe Biden’s October 2023 Executive Order on AI directed agencies to take a broad set of actions on AI. On April 29, 2024, the White House announced that federal agencies had completed “all of the 180-day actions in the E.O. on schedule, following its recent successes completing each 90-day, 120-day, and 150-day action on time.”
  2. The 180-day actions involved agencies that spanned the executive branch and touched on a wide range of topics, including health care, national security, labor standards, and grid modernization.
  3. Last week’s newsletter discussed three major actions whose completion was announced at the end of April. This week, we cover the remainder of the actions completed pursuant to the AI EO in April 2024. 
     

 
President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO) set in motion a series of rulemakings, studies, and convenings on AI across the executive branch. As covered in our AI EO timeline, April 27, 2024, marked the deadline for all actions ordered to take place 180 days after the signing of the executive order.

On April 29, 2024, the White House reported that federal agencies had completed all of the 180-day actions in the AI EO on schedule and had finished some actions ahead of schedule. In last week’s newsletter, we covered three significant actions completed at the end of April: a generative AI risk management framework, guidance on the deployment of AI in the workplace, and a suite of actions by the Department of Energy on AI.

This week, we discuss the other actions pursuant to the AI EO completed by the 180-day deadline. In its press release, the White House sorted these actions into four categories:

1. Managing Risks to AI Safety

  • The Office of Science and Technology Policy (OSTP) has released a “Framework for Nucleic Acid Synthesis Screening” intended to “help prevent the misuse of AI for engineering dangerous biological materials.” By late October 2024, “federal research funding agencies will require recipients of federal R&D funds to procure synthetic nucleic acids only from providers that implement” the best practices outlined in this framework.
  • In addition to the Generative AI Risk Management Framework discussed in last week’s newsletter, the National Institute of Standards and Technology (NIST) has released three additional draft publications. Comments on each of these draft documents may be submitted through the Federal Register’s website and are due on June 2, 2024.
    1. Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, a guide offering “guidance on dealing with the training data and data collection process.”
    2. Reducing Risks Posed by Synthetic Content, a document “intended to reduce risks from synthetic content by understanding and applying technical approaches for improving the content’s transparency, based on use case and context.”
    3. A Plan for Global Engagement on AI Standards, a roadmap “designed to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.”

Along with these draft documents, NIST has launched NIST GenAI, a program that will evaluate generative AI technologies through the issuance of “challenge problems designed to evaluate and measure the capabilities and limitations of generative AI technologies.”

  • The US Patent and Trademark Office (USPTO) has put out a request for comments on “the impact of the proliferation of artificial intelligence (AI) on prior art, the knowledge of a person having ordinary skill in the art (PHOSITA), and determinations of patentability made in view of the foregoing.” Comments may be submitted through the Federal Register’s website and are due on July 29, 2024.
  • The Department of Homeland Security (DHS) has created Safety and Security Guidelines for Critical Infrastructure Owners and Operators, a guide that addresses risks posed by AI “to safety and security, which are uniquely consequential to critical infrastructure.” The DHS has also launched an “AI Safety and Security Board” to advise the Secretary, critical infrastructure operators, and other stakeholders “on the safe and secure development and deployment of AI technology in our nation’s critical infrastructure.”
  • The White House has reported that the Department of Defense (DoD) has made progress on a pilot tool that “can find and address vulnerabilities in software used for national security and military purposes.”

2. Standing up for Workers, Consumers, and Civil Rights

  • The principal action in this category — the guidance on the deployment of AI in the workplace — was covered in last week’s newsletter.

3. Harnessing AI for Good

  • Along with the actions described in last week’s newsletter, the Department of Energy (DOE) has announced funding opportunities “to support the application of AI for science, including energy-efficient AI algorithms and hardware.”
  • The President’s Council of Advisors on Science and Technology (PCAST) has authored a report to the president entitled Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges. The report diagnoses roadblocks to an effective AI R&D ecosystem in the United States and suggests solutions.

4. Bringing AI Talent into Government

  • The General Services Administration (GSA) will be onboarding its first-ever Presidential Innovation Fellows AI cohort during the summer of 2024.
  • The DHS will hire 50 AI professionals as part of the newly established DHS AI Corps tasked with building “safe, responsible, and trustworthy AI to improve service delivery and homeland security.”
  • The Office of Personnel Management (OPM) has released guidance “on skills-based hiring to increase access to federal AI roles for individuals with non-traditional academic backgrounds.”

Conclusion

Through the AI EO, President Biden has mobilized agencies across the federal government to institute policies, commission reports, and initiate rulemakings regarding AI. As comprehensive AI legislation has not been forthcoming, actions stemming from the executive branch may constitute the most significant form of AI regulation for the foreseeable future. Stakeholders interested in the state and trajectory of federal AI policy should closely track the implementation of the AI EO and related developments coming out of the executive branch.

Over the coming months and into 2025, many more AI EO actions will come due. These include recommendations by the USPTO on potential executive actions relating to AI and copyright (due July 2024), a report by the Attorney General on the use of AI in the criminal justice system (due October 2024), and standards from the Department of Education on AI and education (due October 2024).

We will continue to monitor, analyze, and issue reports on these developments. Please feel free to contact us if you have questions as to current practices or how to proceed. 


Authors

Bruce D. Sokler

Member / Co-chair, Antitrust Practice

Bruce D. Sokler is a Mintz antitrust attorney. His antitrust experience includes litigation, class actions, government merger reviews and investigations, and cartel-related issues. Bruce focuses on the health care, communications, and retail industries, from start-ups to Fortune 100 companies.

Alexander Hecht

ML Strategies - Executive Vice President & Director of Operations

Alexander Hecht is Executive Vice President & Director of Operations of ML Strategies, Washington, DC. He is an attorney with over a decade of senior-level experience in Congress and trade associations. Alex helps clients with regulatory and legislative issues, including health care and technology.

Christian Tamotsu Fjeld

Senior Vice President

Christian Tamotsu Fjeld is a Senior Vice President of ML Strategies in the firm’s Washington, DC office. He assists a variety of clients in their interactions with the federal government.

Raj Gambhir

Raj Gambhir is a Project Analyst in the firm’s Washington DC office.