Welcome to this week's issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.
The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates have exponentially increased the federal government’s interest in AI and its implications. In these weekly reports, we hope to keep our clients and friends abreast of that Washington-focused set of potential legislative, executive, and regulatory activities.
This issue covers Biden’s executive order (“EO”) on AI, announced on October 30. Our initial takeaways are:
- The executive order establishes programs, policies, and guidelines addressing AI’s impact on the following six issue areas: (1) Workforce Training and Modernization; (2) Privacy; (3) Advancing Equity and Civil Rights; (4) Safety and Security; (5) Competition, Fairness, and Consumer Protection; and (6) Investing in R&D and Government Use of AI.
- Some noteworthy provisions of the EO include streamlining the visa application process to encourage immigrants with AI expertise to study and work in the United States, the establishment of a pilot of the National AI Research Resource, and the commencement of a “government-wide AI talent surge.”
- The EO acknowledges the continued need for additional AI legislation. On the same day the EO was announced, the Group of Seven (“G7”) countries (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States) agreed to a code of conduct for companies developing advanced artificial intelligence systems.
Wide-Ranging Executive Order on AI Signed by President Biden
On October 30, 2023, President Biden issued his long-expected executive order (“EO”) on AI.
The EO addresses a wide range of issues, from safety and security to privacy and labor rights. While the EO primarily relies on the creation of non-binding guidance to achieve public policy goals, as anticipated, the order does hold certain AI developers to binding commitments they have made on AI safety. Furthermore, the EO establishes AI oversight and R&D bodies, including a pilot of the National AI Research Resource (“NAIRR”). Finally, the EO seeks to bolster domestic AI expertise by conducting a “government-wide AI talent surge” and by streamlining the visa application process to encourage immigrants with AI expertise to study and work in the United States.
The EO also initiates a number of AI programs and policies at the Department of Health and Human Services, which are discussed in depth in a companion newsletter to this piece.
The EO has broad implications for multiple sectors of the emerging AI economy, ramifications whose full extent can only be assessed over time.
Overview of Biden Executive Order on AI
The policies enacted by President Biden’s AI EO are based on the premise that AI “holds extraordinary potential for both promise and peril.” As such, the provisions of the EO include both inducements to AI R&D and restrictions on deployments of AI that are perceived to be harmful.
In this newsletter, we detail the main provisions of President Biden’s AI EO. We have divided these provisions into six main categories:
- Workforce Training and Modernization
- Privacy
- Advancing Equity and Civil Rights
- Safety and Security
- Competition, Fairness, and Consumer Protection
- Investing in R&D and Government Use of AI
Workforce Training and Modernization
AI is poised to have transformational impacts on the global labor market. According to one expert analysis, labor market shifts caused in part by AI may necessitate an additional 12 million occupational transitions by the end of the decade. The same report notes that with “millions of jobs potentially being eliminated by automation—and even more being created in fields requiring different skills—the United States needs broad access to effective training programs, as well as job-matching assistance that can help individuals find opportunities.”
The Biden EO seeks to utilize existing executive branch authority to begin to address these potentially imminent labor market disruptions through the following initiatives:
- Streamlining the visa application process by late January 2024 to facilitate the retention of immigrants with expertise in critical areas, including AI. By late February 2024, the Secretary of State is to consider implementing “a domestic visa renewal program…to facilitate the ability of qualified applicants, including highly skilled talent in AI…to continue their work in the United States…” By late April 2024, the Secretary of State is to consider rulemaking to expand the categories of non-immigrants who qualify for the domestic visa renewal program to include J-1 and F-1 recipients in science, technology, engineering, and mathematics, and to establish “a program to identify and attract top [overseas] talent in AI and other critical and emerging technologies…” By late April 2024, the Secretary of Homeland Security is to consider rulemaking to modernize the H-1B program to facilitate the process by which “noncitizens, including experts in AI and other critical and emerging technologies and their spouses, dependents, and children, to adjust their status to lawful permanent resident.”
- Conducting a “national surge in AI talent in the Federal Government” and providing AI training for federal employees at all levels.
- Directing the National Science Foundation and the Secretary of Energy to establish a pilot program that improves existing training programs for scientists, with the goal of training 500 new researchers by 2025.
- Producing a report by late April 2024 on potential AI-driven labor-market disruptions and studying the means by which the federal government can best support impacted workers.
Privacy
Stakeholders interested in the development of AI regulation, from lawmakers to agency heads, have stressed the connection between data privacy protections and AI regulation. Senator John Hickenlooper (D-CO) has claimed that since “AI trains on publicly available data…Congress needs to pass comprehensive data privacy protections” to wholly address the consumer protection issues raised by the development of AI. Samuel Levine, Director of the Federal Trade Commission’s (“FTC”) Bureau of Consumer Protection, has claimed that the agency’s AI regulation strategy has been shaped by Congress’s failure thus far to implement a comprehensive data privacy law.
Recognizing this connection, the Biden AI EO seeks to implement data privacy protections, including the following:
- Evaluating how federal agencies procure information from data brokers and updating privacy guidance for federal agencies to take novel AI risks into account.
- Funding a Research Coordination Network (a National Science Foundation research support program) on privacy-enhancing technologies such as cryptographic tools by late February 2024.
Advancing Equity and Civil Rights
In October 2022, the White House released the “Blueprint for an AI Bill of Rights” (“AI Bill of Rights”), a non-binding framework “to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.” In the lead up to the release of the AI EO, more than 60 civil society organizations and over a dozen members of Congress called on the White House to use the forthcoming EO to make binding the principles established by the AI Bill of Rights.
Ultimately, the EO does not make the AI Bill of Rights federal policy. However, the EO does implement several measures directed at bolstering privacy, civil, and labor rights in the AI age, including the following:
- Addressing algorithmic discrimination by initiating training and technical assistance programs between the Department of Justice and federal civil rights offices on best practices for investigating AI-related civil rights violations by late January 2024.
- Developing best practices on the use of AI by law enforcement and in the criminal justice process by late October 2024.
- Creating best practices on the deployment of AI in the workplace addressing issues including data collection and job displacement by April 2024.
- Issuing guidance to landlords, federal benefits programs, and federal contractors to not utilize AI algorithms in a discriminatory manner.
- Directing the Secretary of Energy to issue a report by late April 2024 on the potential for AI to improve electric grid infrastructure and facilitate the equitable provision of clean energy.
- Creating resources by October 2024 to encourage the responsible and effective use of AI educational tools in schools.
Safety and Security
With the emergence of accessible generative AI tools in late 2022, many experts expressed concern at the potential for AI to be misused to spread misinformation, undermine cybersecurity, and cause other societal harms. Concern grew to such an extent that tens of thousands of individuals, including Elon Musk, have signed an open letter calling for “all AI labs to immediately pause for at least six months the training of AI systems” of a certain complexity.
This widespread concern has led some prominent AI companies to call on Congress to implement comprehensive AI regulation to allay concerns about the technology. Absent final action from Congress, the executive branch has played a prominent role in drafting AI safety standards. On July 21, 2023, the White House announced that seven major technology companies had agreed to a framework on “Ensuring Safe, Secure, and Trustworthy AI.” In the following months, more companies announced their adherence to this framework.
Leveraging a mix of voluntary and mandatory guidelines, Biden’s AI EO builds on existing AI safety and security efforts through the following measures:
- Mandating that companies developing “any foundation model that poses a serious risk to national security, national economic security, or national public health and safety” conduct safety tests and share the results of these tests with the US government. The Secretary of Commerce is to implement this requirement by late January 2024.
- Instructing the Department of Energy to develop tools to evaluate AI outputs that may pose “nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards” by late July 2024.
- Directing the Secretary of Commerce to consult with the public regarding “risks and potential benefits of dual-use foundation models with widely available weights” by late July 2024.
- Instructing the National Institute of Standards and Technology (“NIST”) to draft companion resources to the AI Risk Management and Secure Software Development Frameworks reflecting advancements in generative AI technology. NIST is further instructed to create standards for pre-release red-team testing for AI models. These actions are to be completed by late July 2024.
- Directing DHS to establish an “Artificial Intelligence Safety and Security Board.”
Competition, Fairness, and Consumer Protection
A lack of AI legislation from Congress has not stopped consumer protection agencies from regulating AI.
As we have discussed in previous newsletters, the Federal Trade Commission (“FTC”) has repeatedly expressed its willingness and ability to utilize its existing statutory authority to regulate novel uses of AI. The agency has released pages of business guidance detailing proscribed uses of AI. In August, the agency filed a first-of-its-kind individual case concerning AI-related misrepresentations. The FTC is not alone in its AI enforcement efforts. The Consumer Financial Protection Bureau (“CFPB”) has proposed a rule that would enforce quality control standards on certain entities using automated systems to make credit decisions.
But despite these energetic enforcement activities, leaders of consumer protection authorities have signaled the need for expanded authority to regulate AI. A September 20 confirmation hearing saw FTC nominees express the need for Congress to draft new AI legislation.
Absent action from Congress, the Biden AI EO implements AI consumer protection programs across the federal government through the following measures:
- Encouraging the FTC to consider “whether to exercise the Commission’s existing authorities, including its rulemaking authority under the Federal Trade Commission Act…to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.”
- Instructing independent regulatory agencies to leverage “their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI…”
- Establishing by late December 2024 “guidance regarding…digital content authentication and synthetic content detection measures” and guidance on preventing generative AI from creating child sexual abuse material or non-consensual intimate imagery of real individuals.
- Instructing the Consumer Financial Protection Bureau and Federal Housing Finance Agency to consider requiring regulated entities to evaluate their underwriting models for bias.
- Encouraging the Federal Communications Commission to consider “actions related to how AI will affect communications networks and consumers…”
- Directing the Department of Health and Human Services to establish a program to receive and redress reports of “unsafe healthcare practices involving AI.” For further information on this and other healthcare provisions in the AI EO, please reference the newsletter published by the Mintz Health section.
Investing in AI R&D and Government Use of AI
As discussed in previous newsletters, the advancement of AI R&D has been a priority of the executive branch since the Obama administration. Years of study and consultation led an Office of Science and Technology Policy task force to recommend the creation of a National AI Research Resource, a federal body that would provide educational and computational resources for AI development, grow US capacity for AI research, and support AI workforce development.
The Biden AI EO seeks to build on existing efforts to spur AI R&D and productive government use of AI technologies with measures including the following:
- Establishing a pilot of the National AI Research Resource (“NAIRR”), a body that will provide educational and computational resources to spur AI R&D and workforce development. By mid-December 2023, relevant agency heads will submit reports identifying agency resources that could be integrated into the NAIRR pilot. By late January 2024, the pilot program implementing the NAIRR is to be operational. The EO sets operational targets the NAIRR is to meet by late April 2025.
- Issuing guidance by late March 2024 on the use of AI by federal agencies including standards on AI procurement and deployment and a requirement for each agency to appoint a Chief Artificial Intelligence Officer.
- Directing the Secretary of Commerce to consider the inclusion of “competition-increasing measures in notices of funding availability for commercial research-and-development facilities focused on semiconductors” in administering the CHIPS Act.
- Instructing the United States Patent and Trademark Office (“USPTO”) to publish to patent examiners and applicants guidance on inventorship and the use of AI by late February 2024 and guidance on considerations regarding AI and intellectual property by late July 2024.
- Directing the DHS and Defense Department to conduct an operational pilot to deploy AI to identify vulnerabilities in critical US government software, systems, and networks by late April 2024.
Conclusion: A Turning Point or a Stepping Stone in the Development of AI Regulation?
President Biden’s October 2023 AI EO leverages existing executive authority to implement wide-ranging programs across the federal bureaucracy. As anticipated, the EO relies on the promulgation of voluntary guidelines for AI safety and security and seeks to address AI’s impact on national security and the workforce. Beyond these measures, the EO encourages AI R&D, establishes consumer protection measures, and expands on the work begun in the AI Bill of Rights.
We would expect agencies like the FTC, which have already asserted existing authority to address AI issues, to feel even more empowered to bring actions under their competition and consumer protection authorities.
Meanwhile, on the world stage, the G7 countries, which include major EU countries, Japan, Canada, the United Kingdom, and the United States, agreed on a Code of Conduct for companies developing advanced artificial intelligence systems. The 11-point code “aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced foundation models and generative AI systems.”
Congress continues to hold AI Insight Forums, and lawmakers continue to release frameworks for AI legislation. The shape of potential comprehensive AI legislation is coming into view, though the exact contours of any eventual standard remain hazy. What is clear is that the programs, policies, and guidelines established by this EO will jumpstart activities throughout the government that will affect developers, users, consumers, and employees.
We will continue to monitor, analyze, and issue reports on these developments.
Red-teaming is a strategy whereby a technology developer directs a team to locate weaknesses in the integrity of its technology by emulating the behavior of an adversary, such as a hacker.