
The FCC Initiates a Proceeding to Explore the Impact of AI on Calling and Texting Practices — AI: The Washington Report

On October 25, 2023, the Federal Communications Commission (“FCC”) released a draft Notice of Inquiry (“NOI”) that, if adopted, would seek information about the impact of artificial intelligence (“AI”) on the FCC’s efforts to protect consumers from unwanted and unlawful calls and text messages under the Telephone Consumer Protection Act (“TCPA”).

Defining AI for TCPA Purposes

The NOI would seek comment on whether and, if so, how the FCC should define AI for purposes of determining which current uses of AI are relevant to the FCC’s statutory responsibilities under the TCPA.  The TCPA, among other things, prohibits unwanted calls using an artificial or prerecorded voice.  The NOI would observe that some definitions of AI may sweep in technologies beyond those encompassed by the FCC’s regulatory authority under the TCPA – i.e., technologies that emulate a human’s voice and speech and interact with consumers.  It would ask whether AI technologies that can enhance call analytics and robocall detection should factor into any definition of AI.  In addition, the NOI would seek comment on whether AI technologies could function in a way that qualifies as an autodialer under the TCPA’s definition.

The Benefits and Risks of AI Technologies for Robocalls and Robotexts

The NOI would seek comment on the ways in which AI technologies could protect consumers from unwanted and illegal robocalls and texts.  It would ask how calling platforms could use AI to identify and block unwanted calls and texts, and whether AI technology can improve calling networks’ ability to detect unwanted or fraudulent calls and block them at the network level.

The NOI would request current examples of how AI is being used in disruptive ways.  Specifically, it would ask how AI is or could be used for illegal, fraudulent, or otherwise unwanted robocalls and robotexts.

Future Steps to Address AI Technologies

The NOI would seek comment on the FCC’s legal authority under the TCPA to take additional steps beyond its initial inquiry.  It would ask whether there is any reason to conclude that the FCC’s existing TCPA authority – namely, its authority to prescribe technical and procedural standards for systems used to transmit artificial or prerecorded voice messages via telephone and to prohibit artificial or prerecorded voice calls to residential or cellular telephone numbers absent the prior express consent of the called party – does not already enable the agency to ensure that AI does not erode the TCPA’s consumer protections.

Further, the NOI would ask whether there are steps the FCC can or should take under its existing TCPA authority to ensure that consumers know when they are interacting with an AI-generated voice, particularly where the technology becomes the functional equivalent of a live human.  The NOI would seek comment on whether the FCC could require a “digital watermark” or a disclosure that a call is using AI.

Assessing AI as a Threat and the Tools Available to Respond

The NOI represents the FCC’s first step in considering whether to modify its robocall mitigation approach as the threats posed by AI continue to evolve with the capabilities of the technology.  In recent years, the FCC has focused on addressing robocalls in three primary ways:  (1) reducing unlawful traffic on U.S. telephone networks through rules requiring carriers to block certain calls that, based on data analytics, are likely to be illegal robocalls; (2) restoring consumers’ confidence that calls are legitimate by requiring the implementation of call authentication technology; and (3) taking swift, and often financially severe, enforcement action against robocalling operations.  However, as the NOI points out, AI poses novel challenges for each strategy.  For example, AI could increase call volumes while simultaneously disguising call traffic to avoid being blocked by carriers.  Likewise, if AI can sufficiently mimic human interaction, fraudulent calls that reach consumers could further erode consumer confidence.  Finally, as the FCC itself notes, it may not be clear cut whether responsibility for the harm from an AI scam call lies with the caller utilizing the AI or with the developer who designed and programmed the AI capable of the deception.

Despite these and other looming threats, Chairwoman Jessica Rosenworcel remains hopeful that AI will also provide the tools needed to address new robocalling concerns, stating that “AI is a real opportunity for communications to become more efficient, more impactful, and more resilient” and noting that “there is also significant potential to use this technology to benefit communications networks and their customers—including in the fight against junk robocalls and robotexts.”  Ultimately, with this NOI, the FCC aims to gather the information necessary to assess appropriate next steps.

The FCC will vote on the NOI at its November 15, 2023 meeting.  


Authors

Christen B'anca Glenn is a Mintz attorney who advises communications and technology clients on regulatory and compliance matters before the FCC.
Jonathan Garvin is an attorney at Mintz who focuses on legal challenges facing companies in the communications and media industries. He advises clients on transactional, regulatory, and compliance issues before the FCC involving wireless, broadband, broadcast, and cable matters.