FCC Continues Focus on AI, Targeting Robocalls and Political Advertisements
- On September 27, 2024, FCC Chairwoman Jessica Rosenworcel gave a speech on the FCC's regulation of AI, highlighting her goal of improving transparency in the use of AI technology in the communications industry. The speech follows recent enforcement actions targeting the use of generative AI in political robocalls, as well as two AI-specific NPRMs released this summer.
- The first NPRM addressed requirements regarding the disclosure of AI in political advertisements. The rulemaking, which will impact television and radio advertisements, passed in a 3-2 vote split along partisan lines, with the Democratic commissioners in favor of the proposal. The NPRM seeks to mandate disclosures regarding the use of AI in political advertisements for those who have existing legal obligations to file information about their television and radio advertisements.
- The Commission also adopted an NPRM on addressing consumer protections against AI-generated robocalls. These actions follow a declaratory ruling by the Commission in early 2024 that included AI-generated content in the scope of the term “artificial or prerecorded voice” and an enforcement action targeting the use of AI to imitate President Biden’s voice in robocalls.
Background
Government focus on artificial intelligence (AI) has grown in recent years, and the Federal Communications Commission (FCC or Commission) is no exception. FCC Chairwoman Jessica Rosenworcel outlined the Commission's efforts to regulate AI in a speech to the 7th Annual Berkeley Law AI Institute on September 27, 2024. Emphasizing that transparency in the use of AI technology in communications is critical, Chairwoman Rosenworcel reviewed recent FCC actions to combat potential harms posed by AI technology. These actions include: a declaratory ruling confirming that robocalls using AI voice cloning technology qualify as "artificial or prerecorded" calls subject to the Telephone Consumer Protection Act of 1991 (TCPA); a Notice of Proposed Rulemaking (NPRM) related to AI-generated robocalls; enforcement actions targeting the use of deepfake, generative AI technology in spoofed robocalls; and an NPRM related to AI disclosures in political campaign advertisements on radio and television.
Agency Focus on AI-Generated Robocalls
On August 7, 2024, the FCC approved an AI-focused NPRM on consumer protections against AI-generated robocalls and robotexts. The FCC also issued a Notice of Inquiry (NOI) to further explore some of the key areas related to AI-generated robocalls. Comments on these items are due October 10, 2024, with reply comments due October 25, 2024.
The FCC’s NPRM in this instance centers on its authority under the TCPA to limit telemarketing calls and the use of automatic telephone dialing systems, as well as artificial or prerecorded voice messages. The Commission issued a declaratory ruling on February 8, 2024, determining that the term “artificial or prerecorded voice” includes AI-generated content that resembles human voice, thereby rendering calls using AI-generated technologies subject to TCPA requirements.
In light of this, the Commission proposes to define “AI-generated call” as “a call that uses any technology or tool to generate an artificial or prerecorded voice or a text using computational technology or other machine learning, including predictive algorithms, and large language models, to process natural language and produce voice or text content to communicate with a called party over an outbound telephone call.” While the Commission already requires callers to obtain prior express consent from consumers when making autodialed or artificial or prerecorded voice calls, calls falling under this definition would be subject to the new, proposed rules, which include:
- A proposal requiring callers making calls with AI-generated artificial or prerecorded voice messages to include a “clear and conspicuous disclosure that the consumer's consent to receive artificial and prerecorded calls may include consent to receive AI-generated calls.”
- A proposal mandating the same disclosure for callers making autodialed text messages that include AI-generated content.
- A proposal requiring callers that use AI-generated voice to clearly disclose the use of such technology at the commencement of the call.
- An exemption from the TCPA requirements for artificial and prerecorded calls for callers with disabilities that may leverage AI technology to communicate.
The FCC seeks comments on whether it should require additional disclosures or impose additional constraints related to AI-generated calls, as well as the benefits and drawbacks of such disclosures and the value they could provide to the consumer. Specifically, the Commission seeks further comment on the scope of consent and whether the prior express consent framework of the TCPA requires additional, separate consent to receive AI-generated calls. The Commission also seeks comment on its proposed definition of “AI-generated calls,” including whether such a specific definition for the term is even necessary given the “artificial and prerecorded voice” language in the TCPA.
The NOI accompanying this NPRM opens the issue up to further comment in areas including real-time call detection technology, call blocking technology, privacy implications, use of the National Institute of Standards and Technology’s AI Risk Management Framework, and potential duplication with the secure telephone identity revisited and signature-based handling of asserted information using tokens (STIR/SHAKEN) authentication standards, which are validation requirements intended to prevent caller identification spoofing by robocallers that must be implemented by most voice service providers, including originating carriers.
The item passed with the approving votes of all Commissioners, though Commissioner Simington concurred only in part, diverging on the portion of the NOI addressing the active monitoring of phone calls.
The FCC has already demonstrated its willingness to enforce in this space: it recently settled and entered into a consent decree with voice service provider Lingo Telecom for originating spoofed robocalls that used generative AI voice cloning technology to spread disinformation in New Hampshire. The robocalls were an attempt to interfere with the presidential primary election, and the settlement resulted in Lingo agreeing to a $1 million civil penalty and a compliance plan requiring "strict adherence" to the FCC's STIR/SHAKEN caller ID authentication rules. The FCC labeled this compliance plan "historic" and the "first of its kind secured by the FCC," potentially signaling the agency's intent to pursue future enforcement actions in this space.

In the same vein, the FCC also recently adopted a Forfeiture Order issuing a $6 million fine against political consultant Steve Kramer for illegal spoofed robocalls made to potential New Hampshire voters in the presidential primary election using deepfake, AI-generated voice cloning technology. Kramer also faces legal action by the New Hampshire Attorney General, as well as a separate civil suit in which the U.S. Department of Justice submitted a statement supporting the right of private plaintiffs to challenge the robocalls as a form of coercion and therefore a violation of the Voting Rights Act. These actions make clear that parties across various branches of government are targeting enforcement in this space.
Of note, the FCC’s Consumer Advisory Committee (CAC) recently adopted recommendations to protect vulnerable populations from illegal robocalls, so the FCC is likely to continue acting in this space in the months to come.
Proposal on AI and Political Advertisement Disclosures
Additionally, in line with its efforts to mitigate disinformation in election-related robocalls, the FCC announced a rulemaking on July 25, 2024, proposing new AI transparency requirements for political advertisements. The proposed rules, advanced by the Commission in a party-line split on July 10, 2024, would apply to political advertisements containing AI-generated content that are aired on radio and television broadcast stations. Initial comments on the rule were due September 19, 2024, with reply comments due October 11, 2024.
In releasing the proposed rules regarding the use of generative AI in political advertising, the FCC cites the growing use of AI to mimic human voices and mislead Americans with disinformation and altered content. While the proposal acknowledges the importance of political advertising in helping voters make informed decisions about candidates and issues, it aims to address the potential of AI-generated content to mislead voters with deceptive information. With this proposed rulemaking, the FCC’s majority stated that it aims to balance the benefits of using AI in political advertising, such as tailoring messaging to specific communities or accelerating the creation of advertisements, against possible harms, like altered images or manipulated media, that could undermine the obligation of the regulated broadcast entities subject to the disclosure requirements to serve the public interest.
The FCC proposes to require radio and television broadcast stations; cable operators, Direct Broadcast Satellite (DBS) providers, and Satellite Digital Audio Radio Service (SDARS or satellite radio) licensees engaged in origination programming; and permit holders transmitting programming to foreign broadcast stations pursuant to section 325(c) of the Act to (1) provide an on-air announcement for all political advertisements that contain AI-generated content that discloses the use of that content in the advertisement; and (2) include a notice in their online political files for all political advertisements that include AI-generated content that discloses that the advertisement contains such content. The NPRM seeks to emphasize that the proposal is not to ban AI-generated content in political advertisements, but rather, to implement a mechanism to ensure that voters are informed when the advertisements contain AI. To achieve this goal, the FCC outlines several proposals in the NPRM:
- Definition of "AI-Generated Content": The FCC proposes to define “AI-generated content” as “an image, audio, or video that has been generated using computational technology or other machine-based system that depicts an individual’s appearance, speech, or conduct, or an event, circumstance, or situation, including, in particular, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors.” The FCC seeks comment on this definition or proposals for alternative definitions.
- Broadcaster Disclosure of AI-Generated Content in Political Advertisements: The Commission proposes that radio and television broadcast stations inquire whether political advertisements scheduled to air on their stations contain AI-generated content. Such advertisements would require an on-air announcement disclosing the use of AI-generated content, and broadcasters would also be required to file notices disclosing the use of AI-generated content in their online political files. The Commission seeks comment on this proposal and on how broadcasters can effectively make the necessary inquiries of those who purchase airtime.
- Applicability of Requirements: The FCC also proposes to have the rules apply to cable operators, DBS providers, and satellite radio licensees that are engaged in origination programming. The proposed rules would also apply to political advertisements broadcast under a section 325(c) permit, which is required when an entity produces programming in the United States and transmits it from a U.S. studio to a non-U.S. licensed station in a foreign country, from which the programming is then broadcast back to the United States.
In addition to seeking comment on the proposal, the FCC requests public input regarding its statutory authority to adopt the proposed rules regarding on-air disclosure and political file requirements for AI-generated content in political advertisements. The Commission cites section 303(r) of the Act, which authorizes the Commission to issue regulations and impose conditions on the use of the radiofrequency spectrum, as employed by the broadcast licensees encompassed by the proposed rules, as required for the “public convenience, interest, or necessity.” The Commission argues that the public interest standard defines a broadcast licensee’s duty and, therefore, the Commission has authority to enact rules in the public interest that impact the affected entities in this rulemaking.
Due to the statutory authority concerns and the perceived political valence of the proposal, the NPRM prompted immediate dissent from the FCC’s Republican commissioners and Republican members of Congress, especially given the ongoing presidential campaign season. Commissioner Brendan Carr, for example, called the rulemaking an effort by the Democratic majority of the Commission to “change the rules of the road in the run-up to the 2024 election” and chastised the FCC for what he characterized as a misguided and unlawful “attempt to fundamentally alter the regulation of political speech” and create a “recipe for chaos.” He further contended that the FCC is “grasping for authority” in an effort that would “empower the FCC to operate as the nation’s speech police.” Republican Commissioner Nathan Simington echoed these concerns, arguing that the standard proposed by the FCC would make it difficult to define what messaging falls under the proposed definition of “AI-generated content” and that the statutory authority for the FCC to pursue these rules is shaky at best.
The proposal, on the other hand, is strongly supported by the FCC’s Democratic commissioners. Chairwoman Jessica Rosenworcel described the proposal as a “major step to guard against AI being used by bad actors to spread chaos and confusion in our elections.” She further argued that the FCC has the statutory authority to require broadcasters to maintain a publicly available file for campaign advertisements and to conduct on-air disclosures.
The partisan reaction to this proposal preceded release of the NPRM, with House Republicans criticizing the FCC’s efforts to regulate AI in political advertisements in a House Energy & Commerce FCC Oversight hearing in July of this year, and Republican senators accusing the FCC of pushing a “dangerous proposal” that “risks confusing voters on the eve of a federal election.” In contrast, Senate Democrats recently wrote to Chairwoman Rosenworcel expressing their support for the proposal as a way to combat potential disinformation spread using AI-generated content and urging the Commission to quickly finalize and implement the proposed rules.
Republican criticism extended across agencies, with the Republican Chairman of the Federal Election Commission (FEC), Sean Cooksey, expressing his concern to Chairwoman Rosenworcel that the proposal would fall within the “exclusive jurisdiction” of the FEC and “sow chaos among political campaigns for the upcoming election,” especially considering the FEC’s ongoing rulemaking effort to regulate the use of AI in political communications. In contrast, the FEC’s Vice Chair, Democrat Ellen Weintraub, supports the FCC proposal and argues that the effort and responsibility to combat misleading or false AI-generated political content does not fall solely to one agency, but rather, requires action by multiple agencies that have jurisdiction over different spaces impacted by the issue.
Takeaways
The FCC’s actions in the late summer and early fall of 2024 continue the agency’s focus on the impact of AI and its efforts to mitigate the technology’s potential negative effects on consumers and voters as the capability and use of AI expand at an increasingly rapid pace. As a practical and procedural matter, it is unlikely that the FCC could adopt final rules in the political advertising proceeding before Election Day, let alone sufficiently in advance to have a meaningful effect on advertising this cycle. Even so, the agency is likely to continue working to understand and address, on a first-principles basis, the potential benefits and risks of the technology in order to shape on the front end how it is developed and deployed within and across the technology, media and telecommunications (TMT) sector.
Comments on the political advertising NPRM were due on or before September 19, 2024, with reply comments due on or before October 11, 2024. Comments on the AI-generated robocalls NPRM will be due on October 10, 2024, with reply comments due October 25, 2024. Our team will be tracking this space closely and is available to answer questions you may have.