
May 23, 2024 – The Federal Communications Commission (FCC) is taking steps to enhance transparency around the use of artificial intelligence (AI) in political advertising on broadcast television and radio. On Wednesday, FCC Chairwoman Jessica Rosenworcel unveiled a proposal that would mandate disclosure of AI-generated content in political ads.
The proposed rule aims to address growing concerns about the potential for AI technologies, particularly deepfakes and synthetic media, to mislead or deceive voters in the upcoming 2024 U.S. presidential and congressional elections. While the FCC's jurisdiction is limited to broadcast TV, cable, satellite, and radio providers, the move signals a broader push for accountability and transparency in the rapidly evolving AI landscape.
“As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” Rosenworcel stated in a release. “Today, I've shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”
If adopted by the five-member commission, the rule would require political advertisers to include on-air and written disclosures when their ads contain AI-generated content. This could encompass AI-generated visuals, audio, or text, including deepfakes – altered images, videos, or audio recordings that depict people saying or doing things they did not actually say or do.
The FCC's proposal comes amid heightened scrutiny of AI's role in elections and political campaigns. In January, a Democratic operative working for the Dean Phillips campaign created a deepfake robocall imitating President Joe Biden's voice, urging Democrats not to vote in the New Hampshire primary. The incident prompted the FCC to affirm that AI voice-cloning tools in robocalls violate existing laws.
Advocacy groups and lawmakers have also sounded the alarm on the potential threats posed by AI-generated content in elections. “This rulemaking is a positive development as the use of deceptive AI and deepfakes poses a threat to our democracy and has already been employed to undermine trust in our institutions and elections,” said Ishan Mehta, media and democracy program director at Common Cause.
While the FCC's proposal does not outright prohibit the use of AI in political ads, it aims to provide voters with crucial information to assess the content they encounter. The commission will seek public input on defining AI-generated content and determining whether disclosures should be required for issue-based ads in addition to candidate ads.
The proposed rule aligns with broader efforts to regulate AI's impact on elections. In March, Senators Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) introduced the AI Transparency in Elections Act, which would mandate AI disclaimers on political ads and empower the Federal Election Commission (FEC) to respond to violations.
However, with just over five months until the November elections, the prospects of comprehensive federal legislation remain uncertain. The FEC itself has yet to finalize rules on AI content disclosure in political ads, despite calls from lawmakers and public support.
In the absence of federal action, some states have taken matters into their own hands. California, for instance, prohibits the distribution of “materially deceptive” AI-generated media about candidates without proper disclosure. Florida has proposed similar legislation, while Illinois is considering classifying deepfake videos as election interference.
As the regulatory landscape evolves, broadcasters and political advertisers must navigate a complex web of state and federal laws governing AI's role in elections. Failure to comply with disclosure requirements could expose them to legal liabilities and reputational risks.
“Broadcasters should consult communications counsel before airing any advertisement that contains AI-generated content, particularly where it might be considered deceptive,” advised Pillsbury Law Firm in a recent alert.
Beyond the legal implications, the FCC's proposal underscores the broader ethical considerations surrounding AI's impact on democratic processes. As AI technologies become increasingly sophisticated and accessible, ensuring transparency and accountability in their use is paramount to maintaining public trust and safeguarding the integrity of elections.
“Citizens must have confidence in the basic truthfulness of political campaigns,” stated the American Association of Political Consultants, condemning the use of “deepfake” AI content in political advertising as a “dramatically different and dangerous threat to democracy.”
While the FCC's proposal represents a crucial first step, experts argue that a comprehensive, multi-stakeholder approach is necessary to address the challenges posed by AI in elections. This could involve collaboration between policymakers, technology companies, media organizations, and civil society groups to develop robust guidelines, ethical frameworks, and technological solutions.
As the 2024 elections approach, the debate over AI's role in political advertising is likely to intensify. The FCC's proposed rule, if adopted, could set a precedent for other regulatory bodies and platforms to follow, shaping the future of AI content disclosure and transparency in the digital age.