The Rapid Evolution of AI Disclosure in Political Advertising


The increasing sophistication of artificial intelligence is presenting new challenges in the realm of political advertising. As generative AI tools become more accessible, there are mounting concerns about their potential to spread misinformation and manipulate voters. In an effort to promote transparency and integrity, Meta just announced a new policy requiring disclosures for any political ads created or altered using AI.

AI Disclosures For Political Ads

Starting in 2024, advertisers on Facebook and Instagram will need to clearly indicate if their political ads portray unreal actions by real people, fabricate non-existent individuals, or manipulate footage of actual events. This move comes on the heels of Meta’s recent ban on political advertisers using generative AI ad creation tools. While that prohibition aimed to curb deception, mandatory disclosures provide an extra layer of protection by arming viewers with important context.

As a public relations veteran with over 30 years of experience creating and protecting brands, I applaud these proactive measures to limit the unchecked spread of AI-powered disinformation. Social media platforms are finally acknowledging their responsibility to safeguard the integrity of political discourse. Misleading or outright false political messaging can be incredibly damaging, widening societal divides and undermining democracy.

Just look at the havoc wrought by deepfakes during the 2020 election. Deceptively edited videos of political figures provoked outrage and heated debate. Now imagine such manipulated content generated seamlessly at scale. That dystopian future may not be far off as AI grows more advanced. Meta’s disclosure rules are a necessary stopgap until more robust solutions emerge.

While political ads represent a tiny sliver of Meta’s overall ad revenue, the company likely recognized the imminent risks posed by AI. Once generative AI becomes mainstream, it would be trivial for campaigns or other entities to churn out propaganda videos. Even basic text-to-image generators could concoct fake photos of events that never transpired.

Mandating AI disclosures prevents bad actors from laundering such synthetic content through social platforms to deceive and manipulate. Other platforms have taken similar steps; Google, for example, now blocks political advertisers from using its AI tools altogether. While a blanket ban may be excessive, prohibiting specific dangerous prompts makes sense.

Expect Bad Actors To Find Loopholes

As heartening as these developments are, disclosure rules hardly make platforms bulletproof. We can expect well-funded campaigns to probe for loopholes and other ways to incorporate AI subtly. The arms race of detection versus deception will ramp up as generative AI grows more ubiquitous. For now, though, Meta’s transparency requirements are a very positive step.

The public needs protections against ill-intentioned generative AI in the high-stakes domain of political advertising. This is why I was pleased to see the Biden administration direct the Department of Homeland Security and the Department of Commerce to develop guidelines, including watermarking standards to protect the integrity of branded content. When deepfakes and synthetic media can portray vivid false realities, people deserve to know what is real versus AI-fabricated. Otherwise, social platforms risk becoming propaganda playgrounds saturated with deceptive, algorithmically generated political messaging.

While crafting a comprehensive framework to govern AI will take time, mandatory disclosures establish responsible norms around transparency. They reflect an awakening within the tech industry to AI’s unintended consequences and the need for oversight. Generative models sculpt compelling synthetic realities that our brains instinctively believe. As PR professionals, we understand both the power and the peril of controlling narratives.

I hope Meta’s disclosure rules are the first domino in a wave of thoughtful AI governance. These are thorny issues without simple solutions, but a healthy democracy demands we confront them. If societal harms lurk within unrestrained generative AI, then transparency and accountability must be infused into its development and deployment. Meta’s new requirements are an encouraging step on that important path.


If you need assistance understanding how to leverage Generative AI in your marketing, advertising, or public relations campaigns, contact us today. In-person and virtual training workshops are available. Or, schedule a session for a comprehensive AI Transformation strategic roadmap to ensure your marketing team utilizes the right GAI tech stack for your needs.
