Artificial Intelligence has been the talk of the business world for more than a year now, with its potential to revolutionize industries and transform the way we live and work. However, despite the excitement surrounding generative AI, there is a growing concern among consumers regarding its trustworthiness. Recent studies have shown a decline in consumer trust in AI, highlighting the need for businesses and institutions to address these concerns and build confidence in this rapidly evolving technology.
The Decline in Consumer Trust
According to the 2024 edition of Edelman’s Trust Barometer, consumer trust in AI has fallen globally from 61% to 53% in the past five years, while trust in AI in the U.S. has declined from 50% to 35%. This decline in trust is more pronounced in developed countries compared to developing markets, with rejection of AI being three times higher in developed countries. The survey also found that respondents trusted tech overall (76%) considerably more than AI (50%) and were more likely to embrace AI when institutions manage it well compared to when AI is poorly managed.
Political Divide and AI Trust
The Edelman Trust Barometer also revealed a significant political divide when it comes to trust in AI. Among Democrats, 38% trust AI, while 45% reject it. Among Republicans, only 24% trust AI, and a staggering 58% reject it. Independents fall somewhere in between, with 25% trusting AI and 25% rejecting it. This political divide highlights the need for a bipartisan approach to addressing concerns surrounding AI and building trust among all segments of the population.
Reasons for Distrust
Several factors contribute to the growing distrust in AI. A recent report from UNESCO found that AI models, including GPT-3.5 and Llama 2, have an “alarming tendency” to generate content based on stereotypes about race, gender, sexuality, and cultural biases.
This raises concerns about the potential for AI to perpetuate and amplify existing biases and discrimination. Additionally, the Center for Countering Digital Hate found that several popular AI image platforms generated election disinformation in 41% of researchers’ test runs, highlighting the potential for AI to be used to spread misinformation and influence elections.
Concerns for Advertisers
The growing distrust in AI is also a concern for advertisers. According to a recent report from Forrester, 82% of U.S. consumer marketers are worried about marketing their brands during the 2024 presidential campaign.
While AI-generated misinformation is among the concerns, other “headwinds” include inflated ad prices, evolving regulations, and consumer sentiment. Advertisers will need to navigate these challenges carefully to maintain consumer trust and protect their brand’s reputation.
Regulatory Efforts
Regulators are also taking notice of the growing concerns surrounding AI and are moving forward with proposals to address them. The Federal Trade Commission (FTC) recently announced plans for new rules related to AI robocalls to protect consumers and businesses from various scams. President Joe Biden also mentioned AI briefly during his State of the Union address, noting that bipartisan legislation aims to “harness the promise of A.I. to protect us from peril,” including a proposal to ban AI deepfakes.
The Inherent Trust in Technology
During the FTC’s annual PrivacyCon event, Stanford University researcher Jesutofunmi Omiye noted that humans are inherently trusting of information, which leads many people to believe the answers they get from platforms like Google.
“The thing we need to understand and remember is that human beings are trust machines. We think when a computer says something it’s very accurate, and that’s why a lot of people have been scammed. … And that’s just because we think computers are very right and we almost blindly follow computers’ instructions.”
This highlights the need for businesses and institutions to be transparent about the limitations and potential biases of AI and to educate consumers about how to critically evaluate information generated by AI.
Challenges and Opportunities
The growing distrust in AI presents both challenges and opportunities for businesses and institutions. While the potential benefits of AI are significant, it is crucial to address the concerns surrounding its trustworthiness and potential for misuse. This will require a collaborative effort between businesses, policymakers, and researchers to develop standards and guidelines for the responsible development and deployment of AI.
It will also require transparency and education to help consumers understand the capabilities and limitations of AI and to critically evaluate the information it generates. By addressing these challenges head-on and building trust in AI, we can harness AI's potential to drive innovation and improve our lives while mitigating its risks and negative impacts.
If you need assistance understanding how to leverage Generative AI in your marketing, advertising, or public relations campaigns, contact us today. Custom training workshops are available. Or, schedule a session for a comprehensive AI Transformation strategic roadmap to ensure your marketing team utilizes the right GAI tech stack for your needs.