Seven AI Companies “Promise” To Self Regulate


In news that amounts to little more than subterfuge, seven AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI) agreed to a set of “voluntary safeguards” from the Biden administration concerning safety, security and social responsibility.

Unfortunately, these guardrails are incredibly vague. Frankly, they are nothing more than window dressing, as the companies will essentially be left to police themselves. I discussed this (and more) on a recent podcast with AI ethicist John C. Havens.

The Voluntary Safeguards Include:

  1. Internal and external security testing. There are no details about what these companies will test or who will do the testing.
  2. Information sharing with other companies and the government to manage at-risk results. Again, this is vague. There is nothing that explains what constitutes “at-risk” results, or what will be done with the information once it’s been shared.
  3. Investments in cybersecurity. These companies promise to invest in their own protections, of course. But, there is nothing in the guidelines to explain how much will be invested or what security measures will be taken. This is just another “we will police ourselves” kind of message.
  4. Facilitate third-party reporting on vulnerabilities. Again, there is no explanation of who these third-party entities are, what vulnerabilities will be monitored, or what the reporting will include.
  5. Use of watermarking systems. This would help people determine what is AI-generated content and what might be a deepfake. I am all for these kinds of watermarks, but, again, there are no details outlining what this will entail, how these watermarks will be tracked, or how to ensure they can’t be replicated or falsified. (A rough sketch of the underlying idea follows this list.)
  6. Reporting on inappropriate uses of AI. Again, there is nothing that defines these “inappropriate uses.” There are also no clear parameters around whom these companies should report these outputs to, or what will be done about them.
  7. Prioritize the bias issue. As I mentioned in my most recent podcast, all generative AI tools are inherently biased simply because they were trained on human-generated content, and humans have biases. Add to that the fact that generative AI tools learned by ingesting the entirety of the internet. Because the vast majority of published works over the generations have represented the cisgender, white male perspective, there is an inherent bias in the very stories and experiences that have been told. Of course, this will change as more minority views surface, but in the interim, beyond “prioritizing” the issue, there are no clear measures outlined that these companies will take to address it.
  8. Address societal issues. This one is likewise vague and toothless with no required actions to take and no clear process to measure adherence.
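
To make item 5 concrete: as a rough sketch only (not anything these companies have actually committed to), here is one way a tamper-evident provenance mark could work. The generator signs its output with a secret key, and a verifier can later check whether the content was altered. Everything in this example, including the key, the tag format, and the function names, is hypothetical.

```python
# A rough sketch of the tamper-evidence idea behind provenance
# watermarking, using only the Python standard library. The key name
# and tag format are hypothetical; real proposals (statistical token
# watermarks, C2PA content credentials) are far more robust.

import hashlib
import hmac

SECRET_KEY = b"provider-held-signing-key"  # hypothetical provider secret


def watermark(text: str) -> str:
    """Append an HMAC tag so any later edit to the text is detectable."""
    tag = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"{text}\n--aigc-tag:{tag}"


def verify(stamped: str) -> bool:
    """Return True only if the text still matches its embedded tag."""
    text, sep, tag = stamped.rpartition("\n--aigc-tag:")
    if not sep:
        return False  # no tag present at all
    expected = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


stamped = watermark("This paragraph was generated by an AI model.")
print(verify(stamped))                         # True: content is untouched
print(verify(stamped.replace("AI", "human")))  # False: content was altered
```

Even this toy version exposes the open questions in item 5: whoever holds the key controls verification, and a tag simply stripped from the text leaves no trace. That is exactly why “how will these watermarks be tracked and protected” needs an answer, not a promise.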

The Missed Opportunity

This is a missed opportunity for the real guardrails that are absolutely necessary to protect individual privacy and brand reputations, and to guard against misinformation and misuse of this technology.

It seems we cannot learn our own lessons, no matter how obvious they are. As social media platforms grew in the early 2000s, calls for guardrails and regulation were met with the same promises and assurances that technology companies could police themselves. And, of course, we ended up with social media harboring hate groups and creating a crisis of misinformation.

What Meta Says About The Guidelines

Meanwhile, Meta seems quite content with its agreement to abide by toothless guidelines, stating:

“We are pleased to make these voluntary commitments alongside others in the sector. They are an important first step in ensuring responsible guardrails are established for A.I. and they create a model for other governments to follow.”

Nick Clegg, the president of global affairs at Meta, the parent company of Facebook.

What Biden Says About The Guidelines

President Biden also believes these company promises are a good first step.

“We must be clear-eyed and vigilant about the threats emerging from emerging technologies that can pose — don’t have to but can pose — to our democracy and our values. This is a serious responsibility; we have to get it right. And there’s enormous, enormous potential upside as well.”

President Biden in a July 21st press conference

While the Biden administration is correct about the need for these guardrails, the actual “promises” seem to fall short of requiring any real action.

It’s no secret I am a fan of generative AI and the incredible opportunities these tools offer marketers to automate common tasks, augment skills gaps, and enhance creativity. But, industry and government regulations – meaningful regulations – are necessary to safeguard against bad actors using AI in dangerous ways. There is no question that we need more than voluntary commitments that will not be enforced by government regulators.

I will continue to monitor this issue and will let you know of any progress in the development of real protections and regulations.

If you are struggling to identify the right path for your agency or marketing team’s AI Transformation, please contact me today.
