Adobe Clarifies Firefly Update, Assuring Users of Content Ownership and Privacy


In a recent development that caused a stir among its users, Adobe has clarified its Terms of Use update, which initially led many to believe that their unpublished work could be used for training AI models, including its generative AI model, Firefly. The backlash began when Adobe notified its users of an update to its Terms of Use policy, with a particular sentence in Section 2.2 catching people’s attention. The sentence stated that Adobe’s automated systems might analyze user content and Creative Cloud Customer Fonts using techniques like machine learning to improve their services, software, and user experience.

However, Adobe has now clarified that it does not train its Firefly generative AI models on user content and that it “will never assume ownership of a customer’s work.”

In a blog post, the company explained that the controversial part of the policy was not new and specifically refers to moderating illegal content, such as child sexual abuse material or content that violates its terms, like the creation of spam or phishing attempts.

The Update Includes Human Content Moderation

Adobe also highlighted that it has added more human moderation to its content submissions review processes, given the explosion of Generative AI and the company’s commitment to responsible innovation. This clarification suggests that the update involves more human moderation, rather than increased automated moderation.

The confusion among users arose because Adobe did not specify exactly what had been updated in the Terms, leading many to assume that the section meant their unpublished work, including confidential content, could be used to train AI models. However, Adobe has reassured its users that their commitments have not changed.

To further clarify the changes, Adobe highlighted the specific updates in its Terms of Service in pink text. In the privacy section, the wording was changed from "will only access" to "may access, view, or listen to your Content," and the line "through both automated and manual methods, but only in limited ways…" was added. The paragraph also gained a reference to Section 4.1, which outlines the details of illegal or banned content.

Additionally, the section added the phrase "including manual review," referring to the methods used to screen Adobe's content. The term "child pornography" was also replaced with "child sexual abuse material."

AI-Generated Art via Firefly Can Be Used Commercially

In the blog post, Adobe clarified that its Firefly models are trained on licensed content, including Adobe Stock, and public domain content, rather than user-generated content. The company also asserted that its users own the work they create on its apps, providing a sense of relief to those who rely on Adobe for their creative pursuits.

I often encourage my clients to organize their generative AI tech stack according to the most ethical use of each model's output. Because Firefly's AI was trained on licensed and publicly available content, anything you create with it can be used in your marketing.

Despite the clarifications, the reaction to Adobe's vaguely worded update highlights the fear and mistrust creatives feel as generative AI threatens to disrupt their professions. AI models like ChatGPT, DALL-E, Gemini, Copilot, and Midjourney have faced criticism for being trained on content scraped from the web, which in turn automates writing and image creation. There are also concerns about OpenAI's unreleased model, Sora, which is believed to be trained on videos from YouTube and other sources.

As the creative industry navigates this tense period, professionals are understandably wary of changes that may threaten their livelihoods. Adobe’s clarification of its Terms of Use update serves as a reminder of the importance of transparent communication and the need for companies to address the concerns of their users, especially in the face of rapidly evolving technologies like generative AI.

While Adobe has taken steps to reassure its users that their content and privacy remain protected, the incident underscores the broader challenges facing the creative industry as it adapts to the rise of AI-powered tools and services. As more companies explore the potential of generative AI, it is crucial that they prioritize the rights and interests of the creatives who rely on their platforms, ensuring that innovation does not come at the cost of trust and fairness.

Adobe’s clarification of its Terms of Use update has provided some relief to its users, who can now be assured that their unpublished work will not be used for training AI models without their consent. However, the incident serves as a wake-up call for the creative industry as a whole, highlighting the need for ongoing dialogue and collaboration between technology companies and the creative community to ensure that the benefits of generative AI are realized in a responsible and equitable manner.


Remember, AI won’t take your job. Someone who knows how to use AI will. Upskilling your team today ensures success tomorrow. In-person and virtual training workshops are available. Or, schedule a session for a comprehensive AI Transformation strategic roadmap to ensure your marketing team utilizes the right GAI tech stack for your needs.

