
Adobe clarifies new AI terms and conditions after high-profile users revolt

Image: An image-editor app being used to edit photos on a laptop. Mylene Tremoyet / Pexels

Adobe updated the terms and conditions for its popular Creative Cloud suite of photo and video editing apps on Thursday, setting off a wave of protests and vitriol from its users, who were upset that the new rules seemingly granted the company rights to “access [user] content through both automated and manual methods, such as for content review.” On Friday, the company was forced to clarify those changes and unequivocally state that, no, Adobe does not own artists’ works, nor will it use that content to train its AI systems like Firefly.

The controversy began Thursday when Creative Cloud users opened their apps and found themselves locked out, unable to use the programs, uninstall them, or even contact customer support until they agreed to the new terms. Users were not amused.

“Hey @Photoshop what the hell was that new agreement you forced us to sign this morning that locked our app until we agree to it,” wrote director Duncan Jones in a tweet. “We are working on a bloody movie here, and NO, you don’t suddenly have the right to any of the work we are doing on it because we pay you to use Photoshop.”

Adobe initially responded on Thursday with the following:

This policy has been in place for many years. As part of our commitment to being transparent with our customers, we added clarifying examples earlier this year to our Terms of Use regarding when Adobe may access user content. Adobe accesses user content for a number of reasons, including the ability to deliver some of our most innovative cloud-based features, such as Photoshop Neural Filters and Remove Background in Adobe Express, as well as to take action against prohibited content. Adobe does not access, view, or listen to content that is stored locally on any user’s device.

In a blog post Friday, Adobe sought to further clarify its motivations for changing the terms and conditions. “The focus of this update was to be clearer about the improvements to our moderation processes that we have in place,” the Adobe Communications Team wrote. “Given the explosion of Generative AI and our commitment to responsible innovation, we have added more human moderation to our content submissions review processes.”

Image: Screenshot of the changes to Adobe's terms and conditions. Adobe

Adobe explained that its systems need to access user content for a variety of routine functions, such as “opening and editing files for the user or creating thumbnails or a preview for sharing,” or to apply AI-enhanced tools like Photoshop Neural Filters, Liquid Mode, or Remove Background. The company will also apply machine learning systems in its moderation reviews to screen for spam and illegal content like child sexual abuse material (CSAM).

The company went on to pledge that “Adobe does not train Firefly Gen AI models on customer content” and that “Adobe will never assume ownership of a customer’s work.” It was quick to point out that while it does host content “to enable customers to use our applications and services, customers own their content and Adobe does not assume any ownership of customer work.”

Adobe plans to push a notification more clearly explaining the Terms and Conditions changes the next time customers open their editing apps.

Andrew Tarantola