
Adobe gets called out for violating its own AI ethics

Ansel Adams' panorama of Grand Teton National Park with the peak in the background and a meandering river in the forest.
Ansel Adams / National Archives

Last Friday, the estate of famed 20th century American photographer Ansel Adams took to Threads to publicly shame Adobe for allegedly offering AI-generated art “inspired by” Adams’ catalog of work, stating that the company is “officially on our last nerve with this behavior.”

While the Adobe Stock platform, where the images were made available, does allow AI-generated images, The Verge notes that the site’s contributor terms prohibit images “created using prompts containing other artist names, or created using prompts otherwise intended to copy another artist.”

Adobe has since removed the offending images, conceding in the Threads conversation that “this goes against our Generative AI content policy.”

A screenshot of Ansel Adams images put in Adobe Stock.
Adobe

However, the Adams estate seemed unsatisfied with that response, claiming that it had been “in touch directly” with the company “multiple times” since last August. “Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community,” the estate continued, “we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms.”

The ability to create high-resolution images of virtually any subject and in any visual style by simply describing the idea with a written prompt has helped launch generative AI into the mainstream. Image generators like Midjourney, Stable Diffusion and Dall-E have all proven immensely popular with users, though decidedly less so with the copyright holders and artists whose styles those programs imitate and whose existing works those AI engines are trained on.

Adobe’s own Firefly generative AI platform was, the company claimed, trained on its extensive, licensed Stock image library. As such, Firefly was initially marketed as a “commercially safe” alternative to other image generators like Midjourney or Dall-E, which trained on datasets scraped from the public internet.

However, an April report from Bloomberg found that some 57 million images within the Stock database, roughly 14% of the total, were AI-generated, some of them created by those same data-scraping competitors’ tools.

“Every image submitted to Adobe Stock, including a very small subset of images generated with AI, goes through a rigorous moderation process to ensure it does not include IP, trademarks, recognizable characters or logos, or reference artists’ names,” a company spokesperson told Bloomberg at the time.

Andrew Tarantola
Andrew has spent more than a decade reporting on emerging technologies ranging from robotics and machine learning to space…