
Barnes & Noble used A.I. to make classic books more diverse. It didn’t go well

For Black History Month, Barnes & Noble created covers of classic novels with the protagonists re-imagined as people of color. Then it quickly canceled its planned Diverse Editions of 12 books, including Emma, The Secret Garden, and Frankenstein amid criticism that it clumsily altered books by mostly white authors instead of promoting writers of color. The project used artificial intelligence to scan 100 books for descriptions of major characters, and artists created covers depicting Alices, Romeos, and Captain Ahabs of various ethnicities.

“We acknowledge the voices who have expressed concerns about the Diverse Editions project at our Barnes & Noble Fifth Avenue store and have decided to suspend the initiative,” Barnes & Noble announced in a statement. The company partnered with Penguin Random House and advertising agency TBWA/CHIAT/DAY to create the books.

[Image: Diverse Editions cover of The Wonderful Wizard of Oz]

It’s not clear how deep the changes went beyond the cover. Author Benjanun Sriduangkaew tweeted an image with a description of the project: “We used artificial intelligence to analyze the text from 100 of the most famous titles searching the text to see if it omitted ethnicity of primary characters. Using speech and linguistic patterns, our natural language processing (NLP) algorithms accounted for the fact that when authors describe a character, they rarely outright state their race, but often use more poetic and descriptive language. Among the classics that didn’t specify race or ethnicity, here are 12 that we have re-imagined for Diverse Editions: Alice’s Adventures in Wonderland, The Count of Monte Cristo, Emma, Frankenstein, Dr. Jekyll & Mr. Hyde, Moby Dick, Peter Pan, The Secret Garden, The Three Musketeers, Treasure Island, Romeo & Juliet, The Wonderful Wizard of Oz.”
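The quoted description suggests a keyword- and pattern-based scan of each book's text. As a rough illustration only — TBWA never published its actual descriptor lists or models, so every term and pattern below is an assumption — such a check might look like this:

```python
import re

# Hypothetical descriptor lists. The terms the real system searched
# for were never disclosed; these are illustrative placeholders.
EXPLICIT_TERMS = ["white", "black", "asian", "indian", "african"]
POETIC_PATTERNS = [
    # "poetic and descriptive language" the project blurb alludes to
    r"\b(?:mocha|coffee|olive|porcelain|ivory)\b(?:-coloured|-colored)?\s+skin",
    r"\balmond-shaped eyes\b",
]

def mentions_ethnicity(text: str) -> bool:
    """Crude check: does the text explicitly or 'poetically' signal race?"""
    lowered = text.lower()
    if any(re.search(rf"\b{term}\b", lowered) for term in EXPLICIT_TERMS):
        return True
    return any(re.search(pattern, lowered) for pattern in POETIC_PATTERNS)

# A book for which this returns False throughout would be flagged as a
# candidate whose protagonist could be re-imagined on the cover.
print(mentions_ethnicity("Her mocha skin glowed in the lamplight"))  # True
print(mentions_ethnicity("Alice was beginning to get very tired"))   # False
```

Even this toy version hints at the project's core weakness: a scan like this can only find surface-level descriptors, not the contextual cues — class, setting, dialogue, or an author's own history — that critics pointed to below.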

Even modern authors often don’t directly state the race or ethnicity of certain characters. Instead, they’ll rely on other ways of distinguishing them, describing their “mocha” or “coffee” skin, as writer Justina Ireland pointed out on Twitter. Anyone who grew up reading The Baby-Sitters Club knows Claudia Kishi is Japanese-American and is always described as having “almond-shaped eyes,” an inaccurate and often-used phrase signifying a character of Asian descent.

Many authors and librarians cited a number of reasons the project was misguided. “If Barnes & Noble was serious about this #BlackHistoryMonth celebration, they would feature and push renditions of the ‘classics’ actually written by Black authors,” L.L. McKinney tweeted. Her book A Blade So Black is a retelling of Alice in Wonderland, mixed with some Buffy the Vampire Slayer.

It’s unclear if the A.I. picked up cues beyond descriptions (or the lack thereof) of skin color. A character’s social class or wealth in the relevant historical period might imply whiteness on its own, for example. Or a character might verbally denigrate people of color. Writer Amitha Knight wrote about the main character in The Secret Garden: “if you’re going to say ‘you can put yourself into any book!’ I’m telling you, you can’t. Mary Lennox did not want to be Indian.” In the book, Mary tells another girl, “‘You thought I was a native! You dared! You don’t know anything about natives! They are not people — they’re servants who must salaam to you.’” However she was depicted on the trio of Diverse Editions covers, it wouldn’t change the fact that Mary was the daughter of an English government official living and working in colonial India. “The Secret Garden is a book that hinges on the premise that Mary Lennox is a peevish white girl born and raised in India by colonialist British parents,” tweeted writer Hanna Alkaf.

https://twitter.com/yesitshanna/status/1224880743437914112

TBWA created the A.I., according to Fast Company. Exactly what words and descriptors it searched for is unclear, but Dr. Debbie Reese said in an email to Digital Trends that it seems to have missed some significant words with Peter Pan. Tiger Lily is called a slur, and the “braves” refer to Peter as “Great White Father.” Reese is the author of the blog American Indians in Children’s Literature and said that scanning the books would miss important context outside the pages, as well. “The analysis would not have found a problem with respect to Native peoples in Wizard of Oz,” she wrote. “Native concerns over that book are not with its content, but with its author (who called for extermination of Native people).”

[Image: Barnes & Noble Diverse Editions book covers]

If a teacher used a book like Peter Pan in class and attempted to critique depictions of Tiger Lily, it would come at the expense of some students, Reese said. “During such lessons, the Native or students of color have to deal with hearing and reading slurs, derogatory passages, etc. — things they already experience — in the classroom so that their white peers come to a greater understanding of racism,” she said. “That puts Native/students of color at a disadvantage so that white peers can ‘learn.’” She recommends Cynthia Leitich Smith’s Hearts Unbroken as an example of a Native author critiquing a classic, The Wizard of Oz.

Reese added that “Characterizing Native peoples as ‘people of color’ erases our sovereign nation status. None of the other cultural groups in the U.S. have nationhood status, governments, jurisdiction over lands (though it is limited jurisdiction, it is a significant difference).” That nuance is beyond what an A.I. scan can handle at this point, especially as the teams creating such systems often aren’t very diverse.

Digital Trends reached out to Barnes & Noble and Penguin Random House for comment and will update when they get back to us.

Jenny McGrath
Former Digital Trends Contributor