
Jan 3, 2025 3:01 PM

A Book App Used AI to ‘Roast’ Its Users. It Went Anti-Woke Instead

One year-end summary from Fable, a social app where people share what books they read, told the user, “Don’t forget to surface for the occasional white author, OK?”

Fable, a popular social media app that describes itself as a haven for “bookworms and bingewatchers,” created an AI-powered end-of-year summary feature recapping what books users read in 2024. It was meant to be playful and fun, but some of the recaps took on an oddly combative tone. Writer Danny Groves’ summary, for example, asked if he’s “ever in the mood for a straight, cis white man’s perspective” after labeling him a “diversity devotee.”

Books influencer Tiana Trammell’s summary, meanwhile, ended with the following advice: “Don’t forget to surface for the occasional white author, okay?”

A reader summary as shown on the 2024 stats page from the Fable app.

Courtesy of Tiana Trammell

Trammell was flabbergasted, and she soon realized she wasn’t alone after sharing her experience with Fable’s summaries on Threads. “I received multiple messages,” she says, from people whose summaries had inappropriately commented on “disability and sexual orientation.”

Ever since the debut of Spotify Wrapped, annual recap features have become ubiquitous across the internet, providing users with a rundown of how many books and news articles they read, songs they listened to, and workouts they completed. Some companies are now using AI to wholly produce or augment how these metrics are presented. Spotify, for example, now offers an AI-generated podcast where robots analyze your listening history and make guesses about your life based on your tastes. Fable hopped on the trend by using OpenAI’s API to generate summaries of its users’ reading habits over the past 12 months, but it didn’t expect that the AI model would spit out commentary that took on the mien of an anti-woke pundit.

Fable later apologized on several social media channels, including Threads and Instagram, where it posted a video of an executive issuing the mea culpa. “We are deeply sorry for the hurt caused by some of our Reader Summaries this week,” the company wrote in the caption. “We will do better.”

Kimberly Marsh Allee, Fable’s head of community, told WIRED before publication that the company was working on a series of changes to improve its AI summaries, including an opt-out option for people who don’t want them and clearer disclosures indicating that they’re AI-generated. “For the time being, we have removed the part of the model that playfully roasts the reader, and instead the model simply summarizes the user’s taste in books,” she said.

After publication, Marsh Allee said that Fable had instead made the decision to immediately remove the AI-generated 2024 reading summaries, as well as two other features that used AI.

For some users, adjusting the AI does not feel like an adequate response. Fantasy and romance writer A.R. Kaufer was aghast when she saw screenshots of some of the summaries on social media. “They need to say they are doing away with the AI completely. And they need to issue a statement, not only about the AI, but with an apology to those affected,” says Kaufer. “This ‘apology’ on Threads comes across as insincere, mentioning the app is ‘playful’ as though it somehow excuses the racist/sexist/ableist quotes.” In response to the incident, Kaufer decided to delete her Fable account.

So did Trammell. “The appropriate course of action would be to disable the feature and conduct rigorous internal testing, incorporating newly implemented safeguards to ensure, to the best of their abilities, that no further platform users are exposed to harm,” she says.

Groves concurs. “If individualized reader summaries aren’t sustainable because the team is small, I’d rather be without them than confronted with unchecked AI outputs that might offend with testy language or slurs,” he says. “That’s my two cents … assuming Fable is in the mood for a gay, cis Black man’s perspective.”

Generative AI tools already have a lengthy track record of race-related misfires. In 2022, researchers found that OpenAI’s image generator Dall-E had a bad habit of showing nonwhite people when asked to depict “prisoners” and all white people when it showed “CEOs.” Last fall, WIRED reported that a variety of AI search engines surfaced debunked and racist theories about how white people are genetically superior to other races.

Overcorrecting has sometimes become an issue, too: Google’s Gemini was roundly criticized last year when it repeatedly depicted World War II–era Nazis as people of color in a misguided bid for inclusivity. “When I saw confirmation that it was generative AI making those summaries, I wasn’t surprised,” Groves says. “These algorithms are built by programmers who live in a biased society, so of course the machine learning will carry the biases, too—whether conscious or unconscious.”

Updated 1/3/25 5:44pm ET: This story has been updated to note that Fable decided to immediately disable several AI-powered features.