
Google’s AI has been reading a lot of soft-core erotica lately

It’s said that you are what you eat, and Google’s AI system has been consuming a lot of romance novels. So many, in fact, that while it hasn’t turned the entire system into erotica, the AI is now capable of writing some pretty decent content of its own (though not of the bodice-ripper variety).

For the last several months, Google has been doing some pretty interesting reading, devouring books like Unconditional Love, Ignited, Fatal Desire, and Jacked Up. The original goal of the researchers behind the scheme was to add some personality to the automaton’s answers, and make its conversations with humans more … well, human. And apparently, the best way to go about doing that is by having machines read a whole lot of soft-core smut.

While Google is already pretty adept at providing information to its users, researchers believe that the search engine could work on its delivery tactics. And as it turns out, when it comes to developing better conversation skills for computers, romance novels are a great place to start. This is because most of these books follow an extremely predictable pattern, which makes it easier for the AI to detect little nuances within the English language itself.

“In the Google app, the responses are very factual,” Andrew Dai, the Google software engineer who led the project, told BuzzFeed News. “Hopefully with this work, and future work, it can be more conversational, or can have a more varied tone, or style, or register.”

In addition to improving the cadences of the Google app, researchers also hope to apply the AI’s newfound language abilities to Google Inbox’s “Smart Reply” tool, offering better auto responses that sound more like the sender. Already, Google claims that 10 percent of replies sent in the Inbox mobile app come from smart replies, and this proportion could increase if the language tool continues to improve.

So just how good could it get? “Theoretically,” Dai says, “[The AI] could write some romance novels of its own.” Oh my.

Lulu Chang
Former Digital Trends Contributor
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called “Skeleton Key.” Using this prompt injection method, malicious users can effectively bypass a chatbot’s safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
