
ChatGPT may soon moderate illegal content on sites like Facebook

GPT-4 — the large language model (LLM) that powers ChatGPT Plus — may soon take on a new role as an online moderator, policing forums and social networks for nefarious content that shouldn’t see the light of day. That’s according to a new blog post from ChatGPT developer OpenAI, which says this could offer “a more positive vision of the future of digital platforms.”

OpenAI says that enlisting artificial intelligence (AI) instead of human moderators allows for “much faster iteration on policy changes, reducing the cycle from months to hours.” On top of that, “GPT-4 is also able to interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labeling,” OpenAI claims.


For example, the blog post explains that moderation teams could assign labels to content to explain whether it falls within or outside a given platform’s rules. GPT-4 could then take the same data set and assign its own labels, without knowing the answers beforehand.

The moderators could then compare the two sets of labels and use any discrepancies to reduce confusion and add clarification to their rules. In other words, GPT-4 could act as an everyday user and gauge whether the rules make sense.
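To picture how that feedback loop might work in practice, here’s a minimal sketch using OpenAI’s official Python client. The policy text, the labeled examples, and the gpt_label helper are all invented for illustration; OpenAI’s blog post doesn’t publish code, so treat this as one plausible shape of the workflow, not the company’s actual implementation.

```python
# Hypothetical sketch: ask GPT-4 to label content against a policy,
# then compare its labels with a human-labeled reference set.
# Assumes the official OpenAI Python client (pip install openai) and
# an OPENAI_API_KEY in the environment. Policy and examples are invented.
from openai import OpenAI

client = OpenAI()

POLICY = """Content is VIOLATING if it contains threats of violence
or instructions for illegal activity; otherwise it is ALLOWED."""

# Human moderators' labels serve as the reference set.
examples = [
    {"text": "Here's how to pick a lock to break into a house.", "human": "VIOLATING"},
    {"text": "I love this new phone, the camera is great!", "human": "ALLOWED"},
]

def gpt_label(text: str) -> str:
    """Ask the model to apply the policy and reply with a single label."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                f"Apply this content policy:\n{POLICY}\n"
                "Reply with exactly one word: VIOLATING or ALLOWED."
            )},
            {"role": "user", "content": text},
        ],
        temperature=0,  # keep the labeling as deterministic as possible
    )
    return response.choices[0].message.content.strip()

# Disagreements flag ambiguous rules that may need clarification.
for ex in examples:
    model_label = gpt_label(ex["text"])
    if model_label != ex["human"]:
        print(f"Disagreement on {ex['text']!r}: human={ex['human']}, gpt={model_label}")
```

Wherever the two label sets disagree, it is often the policy wording, not the model, that needs clarifying, which is exactly the iteration loop OpenAI describes.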

The human toll

OpenAI’s GPT-4 large language model attempts to moderate a piece of content, and the result is compared to a human’s analysis of the same content. (Image: OpenAI)

Right now, content moderation on various websites is performed by humans, which exposes them to potentially illegal, violent, or otherwise harmful content on a regular basis. We’ve repeatedly seen the awful toll that content moderation can take on people, with Facebook agreeing in 2020 to pay $52 million to moderators who developed PTSD as a result of their work.

Reducing the burden on human moderators could help to improve their working conditions, and since AIs like GPT-4 are immune to the kind of mental stress that humans feel when handling troublesome content, they could be deployed without worrying about burnout and PTSD.

However, it does raise the question of whether using AI in this manner would result in job losses. Content moderation is not always a fun job, but it is a job nonetheless, and if GPT-4 takes over from humans in this area, there will likely be concern that former content moderators will simply be made redundant rather than reassigned to other roles.

OpenAI does not address this possibility in its blog post, leaving the decision to the content platforms themselves. But that silence might not do much to allay fears that AI will be deployed by large companies simply as a cost-saving measure, with little concern for the aftermath.

Still, if AI can reduce or eliminate the mental devastation faced by the overworked and underappreciated teams who moderate content on the websites used by billions of people every day, there could be some good in all this. It remains to be seen whether that will be tempered by equally devastating redundancies.

Alex Blake