AI is now being trained by AI to become a better AI

[Image: An OpenAI graphic for ChatGPT-4. Credit: OpenAI]

OpenAI has developed an AI assistant, dubbed CriticGPT, to help its crowd-sourced trainers further refine the GPT-4 model. It spots subtle coding errors that humans might otherwise miss.

After a large language model like GPT-4 is initially trained, it undergoes an ongoing process of refinement known as Reinforcement Learning from Human Feedback (RLHF). Human trainers interact with the system, annotating its responses to various questions and rating competing responses against one another, so that the model learns to return the preferred answer and its accuracy improves.
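To make that ranking step concrete, here is a minimal, illustrative sketch of the pairwise preference comparison that typically underpins RLHF reward modeling. This is an assumption-laden example, not OpenAI's actual pipeline; the function name and the scores are hypothetical.

import math

# Illustrative sketch (not OpenAI's code) of the ranking step in RLHF:
# a reward model should score the response a trainer preferred higher
# than the one the trainer rejected.

def preference_loss(reward_chosen, reward_rejected):
    # Bradley-Terry style loss: small when the preferred response
    # outscores the rejected one, large when the ranking is inverted.
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# Hypothetical reward-model scores for two answers to the same prompt;
# a human trainer ranked answer A above answer B.
score_a = 1.8
score_b = 0.3

print(round(preference_loss(score_a, score_b), 4))  # low loss: ranking agrees with the trainer
print(round(preference_loss(score_b, score_a), 4))  # high loss: ranking disagrees

Minimizing a loss like this over many human comparisons teaches the reward model to prefer what trainers prefer, and the language model is then tuned against that reward signal. It is this human-comparison step that CriticGPT is meant to sharpen.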

The problem is that as the system's performance improves, it can outpace the expertise of its trainers, making mistakes and errors increasingly difficult to identify.

These AI trainers aren't always subject matter experts, mind you. Last year, OpenAI was caught crowdsourcing the effort to Kenyan workers paid less than $2 an hour to improve its models' performance.

[Image: A CriticGPT screenshot. Credit: OpenAI]

This is especially problematic when refining the system's code-generation capabilities, which is where CriticGPT comes in.

“We’ve trained a model, based on GPT-4, called CriticGPT, to catch errors in ChatGPT’s code output,” the company explained in a blog post Thursday. “We found that when people get help from CriticGPT to review ChatGPT code they outperform those without help 60 percent of the time.”

What’s more, the company released a whitepaper on the subject, titled “LLM Critics Help Catch LLM Bugs,” which found that “LLMs catch substantially more inserted bugs than qualified humans paid for code review, and further that model critiques are preferred over human critiques more than 80 percent of the time.”

Interestingly, the study also found that when humans collaborated with CriticGPT, the rate of hallucinated responses was lower than when CriticGPT worked alone, though still higher than when a human did the work without AI assistance.
