
GPT-5 will have ‘Ph.D.-level’ intelligence

[Image: OpenAI CTO Mira Murati on stage answering questions. Credit: Dartmouth Engineering]

The next major evolution of ChatGPT has been rumored for a long time. GPT-5, or whatever it ends up being called, has been discussed only in vague terms over the past year, but yesterday, OpenAI Chief Technology Officer Mira Murati offered some additional clarity on its capabilities.

In an interview with Dartmouth Engineering that was posted on X (formerly Twitter), Murati describes the jump from GPT-4 to GPT-5 as akin to a smart high schooler growing into a Ph.D. student.

“If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence,” Murati says. “And then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at Ph.D. intelligence for specific tasks. Things are changing and improving pretty rapidly.”

Mira Murati: GPT-3 was toddler-level, GPT-4 was a smart high schooler and the next gen, to be released in a year and a half, will be PhD-level pic.twitter.com/jyNSgO9Kev

— Tsarathustra (@tsarnick) June 20, 2024

Interestingly, the interviewer asked her to specify the timetable, asking whether it would come in the next year. Murati nodded, then clarified that it would be more like a year and a half. If that’s true, GPT-5 may not come out until late 2025 or early 2026. Some will be disappointed to hear that the next big step is that far away.

After all, the first rumors pegged the launch of GPT-5 for late 2023. When that didn’t happen, reports indicated that it would launch later this summer. That turned out to be GPT-4o, which was an impressive release, but it wasn’t the kind of step function in intelligence Murati is referencing here.

As for the claim about intelligence, it lines up with what has been said about GPT-5 in the past. Microsoft CTO Kevin Scott has claimed that next-gen AI systems will be “capable of passing Ph.D. exams” thanks to better memory and reasoning.

Murati admits that the “Ph.D.-level” intelligence only applies to some tasks. “These systems are already human-level in specific tasks, and, of course, in a lot of tasks, they’re not,” she says.

Luke Larsen
Luke Larsen is the Senior editor of computing, managing all content covering laptops, monitors, PC hardware, Macs, and more.