
GPT-5 could soon change the world in one incredible way

GPT-4 may have only just launched, but people are already excited about the next version of the artificial intelligence (AI) chatbot technology. Now, a new claim has been made that GPT-5 will complete its training this year and could bring a major AI revolution with it.

The assertion comes from developer Siqi Chen on Twitter, who stated: “I have been told that GPT-5 is scheduled to complete training this December and that OpenAI expects it to achieve AGI.”

i have been told that gpt5 is scheduled to complete training this december and that openai expects it to achieve agi.

which means we will all hotly debate as to whether it actually achieves agi.

which means it will.

— Siqi Chen (@blader) March 27, 2023

AGI is the concept of “artificial general intelligence,” which refers to an AI’s ability to comprehend and learn any task or idea that humans can wrap their heads around. In other words, an AI that has achieved AGI could be indistinguishable from a human in its capabilities.

That makes Chen’s claim pretty explosive, considering all the possibilities AGI might enable. At the positive end of the spectrum, it could massively increase the productivity of various AI-enabled processes, speeding things up for humans and eliminating monotonous drudgery and tedious work.

At the same time, bestowing an AI with that much power could have unintended consequences — ones that we simply haven’t thought of yet. It doesn’t mean the robot apocalypse is imminent, but it certainly raises a lot of questions about what the negative effects of AGI could be.

It should be noted that other forecasters predict that AGI will not be achieved until 2032.

Interesting find: One year ago, forecasters estimated AGI to be ready by 2057.

Given the rapid pace of AI these past few weeks, AGI is now expected to be ready by October 2032. 🤯 pic.twitter.com/vHp6izeBAI

— Rowan Cheung (@rowancheung) March 28, 2023

And as for the timing of GPT-5, this is the first time we’ve heard a date attached to that next level of progress, though based on the other clues OpenAI has offered, it’s not far-fetched.

The organization has officially predicted that GPT-4.5, the step up from the current GPT-4, will be “introduced in September or October 2023 as an intermediate version between GPT-4 and the upcoming GPT-5.”

Potential troubles at Twitter?

A stylized image of Elon Musk. Getty Images / Digital Trends Graphic

If AGI goes off the rails, it could enable the spread of incredibly convincing bots on social media channels like Twitter, helping to disseminate harmful disinformation and propaganda that is increasingly difficult to detect.

That’s something Elon Musk is evidently aware of, and the controversial billionaire has made fighting AI bots a key pillar of his tenure as Twitter CEO. Yet his latest idea of restricting the reach of accounts that have not paid for a Twitter Blue membership has not gone down well, and his time in charge has been beset by divisive moves that have had limited success, to put it mildly.

Twitter is just one frontier in the AI-enabled future, and there are many other ways artificial intelligence could alter the way we live. If GPT-5 does indeed achieve AGI, it seems fair to say the world could change in ground-shaking ways. Whether it will be for better or for worse remains to be seen.

Update: Musk, along with more than a thousand other tech leaders and public figures, has since signed a petition to pause further development of future versions of GPT, including GPT-4.5 and GPT-5.
