
GPT-4 Turbo is the biggest update since ChatGPT’s launch


OpenAI has just unveiled the latest updates to its large language models (LLMs) during its first developer conference, and the most notable improvement is the release of GPT-4 Turbo, which is now entering preview. GPT-4 Turbo is an update to the existing GPT-4, bringing with it a greatly increased context window and access to much newer knowledge. Here’s everything you need to know about GPT-4 Turbo.

OpenAI claims that the AI model will be more powerful while simultaneously being cheaper than its predecessors. Unlike the previous versions, it has been trained on information dating up to April 2023. That’s a hefty update on its own, as the previous version’s knowledge maxed out at September 2021. I just tested this myself, and indeed, using GPT-4 allows ChatGPT to draw on events that happened up until April 2023, so that update is already live.

GPT-4 Turbo has a significantly larger context window than the previous versions; the context window is essentially how much text the model takes into consideration before it generates a reply. GPT-4 Turbo now has a 128,000-token context window (a token being the unit of text or code that LLMs read), which, as OpenAI reveals in its blog post, is the equivalent of around 300 pages of text.

That’s an entire novel that you could potentially feed to ChatGPT over the course of a single conversation, and a much greater context window than the previous versions had (8,000 and 32,000 tokens).
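If you’re curious whether a given document would actually fit in a window that size, token counts are easy to estimate locally. Below is a minimal sketch using OpenAI’s open-source tiktoken tokenizer; it assumes GPT-4 Turbo uses the same cl100k_base encoding as GPT-4, and the file name is just a placeholder.

```python
# Rough sketch: estimate whether a document fits in a 128,000-token context window.
# Assumes the "tiktoken" package (pip install tiktoken) and that GPT-4 Turbo uses the
# same cl100k_base encoding as GPT-4 -- treat the result as an estimate.
import tiktoken

CONTEXT_WINDOW = 128_000  # tokens, per OpenAI's announcement

encoding = tiktoken.get_encoding("cl100k_base")

with open("my_novel.txt", "r", encoding="utf-8") as f:  # placeholder file name
    text = f.read()

num_tokens = len(encoding.encode(text))
print(f"Document is ~{num_tokens:,} tokens "
      f"({'fits' if num_tokens <= CONTEXT_WINDOW else 'does not fit'} in a 128k window).")
```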

Context windows are important for LLMs because they help the model stay on topic. If you interact with large language models, you’ll find that they may go off topic if the conversation goes on for too long. This can produce some pretty unhinged and unnerving responses, such as that time when Bing Chat told us that it wanted to be human. GPT-4 Turbo, if all goes well, should keep the insanity at bay for a much longer time than the current model.

GPT-4 Turbo will also be cheaper for developers to run, with the cost reduced to $0.01 per 1,000 input tokens (roughly 750 words), while outputs will cost $0.03 per 1,000 tokens. OpenAI estimates that this new version is three times cheaper than the ones that came before it.
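To put those rates in perspective, here’s a quick back-of-the-envelope calculation using the per-token prices quoted above; the token counts in the example are made-up values for illustration.

```python
# Back-of-the-envelope cost estimate at the GPT-4 Turbo rates quoted above:
# $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens.
INPUT_PRICE_PER_1K = 0.01   # USD
OUTPUT_PRICE_PER_1K = 0.03  # USD

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Hypothetical example: a full 128,000-token prompt that gets a 2,000-token reply.
print(f"${estimate_cost(128_000, 2_000):.2f}")  # -> $1.34
```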

The company also says that GPT-4 Turbo does a better job of following instructions carefully and can be told to return results in a format of choice, such as XML or JSON. GPT-4 Turbo will also support images and text-to-speech, and it still offers DALL-E 3 integration.
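As a rough idea of what that looks like in practice, the sketch below uses the OpenAI Python client to ask the GPT-4 Turbo preview model for JSON output. The model name and the response_format option reflect what OpenAI announced at its developer conference, but treat the exact details as assumptions rather than a guaranteed interface.

```python
# Minimal sketch of asking GPT-4 Turbo for structured JSON output.
# Assumes the official "openai" Python package (v1+) and an OPENAI_API_KEY in the
# environment; "gpt-4-1106-preview" is the preview identifier OpenAI announced and may change.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # ask for valid JSON rather than free-form text
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
        {"role": "user", "content": "List three uses for a 128k-token context window."},
    ],
)

print(response.choices[0].message.content)  # a JSON string
```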


This wasn’t the only big reveal for OpenAI, which also introduced GPTs, custom versions of ChatGPT that anyone can make for their own specific purpose with no knowledge of coding. These GPTs can be made for personal or company use, but can also be distributed to others. OpenAI says that GPTs are available today for ChatGPT Plus subscribers and enterprise users.

Lastly, in light of constant copyright concerns, OpenAI joins Google and Microsoft in saying that it will take legal responsibility if its customers are sued for copyright infringement.

With the enormous context window, the new copyright shield, and an improved ability to follow instructions, GPT-4 Turbo might turn out to be both a blessing and a curse. ChatGPT is fairly good at not doing things it shouldn’t do, but even so, it has a dark side. This new version, while vastly more capable, may also come with the same drawbacks as other LLMs, except this time, it’ll be on steroids.
