People were in awe when ChatGPT came out, impressed by its natural language abilities as an AI chatbot. But when the highly anticipated GPT-4 large language model came out, it blew the lid off what we thought was possible with AI, with some calling it the early glimpses of AGI (artificial general intelligence).
The creator of the model, OpenAI, calls it the company’s “most advanced system, producing safer and more useful responses.” Here’s everything you need to know about it, including how to use it and what it can do.
What is GPT-4?
GPT-4 is a new language model created by OpenAI that can generate text that is similar to human speech. It advances the technology used by ChatGPT, which is currently based on GPT-3.5. GPT is the acronym for Generative Pre-trained Transformer, a deep learning technology that uses artificial neural networks to write like a human.
According to OpenAI, this next-generation language model is more advanced than ChatGPT in three key areas: creativity, visual input, and longer context. In terms of creativity, OpenAI says GPT-4 is much better at both creating and collaborating with users on creative projects. Examples of these include music, screenplays, technical writing, and even “learning a user’s writing style.”
The longer context plays into this as well. GPT-4 can now process up to 25,000 words of text from the user. You can even just send GPT-4 a web link and ask it to interact with the text from that page. OpenAI says this can be helpful for the creation of long-form content, as well as “extended conversations.”
GPT-4 can also now receive images as a basis for interaction. In the example provided on the GPT-4 website, the chatbot is given an image of a few baking ingredients and is asked what can be made with them. It is not currently known if video can also be used in this same way.
Lastly, OpenAI also says GPT-4 is significantly safer to use than the previous generation. In OpenAI’s own internal testing, it was reportedly 40% more likely to produce factual responses and 82% less likely to “respond to requests for disallowed content.”
OpenAI says it’s been trained with human feedback to make these strides, claiming to have worked with “over 50 experts for early feedback in domains including AI safety and security.”
Over the weeks since it launched, users have posted some of the amazing things they’ve done with it, including inventing new languages, detailing how to escape into the real world, and making complex animations for apps from scratch. As the first users have flocked to get their hands on it, we’re starting to learn what it’s capable of. One user apparently made GPT-4 create a working version of Pong in just sixty seconds, using a mix of HTML and JavaScript.
How to use GPT-4
The main way to access GPT-4 right now is to upgrade to ChatGPT Plus. To jump up to the $20 paid subscription, just click on “Upgrade to Plus” in the sidebar in ChatGPT. Once you’ve entered your credit card information, you’ll be able to toggle between GPT-4 and older versions of the LLM. You can even double-check that you’re getting GPT-4 responses since they use a black logo instead of the green logo used for older models.
From there, using GPT-4 is identical to using ChatGPT Plus with GPT-3.5. It’s more capable than the older model and allows you to do things like fine-tune it on your own data to get tailored results that match your needs.
If you don’t want to pay, there are some other ways to get a taste of how powerful GPT-4 is. First off, you can try it out as part of Microsoft’s Bing Chat. Microsoft revealed that it’s been using GPT-4 in Bing Chat, which is completely free to use. Some GPT-4 features are missing from Bing Chat, however, and it’s clearly been combined with some of Microsoft’s own proprietary technology. But you’ll still have access to that expanded LLM (large language model) and the advanced intelligence that comes with it. It should be noted that while Bing Chat is free, it is limited to 15 chats per session and 150 sessions per day.
There are lots of other applications that currently use GPT-4, too, such as the question-and-answer site Quora.
When was GPT-4 released?
GPT-4 was officially announced on March 14, 2023, after Microsoft confirmed ahead of time that it was coming, even though the exact day was unknown. As of now, however, it’s only available with the ChatGPT Plus paid subscription. The current free version of ChatGPT is still based on GPT-3.5, which is less accurate and capable by comparison.
GPT-4 has also been made available as an API “for developers to build applications and services.” Some of the companies that have already integrated GPT-4 include Duolingo, Be My Eyes, Stripe, and Khan Academy. The first public demonstration of GPT-4 was also livestreamed on YouTube, showing off some of its new capabilities.
Is GPT-4 better than GPT-3.5?
The free version of ChatGPT is still based on GPT-3.5, but GPT-4 is much better. It can understand and respond to more types of input, it has more safeguards in place, and it typically provides more concise answers than GPT-3.5.
However, as we noted in our comparison of GPT-4 versus GPT-3.5, the newer version responds noticeably more slowly, since it is a much larger model trained on far more data.
The GPT-4 API
As mentioned, GPT-4 is available as an API to developers who have made at least one successful payment to OpenAI in the past. The company offers several versions of GPT-4 for developers to use through its API, along with legacy GPT-3.5 models.
The API is mostly aimed at developers building new apps, but it has caused some confusion for consumers, too. Plex, for instance, lets you integrate ChatGPT into its Plexamp music player, which calls for a ChatGPT API key. That’s a separate purchase from ChatGPT Plus, so you’ll need to sign up for a developer account to gain API access if you want to use it.
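To give a sense of what the API looks like in practice, here’s a minimal sketch of a GPT-4 request using OpenAI’s Python library (the pre-1.0 openai package). The prompt, temperature setting, and environment-variable setup are illustrative assumptions rather than anything specific to this article.

```python
# Minimal sketch of a GPT-4 API call with OpenAI's Python library (`pip install openai`).
# Assumes an API key stored in the OPENAI_API_KEY environment variable
# on a paid developer account; the prompt below is just an example.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # the longer-context "gpt-4-32k" variant is also offered to some developers
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the rules of Pong in two sentences."},
    ],
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```

API usage is billed per token and is entirely separate from a ChatGPT Plus subscription.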
Is GPT-4 getting worse?
As much as GPT-4 impressed people when it first launched, some users have noticed a degradation in its answers over the following months. The complaints have come from prominent figures in the developer community and have even been posted directly to OpenAI’s forums. It was all anecdotal, though, and an OpenAI executive even took to Twitter to dispute the premise. According to OpenAI, it’s all in our heads.
No, we haven't made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one.
Current hypothesis: When you use it more heavily, you start noticing issues you didn't see before.
— Peter Welinder (@npew) July 13, 2023
Then, a study was published suggesting that the quality of answers had, indeed, worsened with subsequent updates of the model. By comparing GPT-4’s output between March and June, the researchers found that its accuracy on one of the tested tasks (identifying whether a number is prime) dropped from 97.6% to 2.4%.
It’s not a smoking gun, but it certainly seems like what users are noticing isn’t just being imagined.
Where is visual input in GPT-4?
One of the most anticipated features of GPT-4 is visual input, which allows ChatGPT Plus to interact with images, not just text. Being able to analyze images would be a huge boon to GPT-4, but the feature has been held back while OpenAI works through the safety challenges it raises, according to OpenAI CEO Sam Altman.
You can get a taste of what visual input can do in Bing Chat, which has recently opened up the visual input feature for some users. It can also be tested out using a different application called MiniGPT-4. The open-source project was made by some PhD students, and while it’s a bit slow to process the images, it demonstrates the kinds of tasks you’ll be able to do with visual input once it’s officially rolled out to GPT-4 in ChatGPT Plus.
What are the best GPT-4 plugins?
Plugins are one of the best reasons to pay for ChatGPT Plus, opening up a much wider range of things the model can do. Two of the most powerful examples were created by OpenAI itself: the Code Interpreter and web browser plugins.
By using these plugins in ChatGPT Plus, you can greatly expand the capabilities of GPT-4. Code Interpreter can run Python in a persistent session and even handle file uploads and downloads, while the web browser plugin gives GPT-4 access to the internet, letting it fetch live information on your behalf instead of relying only on its training data.
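To make that concrete, here’s an illustrative sketch of the kind of Python a Code Interpreter session might run against an uploaded spreadsheet. The file name, column names, and analysis are all hypothetical; the point is that the plugin executes code like this in a sandbox and hands the results, and any generated files, back to the chat.

```python
# Hypothetical example of what Code Interpreter might run on an uploaded CSV.
# The file and column names are made up purely for illustration.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("uploaded_sales.csv")            # file supplied via a chat upload
monthly = df.groupby("month")["revenue"].sum()    # aggregate revenue by month

monthly.plot(kind="bar", title="Revenue by month")
plt.savefig("revenue_by_month.png")               # offered back as a downloadable file
print(monthly.describe())                         # summary stats returned in the reply
```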
Some of the other best GPT-4 plugins include Zapier, Wolfram, and Speak, all of which allow you to take the AI in new directions.
What are GPT-4’s limitations?
While discussing the new capabilities of GPT-4, OpenAI also notes some of the limitations of the new language model. Like previous versions of GPT, OpenAI says the latest model still has problems with “social biases, hallucinations, and adversarial prompts.”
In other words, it’s not perfect. It’ll still get answers wrong, and there have been plenty of examples shown online that demonstrate its limitations. But OpenAI says these are all issues the company is working to address, and in general, GPT-4 is “less creative” with answers and therefore less likely to make up facts.
The other primary limitation is that the GPT-4 model was trained on internet data stopping in 2021. The web browser plugin helps get around this limitation, though.
Does Bing Chat use GPT-4?
Microsoft originally stated that the new Bing, or Bing Chat, was more powerful than ChatGPT. Since OpenAI’s chatbot was running on GPT-3.5 at the time, there was an implication that Bing Chat could be using GPT-4. And now, Microsoft has confirmed that Bing Chat is, indeed, built on GPT-4.
Still, features such as visual input weren’t initially available on Bing Chat, so it’s not yet clear exactly which features have been integrated and which have not.
Regardless, Bing Chat clearly has been upgraded with the ability to access current information via the internet, a huge improvement over the current version of ChatGPT, which can only draw from the training it received through 2021.
In addition to internet access, the AI model used for Bing Chat is much faster, something that is extremely important when taken out of the lab and added to a search engine.
An evolution, not a revolution?
We haven’t tried out GPT-4 in ChatGPT Plus yet ourselves, but it’s bound to be more impressive, building on the success of ChatGPT. In fact, if you’ve tried out the new Bing Chat, you’ve apparently already gotten a taste of it. Just don’t expect it to be something brand new.
Prior to the launch of GPT-4, OpenAI CEO Sam Altman said in a StrictlyVC interview posted by Connie Loizos on YouTube that “people are begging to be disappointed, and they will be.”
There, Altman acknowledged the potential of AGI (artificial general intelligence) to wreak havoc on world economies and argued that a gradual rollout of smaller changes is better than a shocking advancement that gives the world little opportunity to adapt. As impressive as GPT-4 seems, it’s certainly more of a careful evolution than a full-blown revolution.