
Here’s how to rewatch the first public demo of ChatGPT-4

OpenAI hosted a developer live stream that showed the first public demo of ChatGPT-4. The new Large Language Model (LLM) has reportedly been in development for a few years, and Microsoft confirmed it’s the tech powering the company’s new Bing Chat service.

The presentation started at 1 p.m. PT on Monday, March 14. OpenAI President and co-founder Greg Brockman led the presentation, walking through what GPT-4 is capable of, as well as its limitations. You can see a replay of the event below.

GPT-4 Developer Livestream

OpenAI has already announced that ChatGPT-4 will only be available to ChatGPT Plus subscribers. The free version of ChatGPT will continue to run on the GPT-3.5 model.

The live stream focused on how developers can leverage GPT-4 in their own AI applications. OpenAI has recently made its API available to developers, and companies like Khan Academy and Duolingo have already announced that they plan on using GPT-4 in their own apps.
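For developers curious what that integration looks like, here is a minimal sketch of a chat-completion request targeting the GPT-4 model through OpenAI's API. The payload shape follows the chat completions format (a `model` name plus a list of role-tagged `messages`); the prompt text and helper function name are illustrative, and actually sending the request requires the `openai` package and an API key.

```python
def build_gpt4_request(user_prompt):
    """Build a chat-completion request payload for the GPT-4 model."""
    return {
        "model": "gpt-4",
        "messages": [
            # A system message sets the assistant's behavior;
            # the user message carries the actual prompt.
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_gpt4_request("Summarize the GPT-4 developer livestream.")
# With an API key configured, a developer would pass this payload
# to OpenAI's chat completions endpoint to get a model response.
```

Swapping `"gpt-4"` for `"gpt-3.5-turbo"` in the same payload is all it takes to target the free tier's model instead, which is part of why existing ChatGPT apps can adopt GPT-4 so quickly.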

Although speculation has run wild about what GPT-4 could be capable of, OpenAI describes it as an evolution of the existing model. The new model will be able to mimic a particular writing style more closely, for example, as well as process up to 25,000 words of text from the user.

OpenAI says that ChatGPT-4 doesn't need a text prompt, either. It can accept an image as a prompt and generate a response based on it.

The new version also includes updated safety features. OpenAI claims GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5. It's tough to say what that means in practice at the moment, however.

Although the new model could vastly expand the capabilities of ChatGPT, it also comes with some worries. Microsoft’s Bing Chat has already shown some unhinged responses, and it uses the GPT-4 model. OpenAI warns that the new model could still have these issues, occasionally showing “social biases, hallucinations, and adversarial prompts.”

Jacob Roach
Lead Reporter, PC Hardware
Jacob Roach is the lead reporter for PC hardware at Digital Trends. In addition to covering the latest PC components, from…