
This one image breaks ChatGPT each and every time

(Image: ChatGPT's response to a prompt that includes an image featuring some digital noise. Credit: Digital Trends)

Sending images as prompts to ChatGPT is still a fairly new feature, but in my own testing, it works fine most of the time. However, someone has just found an image that ChatGPT can't seem to handle, and it's definitely not what you'd expect.

The image, spotted by brandon_xyzw on X (formerly Twitter), shows some digital noise. It's nothing special, really: just a black background with some vertical lines all over it. But if you try to show it to ChatGPT, the image breaks the chatbot every time, without fail.

I tried feeding the image to ChatGPT with additional text prompts and without, as part of an ongoing conversation, and at the beginning of a new chat. The only response I got was ChatGPT's error message, “Hmm … something seems to have gone wrong.” Attempting to generate a new response didn't help either.

Whatever you do, don't show ChatGPT this image pic.twitter.com/DwSkmz0xP6

— Brandon (@brandon_xyzw) January 10, 2024

Interestingly enough, ChatGPT responded when I sent it a screenshot of the tweet containing the image, and it described the screenshot just fine. However, when I then tried to show it the actual image in that same conversation, it broke again.

What’s so special about this image that makes ChatGPT hate it so? It’s hard to say. I looked up similar images on the web and found that ChatGPT breaks when faced with some of them, but not all of them. It’s most likely just a bug, given that it handled the screenshot just fine, as well as similar images featuring digital noise.

(Image: Bing Chat responds to an image prompt. Credit: Digital Trends)

I tried the image with some of the other popular chatbots, and they had no problem telling me more about it. While ChatGPT struggled to respond to my prompts, Bing Chat analyzed the image and described it to me in some detail. Where Bing Chat went for a more technical analysis, Google Bard interpreted the image instead, saying the following: “The colors red and blue are often used together to represent opposites or complementary forces. In this case, the red and blue lines could be seen as representing positive and negative energy, or order and chaos.”

Ultimately, while ChatGPT works well enough most of the time, it’s not without fault. It sometimes forgets what it’s allowed or not allowed to do, declining to perform simple tasks that it had no problem with just two messages earlier. Errors with generating responses happen quite frequently too, but this particular image consistently breaks the chatbot. We’ll have to try this again in a couple of days and see if OpenAI has fixed the issue.

Monica J. White