ChatGPT gets a private mode for secret AI chats. Here’s how to use it

OpenAI just launched a new feature that lets you disable your chat history in ChatGPT, making it easier to keep your conversations private.

Previously, every new chat appeared in the sidebar on the left, making it easy for anyone nearby to get a quick summary of how you’ve been using the AI for fun, schoolwork, or productivity. That can be a problem when you’re discussing something you want to keep secret.

I tested ChatGPT's privacy option to disable history

A perfect example is asking ChatGPT for help with gift ideas, an excellent use for OpenAI’s chatbot. If the recipient likes to dig for clues, those clues won’t be hard to find when a ChatGPT window is left open in your browser.

I tested this new privacy feature by disabling chat history, then asking a somewhat shocking question about faking a Windsor knot for a necktie. The option to disable history is in Settings, under Data Controls.

OpenAI also recently added an export option to the Data Controls section, another nod to privacy and personal control over your data. Both features, disabling chat history and exporting your data, are available to free users and subscribers alike.

When I clicked the big green button at the left to re-enable chat history, my embarrassing conversation revealing my lack of knot skills was nowhere to be seen. What a relief!

OpenAI notes that unsaved chats won’t be used to train its AI models; however, they are retained for 30 days. OpenAI says it will review these chats only when needed to check for abuse. After 30 days, unsaved chats are permanently deleted.

That means your chats aren’t entirely private, and you should be aware that OpenAI employees might read them. This could be a concern for business use, since proprietary information might accidentally be shared with ChatGPT.

OpenAI said it is working on a new ChatGPT Business subscription to give enterprise users and professionals more control over their data. There are already business-focused AIs such as JasperAI.
