
DuckDuckGo’s new AI service keeps your chatbot conversations private


DuckDuckGo released its new AI Chat service on Thursday, letting users anonymously access popular chatbots like GPT-3.5 and Claude 3 Haiku without sharing their personal information, while also preventing those companies from training their AIs on the conversations. AI Chat essentially works by inserting itself between the user and the model, like a high-tech game of telephone.

From the AI Chat home screen, users can select which chat model they want to use — Meta’s Llama 3 70B model and Mixtral 8x7B are available in addition to GPT-3.5 and Claude — then begin conversing with it as they normally would. DuckDuckGo connects to that chat model as an intermediary, substituting the user’s IP address with one of its own. “This way it looks like the requests are coming from us and not you,” the company wrote in a blog post.
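DuckDuckGo hasn’t published implementation details, but the intermediary it describes behaves much like a simple relay: the chat request leaves from DuckDuckGo’s servers rather than the user’s machine, so the model provider only ever sees the relay. The sketch below is purely illustrative — the endpoint URL, header names, and payload shape are hypothetical stand-ins, not DuckDuckGo’s actual API.

```python
# Illustrative only: a bare-bones anonymizing relay in the spirit of what
# DuckDuckGo describes. The provider URL, headers, and payload are
# hypothetical, not DuckDuckGo's real implementation.
import requests

PROVIDER_URL = "https://api.example-model-provider.com/v1/chat"  # hypothetical endpoint

def relay_chat(user_prompt: str) -> str:
    # The request originates from this server, so the provider's logs record
    # the relay's IP address rather than the end user's.
    response = requests.post(
        PROVIDER_URL,
        json={"prompt": user_prompt},          # only the prompt text is forwarded
        headers={"User-Agent": "anon-relay"},  # no cookies, auth tokens, or other
                                               # user-identifying metadata passed along
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["reply"]
```

The point of the pattern is simply that the provider sees the relay, not the person typing.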

As with the company’s anonymized search feature, all metadata is stripped from user queries, so even though DuckDuckGo warns that “the underlying model providers may store chats temporarily,” there’s no way to personally identify users based on those chats. And, as The Verge notes, DuckDuckGo also has agreements in place with those AI companies that prevent them from using chat prompts and outputs to train their models and require them to delete any saved data within 30 days.

Data privacy is a growing concern in the AI community, even as the number of people using AI tools both personally and at work continues to rise. A Pew Research study from October found that roughly eight in 10 “of those familiar with AI say its use by companies will lead to people’s personal information being used in ways they won’t be comfortable with.” While most chatbots already allow users to opt out of having their data collected, those options are often buried in layers of menus, with the onus on the user to find and select them.

AI Chat is available at both duck.ai and duckduckgo.com/chat. It’s free to use “within a daily limit,” though the company is currently considering a more expansive paid option with higher usage limits and access to more advanced models. This new service follows last year’s release of DuckDuckGo’s DuckAssist, which provides anonymized, AI-generated synopses of search results, akin to Google’s SGE.

Andrew Tarantola