
ChatGPT is violating your privacy, says major GDPR complaint

Ever since the first generative artificial intelligence (AI) tools exploded onto the tech scene, there have been questions over where they get their training data and whether they're harvesting your private information to train their products. Now, ChatGPT maker OpenAI could be in hot water for exactly these reasons.

According to TechCrunch, a complaint has been filed with the Polish Office for Personal Data Protection alleging that ChatGPT violates a large number of rules found in the European Union’s General Data Protection Regulation (GDPR). It suggests that OpenAI’s tool has been scooping up user data in all sorts of questionable ways.


The complaint says that OpenAI has broken the GDPR’s rules on lawful basis, transparency, fairness, data access rights, and privacy by design.

These are serious charges. After all, the complainant is not alleging that OpenAI has breached just one or two rules, but that it has contravened a whole set of protections designed to stop people's private data from being used and abused without their permission. Seen one way, it could be taken as an almost systematic flouting of the rules protecting the privacy of millions of users.

Chatbots in the firing line


It’s not the first time OpenAI has found itself in the crosshairs. In March 2023, it ran afoul of Italian regulators, leading to ChatGPT getting banned in Italy for violating user privacy. It’s another headache for the viral generative AI chatbot at a time when rivals like Google Bard are rearing their heads.

And OpenAI is not the only chatbot maker raising privacy concerns. In August 2023, Facebook owner Meta announced that it would start making its own chatbots, prompting fears among privacy advocates over what personal data would be harvested by the notoriously privacy-averse company.

Breaches of the GDPR can result in fines of up to 4% of a company's global annual turnover, meaning OpenAI could be facing a massive penalty if the complaint is upheld. Regulators could also force OpenAI to amend ChatGPT until it complies with the rules, as happened to the tool in Italy.

Huge fines could be coming


The Polish complaint was filed by security and privacy researcher Lukasz Olejnik, who first became concerned when he used ChatGPT to generate a biography of himself, only to find it riddled with factually inaccurate claims.

He then contacted OpenAI, asking for the inaccuracies to be corrected, and also requested information about the data OpenAI had collected on him. However, he states that OpenAI failed to provide all the information it is required to under the GDPR, suggesting that it was being neither transparent nor fair.

The GDPR also states that people must be allowed to correct the information that a company holds on them if it is inaccurate. Yet when Olejnik asked OpenAI to rectify the erroneous biography ChatGPT wrote about him, he says OpenAI claimed it was unable to do so. The complaint argues that this suggests the GDPR’s rule “is completely ignored in practice” by OpenAI.

It’s not a good look for OpenAI, which appears to be infringing numerous provisions of an important piece of EU legislation. Since the alleged violations could affect millions of people, the penalties could be very steep indeed. Keep an eye on how this plays out, as it could lead to massive changes not just for ChatGPT, but for AI chatbots in general.
