
Don’t fall for it — ChatGPT scams are running rampant across social media

ChatGPT-related malware and scams continue to become more prevalent as interest in OpenAI's chatbot grows.

There have been a number of instances of bad actors taking advantage of the popularity of ChatGPT since its introduction in November 2022. Many have used fake ChatGPT interfaces to scam unsuspecting mobile users out of money or infect their devices with malware. The most recent threat is a mix of both, with hackers targeting Windows and Android users through phishing pages that aim to steal private data, which could include credit card and other banking information, according to Bleeping Computer.

Chat GPT PC Online Redline redirect.

I redirected it to closed.

/chat-gpt-pc.online@OpenAI #cybersecurity #infosec pic.twitter.com/lXY5zUyMBj

— Dominic Alvieri (@AlvieriD) February 12, 2023

Security researcher Dominic Alvieri first observed the suspicious activity at chat-gpt-pc.online, a domain hosting the info-stealing Redline malware disguised as a ChatGPT desktop download for Windows. The website, which featured ChatGPT branding, was advertised on a Facebook page as a legitimate OpenAI link to lure people into visiting the malicious site.

Alvieri also found fake ChatGPT apps on Google Play and various third-party Android app stores that could deliver malware to devices if downloaded.

Other researchers have corroborated the initial findings, uncovering additional malware used in separate malicious campaigns. Researchers at Cyble discovered chatgpt-go.online, which distributes malware that "steals clipboard contents," including the Aurora stealer. Another domain, chat-gpt-pc[.]online, distributes the Lumma stealer, while yet another, openai-pc-pro[.]online, distributes malware that has not yet been identified.

Cyble has also connected the domain pay.chatgptftw.com to a credit card-stealing page that poses as a payment page for ChatGPT Plus.

Meanwhile, Cyble said it has uncovered more than 50 dubious mobile applications posing as ChatGPT, either by using its branding or by adopting names that could easily confuse users. The research team determined that all of them are fake and harmful to devices. One is an app called chatGPT1, an SMS-billing fraud app that likely steals credit card information in a manner similar to the payment page described above. Another, AI Photo, hosts the Spynote malware, which can access and "steal call logs, contact lists, SMS, and files" from a device.

The influx of malware and paid scams began when OpenAI started throttling ChatGPT's speed and access due to its booming popularity. The first fake paid mobile apps hit the Apple App Store and Google Play Store in December 2022 but didn't attract media attention until nearly a month later, in mid-January. The first known major ChatGPT hack soon followed in mid-February, when bad actors used OpenAI's GPT-3 API to create a dark version of ChatGPT capable of generating phishing emails and malware scripts. The bots work through the messaging app Telegram.

Now, it seems to be open season for fakes and alternatives since OpenAI introduced its paid ChatGPT Plus tier at $20 per month on February 10. However, users should keep in mind that the chatbot remains a browser-based tool that can be accessed only at chat.openai.com. There are currently no official ChatGPT mobile or desktop apps for any platform.
