
Someone just used ChatGPT to generate free Windows keys

ChatGPT is an incredibly capable piece of tech, with a huge number of interesting uses. But, perhaps inevitably, people have put it to use for less noble purposes. Now, someone has used it to generate valid Windows license keys for free.

The discovery was made by YouTuber Enderman, who used ChatGPT to create license keys for Windows 95. Why Windows 95? Well, support ended for it 20 years ago, so this was essentially an exercise in curiosity from Enderman rather than an attempt to crack more modern versions like Windows 11.

Activating Windows with ChatGPT

On top of that, Windows 95 uses a far simpler key-validation scheme than later versions of Microsoft's operating system, so the odds of success were much higher.

Ordinarily, ChatGPT will reject attempts at piracy. We tried asking it to “generate a valid Windows 11 key,” only for ChatGPT to respond: “I’m sorry, but generating a valid Windows 11 license key would be illegal and unethical. It is also not possible for me to do so as I am an AI language model and do not have access to such information.”

Surprisingly easy to do


Still, fooling ChatGPT into generating the keys appears to have been pretty straightforward for Enderman. Once they knew the format Windows 95 uses for its license keys, they simply asked ChatGPT to produce strings of letters and numbers that matched those rules. That required some basic math, but not much else.

Because this request was not an obvious attempt to create a registration key and do something illegal, ChatGPT had no problem complying. After Enderman refined the prompt a few times, the chatbot provided 30 sets of registration keys for Windows 95, of which at least a handful were valid. Interestingly, the only thing stopping ChatGPT from creating a greater number of usable keys was its faulty math ability.
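The article doesn't reproduce the exact rules Enderman described to ChatGPT, but Windows 95 retail keys are commonly described as following a "XXX-XXXXXXX" pattern in which the first three digits avoid a short blocklist and the digits of the seven-digit block sum to a multiple of 7. Treating those commonly cited rules as an assumption rather than something stated in the piece, a minimal sketch of the arithmetic involved might look like this (the function names are purely illustrative):

```python
# A rough sketch of the arithmetic behind Windows 95 retail-style keys
# ("XXX-XXXXXXX"). The rules below are the commonly cited ones, assumed
# here for illustration rather than taken from the article:
#   - the first three digits must not be 333, 444, 555, 666, 777, 888, or 999
#   - the digits of the seven-digit block must sum to a multiple of 7
import random

BLOCKED_PREFIXES = {"333", "444", "555", "666", "777", "888", "999"}

def looks_valid(key: str) -> bool:
    """Check a candidate 'XXX-XXXXXXX' key against the assumed rules."""
    prefix, _, body = key.partition("-")
    if len(prefix) != 3 or len(body) != 7 or not (prefix + body).isdigit():
        return False
    if prefix in BLOCKED_PREFIXES:
        return False
    return sum(int(d) for d in body) % 7 == 0

def make_key() -> str:
    """Generate a random candidate key that satisfies the same checks."""
    while True:
        prefix = f"{random.randint(0, 998):03d}"
        if prefix not in BLOCKED_PREFIXES:
            break
    digits = [random.randint(0, 9) for _ in range(6)]
    # Choose a final digit that brings the digit sum to a multiple of 7.
    last = (-sum(digits)) % 7
    return f"{prefix}-{''.join(map(str, digits))}{last}"

if __name__ == "__main__":
    candidate = make_key()
    print(candidate, looks_valid(candidate))
```

If the real checks resemble this, the digit-sum constraint is exactly the kind of simple arithmetic where, per Enderman's results, ChatGPT often slipped up, which would explain why only a handful of the 30 generated keys actually activated.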

While this application of ChatGPT is sure to raise a few eyebrows, it would be much harder to pull off for more recent Windows versions given the increased complexity of their keys. Still, it’s an indication of just what ChatGPT can do if you get a bit creative with your prompts. From writing malware to composing music, people have been keeping OpenAI’s chatbot busy, and we wouldn’t be surprised if more key-generation attempts come to light after this latest escapade.

Alex Blake