
I used ChatGPT to interpret my astrological makeup, and it was surprisingly good

The ChatGPT chatbot prototype is available as a free research preview, and like many tech enthusiasts, I spent the weekend testing out a number of inquiries to see what kinds of results the tool would produce.

Delving into my own unique interest, I was fascinated when I discovered that I could use ChatGPT to interpret tarot cards and astrological placements.

While technologically inclined people have quizzed the chatbot on detailed coding and science questions, I started my own exploration with simpler questions and requests, such as “when was the solar system made,” “how do you make banana bread,” and “explain Mary Shelley’s Frankenstein in four paragraphs.”

Then, glancing at the spread of tarot cards assembled next to me, I had a more unique idea and inputted, “what is the Ace of Swords in tarot?” I’d later update the request to “explain the Ace of Swords in tarot,” which gave an even more detailed response. However, both responses were up to a standard similar to that of the website I use to interpret my tarot cards, a website easily found on the first page of Google results for “Ace of Swords.”
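For readers who would rather script these queries than type them into the chat window, the same experiment can be run against OpenAI's API. What follows is a minimal sketch, assuming the official `openai` Python package (v1 or later) and an `OPENAI_API_KEY` environment variable; the model name is an assumption, and note that the article itself used the free web preview, not the API.

```python
# Hypothetical sketch: sending the article's two tarot prompts through
# OpenAI's API instead of the chat.openai.com web interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "what is the Ace of Swords in tarot?",
    "explain the Ace of Swords in tarot",  # the rephrasing that drew a more detailed reply
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; substitute any available chat model
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```

As the article notes, rephrasing the same question, here swapping “what is” for “explain,” can draw out a noticeably more detailed answer.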

ChatGPT astrology interpretation.

Later, my curiosity still piqued, I moved on to something a little more challenging and inputted “explain what it’s like to have the sun in Pisces and moon in Sagittarius.” This yielded a detailed, three-paragraph response that I found sufficiently accurate as someone already knowledgeable about astrological natal charts, particularly my own. A novice who connected with the information would likely come away enlightened, though it would be just the tip of the iceberg if ChatGPT were the first stop on their astrological journey.

A plus of the AI chatbot is that the information is presented immediately; you don’t have to sift through several webpages to find what you want. Rephrasing your inquiry can also bring up different results if you desire.

A minus is that you don’t get the care and nuance that come from an actual person putting their unique touch on the information. Astrology websites often add details such as notable people who share the astrological makeup you are researching. For example, Victor Hugo and Albert Einstein both had the Pisces sun and Sagittarius moon combination.

ChatGPT notable people astrology interpretation.

ChatGPT would not offer up that information unless you asked for it directly. When I did, many of the results were wildly incorrect, naming people such as Bruce Lee, who had a Sagittarius sun, and Ellen DeGeneres, who has a Pisces ascendant.

Developer OpenAI has warned that some of ChatGPT’s limitations surface when not enough information is available; the generator has the potential to fill in gaps with incorrect data. It is also worth noting that many people who have toyed with the AI chatbot have done so within their realm of interest and expertise, and can more easily pick up on errors in its responses.

Will ChatGPT put professional astrologers and tarot readers out of business any time soon? Likely not. But I found the information I gathered solid enough to help your everyday techie interested in learning more about astrology get off to a good start.

Fionna Agomuoh