
Why pay for GPT-4? This AI tool gives it to you for free, plus more

An AI company you’ve probably never heard of just launched an advanced chatbot that provides free access to OpenAI’s GPT-4 and lets you save and share conversations, generate images, and more.

Forefront AI announced the new service via a tweet that contains video demonstrations of the various features. Barsee, a well-known AI enthusiast, amplified the message with a tweet that led to a surge in traffic.

Today we’re launching Forefront chat—a better ChatGPT experience—in free alpha. Sign up to get free access to GPT-4, image generation, custom personas, shareable chats, and much more: https://t.co/lqsY9bkvl8 pic.twitter.com/CLht1pmQCn

— Forefront (@ForefrontAI) April 21, 2023

Your chats are saved and automatically sorted into folders in a sidebar on the left, and you can run more than one conversation with the chatbot by clicking the new chat button to open another tab. There’s a dropdown menu to choose between GPT-3.5 and GPT-4, and a Share button that copies a link to your clipboard. Paste that link into an email or a social media post to invite others to Forefront and start them off with your chat.

As if these advanced features weren’t enough, Forefront lets you choose who to speak with. There are 88 personas to choose from, ranging from historical figures like Mark Twain, great philosophers (Socrates), and brilliant scientists (Stephen Hawking) to pop stars (Taylor Swift), novelists (Stephen King), and even fictional characters such as Freddy Krueger, Scooby Doo, and Charles Xavier.

Forefront AI includes a large number of premade personas.

Character AI specializes in chatbots that can take on various personalities, including many of the same personas Forefront offers. Character AI also has a community feed for sharing chats, so it seems Forefront drew some inspiration from competing AI services.

Image generation works with any of the personas. I chose Salvador Dali to describe a futuristic MacBook, then used the #imagine command to ask Forefront to create an image. The result was interesting, and some refinement of the prompt could lead to a better rendition.

Forefront AI generated this futuristic MacBook with its Salvador Dali persona.

Forefront doesn’t accept image uploads as input the way OpenAI’s GPT-4 does, and it’s unclear whether it can access the internet like Bing Chat and ChatGPT plugins can. I couldn’t check for internet connectivity because I began running into problems with the AI becoming unresponsive, probably due to a surge in traffic. The GPT-4 error message read, “GPT-4 rate limit exceeded (>5 message every 3 hours). Time remaining: 168 minutes.” GPT-3.5 also gave an empty reply.
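For context, that error describes a simple sliding-window cap of five GPT-4 messages per three hours. Forefront hasn’t published how it enforces the limit, so the short Python sketch below is only a hypothetical illustration of the quota math, not Forefront’s actual API or implementation:

```python
import time
from collections import deque

# Hypothetical illustration of a "5 messages every 3 hours" sliding window.
# Nothing here calls Forefront's service; it only models the quota arithmetic.
WINDOW_SECONDS = 3 * 60 * 60   # 3 hours
MAX_MESSAGES = 5

sent_times = deque()  # timestamps of messages sent within the current window

def can_send(now=None):
    """Return (allowed, minutes_to_wait) under the sliding-window limit."""
    now = time.time() if now is None else now
    # Drop timestamps that have aged out of the 3-hour window.
    while sent_times and now - sent_times[0] >= WINDOW_SECONDS:
        sent_times.popleft()
    if len(sent_times) < MAX_MESSAGES:
        sent_times.append(now)
        return True, 0
    # Oldest message in the window determines how long until a slot frees up.
    wait_seconds = WINDOW_SECONDS - (now - sent_times[0])
    return False, int(wait_seconds // 60) + 1

if __name__ == "__main__":
    for i in range(7):
        ok, wait_min = can_send()
        status = "sent" if ok else f"rate limited, ~{wait_min} min remaining"
        print(f"message {i + 1}: {status}")
```

Running this, the sixth and seventh attempts are refused with a countdown in minutes, which matches the shape of the message Forefront displayed during my test.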

Forefront has been providing AI services since 2022, and the company has thus far been focused on offering customizable solutions for enterprise customers. In other words, it exists to make a profit.

Forefront Chat is free in its alpha test phase but probably won’t be free forever. If you want to try it out without a subscription, it’s best to do so soon and choose your first few messages wisely, in case it’s still as limited as it was for me. Forefront Chat is available at chat.forefront.ai.
