
Brave browser takes on ChatGPT, but not how you’d expect

Artificial intelligence (AI) is all the rage these days, and a bunch of Silicon Valley heavyweights are vying with OpenAI’s ChatGPT to shake up the tech landscape. Brave is the latest contender to take a swing, and the privacy-focused company has just announced its own AI-based tool for its web browser.

Called Summarizer, the new feature seeks to give you a quick answer to anything you ask it. It does this by pulling information from a variety of sources and rolling it into a single, coherent block of text at the top of your search results.


Right now, Google has a similar feature on its search results page, but the difference here is that Google takes its summarized text from a single source that it deems trustworthy, while Brave collates several sources and uses AI cleverness to merge them into a unified answer.

Brave’s developer explains that Summarizer is not a generative AI, which differentiates it from tools like ChatGPT. The company states that generative AIs have their problems and can “spout unsubstantiated assertions,” which would obviously be a problem for a tool designed to create reliable summaries and trustworthy answers. Microsoft’s addition of ChatGPT to Bing has been particularly badly behaved in this respect.

Instead, Brave’s Summarizer is powered by large language models (LLMs) that were trained to “process multiple sources of information present on the Web.” Not only does this aid accuracy, but its results are more concise too.
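To make the “collate several sources” idea concrete, here is a deliberately toy sketch in Python. It merges snippets from multiple results into one answer block by ranking sentences on word overlap with the query. This is purely illustrative: Brave’s actual Summarizer uses trained large language models, and the function and scoring here are invented for this example.

```python
import re

def summarize(query: str, sources: list[str], max_sentences: int = 2) -> str:
    """Toy multi-source summarizer: pick the sentences across all
    sources that share the most words with the query, then join them
    into a single answer block."""
    query_words = set(re.findall(r"\w+", query.lower()))
    scored = []
    for text in sources:
        # Naive sentence split on terminal punctuation.
        for sent in re.split(r"(?<=[.!?])\s+", text.strip()):
            overlap = len(query_words & set(re.findall(r"\w+", sent.lower())))
            scored.append((overlap, sent))
    # Stable sort keeps source order among equally relevant sentences.
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    picked = []
    for _, sent in ranked[:max_sentences]:
        if sent not in picked:
            picked.append(sent)
    return " ".join(picked)

sources = [
    "The Last of Us is a TV show on HBO. It is based on a video game.",
    "Critics praised The Last of Us TV show for its writing.",
]
print(summarize("The Last of Us TV show", sources))
```

A real system would deduplicate near-identical claims, attribute each sentence to its source, and use a trained model rather than word overlap, but the basic shape — gather candidate passages from several pages, rank for relevance, emit one unified block — is the same.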

A different AI future?

[Image: The AI-based Summarizer feature in the Brave web browser, shown summarizing an answer to a search query about The Last of Us TV show. Credit: Brave]

Beneath the summarized text is a link to the cited sources, so you can always check out the original text if you want more context. Brave says those links will always be present to ensure answers are properly attributed and so that users can “assess the trustworthiness of the sources.” That should help to “mitigate the authority biases of large language models,” Brave contends.

As well as the main summaries, the new feature will start to replace the description text that appears beneath the headline of each search result. Previously this text was a snippet taken from somewhere within each search result. Now, it will be an AI-created summary of the entire text.

Brave’s developer states that it is not yet convinced LLMs can fully replace traditional search, but rather feels they could help users find their way around search results. When LLMs are applied to other features in the browser, Brave says, the results could be “truly fruitful and revolutionary.”

As it’s still early in the development of Summarizer, Brave says the feature could produce AI “hallucinations” that merge unrelated text into the snippets, and could also end up producing false or offensive text. The company is working to improve Summarizer based on user feedback and hopes to iron out these kinks.

Brave’s Summarizer tool is available now, but if you don’t like the sound of it, you can disable it in the app’s settings. We’ll have to see whether it can provide a reliable alternative to ChatGPT and help shape the future of search.

Alex Blake