

Meta wants to supercharge Wikipedia with an AI upgrade

Wikipedia has a problem. And Meta, the not-too-long-ago rebranded Facebook, may just have the answer.

Let’s back up. Wikipedia is one of the largest-scale collaborative projects in human history, with more than 100,000 volunteer human editors contributing to the construction and maintenance of a mind-bogglingly large, multi-language encyclopedia consisting of millions of articles. Upward of 17,000 new articles are added to Wikipedia each month, while tweaks and modifications are continuously made to its existing corpus of articles. The most popular Wikipedia articles have been edited thousands of times, reflecting the very latest research, insights, and up-to-the-minute information.

The challenge, of course, is accuracy. The very existence of Wikipedia is proof that large numbers of humans can come together to create something positive. But in order to be genuinely useful, and not a sprawling graffiti wall of unsubstantiated claims, Wikipedia articles must be backed up by facts. This is where citations come in. The idea – and for the most part this works very well – is that Wikipedia users and editors alike can confirm facts by adding or clicking hyperlinks that track statements back to their source.

Citation needed

Say, for example, I want to confirm the entry on President Barack Obama’s Wikipedia article stating that Obama traveled to Europe and then Kenya in 1988, where he met many of his paternal relatives for the first time. All I have to do is look at the citations for the sentence and, sure enough, there are three separate book references that seemingly confirm the fact checks out.

By contrast, “citation needed” may be the two most damning words in all of Wikipedia, precisely because they suggest there’s no evidence the author didn’t simply conjure the claim out of the digital ether. The words “citation needed” affixed to a Wikipedia claim are the equivalent of telling someone a fact while making finger quotes in the air.


Citations don’t tell us everything, though. If I were to tell you that, last year, I was the 23rd highest-earning tech journalist in the world and that I once gave up a lucrative modeling career to write articles for Digital Trends, it would appear superficially plausible because there are hyperlinks to support my delusions.

The fact that the hyperlinks don’t support my alternative facts at all, but rather lead to unrelated pages on Digital Trends, is only revealed when you click them. The 99.9 percent of readers who have never met me might leave this article with a slew of false impressions, not the least of which is the surprisingly low barrier to entry to the world of modeling. In a hyperlinked world of information overload, in which we increasingly splash around in what Nicholas Carr calls “The Shallows,” the mere existence of a citation can itself appear to be a factual endorsement.

Meta wades in

But what happens when Wikipedia editors add citations that don’t actually link to pages supporting the claims? As an illustration, a recent Wikipedia article on Blackfeet Tribe member Joe Hipp described how Hipp was the first Native American boxer to challenge for the WBA World Heavyweight title and linked to what seemed to be an appropriate webpage. However, the webpage in question mentioned neither boxing nor Joe Hipp.

In the case of the Joe Hipp claim, the Wikipedia factoid was accurate, even if the citation was inappropriate. Nonetheless, it’s easy to see how this could be used, either deliberately or otherwise, to spread misinformation.


It’s here that Meta thinks it’s come up with a way to help. Meta AI (the AI research and development lab for the social media giant) has developed what it claims is the first machine learning model able to automatically scan hundreds of thousands of citations at once to check whether they support the corresponding claims. While this would be far from the first bot Wikipedia uses, it could be among the most impressive – although it’s still in the research phase and not yet in use on actual Wikipedia.

“I think we were driven by curiosity at the end of the day,” Fabio Petroni, research tech lead manager for the FAIR (Fundamental AI Research) team of Meta AI, told Digital Trends. “We wanted to see what was the limit of this technology. We were absolutely not sure if [this AI] could do anything meaningful in this context. No one had ever tried to do something similar [before].”

Understanding meaning

Trained using a dataset consisting of 4 million Wikipedia citations, Meta’s new tool is able to effectively analyze the information linked to a citation and then cross-reference it with the supporting evidence. And this isn’t just a straightforward text string comparison, either.

“There is a component like that, [looking at] the lexical similarity between the claim and the source, but that’s the easy case,” Petroni said. “With these models, what we have done is to build an index of all these webpages by chunking them into passages and providing an accurate representation for each passage … That is not representing word-by-word the passage, but the meaning of the passage. That means that two chunks of text with similar meanings will be represented in a very close position in the resulting n-dimensional space where all these passages are stored.”
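To make Petroni’s description concrete, here is a minimal sketch of that kind of meaning-based (rather than word-by-word) matching, using the open-source sentence-transformers library. This is an illustration of the general technique, not Meta’s actual model; the encoder name and example passages are placeholders.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder encoder; Meta's system uses its own, much larger models
model = SentenceTransformer("all-MiniLM-L6-v2")

claim = "Joe Hipp was the first Native American to challenge for the WBA heavyweight title."
passages = [
    "Hipp, a member of the Blackfeet Nation, challenged for boxing's WBA heavyweight championship.",
    "The gift shop is open daily from 9 a.m. to 5 p.m.",
    "Many great heavyweight title fights took place in the 1990s.",
]

# Each chunk of text becomes a point in an n-dimensional space, positioned by meaning
claim_vec = model.encode(claim, convert_to_tensor=True)
passage_vecs = model.encode(passages, convert_to_tensor=True)

# Cosine similarity rewards passages close in meaning, not just shared words
scores = util.cos_sim(claim_vec, passage_vecs)[0]
for passage, score in zip(passages, scores):
    print(f"{score.item():.2f}  {passage}")
```

The first passage should score highest even though it shares relatively few exact words with the claim, which is precisely the property a naive text-string comparison lacks.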


Just as impressive as the ability to spot fraudulent citations, however, is the tool’s potential for suggesting better references. Deployed as a production model, it could helpfully suggest the references that best support a given point. While Petroni balks at likening it to a factual spellcheck that flags errors and suggests improvements, that’s an easy way to think about what it might do.

But as Petroni explains, there is still much more work to be done before it reaches this point. “What we have built is a proof of concept,” he said. “It’s not really usable at the moment. In order for this to be usable, you need to have a fresh index that indexes much more data than what we currently have. It needs to be constantly updated, with new information coming every day.”
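That “fresh index” is exactly what vector-search libraries are built for. As a rough sketch of the idea, not a description of Meta’s internal infrastructure, here’s how the open-source FAISS library (also from Meta AI) lets newly embedded passages be appended as they arrive; the dimensions and random data below are stand-ins.

```python
import numpy as np
import faiss  # open-source similarity-search library from Meta AI

dim = 384  # must match the width of the passage embeddings
index = faiss.IndexFlatIP(dim)  # exact inner product (cosine on normalized vectors)

# Stand-in for embeddings of passages chunked from crawled web pages
passages = np.random.rand(10_000, dim).astype("float32")
faiss.normalize_L2(passages)
index.add(passages)

# "Constantly updated": fresh batches are simply appended as new pages come in
fresh_batch = np.random.rand(500, dim).astype("float32")
faiss.normalize_L2(fresh_batch)
index.add(fresh_batch)

# Retrieve the five indexed passages closest in meaning to an embedded claim
claim = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(claim)
scores, ids = index.search(claim, 5)
```

At the scale Petroni describes, the hard part is less the lookup itself than keeping hundreds of millions of such vectors current as new information arrives every day.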

This could, at least in theory, include not just text but multimedia as well. Perhaps there’s a great, authoritative documentary available on YouTube that the system could direct users toward. Maybe the answer to a particular claim is hidden in an image somewhere online.

A question of quality

There are other challenges, too. Notable in its absence, at least at present, is any attempt to independently grade the quality of sources cited. This is a thorny area in itself. As a simple illustration, would a brief, throwaway reference to a subject in, say, the New York Times prove a more suitable, high-quality citation than a more comprehensive, but less-renowned source? Should a mainstream publication rank more highly than a non-mainstream one?

Google’s trillion-dollar PageRank algorithm – certainly the most famous algorithm ever built around citations – had this built into its model by, in essence, equating a high-quality source with one that had a high number of incoming links. At present, Meta’s AI has nothing like this.
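For readers who haven’t met it, PageRank’s core move is to treat each incoming link as a vote whose weight depends on the rank of the page casting it. Here is a toy power-iteration version, included purely to illustrate the signal Meta’s model currently lacks:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy power-iteration PageRank over a {page: [pages it links to]} graph."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: share its rank evenly with everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# The page with the most incoming links ends up with the highest score
graph = {
    "well-cited.example": ["page-a.example", "page-b.example", "page-c.example"],
    "page-a.example": ["well-cited.example"],
    "page-b.example": ["well-cited.example"],
    "page-c.example": ["well-cited.example"],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # -> well-cited.example
```

Real PageRank runs at web scale with many refinements, but the underlying proxy, incoming links as a measure of quality, is the kind of signal the citation-checking model has no equivalent of.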

If this AI were to work as an effective tool, it would need something like that. As a very obvious example of why, imagine that someone set out to “prove” the most egregious, reprehensible opinion for inclusion on a Wikipedia page. If the only evidence needed to confirm that something is true is that similar sentiments can be found published elsewhere online, then virtually any claim could technically be “verified,” no matter how wrong it might be.

“[One area we are interested in] is trying to model explicitly the trustworthiness of a source, the trustworthiness of a domain,” Petroni said. “I think Wikipedia already has a list of domains that are considered trustworthy, and domains that are considered not. But instead of having a fixed list, it would be nice if we can find a way to promote these algorithmically.”
