
Algorithm outperforms humans at spotting fake news

An artificial intelligence system that can tell the difference between real and fake news, often with better success rates than its human counterparts, has been developed by researchers at the University of Michigan. Such a system may help social media platforms, search engines, and news aggregators filter out articles meant to misinform.

“As anyone else, we have been disturbed by the negative effect that fake news can have in major political events [and] daily life,” Rada Mihalcea, a UM computer science professor who developed the system, told Digital Trends. “My group has done a significant amount of work on deception detection for nearly ten years. We saw an opportunity to address a major societal problem through the expertise we accumulated over the years.”

Mihalcea and her team developed a linguistic algorithm that analyzes written speech and looks for cues such as grammatical structure, punctuation, and complexity, which may offer telltale signs of fake news. Since many of today’s news aggregators and social media sites rely on human editors to spot misinformation, assistance from an automated system could help streamline the process.

To train their system, the researchers represented linguistic features like punctuation and word choice as data, then fed that data into an algorithm.
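To illustrate the idea, here is a minimal sketch of that representation step. The feature names and choices below are hypothetical examples, not the researchers' actual feature set, which the paper describes in far more detail.

```python
import re
from collections import Counter

def extract_features(text):
    """Turn raw text into numeric linguistic features.

    A toy illustration of representing punctuation, word choice, and
    complexity as data a learning algorithm could consume; the actual
    system's features are assumptions here, not documented ones.
    """
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    return {
        "num_words": len(words),
        # Rough proxies for complexity:
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "words_per_sentence": len(words) / max(len(sentences), 1),
        # Punctuation cues:
        "exclamations": text.count("!"),
        "commas": text.count(","),
        # Word-choice cue (first-person pronouns):
        "first_person": counts["i"] + counts["we"] + counts["my"],
    }
```

A vector of such values per article could then be fed to any standard classifier; which learner the team actually used is not specified in this piece.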

“Interestingly, what algorithms look for is not always intuitive for people to look for,” Mihalcea said. “In this and other research we have done on deception, we have found for instance that the use of the word ‘I’ is associated with truth. It is easy for an algorithm to count the number of times ‘I’ is said, and find the difference. People however do not do such counting naturally, and while it may be easy, it would distract them from the actual understanding of the text.”
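The counting Mihalcea describes is trivial for a machine. As a sketch, assuming a simple per-100-words rate (a toy metric, not the one from the paper):

```python
import re

def i_rate(text):
    """Occurrences of the pronoun 'I' per 100 words.

    A toy illustration of the cue Mihalcea describes: an algorithm can
    tally this effortlessly, while a human reader normally would not.
    """
    words = re.findall(r"[A-Za-z']+", text.lower())
    return 100 * words.count("i") / max(len(words), 1)
```

Comparing this rate across known-true and known-false training articles is the kind of statistical signal such a system can exploit.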

The system demonstrated a 76-percent success rate at spotting fake news articles, compared to around 70 percent for humans. Mihalcea envisions such a system helping both news aggregators and end users distinguish between true and intentionally false stories.

The system can’t fully replace humans, however. For one, it doesn’t fact-check, so well-meaning (but ultimately false) content will still slip through.

The researchers presented a paper detailing the system at the International Conference on Computational Linguistics in Santa Fe, New Mexico, on August 24.

Dyllan Furness
Dyllan Furness is a freelance writer from Florida. He covers strange science and emerging tech for Digital Trends, focusing…