A new A.I. can guess your personality type based on your eye movements

“The eyes … they never lie,” said noted philosopher Tony Montana in the gangster movie Scarface. While Montana chose to go down the drug-dealing and murdering route, had he been born 30 years later he could probably have had a promising career as a computer interface designer. At least, that’s the message we’re choosing to take away from a new project created by researchers in Australia and Germany. They developed an artificial intelligence that is able to predict a person’s personality type by tracking their eye movements.

“Several previous works suggested that the way in which we move our eyes is modulated by who we are — by our personality,” Andreas Bulling, a professor from Germany’s Max Planck Institute for Informatics, told Digital Trends. “For example, studies reporting relationships between personality traits and eye movements suggest that people with similar traits tend to move their eyes in similar ways. Optimists, for example, spend less time inspecting negative emotional stimuli — [such as] skin cancer images — than pessimists. Individuals high in openness spend a longer time fixating and dwelling on locations when watching abstract animations.”

These insights are interesting, but the challenge for the researchers was figuring out how to turn such observations into a working system. To do so, they turned to deep learning for help.

The researchers asked 42 students to wear an off-the-shelf head-mounted eye tracker as they ran errands. They also had the students’ personality types tested using established self-report questionnaires. With both the input (the eye data) and output (personality types) gathered, the A.I. was then able to work out the correlating factors linking the two.
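The article doesn’t spell out the exact pipeline, but the general recipe is a standard supervised-learning setup: compute aggregate eye-movement statistics per participant, pair them with questionnaire-derived trait labels, and train a classifier to find the correlations. The sketch below is illustrative only, assuming hand-picked features and a generic off-the-shelf classifier rather than the study’s actual method; the feature names and placeholder data are not from the paper.

```python
# Minimal sketch: predicting one personality trait from eye-movement features.
# Placeholder data and feature choices are illustrative, not the study's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row holds aggregate eye-movement statistics for one participant
# (e.g., mean fixation duration, saccade amplitude, blink rate, pupil diameter).
rng = np.random.default_rng(0)
X = rng.normal(size=(42, 4))           # 42 participants, 4 example features
y = rng.integers(0, 2, size=42)        # e.g., low (0) vs. high (1) neuroticism from questionnaires

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # cross-validation gauges how reliably the trait is predicted
print(f"Mean accuracy: {scores.mean():.2f}")
```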

“We found that we were able to reliably predict four of the big five personality traits — neuroticism, extraversion, agreeableness, conscientiousness — as well as perceptual curiosity only from eye movements,” Bulling continued.

While there are definitely potential ethical dilemmas involved (imagine what companies like the now-defunct Cambridge Analytica might have been able to do with this information), Bulling noted that there are plenty of positive applications, too.

“Robots and computers are currently socially ignorant and don’t adapt to the person’s non-verbal signals,” Bulling said. “When we talk, we see and react if the other person looks confused, angry, disinterested, distracted, and so on. Interactions with robots and computers will become more natural and efficacious if they were to adapt their interactions based on a person’s non-verbal signals.”

A paper describing the work was recently published in the journal Frontiers in Human Neuroscience.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…