
Robots can peer pressure kids, but don’t think for a second that we’re immune

University of Plymouth

To slightly modify the title of a well-known TV show: kids do the darndest things. Recently, researchers from Germany and the U.K. carried out a study, published in the journal Science Robotics, demonstrating the extent to which kids are susceptible to robot peer pressure. The TL;DR version? The answer to that old parental question, “If all your friends told you to jump off a cliff, would you?” may well be “Sure. If all my friends were robots.”

The test reenacted a famous 1951 experiment pioneered by the Polish-American psychologist Solomon Asch. The experiment demonstrated how people can be influenced by the pressures of groupthink, even when this flies in the face of information they know to be correct. In Asch’s experiments, a group of college students was gathered together and shown two cards. The card on the left displayed an image of a single vertical line. The card on the right displayed three lines of varying lengths. The experimenter then asked the participants which line on the right card matched the length of the line shown on the left card.

“The special thing about that age range of kids is that they’re still at an age where they’ll suspend disbelief.”

So far, so straightforward. Where things got more devious, however, was in the makeup of the group. Only one person in the group was a genuine participant; the others were all actors who had been told what to say ahead of time. The experiment tested whether the real participant would go along with the rest of the group when it unanimously gave the wrong answer. As it turned out, most would. Under peer pressure, most people will deny information they know to be correct in order to conform with the group.

In the 2018 remix of the experiment, the same principle was used, only this time, instead of a group of college-age peers, the “real participant” was a child aged seven to nine. The “actors” were played by three robots programmed to give the wrong answer. In a sample of 43 volunteers, 74 percent of the kids gave the same incorrect answer as the robots. The results suggest that most kids of this age will treat pressure from robots much as they would peer pressure from their flesh-and-blood peers.

In the experiment, participants were presented with a group of lines and asked to pick the one with the greatest length. The robotic participants would then unanimously give an incorrect answer in an attempt to influence the answer of the human participant. Anna-Lisa Vollmer, Robin Read, Dries Trippas, and Tony Belpaeme

“The special thing about that age range of kids is that they’re still at an age where they’ll suspend disbelief,” Tony Belpaeme, Professor in Intelligent and Autonomous Control Systems, who helped carry out the study, told Digital Trends. “They will play with toys and still believe that their action figures or dolls are real; they’ll still look at a puppet show and really believe what’s happening; they may still believe in [Santa Claus]. It’s the same thing when they look at a robot: they don’t see electronics and plastic, but rather a social character.”

Interestingly, the experiment contrasted this with the response from adults. Unlike the kids, adults weren’t swayed by the robots’ errors. “When an adult saw the robot giving the wrong answer, they gave it a puzzled look and then gave the correct answer,” Belpaeme continued.

So nothing to worry about, then? As long as we stop children from getting their hands on robots programmed to give bad answers, everything should be fine, right? Not so fast.

Are adults really so much smarter?

As Belpaeme acknowledged, this task was designed to be so simple that there was no uncertainty as to what the answer might be. The real world is different. When we think about the kinds of jobs readily handed over to machines, these are frequently tasks that we are not, as humans, always able to perform perfectly.

This task was designed to be so simple that there was no uncertainty as to what the answer might be.

It could be that the task is incredibly simple, but that the machine can perform it significantly faster than we can. Or it could be a more complex task, in which the computer has access to far greater amounts of data than we do. Depending on the potential impact of the job at hand, it is no surprise that many of us would be unhappy about correcting a machine.

Would a nurse in a hospital be happy to overrule an FDA-approved algorithm that helps prioritize patient care by monitoring vital signs and sending alerts to medical staff? Would a driver be comfortable taking the wheel from a driverless car to handle a particularly complex road scenario? What about a pilot overriding the autopilot because they believe it is making the wrong decision? In all of these cases, we would like to think the answer is “yes.” For all sorts of reasons, though, that may not be the reality.

Nicholas Carr writes about this in his 2014 book The Glass Cage: Where Automation is Taking Us. The way he describes it underlines the kind of ambiguity that real life cases of automation involve, where the problems are far more complex than the length of a line on a card, the machines are much smarter, and the outcome is potentially more crucial.

Nicholas Carr, a Pulitzer Prize finalist, is best known for his books “The Shallows: What the Internet is Doing to Our Brains” and “The Glass Cage.”

“How do you measure the expense of an erosion of effort and engagement, or a waning of agency and autonomy, or a subtle deterioration of skill? You can’t,” he writes. “These are the kinds of shadowy, intangible things that we rarely appreciate until after they’re gone, and even then we may have trouble expressing the losses in concrete terms.”

“These are the kinds of shadowy, intangible things that we rarely appreciate until after they’re gone.”

Social robots of the sort Belpaeme theorizes about in the research paper are not yet mainstream, but there are already illustrations of some of these conundrums in action. For example, Carr opens his book with mention of a Federal Aviation Administration memo urging pilots to spend less time flying on autopilot because of the risks overreliance posed. The recommendation was based on analysis of crash data, which showed that pilots frequently lean too heavily on computerized systems.

A similar case involved a lawsuit in which a woman named Lauren Rosenberg sued Google after its walking directions advised her to follow a route that led into dangerous traffic. Although the case was thrown out of court, it shows that people will override their own common sense in the belief that machines are more intelligent than we are.

For every ship there’s a shipwreck

Ultimately, as Belpaeme acknowledges, the issue is that sometimes we want to hand over decision making to machines. Robots promise to do the jobs that are dull, dirty, and dangerous — and if we have to second-guess every decision, they’re not really the labor-saving devices that have been promised. If we’re going to eventually invite robots into our home, we will want them to be able to act autonomously, and that’s going to involve a certain level of trust.

“Robots exerting social pressure on you can be a good thing; it doesn’t have to be sinister,” Belpaeme continued. “If you have robots used in healthcare or education, you want them to be able to influence you. For example, if you want to lose weight you could be given a weight loss robot for two months which monitors your calorie intake and encourages you to take more exercise. You want a robot like that to be persuasive and influence you. But any technology which can be used for good can also be used for evil.”

What’s the answer? Questions like this will have to be debated on a case-by-case basis. If the bad ultimately outweighs the good, technology like social robots will never take off. But it’s important that we take the right lessons from studies like this one on robot-induced peer pressure. And the lesson is not that we’re so much smarter than kids.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…