
Zoom’s A.I. tech to detect emotion during calls upsets critics

Zoom has begun developing A.I. technology that can reportedly scan users’ faces and speech to determine their emotions, as first reported by Protocol.

While the technology still appears to be in the early phases of development and implementation, several human rights groups warn that it could be put to discriminatory uses down the line, and they are urging Zoom to abandon the practice.

[Image: A woman on a Zoom call. Credit: Zoom]

Currently, Zoom has detailed plans to use the A.I. technology in a sales and training context. In a blog post shared last month, the company explained how its Zoom IQ concept helps salespeople gauge the emotions of the people they are on a call with so they can improve their pitches.

The blog notes that Zoom IQ tracks metrics such as talk-listen ratio, talking speed, monologue, patience, engaging questions, next steps set up, and sentiment and engagement.

Zoom also noted on its blog that the data it collects is “for informational purposes and may contain inaccuracies.”

“Results are not intended to be used for employment decisions or other comparable decisions. All recommended ranges for metrics are based on publicly available research,” the company added.

Nevertheless, more than 25 rights groups sent a joint letter to Zoom CEO Eric Yuan on Wednesday, urging the company to halt further research into emotion-based artificial intelligence, which they argue could have harmful consequences for disadvantaged groups. The signatories include Access Now, the American Civil Liberties Union (ACLU), and the Muslim Justice League.

Esha Bhandari, deputy director of the ACLU’s Speech, Privacy, and Technology Project, told the Thomson Reuters Foundation that emotion A.I. is “a junk science” and “creepy technology.”

Beyond the caveats in its April blog post, Zoom has yet to respond to the criticism, which began surfacing as early as last week.

We’ve recently seen brands such as DuckDuckGo stand up to Google in the name of privacy. After claiming to do away with invasive cookies in its web browser, Google has essentially replaced them with technology that can similarly track and collect user data.
