
We’re not ‘going dark’: Harvard study refutes FBI’s view on encryption

A new study from Harvard has challenged the FBI’s claims that the use of encryption, or “going dark,” will inhibit law enforcement when investigating crimes or terrorism.

The report, Don’t Panic. Making Progress on the “Going Dark” Debate, published by Harvard’s Berkman Center, claims that officials’ anti-encryption stances are overblown and that the glut of new Internet-connected devices hitting the market gives law enforcement more avenues for carrying out investigations.

“Appliances and products ranging from televisions and toasters to bed sheets, light bulbs, cameras, toothbrushes, door locks, cars, watches and other wearables are being packed with sensors and wireless connectivity,” said the report, which goes on to name-check the many major companies making smart home products, TVs, and wearable devices that could be utilized by law enforcement.

“These devices will all be connected to each other via the Internet, transmitting telemetry data to their respective vendors in the cloud for processing,” the authors said.

“The audio and video sensors on IoT devices will open up numerous avenues for government actors to demand access to real-time and recorded communications.”

Authors of the report include renowned cryptographer Bruce Schneier and Jonathan Zittrain from Harvard Law School. Their report says the FBI is largely ignoring this wave of new Internet-connected devices by focusing on encrypted communications. They argue that the Internet will still be populated by mostly unencrypted traffic.

The encryption or “going dark” debate centers on the argument that law enforcement needs a means to access communications in order to pursue criminal investigations. Encryption put in place for users by device manufacturers or service providers would make this practically impossible, with or without a warrant.

Recently, NSA Director Admiral Mike Rogers questioned the argument put forward by the FBI and its chief, James Comey, over the need to build backdoors into encrypted communications.

However, the authors of the Harvard report do concede that technology has “to some extent” made investigations more difficult, though not impossible.

“[We] question whether the ‘going dark’ metaphor accurately describes the state of affairs. Are we really headed to a future in which our ability to effectively surveil criminals and bad actors is impossible? We think not.”

Jonathan Keane
Former Digital Trends Contributor