The NSA promises to delete its phone metadata early next year

The National Security Agency will lose the power to keep and monitor “historical metadata” on November 29, and the data will be permanently deleted shortly afterwards.

Following the passage of the USA Freedom Act earlier this year, the agency can no longer collect phone metadata in bulk. It must now dispose of the data it previously collected and stored under the old laws, according to the Office of the Director of National Intelligence.

The data will not be deleted immediately, as the agency was given a six-month grace period to phase in a new program. That period ends on November 29, after which authorities can neither collect new phone metadata nor access the old records. The data will then be retained for three months before being wiped.

“(S)olely for data integrity purposes to verify the records produced under the new targeted production authorized by the USA FREEDOM Act, NSA will allow technical personnel to continue to have access to the historical metadata for an additional three months,” said the ODNI.

Not all data will be deleted, though: any information currently being used in litigation will be kept for as long as it is needed, then deleted “as soon as possible” once legal proceedings have concluded.

The new laws won’t necessarily bring an end to mass surveillance but will remove the NSA’s ability to consult historical data in investigations.

The collection of phone metadata from millions of Americans drew the most controversy following the Edward Snowden leaks in 2013. The practice was found to be illegal earlier in 2015 by a federal court.

Jonathan Keane
Former Digital Trends Contributor
Jonathan is a freelance technology journalist living in Dublin, Ireland. He's previously written for publications and sites…