
Federal judge voices concerns over FBI and NSA surveillance

A federal judge has voiced concerns about a string of instances in which the Federal Bureau of Investigation and the National Security Agency overstepped the approved limits of their surveillance activities. Foreign Intelligence Surveillance Court Judge Thomas Hogan stated that he was “extremely concerned” about the agencies’ behavior.

The criticisms stem from surveillance data being retained far longer than permitted, according to a report from Politico. The materials were meant to be wiped after either two or five years, but apparently remained accessible for four years after they should have been purged.

Hogan wrote that it was “perhaps more disappointing” that the government had failed to inform the court that this information was being retained. The Office of the Director of National Intelligence responded that there was no intent to mislead, but acknowledged that the situation could have been explained more clearly.

The NSA claimed that retaining some of the data was necessary to prevent future situations in which data might be collected without legal authority. However, that argument ignores a separate court order that officials are required to comply with, as well as the fact that not all of the information in question was related to that scenario.

Hogan made these comments in November 2015, but they were only released to the public this week. At a hearing last October, FBI representatives detailed plans to tighten up their practices, which Hogan found satisfactory, although the judge did confirm that he would be checking in on their progress at a later date.

It is becoming increasingly clear that data privacy will be a major issue shaping public discourse over the next few years. Traditionally, these debates have centered on protecting information from outside attackers, but cases like this demonstrate the need to hold our own government agencies to exacting standards.

Brad Jones
Former Digital Trends Contributor
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called “Skeleton Key.” Using this prompt injection method, malicious users can effectively bypass a chatbot’s safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
