The FBI is collecting its own library of deadly malware

A recent posting on the Federal Business Opportunities website suggests that the FBI is looking to build up its own collection of malware to better understand the security threat presented by these tools. Security vendors are invited to submit examples of malicious software that could cause problems for users and institutions in the wild.

According to documents attached to the listing, the malware will be used by the Investigative Analysis Unit (IAU): “The collection of malware from multiple industries, law enforcement and research sources is critical to the success of the IAU’s mission to obtain global awareness of the malware threat. The collection of this malware allows the IAU to provide actionable intelligence to the investigator in both criminal and intelligence matters.”

The FBI is asking for malware including executable files, Office documents, digital media files, and exploits coded to work through a Web browser. “The stated requirements are not intended to limit the offeror’s initiative and ingenuity,” the official documents state, so if you want to try to surprise the Federal Bureau of Investigation, take your best shot.
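The listing doesn't prescribe a submission format, but security vendors who share samples commonly inventory them by cryptographic hash first, which lets the recipient de-duplicate against an existing collection. As a purely illustrative sketch, here's how a vendor might catalogue a sample set before submission; the file categories mirror the listing, but everything else, including the extension mapping and the directory layout, is assumed for the example:

```python
import hashlib
import json
from pathlib import Path

# Hypothetical mapping of file extensions to the categories named in the
# FBI listing; the listing itself does not prescribe extensions.
CATEGORIES = {
    ".exe": "executable", ".dll": "executable",
    ".doc": "office_document", ".docx": "office_document",
    ".xls": "office_document",
    ".mp3": "digital_media", ".mp4": "digital_media", ".jpg": "digital_media",
    ".html": "browser_exploit", ".js": "browser_exploit",
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(sample_dir: Path) -> list[dict]:
    """Walk a directory of contained samples and record name, hash,
    size, and category for each file."""
    manifest = []
    for path in sorted(sample_dir.rglob("*")):
        if path.is_file():
            manifest.append({
                "file": path.name,
                "sha256": sha256_of(path),
                "bytes": path.stat().st_size,
                "category": CATEGORIES.get(path.suffix.lower(), "other"),
            })
    return manifest

if __name__ == "__main__":
    # "samples/" is a placeholder directory of already-contained files.
    print(json.dumps(build_manifest(Path("samples")), indent=2))
```

Hashing each file up front also means the manifest can be exchanged and checked on its own, without the live samples ever leaving a contained environment.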

Of course, government agencies have snooping tools of their own that they can use, while simultaneously trying to protect themselves and others from outside malware threats as they appear on the Web. Any interested parties have until Feb. 14 to get in touch through the Federal Business Opportunities site and strike a deal with the FBI.

David Nield