Microsoft wins partial victory, can pursue lawsuit against U.S. government

In April, Microsoft sued the U.S. government over the right to let its customers know when authorities request access to and search their email accounts. According to Microsoft, the government gag orders that prevent the company from keeping customers informed infringe on constitutional rights and should be prohibited.

A little less than a year later, Microsoft has won a small victory: a federal court has ruled that the company can continue its lawsuit. In a recent ruling, U.S. District Judge James Robart allowed Microsoft's First Amendment claims to proceed but dismissed the company's claim to Fourth Amendment protection, ZDNet reports.

According to Microsoft, it originally brought the case “… because its customers have a right to know when the government obtains a warrant to read their emails, and because Microsoft has a right to tell them.” In the 18 months prior to filing its lawsuit, Microsoft notes, it received 5,624 warrants for access to customer information under the Electronic Communications Privacy Act, 2,576 of which included a gag order preventing Microsoft from saying anything about them.

Robart’s ruling denied part of the government’s motion to dismiss Microsoft’s case, finding that Microsoft has grounds to proceed with its claim that the gag orders violate its First Amendment right to free speech. However, Robart granted the government’s motion to dismiss Microsoft’s Fourth Amendment claims, stating that his court does not have the authority to overturn precedent holding that a Fourth Amendment lawsuit cannot be brought on behalf of a third party.

In response to the ruling, Microsoft’s chief legal officer Brad Smith said, “We’re pleased this ruling enables our case to move forward to a reasonable solution that works for law enforcement and ensures secrecy is used only when necessary.” In other words, while not a complete victory, the ruling allows Microsoft’s lawsuit to proceed on at least part of its original assertions.
