

Microsoft responds to hack of Cortana and Bing source code

A hacking group has hit Microsoft, breaching Azure DevOps source code repositories and leaking source code for Cortana and several other Microsoft projects. It is the latest in a string of attacks by the group calling itself “LAPSUS$,” which has also successfully targeted Nvidia, Ubisoft, and other technology giants.

The group’s latest leak, shared on March 22, is a 9GB archive containing source code for some 250 Microsoft projects. Of those, the group claims to have 90% of the source code for Bing and roughly 45% of the source code for Bing Maps and Cortana. That is only a portion of the stolen data, with the full archive holding 37GB of Microsoft source code.


The source code for Windows and Office is not included in the leak, according to Bleeping Computer, which believes the leaked files are genuine. The files are instead tied to mobile apps and websites, and they contain emails and other documents used internally by the Microsoft engineers who worked on those projects.

Microsoft confirmed the hack in a blog post detailing the actions of the LAPSUS$ group, which it tracks as DEV-0537. In the post, Microsoft said the hackers had “limited access” to source code because a single account had been compromised, and it explained that no customer code or data was involved in the activities.

“Our investigation has found a single account had been compromised, granting limited access. Our cybersecurity response teams quickly engaged to remediate the compromised account and prevent further activity,” said Microsoft.

The company also noted that it does not rely on the secrecy of code as a security measure and that viewing source code does not lead to an elevation of risk. This echoes what Microsoft explained during the Solarigate investigation, when a compromised account was used to view source code but did not have permission to modify engineering systems.

“Our team was already investigating the compromised account based on threat intelligence when the actor publicly disclosed their intrusion. This public disclosure escalated our action, allowing our team to intervene and interrupt the actor mid-operation, limiting broader impact,” explained Microsoft.

As dangerous as this sounds, the hacking group LAPSUS$ isn’t typical. The group is more interested in holding stolen source code for ransom to turn a profit, because source code repositories can also contain API keys and code signing certificates. LAPSUS$ did this with Nvidia when it stole DLSS code and demanded that the GPU maker “completely open-source (and distribute under a FOSS license) [its] GPU drivers.”
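To see why a leaked repository can be worth ransoming beyond the code itself, consider the kind of secret scan an attacker or auditor might run over a source tree. The following is a minimal, hypothetical Python sketch; the regex patterns, the scan_tree helper, and the ./leaked-archive path are illustrative assumptions, not details from the actual leak.

```python
import re
from pathlib import Path

# Illustrative patterns only -- far from an exhaustive secret scanner.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]", re.I
    ),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_tree(root: str) -> list[tuple[str, str]]:
    """Walk a source tree and report files matching any secret pattern."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), label))
    return hits

if __name__ == "__main__":
    # Hypothetical path for illustration.
    for file, label in scan_tree("./leaked-archive"):
        print(f"{label}: {file}")
```

Dedicated tools such as truffleHog and gitleaks perform this kind of scan far more thoroughly, including digging through git history for secrets that were committed and later removed.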

Article updated on March 23 with Microsoft’s response to the LAPSUS$ hack.

Arif Bacchus