
Ex-Microsoft employee arrested for leaking Windows 8, passing trade secrets

Windows 8.1

Alex Kibkalo, a former Microsoft software engineer, was arrested in Seattle on Wednesday for allegedly leaking Windows 8 to a blogger prior to its 2012 release, according to The Seattle Post-Intelligencer.

Kibkalo sent a build of Windows 8 and a matching software activation kit to a French tech blogger, who then posted screenshots of the software online.

Microsoft believes Kibkalo, who worked for the company for years in Russia and Lebanon, leaked the software in 2012 in response to a poor performance review. The FBI opened an investigation into the leak in July 2013. Kibkalo was arrested after the blogger he leaked the data to contacted Microsoft to confirm the authenticity of the leak. Upon obtaining the blogger’s email and instant messaging records, investigators found messages in which Kibkalo encouraged the blogger not only to publish the information, but also to distribute a cracked version of the OS.

“I would leak enterprise today probably,” Kibkalo said in an August 2012 message reportedly shown in his arrest report.

“Hmm… Are you sure you want to do that? Lol,” replied the blogger, who explicitly warned Kibkalo that handing over the information was “pretty illegal.” Kibkalo allegedly responded, “I know :)”

Kibkalo has also been connected to earlier pre-release leaks of portions of Windows 7. In the conversation with the French blogger, he allegedly bragged about breaking into a building on Microsoft’s Redmond, WA campus to copy data from a company server.

Kibkalo is expected to appear in U.S. district court next week, facing charges of stealing trade secrets.

Mike Epstein
Former Digital Trends Contributor
Michael is a New York-based tech and culture reporter, and a graduate of Northwestern University’s Medill School of…