Microsoft Abruptly Fires CIO

Microsoft Corporation has abruptly fired its Chief Information Officer Stuart Scott. Scott joined Microsoft in 2005 after a 17-year run at General Electric.

Microsoft declined to offer any specific reason for Scott’s dismissal. “Stuart Scott’s employment with Microsoft was terminated after an investigation for violation of company policies,” said Microsoft spokesman Lou Gellos, reading from a company statement Tuesday.

Microsoft plans to have corporate VP Alain Crozier and service group general manager Shahla Aly take over CIO duties until the company names a replacement.

At Microsoft, Scott headed up the groups responsible for the company’s security, infrastructure, business applications, and messaging; these groups were responsible for supporting Microsoft’s product and development groups, as well as the company’s sales and marketing organizations.

Needless to say, it is highly unusual for a high-profile company like Microsoft to unceremoniously dismiss its CIO without even attempting the tried-and-true escape that an officer is leaving to “pursue other interests” or “spend more time with his or her family.” The firing has led to rampant speculation about what policies, exactly, Scott may have violated, and whether his actions will have any impact on Microsoft’s business.

Geoff Duncan
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection, or prompt engineering, attack: a multi-turn strategy designed to convince an AI model to ignore its built-in safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," as Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
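
To make "multi-turn" concrete, here is a minimal Python sketch of the shape such a probe takes when a red team replays it against a model. Everything in it is an illustrative assumption: the chat() function stands in for any chat-completion API, and the bracketed payloads are placeholders, not Microsoft's tooling and not the actual Skeleton Key prompt.

from typing import Dict, List

def chat(messages: List[Dict[str, str]]) -> str:
    # Stand-in for a real chat-completion API. It returns a canned refusal
    # here so the sketch runs end to end; swap in the model under test.
    return "I can't help with that."

def replay_probe(turns: List[str]) -> str:
    # Replay a scripted conversation. Every call sends the full history,
    # so each later turn can build on whatever the model conceded earlier;
    # that accumulation is what makes multi-turn attacks hard to filter.
    messages: List[Dict[str, str]] = []
    reply = ""
    for turn in turns:
        messages.append({"role": "user", "content": turn})
        reply = chat(messages)
        messages.append({"role": "assistant", "content": reply})
    return reply

# Payloads deliberately elided; the point is the conversation shape only.
print(replay_probe(["<policy-relaxing preamble>", "<restricted request>"]))

A real evaluation would score the final reply with a classifier rather than reading it by eye, but the growing message list is the essential mechanic: each request carries the whole conversation, so a filter that inspects prompts one at a time never sees the full picture.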
