
Sony defends time it took to notify users of data loss

Amid stories of a plan by hackers to infiltrate Sony servers for a third time, Reuters has obtained a letter from the electronics company’s computer entertainment president, Kazuo Hirai. The letter, dated May 5, was sent to Connecticut Senator Richard Blumenthal, who had accused Sony of responding too slowly to last month’s massive data breach, in which millions of users of the PlayStation Network and Qriocity services had their details stolen by hackers.

According to Reuters, Hirai wrote in the letter that the company had notified users of the breach as quickly as possible. Sony had been sending out half a million e-mails an hour to more than 100 million affected users, a pace Blumenthal claimed was too slow.
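For context, the figures cited in the letter imply a lengthy rollout. A rough back-of-the-envelope calculation, using only the rates quoted above, shows why Blumenthal considered the pace slow:

```python
# Back-of-the-envelope check on Sony's stated notification rate,
# using the figures from the article: ~500,000 e-mails per hour
# and more than 100 million affected users.
EMAILS_PER_HOUR = 500_000
AFFECTED_USERS = 100_000_000

hours = AFFECTED_USERS / EMAILS_PER_HOUR
days = hours / 24

print(f"{hours:.0f} hours (~{days:.1f} days) to reach every user")
# → 200 hours (~8.3 days) to reach every user
```

At that rate, notifying the full user base would take more than a week from the first e-mail.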

Defending the company’s actions, Hirai wrote that sending the notifications took time because “these emails are not ‘batch’ emails. The emails are individually tailored to our consumers’ accounts.” The letter also stated that the company had made sure it complied with state laws by notifying customers about the problem through posts on its PlayStation blog.

Upon noticing that there had been suspicious activity on its network on April 19, the electronics giant contacted the FBI three days later. The first blog post relating to the outage and data theft came on April 26.

In the letter, written on the same day that Sony CEO Howard Stringer posted a letter of apology to affected users on the PlayStation blog, Hirai tried to put the incident into perspective. “What happened to us, though more vast in scope, has happened to many others before,” he wrote.

According to Reuters, Blumenthal said on Friday that Sony had taken a “strong first step.” Those PlayStation Network and Qriocity users who haven’t deserted Sony and are waiting anxiously in their comfy armchairs for the return of services shouldn’t have much longer to wait. In his letter on the PlayStation blog, Stringer stated that the service will resume “in the coming days.”

Trevor Mogg
Contributing Editor