
Oracle takes down CSO’s chastising blog post on home-brewed security testing

If the last few years have taught us anything about the nature of digital security, it’s that nothing is airtight. Whether you look to Edward Snowden absconding with the NSA’s secretive files, Sony having its servers’ contents dumped all over the floor or indeed, Hacking Team itself being hacked, it’s clear everyone and everything is vulnerable.

Perhaps that’s why a lot of Oracle customers have been probing its software for flaws, something that the company’s chief security officer, Mary Ann Davidson, isn’t happy with. So much so, in fact, that she penned a sarcastic, chastising blog post over the weekend arguing not only that people were breaking their license agreements by reverse engineering Oracle programs, but that they were wasting their time too.

“I’ve been writing a lot of letters to customers that start with ‘hi, howzit, aloha,’ but end with ‘please comply with your license agreement and stop reverse engineering our code, already,'” she said in the now deleted post (via Ars Technica).

She went on to poke fun at those using automated tools to scan Oracle software for flaws, suggesting not only that those tools’ reports fail, in her view, to identify an actual exploitable vulnerability, but that customers who hire consultants to run them are roping someone else into breaking the license agreement too.

“Oh, and we require customers/consultants to destroy the results of such reverse engineering and confirm they have done so,” she said.

Her reasoning for this attack on customers, who she seems to believe are either misguided or want to catch Oracle out, is that she doesn’t want to send out more sternly worded letters telling people to stop. She also reiterated that third-party tools and analyzers don’t do a good job of looking at Oracle code anyway.

“I do not need you to analyze the code since we already do that.”

Do you think those sending in reports of Oracle bugs are doing it because they want the praise for finding a flaw, as Davidson seems to think, or does this suggest a growing climate of more security-conscious software users?

Jon Martindale