
Samsung keylogger accusations prove false

Samsung shot down claims Thursday that it installs keylogger software on its laptops, according to an official statement by the company. The denial follows an “internal investigation” launched by Samsung on Wednesday after the publication of a report that some of its laptops came loaded with a keylogger.

In an article posted to Network World, security consultant Mohammed Hassan claimed to have found the StarLogger spyware installed on both a factory-sealed Samsung R525 laptop and a brand-new R540 model after performing a series of virus scans. On both devices, the offending software was located in the c:\windows\SL directory, says Hassan.

StarLogger is publicly available commercial spyware that records every keystroke made on a computer and can send that information to a third party without the knowledge of the computer’s user.

When Hassan confronted Samsung with his discovery, a company support representative confirmed that the company knew that the software had been installed on its computers, and said that it was used to “monitor the performance of the machine and to find out how it is being used.”

“In other words,” writes Hassan, “Samsung wanted to gather usage data without obtaining consent from laptop owners.” If true, such an action could cause the technology company to face serious legal consequences.

According to Samsung, however, Mr. Hassan has it all wrong.

“Our findings indicate that the person mentioned in the article used a security program called VIPRE that mistook a folder created by Microsoft’s Live Application for a key logging software, during a virus scan,” says Samsung in an official statement on the matter. “The confusion arose because VIPRE mistook Microsoft’s Live Application multi-language support folder, ‘SL,’ as StarLogger.”


Additional testing by a variety of independent sources, such as ZDNet‘s Adrian Kingsley-Hughes, confirms Samsung’s claim that the keylogger finding is actually a false-positive result by VIPRE.

Andrew Couts
Former Digital Trends Contributor