
So much for the unhackable Mac: Root exploit hits the wild with no fix in sight

There’s a common misconception that Macs aren’t susceptible to any sort of malware or virus, but if past exploits haven’t convinced you otherwise, this news from Malwarebytes might. A recently discovered exploit, named for the environment variable that makes it possible, DYLD_PRINT_TO_FILE, allows attackers to abuse an error-logging feature in Mac OS X to create or overwrite files with root privileges. Once software has root access, it can control every aspect of your system, from installing malicious applications to locking you out entirely.
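The core of the weakness is environment handling: child processes inherit DYLD_PRINT_TO_FILE, and on vulnerable versions of OS X the dynamic linker honored it even for privileged setuid binaries, opening the named file with those elevated privileges. A harmless sketch of the inheritance half of that chain (no setuid binary involved, so nothing here is actually escalated; the path is just an example):

```shell
# The variable is set only for the child process, but the child
# (and any program it launches, including setuid ones) sees it in
# its environment. Patched systems strip DYLD_* variables for
# privileged binaries; vulnerable Yosemite builds did not.
DYLD_PRINT_TO_FILE=/tmp/dyld_demo.log sh -c 'echo "child sees: $DYLD_PRINT_TO_FILE"'
# prints: child sees: /tmp/dyld_demo.log
```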

Fortunately, the version of the exploit seen in the wild is a bit less sinister than that. By modifying the sudoers file (the file that lists which users may run commands with root privileges), the software can erase the evidence of the exploit while retaining root access. From there, it silently uses an app called VSInstaller to install three pieces of adware, VSearch, Genieo, and MacKeeper, then opens a Mac App Store page for a download manager called Shuttle.
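For context, a single appended line in the sudoers file is enough to make that kind of access persistent and passwordless. Something along these lines (a hypothetical illustration of sudoers syntax, not the exact line this malware writes):

```
# A line like this in /etc/sudoers lets every member of the admin
# group run any command as root without entering a password:
%admin ALL=(ALL) NOPASSWD: ALL
```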

Security researcher Stefan Esser and another researcher made the exploit known, privately to Apple and then publicly, weeks ago, but so far Apple has given no indication that a fix is on the way. Some users have reported that the exploit no longer works in the El Capitan beta, though that appears to have more to do with revamped file permissions and a change to the error-logging software than with a deliberate patch.

For now, if you want to ensure you’re protected from the DYLD_PRINT_TO_FILE exploit, your only option is to install Esser’s SUIDGuard and trust that his software does what it claims. As always, your best line of defense is to run antivirus software on your Mac and to download files and software only from trusted sources like Apple.

Brad Bourque
Former Digital Trends Contributor
Brad Bourque is a native Portlander, devout nerd, and craft beer enthusiast. He studied creative writing at Willamette…