
Hackers are using stolen Nvidia certificates to hide malware

Nvidia code-signing certificates extracted in the recent hack of the chip maker are being used to sign malware, according to security researchers.

Hacking group LAPSUS$ recently claimed to have stolen 1TB of data from Nvidia. Now, sensitive information has appeared online in the form of two code-signing certificates that are used by Nvidia developers to sign their drivers.


As reported by BleepingComputer, the compromised signing certificates expired in 2014 and 2018, respectively. However, Windows still allows drivers signed with them to load. As a result, malware signed with these certificates appears trustworthy, letting harmful drivers load on a Windows PC without being detected.
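Why would Windows accept an expired certificate at all? The widely reported reason is that Windows' kernel-mode driver signing policy grandfathers certificates issued before a mid-2015 cutoff, so their expiry date is effectively not enforced. A minimal, illustrative sketch of that rule, assuming the reported cutoff (this is a simplification, not Windows' actual implementation):

```python
from datetime import datetime, timezone

# Assumed cutoff from Microsoft's reported driver-signing policy:
# certificates issued before this date are grandfathered in.
CUTOFF = datetime(2015, 7, 29, tzinfo=timezone.utc)

def driver_signature_accepted(cert_issued, cert_expires, now):
    """Illustrative rule: a certificate issued before the cutoff is
    still accepted for driver signing, even past its expiry date."""
    if cert_issued < CUTOFF:
        return True  # grandfathered: expiry is not checked
    return cert_issued <= now <= cert_expires

# One of the leaked certificates reportedly expired back in 2014
# (issue date here is a hypothetical placeholder):
issued = datetime(2011, 1, 1, tzinfo=timezone.utc)
expires = datetime(2014, 1, 1, tzinfo=timezone.utc)
now = datetime(2022, 3, 5, tzinfo=timezone.utc)

print(driver_signature_accepted(issued, expires, now))  # True
```

Under this rule, the leaked certificates remain usable for driver signing indefinitely, which is exactly what makes their theft valuable to attackers.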


Several malware samples signed with the aforementioned Nvidia certificates were discovered on VirusTotal, a malware scanning service. The uploaded samples show the certificates being used to sign hacking tools and malware, including Cobalt Strike Beacon, Mimikatz, backdoors, and remote access trojans.

One individual used one of the certificates to sign a Quasar remote access trojan. In another case, a Windows driver signed with one of the certificates was flagged as malicious by 26 security vendors at the time of writing.

BleepingComputer notes that some of these files were likely uploaded to VirusTotal by security researchers. There is also evidence, however, that other files checked by the service were uploaded by hackers looking to spread malware; one such file was flagged as malicious by 54 security vendors.

Once threat actors work out how to sign their code with these stolen certificates, they can create programs that appear to be official Nvidia applications. When opened, these programs can then load malicious drivers onto a Windows system.

David Weston, director of enterprise and OS security at Microsoft, commented on the situation on Twitter. He stated that an admin can configure Windows Defender Application Control (WDAC) policies to control which Nvidia drivers are allowed to load on a system. However, as BleepingComputer points out, the average Windows user is unlikely to be familiar with configuring WDAC.
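WDAC policies themselves are written as XML, but the blocking approach Weston describes boils down to a deny list keyed on certificate identity, such as a serial number. A hedged Python sketch of that logic (the serial numbers below are hypothetical placeholders, not the actual leaked Nvidia certificate serials):

```python
# Illustrative sketch of a deny-by-certificate check, loosely modeled
# on what a WDAC blocklist accomplishes. Serials are hypothetical
# placeholders, NOT the real leaked Nvidia certificate serials.
BLOCKED_CERT_SERIALS = {
    "00AA11BB22CC33DD",  # placeholder for leaked certificate #1
    "44EE55FF66AA77BB",  # placeholder for leaked certificate #2
}

def driver_load_allowed(signer_serial: str) -> bool:
    """Deny any driver whose signing certificate is on the blocklist."""
    return signer_serial.upper() not in BLOCKED_CERT_SERIALS

print(driver_load_allowed("00aa11bb22cc33dd"))  # False: cert is blocked
print(driver_load_allowed("1234567890ABCDEF"))  # True: not on the list
```

The catch, as the article notes, is that maintaining such a policy requires an administrator who knows the offending certificates and how to deploy WDAC, which puts it out of reach for most home users.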

So what does this all actually mean for Windows users? In short, malware authors can target people with malicious drivers that are hard to detect, typically spread through fake driver download websites surfaced by Google searches. With that in mind, don't download drivers from suspicious or untrustworthy websites; get them directly from Nvidia's official site instead. Microsoft, meanwhile, is likely working on revoking the certificates in question.

Elsewhere, LAPSUS$ is expected to release a 250GB hardware folder it obtained from the Nvidia hack. It initially threatened to make it available last Friday should Nvidia fail to make its GPU drivers completely open-source “from now on and forever.” The group has already leaked Team Green’s proprietary DLSS code, while it also claims to have stolen the algorithm behind Nvidia’s crypto-mining limiter.

Zak Islam
Former Digital Trends Contributor