
Password-hacker tool KeeFarce can lift passwords from KeePass

A new tool has been developed that can decrypt and extract passwords from the password manager KeePass, highlighting that no password manager can be perfectly secure.

Using a password manager may be a convenient way to manage your online security, but it isn't much use if your computer is already compromised.

The tool, KeeFarce, needs to run on a computer that a hacker or pentester already has access to or control of. When KeeFarce runs on such a computer while the user has the KeePass database unlocked, the attacker can decrypt the database and write its contents to a file that they can then retrieve.

The key takeaway here is that the computer in question must already be compromised in order for KeeFarce to work. If the operating system has been compromised, it’s “game over,” said the creator of KeeFarce.

KeePass itself has warned users about attacks and spyware like this. It uses what is called process memory protection to encrypt the master password while it sits in the computer’s memory, which can help prevent attacks such as these.
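On Windows, this kind of protection is typically built on DPAPI. The following is a minimal C sketch of the general idea, using CryptProtectMemory and CryptUnprotectMemory; it is illustrative only, with a placeholder secret and buffer size, and is not KeePass’s actual code.

/* Sketch: encrypt a secret in place so a casual memory scan sees only
   ciphertext, then decrypt it briefly when the plaintext is needed. */
#include <windows.h>
#include <dpapi.h>
#include <stdio.h>

#pragma comment(lib, "crypt32.lib")

int main(void)
{
    /* Buffer length must be a multiple of CRYPTPROTECTMEMORY_BLOCK_SIZE (16 bytes). */
    char secret[CRYPTPROTECTMEMORY_BLOCK_SIZE * 2] = "hunter2"; /* placeholder secret */

    /* Encrypt in place; only this process can decrypt the result. */
    if (!CryptProtectMemory(secret, sizeof(secret), CRYPTPROTECTMEMORY_SAME_PROCESS)) {
        fprintf(stderr, "CryptProtectMemory failed: %lu\n", GetLastError());
        return 1;
    }

    /* ... the secret stays encrypted while held in memory ... */

    /* Decrypt only for the brief moment the plaintext is needed. */
    if (!CryptUnprotectMemory(secret, sizeof(secret), CRYPTPROTECTMEMORY_SAME_PROCESS)) {
        fprintf(stderr, "CryptUnprotectMemory failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Recovered secret: %s\n", secret);

    /* Wipe the plaintext once it is no longer needed. */
    SecureZeroMemory(secret, sizeof(secret));
    return 0;
}

The limitation is exactly the one the article describes: code that already runs inside, or can inject into, the same process (as KeeFarce does) can simply have the process decrypt the data for it. Process memory protection raises the bar against casual memory scraping; it cannot save a machine that is already owned.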

While this tool targets KeePass specifically, the technique is not unique to that password manager. Anyone with the know-how could develop a similar tool that takes advantage of a compromised computer to extract another password manager’s data.

Password managers are popular and useful, but like any other program they are never 100 percent secure, and if one does fail, it opens a gaping hole into all of your passwords.

Jonathan Keane
Former Digital Trends Contributor
Jonathan is a freelance technology journalist living in Dublin, Ireland. He's previously written for publications and sites…