
Hackers use Britney Spears’ Instagram to hide instructions for malware attack

Hacking groups are always working on new ways to perpetrate attacks, and now there's evidence that a Russian outfit known as Turla has devised a method of using Instagram to carry out its activities. Earlier this week, a report was published suggesting that Britney Spears' account on the photo-sharing service was used as a staging area for a Trojan attack.

The information published by antivirus developer ESET centers on a Firefox browser extension, according to a report from Ars Technica. The extension purported to offer enhanced security, but in fact gave the hackers a means of seizing control of an infected system.

A bit.ly URL pointed the extension at its command-and-control server, but the address was not actually present in the extension's source code. Instead, it was hidden away in a seemingly random comment on one of Spears' Instagram posts.

The extension would pore over each photo's comments, computing a custom hash value for each string of text. When it found a comment whose hash matched the stipulated value of 183, it ran a regular expression (a sequence of characters that defines a search pattern) on the comment to translate it into the URL.
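This approach is sometimes called a "dead drop resolver": the malware retrieves its server address from an innocuous public page rather than carrying it in its own code. The Python sketch below illustrates the general pattern only; the checksum-style hash, the target value handling, and the regex used to reassemble the URL are hypothetical stand-ins, since ESET has not published the extension's exact logic in this form.

```python
import re

# Illustrative sketch of the comment-scanning technique described above.
# The hash function and regex are hypothetical stand-ins, not the real
# extension's internals.

TARGET_HASH = 183  # the stipulated value the extension reportedly looked for

def custom_hash(text: str) -> int:
    """Hypothetical custom hash: a simple additive checksum of the comment."""
    return sum(ord(ch) for ch in text) % 256

def extract_url(comment: str) -> str:
    """Hypothetical extraction: collect each character marked with '#' or '@'
    and join the pieces into a bit.ly path."""
    marked = re.findall(r"[#@](\w)", comment)
    return "https://bit.ly/" + "".join(marked)

def resolve_c2(comments: list[str]) -> str | None:
    """Scan every comment on the post; only the one whose hash matches the
    stipulated value is treated as carrying the command-and-control address."""
    for comment in comments:
        if custom_hash(comment) == TARGET_HASH:
            return extract_url(comment)
    return None
```

Because the carrier comment reads as ordinary fan chatter and the extension is simply fetching a public Instagram page, the lookup generates no suspicious traffic of its own, which is what makes the channel so difficult to spot or block.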

ESET researchers managed to discover a bit.ly URL hidden in this manner, which linked to a domain that Turla has used in the past.

The URL in question received only a small number of visits around the time the Instagram post was published, which can be interpreted either as a sign that the malware was still being put through its paces, or that the attack was highly targeted.

Firefox's developers are reportedly tweaking the browser so that the current implementation of this attack will no longer work.

Brad Jones
Former Digital Trends Contributor