
This severe TikTok vulnerability gives hackers 70 ways to steal your info

After internal testing, Microsoft discovered an exploit in the Android version of TikTok that could have given attackers access to huge amounts of personal data with a single click.

The vulnerability has already been fixed, and there is no evidence that anyone was affected by the exploit. Attackers could have used the flaw to take over user profiles, allowing them to make private videos public, send messages, and even upload videos from the compromised account.

The exploit took advantage of the way TikTok's Android app handles WebView content, bypassing the app's deep link verification. When a user tapped a specially crafted link, the attacker-controlled URL could be loaded in the app's WebView and call JavaScript bridges that granted the attacker functionality on the account. JavaScript bridges continue to pose a security risk across a variety of apps, and Microsoft, in a blog post, emphasized how "… collaboration within the security community is necessary to improve defenses for the overall digital ecosystem."
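Microsoft's blog post has the full technical breakdown, but the general bug class is easy to picture. The sketch below is a hypothetical, simplified Android example (not TikTok's actual code, and all class, method, and URL names are invented): a deep link hands a URL to a WebView that exposes a JavaScript bridge, and the only thing standing between an attacker's page and the bridge's methods is whether the app verifies the URL's host first.

```kotlin
import android.annotation.SuppressLint
import android.app.Activity
import android.net.Uri
import android.os.Bundle
import android.webkit.JavascriptInterface
import android.webkit.WebView

// Hypothetical illustration of the bug class Microsoft describes:
// a deep link whose URL ends up in a WebView that exposes a
// JavaScript bridge to native app functionality.
class DeepLinkActivity : Activity() {

    // Methods annotated with @JavascriptInterface are callable from any
    // page loaded in the WebView -- including an attacker-controlled one
    // if the deep link's URL is never verified.
    class AccountBridge {
        @JavascriptInterface
        fun getAuthToken(): String = "placeholder-session-token"
    }

    @SuppressLint("SetJavaScriptEnabled")
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val webView = WebView(this)
        webView.settings.javaScriptEnabled = true
        webView.addJavascriptInterface(AccountBridge(), "native")
        setContentView(webView)

        // Deep link of the form myapp://webview?url=<some page>.
        // An unvalidated load here would let an attacker's page call
        // window.native.getAuthToken() via the exposed bridge.
        val target: Uri? = intent.data?.getQueryParameter("url")?.let(Uri::parse)

        // Mitigation sketched here: only load hosts on an allowlist,
        // rejecting any deep link that points elsewhere.
        val allowedHosts = setOf("www.example.com", "m.example.com")
        val host = target?.host
        if (host != null && host in allowedHosts) {
            webView.loadUrl(target.toString())
        } else {
            finish() // drop unverified deep links
        }
    }
}
```

The allowlist check at the end illustrates the broad mitigation for this class of flaw, verifying where a deep link actually points before handing it to a WebView that carries privileged bridges; it is not a description of TikTok's specific fix.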

The exploit could have affected over 1.5 billion TikTok installations from the Google Play Store.

The vulnerability is actually a chain of several smaller issues that, taken together, could give attackers access to these accounts. Microsoft details all of its findings and how it discovered the exploit in its in-depth blog post.

When Microsoft notified TikTok's security team of the issue, the company "responded by releasing a fix to address the reported vulnerability, now identified as CVE-2022-28799, and users can refer to the CVE entry for more information. We commend the efficient and professional resolution from TikTok's security team."

News of this exploit comes on the heels of frequent reports of TikTok’s excessive data collection. Hopefully, this quick patch reflects how seriously the company takes user data and privacy. Microsoft and TikTok both recommend you double-check to make sure you are on the latest version of the app to avoid any issues.

Caleb Clark