
New zero-day Adobe Flash vulnerability discovered by research firm


According to Internet security research firm FireEye, a new zero-day exploit has been found in Adobe Flash. Last week, we reported that FireEye uncovered an Internet Explorer 10 vulnerability that was used to target the website of an organization that assists U.S. military veterans. FireEye is dubbing these attacks “Operation GreedyWonk.”

Visitors to the websites of the Peter G. Peterson Institute for International Economics, the American Research Center in Egypt, and the Smith Richardson Foundation “were redirected to an exploit server hosting this Flash zero-day through a hidden iframe,” according to FireEye.

Here’s what FireEye had to say about those behind Operation GreedyWonk:

“The group behind this campaign appears to have sufficient resources (such as access to zero-day exploits) and a determination to infect visitors to foreign and public policy websites. The threat actors likely sought to infect users to these sites for follow-on data theft, including information related to defense and public policy matters.”

To reduce your risk of falling prey to this threat, FireEye recommends upgrading from Windows XP and updating Java and Microsoft Office to their latest versions.

FireEye is working closely with Adobe on this matter, and Adobe has released a security bulletin of its own. You can check it out here.

What do you think? Sound off in the comments below.

Konrad Krawczyk