Russia arrests Kaspersky Lab security investigator on treason charges

Safe computing requires the involvement of people in all industries, locations, and fields of expertise. Normally, that’s not a problem, as most people are willing and able to provide whatever input is necessary to help alleviate security risks in the technology we all use.

Some regions of the world are not as free and open as others, however, and so not all professionals are able to participate without concern for their own safety. Such could be the case with a Kaspersky Lab investigator who was arrested on treason charges in Russia, as Ars Technica reports.

Kaspersky Lab was quick to disassociate itself from the incident, saying, “The case against this employee does not involve Kaspersky Lab. The employee, who is Head of the Computer Incidents Investigation Team, is under investigation for a period predating his employment at Kaspersky Lab. We do not possess details of the investigation. The work of Kaspersky Lab’s Computer Incidents Investigation Team is unaffected by these developments.”

Details are sketchy as to why the investigator, Ruslan Stoyanov, was arrested. Stoyanov leads Kaspersky Lab’s investigations unit and previously served in the cybercrime division of Russia’s Ministry of the Interior. As Forbes reports, Stoyanov’s arrest might be related to an investigation involving Sergei Mikhailov, deputy head of the information security department of the FSB, concerning monies paid by foreign companies.

However, Stoyanov recently contributed to the Kaspersky Lab Securelist blog, posting on cybercrime in Russia, and the Lawfare Blog has speculated — perhaps erroneously — that Stoyanov might have been a source of information leading to the conclusion that Russia sponsored hacking efforts aimed at interfering with the 2016 presidential election in the U.S. While nobody can be certain of the reasons for Stoyanov’s arrest, one general concern is that anyone who participates in efforts to fight cybercrime can come under political fire.

As Jake Williams of security firm Rendition Software put it, “For those living and working under oppressive regimes, keep up the good fight. But also remember that no incident response report or conference talk is worth jail time (or worse). I think that these charges will cause security researchers, particularly those in states with oppressive governments, to carefully consider the weight of reporting details of security incidents.”

Stoyanov’s arrest was filed under Article 275 of the Russian criminal code, which can impose treason charges on anyone who provides financial, technical, advisory, or other assistance to foreign states or organizations that are not friendly to Russia. This means that, as Forbes indicated in its coverage, merely providing the U.S. FBI with insights on malware such as botnets could run someone afoul of government agencies.

Nevertheless, the chilling effect on cybercrime research and mitigation could be significant if Stoyanov’s arrest indicates a trend of penalizing researchers and others for international cooperation. Even if Stoyanov’s arrest was for unrelated reasons, anyone involved with researching security in countries with oppressive governments might now think twice before working with foreign entities on resolving information security concerns.

Mark Coppock