
Surprising EU court ruling holds websites responsible for reader comments

(Image: Wikimedia)
The Internet is going through a teething period at the moment, straddling the wild west of yesteryear and whatever it will become tomorrow. To help figure out what that is, we need to decide to what extent websites are responsible for the comments of their users. In the opinion of the European Court of Human Rights (ECtHR), their responsibility is significant: its latest ruling on the matter has made a news site accountable for some of the statements posted by its users.

This decision by the court is noteworthy because it runs counter to the European Union's e-commerce protections, which, as Ars points out, "guarantees liability protection from intermediaries" as long as they implement a notice-and-takedown system.
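That notice-and-takedown mechanism is, in practice, a simple workflow: host content freely, queue anything a reader flags, and remove it promptly if review confirms a violation. A minimal sketch of that flow follows; all class and method names here are hypothetical, not taken from any real platform or from the directive itself:

```python
# Hypothetical sketch of a notice-and-takedown flow, the kind of process
# the EU's e-commerce rules condition liability protection on.
from dataclasses import dataclass


@dataclass
class Comment:
    comment_id: int
    text: str
    reported: bool = False   # a reader has filed a notice
    removed: bool = False    # the operator has taken it down


class NoticeAndTakedown:
    def __init__(self):
        self.comments = {}

    def post(self, comment_id, text):
        # Comments are published without pre-screening.
        self.comments[comment_id] = Comment(comment_id, text)

    def notice(self, comment_id):
        # A reader files a notice; the comment enters the review queue.
        self.comments[comment_id].reported = True

    def review(self, comment_id, violates_policy):
        # The operator reviews the notice and, if warranted, removes the
        # comment promptly to retain its liability protection.
        comment = self.comments[comment_id]
        if comment.reported and violates_policy:
            comment.removed = True
        return comment.removed
```

The point of contention in the ruling is exactly what this sketch leaves out: whether reacting to notices after the fact is enough, or whether an operator must catch offending comments before anyone reports them.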

This ruling has divided commentators. Some suggest it is the beginning of the end of web freedoms, since it provides a precedent for other cases, while others believe it strikes a happy middle ground, in which the site simply didn't take enough action to delete the comments. It's a fine line that everyone involved, ourselves included, has to walk.

The real problem with any decision like the one taken by the ECtHR is that, whether or not sites should bear some responsibility, enforcing it is difficult. Reported comments can be investigated, but at what scale? If administrators simply miss a post that is deemed offensive or hateful, is that negligence as punishable as knowingly allowing the content to be posted in the first place?

Fortunately for those concerned, the European Court of Human Rights has limited legal jurisdiction, and its ruling in this case cannot be used to change national or international law. It may be cited by lawyers, though, and the fact that the ruling came from a court whose judges are drawn from 47 member states hints at the climate of opinion in the region.

Jon Martindale
Jon Martindale is the Evergreen Coordinator for Computing, overseeing a team of writers addressing all the latest how to…
A dangerous new jailbreak for AI chatbots was just discovered
Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
