Self-Regulation of Fundamental Rights? The EU Code of Conduct on Hate Speech, Related Initiatives and Beyond

2019
Expertise
Empirical studies
Measures against hate speech
Political aspects
Legal aspects
Edward Elgar Publishing
Bilyana Petkova & Tuomas Ojanen
21 pages
free of charge

This contribution gives a brief overview of EU instruments encouraging self-regulation, such as codes of conduct, communications and recommendations, and proposes an alternative approach to fighting illegal content on online platforms, one that ventures squarely into co-regulation.

There is no formal, straightforward definition of what constitutes illegal hate speech. Broadly, however, hate speech can be characterised as speech targeting minority groups in a way that promotes violence, hatred or social disorder. The use of social media and online platforms to spread illegal content and hate speech has grown steadily in recent years, as content can be disseminated anonymously and further shared by other users. The timely removal or blocking of access to illegal content is therefore essential to limit its wider dissemination and the harm to the individuals it targets.

The prominent role of online platforms in revolutionising modern communication and shaping public opinion has increasingly come to the attention of policy makers. Since online platforms provide an important stage for phenomena such as ‘fake news’, ‘hate speech’ and ‘disinformation’, the pressure on them to take more responsibility for the content they host has grown.

The European Commission has responded with several attempts to set rules for online intermediaries, mostly relying on non-binding agreements, often in the form of self-regulatory measures such as codes of conduct, guidelines and recommendations.

These measures have raised concerns about possible limitations of freedom of expression, because they require online platforms to adjudicate on the legality of content, often by relying on automated systems, while decisions on the unlawfulness of hate speech and “disinformation” are notoriously difficult.

The deployment of algorithms to analyse user-generated content, such as recognition and filtering technologies, carries the risks and pitfalls of automated compliance solutions. Although algorithmic content monitoring still operates on the “human-in-the-loop” principle, the diligence and efficiency with which illegal content can be reviewed also depend on each company’s financial capacity and resources. In addition, these privatised removal procedures may be influenced by commercial interests and lack effective appeal mechanisms.
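
To illustrate the triage pattern implied by the “human-in-the-loop” principle, the following Python sketch shows one common design: an automated classifier decides only the clear-cut cases, while ambiguous content is routed to a human moderator. All names, thresholds and the toy classifier are hypothetical illustrations, not drawn from the chapter or from any platform's actual system.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    KEEP = "keep"            # score below the lower threshold: leave online
    HUMAN_REVIEW = "review"  # ambiguous score: escalate to a moderator
    REMOVE = "remove"        # score above the upper threshold: take down


@dataclass
class Post:
    post_id: str
    text: str


def classify(post: Post) -> float:
    """Stand-in for an automated hate-speech classifier.

    A real system would call a trained model; this toy version only
    illustrates that the output is a confidence score, not a verdict.
    """
    trigger_words = {"hate", "violence"}
    hits = sum(word in post.text.lower() for word in trigger_words)
    return min(1.0, hits / len(trigger_words))


def triage(post: Post, lower: float = 0.2, upper: float = 0.8) -> Decision:
    """Route a post based on classifier confidence.

    Only clear-cut cases are decided automatically; everything in the
    ambiguous middle band is deferred to a human moderator, reflecting
    the "human-in-the-loop" principle described above.
    """
    score = classify(post)
    if score >= upper:
        return Decision.REMOVE
    if score <= lower:
        return Decision.KEEP
    return Decision.HUMAN_REVIEW


if __name__ == "__main__":
    for post in [
        Post("1", "Lovely weather today"),
        Post("2", "They deserve hate"),
        Post("3", "hate and violence against them"),
    ]:
        print(post.post_id, triage(post).value)

The key design choice is the width of the ambiguous middle band: the narrower it is, the more cases are decided automatically and the fewer reach human reviewers, which is exactly where the resource constraints and due-process concerns described above come into play.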

All these issues raise serious questions about the democratic legitimacy of self-regulatory removal procedures.

An alternative solution, proposed in this article, would require platforms to apply a risk-based approach to preventing and removing illegal content. The norms and standards of such an approach would be grounded in a duty of care and subject to regulatory oversight. The authors suggest replacing the current self-regulatory proposals with co-regulatory solutions.
