
On 28 April 2021, the European Parliament adopted a new Regulation (Regulation (EU) 2021/784) that aims to address the dissemination of terrorist content on online platforms.
To that end, the Regulation first obliges online platforms to remove or disable access to flagged terrorist content in all Member States within one hour of receiving a removal order from the competent national authority.
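By way of illustration, the sketch below shows how a hosting service provider might track the one-hour deadline when processing a removal order. It is a minimal, hypothetical example under our own assumptions; the names (RemovalOrder, handle_removal_order, disable_access) do not correspond to any official specification or API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# The statutory deadline: content must be removed or disabled within
# one hour of receipt of a removal order.
REMOVAL_DEADLINE = timedelta(hours=1)

@dataclass
class RemovalOrder:
    order_id: str
    content_url: str
    issuing_authority: str
    received_at: datetime  # moment of receipt, in UTC

def disable_access(content_url: str) -> None:
    """Stand-in for the platform's own takedown logic."""
    print(f"Access disabled in all Member States: {content_url}")

def handle_removal_order(order: RemovalOrder) -> None:
    # Act on the order first, then verify the deadline was met;
    # any miss should be logged so it can be explained to the authority.
    disable_access(order.content_url)
    completed_at = datetime.now(timezone.utc)
    overdue = completed_at - (order.received_at + REMOVAL_DEADLINE)
    if overdue > timedelta(0):
        print(f"Order {order.order_id} handled {overdue} past the deadline")
```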
Furthermore, when the competent authority has determined that an online platform is exposed to terrorist content, additional obligations apply. The authority must base this decision on objective factors, such as the hosting service provider having received two or more final removal orders in the previous 12 months. Once the decision has been notified to the platform concerned, the platform must put in place measures to prevent the further propagation of terrorist content.
Online platforms remain free to choose which preventive measures they put in place to that end. The Regulation does, however, offer some examples, such as appropriate technical means to identify and expeditiously remove or disable access to terrorist content, or easily accessible and user-friendly mechanisms allowing users to report terrorist content.
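As a rough illustration of the second example, a user-reporting mechanism could be as simple as a queue that collects flags and surfaces them to human moderators. The sketch below is hypothetical; the class and field names are assumptions made for readability, not anything prescribed by the Regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UserReport:
    content_url: str
    reporter_id: str
    reason: str
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class ReportQueue:
    """Collects user reports of suspected terrorist content for review."""

    def __init__(self) -> None:
        self._reports: list[UserReport] = []

    def submit(self, report: UserReport) -> None:
        self._reports.append(report)

    def pending(self) -> list[UserReport]:
        # Oldest reports first, so no flag waits indefinitely.
        return sorted(self._reports, key=lambda r: r.reported_at)
```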
From the day the proposal for this new Regulation was launched, these rules have drawn strong criticism. Critics fear that the new rules and obligations will push platforms towards algorithmic moderation, which could lead to the overblocking of content. This, combined with the absence of any judicial control, could threaten freedom of expression and, they argue, poses a danger to democracy.
These criticisms did not fall on deaf ears. The European legislator has added a number of safeguards to the proposed legislation. First of all, the text of the Regulation now explicitly mentions that “material disseminated for educational, journalistic, artistic or research purposes or for awareness-raising purposes against terrorist activity should not be considered to be terrorist content”.
With regard to the deployment of automated filter tools, the Regulation states that “any requirement to take specific measures shall not include an obligation to use automated tools by the hosting service provider” and adds transparency obligations in this respect.
The Regulation also requires online platforms to have a complaint mechanism in place, allowing users whose content has been unjustly removed or disabled to request its reinstatement.
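To make the idea concrete, a complaint-and-reinstatement flow might look like the following minimal sketch. Again, all names (Complaint, ComplaintStatus, review_complaint, reinstate_content) are hypothetical and chosen for illustration only.

```python
from dataclasses import dataclass
from enum import Enum

class ComplaintStatus(Enum):
    PENDING = "pending"
    REINSTATED = "reinstated"
    REJECTED = "rejected"

@dataclass
class Complaint:
    content_id: str
    user_id: str
    justification: str
    status: ComplaintStatus = ComplaintStatus.PENDING

def reinstate_content(content_id: str) -> None:
    """Stand-in for the platform's own reinstatement logic."""
    print(f"Content reinstated: {content_id}")

def review_complaint(complaint: Complaint, removal_was_justified: bool) -> Complaint:
    # A human reviewer decides the complaint; if the removal turns out
    # to have been unjustified, the content is reinstated.
    if removal_was_justified:
        complaint.status = ComplaintStatus.REJECTED
    else:
        complaint.status = ComplaintStatus.REINSTATED
        reinstate_content(complaint.content_id)
    return complaint
```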
Whether these safeguards will be enough to protect freedom of expression online, only time will tell. In any event, this Regulation shows once again that the use of automated filters in the regulation of online platforms raises concerns, and that a regulatory framework covering such use is urgently needed. In that regard, the draft proposal for a Regulation on Artificial Intelligence is a welcome step in the right direction.
In the meantime, the Regulation will enter into force on the twentieth day following its publication in the Official Journal of the European Union and will start to apply 12 months after its entry into force.
Please contact Karel Janssens for further information and/or for general legal advice relating to online platforms.