Large tech firms urge EU for protection to tackle illegal content online



Facebook, Amazon, Twitter, Google, Microsoft and others have urged the European Union to introduce legal safeguards to help them tackle illegal content online.

EDiMA, the association that represents the firms, said stronger protection would allow companies to take proactive action to remove illegal content and activity from their services, without risking additional liability for those efforts.

It added that such safeguards would ensure better-quality moderation of user-generated content by companies.

“The EU approach to the freedom of expression is different to that of the US, so our approach to moderating content online must be different also. Our proposal is based on European values and laws and sets clear limits to the legal safeguard for service providers in order to protect the freedom of expression and to prevent over-removal by service providers,” El Ramly, EDiMA’s Director General, said.

Under existing rules, online service providers are obliged to remove illegal content when they have ‘actual knowledge’ of its presence. This ensures that companies are not obliged to police all content uploaded by users, which would inevitably infringe on the fundamental rights to speech and privacy.

However, if a service provider is deemed to have ‘actual knowledge’ of illegal activity/content on their service that they fail to remove, they cannot benefit from the limited liability provisions of the e-Commerce Directive.

If a service provider has put in place an algorithm to detect infringements, it is not entirely clear whether such measures confer ‘actual knowledge’ on the service provider.

In the US, protection has been provided through the ‘Good Samaritan’ principle, under Section 230(c) of the Communications Decency Act. It states that actions taken by a service provider to reduce illegal activity online do not affect the provider’s limited liability.

However, it gives online service providers greater control over removing not only illegal content but also “not-illegal-but-harmful” content. The principle has drawn criticism for being framed more as a grant of free-speech discretion to service providers than as a tool to protect them from liability for illegal content.

“This has raised concerns that the US Good Samaritan Principle potentially creates a difficult hurdle to overcome when someone wishes to question a service provider’s moderation decision on content that is not illegal. For these reasons and others, Europe needs to take a different approach,” EDiMA said.

In a paper published as part of its Online Responsibility Framework Series, the association recommends new measures for service providers, including minimum information levels in notices of illegal content and a requirement for human review of appeals against content removal. It also acknowledges the need for an appropriate governance structure to manage these provisions and called for this to be part of the Digital Services Act.


Dec 5, 2020
