Based on a compromise deal struck between France and Germany, the final version of Article 13 requires for-profit Internet platforms to license content from copyright holders. In the event that’s not possible, platforms will be required to take infringing content down and ensure that it’s not re-uploaded to their services.
Proponents of the legislation argue that this does not automatically mean the use of so-called “upload filters,” but opponents simply cannot see how the objectives can be achieved without them. Needless to say, privacy advocates aren’t happy with the prospect of their data being analyzed every time they upload content to a qualifying platform.
While most of the furor has centered on the well-documented false-positive weaknesses of existing content recognition systems, such as YouTube’s Content ID, there is growing concern that only the richest platforms will be able to build their own competing systems.
Those that don’t will be forced to use third-party vendors, and the fear is that the big players will scoop up the market, effectively channeling the traffic of millions of EU citizens through a small number of companies.
In a statement published this week in German (translated by Florian Mueller of fosspatents.com and officially approved by the government agency), the agency warns:
“Even though upload filters are not explicitly mandated by the bill, they will be employed as a practical effect. Especially smaller platform operators and service providers will not be in a position to conclude license agreements with all [copy]right holders. Nor will they be able to make the software […]
“Instead, they will utilize offerings by large IT companies just the way it is already happening, for one example, in the field of analytics tools, where the relevant components created by Facebook, Amazon […]
“At the end of the day, this would result in an oligopoly consisting of a few vendors of filtering technologies, which would then be instrumental to more or less the entire Internet data traffic of relevant platforms and services,” the statement continues.
“The wealth of information those vendors would receive about all users in the process is evidenced by, among other examples, current media coverage of data transfers by eHealth apps to Facebook.”
“If the EU believes platform operators can meet their new obligations [under the proposed copyright directive] without upload filters, it must make a clear showing. That is why I am […]
“Otherwise, data privacy considerations require a thorough overhaul of the bill. Notwithstanding the need to update the protection of authors’ rights in our times, such a measure must not harm or compromise the protection of Internet users’ data,” he concludes.
Also this week, the European Parliament’s legal affairs committee voted through the final draft of the Copyright Directive, with sixteen votes in favor and nine against. Then, following the ‘mob’ PR disaster (1, 2) last month, the EU managed to make a mess of things again with the following tweet/video.
If you didn’t spot the errors in need of clarification or correction already, here’s Julia Reda MEP with the details.
“The video claims that the reform is directed at ‘large platforms’: In fact, the size of the platform does not matter for the application of #article13, merely whether it hosts ‘large amounts’ of protected content. This can also be the case for a platform run by a single person,” Reda wrote on Twitter, adding:
“The video says a lighter regime applies to platforms that have a turnover below 10 Million *or* less than 5 million unique visitors. This is just wrong. Actually, the lighter regime only applies if both criteria are met, and platforms are also younger than 3 years old.”
While the errors are a problem in themselves, the fact that the EU is weighing in yet again in a manner that appears biased is not being well received by opponents of Article 13. Indeed, it only appears to have added further fuel to the fire.