Tech giants have gotten worse at removing illegal hate speech from their platforms under a voluntary arrangement in the European Union, according to the Commission's latest assessment. The sixth evaluation report of the EU's Code of Conduct on removing illegal hate speech found what the bloc's executive calls a "mixed picture", with platforms reviewing 81% of notifications within 24 hours and removing an average of 62.5% of flagged content. Both results are lower than the averages recorded in 2019 and 2020, the Commission notes.

The self-regulatory initiative kicked off back in 2016, when Facebook, Microsoft, Twitter and YouTube agreed to review and remove, within 24 hours, hate speech that falls foul of their community guidelines. Since then Instagram, Google+, Snapchat, Dailymotion, Jeuxvideo.com, TikTok and LinkedIn have also signed up to the code.

While the headline promises were bold, the reality of how platforms have performed has often fallen short of what was pledged. And while there had been a trend of improving performance, that has now stopped or stalled, per the Commission, with Facebook and YouTube among the platforms performing worse than in earlier monitoring rounds.
A key driver for the EU to establish the code five years ago was concern about the spread of terrorist content online, as lawmakers sought ways to quickly apply pressure to platforms to speed up the removal of hate-preaching content. But the bloc now has a regulation for that: in April the EU adopted a law on terrorist content takedowns that set one hour as the default time for removals to be carried out.

EU lawmakers have also proposed a wide-ranging update to digital regulations that would expand requirements on platforms and digital services in a range of areas around their handling of illegal content and/or goods. This Digital Services Act (DSA) has not yet been passed, so the self-regulatory code remains operational — for now.

via TechCrunch: Tech giants' slowing progress on hate speech removals underscores need for law, says EC