Asking the big social media companies to remove extremist content more quickly will do little to fight terrorism
Barely a day goes by without social media coming under fire from activists and advertisers over hate speech and racist rhetoric.
The controversy goes to the heart of the debate about the extent to which social media platforms should become the arbiters of content decisions, and whether internet companies should be solely responsible for dealing with abhorrent content posted by users.

Facebook and Twitter are both doing more than ever to reduce “online harms” – certainly much more than is legally mandated – but work carried out by Tech Against Terrorism shows that the majority of activity by terrorists and violent extremists has now shifted to smaller, newer messaging apps and niche social networks.
We need to acknowledge that, for all the understandable focus on the bigger platforms, it