Freedom of expression and content moderation on social networks

During the 2016 US presidential election, a fabricated headline claiming that the Pope had endorsed Donald Trump spread rapidly through Facebook, provoking a wave of tweets and YouTube videos. It looked true. Yet it was false. How should policymakers and social media platforms fight such “fake news”? Participants at the latest CEPS Digital Forum event on September 5th attempted to answer this elusive question.

In Europe, governments are demanding that Facebook, Google, YouTube and Twitter identify and delete hate speech, terrorist propaganda and other forms of problematic expression. The European Commission has signed a memorandum of understanding that obliges social platforms to speed up their takedowns of objectionable material. Germany has adopted a law that imposes large fines on any network that fails to remove unlawful speech within 24 hours of notification. To comply at that speed, platforms increasingly turn to automated filtering.

But these automatic tools represent a danger to free speech. According to Emma Llansó, Director of the Free Expression Project at the Center for Democracy and Technology (CDT), who served as a discussant at the event, much “real” news ends up being removed along with fake news such as the Pope’s supposed endorsement of Trump. All too often, machines find it difficult to distinguish not only between fake and real, but also between what is appropriate and what is not.

For a fuller discussion of these questions, see the CEPS Commentary by William Echikson, “To filter or not to filter – That is the question”, 12 September 2017.