Spare the banhammer, spoil the commenter
So the European Commission, Microsoft, Google, Facebook and Twitter are in agreement: the Internet has a bullying problem. Well, bullying, misogyny, bigotry, religious intolerance, racism, radicalisation – you get the idea. Such is the problem of having the technology you develop get hijacked by its user base and become something you had no idea it was capable of. Who would have known in 2005 that YouTube would give birth to a new kind of video star; that Facebook would become the world’s biggest publisher of news; that Twitter would become a go-to destination for breaking stories, political point scoring and outraged #mobs that dissipate almost as quickly as they appear? All these wildly profitable companies wanted to build something cool, then the user base showed up and forced them into having to act all adult and responsible.
Thankfully, hatred is bad for business and hate speech is already illegal, but the absence of consistent measures and patchy enforcement mean only the most high-profile transgressions get sanctioned. Now the EC and its industry partners have hit upon a formula for responding in a manner that is consistent and logical, and that doesn’t let platforms palm off their duty of care to their users.
Key to the EC’s 12-point code of conduct are clear community guidelines, user experience tweaks to make it easier to flag objectionable content, the rapid moderation of content (preferably within 24 hours), and the development of clear lines of communication between EU member states and industry to establish best practices.
As the Commission put it in a statement: “While the effective application of provisions criminalising hate speech is dependent on a robust system of enforcement of criminal law sanctions against the individual perpetrators of hate speech, this work must be complemented with actions geared at ensuring that illegal hate speech online is expeditiously reviewed by online intermediaries and social media platforms, upon receipt of a valid notification, in an appropriate time-frame.”
I’m encouraged by the EC’s emphasis on the role of human-powered moderation – basic cop on, if you will. Over the past few months there have been numerous stories about Facebook’s overzealous removal of content on topics such as breastfeeding and body shaming, even as the platform struggles to suppress material from Isis recruiters in Belgium.
The success of the code of conduct will come down to a combination of easily understood policy, technology, and the ability to discern the distasteful from the illegal, the sarcastic from the literal. Successful moderation can’t be reduced to culling material read out of context because someone, somewhere might react badly to it.
Twitter’s head of public policy for Europe Karen White expressed a similar position: “Hateful conduct has no place on Twitter… However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.”
As for how this will work, there are aspects of the new code that are appealing. However, there is a disconnect between what the code has to do and the methods businesses are being encouraged to use to implement it. For example, rather than take down comments or entire threads and pages in real time, the code requires a “rapid” response and even the promotion of “counternarratives” – which sounds like a euphemism for propaganda – to offset toxic content.
In terms of countering a long-tail effect, where material left online has an indefinite shelf life, a 24-hour recommended response period is fine. But if a troll is posting obscene comments on the profile page of someone who has recently died, the shock value will have played out long before then. Let’s not forget the speed at which a tweetstorm can be unleashed and then die down – the damage done before the story hits the mainstream media with the line ‘in a tweet since deleted’ paraphrasing the original comment.
It’s defeatist to say the new code of conduct is ‘better than nothing’ or ‘good enough’ to deal with illegal hate speech on social media, but there are other models being trialled. Periscope, Twitter’s live-streaming service, is looking at a community-based approach where trolls can be silenced as soon as they start kicking up. This kind of self-moderation has been used with some success on titles like The Guardian, where comment sections are presented as a mix of chronological and popular order. For speed it could be a more precise solution, but a ‘spare the banhammer, spoil the commenter’ approach may not have the same appeal for larger platforms. In any case, know that whatever you post online, the mods are more likely to be watching.