New research indicates that the online proliferation of hate speech could be curbed using a technique similar to that deployed against malicious software.
In a proposal titled ‘Quarantining online hate speech: technical and ethical perspectives’, published in the journal Ethics and Information Technology, researchers at the University of Cambridge advocate a ‘quarantine’ approach to preventing and addressing hate speech and threats. The technique, which has its roots in the cybersecurity practices used to stop the spread of malware, draws on databases of hateful and violent language to build an algorithm capable of anticipating and corralling potential hate speech.
Dr Stefanie Ullman, the study’s co-author, said: “Hate speech is a form of intentional online harm, like malware, and can therefore be handled by means of quarantining. In fact, a lot of hate speech is actually generated by software such as Twitter bots. Identifying individual keywords isn’t enough; we are looking at entire sentence structures and far beyond. Sociolinguistic information in user profiles and posting histories can all help improve the classification process.”
The algorithm is currently in the early stages of development, but its creators predict that, once sufficiently refined, it could act in a similar fashion to spam and malware filters. When a user receives a hateful or threatening message, the software would provide them with a ‘severity score’, the name of the sender and the option to either view the message or delete it unread.
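The mechanics of such a filter can be sketched in a few lines of Python. This is a minimal illustration, not the researchers’ actual system: the keyword lexicon and all names here are hypothetical stand-ins for their classifier, which, as the study notes, analyses whole sentence structures and user metadata rather than individual words.

```python
from dataclasses import dataclass

# Toy lexicon standing in for a trained hate-speech classifier.
# All terms and weights are hypothetical placeholders.
HATE_LEXICON = {"vile_term": 0.9, "threat_term": 0.8}

@dataclass
class QuarantinedMessage:
    sender: str
    text: str
    severity: float  # 0.0 (benign) to 1.0 (severe)

def severity_score(text: str) -> float:
    """Score a message by its worst-matching lexicon entry (toy heuristic)."""
    words = text.lower().split()
    return max((HATE_LEXICON.get(w, 0.0) for w in words), default=0.0)

def filter_message(sender: str, text: str, threshold: float = 0.5):
    """Quarantine messages scoring above the threshold; deliver the rest."""
    score = severity_score(text)
    if score >= threshold:
        return QuarantinedMessage(sender, text, score)  # held for user review
    return text  # delivered as normal

incoming = filter_message("anon123", "you vile_term")
if isinstance(incoming, QuarantinedMessage):
    # The recipient sees only the severity score and the sender's name,
    # then chooses to view the message or delete it unread.
    print(f"Quarantined message from {incoming.sender} "
          f"(severity {incoming.severity:.1f})")
```

The key design point mirrored here is that the system does not block content outright: the flagged message is held intact, and the decision to read or discard it stays with the recipient.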
Dr Marcus Tomalin, the study’s other co-author, said: “Companies like Facebook, Twitter and Google generally respond reactively to hate speech. This may be okay for those who don’t encounter it often. For others it’s too little, too late. Many women and people from minority groups in the public eye receive anonymous hate speech for daring to have an online presence. We are seeing this deter people from entering or continuing in public life, often those from groups in need of greater representation. Through automated quarantines that provide guidance on the strength of hateful content, we can empower those at the receiving end of the hate speech poisoning our online discourses.”