Online hate speech could be contained like a computer virus, say researchers
Artificial intelligence is being developed that will allow advisory "quarantining" of hate speech in a manner akin to malware filters - offering users a way to control their exposure to "hateful content" without resorting to censorship.

"We can empower those at the receiving end of the hate speech poisoning our online discourses" - Marcus Tomalin

The spread of hate speech via social media could be tackled using the same "quarantine" approach deployed to combat malicious software, according to University of Cambridge researchers.

Definitions of hate speech vary depending on nation, law and platform, and simply blocking keywords is ineffectual: graphic descriptions of violence need not contain obvious ethnic slurs to constitute racist death threats, for example.

As such, hate speech is difficult to detect automatically. It has to be reported by those exposed to it, after the intended "psychological harm" has been inflicted, with armies of moderators required to judge every case.
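The quarantine idea described above can be sketched in code. This is a minimal, hypothetical illustration, not the researchers' actual system: the `hate_score` heuristic is a toy stand-in for a trained classifier, and the threshold, message format, and function names are all assumptions made for the example. The key point it demonstrates is that a message above the threshold is held back with an advisory warning, and the recipient, not the platform, decides whether to read it.

```python
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    text: str


def hate_score(message: Message) -> float:
    """Stand-in for a trained classifier: returns a severity between
    0.0 (benign) and 1.0 (almost certainly hateful). A real system
    would use a model trained on labelled data, not keyword matching,
    precisely because keyword blocking is ineffectual."""
    # Toy heuristic purely for illustration -- NOT a real detector.
    flagged_terms = {"hate", "threat"}
    words = message.text.lower().split()
    hits = sum(1 for w in words if w in flagged_terms)
    return min(1.0, hits / max(1, len(words)) * 5)


def deliver(message: Message, threshold: float = 0.5) -> str:
    """Advisory quarantine: above the threshold the message is held,
    and the recipient sees a warning giving the sender and severity
    instead of the content. The user chooses to view or discard."""
    score = hate_score(message)
    if score >= threshold:
        return (f"Quarantined message from {message.sender} "
                f"(severity {score:.2f}): view or discard?")
    return message.text


print(deliver(Message("alice", "see you at lunch")))
print(deliver(Message("troll42", "hate hate threat")))
```

The design mirrors a malware filter: suspicious content is isolated rather than deleted, so false positives cost the user a click rather than silencing legitimate speech.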


