Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

'The enemy among us': detecting cyber hate speech with threats-based othering language embeddings

Alorainy, Wafa, Burnap, Pete, Liu, Han and Williams, Matthew L. 2019. 'The enemy among us': detecting cyber hate speech with threats-based othering language embeddings. ACM Transactions on the Web 13 (3), 14. 10.1145/3324997

PDF - Accepted Post-Print Version


Offensive or antagonistic language targeted at individuals and social groups based on their personal characteristics (also known as cyber hate speech, or cyberhate) is frequently posted and widely circulated via the World Wide Web. This can be considered a key risk factor for individual and societal tension surrounding regional instability. Automated Web-based cyberhate detection is important for observing and understanding community and regional societal tension, especially in online social networks where posts can be rapidly and widely viewed and disseminated. While previous work has used lexicons, bags-of-words or probabilistic language parsing approaches, these methods share a common weakness: cyberhate can be subtle and indirect, so relying on the occurrence of individual words or phrases can produce a significant number of false negatives and an inaccurate representation of trends in cyberhate. This problem motivated us to challenge thinking around the representation of subtle language use, such as references to perceived threats from 'the other', including immigration or job prosperity, in a hateful context. We propose a novel 'othering' feature set that draws on language use around the concept of 'othering' and intergroup threat theory to identify these subtleties, and we implement a wide range of classification methods using embedding learning to compute semantic distances between parts of speech considered to be part of an 'othering' narrative. To validate our approach we conducted two sets of experiments. The first compared the results of our novel method with state-of-the-art baseline models from the literature; our approach outperformed all existing methods. The second tested the best-performing models from the first phase on unseen datasets for different types of cyberhate, namely religion, disability, race and sexual orientation.
The results showed F-measure scores for classifying hateful instances of 0.81, 0.71, 0.89 and 0.72 respectively, demonstrating that the 'othering' narrative can be an important component of model generalisation.
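The semantic-distance idea mentioned in the abstract can be sketched as follows. This is a hedged illustration only, not the authors' actual feature extraction: the toy 3-dimensional vectors, the term choices ("us", "them", "jobs") and the feature definition are all invented for demonstration; in practice the vectors would come from a pre-trained embedding model such as word2vec or GloVe.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Illustrative (made-up) embeddings for an in-group/out-group pronoun pair
# and a perceived-threat term from an 'othering' narrative.
embeddings = {
    "us":   np.array([0.9, 0.1, 0.2]),
    "them": np.array([0.1, 0.9, 0.3]),
    "jobs": np.array([0.2, 0.8, 0.4]),
}

# A toy 'othering'-style feature: how close the out-group pronoun sits to the
# threat term in embedding space, compared with the in-group pronoun.
them_jobs = cosine_similarity(embeddings["them"], embeddings["jobs"])
us_jobs = cosine_similarity(embeddings["us"], embeddings["jobs"])
print(f"sim(them, jobs)={them_jobs:.2f}, sim(us, jobs)={us_jobs:.2f}")
```

With these toy vectors the out-group pronoun is markedly closer to the threat term than the in-group pronoun, which is the kind of asymmetry a classifier could use as a signal of a threat-based othering narrative.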

Item Type: Article
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Social Sciences (Includes Criminology and Education)
Subjects: H Social Sciences > H Social Sciences (General)
Q Science > QA Mathematics > QA76 Computer software
Publisher: Association for Computing Machinery (ACM)
ISSN: 1559-1131
Funders: ESRC
Date of First Compliant Deposit: 2 April 2019
Date of Acceptance: 18 March 2019
Last Modified: 08 Nov 2023 04:52
