Online Hate Speech: Linguistic Challenges in the Age of AI


About this Research Topic

Submission closed

Background

Hate speech is not a new phenomenon; the spread of hate propaganda has often been the prelude to physical violence in historical conflicts, including war crimes and genocides. However, with the advent of the internet, online spaces have become preferred platforms for some communities to inflame and attack others, as the broad reach and high impact of shared messages amplify reactions.

The emergence and consolidation of online platforms and social media have created new opportunities for expressing extreme emotions. The specific features of online communication—endless availability, perceived anonymity, and immediacy—offer various ways to express opinions and engage in global public discussions. At the same time, these same features facilitate the spread of vitriolic rhetoric that incites violence and causes emotional and psychological harm.

Due to its significant impact on the population, controlling and regulating hate speech have become central issues in public policy across many countries. Recent developments include the EU’s Digital Services Act (DSA), which regulates online content. Measures such as content moderation have been adopted by institutions, governments, and private companies to curb hate speech. These policies frequently stir debate and raise concerns about protecting human rights online while preserving free speech in democratic societies. The rapid development of AI has introduced new tools that can both spread and combat hate speech, adding a new dimension to the issue.

Against this background, this Research Topic aims to foster a broader and more fruitful critical discussion on hate speech in online contexts. It welcomes original, empirical, up-to-date case studies from diverse global contexts. Contributors are invited to engage with theoretical and methodological reflections on the relationship between language, discourse, and AI in amplifying disinformation and online hate speech.

Submissions to this Research Topic may focus on any of the following subthemes:

• hate speech: conceptualization and challenges
• sociolinguistic variables of online hate speech
• pragmatic aspects of hate speech: impoliteness and hate speech acts (insulting, threatening, inciting, accusing, etc.)
• pragmatic strategies used in hate speech: othering, blame reversal, denial, and agency deletion
• the use of humor as a strategy to spread and fight online hate speech
• building and annotating hate speech corpora
• natural language processing (NLP) of online hate speech
• forensic linguistics and hate speech
• online hate speech in educational contexts
• hate speech vs. free speech
• counter-hate speech strategies
• linguistic strategies to regulate hate speech online
• AI and hate speech
• hate speech and discourse
• multimodal realizations of hate speech.
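Several of these subthemes, corpus building and annotation in particular, depend on reliable human labelling, which is commonly checked via inter-annotator agreement. As a minimal illustrative sketch (the annotators, posts, and labels below are hypothetical, not drawn from this Research Topic), Cohen's kappa for two annotators can be computed in plain Python:

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(ann_a) == len(ann_b) and ann_a, "need two equal-length label lists"
    n = len(ann_a)
    # Observed agreement: share of items both annotators labelled identically.
    po = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Expected chance agreement, from each annotator's marginal label frequencies.
    counts_a, counts_b = Counter(ann_a), Counter(ann_b)
    pe = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical labels for ten posts ("hate" vs. "not").
annotator_1 = ["hate", "not", "hate", "not", "not", "hate", "not", "not", "hate", "not"]
annotator_2 = ["hate", "not", "not", "not", "not", "hate", "not", "hate", "hate", "not"]
print(round(cohens_kappa(annotator_1, annotator_2), 3))  # → 0.583
```

Values near 1 indicate strong agreement; values near 0 indicate agreement no better than chance, a signal that the annotation guidelines for a hate speech corpus need refinement.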

Information and Instructions for Authors

This Research Topic accepts any of the article types listed below, except for Editorial (Editorials are submitted exclusively by the Topic Editors).

Please also note that this Research Topic accepts submissions via Frontiers in Communication only.


Keywords: online hate speech, linguistics, social media, discourse

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Topic editors