As social media and other online platforms continue to develop and proliferate, online hate speech* is on the rise, with cases surging worldwide in recent years, particularly during the COVID-19 pandemic. Extant scholarship has explored various aspects of the issue, including the spread and detection of online hate speech. Yet the rapid growth of these platforms, many of which are increasingly becoming polarized arenas of ideological conflict, calls once again for a rethinking, reimagining, and reconfiguration of how the phenomenon is understood.
Against this background, the aim of this Research Topic is to provide an inter- and multidisciplinary platform to address the increasing prevalence of hate speech and hateful conduct online. The idea is to bring together a broad selection of empirical works that address the key aspects, patterns, configurations, and manifestations of online hate speech.
Contributors can consider the following themes as a guide in preparing submissions:
• online hate speech during the COVID-19 pandemic and/or post-pandemic era
• comparative, cross-national, or global studies of online hate speech
• empirical work examining theoretical perspectives on online hate speech
• evaluation research on online hate speech prevention programs, policies, laws, or procedures
• methodological innovations for the detection of online hate speech
• the detection and tracing of emerging types and patterns of online hate speech
• online hate speech in the context of political extremism
• pathways to becoming a generator of online hate speech
• the spread of online hate speech across communities
• the prevention of online hate speech
We welcome a variety of methodologies, including quantitative and qualitative methods as well as computational analysis. Manuscripts should include clear methodological justifications and explicitly identify the particular sources of information under analysis.
*Online hate speech is broadly defined as hate speech, hateful expressions, and hateful conduct (in any format: text, video, image, etc.) published in online spaces or platforms and on social media. This includes hate speech toward racial/ethnic/religious/sexual minorities, disabled people, immigrants, and other marginalized groups of people.
Keywords:
online hate speech, social media, online communities, minorities, marginalized groups
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.