About this Research Topic

Abstract Submission Deadline 31 July 2023
Manuscript Submission Deadline 28 February 2024

As social media and other online platforms continue to develop and proliferate, online hate speech* is on the rise, with cases surging worldwide in recent years, particularly during the COVID-19 pandemic. Existing scholarship has explored various aspects of the issue, including the spread and detection of online hate speech. Yet the rapid proliferation of social media and other online platforms, many of which are increasingly becoming polarized arenas of ideological conflict, calls for a renewed rethinking and reconceptualization of the phenomenon.

Against this background, the aim of this Research Topic is to provide an inter- and multidisciplinary platform to address the increasing prevalence of hate speech and hateful conduct online. The idea is to bring together a broad selection of empirical works that address the key aspects, patterns, configurations, and manifestations of online hate speech.

Contributors can consider the following themes as a guide in preparing submissions:

• online hate speech during the COVID-19 pandemic and/or post-pandemic era
• comparative, cross-national, or global studies of online hate speech
• empirical work examining theoretical perspectives on online hate speech
• evaluation research on online hate speech prevention programs, policies, laws, or procedures
• methodological innovations for the detection of online hate speech
• the detection and tracing of emerging types and patterns of online hate speech
• online hate speech in the context of political extremism
• pathways by which individuals become generators of online hate speech
• the spread of online hate speech across communities
• the prevention of online hate speech.

We welcome a variety of methodologies, including quantitative and qualitative methods as well as computational analysis. Manuscripts should include clear methodological justifications and explicitly identify the particular sources of information under analysis.
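To illustrate the kind of computational analysis the call invites, the sketch below shows a deliberately naive, lexicon-based baseline for flagging potentially hateful posts. Everything here is hypothetical — the lexicon, threshold, and function names are placeholders, not a method endorsed by the Research Topic — and serious detection research would instead use annotated corpora and context-aware supervised models.

```python
# Purely illustrative sketch: a naive lexicon-based flagger.
# The lexicon below contains placeholder tokens, not real slurs;
# real studies would use annotated data and learned classifiers.

HATE_LEXICON = {"slur1", "slur2", "degrading_term"}  # hypothetical terms

def flag_post(text: str, lexicon: set[str] = HATE_LEXICON,
              threshold: int = 1) -> bool:
    """Flag a post if it contains at least `threshold` lexicon terms."""
    tokens = text.lower().split()
    hits = sum(1 for tok in tokens if tok in lexicon)
    return hits >= threshold

posts = ["a perfectly ordinary message", "this contains slur1 sadly"]
print([flag_post(p) for p in posts])  # [False, True]
```

Such rule-based baselines are easy to audit but brittle (they miss obfuscated spellings and contextual hate), which is precisely why methodological innovation in detection remains an open theme.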

*Online hate speech is broadly defined as hate speech, hateful expressions, and hateful conduct (in any format: text, video, image, etc.) published in online spaces or platforms and on social media. This includes hate speech toward racial/ethnic/religious/sexual minorities, disabled people, immigrants, and other marginalized groups of people.

Keywords: online hate speech, social media, online communities, minorities, marginalized groups


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.
