Research Topic

Resilience of Machine Learning Techniques against Adversarial Attacks in Bot and Fake News Detection

About this Research Topic

Fake news is usually defined as ‘fabricated information that mimics news media content in form but not in organizational process or intent’. Its presence has been documented in several contexts, such as politics, vaccination, food habits, and financial markets, to name but a few. Although false stories have circulated for centuries, why are we so particularly concerned about them now? While the advent of the internet has facilitated access to news, it has also led to a proliferation of user-generated content, unscreened by any moderator. Typically, fake news is published on a little-known outlet and amplified through social media posts, quite often using so-called social bots. These software agents can mimic the behavior of a genuine account and maliciously generate artificial hype.

In recent years, researchers have intensified their efforts against the creation and spread of fake news, as well as against the use of social bots as tools for spreading it. Currently, the most common detection methods, for both social bots and fake news itself, are based on supervised machine learning algorithms. While these approaches achieve good performance on specific test cases, they suffer from attacks that critically degrade the performance of the learning algorithms. As in a continuous adversarial battle between bot developers and the designers of detection mechanisms, social bots have evolved over time: in the late 2000s, bots were easily detectable; nowadays, due to the evolutionary pressure induced by the first detection mechanisms, social bots have become far more sophisticated. They are almost indistinguishable from genuine accounts because they mimic human behavior in a very realistic way. Moreover, bots now work “in teams”, which makes it much harder to determine whether a single account is genuine or automated. This is especially the case when single-account evaluation mechanisms omit features related to coordination. In the context of fake news, it has recently been shown that it is possible to act on the title, content, or source of a news item to alter the result of a classifier, changing it from “true” to “false” and vice versa.
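
As a purely illustrative sketch of this last point, the toy pipeline, headlines, and word substitutions below are our own stand-ins rather than material from any cited study; the idea is simply to probe a text classifier by rewording a headline and checking whether the predicted label flips:

    # Illustrative only: a toy "fake news" headline classifier probed with small word substitutions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Stand-in training data; real detectors are trained on far larger labelled corpora.
    headlines = [
        "Government confirms new vaccination schedule",
        "Markets close slightly higher after central bank statement",
        "Miracle cure hidden by doctors revealed in leaked memo",
        "Celebrity secretly replaced by clone, insider claims",
    ]
    labels = ["true", "true", "false", "false"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(headlines, labels)

    original = "Miracle cure hidden by doctors revealed in leaked memo"
    # A hand-picked perturbation: swap a few trigger words for more neutral ones,
    # leaving the meaning of the headline essentially unchanged.
    perturbed = "New treatment overlooked by doctors described in leaked memo"

    print(clf.predict([original])[0], clf.predict([perturbed])[0])
    # If the two labels differ, the reworded headline acts as an adversarial example.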

This Research Topic aims to discuss new techniques and collect new results about attacks that can be launched against detection systems to deceive them and lower their performance. We believe that a promising field of research for this purpose is Adversarial Machine Learning (AML), which aims to understand when, why, and how learning models can be attacked, and to investigate the most effective strategies to mitigate such attacks. In the Computer Vision domain, as early as 2014, researchers discovered that an image classifier can flip its prediction when fed an image modified with ad hoc noise; the perturbed image is the so-called adversarial example. Adversarial examples opened an interesting research avenue and attracted many more researchers to machine learning. We argue that the research effort spent on making image recognition systems stronger should also be invested in social bot and fake news detection systems: for example, by artificially creating evolved bots and testing how state-of-the-art detection techniques respond to them, and, in the same vein, by testing fake news classifiers against modifications of the news items themselves and/or their metadata.
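
As a minimal, self-contained sketch of how such an adversarial example can be constructed, the fast gradient sign method (FGSM) perturbs an input in the direction that increases the classifier's loss; the model, input, and epsilon below are illustrative stand-ins, not components of any particular detection system discussed here:

    # Minimal FGSM sketch (PyTorch); the model and image are stand-ins for illustration only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        """Return a copy of `image` perturbed with the fast gradient sign method."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss, then clip to a valid pixel range.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

    # Toy usage with a stand-in linear classifier and a random "image".
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    image = torch.rand(1, 3, 32, 32)
    label = torch.tensor([3])
    adversarial_image = fgsm_perturb(model, image, label)
    print(model(image).argmax(1), model(adversarial_image).argmax(1))  # the prediction may flip

Analogous perturbation strategies for text, account features, and behavioral signals are exactly the kinds of attacks, and defenses, that this Research Topic seeks to collect.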

Thus, we encourage authors to submit their work in the macro-field of AML for fake news and social bot detection, on topics including, but not limited to, the following:
• Real-world attacks against current learning models for fake news/social bot detection;
• Theoretical understanding of AML and certifiable robustness for text/bot classification;
• Vulnerabilities and potential solutions to adversarial machine learning in fake news/social bot recognition applications;
• Repeatable experiments adding to the knowledge about adversarial examples on text, image, video, and online account datasets;
• Fake news/social bot data distribution drift and its implications for model generalization and robustness;
• Detection and defense mechanisms against adversarial examples for fake news/social bot detection systems;
• Novel challenges and discoveries in adversarial machine learning for fake news, bot, and disinformation campaign detection systems.

Data-driven approaches, supported by publicly available datasets, are more than welcome.


Keywords: Fake news, Social Bots, Robustness of Learning Algorithms, Adversarial Examples, Machine Learning


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Submission Deadlines

Abstract: 20 June 2021
Manuscript: 30 September 2021
