About this Research Topic
Speaking at the Web Summit technology conference in Lisbon, Tim Berners-Lee listed some of the current problems of the Web: fake news, privacy issues, the collection and abuse of personal data, and the way people are profiled and then manipulated. Indeed, the initial optimism about the positive potential of the Internet and social media has given way to concerns that people are being manipulated through the constant harvesting of personal information and through control over the information they can see online, based on the categories they are classified into. Even though online users generally report high concern for their privacy, they tend to engage in privacy-compromising online behavior. This behavior is not accidental; it is often the result of crafty manipulation of users by deceivers with malicious objectives.
Digital technology, and in particular Artificial Intelligence (AI), is essential in the fight against online deception attacks such as spearphishing, web cache deception, practical cache poisoning, and marketing or economic dark patterns, whether deployed as a countermeasure to automatically identify and mitigate such attacks or simply to help guide the user away from online threats. Indeed, the main advantage of AI is its adaptability: it learns from data to detect attacks and to personalize protection to the user’s needs and preferences. However, this might be its weakness as well. Specifically, AI could be biased (e.g., reflecting biases present in the data), it might be used for mass surveillance purposes, or it might guide people towards behavior that is not necessarily beneficial for them, unintentionally or even purposefully steering them away from better choices and alternatives. Consequently, it is imperative to develop new policy frameworks and ethical guidelines to govern the development and deployment of such AI platforms.
The goal of this Research Topic is to (1) provide an understanding of the landscape of online deception, (2) highlight how AI is well suited to provide personalized privacy and online-deception awareness, and (3) detail how to develop responsible and ethical applied AI solutions. To this end, we welcome theoretical and applied submissions that address, but are not limited to, the following subtopics:
1. Types and mechanisms of deception: phishing, fake news, AI-generated fake videos, unethical persuasive technologies, and unethical behavior (by the system, designer, user, or company).
2. Psychological factors predicting susceptibility to deception: trust, cognitive biases, social influence, vulnerability, personalization for malicious purposes, and psychological/social impacts (e.g., manipulation, social exclusion, threats to autonomy and identity).
3. AI solutions: chatbots and recommender systems as awareness tools, solutions for detecting and mitigating AI bias, AI-enabled online security companions, and frameworks for developing responsible AI.
4. Mitigation: ethical and social implications of AI-powered security solutions; legal and policy instruments such as risk management, governance, and ethical and policy frameworks.
5. Applications and case studies of online deception.
Keywords: online deception, manipulation, bias, privacy, ethics
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.