About this Research Topic
We focus on recent research and future directions in trustworthy machine learning (ML), encompassing the broader study of security, robustness, privacy, and fairness in ML. We aim to bring together experts from the ML, security, formal verification, natural language processing, and computer vision communities to clarify the foundations of trustworthy ML. We hope to develop a rigorous understanding of the vulnerabilities and biases inherent in machine learning, and to develop the fundamental tools, metrics, and methods to understand, explain, and mitigate them. We seek consensus on a rigorous framework for formulating the trustworthy machine learning problem and for characterizing the properties that ensure the security, privacy, and fairness of ML-based systems. Finally, we hope to point out important directions for future work and cross-community collaboration.
Possible submission topics include:
• Test-time attacks on ML: e.g., adversarial examples for digital or physical attacks
• Training-time attacks on ML: e.g., data poisoning attacks
• Exploitable bugs in ML systems
• New algorithms to enhance ML robustness against various adversarial attacks
• Formal verification of ML systems against safety or fairness specifications
• New algorithms and theories for uncovering and understanding biases in ML systems
• New algorithms and architectures explicitly designed to reduce bias and improve fairness of ML systems
• New datasets to measure and improve bias/diversity
• Model stealing
• Interpretable machine learning
• Privacy in machine learning
• AI Forensics
• Federated learning
Keywords: Machine Learning, Security, Robustness, Privacy, Fairness
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.