Research Topic

Trustworthy Machine Learning

About this Research Topic

Recent studies have shown that machine learning (ML) models can be deliberately fooled, evaded, misled, and stolen. These findings have profound security and privacy implications, especially when ML is applied to critical domains such as autonomous driving, surveillance systems, and disease diagnosis. Additionally, recent studies have revealed potential societal biases in ML models, where the models learn inappropriate correlations between their final predictions and sensitive attributes such as gender and race. Without properly quantifying and reducing the reliance on such correlations, broad adoption of ML models can have the inadvertent effect of magnifying stereotypes. To allow wide deployment of ML and enable pro-social outcomes, we need trustworthy ML systems that resist attacks from strong adversaries, protect user privacy, and produce fair decisions.
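As a concrete illustration of what "quantifying the reliance on such correlations" can mean in practice, below is a minimal sketch that computes the demographic parity gap, one common fairness measure: the difference in positive-prediction rates across groups defined by a sensitive attribute. The variable names and the toy data are illustrative assumptions, not part of this call.

```python
# Minimal sketch: quantifying one notion of reliance on a sensitive
# attribute via the demographic parity gap. All names and data here
# are illustrative assumptions, not part of this Research Topic.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between the two
    groups encoded in `sensitive` (0/1). A gap of 0 means the prediction
    rate does not depend on the sensitive attribute."""
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: a classifier that approves ~70% of group 0 but only
# ~40% of group 1 should show a gap of roughly 0.3.
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=10_000)
y_pred = np.where(sensitive == 0,
                  rng.random(10_000) < 0.7,
                  rng.random(10_000) < 0.4).astype(int)
print(f"demographic parity gap: {demographic_parity_gap(y_pred, sensitive):.3f}")
```

Demographic parity is only one of several competing fairness criteria (equalized odds and calibration are others), which is precisely why the framework-level consensus sought below matters.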

We focus on recent research and future directions in trustworthy machine learning (ML), within the broader study of security, robustness, privacy, and fairness in ML. We aim to bring together experts from the ML, security, formal verification, natural language processing, and computer vision communities to clarify the foundations of trustworthy ML. We hope to develop a rigorous understanding of the vulnerabilities and biases inherent to machine learning, and to develop the fundamental tools, metrics, and methods to understand, explain, and mitigate them. We seek consensus on a rigorous framework for formulating the trustworthy machine learning problem and for characterizing the properties that ensure the security, privacy, and fairness of ML-based systems. Finally, we hope to point out important directions for future work and cross-community collaborations.

Possible submission topics include:

• Test-time attacks on ML: adversarial examples in digital or physical settings (see the sketch after this list)
• Training-time attacks on ML, e.g., data poisoning attacks
• Exploitable bugs in ML systems
• New algorithms to enhance ML robustness against various adversarial attacks
• Formal verification of ML systems against safety or fairness specifications
• New algorithms and theories for uncovering and understanding biases in ML systems
• New algorithms and architectures explicitly designed to reduce bias and improve fairness of ML systems
• New datasets to measure and mitigate bias and improve diversity
• Model stealing
• Interpretable machine learning
• Privacy in machine learning
• AI forensics
• Federated learning
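For the first topic above, the sketch below shows a canonical test-time attack, the fast gradient sign method (FGSM) of Goodfellow et al. (2015), which perturbs an input by a small signed-gradient step to increase the classifier's loss. The model, inputs, and epsilon here are placeholder assumptions for illustration, not a submission-ready attack.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015) as an example of a
# test-time attack. The model, inputs, and epsilon are placeholder
# assumptions; this is a sketch, not a submission-ready attack.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb `x` by epsilon in the direction that increases the loss,
    producing an adversarial example within an L-infinity ball."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a throwaway linear "image classifier".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)    # batch of fake images in [0, 1]
y = torch.randint(0, 10, (8,))  # fake labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation bounded by epsilon
```

Defenses against such attacks (e.g., adversarial training and certified robustness) fall under the robustness and formal verification topics listed above.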



Keywords: Machine Learning, Security, Robustness, Privacy, Fairness


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


About Frontiers Research Topics

With their unique mixes of varied contributions from Original Research to Review Articles, Research Topics unify the most influential researchers, the latest key findings and historical advances in a hot research area! Find out more on how to host your own Frontiers Research Topic or contribute to one as an author.

Topic Editors


Submission Deadlines

10 January 2021 Abstract
10 May 2021 Manuscript

Participating Journals

Manuscripts can be submitted to this Research Topic via the following journals:


