Research Topic

Safe and Trustworthy Machine Learning

About this Research Topic

Machine learning (ML) provides incredible opportunities to answer some of the most important and difficult questions in a wide range of applications. However, ML systems often face a major challenge in the real world: the conditions under which a system is deployed can differ from those under which it was developed. Recent examples have shown that ML methods are highly susceptible to minor changes in image orientation, minute amounts of adversarial corruption, and bias in the data. This susceptibility to test-time shift is a major hurdle to the universal acceptance of ML solutions in high-regret applications.
In this Article Collection, we encourage authors to contribute research that offers viable solutions to the trust, safety, and security issues faced by ML methods. Examples include the adversarial robustness of ML systems across domains (e.g., adversarial attacks, defenses, and property verification) and robust representation learning (e.g., adversarial losses for learning embeddings).
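To make the "minute amounts of adversarial corruption" concrete, the sketch below implements the fast gradient sign method (FGSM), a classic evasion attack in which a single signed-gradient step produces a visually minor perturbation that can flip a classifier's prediction. This is a minimal illustration only, assuming PyTorch is available; model, x, y, and the budget eps are hypothetical placeholders, not artifacts of this collection.

    import torch
    import torch.nn as nn

    def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                    eps: float = 0.03) -> torch.Tensor:
        """Return a copy of x perturbed within an L-infinity ball of radius eps."""
        # Make a leaf copy of the input so we can take gradients w.r.t. it.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # One signed-gradient step: a tiny perturbation chosen to maximize
        # the classification loss, then clipped back to the valid pixel range.
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
        return x_adv.detach()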

Topics of interest include but are not limited to:
- Adversarial attacks (e.g., evasion, poisoning, and inversion) and defenses
- Robustness certification and specification verification techniques
- Representation learning, knowledge discovery, and model generalizability
- Interplay between model robustness and model compression (e.g., network pruning and quantization)
- Robust optimization methods and (computational) game theory (a minimal adversarial training sketch follows this list)
- Explainable and fair machine learning models via adversarial learning techniques
- Privacy and security in machine learning systems
- Trustworthy machine learning

We welcome diverse article types, including Original Research, Review, and Perspective articles.
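As referenced in the robust optimization item above, adversarial training is the canonical min-max formulation: an inner maximization crafts worst-case inputs within a perturbation budget, and an outer minimization updates the model against them. The sketch below reuses the fgsm_attack helper above as the inner step; model, optimizer, and the eps budget are again illustrative assumptions, not a prescribed method.

    import torch
    import torch.nn as nn

    def adversarial_training_step(model: nn.Module,
                                  optimizer: torch.optim.Optimizer,
                                  x: torch.Tensor, y: torch.Tensor,
                                  eps: float = 0.03) -> float:
        """One min-max step: fit the model to worst-case inputs in the eps ball."""
        # Inner maximization: craft adversarial inputs (see fgsm_attack above).
        x_adv = fgsm_attack(model, x, y, eps)
        # Outer minimization: update the weights against those inputs.
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()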


Keywords: safe machine learning, trustworthy machine learning, adversarial attack, property verification, robust representation, robustness certification, specification verification, model robustness, model compression, robust optimization, game theory, fair machine learning, privacy, security, adversarial learning


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


About Frontiers Research Topics

With their unique mix of varied contributions, from Original Research to Review Articles, Research Topics unify the most influential researchers, the latest key findings, and historical advances in a hot research area. Find out more on how to host your own Frontiers Research Topic or contribute to one as an author.

Topic Editors


Submission Deadlines

Manuscript: 30 June 2020

Participating Journals

Manuscripts can be submitted to this Research Topic via the following journals:


