Research Topic

Quality Assurance of Machine Learning Systems

About this Research Topic

In the past few decades, machine learning (ML) has been widely applied in many cutting-edge applications such as face recognition and autonomous driving. However, ML systems have been shown to be vulnerable: for example, a model's prediction can become incorrect when minor perturbations are added to the input. Quality assurance of ML systems is therefore essential, especially in safety- and security-critical applications. Although mature techniques such as testing and verification exist for traditional software, they are hard to adapt to ML software because of its different development paradigm. New techniques are required for the quality assurance of ML systems.
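
To make this vulnerability concrete, the sketch below (illustrative only, not part of the call) shows one well-known way of crafting such minor perturbations, the fast gradient sign method (FGSM); the Keras model, labelled batch, and epsilon value are assumptions introduced purely for the example.

```python
# Illustrative sketch only: the fast gradient sign method (FGSM,
# Goodfellow et al., 2015) crafts a small perturbation that can flip a
# trained classifier's prediction. The model, data, and epsilon below are
# assumed placeholders.
import tensorflow as tf

def fgsm_perturb(model, x, y_true, epsilon=0.01):
    """Return an adversarially perturbed copy of x (inputs assumed in [0, 1])."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        logits = model(x, training=False)
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(
                y_true, logits, from_logits=True))
    grad = tape.gradient(loss, x)
    # Step in the direction that increases the loss, bounded elementwise by epsilon.
    x_adv = x + epsilon * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)

# Hypothetical usage, assuming `model` is a trained Keras classifier that
# outputs logits and (x, y) is a labelled batch:
#   x_adv = fgsm_perturb(model, x, y, epsilon=0.01)
#   print(tf.argmax(model(x), axis=1), tf.argmax(model(x_adv), axis=1))
```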

We encourage authors to contribute research that focuses on the quality assurance of ML systems and provides potential solutions for building trustworthy, reliable, and secure ML systems, for example, adversarial attacks and defenses, testing, debugging, and robustness measurement. We also welcome empirical studies on quality issues and quality assurance in industrial ML systems.

Topics of interest include but are not limited to:
- Adversarial attacks and defenses
- Out-of-distribution analysis
- Testing and verification techniques for machine learning systems
- Test adequacy criteria of machine learning systems
- Robustness and fairness of machine learning systems
- Robustness estimation and enhancement of machine learning systems
- Security and privacy of machine learning
- Empirical experience on quality issues and solutions of machine learning systems
- Quality assurance of machine learning frameworks (e.g., TensorFlow)
- Interpretation and understanding of machine learning
- Debugging and repair of machine learning systems


Keywords: Quality Assurance, Security, Trustworthy, Testing, Attack, Defense


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

About Frontiers Research Topics

With their unique mix of varied contributions, from Original Research to Review Articles, Research Topics unite the most influential researchers, the latest key findings, and historical advances in a hot research area. Find out more about how to host your own Frontiers Research Topic or contribute to one as an author.

Topic Editors

Submission Deadlines

Manuscript: 03 August 2021

Participating Journals

Manuscripts can be submitted to this Research Topic via the following journals:
