Research Topic

Human-Interpretable Machine Learning

About this Research Topic

In the last couple of decades, the increasing availability of large volumes of data generated by both humans and machines (the so-called "Big Data" phenomenon) has posed unprecedented challenges which, in turn, have propelled remarkable advances in machine learning (ML) and, more generally, artificial intelligence (AI).
The application of ML and AI has proven extremely effective at solving business-critical tasks in several domains: image recognition in healthcare, failure prediction in manufacturing, and credit risk assessment in finance, to name a few.
However, ML/AI models are often perceived as "black boxes": they are given inputs and, hopefully, produce the desired outputs.
There are many circumstances, in fact, where human interpretability is crucial to understand (i) why a model outputs a certain prediction on a given instance (interpretability), (ii) which adjustable features of that instance contribute the most to that prediction (explainability), and (iii) how to modify the instance so as to change the prediction made by the model (actionability). This need is also formally recognized in the European Union's General Data Protection Regulation (GDPR), which states that any business using personal data for automated processing must be able to explain how the system makes decisions (see Article 22 of the GDPR).
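To make these notions concrete, consider the following minimal sketch (not part of this call; the synthetic dataset, the random-forest model, and the perturbation grid are purely illustrative assumptions). It estimates per-feature contributions to a single prediction by perturbing one feature at a time (explainability) and then searches for a small change to the instance that flips the model's prediction (actionability).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train an opaque ("black-box") model on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0].copy()                                  # the instance to be explained
p = model.predict_proba(x.reshape(1, -1))[0, 1]  # its predicted probability

# Explainability: estimate each feature's contribution by replacing it with the
# dataset mean and measuring how much the predicted probability moves.
for j in range(X.shape[1]):
    x_ref = x.copy()
    x_ref[j] = X[:, j].mean()
    p_ref = model.predict_proba(x_ref.reshape(1, -1))[0, 1]
    print(f"feature {j}: contribution ~ {p - p_ref:+.3f}")

# Actionability: brute-force search for a small change that flips the prediction.
original = model.predict(x.reshape(1, -1))[0]
for j in range(X.shape[1]):
    for delta in (-2.0, -1.0, 1.0, 2.0):         # illustrative perturbation grid
        x_new = x.copy()
        x_new[j] += delta
        if model.predict(x_new.reshape(1, -1))[0] != original:
            print(f"changing feature {j} by {delta:+.1f} flips the prediction")
            break
```

Real systems would, of course, rely on principled attribution and counterfactual methods rather than this brute-force perturbation, but the sketch conveys what each notion asks of a model.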

The aim of this Research Topic is to further advance the state of the art in human-interpretable machine learning. More specifically, the goal is to explore novel machine learning techniques that focus not only on the generalizability of a trained model's predictive accuracy but also on the interpretability of the predictions that the model produces.
Currently, at least two approaches seem viable and promising, depending on the stage at which interpretability comes into play: at training time or after the model has been learned. The former should factor an interpretable component into the actual learning process (i.e., within the optimization), whilst the latter should derive a surrogate, inherently interpretable model that closely approximates the learned target model. Both approaches raise many new research challenges: for example, in the first approach the notion of "interpretability" must be formalized so that it can be plugged seamlessly into the optimization process; in the second, the surrogate model must be feasible to compute.
Of course, contributions addressing other critical research questions will be more than welcome.
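As a purely illustrative sketch of the second (post-hoc) approach mentioned above, the snippet below trains an assumed black-box model (a gradient-boosted classifier) and then fits a shallow decision tree as an inherently interpretable surrogate of it, measuring fidelity as the fraction of test points on which the two models agree. All modelling choices here are assumptions made for illustration, not prescriptions of this Research Topic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the (black-box) target model as usual.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# 2. Fit the surrogate on the black box's *predictions* rather than the true labels,
#    so that it approximates the learned model itself, not the underlying task.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Fidelity: how faithfully does the surrogate mimic the black box on unseen data?
fidelity = np.mean(surrogate.predict(X_test) == black_box.predict(X_test))
print(f"surrogate fidelity: {fidelity:.3f}")
print(export_text(surrogate))  # the surrogate's decision rules, in readable form
```

The surrogate's depth makes the interpretability vs. fidelity trade-off explicit: a deeper tree mimics the black box more closely but becomes harder to read.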

Authors are expected to contribute to the advancement of human-interpretable machine learning with novel submissions on the following subjects:
- Methodology and formalization of interpretable and explainable ML/AI systems;
- Feature interpretation, explanation, and recommendation;
- Transparency in ML/AI systems: ethical and legal aspects, fairness issues;
- Evaluation of interpretable and explainable ML/AI systems: interpretability vs. complexity and interpretability vs. effectiveness trade-offs;
- Impact of the lack of interpretability and explainability in ML/AI systems;
- Causality and inference in ML/AI systems;
- Trustworthiness and robustness of ML/AI systems;
- Human factors in ML/AI systems;
- Semantic interpretability of ML/AI systems;
- Psychological acceptability of ML/AI systems.

Topic Editor Dr. Fabrizio Silvestri is employed by Facebook AI. Topic Editor Dr. Fabio Pinelli is a tenure-track Assistant Professor at IMT Lucca. All other Topic Editors declare no competing interests with regard to the Research Topic subject.


Keywords: ML/AI interpretability, ML/AI explainability, ML/AI actionability, ML/AI transparency, human-in-the-loop ML/AI


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.




Submission Deadlines

Abstract: 05 March 2021
Manuscript: 01 April 2021

Participating Journals

Manuscripts can be submitted to this Research Topic via the following journals:

