Note for Contributors: although strongly encouraged, submission of a manuscript summary is optional, and not a prerequisite for submitting a full manuscript.
Recent years have witnessed a marked increase in the use of artificial intelligence (AI) methods – mostly based on machine learning (ML) algorithms – across almost every field of knowledge and application. The potential impact of these methods on daily life raises important questions that extend beyond the usual quantitative metrics used to assess ML models. Characteristics such as fairness, transparency, accountability, trustworthiness, responsibility, and explainability or interpretability are more important than ever in such applications. However, as the impact of these models has grown in both breadth and depth, so too has their complexity, with current state-of-the-art ML models commonly involving hundreds of billions of internal parameters. This combination of sheer scale and transformative (yet potentially harmful) applications makes it more important than ever to design powerful analytical tools for understanding and diagnosing ML models. Visualization techniques are known to be effective in the analysis of large-scale data, and can help both practitioners and users gain insight into these characteristics of AI applications. Thus, this Research Topic focuses on new concepts, methods, and techniques for assessing the diverse space of intrinsic but unseen characteristics of ML models through visual representation and interaction. These are active areas of research, with various results recently presented and published in high-impact conferences and journals. Nevertheless, there remains plenty of room for further development – especially in combining these characteristics to assess their joint effects, in coping with the rapid growth in model scale, and in addressing the recent accelerated mass adoption of AI applications in scenarios as diverse as research in the biological and human sciences and policy-making by governmental institutions.
This Research Topic aims to highlight research and perspectives from both academia and industry on how visualization techniques can improve AI applications by representing data related to the characteristics mentioned above. Contributions are welcome on all aspects of the use of visualization and interactive visual analytics for building, improving, and deploying ML models, as well as for interpreting and explaining different types of ML models in order to improve fairness, transparency, accountability, trustworthiness, and other characteristics relevant to all steps of the ML pipeline.
This Research Topic accepts surveys, novel results, and position papers. Topics of interest include, but are not limited to:
- Conceptual explorations (e.g., frameworks, taxonomies, or ontologies) on the use of visualization for improving access to and trust in ML services.
- Novel visual abstractions and interactive techniques for interpreting and explaining different types of ML models.
- ML model engineering, improvement, and deployment supported by interactive visualization.
- Visual and interactive ML-driven tools and applications for specific knowledge domains and problem scenarios.
- Explicit visual and/or interactive mechanisms for providing guidance and provenance regarding concepts such as trustworthiness, transparency, accountability, etc., for all steps in the ML pipeline.
Prospective authors may contact the Topic Editors with any questions relating to this Research Topic at carla@inf.ufrgs.br.
Keywords:
visualization, artificial intelligence, machine learning, explainability, fairness
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.