AUTHOR=Liu Zhuoyang, Xu Feng
TITLE=Interpretable neural networks: principles and applications
JOURNAL=Frontiers in Artificial Intelligence
VOLUME=Volume 6 - 2023
YEAR=2023
URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2023.974295
DOI=10.3389/frai.2023.974295
ISSN=2624-8212
ABSTRACT=In recent years, the rapid development of deep learning has brought great progress in computer vision, image recognition, pattern recognition, and speech signal processing. However, because of the black-box nature of deep neural networks, one cannot explain the parameters inside a deep network or why it performs its assigned tasks so well. The interpretability of neural networks has therefore become a research hotspot in deep learning, covering a wide range of topics in speech and text signal processing, image processing, differential equation solving, and other fields, and the definition of interpretability differs subtly across these fields. In this paper, interpretable neural network (INN) methods are divided into two directions: model-decomposition neural networks and semantically interpretable neural networks. The former constructs an interpretable neural network by converting the analytical model of a conventional method into the layers of a neural network, combining the interpretability of the conventional model-based method with the powerful learning capability of the network. This type of INN is further classified into subtypes depending on the kind of model it is derived from, i.e., mathematical models, physical models, and other models. The second type is the interpretable network with visual semantic information for user understanding. Its basic idea is to use visualization of the whole or part of the network structure to assign semantic information to that structure; it includes convolutional-layer output visualization, decision-tree extraction, semantic graphs, and related techniques. This type of method mainly uses human visual logic to explain the structure of a black-box neural network, so it is a post-network-design method that assigns interpretability to a black-box network structure after the fact, as opposed to the pre-network-design approach of model-based INNs, which designs an interpretable network structure beforehand. This paper reviews recent progress in these areas as well as various application scenarios of INNs, and discusses existing problems and future development directions.