AUTHOR=Zhang Li , Krestinskaya Olga , Fouda Mohammed E. , Eltawil Ahmed M. , Salama Khaled Nabil TITLE=Quantized convolutional neural networks: a hardware perspective JOURNAL=Frontiers in Electronics VOLUME=6 YEAR=2025 URL=https://www.frontiersin.org/journals/electronics/articles/10.3389/felec.2025.1469802 DOI=10.3389/felec.2025.1469802 ISSN=2673-5857 ABSTRACT=With the rapid development of machine learning, Deep Neural Networks (DNNs) exhibit superior performance on complex problems such as computer vision and natural language processing compared with classic machine learning techniques. At the same time, the rise of the Internet of Things (IoT) and edge computing creates a demand for executing these complex tasks on resource-constrained devices. As the name suggests, deep neural networks are sophisticated models with complex structures and millions of parameters, which overwhelm the capacity of IoT and edge devices. To facilitate deployment, quantization, one of the most promising methods, reduces memory usage and computational complexity by converting both the parameters and the data flow of a DNN model into formats with shorter bit-widths. Correspondingly, dedicated hardware accelerators have been developed to further boost the execution efficiency of DNN models. In this work, we focus on Convolutional Neural Networks (CNNs) as a representative class of DNNs and conduct a comprehensive survey of various quantization and quantized-training methods. We also discuss various hardware accelerator designs for quantized CNNs (QCNNs). Based on this review of both algorithms and hardware designs, we provide general software-hardware co-design considerations. Finally, we discuss open challenges and future research directions for both the algorithms and the corresponding hardware designs of quantized neural networks (QNNs).
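
To make the quantization idea mentioned in the abstract concrete, the following is a minimal sketch (in Python/NumPy, not taken from the paper) of uniform affine quantization, one common way weights and activations are mapped to shorter bit-width integer formats. All function names and parameters here are illustrative assumptions, not the authors' method.

import numpy as np

def quantize_uniform(x, num_bits=8):
    # Map a float tensor onto unsigned integers of the given bit-width
    # using affine (scale + zero-point) quantization.
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = max((x.max() - x.min()) / (qmax - qmin), 1e-8)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover an approximation of the original float values.
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.randn(3, 3).astype(np.float32)  # stand-in for a conv kernel
q, s, z = quantize_uniform(weights)
approx = dequantize(q, s, z)  # matches `weights` to within one quantization step

Storing q (8-bit) instead of 32-bit floats cuts memory roughly fourfold, and integer arithmetic on q is what dedicated accelerators exploit; the surveyed methods differ mainly in how scale, zero-point, and bit-width are chosen and trained.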