About this Research Topic

Abstract Submission Deadline 30 September 2022
Manuscript Submission Deadline 30 November 2022

In recent decades, deep learning has been applied to a variety of fields such as healthcare, transportation, education, agriculture, and security surveillance. Developing good deep learning models requires high-quality training data as well as capable hardware infrastructure to carry out a variety of processes. Executing these heavy computational operations creates large energy demands. Moreover, cloud deployment models require substantial storage space, which can further increase energy consumption. Compressing deep learning models can mitigate this issue.

Indeed, a possible solution for reducing the energy consumption of deep learning models is to run them on low-energy edge devices such as the Raspberry Pi, Jetson Nano, and mobile phones. This can be achieved by compressing deep learning models while maintaining their performance. If supercomputers and desktop PCs can be replaced with low-energy devices, at least during real-world inference, energy consumption can be reduced considerably.

The goal of this Research Topic is to advance state-of-the-art AI solutions toward resource-constrained edge computing devices. Manuscript contributions should aim to reduce the energy demands of real-time intelligent applications so that they can run on these low-energy edge devices, eventually offering functionality in IoT (Internet of Things) environments. One pathway is the compression of CNN (Convolutional Neural Network) models, which can transform models of hundreds of MBs into compressed, optimized models of only a few KBs. Power consumption is reduced because the hardware needs to perform fewer calculations, and the smaller model footprint saves storage space as well as power.
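To make the size reduction concrete, the sketch below shows post-training 8-bit quantization of a single weight tensor, one of the standard compression techniques in this area. The layer shape and values are synthetic assumptions for illustration, not a method prescribed by this Topic; real toolchains (e.g., in deep learning frameworks) apply the same idea per layer across a whole model.

```python
import numpy as np

# Hypothetical float32 weight tensor of one small CNN layer (synthetic values).
rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 128)).astype(np.float32)

def quantize_int8(w):
    """Affine 8-bit quantization: w is approximated by scale * (q - zero_point)."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0
    zero_point = round(-w_min / scale)
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float32 tensor from the 8-bit representation."""
    return (q.astype(np.float32) - zero_point) * scale

q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)

print(weights.nbytes / q.nbytes)  # 4.0: uint8 storage is 4x smaller than float32
print(float(np.abs(weights - restored).max()))  # reconstruction error, bounded by one quantization step
```

Storing `q` plus two scalars instead of the float32 tensor gives a 4x reduction for this layer; combined with pruning or factorization, much larger overall reductions are possible.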

This Research Topic welcomes papers that focus on compressing CNNs and segmentation models (FCN, SegNet, UNet, etc.) applied in various domains such as agriculture, medicine, and transport. Authors may use metaheuristic approaches such as Genetic Algorithms, Differential Evolution, and Particle Swarm Optimization (PSO), as well as mathematical approaches such as binarization, quantization, and matrix factorization, to compress deep learning models, reducing device storage and power consumption while delivering comparable accuracy.
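As a toy illustration of the metaheuristic route, the sketch below uses a simple (1+1) evolutionary search, a minimal stand-in for GA/DE/PSO, to pick a binary channel mask that trades a synthetic accuracy proxy against model size. The channel count, importance scores, and fitness function are all illustrative assumptions, not a published method.

```python
import random

N_CHANNELS = 16
random.seed(0)
# Stand-in for per-channel saliency scores measured on a real model.
importance = [random.random() for _ in range(N_CHANNELS)]

def fitness(mask):
    """Reward kept-channel importance (accuracy proxy), penalize model size."""
    acc_proxy = sum(imp for imp, keep in zip(importance, mask) if keep)
    size_penalty = sum(mask) / N_CHANNELS
    return acc_proxy - 4.0 * size_penalty

def mutate(mask, rate=0.1):
    """Flip each bit of the mask with a small probability."""
    return [1 - m if random.random() < rate else m for m in mask]

# (1+1) evolutionary search: keep the candidate if it is at least as fit.
best = [1] * N_CHANNELS
for _ in range(500):
    candidate = mutate(best)
    if fitness(candidate) >= fitness(best):
        best = candidate

print(sum(best), "of", N_CHANNELS, "channels kept")
```

In a real pipeline the fitness function would evaluate the pruned network's validation accuracy and measured size or energy, and population-based methods like PSO or Differential Evolution would replace this single-candidate search.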

Subtopics of interest for publication include but are not limited to the following:
• Design and development of energy-efficient deep learning models (requiring less storage and computation) for real-time applications.
• Deployment of deep learning models on energy-constrained edge devices such as Raspberry Pi, Jetson Nano, and mobile phones.
• Development of approaches for energy-efficient compressed deep learning models.
• Design and deployment of deep learning models for low-energy IoT devices.
• Efficient use of computational resources for executing deep learning models.
• Architectures and models that work with less training data to reduce energy usage in remote applications.

Keywords: Edge devices, CNN, Compression, Acceleration, Deep learning, AI solutions


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.
