Research Topic

Machine Learning at the Edge - Engineering advances enabling intelligence at the computing edge.

About this Research Topic

In recent years, we have, knowingly or unknowingly, become reliant on machine learning and deep learning technology. Applications range from everyday uses such as photo sorting, content customization, and facial and speech recognition to more momentous use cases such as self-driving cars and medical diagnosis. Most of these models are too large and compute-intensive to run on any of the hundreds of billions of smaller microcontrollers in service today. These microcontrollers, which contain a small CPU and a small amount of RAM, are embedded in everyday consumer, medical, automotive, and industrial devices, where they typically orchestrate the steps of a state machine based on sensor inputs. The sensor data is consumed and then overwritten, as there isn't enough power or storage budget to transmit it to the cloud. This means the ML models tap into only a small fraction of the sensor data that is available. It also means there is an entire realm of possibility in making these controllers smarter by themselves and creating an intelligent fabric of such devices by leveraging their data and capabilities, otherwise known as the Internet of Things (IoT). This possibility of bringing intelligence to the "edge" has sparked innovation across engineering and science disciplines.

To bring machine learning to the edge, innovation and technological advances span everything from more efficient computer architectures to smarter communication protocols. Energy and compute are both scarce at the edge. This constraint has led researchers to develop models with smaller memory footprints, along with an arsenal of quantization and compression tricks. Compressed models, especially binary models, have prompted discussion drawing parallels to neuroscience and information processing in the brain. Much work has been done on efficient computer architectures and dataflows that reduce data movement, resulting in lower latency and energy consumption. Start-ups have been funded to build analog and other novel-device compute engines for deep learning. Models that are aware of energy budgets and hardware architecture are being proposed. Communication and network protocols that enable distributed learning and inference are being assessed. Industry and academia have started to interweave machine-learning-based tiling and loop-unrolling schemes into existing compiler pipelines. In this Research Topic, we hope to capture these engineering advances, which span more or less the entire electrical and computer engineering paradigm, from analog computing architectures to communication protocols. We solicit papers on advances in machine learning at the edge. Potential topics include, but are not limited to:

• Machine learning accelerators
• Deep learning model quantization and compression
• Compiler design for machine learning
• Machine learning based compilers
• Dataflow techniques for mapping machine learning workloads
• Distributed computing
• Smart sensor designs
• Analog machine learning compute engines
• Novel memory and/or compute designs for machine learning
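To make the memory-footprint constraint above concrete, the following is a minimal illustrative sketch (not taken from any submitted work) of uniform 8-bit post-training quantization, one of the simplest tricks for shrinking a model so its weights fit in a microcontroller's RAM. All names here are hypothetical; real toolchains add refinements such as per-channel scales and zero points.

```python
def quantize_int8(weights):
    """Map float weights to int8 values with a single per-tensor scale.

    Storing int8 instead of float32 cuts weight memory by 4x, at the
    cost of a bounded rounding error (at most half a quantization step).
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

# Toy example: four weights quantized and recovered.
weights = [0.51, -1.27, 0.03, 0.98]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Reconstruction error stays within one quantization step.
assert all(abs(w - r) <= scale for w, r in zip(weights, recovered))
```

In practice, frameworks for edge deployment apply the same idea per layer (or per channel) and pair it with quantized integer arithmetic in the kernels, so inference never materializes the float weights at all.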


Keywords: machine learning, deep learning, micro-controllers, sensor data, accelerators, distributed computing


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


About Frontiers Research Topics

With their unique mixes of varied contributions from Original Research to Review Articles, Research Topics unify the most influential researchers, the latest key findings and historical advances in a hot research area! Find out more on how to host your own Frontiers Research Topic or contribute to one as an author.


Submission Deadlines

31 May 2020 Manuscript

Participating Journals

Manuscripts can be submitted to this Research Topic via the following journals:


