Research Topic

Hardware for Artificial Intelligence

About this Research Topic

AI is now a fundamental part of the technological landscape, enabling pervasive computing and becoming an omnipresent feature of everyday life. From data centers to smartphones and voice-activated assistants, there is a compelling need for bespoke hardware that is energy-efficient and supports the increasingly multi-faceted functionality of AI systems. The field of “hardware for AI” is evolving rapidly, from custom neural network accelerators and event-based neuromorphic chips that process sensory information at the edge, to powerful AI platforms for training models in the cloud. We foresee that the diversity of architectures, hardware platforms, and applications driving AI hardware development will only continue to increase.

With this Research Topic, we wish to create a broad-ranging and comprehensive compendium of cutting-edge research in AI hardware. The main objective is to help the community broaden its understanding of what constitutes AI hardware and become aware of the sheer richness of this subject. To achieve this, we intend to use the topic to index in one location state-of-the-art hardware systems that solve very different problems in AI, presenting a well-rounded and inclusive view of hardware solutions that address the myriad aspects of the vast-ranging field of “AI”.

While industrial AI hardware has become synonymous with deep learning accelerators, an aim here is also to showcase research into future hardware platforms that exploit principles from neuroscience to change the way we think about ‘intelligent’ systems. This can include the development of new algorithms for on-chip online training under power and memory constraints similar to those found in biology, as well as the incorporation of bio-inspired information coding, communication, and electronic modeling strategies into neural networks to reduce energy consumption and improve performance.

Contributions ranging from technically focused hardware implementations to domain reviews and perspectives on the present and future of AI hardware (at all levels of abstraction, from synaptic function to cognition) are welcome.
The topic is designed to be inclusive and give voice to a broad variety of views in order to stimulate debate and cross-pollination in the community.
To this end, we welcome articles addressing the following:

• Technical contributions describing cutting-edge advances in the field of AI hardware.
a) Hardware systems for core areas of AI, such as machine learning and deep learning accelerators and vector processing units
b) Hardware for more unconventional areas, such as hyperdimensional computing, machine learning for sensing, and Bayesian computation
c) Hardware systems implementing principles of neural computation in the brain
d) Theoretical papers with a clear impact on hardware implementability

• Perspectives on the nature, past, present, and future of hardware for AI
• Reviews of hardware-related AI topics at any level of abstraction
• Constructive commentaries on specific papers or pieces of work


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


About Frontiers Research Topics

With their unique mix of varied contributions, from Original Research to Review Articles, Research Topics unify the most influential researchers, the latest key findings, and historical advances in a hot research area. Find out more on how to host your own Frontiers Research Topic or contribute to one as an author.


Submission Deadlines

09 October 2020 Abstract
16 April 2021 Manuscript

Participating Journals

Manuscripts can be submitted to this Research Topic via the following journals:


