ORIGINAL RESEARCH article

Front. High Perform. Comput.

Sec. Architecture and Systems

Volume 3 - 2025 | doi: 10.3389/fhpcp.2025.1570210

FlexNPU: A Dataflow-aware Flexible Deep Learning Accelerator for Energy-Efficient Edge Devices

Provisionally accepted
Arnab Raha*, Deepak A Mathaikutty, Shamik Kundu, Soumendu K Ghosh
  • Intel (United States), Santa Clara, United States

The final, formatted version of the article will be published soon.

This paper introduces FLEXNPU, a Flexible Neural Processing Unit that adopts agile design principles to enable versatile dataflows and enhance energy efficiency. Unlike conventional convolutional neural network accelerator architectures that adhere to fixed dataflows (such as input, weight, output, or row stationary) for transferring activations and weights between storage and compute units, our design enables adaptable dataflows of any type through configurable software descriptors. Because data movement costs considerably outweigh compute costs from an energy perspective, this dataflow flexibility allows us to optimize data movement per layer for minimal data transfer and energy consumption, a capability unattainable in fixed-dataflow architectures. To further enhance throughput and reduce energy consumption in the FLEXNPU architecture, we propose a novel sparsity-based acceleration logic that exploits fine-grained sparsity in both the activation and weight tensors to bypass redundant computations, thus optimizing the convolution engine within the hardware accelerator. Extensive experimental results underscore a significant improvement in the performance and energy efficiency of FLEXNPU compared to existing DNN accelerators.
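The sparsity idea described above (skipping a multiply-accumulate whenever either operand is zero) can be illustrated with a minimal software sketch. This is an assumption-laden illustration of the general two-sided sparsity technique, not the paper's actual hardware acceleration logic; the function name `sparse_dot` and the bitmap-intersection formulation are chosen here for exposition only.

```python
import numpy as np

def sparse_dot(acts, wts):
    """Illustrative two-sided sparsity skip: perform a MAC only where
    both the activation and the weight are nonzero.

    Returns the dot product and the number of MACs actually executed.
    NOTE: a hypothetical sketch, not FLEXNPU's hardware logic.
    """
    acts = np.asarray(acts, dtype=float)
    wts = np.asarray(wts, dtype=float)
    # Bitmaps marking nonzero elements of each tensor (analogous to the
    # fine-grained sparsity metadata a hardware engine would keep)
    act_mask = acts != 0
    wt_mask = wts != 0
    # A MAC contributes to the result only where BOTH bitmaps are set
    pairs = act_mask & wt_mask
    macs = int(pairs.sum())
    result = float(np.dot(acts[pairs], wts[pairs]))
    return result, macs
```

With sparse inputs, the executed MAC count drops well below the dense count while the result is unchanged, which is the source of the throughput and energy gains the abstract attributes to sparsity acceleration.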

Keywords: Deep Neural Network Accelerator, flexible data flow, sparsity acceleration, energy efficiency, Edge intelligence

Received: 03 Feb 2025; Accepted: 04 Jun 2025.

Copyright: © 2025 Raha, Mathaikutty, Kundu and Ghosh. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Arnab Raha, Intel (United States), Santa Clara, United States

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.