Embedded Systems

Hardware Accelerator and Neural Network Co-Optimization for Ultra-Low-Power Audio Processing Devices

by Christoph Gerum, Adrian Frischknecht, Paul Palomero Bernardo, Tobias Hald, Konstantin Lübeck, and Oliver Bringmann
In 2022 25th Euromicro Conference on Digital System Design (DSD), pages 1-8, 2022.

Keywords: Machine Learning, Neural Networks, AutoML, Neural Architecture Search


The increasing spread of artificial neural networks does not stop at ultra-low-power edge devices. However, neural networks very often have high computational demands and require specialized hardware accelerators to ensure that a design meets power and performance constraints. Manually optimizing neural networks together with their corresponding hardware accelerators can be very challenging. This paper presents HANNAH (Hardware Accelerator and Neural Network seArcH), a framework for automated and combined hardware/software co-design of deep neural networks and hardware accelerators for resource- and power-constrained edge devices. The optimization approach uses an evolution-based search algorithm, a neural network template technique, and analytical KPI models for the configurable UltraTrail hardware accelerator template in order to find an optimized neural network and accelerator configuration. We demonstrate that HANNAH can find suitable neural networks with minimized power consumption and high accuracy for different audio classification tasks such as single-class wake-word detection, multi-class keyword detection, and voice activity detection, outperforming the related work.
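The core idea of an evolution-based co-search as described in the abstract can be illustrated with a minimal sketch: a population of joint network/accelerator configurations is scored by an analytical KPI model and iteratively refined through selection and mutation. All parameter names, bounds, and the toy KPI model below are hypothetical stand-ins, not HANNAH's actual search space or cost models.

```python
import random

# Hypothetical search-space bounds; the real HANNAH space covers neural
# network templates and UltraTrail accelerator parameters (sketch only).
SPACE = {
    "num_layers": (2, 8),
    "channels": (8, 64),
    "array_width": (4, 16),  # stand-in for a MAC-array size parameter
}

def random_config(rng):
    """Sample a random point in the joint HW/SW configuration space."""
    return {k: rng.randint(lo, hi) for k, (lo, hi) in SPACE.items()}

def mutate(cfg, rng):
    """Perturb one parameter by +/-1, clipped to its bounds."""
    child = dict(cfg)
    key = rng.choice(list(SPACE))
    lo, hi = SPACE[key]
    child[key] = min(hi, max(lo, child[key] + rng.choice([-1, 1])))
    return child

def kpi_model(cfg):
    """Toy analytical KPI stand-in: bigger networks score 'more accurate'
    but draw more power; a wider array also costs power."""
    accuracy = 0.7 + 0.006 * cfg["num_layers"] + 0.002 * cfg["channels"]
    power = (0.1 * cfg["num_layers"] + 0.02 * cfg["channels"]
             + 0.05 * cfg["array_width"])
    return accuracy, power

def fitness(cfg):
    """Scalarize the accuracy/power trade-off into one objective."""
    acc, power = kpi_model(cfg)
    return acc - 0.05 * power

def evolve(generations=100, pop_size=12, seed=0):
    """Elitist evolutionary loop: keep the best half, refill with mutants."""
    rng = random.Random(seed)
    pop = [random_config(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(rng.choice(parents), rng) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

In the actual framework, the fitness evaluation would involve training the candidate network and querying the analytical accelerator models, which is far more expensive than this closed-form toy objective.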