Extension of our neural network framework HANNAH
Bachelor’s Thesis / Master’s Thesis / Student Research Project
The chair maintains its own framework, HANNAH (Hardware Accelerator and Neural Network seArcH), for sensor processing tasks (voice activity detection, keyword spotting, human activity detection, atrial fibrillation detection) using different neural networks (TC-ResNet, SincNet, BranchyNet, WaveNet, LSTMs, …). HANNAH can extract different features (spectrogram, MFCC, mel features), quantize weights, biases, and activations using Nervana Distiller, perform advanced noise handling, and much more. The framework is built on PyTorch, PyTorch Lightning, and Nervana Distiller. For training the neural networks we have a cluster with 160 GeForce GTX 1080 Ti GPUs as well as local machines equipped with Tesla P100 GPUs.
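To give a feel for the kind of feature extraction mentioned above, here is a minimal sketch of log-power spectrogram computation. All parameters (16 kHz sample rate, 400-sample window, 160-sample hop) are illustrative assumptions, not HANNAH's actual configuration, and NumPy stands in for the PyTorch pipeline:

```python
import numpy as np

# Minimal sketch of spectrogram feature extraction (illustrative parameters,
# not HANNAH's actual configuration): frame the signal, window, FFT, log-power.
def log_spectrogram(x, n_fft=400, hop=160):
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # power spectrum per frame
    return np.log(spec + 1e-6)                       # log compression

audio = np.random.randn(16000)   # 1 s of dummy audio at an assumed 16 kHz
feats = log_spectrogram(audio)   # shape: (frames, n_fft // 2 + 1)
```

MFCC and mel features add a mel filterbank (and, for MFCC, a DCT) on top of this power spectrum, but the framing/FFT stage above is the part that differs between the floating-point software path and a fixed-point hardware path.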
- Implementation and analysis of new feature preprocessing methods
- Hardware features: HANNAH uses floating point for feature extraction, but our hardware accelerator uses a fixed-point Fourier transform. This leads to different behaviour of the same neural network. The task is to implement the feature extraction in HANNAH the way it is done in hardware.
- Integration of new datasets from sensors (accelerometer, electrocardiogram, …)
- Implementation and analysis of new neural networks.
- Compression of the neural network with run-length encoding, Huffman coding, singular value decomposition, a new convolutional kernel structure, and more ;)
- Extend our neural network deployment to support different hardware accelerator parameters. Currently only one accelerator configuration is supported; the deployment should be made more general.
- GUI for easy configuration and visualization of the results.
- Development of new convolution algorithms (Intra-Kernel Weight Sharing)
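As a starting point for the fixed-point topic above, the mismatch can be emulated in software by quantizing the signal before the FFT. This sketch assumes a signed Q1.15 (16-bit) format; the accelerator's actual word width and format may differ:

```python
import numpy as np

# Hypothetical sketch of emulating a fixed-point input stage in software,
# assuming a signed Q1.15 format (an assumption, not the accelerator's spec).
def to_fixed_point(x, frac_bits=15):
    scale = 2 ** frac_bits
    # Round to the nearest representable value and saturate to the int16 range.
    q = np.clip(np.round(x * scale), -2 ** 15, 2 ** 15 - 1)
    return q / scale  # dequantized value, as the float pipeline would see it

x = np.array([0.1, -0.5, 0.999])
xq = to_fixed_point(x)
# xq deviates slightly from x; this rounding error propagates through the
# Fourier transform and changes the features the network is evaluated on.
```

Applying `to_fixed_point` to the signal (and, in a fuller emulation, to the FFT twiddle factors and intermediate results) lets the floating-point framework reproduce the hardware's behaviour.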
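For the compression topic, one of the listed techniques, singular value decomposition, can be sketched in a few lines: a weight matrix is replaced by two low-rank factors. The matrix size and rank here are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch of compressing a layer's weight matrix via truncated SVD;
# the layer shape (64x128) and rank (8) are assumptions for illustration.
def svd_compress(W, rank):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # shape (out, rank)
    B = Vt[:rank, :]             # shape (rank, in)
    return A, B                  # W is approximated by A @ B

W = np.random.randn(64, 128)
A, B = svd_compress(W, rank=8)
# parameter count drops from 64*128 = 8192 to 8*(64+128) = 1536
```

The interesting research question is how much rank (and hence accuracy) can be traded for memory on the target networks; run-length and Huffman coding attack the same storage problem losslessly instead.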
We also welcome your own ideas.
- You should have basic knowledge of Python.
- Knowledge of neural networks, PyTorch, quantization, and signal processing is beneficial but not required.