Using Artificial Neural Networks (ANNs) consumes a substantial amount of energy, with studies showing that using a large network 1,000 times can cost up to 2.9 kWh. There is a pressing need to research how to make future ANNs far more energy-efficient, such that (for example) they can be deployed in smartphones without needlessly taxing the environment. One opportunity is to migrate existing ANNs to a different, possibly brain-inspired, computational paradigm.
Spiking Neural Networks (SNNs) belong to the third generation of neuron models, which are more complex and expressive than those commonly used in Deep Learning (DL) today. SNNs aspire to replicate the computational power of the animal brain, where neurons communicate with each other through sparse events called spikes. Owing to this sparse communication, SNNs can be many times more energy-efficient at solving a particular task than traditional DL methods. SNNs are often executed on specialized circuits called neuromorphic systems, which have been created to mimic how the brain computes. Commercial examples of such systems include Intel Loihi and IBM TrueNorth, while academic examples include the University of Manchester's SpiNNaker and the neuromorphic systems that we develop ourselves in the research group.
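To illustrate the paradigm, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, one of the neuron models most commonly used in SNNs: the membrane potential integrates input and leaks over time, and the neuron emits a binary spike only when a threshold is crossed, which is what makes the communication sparse. All parameter values are illustrative placeholders, not taken from any particular neuromorphic system.

    import numpy as np

    # Minimal leaky integrate-and-fire (LIF) neuron: the membrane
    # potential "v" leaks toward rest, integrates the input current,
    # and emits a binary spike whenever it crosses the threshold.
    # All constants are illustrative placeholders.
    def simulate_lif(input_current, dt=1e-3, tau=20e-3,
                     v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        v, spikes = v_rest, []
        for i_t in input_current:
            v += (dt / tau) * (v_rest - v) + dt * i_t  # leaky integration
            if v >= v_thresh:       # threshold crossed: fire a spike
                spikes.append(1)
                v = v_reset         # and reset the membrane potential
            else:
                spikes.append(0)    # silent most of the time
        return np.array(spikes)

    rng = np.random.default_rng(seed=0)
    drive = rng.uniform(0.0, 150.0, size=1000)   # noisy input drive
    out = simulate_lif(drive)
    print(f"{out.sum()} spikes over {out.size} steps "
          f"({100 * out.mean():.1f}% activity)")

Even under sustained input, the neuron's output stays mostly silent; this sparsity is precisely what neuromorphic systems exploit to save energy.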
There are multiple ways of training an SNN to perform some function, such as image classification. One way is to train the network directly in the spiking domain through (often unsupervised) methods such as Spike-Timing-Dependent Plasticity (STDP), which mirrors how learning is believed to occur in the brain. Another, perhaps more prominent, strategy is to train the network as a regular Deep Neural Network (DNN) and then use clever conversion techniques to transfer the network into the spiking domain, allowing it to harvest the energy efficiency that SNNs and neuromorphic hardware offer. This strategy is very attractive, as it has been shown to yield high-quality networks whose performance rivals that of traditional DNNs. Irrespective of which method one chooses, both have one thing in common: they are very computationally demanding to train.
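As a concrete illustration of the first route, the sketch below implements the classic pair-based STDP learning window: a synapse is strengthened when the presynaptic neuron fires shortly before the postsynaptic one, and weakened when the order is reversed. The amplitudes and time constants are generic textbook-style placeholders, not values from our own work.

    import numpy as np

    # Pair-based STDP window: delta_t = t_post - t_pre (in seconds).
    # Pre-before-post (delta_t > 0) potentiates the synapse;
    # post-before-pre (delta_t < 0) depresses it. All constants are
    # illustrative placeholders.
    def stdp_delta_w(delta_t, a_plus=0.01, a_minus=0.012,
                     tau_plus=20e-3, tau_minus=20e-3):
        if delta_t > 0:
            return a_plus * np.exp(-delta_t / tau_plus)    # potentiation
        return -a_minus * np.exp(delta_t / tau_minus)      # depression

    for dt_ms in (-40, -10, 10, 40):
        print(f"t_post - t_pre = {dt_ms:+d} ms "
              f"-> dw = {stdp_delta_w(dt_ms * 1e-3):+.5f}")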
Our project aims to leverage the computational power of the Alvis and Dardel High-Performance Computing (HPC) systems to explore and investigate how to train SNN-based neural networks. We believe that this will facilitate faster exploration of our proposed algorithms, since we will be able to train (and convert) the networks in a matter of hours on the GPUs rather than days on a traditional CPU system. We plan to make use of well-known frameworks such as PyTorch or TensorFlow, as well as tools such as NEST/Brian2, to train networks such that their final state can be transferred to the spiking domain on one of our neuromorphic architectures, allowing AI/ML to be performed in a very energy-efficient way without sacrificing inference accuracy. The spiking-domain simulations will be performed on CPUs, since frameworks such as NEST/Brian2 primarily support CPUs. Finally, we apply for both Dardel and Alvis in order to quantify the improvements of our SNN implementation over running the same workloads on NVIDIA and AMD GPUs, as well as on AMD CPUs (primarily with NEST/Brian2).
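To sketch the kind of GPU-side conversion step we intend to run, the snippet below shows data-based max-norm rescaling in PyTorch, a DNN-to-SNN conversion step commonly reported in the literature: each layer's weights are rescaled by the largest activation observed on sample data, so that the converted spiking neurons (with a unit firing threshold) stay within their dynamic range. The model, data, and helper name here are hypothetical illustrations, not our actual pipeline.

    import torch
    import torch.nn as nn

    @torch.no_grad()
    def max_norm_rescale(model: nn.Sequential, data: torch.Tensor):
        # Pass 1: with the original weights, record the largest
        # (rectified) activation produced by each Linear layer.
        lambdas, x = [], data
        for layer in model:
            x = layer(x)
            if isinstance(layer, nn.Linear):
                lambdas.append(max(float(x.clamp(min=0).max()), 1e-12))
        # Pass 2: fold the scales into the weights, so each layer's
        # peak activation becomes ~1 (matching a unit spike threshold)
        # while the end-to-end function is preserved up to a scale.
        prev_lam, idx = 1.0, 0
        for layer in model:
            if isinstance(layer, nn.Linear):
                lam = lambdas[idx]
                layer.weight.mul_(prev_lam / lam)
                if layer.bias is not None:
                    layer.bias.div_(lam)
                prev_lam, idx = lam, idx + 1
        return model

    # Hypothetical toy usage on a small MLP with random inputs.
    mlp = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    max_norm_rescale(mlp, torch.rand(256, 784))

Rescaling like this keeps the firing rates of the converted layers well-conditioned, which is one reason converted networks can retain inference accuracy close to the original DNN.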