Spiking Neural Networks (SNNs) are considered a novel class of Artificial Neural Networks that more closely resemble information processing in human brains. Information in these networks is exchanged as "events" (pulses) rather than continuous tensors of largely redundant discrete values, which introduces temporal dynamics at the local computing unit.
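To make the event-driven dynamics concrete, the following minimal sketch shows a leaky integrate-and-fire (LIF) neuron, a common SNN building block: it integrates weighted input spikes into a membrane potential that decays over time and emits an output spike only when a threshold is crossed. All parameter values are illustrative placeholders, not the models used in our lab.

```python
import numpy as np

def lif_neuron(spike_train, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0, w=0.4):
    """Minimal leaky integrate-and-fire neuron (illustrative parameters)."""
    v = v_reset
    out = np.zeros_like(spike_train)
    decay = np.exp(-dt / tau)        # exponential membrane leak per time step
    for t, s in enumerate(spike_train):
        v = v * decay + w * s        # leak, then integrate the input event
        if v >= v_thresh:            # threshold crossing -> output event
            out[t] = 1.0
            v = v_reset              # hard reset after firing
    return out

# Sparse input spike train: the information lives in the event timing.
rng = np.random.default_rng(0)
inputs = (rng.random(100) < 0.3).astype(float)
print(int(lif_neuron(inputs).sum()), "output spikes from", int(inputs.sum()), "input spikes")
```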
Independently, a novel type of vision sensor, the so-called event-based camera, was introduced about a decade ago. These cameras perceive visual information as changes in local illumination and report them as events/spikes to a computing system.
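For illustration, each pixel of such a sensor independently emits an event (x, y, timestamp, polarity) whenever the log-intensity at that pixel changes by more than a contrast threshold. The sketch below uses a generic event layout; the exact encoding and threshold vary by sensor vendor and are assumptions here.

```python
import math
from dataclasses import dataclass

# Generic event tuple; the exact encoding is vendor-specific (assumption).
@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t_us: int      # timestamp in microseconds
    polarity: int  # +1: brightness increase, -1: decrease

def maybe_event(x, y, t_us, log_i_prev, log_i_now, threshold=0.2):
    """Emit an event only if the log-intensity change at a pixel exceeds
    the contrast threshold; otherwise the pixel stays silent."""
    delta = log_i_now - log_i_prev
    if abs(delta) < threshold:
        return None  # no change worth reporting -> no redundant output
    return Event(x, y, t_us, 1 if delta > 0 else -1)

print(maybe_event(10, 20, 1_000, math.log(100), math.log(140)))
```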
SNNs and event cameras are a natural match for low-latency real-world information-processing systems. In my lab, we investigate the training of medium-sized SNNs for real-world object detection in engineered environments, such as a robotic arm interacting with human co-workers, or autonomous mobile robots making decisions in real time based on visual input.
In this NAISS medium-size computing-call proposal, we will train SNNs on existing data sets of real-world objects that generate time-varying spatio-temporal event signatures. Several such data sets exist, ranging from autonomous driving scenarios (publicly available) to external traffic observations (public and additional lab-internal data sets) to human toolshop objects (created in-lab, to be published).
We require GPU computing resources beyond our lab's availability to evaluate various large network topologies and to optimize the networks' dynamics parameters under environmental constraints (such as motion speed, illumination, etc.). We anticipate training a large number of medium-sized SNNs for different object categories, and each of these networks under a variety of parameter settings.
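The compute demand grows multiplicatively with the sweep dimensions, as the following sketch illustrates. All counts below are illustrative placeholders, not the actual figures of this proposal.

```python
from itertools import product

# Illustrative sweep dimensions (placeholders, not the proposal's numbers).
topologies   = ["shallow", "mid", "deep"]   # candidate network topologies
categories   = range(10)                    # object categories
motion       = ["slow", "fast"]             # motion-speed regimes
illumination = ["indoor", "outdoor"]        # lighting conditions

runs = list(product(topologies, categories, motion, illumination))
print(f"{len(runs)} independent training runs")  # 3 * 10 * 2 * 2 = 120 here
```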