In certain learning tasks, there are obvious symmetries in the data and/or the learning task itself. For example, a graph does not change under a relabeling of its nodes. There is reason to believe that a machine learning system should benefit from exploiting these symmetries, or equivariances.
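To make the graph example concrete, the following sketch (an illustrative example only; the layer, dimensions, and names are chosen for this illustration and are not part of the project code) checks numerically that a simple message-passing layer is permutation equivariant: relabeling the nodes of the input graph simply relabels the rows of the output features.

```python
# Minimal sketch of permutation equivariance for a graph layer (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(adjacency, features, weights):
    """One graph layer: aggregate neighbor features, then apply a shared linear map."""
    return np.tanh(adjacency @ features @ weights)

n_nodes, d_in, d_out = 5, 3, 4
adjacency = rng.integers(0, 2, size=(n_nodes, n_nodes))
adjacency = np.maximum(adjacency, adjacency.T)        # undirected graph
features = rng.normal(size=(n_nodes, d_in))
weights = rng.normal(size=(d_in, d_out))

# A relabeling of the nodes, expressed as a permutation matrix P.
perm = rng.permutation(n_nodes)
P = np.eye(n_nodes)[perm]

out = gnn_layer(adjacency, features, weights)
out_relabeled = gnn_layer(P @ adjacency @ P.T, P @ features, weights)

# Equivariance: f(P A P^T, P X) == P f(A, X)
print(np.allclose(out_relabeled, P @ out))  # True
```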
The aim of this project is to understand how equivariant deep neural network models function from a mathematical perspective. Of particular interest is how their training differs from that of their non-equivariant counterparts.
Our interests are mainly theoretical, and the networks we will train will typically be relatively small. Performing the experiments on the clusters is still appropriate, since they may be impractical to run on our local machines (e.g. when an experiment is repeated many times).