This project explores data augmentation and dropout techniques in geometric deep learning, with a particular emphasis on learning equivariance properties in neural networks.
Two challenges motivate this work: limited training data can hurt generalization, and equivariant models can overfit despite their constrained architectures. To address the first, this project investigates data augmentation strategies tailored to geometric data, which artificially increase dataset variability while preserving the underlying geometric structure. To address the second, dropout techniques are adapted and analyzed for equivariant models, which may otherwise become overly sensitive to the training data.
We implement several geometric data augmentation methods, including rotations, reflections, and random perturbations of graph and manifold data, and evaluate their impact on learned equivariant representations. We also design dropout variants compatible with geometric neural network layers and assess their effectiveness at regularizing models without breaking equivariance; both components are sketched below.
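To make the augmentation pipeline concrete, the following is a minimal sketch of how rotations, reflections, and random perturbations can be applied to node coordinates stored as an (N, 3) tensor. The function names and the coordinate representation are illustrative assumptions, not a specific library's API; sampling an orthogonal matrix via QR covers both rotations and reflections in one step.

```python
import torch

def random_orthogonal(dim: int = 3) -> torch.Tensor:
    """Sample a random orthogonal matrix via QR decomposition.

    The result has determinant +1 or -1, so it is either a rotation
    or a reflection, covering both augmentation families.
    """
    a = torch.randn(dim, dim)
    q, r = torch.linalg.qr(a)
    # Fix column signs so Q is sampled uniformly rather than biased by QR.
    q = q * torch.sign(torch.diagonal(r))
    return q

def augment(coords: torch.Tensor, noise_std: float = 0.01) -> torch.Tensor:
    """Apply a random rotation/reflection plus small Gaussian jitter.

    `coords` is assumed to be an (N, 3) tensor of node positions;
    edge connectivity is left untouched, so the graph structure
    (and hence the underlying geometry) is preserved.
    """
    q = random_orthogonal(coords.shape[-1])
    return coords @ q.T + noise_std * torch.randn_like(coords)
```

Because only the embedding coordinates are transformed while the connectivity is fixed, an equivariant model should map the augmented input to a correspondingly transformed output, which is exactly the property the evaluation probes.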
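For the dropout variants, the key design constraint is that the dropout mask must commute with the group action. One common way to achieve this, sketched below under the assumption that node features split into invariant scalar channels of shape (N, C_s) and rotation-equivariant vector channels of shape (N, C_v, 3), is to drop entire channels with a mask shared across all nodes and all vector components. All names and shapes here are hypothetical illustrations, not the project's actual layer definitions.

```python
import torch

def equivariant_dropout(scalars: torch.Tensor,
                        vectors: torch.Tensor,
                        p: float = 0.2,
                        training: bool = True):
    """Channel-wise dropout that commutes with the group action.

    Sharing one mask entry per channel across all nodes makes the mask
    commute with node permutations; dropping all 3 components of a
    vector channel together makes it commute with rotations applied
    to the last axis, so equivariance is preserved.
    """
    if not training or p == 0.0:
        return scalars, vectors
    keep = 1.0 - p
    # One Bernoulli mask per scalar channel, rescaled so that the
    # expected activation is unchanged (inverted dropout).
    s_mask = torch.bernoulli(torch.full((1, scalars.shape[1]), keep)) / keep
    # One mask per vector channel, broadcast over nodes and components.
    v_mask = torch.bernoulli(torch.full((1, vectors.shape[1], 1), keep)) / keep
    return scalars * s_mask, vectors * v_mask
```

In contrast, standard elementwise dropout would zero individual vector components independently, which does not commute with rotations and would break equivariance; this is the failure mode the design above avoids.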