Neural network architectures often do not make efficient use of GPU resources, due to a combination of batch size, architecture design, and computational complexity. We will investigate how variations in neural network architecture affect the overall efficiency and utilization of GPU resources. The PyTorch and CuPy frameworks will mainly be used.
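A minimal sketch of the kind of measurement such a study might start from: timing forward passes of a model at different batch sizes to estimate throughput. The toy model, image size, and batch sizes below are illustrative assumptions, not part of the project plan; a real study would sweep actual architectures and also record GPU utilization and memory.

```python
import time
import torch
import torch.nn as nn

def benchmark(model, batch_size, n_iters=10,
              device="cuda" if torch.cuda.is_available() else "cpu"):
    """Measure average forward-pass throughput (samples/s) at a given batch size."""
    model = model.to(device).eval()
    x = torch.randn(batch_size, 3, 64, 64, device=device)
    with torch.no_grad():
        for _ in range(3):                 # warm-up iterations
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()       # GPU kernels launch asynchronously
        start = time.perf_counter()
        for _ in range(n_iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return batch_size * n_iters / elapsed  # samples per second

# Hypothetical toy architecture, used only to make the sketch self-contained.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
for bs in (1, 8, 32):
    print(f"batch {bs}: {benchmark(model, bs):.1f} samples/s")
```

Larger batches typically amortize kernel-launch and memory-transfer overhead, which is one of the effects the project would quantify across architectures.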
Name of main supervisor: Amir Aminifar
Affiliation of main supervisor: Lund University