This project develops a unified theoretical framework for regularization in
neural networks, encompassing classical shrinkage methods, the methods proposed in this work, adaptive regularization, and novel deep learning-specific techniques. We provide a rigorous mathematical analysis of each method's properties, including its behavior under different loss functions
(regression and classification), its computational complexity, and its theoretical guarantees.
A key contribution is the extension of statistical shrinkage methods to deep
learning settings, with proofs of their efficacy in controlling overfitting. The performance of the proposed methods will be investigated through a Monte Carlo simulation study.
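For concreteness, a generic shrinkage-regularized training objective of the kind such a framework covers can be sketched as follows; the loss $\ell$, penalty $\Omega$, and regularization weight $\lambda$ here are illustrative placeholders, not the specific definitions introduced later in the work:
\[
\hat{\theta} \;=\; \arg\min_{\theta}\; \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f_{\theta}(x_i),\, y_i\bigr) \;+\; \lambda\,\Omega(\theta),
\qquad
\Omega(\theta) \in \bigl\{\,\|\theta\|_2^2 \ (\text{ridge}),\ \|\theta\|_1 \ (\text{lasso}),\ \dots\,\bigr\},
\]
where $f_{\theta}$ denotes the neural network with parameters $\theta$ and $\{(x_i, y_i)\}_{i=1}^{n}$ the training data.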