Revisiting regularization methods for deep learning: extensions and comparative insights
Dnr:

NAISS 2025/5-281

Type:

NAISS Medium Compute

Principal Investigator:

Muhammad Qasim

Affiliation:

Lunds universitet

Start Date:

2025-05-28

End Date:

2026-06-01

Primary Classification:

10106: Probability Theory and Statistics (Statistics with medical aspects at 30118 and with social aspects at 50907)

Secondary Classification:

10210: Artificial Intelligence

Tertiary Classification:

10212: Algorithms

Abstract

This project investigates a unified theoretical framework for regularization in neural networks, encompassing classical shrinkage methods, adaptive regularization, novel deep-learning-specific techniques, and newly proposed methods. We provide a rigorous mathematical analysis of each method's properties, including its behavior under different loss functions (regression and classification), its computational complexity, and its theoretical guarantees. A key contribution is the extension of statistical shrinkage methods to deep learning contexts, with proofs of their efficacy in controlling overfitting. The performance of the proposed methods will be evaluated in a Monte Carlo simulation study.
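As a minimal sketch of the kind of classical shrinkage the project builds on (not the proposal's own method), the example below applies an L2 (ridge) penalty to a least-squares objective fitted by gradient descent. All names and data here are illustrative assumptions; the point is only that the penalty term pulls the estimated weights toward zero, the overfitting-control mechanism the abstract refers to.

```python
import numpy as np

def fit_ridge_gd(X, y, lam, lr=0.1, steps=500):
    """Gradient descent on the ridge-regularized least-squares objective
    (1/n)||Xw - y||^2 + lam * ||w||^2."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(steps):
        # Data-fit gradient plus the shrinkage term 2 * lam * w
        grad = X.T @ (X @ w - y) / n + 2 * lam * w
        w -= lr * grad
    return w

# Synthetic illustrative data (a Monte Carlo study would repeat this
# over many replications and tuning values).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

w_unreg = fit_ridge_gd(X, y, lam=0.0)
w_shrunk = fit_ridge_gd(X, y, lam=1.0)
# The penalized solution has a smaller norm: shrinkage in action.
```

In a deep-learning setting the same penalty, applied to every weight matrix, is the familiar weight-decay regularizer; the proposal's contribution is a theoretical treatment of such extensions.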