Scalable optimization for machine learning
Dnr:

NAISS 2024/5-352

Type:

NAISS Medium Compute

Principal Investigator:

Mikael Johansson

Affiliation:

Kungliga Tekniska högskolan

Start Date:

2024-07-01

End Date:

2025-07-01

Primary Classification:

10105: Computational Mathematics

Abstract

The emergence of big data has caused a dramatic shift in the operating regime for optimization algorithms. For over a decade, focus has shifted from interior-point methods to (stochastic) first-order algorithms to achieve better scalability. However, these methods still fall short in many modern applications. Increasingly often, data is spread across geographically dispersed locations, and problem dimensions are huge, both in terms of decision-vector sizes and the number of data points used. Communication, not computation, is becoming the bottleneck! In our research, we develop distributed optimization algorithms for machine learning. The research ranges from relatively minor enhancements of existing optimization algorithms (e.g., better step-size policies) to the development of entirely new distributed optimization algorithms and approaches. We are also interested in developing techniques for reducing communication overhead and for balancing the workload across processing nodes to accelerate convergence (i.e., reduce training times). This project continues and extends our previous project; see our activity report.
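
To make the communication-reduction idea above concrete, the following is a minimal, illustrative sketch (in Python/NumPy) of top-k gradient sparsification in a synchronous distributed SGD step, where each worker transmits only its k largest-magnitude gradient entries instead of the full gradient. This is one standard technique of the kind the abstract refers to, not the project's own algorithm; all names, dimensions, and parameters here are hypothetical.

    # Illustrative sketch only (not the project's actual method):
    # top-k gradient sparsification in synchronous distributed SGD.
    # Each worker sends k of dim gradient entries per round, so the
    # uplink traffic shrinks by a factor of dim/k in this toy setup.
    import numpy as np

    def sparsify_top_k(grad, k):
        """Keep the k largest-magnitude entries; return (indices, values)."""
        idx = np.argpartition(np.abs(grad), -k)[-k:]
        return idx, grad[idx]

    def aggregate(sparse_grads, dim, num_workers):
        """Average the workers' sparse gradients into one dense update."""
        avg = np.zeros(dim)
        for idx, vals in sparse_grads:
            avg[idx] += vals / num_workers
        return avg

    rng = np.random.default_rng(0)
    dim, num_workers, k, step_size = 1000, 4, 50, 0.1
    x = rng.normal(size=dim)                                      # shared decision vector
    targets = [rng.normal(size=dim) for _ in range(num_workers)]  # per-worker data

    for it in range(200):
        # Each worker computes a local gradient (here: of ||x - t||^2 / 2)
        # and communicates only its k largest-magnitude entries.
        sparse_grads = [sparsify_top_k(x - t, k) for t in targets]
        x -= step_size * aggregate(sparse_grads, dim, num_workers)

    # The minimizer of the sum of the local objectives is the mean target.
    print("distance to optimum:", np.linalg.norm(x - np.mean(targets, axis=0)))

In practice, top-k sparsification of this kind is usually combined with an error-feedback (memory) term that accumulates the entries dropped in earlier rounds, since plain sparsification alone can lose convergence guarantees.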