This project aims to develop and evaluate novel decentralized optimization algorithms tailored for personalized machine learning (PML). In contrast to traditional federated learning, where a global model is trained collaboratively across devices, personalized machine learning focuses on training models that adapt to the unique data distribution of each individual client (user or device). This is crucial in scenarios where user preferences, environments, or behaviours differ significantly.
Motivation and Background:
As data privacy and communication constraints become more critical, decentralized and federated learning have gained popularity. However, many of these approaches optimize a shared global objective, which may not perform well for all users, especially in heterogeneous settings. Personalization addresses this by enabling each client to train a model that better reflects its local data characteristics, improving both user experience and predictive performance. My research focuses on designing decentralized algorithms that:
- require no central server,
- allow peer-to-peer updates,
- achieve provable convergence guarantees, and
- incorporate mechanisms for client-level personalization.
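To make these design goals concrete, the sketch below illustrates one simple way such an algorithm could look: each client mixes its model with the average of its peers' models (a gossip step, so no central server is needed) and then takes a gradient step on its own local objective (the personalization step). This is a minimal illustrative example, not the proposed algorithm itself; the function name, the mixing parameter `alpha`, and the quadratic local losses in the demo are all hypothetical choices made for the sketch.

```python
import numpy as np

def personalized_gossip_step(models, neighbors, grads, lr=0.1, alpha=0.5):
    """One round of a hypothetical decentralized personalized update.

    Each client i mixes its own model with the average of its peers'
    models (peer-to-peer gossip averaging), then takes a gradient step
    on its local loss. `alpha` controls personalization: alpha=1 keeps
    the model fully local; alpha=0 fully trusts the neighborhood average.
    """
    new_models = []
    for i, w in enumerate(models):
        peer_avg = np.mean([models[j] for j in neighbors[i]], axis=0)
        mixed = alpha * w + (1 - alpha) * peer_avg   # gossip mixing step
        new_models.append(mixed - lr * grads[i])      # local personalization step
    return new_models

# Demo: 3 clients on a fully connected graph, each with a different
# local optimum t_i for the toy loss f_i(w) = 0.5 * ||w - t_i||^2.
targets = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
models = [np.zeros(1) for _ in targets]
for _ in range(200):
    grads = [models[i] - targets[i] for i in range(3)]  # gradient of f_i
    models = personalized_gossip_step(models, neighbors, grads, alpha=0.8)
```

In this toy run the clients settle at distinct models ordered like their local optima, rather than collapsing to one shared global model: each client's solution is pulled toward its own data while still being regularized by its neighbors, which is exactly the client-level personalization behaviour the project targets.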