Federated Learning (FL) offers a promising approach to privacy-preserving model training by keeping sensitive data decentralized. However, FL faces significant challenges stemming from heterogeneous (non-IID) data distributions and from security threats posed by malicious (Byzantine) clients.
This project investigates novel schemes for selecting clients and filtering their contributions to the global model in federated learning.
Building upon recent advances in learnable aggregation weights, we model the client selection and model update tasks as a bi-level optimization problem whose decision variables encode both the importance assigned to each client and the global model parameters; one way such a formulation might be written is sketched below. The primary aim of this project is to evaluate the scalability and stability of such frameworks across different operational regimes, with the ultimate goal of developing a robust, scalable algorithm that outperforms state-of-the-art defenses in highly heterogeneous federated learning environments.
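As an illustrative sketch (the notation and the validation-based outer objective are our assumptions, not specified by the project), such a bi-level problem can take the form

$$
\min_{\mathbf{p} \in \Delta^{K-1}} \; F\bigl(\theta^{*}(\mathbf{p})\bigr)
\quad \text{s.t.} \quad
\theta^{*}(\mathbf{p}) \in \arg\min_{\theta} \sum_{k=1}^{K} p_{k}\, f_{k}(\theta),
$$

where $f_k$ is the local empirical loss of client $k$, $p_k$ is its learnable aggregation weight (with $\Delta^{K-1}$ the probability simplex over the $K$ clients), and $F$ is an outer objective such as the loss on a small trusted validation set. Under this reading, robustness corresponds to the outer problem driving the weights of Byzantine or low-quality clients toward zero, while the inner problem fits the global parameters $\theta$ to the weighted mixture of client losses.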