Federated learning (FL) offers a promising way to preserve data privacy while enabling collaborative AI model training across multiple clients. However, as the adoption of FL expands across applications, the demand for computational resources, particularly CPU and memory, has increased significantly. Additionally, existing FL frameworks often fall short in scalability, robustness, and performance when deployed in geographically distributed environments.
In this project, we focus on evaluating, optimizing, and extending FEDn, a federated learning framework designed to address these challenges. Our goal is to test FEDn with advanced deep learning models, particularly transformers, to assess its ability to handle large-scale federated training efficiently. Furthermore, we aim to develop a web-based interface that improves the accessibility and usability of FEDn, making it easier for researchers and practitioners to manage and monitor federated learning experiments in real time.