The goal of this project is to study federated learning, i.e., training machine-learning models for large-scale problems in a distributed manner. In particular, we study how data privacy can be preserved during the communication between local nodes that the learning process requires. For this purpose, we propose an ADMM-based SVM with differential privacy. In addition, we investigate how the accuracy of the private algorithm compares to that of the non-private one, for both small and large agents. The communication between agents in the network is initially designed in a decentralized manner: no master or central agent controls the communication, and each agent communicates only with its one-hop neighbors. This design is adopted in a distributed, network-based SVM algorithm.
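To make this setup concrete, the sketch below shows one local update of a decentralized consensus-ADMM iteration for a linear SVM, with output perturbation providing differential privacy before the weights are shared with one-hop neighbors. This is a minimal illustration under stated assumptions, not the exact algorithm developed in this project: the subgradient step (in place of an exact subproblem solve), the Gaussian noise mechanism, and all parameters (rho, C, lr, noise_std) are assumptions chosen for clarity.

```python
import numpy as np

def dp_admm_svm_step(w, lam, X, y, neighbor_ws, rho=1.0, C=1.0,
                     lr=0.1, noise_std=0.1, rng=None):
    """One local update of decentralized consensus-ADMM for a linear SVM.

    w           : this agent's local weight vector
    lam         : this agent's dual variable (same shape as w)
    X, y        : this agent's private data, labels y in {-1, +1}
    neighbor_ws : weight vectors received from one-hop neighbors
    noise_std   : std of Gaussian noise added before broadcasting (DP)
    """
    rng = rng or np.random.default_rng()
    avg = np.mean(neighbor_ws + [w], axis=0)  # neighborhood consensus target

    # Subgradient of the hinge loss C * sum_i max(0, 1 - y_i <x_i, w>)
    margins = y * (X @ w)
    active = margins < 1.0
    grad = -C * (y[active, None] * X[active]).sum(axis=0)

    # L2 regularizer plus augmented-Lagrangian terms pulling w toward avg
    grad += w + lam + rho * (w - avg)
    w_new = w - lr * grad

    # Dual ascent on the local consensus constraint
    lam_new = lam + rho * (w_new - avg)

    # Output perturbation: only the noisy weights leave the agent
    w_shared = w_new + rng.normal(0.0, noise_std, size=w.shape)
    return w_new, lam_new, w_shared
```

Note that raw data never leaves an agent; only the perturbed weights w_shared are broadcast. The privacy/accuracy trade-off studied above is controlled by noise_std: larger noise strengthens privacy but degrades the learned classifier.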
We will compare several federated learning methods in terms of accuracy and CPU time. We will investigate a lower bound on the number of samples that must be labeled to achieve good performance when only a few agents communicate. Finally, we will conduct experiments to evaluate the effectiveness of the developed adaptive communication strategy and of the proposed distributed multi-agent active learning on large-scale problems.
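As a point of reference for the active-learning component, the sketch below shows margin-based uncertainty sampling for a linear SVM, where an agent queries labels for the unlabeled points closest to its current decision boundary. The selection criterion and the budget parameter are assumptions for illustration only, not the proposed multi-agent method.

```python
import numpy as np

def select_queries(w, X_unlabeled, budget):
    """Pick the `budget` unlabeled points nearest the SVM decision boundary."""
    scores = np.abs(X_unlabeled @ w)   # proxy for distance to the hyperplane
    return np.argsort(scores)[:budget] # smallest margins = most uncertain

# Example: each agent queries labels for its 10 most uncertain local points.
# idx = select_queries(w, X_pool, budget=10)
```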