This project focuses on developing secure and robust aggregation techniques in Federated Learning (FL), particularly in adversarial settings where malicious clients attempt to manipulate global model updates. Our research investigates Byzantine-robust aggregation rules, adversarial defenses, and privacy-preserving mechanisms to enhance the security of FL.
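To illustrate the kind of Byzantine-robust aggregation rule this line of work studies, the sketch below shows two well-known examples, coordinate-wise median and coordinate-wise trimmed mean. The function names, the NumPy-based setup, and the toy attack are assumptions chosen for illustration only, not this project's actual implementation.

```python
# Minimal sketch of Byzantine-robust aggregation rules (illustrative only;
# names and setup are assumptions, not this project's implementation).
import numpy as np

def coordinate_wise_median(client_updates):
    """Aggregate by taking the median of each coordinate.

    client_updates: list of 1-D NumPy arrays, one flattened update per client.
    """
    stacked = np.stack(client_updates)      # shape: (num_clients, dim)
    return np.median(stacked, axis=0)       # robust to a minority of outliers

def trimmed_mean(client_updates, trim_ratio=0.1):
    """Coordinate-wise trimmed mean: drop the largest and smallest values
    per coordinate before averaging, bounding the influence of a minority
    of malicious clients."""
    stacked = np.sort(np.stack(client_updates), axis=0)
    k = int(trim_ratio * stacked.shape[0])
    trimmed = stacked[k:stacked.shape[0] - k] if k > 0 else stacked
    return trimmed.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 0.1, size=5) for _ in range(8)]
    malicious = [np.full(5, 100.0) for _ in range(2)]  # crude poisoning updates
    updates = honest + malicious
    print("plain mean:  ", np.mean(np.stack(updates), axis=0))   # skewed by attackers
    print("median:      ", coordinate_wise_median(updates))      # near the honest mean
    print("trimmed mean:", trimmed_mean(updates, trim_ratio=0.2))
```

In this toy run the plain mean is pulled far from the honest updates by the two poisoned clients, while the median and trimmed-mean rules stay close to the honest average, which is the basic robustness property such aggregation rules aim to provide.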