Distributed machine learning (DML) enables the scalable processing of large datasets and the training of complex models by distributing computations across multiple nodes or machines. However, communication between the distributed agents and a central parameter server becomes a bottleneck for training efficiency and learning performance, especially when the communication links run over wireless networks. This project is part of an effort to find methods that increase the communication and energy efficiency of DML systems, for example through data compression.
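
As a rough illustration of the kind of data compression such methods build on, the sketch below shows top-k gradient sparsification in NumPy, where each worker transmits only the k largest-magnitude gradient entries to the parameter server. This is a minimal, hypothetical example for intuition only; the function names, the choice of top-k sparsification, and the parameter values are assumptions, not the project's actual method.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector;
    return their indices and values (what a worker would transmit)."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def reconstruct(idx, values, shape):
    """Server side: rebuild a dense (approximate) gradient from the sparse update."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

# Illustrative use: compress a 1,000-element gradient to its 10 largest entries,
# reducing the payload sent over the (e.g. wireless) link by roughly 100x.
rng = np.random.default_rng(0)
g = rng.standard_normal(1000)
idx, vals = topk_sparsify(g, k=10)
g_hat = reconstruct(idx, vals, g.shape)
```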