Computing architectures for machine learning, with a focus on energy efficiency and small devices, to improve the prospects of artificial intelligence at the edge. Most of today's AI computations are performed in data centers and the cloud, far from the user. There is a growing need to move these computations closer to the user in order to reduce energy consumption, latency, and security risks. This project aims to develop new methods, algorithms, architectures, and hardware implementations for energy-efficient machine learning closer to the edge. This includes techniques such as approximate computing, quantization, and sparsity.
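
As an illustration of one of the techniques named above, the following minimal sketch (in Python with NumPy; it is an assumed example, not part of the project's codebase) shows symmetric 8-bit weight quantization, which trades a small loss in numerical precision for much cheaper storage and integer arithmetic on small devices.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    # The scale maps the largest absolute weight onto the int8 range [-127, 127].
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 representation for comparison."""
    return q.astype(np.float32) * scale

# Hypothetical usage: quantize random weights and check storage and error.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = np.max(np.abs(w - dequantize(q, scale)))
print(f"int8: {q.nbytes} bytes vs float32: {w.nbytes} bytes, max error: {err:.4f}")
```

In this sketch the int8 tensor needs a quarter of the memory of the float32 original, at the cost of a bounded rounding error per weight; sparsity and approximate computing push the same accuracy-versus-cost trade-off further.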