Mixed-precision algorithms play a pivotal role in addressing the energy challenge in high-performance computing. The constraint of not exceeding 20 MW of power consumption drives exascale system design, prompting hardware architects to optimise by redesigning components, adopting low-hardware-overhead accelerators, and exploring technologies such as 3D memory stacking. At the same time, most algorithms perform all computations in double precision even when a lower working precision in parts of the algorithm would not affect the accuracy of the final result, which wastes computational energy. From the algorithmic side, the goal is therefore to reduce energy-to-solution by adapting algorithmic schemes to use the available hardware in an energy- and compute-optimal way. This can, for instance, be achieved by strategically lowering the precision in non-critical parts of an algorithm while maintaining a user-defined accuracy target. Alternatively, the energy consumption of standard solvers in scientific computing can be lowered by applying dynamic voltage and frequency scaling (DVFS) to the CPU, the memory, or both.
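The proposal does not fix a particular mixed-precision scheme; a classic instance of lowering precision in non-critical parts while preserving the target accuracy is mixed-precision iterative refinement. The Python/NumPy sketch below is illustrative only (the function name, tolerance, and test matrix are assumptions, not project deliverables): the expensive factorisation runs in single precision, while residuals and solution updates are accumulated in double precision.

# Minimal sketch of mixed-precision iterative refinement for A x = b.
# Illustrative only; not the project's chosen solver or implementation.
import numpy as np
import scipy.linalg as la

def mixed_precision_solve(A, b, tol=1e-12, max_iter=20):
    """Solve A x = b with an fp32 LU factorisation and fp64 refinement."""
    A32 = A.astype(np.float32)
    lu, piv = la.lu_factor(A32)                           # fp32 factorisation (cheap, non-critical)
    x = la.lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                                     # fp64 residual (critical for accuracy)
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        d = la.lu_solve((lu, piv), r.astype(np.float32))  # fp32 correction solve
        x += d.astype(np.float64)                         # fp64 update
    return x

# Example usage on a well-conditioned random test matrix:
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500)) + 500 * np.eye(500)
b = rng.standard_normal(500)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))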
In this project, we aim to investigate the energy consumption of standard solvers in scientific computing and, based on this, propose strategies for improving both time-to-solution and energy-to-solution. We would also like to explore the correlation between these two metrics. Alongside the software release, we will produce a detailed best-practice guide showcasing our findings and making recommendations to the scientific computing community. Additionally, we aim to submit a high-impact research paper to a reputable conference.
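One way to collect energy-to-solution alongside time-to-solution on Intel Linux systems is to read the RAPL counters exposed through the powercap sysfs interface. The sketch below is an assumption about the measurement setup rather than the project's actual tooling; the sysfs paths are the standard powercap locations, and the solver being measured is a placeholder.

# Minimal sketch, assuming a Linux system that exposes Intel RAPL counters
# via /sys/class/powercap (reading energy_uj may require elevated privileges
# on recent kernels). The solver passed to measure() is a placeholder.
import time
from pathlib import Path

RAPL_PKG = Path("/sys/class/powercap/intel-rapl:0")       # package-0 energy domain

def read_energy_uj():
    """Read the cumulative package energy counter in microjoules."""
    return int((RAPL_PKG / "energy_uj").read_text())

def measure(solve, *args):
    """Return (result, time-to-solution [s], energy-to-solution [J]) for one solve."""
    max_uj = int((RAPL_PKG / "max_energy_range_uj").read_text())
    e0, t0 = read_energy_uj(), time.perf_counter()
    result = solve(*args)
    t1, e1 = time.perf_counter(), read_energy_uj()
    delta_uj = (e1 - e0) % max_uj                         # handle counter wrap-around
    return result, t1 - t0, delta_uj * 1e-6

# Example usage with the mixed-precision solver sketched above:
# x, tts, ets = measure(mixed_precision_solve, A, b)
# print(f"time-to-solution: {tts:.3f} s, energy-to-solution: {ets:.1f} J")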