This project focuses on improving the performance of AI models in edge–cloud collaborative computing, where efficient coordination between resource-constrained edge devices and powerful cloud systems is essential. Achieving low latency, high accuracy, and energy efficiency in such distributed environments requires extensive experimentation with model architectures, partitioning strategies (i.e., deciding where to split a model between device and server; see the sketch below), and optimization techniques.
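To make "partitioning strategies" concrete, the following is a minimal sketch, assuming a simple sequential PyTorch model split at a single layer index; the network and split point are illustrative placeholders, not the project's actual architecture.

# Minimal sketch (illustrative, not the project's codebase) of layer-wise
# model partitioning: the network is split at a chosen index, with the early
# layers standing in for the edge device and the rest for the cloud.
import torch
import torch.nn as nn

# Hypothetical example network; the models under study may differ.
layers = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)

def partition(model: nn.Sequential, split: int):
    """Split a sequential model into an edge part and a cloud part."""
    children = list(model.children())
    edge = nn.Sequential(*children[:split])
    cloud = nn.Sequential(*children[split:])
    return edge, cloud

edge_part, cloud_part = partition(layers, split=4)

x = torch.randn(1, 3, 32, 32)       # example input captured on the edge device
intermediate = edge_part(x)         # computed locally on the edge
# In a real deployment the intermediate tensor would be serialized and
# transmitted to the cloud; here both parts run in the same process.
output = cloud_part(intermediate)
print(output.shape)                 # torch.Size([1, 10])

In practice the split index is itself a tuning parameter, traded off against the bandwidth and latency cost of transmitting the intermediate activation, which is one reason the search space requires extensive experimentation.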
The research involves computationally intensive tasks, including training, fine-tuning, and evaluating deep learning models under diverse deployment scenarios. These workloads are inherently resource-intensive: each configuration must be trained and evaluated repeatedly at scale, and the resulting datasets, checkpoints, and model outputs demand substantial storage.
Access to high-performance computing resources is therefore critical to enable systematic exploration and validation of the proposed methods. Without sufficient computational capacity and storage, it would not be feasible to conduct experiments at the scale and depth necessary to achieve meaningful and reliable results.