Digital Twins (DTs) are virtual replicas of physical systems, enabling real-time monitoring, decision support, and scenario simulation across various domains. The integration of artificial intelligence (AI) methods, particularly time series forecasting, significantly enhances the predictive capabilities of DTs. Forecasting enables DTs to anticipate future outcomes based on historical data, aiding in critical applications such as traffic management, healthcare, and industrial operations. However, as AI-driven forecasting models become increasingly complex, their black-box nature poses challenges to interpretability, making it difficult for system operators to understand and trust their outputs.
This research aims to bridge this gap by integrating forecasting algorithms with explainability frameworks tailored to DTs. Through model-driven approaches and interpretable forecasting techniques, the project seeks to enhance both the predictive performance and the usability of AI-driven DTs. The findings will contribute to the development of robust, transparent, and effective forecasting solutions, ultimately improving the decision-making capabilities of DTs in real-world applications.