Trust in AI tools is a key concern given the rapid market adoption of opaque "black box" deep learning models such as GPT-4. In light of these trends, the TAGIT project intends to systematically evaluate a number of open-source Large Language Models (LLMs) and other foundation models (i.e., self-supervised deep neural networks). These evaluations aim to comparatively discuss the use of different AI tools at each stage of the development process of Cyber-Physical Systems (CPS). The TAGIT project also aims to study the perceived trustworthiness of these tools and the perceived quality of their outputs (e.g., text translations, AI-generated user stories). Finally, TAGIT is expected to deliver a systematic mapping of existing open-source tools to open problems in the AI field and to discuss the impact of Prompt Engineering (PE) in making CPS more robust, ethical, and lawful.