Our study focuses on the fine-tuning and use of AI chatbots to generate software artefacts (code, tests, requirements, etc.). In particular, our object of study is user context and intent, which we leverage to mitigate inaccurate chatbot responses caused by misunderstandings of context. We aim to fine-tune and use various language models (LMs), applying prompt engineering and prompt structuring as remedies. While these techniques have shown promise in enhancing chatbot performance across a variety of tasks, their effectiveness has been demonstrated predominantly on natural language processing tasks, such as question answering, with limited exploration of code generation.
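To make prompt structuring concrete, the sketch below shows one way explicit user context and intent can be surfaced in a code-generation prompt rather than left for the model to infer. It is a minimal illustration only: the section headers (Context, Intent, Task), function name, and example task are assumptions chosen for this sketch, not artifacts of our study or of any particular LLM's API.

```python
def build_structured_prompt(context: str, intent: str, task: str) -> str:
    """Assemble a code-generation prompt with explicit context and intent
    sections, so the model does not have to infer them from the task alone.
    The header labels are illustrative, not a fixed standard."""
    return (
        "### Context\n"
        f"{context}\n\n"
        "### Intent\n"
        f"{intent}\n\n"
        "### Task\n"
        f"{task}\n"
    )

if __name__ == "__main__":
    # Hypothetical usage: the resulting string would be sent to the chosen
    # chatbot or LLM endpoint as the user message.
    prompt = build_structured_prompt(
        context="Python 3.11 service; pytest is the test runner.",
        intent="Generate a unit test, not production code.",
        task="Write a pytest test for a function slugify(title: str) -> str.",
    )
    print(prompt)
```

Structuring of this kind separates the stable facts about the user's environment (context) from what the user wants the response to accomplish (intent), which is the distinction our study evaluates.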
Our research aims to bridge this gap by comprehensively evaluating the impact of various prompt programming and structuring techniques on code generation, thereby offering insights into improving LLMs' performance and understanding across a broader spectrum of tasks.