We want to address the problem of applying good design principles when using LLMs for code generation, code summarization, and test generation. One of the problems with LLMs is that they fail to capture higher-level constructs, such as design patterns and architectural patterns, and that they can introduce security vulnerabilities into the code. We will analyze these problems in three steps. First, we will evaluate the ability of LLMs to generate low-level solutions, assessing how well they understand the vocabulary of the problem and design domain, and use the results as a benchmark. Second, we will evaluate Agentic AI frameworks on the same tasks. Finally, we will evaluate the ability of an Agentic RAG AI system to generate code that follows a given architecture and design pattern without introducing new vulnerabilities into the code. This will require analyzing a wide range of LLMs.
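The three evaluation stages above could be organized as a harness like the following minimal sketch. All names here are hypothetical: the generator functions are stubs standing in for real model or agent calls, retrieval is faked with a string, and the pattern-conformance and vulnerability checks are toy placeholders for real static analysis.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    stage: str
    pattern_conformant: bool
    vulnerabilities: int

def baseline_llm_generate(task: str) -> str:
    # Stub: a plain LLM completion (stage 1, the low-level baseline).
    return f"def solve():  # low-level solution for {task}\n    pass"

def retrieve_pattern_docs(pattern: str) -> str:
    # Stub: RAG retrieval of design-pattern documentation (hypothetical).
    return f"Intent and structure of the {pattern} pattern."

def agentic_rag_generate(task: str, pattern: str) -> str:
    # Stub: the agent augments its prompt with retrieved pattern docs (stage 3).
    context = retrieve_pattern_docs(pattern)
    return f"# context: {context}\nclass {pattern}Solver:  # {task}\n    pass"

def check(stage: str, code: str, pattern: str) -> EvalResult:
    # Toy checks standing in for pattern detection and a vulnerability scan.
    return EvalResult(
        stage=stage,
        pattern_conformant=pattern in code,
        vulnerabilities=code.count("eval("),  # placeholder security check
    )

task, pattern = "parse config", "Visitor"
results = [
    check("baseline", baseline_llm_generate(task), pattern),
    check("agentic-rag", agentic_rag_generate(task, pattern), pattern),
]
for r in results:
    print(r.stage, r.pattern_conformant, r.vulnerabilities)
```

In a real study, the stubs would be replaced by calls to the models and frameworks under evaluation, and the checks by an actual pattern-conformance analysis and vulnerability scanner, so that each stage is scored against the stage-1 benchmark in a uniform way.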