Advances in Frontier Models/Large Language Models (FMs/LLMs) have led to significant achievements in natural language tasks. To address the more complex information needs of experts in specialized domains, FMs need access to domain knowledge through retrievers (Retrieval-Augmented Generation, RAG), along with additional tools such as browsers, query refiners, verifiers, or symbolic solvers to aid the reasoning process, giving rise to \emph{compound agentic RAG} systems. However, the FM still requires extensive data for pre-training or fine-tuning, which does not guarantee that it acquires all the skills necessary for the task. While test-time scaling has emerged as a new frontier, current paradigms focus only on scaling the FM, ignoring other components of the compound system such as retrieval, ranking, and verification, and thus yield results disproportionate to the compute and energy spent. In this proposal, we first describe the gaps in current compound RAG systems and then present our vision along with supporting preliminary work. We envision the next generation of energy-efficient compound agentic RAG systems that perform skill-based planning and scale the retrieval and reasoning components efficiently in an end-to-end manner through a novel online optimization framework.
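To make the envisioned end-to-end scaling concrete, one illustrative formalization (our notation for exposition only; the symbols $b_c$, $Q$, and $B$ are assumptions, not a committed design) is the online budget-allocation problem
\[
\max_{\{b_c\}_{c \in \mathcal{C}}} \; \mathbb{E}_{x}\!\left[ Q\!\left(x;\, \{b_c\}_{c \in \mathcal{C}}\right) \right] \quad \text{s.t.} \quad \sum_{c \in \mathcal{C}} b_c \le B,
\]
where $\mathcal{C} = \{\text{retrieval}, \text{ranking}, \text{reasoning}, \text{verification}\}$ indexes the components of the compound system, $b_c$ is the test-time compute allocated to component $c$ for input $x$, $Q$ is task quality, and $B$ is the total compute/energy budget. Under this view, scaling only the FM's reasoning is one feasible point in the allocation space rather than the default.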