This project aims to push the boundaries of Large Language Model (LLM) applications and multimodal integration, with a primary but not exclusive focus on Human-Robot Interaction (HRI). Recent LLMs have demonstrated exceptional performance on complex natural language tasks, yet substantial challenges remain before practical HRI deployment, including multimodal generation, inference efficiency, robust interaction dynamics, knowledge injection, and user-preference alignment. The project addresses these core areas through a series of interrelated sub-projects spanning theoretical foundations, algorithmic improvements, system design, and real-world evaluation. It will also explore methods such as zeroth-order and reinforcement learning optimisation for federated, community-based personalisation. Finally, it will cover the multimodality of LLMs both from and for robotics, with particular interest in deployable LLM-powered robotic systems.
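
As a concrete illustration of the optimisation methods mentioned above, the sketch below shows a minimal two-point zeroth-order gradient estimator, the basic building block behind gradient-free fine-tuning and personalisation. It is an illustrative assumption about one possible starting point, not a component of the project: the objective `f`, the hyperparameters, and the function names are all placeholders chosen for the example.

```python
import numpy as np

def zo_gradient(f, theta, mu=1e-3, n_samples=10, rng=None):
    """Two-point zeroth-order estimate of the gradient of f at theta.

    Uses only function evaluations (no backpropagation), which is what
    makes zeroth-order methods attractive when gradients are unavailable,
    e.g. for black-box models or communication-constrained federated setups.
    """
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.standard_normal(theta.shape)          # random probe direction
        delta = f(theta + mu * u) - f(theta - mu * u)  # finite difference
        grad += (delta / (2.0 * mu)) * u
    return grad / n_samples

# Usage example: minimise a simple quadratic without analytic gradients.
f = lambda t: float(np.sum((t - 1.0) ** 2))
theta = np.zeros(5)
for _ in range(200):
    theta -= 0.1 * zo_gradient(f, theta)
print(theta)  # converges towards the optimum [1, 1, 1, 1, 1]
```

The estimator trades gradient accuracy for query-only access: each update needs just two forward evaluations per probe direction, which is why zeroth-order schemes are one candidate for personalising models on devices that cannot run full backpropagation.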