Driver monitoring is critical for road safety, especially in partially automated vehicles, where shifts in attention and cognitive state can lead to safety-critical situations. This project explores the use of Large Language Models (LLMs) for driver state monitoring, focusing on key indicators such as attention, distraction, and cognitive load. The study will empirically test several multimodal LLMs (e.g., GPT-4V, LLaVA, MiniCPM, BLIP-2, Flamingo) on retrospective in-vehicle video datasets. These models, which combine visual and contextual reasoning, will be assessed on their ability to identify behavioral signals associated with varying levels of driver engagement, and their outputs will be evaluated against established benchmarks from driver monitoring research. This comparative analysis will reveal each model's strengths and limitations in interpreting driver behavior. The project aims to provide an initial assessment of LLM-based driver monitoring capabilities and to lay a foundation for adaptive safety systems that support real-time detection of critical changes in driver state.
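To make the evaluation step concrete, the sketch below shows one way model outputs could be compared against benchmark labels: a purely hypothetical keyword heuristic that maps a vision-language model's free-text description of an in-vehicle frame onto discrete driver-state labels. The label set, cue words, and function names are illustrative assumptions, not part of the project's protocol.

```python
# Hypothetical post-processing step (not the project's actual method):
# map a vision-language model's free-text frame description onto a
# discrete driver-state label so it can be scored against benchmark
# annotations. Label names and cue phrases are illustrative only.

DRIVER_STATES = {
    "distracted": ["phone", "texting", "looking away", "eating", "reaching"],
    "drowsy": ["eyes closed", "yawning", "nodding off", "drowsy"],
    "attentive": ["eyes on road", "hands on wheel", "attentive", "focused"],
}


def classify_response(response: str) -> str:
    """Return the driver-state label whose cue phrases best match the text."""
    text = response.lower()
    scores = {
        state: sum(cue in text for cue in cues)
        for state, cues in DRIVER_STATES.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to "unknown" when no cue phrase appears at all.
    return best if scores[best] > 0 else "unknown"


if __name__ == "__main__":
    print(classify_response("The driver is texting on a phone."))
    print(classify_response("Driver has both hands on wheel, eyes on road."))
```

In practice the study would likely replace such keyword matching with structured prompting or per-frame label agreement against annotated benchmark datasets; the sketch only illustrates that free-text outputs must be reduced to comparable categories before accuracy can be computed.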