In recent years, there has been increasing interest in the design, training, and evaluation of end-to-end autonomous driving (AD) systems. One often overlooked aspect is the uncertainty of the planned trajectories these systems predict, even though awareness of its own uncertainty is key to a system's safety and robustness. We propose to estimate this uncertainty by adapting loss prediction from the uncertainty quantification literature. To this end, we introduce a novel lightweight module, dubbed CATPlan, that is trained to decode motion and planning embeddings into estimates of the collision loss used to partially supervise end-to-end AD systems. During inference, these estimates are interpreted as collision risk. The performance of CATPlan has been verified on nuScenes, a widely used dataset. In this project, we want to further evaluate it on CARLA, a closed-loop benchmark that allows fine-grained evaluation across different driving scenarios. We also want to explore whether vision-language models are good at risk awareness.
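To make the loss-prediction idea concrete, the sketch below shows one possible form such a module could take: a small head that maps motion and planning embeddings to a scalar estimate of the collision loss, trained by regression and read at inference time as a collision-risk score. This is only an illustrative sketch; the module name, embedding dimensions, and two-layer MLP are assumptions, not the exact CATPlan architecture.

```python
# Illustrative sketch of a loss-prediction head in the spirit of CATPlan.
# All names, dimensions, and the MLP design are assumptions for illustration.
import torch
import torch.nn as nn

class CollisionLossPredictor(nn.Module):
    """Decodes motion and planning embeddings into a scalar collision-loss estimate."""

    def __init__(self, motion_dim: int = 256, plan_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(motion_dim + plan_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, motion_emb: torch.Tensor, plan_emb: torch.Tensor) -> torch.Tensor:
        # Concatenate the two embeddings and regress the collision loss.
        return self.mlp(torch.cat([motion_emb, plan_emb], dim=-1)).squeeze(-1)

# Training: regress the collision loss that partially supervises the planner.
predictor = CollisionLossPredictor()
motion_emb = torch.randn(4, 256)   # placeholder motion embeddings from the AD system
plan_emb = torch.randn(4, 256)     # placeholder planning embeddings
collision_loss = torch.rand(4)     # per-sample collision loss computed by the AD system
pred = predictor(motion_emb, plan_emb)
loss = nn.functional.mse_loss(pred, collision_loss)
loss.backward()

# Inference: the predicted loss is interpreted as collision risk for the planned trajectory.
with torch.no_grad():
    risk = predictor(motion_emb, plan_emb)
```

Because the head only consumes embeddings already produced by the planner, it adds little overhead and can be trained alongside (or after) the base end-to-end AD system.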