Children and young adults are vulnerable to online harassment, bullying, and sexual predation. While bullying is typically aggressive in tone, online grooming is more subtle. This project develops machine learning algorithms to detect the underlying tone and conversational patterns specific to grooming, with the goal of enhancing detection capabilities and ultimately supporting safer online environments. The models are trained on existing, available grooming conversations and used to explore the effect of tone. We aim to create a tool that can detect a wider range of potential threats, since features of conversations that have hitherto been overlooked will be considered.
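To make the idea concrete, the sketch below shows one minimal way such a tone-aware classifier could be set up. It is an illustrative baseline only, not the project's actual pipeline: it assumes scikit-learn, labels conversations 1 (grooming) versus 0 (benign), and combines standard TF-IDF n-gram features with a few hypothetical hand-crafted tone proxies (intimacy cue words, question rate, exclamation rate). The cue-word list, feature choices, and example texts are placeholders.

```python
# A minimal baseline sketch (assumes scikit-learn and NumPy are installed).
# Dataset, cue words, and labels are illustrative placeholders, not the
# project's real training data or feature set.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline


class ToneFeatures(BaseEstimator, TransformerMixin):
    """Crude lexical proxies for conversational tone (placeholder heuristics)."""

    # Hypothetical intimacy/secrecy cues; a real system would learn these.
    INTIMACY_CUES = {"secret", "special", "trust", "alone", "promise"}

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        feats = []
        for text in X:
            tokens = text.lower().split()
            n = max(len(tokens), 1)
            cue_rate = sum(t in self.INTIMACY_CUES for t in tokens) / n
            question_rate = text.count("?") / n   # probing, trust-building questions
            exclaim_rate = text.count("!") / n    # aggressive tone, as in bullying
            feats.append([cue_rate, question_rate, exclaim_rate])
        return np.array(feats)


# Combine surface n-gram features with the tone proxies, then classify.
model = Pipeline([
    ("features", FeatureUnion([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("tone", ToneFeatures()),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Toy usage with fabricated example lines (for illustration only):
texts = [
    "this is our little secret, don't tell anyone",
    "did you finish the homework for class?",
]
labels = [1, 0]
model.fit(texts, labels)
print(model.predict(["you can trust me, we should talk alone"]))
```

In a fuller version, the hand-crafted tone proxies would be replaced or supplemented by learned representations, but the structure, surface features fused with tone features feeding a single classifier, reflects the project's aim of considering aspects of conversations beyond overtly aggressive language.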