Social contagion, in which behaviors and information spread through populations much as pathogens do, plays a crucial role in shaping modern human behavior. However, the mechanisms driving social contagions have yet to be studied from a unified perspective, leaving gaps in our understanding. By exploring how reinforcement learning (RL) drives social contagions, we aim to identify the key social and non-social learning features that lead to transmission on social networks, thereby informing strategies to mitigate the contagion of maladaptive behaviors. Using agent-based modeling (ABM), we simulate agents that learn the weights of spreading items and make decisions based on the evolving items and on social feedback. We vary model parameters, including those governing RL and network structure, to analyze their effects on contagion dynamics. Preliminary results indicate that our simulations capture key patterns observed in real-world behavior and information spread, including the S-shaped diffusion curve and the power-law distribution of adoption frequencies on social networks. To confirm the stability and robustness of these findings, we aim to extend our simulations to larger networks, longer durations, and multiple independent runs. This expansion will require substantial computational resources to manage the higher processing demands and data volume. Additionally, we plan to test further hypotheses about the key features driving social contagions, using variations of our model and the resulting simulation outputs. These analyses will require high-performance computing resources.
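
To make the setup concrete, the following is a minimal sketch of an RL-driven contagion ABM of the kind described above, not the authors' actual implementation: agents hold a learned weight for a single spreading item, update it with a simple delta rule from social feedback, and adopt the item probabilistically. The network construction, the feedback and choice rules, and all parameter values are illustrative assumptions.

    import random
    import math

    N_AGENTS = 500     # network size (assumed for illustration)
    EDGE_PROB = 0.02   # Erdos-Renyi edge probability (assumed)
    ALPHA = 0.1        # RL learning rate (assumed)
    STEPS = 200        # number of simulation steps (assumed)

    random.seed(42)

    # Build a simple undirected Erdos-Renyi network as an adjacency list.
    neighbors = {i: set() for i in range(N_AGENTS)}
    for i in range(N_AGENTS):
        for j in range(i + 1, N_AGENTS):
            if random.random() < EDGE_PROB:
                neighbors[i].add(j)
                neighbors[j].add(i)

    value = [0.0] * N_AGENTS       # each agent's learned weight for the item
    adopted = [False] * N_AGENTS   # adoption state
    for seed in random.sample(range(N_AGENTS), 5):  # a few initial adopters
        adopted[seed] = True

    adoption_curve = []
    for t in range(STEPS):
        for i in range(N_AGENTS):
            if adopted[i] or not neighbors[i]:
                continue
            # Social feedback: fraction of neighbors who have adopted.
            social_signal = sum(adopted[j] for j in neighbors[i]) / len(neighbors[i])
            # Delta-rule update of the learned weight toward the feedback.
            value[i] += ALPHA * (social_signal - value[i])
            # Probabilistic adoption via a logistic choice rule on the weight.
            p_adopt = 1.0 / (1.0 + math.exp(-10.0 * (value[i] - 0.5)))
            if random.random() < p_adopt:
                adopted[i] = True
        adoption_curve.append(sum(adopted))

    # The cumulative adoption count typically traces an S-shaped diffusion curve.
    print(adoption_curve[::20])

In a sketch like this, sweeping parameters such as the learning rate, the choice rule's steepness, and the edge probability corresponds to the parameter variation described above; the actual model additionally evolves the items themselves and distinguishes social from non-social learning features.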