The role of mutual information in machine learning models has been the subject of many, sometimes seemingly contradictory, studies. Our recent work suggests a potential clue to reconciling this picture: a contrastive use of mutual information, in which external or internal noise sources are exploited to separate robust, and hopefully useful, informational content from highly variable factors that are likely to carry spurious information and should be pruned. We test this idea in two settings: first with standard mutual information estimators from the literature, augmented by our novel approach, and second with the second-order mutual information estimator that we developed and tested in our previous experiments. We believe the resulting investigation is of broad general interest.
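To make the contrastive idea concrete, the sketch below compares per-feature mutual information estimates computed on clean versus noise-perturbed inputs, retaining only features whose informational content survives the perturbation. This is a minimal illustration under stated assumptions, not our method: the Gaussian noise model, the retention thresholds, and the use of scikit-learn's k-nearest-neighbor MI estimator are all choices made here for demonstration, and the sketch does not implement the second-order estimator mentioned above.

```python
# Illustrative sketch of contrastive mutual-information filtering.
# Assumptions (not from the paper): Gaussian input noise, sklearn's
# k-NN MI estimator, and arbitrary retention thresholds.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)

# Synthetic data: a few informative features plus redundant/noisy ones.
X, y = make_classification(
    n_samples=2000, n_features=10, n_informative=4,
    n_redundant=2, random_state=0,
)

# MI between each feature and the label on the clean inputs.
mi_clean = mutual_info_classif(X, y, random_state=0)

# MI after injecting an external noise source. Features whose MI
# survives the perturbation are treated as robust; features whose MI
# collapses are candidates for pruning as spurious.
X_noisy = X + rng.normal(scale=1.0, size=X.shape)
mi_noisy = mutual_info_classif(X_noisy, y, random_state=0)

# Contrast the two estimates: keep features that stay informative.
robust = (mi_noisy > 0.5 * mi_clean) & (mi_clean > 0.05)
print("robust feature indices:", np.flatnonzero(robust))
```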