Our goal is to address the degradation of both diversity and calibration during Supervised Fine-Tuning (SFT) of Large Language Models (LLMs). Several recent studies investigate this issue and propose alternative loss functions to preserve or increase diversity. However, we found that none of them achieves a reasonable trade-off between quality, diversity, and calibration. To bridge this gap, we propose a modification of the standard loss function: cross-entropy (CE) plus a regularized entropy term. To validate our approach, we will fine-tune 8 different quantized LLMs on an instruction dataset and evaluate them on two novel diversity datasets. Finally, we will compare our loss function against the alternatives and report the differences in both diversity and calibration metrics (including ECE).
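As a rough illustration of the proposed objective, the sketch below combines token-level cross-entropy with an entropy bonus on the predictive distribution. This is a minimal NumPy sketch under our own assumptions, not the final implementation: the function name `ce_plus_entropy_loss` and the regularization weight `lam` are hypothetical placeholders, and the exact form of the regularizer in the proposed method may differ.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ce_plus_entropy_loss(logits, targets, lam=0.1):
    """Hypothetical sketch of a CE + entropy-regularized objective.

    Cross-entropy pulls probability mass toward the target tokens,
    while subtracting lam * entropy rewards flatter predictive
    distributions, discouraging the collapse in diversity/calibration
    observed during SFT. `lam` is an assumed regularization weight.
    """
    probs = softmax(logits)                       # (batch, vocab)
    n = logits.shape[0]
    # Mean negative log-likelihood of the target tokens.
    ce = -np.log(probs[np.arange(n), targets] + 1e-12).mean()
    # Mean Shannon entropy of the predicted distributions.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean()
    return ce - lam * entropy
```

With `lam=0` this reduces to plain cross-entropy; increasing `lam` trades some likelihood for higher predictive entropy, which is the quality-diversity-calibration knob the comparison would sweep.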