Why Does LLM Diversity Shrink? Reconsidering Generative Diversity After Supervised Fine-Tuning
“Diversity in Large Language Models under Supervised Fine-Tuning” examines a familiar claim in LLM research: supervised fine-tuning (SFT) helps align models with user intent, but may also reduce generative diversity. The paper, released on arXiv in May 2026, is notable because it does not simply repeat that assumption. Instead, it frames the issue as one that needs formal empirical testing, especially since prior work has studied LLM expressiveness from multiple perspectives rather than through one settled definition of diversity. [S4]

The paper focuses directly on the relationship between supervised fine-tuning and generation diversity in large language models. According to the abstract, SFT is treated as an essential step for aligning LLMs with user intent, while the possible loss of diversity is described as a widely repeated concern that has not been formally tested.
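To make the ambiguity around "diversity" concrete, here is a minimal sketch of one metric that prior work has often used for generative diversity: distinct-n, the fraction of unique n-grams across a set of sampled generations. This is only one of several possible operationalizations, and nothing in the paper's abstract says it adopts this particular metric; the function name, tokenization, and sample strings below are illustrative assumptions.

```python
from collections import Counter

def distinct_n(generations, n=2):
    """Fraction of unique n-grams across a set of sampled generations.

    One common (but not the only) operationalization of generative
    diversity: values near 1.0 mean samples rarely repeat n-grams;
    values near 0.0 mean the model keeps producing the same phrases.
    """
    ngrams = Counter()
    total = 0
    for text in generations:
        tokens = text.split()  # naive whitespace tokenization, for illustration only
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
            total += 1
    return len(ngrams) / total if total else 0.0

# Hypothetical comparison: samples from a base model vs. an SFT model
base_samples = ["the cat sat on the mat", "a dog ran through the park"]
sft_samples = ["the answer is yes", "the answer is yes indeed"]
print(distinct_n(base_samples, n=2))  # higher score -> more lexical diversity
print(distinct_n(sft_samples, n=2))   # lower score -> more repetition
```

Other studies instead measure semantic diversity (e.g., pairwise distances between sentence embeddings) or self-similarity across samples, which is precisely why the paper argues the field lacks one settled definition.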