Rethinking LLM Reasoning as Internal State Change, Not Visible Chain-of-Thought
"LLM Reasoning Is Latent, Not the Chain of Thought" is a position paper released on arXiv in April 2026. Its central claim is not that chain-of-thought is useless, but that the field may be treating the wrong object as the core of reasoning: instead of equating reasoning with the text a model writes out, the paper argues that reasoning should be studied as the formation of latent-state trajectories inside the model. The authors frame this as an important conceptual shift because debates about faithfulness, interpretability, reasoning benchmarks, and inference-time intervention all depend on what we think reasoning actually is. [S4]

Paper overview: what it is and what problem it raises

This paper is explicitly presented as a position paper, which means its main contribution is conceptual rather than a new benchmark score or a specific engineering system. According to the abstract, it argues that curr...