Three Recent Papers on Making LLM Agents More Stable in Planning and Reasoning
In April 2026, three arXiv papers approached a similar problem from different angles: why LLM agents become unreliable on complex, multi-step work, and how that instability might be reduced with more structure. Analytica introduces a structured analysis framework called Soft Propositional Reasoning; From Coarse to Fine proposes self-adaptive hierarchical planning instead of fixed planning granularity; and CAP-CoT focuses on improving Chain-of-Thought stability through iterative and contrastive correction. Read together, they suggest a common direction: complex agent behavior may need to be broken down, revised, and organized more explicitly rather than left to a single free-form pass. [S5][S9][S11]

Analytica, From Coarse to Fine, and CAP-CoT: the April 2026 context

All three papers were released on arXiv in April 2026 and focus on weaknesses that appear when LLM systems are asked to do mor...