Two Ways to Stabilize LLM Agents on Complex Tasks: Hierarchical Planning and CAP-CoT
Two recent papers examine a similar weakness in LLM-based agents from different angles. “From Coarse to Fine: Self-Adaptive Hierarchical Planning for LLM Agents” focuses on planning in dynamic, multi-step tasks and argues that many current agents rely on plans with a fixed level of detail, which can be too coarse for hard tasks and too detailed for simple ones. “CAP-CoT: Cycle Adversarial Prompt for Improving Chain of Thoughts in LLM Reasoning” looks at reasoning instead, starting from the observation that Chain-of-Thought prompting can become unstable on long, multi-step problems and may produce inconsistent answers even when the task does not change. Together, the papers ask how LLM agents can become more stable when both planning and reasoning stretch across many steps. [S9] [S11]

What the papers are about

The first paper, “From Coarse to Fine: Self-Adaptive Hierarchical Planning for LLM Agents,” ...