Posts
Showing posts with the label paper brief
When Do Tools Help LLM Agents, and When Do They Backfire?
Two Ways to Stabilize LLM Agents on Complex Tasks: Hierarchical Planning and CAP-CoT
When Does LLM Self-Correction Actually Help? Papers on Iterative Refinement, Evaluation, and Reliability
Is LLM Reasoning Really a Chain of Thought? What a New Paper Questions
Rethinking LLM Reasoning as Internal State Change, Not Visible Chain-of-Thought