Posts

Featured

Why LLMs Lose Context in Multi-Turn Interaction: What Three New Papers Suggest About Causes and Responses

Recent papers are converging on a practical problem: large language models and interactive agents may perform well in a single turn, yet become unreliable over long conversations or extended task execution. The selected papers approach this from different angles—mechanisms of instruction drift, action selection under uncertainty, long-horizon interaction planning, and the difficulty of accounting for unstated user preferences—but they share a common concern: keeping goals, rules, and context usable throughout interaction rather than only at the start. [S7][S1][S8][S3]

Paper overview: what these works are about

"When Attention Closes: How LLMs Lose the Thread in Multi-Turn Interaction" focuses directly on the problem named in its title: models can follow complex instructions in one turn, but over many turns they often lose track of instructions, pers...

Latest Posts

Three AI News Updates on Safer Agents, Multi-Turn Tool Use, and Infrastructure Scale

How Conversational LLM Agents Choose the Next Question: BALAR and PRISM

Can LLMs Reuse Tools Creatively? What CreativityBench Tries to Measure

Why Safety in LLM Agents May Depend More on Interaction Topology Than on the Model

When Do Tools Help LLM Agents, and When Do They Backfire?

Why Does LLM Diversity Shrink? Reconsidering Generative Diversity After Supervised Fine-Tuning

AWS and NVIDIA Show Two AI Trends: Better LLM Evaluation and Wider Agent Adoption

LLM Agents and Scientific Discovery: What Four New arXiv Papers Suggest About the Next Wave of Automation

DreamProver and AGEL-Comp: What LLM Agents Need to Reason Better and Generalize Further

Three Recent Papers on Making LLM Agents More Stable in Planning and Reasoning