Why LLMs Lose Context in Multi-Turn Interaction: What Three New Papers Suggest About Causes and Responses
Recent papers are converging on a practical problem: large language models and interactive agents may perform well in a single turn, yet become unreliable over long conversations or extended task execution. The selected papers approach this from different angles (mechanisms of instruction drift, action selection under uncertainty, long-horizon interaction planning, and the difficulty of accounting for unstated user preferences), but they share a common concern: keeping goals, rules, and context usable throughout interaction rather than only at the start. [S7][S1][S8][S3]

Paper overview: what these works are about

"When Attention Closes: How LLMs Lose the Thread in Multi-Turn Interaction" focuses directly on the problem named in its title: models can follow complex instructions in one turn, but over many turns they often lose track of instructions, pers...