Context Rot

The gradual degradation of AI output quality that occurs as a model’s context window fills with noise, outdated information, failed attempts, and irrelevant conversation history.

Every message, tool call, file read, and response adds tokens to the context window. Eventually, the model is processing so much information that it loses track of what matters. That is context rot.
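The accumulation is easy to picture as a running budget. The sketch below is illustrative only: `count_tokens` is a crude character-based heuristic standing in for a real tokenizer, and the 200,000-token default is an assumed limit, not a property of any particular model.

```python
def count_tokens(text: str) -> int:
    """Rough stand-in for a real tokenizer: ~4 characters per token."""
    return max(1, len(text) // 4)

class ContextBudget:
    """Tracks how full a context window is as session events accumulate."""

    def __init__(self, limit: int = 200_000):
        self.limit = limit  # assumed window size, varies by model
        self.used = 0

    def add(self, text: str) -> None:
        """Every message, tool result, or file read adds to the total."""
        self.used += count_tokens(text)

    @property
    def fill_ratio(self) -> float:
        return self.used / self.limit

# Each interaction nudges the window closer to capacity.
budget = ContextBudget(limit=1_000)
for event in ["user message", "tool call result " * 50, "model response"]:
    budget.add(event)
print(f"{budget.fill_ratio:.0%} of context used")
```

Nothing in this loop ever subtracts tokens, which is the point: without deliberate management, the window only fills.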

Early in a session, a model with well-structured context performs reliably. Twenty or thirty interactions later, the context window is cluttered with debugging detours, superseded instructions, earlier draft attempts, and conversational noise. The model still has the original instructions somewhere in the window, but its ability to retrieve and act on them drops measurably.

Why it happens

Large language models do not treat every token in the context window equally. Attention mechanisms prioritize certain positions and patterns. As the window fills, earlier material gets effectively buried. The model does not forget in the human sense. It loses the ability to prioritize the right information when surrounded by noise.

What it looks like in practice

The signs are subtle at first. The model starts repeating points it already made. It drifts from established guidelines. It ignores constraints that it followed perfectly in earlier turns. Teams that run long AI sessions for content production, data analysis, or workflow automation encounter this regularly, often blaming the model when the real problem is an unmanaged context window.

What to do about it

Treat context like a resource with a carrying capacity. Structure reference materials in external files the model can read on demand rather than stuffing everything into the conversation. Break complex tasks into sub-sessions with clean context. Build context hygiene into your AI workflows the same way you build data hygiene into your analytics pipeline.
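The "read on demand" and "sub-sessions with clean context" ideas can be sketched together. This is a hedged illustration, not a real agent API: `run_subtask` here just assembles the minimal prompt that a fresh session would receive, and the reference-file setup is hypothetical.

```python
from pathlib import Path

def run_subtask(task: str, reference_paths: list[Path]) -> str:
    """Assemble a clean, minimal context for one sub-task.

    Only the reference files this task actually needs are read from disk;
    in a real system the returned prompt would go to the model in a fresh
    session rather than being appended to a long-running conversation.
    """
    references = [path.read_text() for path in reference_paths]
    return "\n\n".join(references + [task])
```

The design choice is that each sub-task starts from files, not from conversation history, so nothing from a previous detour leaks into the new context.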

Frequently Asked Questions

When does context rot typically start?

In most implementations, output quality begins to degrade noticeably after 20 to 30 turns in a conversation or agent session. Beyond 40 turns, degradation accelerates as early instructions fade from the model’s effective attention.

Can you fix context rot once it starts?

Prevention works better than recovery. Techniques include summarizing and compressing earlier context, using sub-agents with clean context windows for specific tasks, and starting a fresh session with well-structured reference files rather than trying to rehabilitate a degraded one.
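The summarize-and-compress technique can be sketched as folding the oldest turns into a short summary while keeping recent turns verbatim. The `summarize` stub below stands in for an actual model call, and the turn thresholds are illustrative assumptions.

```python
def summarize(turns: list[str]) -> str:
    """Stub: a real implementation would ask the model to summarize."""
    return f"[summary of {len(turns)} earlier turns]"

def compress(transcript: list[str],
             keep_recent: int = 4,
             max_turns: int = 10) -> list[str]:
    """Once a transcript exceeds max_turns, replace everything except the
    last keep_recent turns with a single summary entry."""
    if len(transcript) <= max_turns:
        return transcript
    old, recent = transcript[:-keep_recent], transcript[-keep_recent:]
    return [summarize(old)] + recent
```

Compression trades detail for headroom: the summary preserves the gist of early turns while freeing the bulk of the window for current work.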