The Problem
AI models have a limited context window. As conversations grow long, earlier messages may be pushed out, causing the AI to lose track of important context from earlier in the discussion.

How Compaction Works
Raycaster Doc uses chat compaction to summarize long conversations while preserving the key information the AI needs to continue working effectively. When a conversation approaches the context limit:

- Compaction triggers — The system detects that the conversation is approaching the model's context window limit.
- Memory extraction — A special turn runs in which the AI is asked to write down everything it needs to remember: key facts, decisions, file references, and current task state.
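The two steps above can be sketched roughly as follows. This is a minimal illustration, not Raycaster Doc's actual implementation: all names (`estimate_tokens`, `CONTEXT_LIMIT`, the prompt text) are assumptions.

```python
CONTEXT_LIMIT = 8000          # assumed context window size, in tokens
COMPACTION_THRESHOLD = 0.8    # compact once usage passes 80% of the limit

def estimate_tokens(messages):
    """Crude stand-in for a real tokenizer: roughly one token per word."""
    return sum(len(m["content"].split()) for m in messages)

def needs_compaction(messages):
    """Step 1: detect that the conversation approaches the limit."""
    return estimate_tokens(messages) >= CONTEXT_LIMIT * COMPACTION_THRESHOLD

# Hypothetical wording for the memory-extraction turn.
MEMORY_PROMPT = (
    "Write down everything you need to remember to continue this task: "
    "key facts, decisions, file references, and current task state."
)

def build_memory_turn(messages):
    """Step 2: append a special turn asking the model to extract memory."""
    return messages + [{"role": "user", "content": MEMORY_PROMPT}]
```

A real system would use the model's own tokenizer rather than a word count, but the shape of the check is the same.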
When Does It Happen?
Compaction can trigger in two ways:

- Proactive — Automatically, when the conversation approaches the context limit
- Manual — You can trigger compaction from the chat menu if you want to compress the conversation
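The two paths could be modeled as a small dispatch like the sketch below; the names and threshold are illustrative assumptions, not the product's real API.

```python
from enum import Enum

class CompactionTrigger(Enum):
    PROACTIVE = "proactive"  # fired automatically near the context limit
    MANUAL = "manual"        # fired by the user from the chat menu

def resolve_trigger(tokens_used, context_limit, user_requested, threshold=0.8):
    """Return which trigger (if any) should start compaction."""
    if user_requested:
        return CompactionTrigger.MANUAL
    if tokens_used >= context_limit * threshold:
        return CompactionTrigger.PROACTIVE
    return None
```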
What Gets Preserved
The compaction summary captures:

- Key facts and decisions from the conversation
- File references and workspace state
- Current task progress and next steps
- Important user preferences mentioned during the chat
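As a structured record, the preserved summary might look like the sketch below. The field names mirror the list above but are assumptions about shape, not the actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class CompactionSummary:
    key_facts: list[str] = field(default_factory=list)        # facts and decisions
    file_references: list[str] = field(default_factory=list)  # workspace state
    task_progress: str = ""                                   # progress and next steps
    user_preferences: list[str] = field(default_factory=list) # preferences from the chat
```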
Impact on Your Experience
Compaction is designed to be seamless:

- The conversation continues normally after compaction
- You can still scroll back and read earlier messages in the UI
- The AI’s responses should remain contextually appropriate
- A compaction marker appears in the chat timeline
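One way to see why this is seamless: the UI can keep the full transcript for display while only the model-facing context is rewritten to hold the summary plus the most recent turns. A hypothetical sketch of that split, with all names assumed:

```python
def compact_context(messages, summary_text, keep_recent=2):
    """Build the model-facing context after compaction.

    The UI keeps `messages` untouched (so you can still scroll back);
    the model instead receives a summary message followed by the most
    recent turns.
    """
    summary_msg = {"role": "system",
                   "content": f"[Compacted summary] {summary_text}"}
    return [summary_msg] + messages[-keep_recent:]
```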
