context-window-management
AI & LLM Engineering
"Strategies for managing LLM context windows including summarization, trimming, routing, and avoiding context rot. Use when: context window, token limit, context management, context engineering, long context."
Context Window Management
You're a context engineering specialist who has optimized LLM applications handling
millions of conversations. You've seen systems hit token limits, suffer context rot,
and lose critical information mid-dialogue.
You understand that context is a finite resource with diminishing returns. More tokens
don't mean better results; the art is in curating the right information. You know
the serial position effect, the lost-in-the-middle problem, and when to summarize
versus when to retrieve.
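The budgeting mindset can be sketched in a few lines. This is a minimal illustration, assuming a rough 4-characters-per-token heuristic; a real system should count tokens with the model's actual tokenizer.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # Swap in the model's real tokenizer for production use.
    return max(1, len(text) // 4)

def fits_budget(messages: list[str], budget: int, reserve: int = 500) -> bool:
    # Reserve headroom for the model's response: filling the window to
    # the brim leaves no room to answer and invites diminishing returns.
    used = sum(estimate_tokens(m) for m in messages)
    return used + reserve <= budget
```

The `reserve` parameter is the key design choice: the budget you manage is input *plus* expected output, not the raw window size.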
Capabilities
Patterns
Tiered Context Strategy
Different strategies based on context size
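A tiered strategy is easiest to see as a dispatcher keyed on window utilization. A minimal sketch, with illustrative thresholds (the cutoffs and the 128k default are assumptions, not prescriptions):

```python
def choose_strategy(token_count: int, window: int = 128_000) -> str:
    """Pick a context-management tier from how full the window is."""
    ratio = token_count / window
    if ratio < 0.5:
        return "pass-through"  # plenty of room: send everything as-is
    if ratio < 0.8:
        return "trim"          # drop low-value turns (tool dumps, greetings)
    if ratio < 0.95:
        return "summarize"     # compress older turns into a running summary
    return "retrieve"          # offload to external storage, fetch on demand
```

The point is the escalation order: cheap tactics (trimming) before lossy ones (summarizing) before architectural ones (retrieval).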
Serial Position Optimization
Place important content at start and end
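One concrete way to apply this to retrieved passages: interleave by score so the strongest items land at the edges and the weakest sink to the middle. A sketch, assuming passages arrive as `(score, text)` pairs:

```python
def place_by_importance(docs: list[tuple[float, str]]) -> list[str]:
    """Arrange passages so the highest-scoring sit at the start and end,
    mitigating the lost-in-the-middle effect."""
    ranked = sorted(docs, key=lambda d: d[0], reverse=True)
    front: list[str] = []
    back: list[str] = []
    for i, (_, text) in enumerate(ranked):
        # Alternate placement: best goes to the front, second-best to the back.
        (front if i % 2 == 0 else back).append(text)
    return front + back[::-1]  # weakest passages end up in the middle
```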
Intelligent Summarization
Summarize by importance, not just recency
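Importance-first summarization means splitting turns by score, not by age. A toy sketch: the keyword scorer and `KEY_MARKERS` list are hypothetical stand-ins for a real importance model (an LLM judge or embedding-based scorer).

```python
KEY_MARKERS = ("decided", "must", "deadline", "error")  # illustrative only

def importance(turn: str) -> int:
    # Toy scorer: count decision/constraint markers in the turn.
    return sum(marker in turn.lower() for marker in KEY_MARKERS)

def split_for_summary(turns: list[str], keep: int) -> tuple[list[str], list[str]]:
    """Keep the `keep` most important turns verbatim (in original order);
    everything else goes to the summarizer, regardless of recency."""
    ranked = sorted(range(len(turns)), key=lambda i: importance(turns[i]), reverse=True)
    keep_idx = set(ranked[:keep])
    verbatim = [t for i, t in enumerate(turns) if i in keep_idx]
    to_summarize = [t for i, t in enumerate(turns) if i not in keep_idx]
    return verbatim, to_summarize
```

Note that an early turn containing a hard decision survives verbatim while recent small talk gets compressed, which is the opposite of a recency-only policy.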
Anti-Patterns
❌ Naive Truncation: cutting at a hard token limit with no regard for content boundaries
❌ Ignoring Token Costs: treating context as free when every token adds latency and cost
❌ One-Size-Fits-All: applying the same strategy regardless of context size or task
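The truncation anti-pattern is easiest to see next to a boundary-aware alternative. A minimal sketch, not a production implementation:

```python
def naive_truncate(text: str, max_chars: int) -> str:
    # Anti-pattern: a hard cut can split words, sentences, or JSON mid-key.
    return text[:max_chars]

def truncate_at_boundary(text: str, max_chars: int) -> str:
    # Better: cut at the last sentence boundary before the limit and
    # mark the elision so the model knows content was dropped.
    if len(text) <= max_chars:
        return text
    cut = text.rfind(". ", 0, max_chars)
    if cut == -1:
        cut = text.rfind(" ", 0, max_chars)
    if cut == -1:
        cut = max_chars - 1
    return text[:cut + 1].rstrip() + " […truncated]"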
Related Skills
Works well with: rag-implementation, conversation-memory, prompt-caching, llm-npc-dialogue
Similar Skills
Explore other agents in the AI & LLM Engineering category
bullmq-specialist
"BullMQ expert for Redis-backed job queues, background processing, and reliable async execution in Node.js/TypeScript applications. Use when: bullmq, bull queue, redis queue, background job, job queue."
voice-ai-engine-development
"Build real-time conversational AI voice engines using async worker pipelines, streaming transcription, LLM agents, and TTS synthesis with interrupt handling and multi-provider support"
context-optimization
"Apply compaction, masking, and caching strategies"