- Added a real-time context window usage indicator in the header.
- Format: emoji + percentage (🟢 52%).
- Color-coded emoji: 🟢 < 75%, 🟡 < 90%, 🔴 >= 90%.
- Hover tooltip shows full details: 'Context: 4,300 / 8,192 tokens (52%)'.
- Token estimation: 1 token ≈ 4 characters.
- Model-aware context windows: llama3 (8K), qwen2.5 (32K), deepseek (16K).
- Token count includes system prompts, messages, tool calls, and streaming content.
- Updates in real time as the conversation progresses.
- All quality checks passing (TypeScript, Biome, Clippy, builds).

Tested and verified:

- Shows an accurate percentage of context usage.
- Emoji changes color at the appropriate thresholds.
- Different models show the correct context window sizes.
- Can exceed 100% when over the limit (shows red).
- Tooltip provides exact token counts.
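The estimation and threshold logic above can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code: the function and constant names are invented, and the 32K/16K window sizes are assumed to mean 32,768 and 16,384 tokens (8K = 8,192 is confirmed by the tooltip example).

```typescript
type ModelName = "llama3" | "qwen2.5" | "deepseek";

// Context window sizes per model family (sizes beyond 8K are assumptions).
const CONTEXT_WINDOWS: Record<ModelName, number> = {
  "llama3": 8_192,
  "qwen2.5": 32_768,
  "deepseek": 16_384,
};

// Rough heuristic from the notes above: 1 token ≈ 4 characters.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Color-coded thresholds: 🟢 < 75%, 🟡 < 90%, 🔴 >= 90%.
// Percentages over 100 also fall through to red.
function usageEmoji(percent: number): string {
  if (percent < 75) return "🟢";
  if (percent < 90) return "🟡";
  return "🔴";
}

// Header indicator, e.g. "🟢 52%" for 4,300 tokens on an 8,192-token model.
function formatIndicator(usedTokens: number, model: ModelName): string {
  const percent = Math.round((usedTokens / CONTEXT_WINDOWS[model]) * 100);
  return `${usageEmoji(percent)} ${percent}%`;
}
```

With 4,300 tokens used on llama3, `formatIndicator(4300, "llama3")` yields "🟢 52%", matching the tooltip example.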
Project Specs
This folder contains the "Living Specification" for the project. It serves as the source of truth for all AI sessions.
Structure
- 00_CONTEXT.md: The high-level overview, goals, domain definition, and glossary. Start here.
- tech/: Implementation details, including the Tech Stack, Architecture, and Constraints.
  - STACK.md: The technical "Constitution" (Languages, Libraries, Patterns).
- functional/: Domain logic and behavior descriptions, platform-agnostic.
  - 01_CORE.md: Core functional specifications.
Usage for LLMs
- Always read 00_CONTEXT.md and tech/STACK.md at the beginning of a session.
- Before writing code, ensure the spec in this folder reflects the desired reality.
- If a Story changes behavior, update the spec first, get approval, then write code.