diff --git a/.story_kit/work/1_upcoming/120_story_test_coverage_llm_chat_rs.md b/.story_kit/work/1_upcoming/120_story_test_coverage_llm_chat_rs.md
new file mode 100644
index 0000000..3685898
--- /dev/null
+++ b/.story_kit/work/1_upcoming/120_story_test_coverage_llm_chat_rs.md
@@ -0,0 +1,27 @@
+---
+name: "Add test coverage for llm/chat.rs (2.6% -> 60%+)"
+---
+
+# Story 120: Add test coverage for llm/chat.rs
+
+Currently at 2.6% line coverage (343 lines, 334 missed). This is the chat completion orchestration layer, the biggest uncovered module by missed line count.
+
+## What to test
+
+- Message construction and formatting
+- Token counting/estimation logic
+- Chat session management
+- Error handling paths (provider errors, timeout, malformed responses)
+- Any pure functions that don't require a live LLM connection
+
+## Notes
+
+- Mock the LLM provider trait/interface rather than making real API calls
+- Focus on the logic layer, not the provider integration
+- Target 60%+ line coverage
+
+## Acceptance Criteria
+
+- [ ] Line coverage for `llm/chat.rs` reaches 60%+
+- [ ] Tests pass with `cargo test`
+- [ ] `cargo clippy` clean
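
The "mock the provider trait" note above can be sketched as follows. This is a hypothetical illustration, not the actual code in `llm/chat.rs`: the trait name `ChatProvider`, the method `complete`, and the function `chat_once` are all assumptions made up for the example; the real module's types will differ.

```rust
// Hypothetical sketch of the mocking pattern described in the Notes.
// `ChatProvider`, `complete`, and `chat_once` are illustrative names only;
// the real trait and logic in llm/chat.rs may look quite different.

/// The seam: orchestration code depends on this trait, not on a live LLM.
trait ChatProvider {
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

/// Test double that returns a canned success or error, no network needed.
struct MockProvider {
    canned: Result<String, String>,
}

impl ChatProvider for MockProvider {
    fn complete(&self, _prompt: &str) -> Result<String, String> {
        self.canned.clone()
    }
}

/// Example logic-layer function under test: formats a reply or maps the error.
fn chat_once(provider: &dyn ChatProvider, prompt: &str) -> String {
    match provider.complete(prompt) {
        Ok(reply) => format!("assistant: {reply}"),
        Err(e) => format!("error: {e}"),
    }
}

fn main() {
    // Happy path and an error path, both exercised without any API call.
    let ok = MockProvider { canned: Ok("hi".to_string()) };
    let err = MockProvider { canned: Err("timeout".to_string()) };
    assert_eq!(chat_once(&ok, "hello"), "assistant: hi");
    assert_eq!(chat_once(&err, "hello"), "error: timeout");
    println!("mock provider checks passed");
}
```

In the real tests these assertions would live in a `#[cfg(test)] mod tests` block inside `llm/chat.rs` (or a `tests/` integration file) so `cargo test` picks them up; the error-path mock covers the provider-error and timeout cases called out under "What to test".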