story-kit: queue 120_story_test_coverage_llm_chat_rs for QA
---
name: "Add test coverage for llm/chat.rs (2.6% -> 60%+)"
---

# Story 120: Add test coverage for llm/chat.rs

Currently at 2.6% line coverage (343 lines, 334 missed). This is the chat completion orchestration layer, and the biggest uncovered module by missed line count.

## What to test

- Message construction and formatting
- Token counting/estimation logic
- Chat session management
- Error handling paths (provider errors, timeouts, malformed responses)
- Any pure functions that don't require a live LLM connection

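Pure functions are the easiest coverage wins. As a minimal sketch (the function name and the chars-per-token heuristic below are hypothetical stand-ins, not taken from `llm/chat.rs`), a rough token estimator and direct assertions against it look like this:

```rust
// Hypothetical stand-in for a pure helper in llm/chat.rs; the real
// signature and heuristic may differ.
fn estimate_tokens(text: &str) -> usize {
    // Common rough heuristic: ~4 characters per token, rounded up.
    (text.chars().count() + 3) / 4
}

fn main() {
    assert_eq!(estimate_tokens(""), 0);
    assert_eq!(estimate_tokens("abcd"), 1); // 4 chars -> 1 token
    assert_eq!(estimate_tokens("hello"), 2); // ceil(5 / 4) = 2
    println!("token estimation checks passed");
}
```

In the repo these assertions would live in a `#[cfg(test)]` module next to the real helper rather than in `main`, so they run under `cargo test`.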
## Notes

- Mock the LLM provider trait/interface rather than making real API calls
- Focus on the logic layer, not the provider integration
- Target 60%+ line coverage

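The mocking note above can be sketched as follows. The trait name `ChatProvider`, its `complete` method, and the `ask` function are assumptions for illustration; `llm/chat.rs` presumably abstracts the provider behind something similar, but the real names and signatures may differ.

```rust
use std::cell::RefCell;

// Hypothetical provider trait standing in for whatever llm/chat.rs uses.
trait ChatProvider {
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

// Test double: records prompts it receives and returns a canned reply,
// so orchestration logic can be exercised without a live LLM.
struct MockProvider {
    reply: Result<String, String>,
    calls: RefCell<Vec<String>>,
}

impl ChatProvider for MockProvider {
    fn complete(&self, prompt: &str) -> Result<String, String> {
        self.calls.borrow_mut().push(prompt.to_string());
        self.reply.clone()
    }
}

// Hypothetical logic-layer function under test: forwards the question
// and folds provider errors into a user-facing string.
fn ask(provider: &dyn ChatProvider, question: &str) -> String {
    provider
        .complete(question)
        .unwrap_or_else(|e| format!("error: {e}"))
}

fn main() {
    let ok = MockProvider { reply: Ok("42".into()), calls: RefCell::new(vec![]) };
    assert_eq!(ask(&ok, "meaning of life?"), "42");
    assert_eq!(ok.calls.borrow().len(), 1); // exactly one provider call

    let err = MockProvider { reply: Err("timeout".into()), calls: RefCell::new(vec![]) };
    assert_eq!(ask(&err, "hi"), "error: timeout");
    println!("mock provider checks passed");
}
```

The same mock covers both happy-path and error-path assertions, which is what makes the error-handling bullets in "What to test" reachable without network access.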
## Acceptance Criteria

- [ ] Line coverage for `llm/chat.rs` reaches 60%+
- [ ] Tests pass with `cargo test`
- [ ] `cargo clippy` clean

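The story doesn't say which coverage tool produced the 2.6% figure; assuming `cargo-llvm-cov` (a common choice for Rust line coverage), the criteria above could be checked locally with something like:

```shell
# Assumes cargo-llvm-cov is installed: cargo install cargo-llvm-cov
cargo llvm-cov --summary-only          # per-file line coverage summary
cargo test                             # all tests pass
cargo clippy -- -D warnings            # clippy clean
```

If the project uses a different coverage tool (e.g. tarpaulin), substitute its equivalent summary command.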