Commit Graph

927 Commits

Author SHA1 Message Date
Dave
37ef7abb2f Deleted orphan file 2026-02-17 13:39:46 +00:00
Dave
3302b62839 Rename workflow to story-driven test workflow 2026-02-17 13:38:31 +00:00
Dave
3079ea194f Add story for UI test TODO visibility 2026-02-17 13:35:27 +00:00
Dave
4c887d93b5 Refocus workflow on TDD and reorganize stories 2026-02-17 13:34:32 +00:00
Dave
1f4f10930f Archived story 25 2026-02-16 20:34:21 +00:00
Dave
f1e5ac72e0 Project creation is working 2026-02-16 20:34:03 +00:00
Dave
3be9088794 Split into components 2026-02-16 19:59:45 +00:00
Dave
1adbadb6eb ibid 2026-02-16 19:53:43 +00:00
Dave
e6638a6517 Clean up previous project display 2026-02-16 19:53:31 +00:00
Dave
8ed40dd444 Fixed up project picker a bit 2026-02-16 19:48:39 +00:00
Dave
45bce740b6 Text-completion picker to select a project 2026-02-16 19:44:29 +00:00
Dave
ffab287d16 Put in a recent project picker 2026-02-16 18:57:39 +00:00
Dave
539cbba409 Ignoring the store.json 2026-02-16 17:18:03 +00:00
Dave
2bb987d629 Happier startup message 2026-02-16 17:10:23 +00:00
Dave
37be55242d Auto-build the frontend into the release binary on cargo build --release 2026-02-16 17:05:09 +00:00
Dave
dae772e619 Added some API doc comments 2026-02-16 16:55:59 +00:00
Dave
feb05dc8d0 Refactored and documented the HTTP API 2026-02-16 16:50:50 +00:00
Dave
f76376b203 More smoothing, as they say 2026-02-16 16:35:25 +00:00
Dave
5923165fcf Refactoring the structure a bit 2026-02-16 16:24:21 +00:00
Dave
a2188e2c7f Smoothing out some tauri conversion leftovers 2026-02-16 16:02:30 +00:00
Dave
b1706aaa3b Converting to workspace 2026-02-16 15:54:16 +00:00
Dave
caf293a8c4 Renamed .living_spec in a few more places 2026-02-16 15:45:44 +00:00
Dave
3865883998 Renamed living spec to Story Kit 2026-02-16 15:44:20 +00:00
Dave
0876c53e17 moved from tauri to a server with embedded UI 2026-02-13 12:31:36 +00:00
Dave
d4203cfaab Updating README 2026-02-06 16:28:50 +00:00
Dave
0465609295 Updating README for living spec 2026-02-06 16:28:40 +00:00
Dave
f6b86ea5a6 Testing more paths in search 2026-01-27 16:07:00 +00:00
Dave
1f11eaedab ibid 2026-01-27 15:28:09 +00:00
Dave
c24f44bf51 wip tests for chat.rs 2026-01-27 15:01:47 +00:00
Dave
97b0ce1b58 Wrote some tests. 2026-01-27 14:45:28 +00:00
Dave
8b14aa1f6f Fixing warnings and moving LLM providers into a module 2026-01-27 13:30:46 +00:00
Dave
1f1f5f6dac Updated dependencies, fixed some clippy warnings 2026-01-26 18:15:29 +00:00
Dave
c2da7f9f18 Fix Story 12: Claude API key storage now working
- Fixed silent API key save failure by switching from keyring to Tauri store
- Removed keyring dependency (didn't work in macOS dev mode for unsigned apps)
- Implemented reliable cross-platform storage using tauri-plugin-store
- Added pendingMessageRef to preserve user message during API key dialog flow
- Refactored sendMessage to accept optional message parameter for retry
- Removed all debug logging and test code
- Removed unused entitlements.plist and macOS config
- API key now persists correctly between sessions
- Auto-retry after saving key works properly

Story 12 complete and archived.
2025-12-27 20:08:24 +00:00
Dave
2976c854d0 Fix model selection resetting issue
The useEffect was re-running every time model changed, causing
Claude model selection to reset back to Ollama models. Fixed by:
- Removing model from useEffect dependency array
- Only run model fetch on component mount
- Check if saved model exists before loading it
- Don't reset model if it's not in Ollama list (allows Claude)
2025-12-27 19:50:47 +00:00
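The guard described in this commit can be sketched as a pure function, framework details aside. This is an illustrative reading, not the project's actual code; the names `resolveInitialModel` and `DEFAULT_MODEL` are hypothetical.

```typescript
const DEFAULT_MODEL = "llama3"; // illustrative fallback

// Decide which model to select on mount without resetting a Claude
// choice that is absent from the Ollama model list.
function resolveInitialModel(
  savedModel: string | null,
  ollamaModels: string[],
): string {
  if (savedModel) {
    // Keep a saved model if Ollama knows it, or if it is a Claude model
    // (which will never appear in the Ollama list).
    if (ollamaModels.includes(savedModel)) return savedModel;
    if (savedModel.startsWith("claude-")) return savedModel;
  }
  // Otherwise fall back to the first available Ollama model.
  return ollamaModels[0] ?? DEFAULT_MODEL;
}
```

Running this once on mount (an effect with an empty dependency array, rather than one depending on `model`) is what stops the selection from being clobbered on every change.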
Dave
e3f4f92c54 Frontend: Add Claude integration UI
- Add Claude models to dropdown with optgroup sections
- Update context window calculation for Claude (200k tokens)
- Add API key dialog modal for first-time Claude use
- Check for API key existence before sending Claude requests
- Auto-detect provider from model name (claude-*)
- Update sendMessage to handle Claude provider
- Store and retrieve API key via backend commands
- Add visual separation between Anthropic and Ollama models
2025-12-27 19:43:00 +00:00
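The auto-detection this commit mentions reduces to a prefix check on the model name. A minimal sketch, assuming a two-provider split (the function name and `Provider` type are illustrative):

```typescript
type Provider = "anthropic" | "ollama";

// Claude models are named with a "claude-" prefix; anything else is
// assumed to be served by the local Ollama instance.
function detectProvider(model: string): Provider {
  return model.startsWith("claude-") ? "anthropic" : "ollama";
}
```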
Dave
1529ca77e7 Backend: Add Anthropic/Claude provider integration
- Add anthropic.rs module with streaming support
- Convert between internal and Anthropic tool/message formats
- Add keyring dependency for secure API key storage
- Add API key management commands (get_exists, set)
- Auto-detect provider from model name (claude-* prefix)
- Support SSE streaming from Anthropic API
- Handle tool calling with Anthropic's format
- Add cancellation support for Anthropic streams
2025-12-27 19:41:20 +00:00
Dave
e71dcd8226 Story 12: Update story and specs for Claude integration
Story Updates:
- Unified model dropdown with section headers (Anthropic, Ollama)
- Auto-detect provider from model name (claude-* prefix)
- API key prompt on first Claude model use
- Secure storage in OS keychain via keyring crate
- 200k token context window for Claude models

Spec Updates (AI_INTEGRATION.md):
- Document Anthropic provider implementation
- Anthropic API protocol (SSE streaming, tool format)
- Tool format conversion between internal and Anthropic formats
- API key storage in OS keychain
- Unified dropdown UI flow

Spec Updates (STACK.md):
- Add keyring crate for secure API key storage
- Add eventsource-stream for Anthropic SSE streaming
- Document automatic provider detection
- Update API key management approach
2025-12-27 19:37:01 +00:00
Dave
ca7efc2888 Archive Story 23 to stories/archive/ 2025-12-27 19:30:32 +00:00
Dave
bdb82bcf49 Story 23: Alphabetize LLM dropdown list
User Story:
As a user, I want the LLM model dropdown to be alphabetically sorted
so I can quickly find the model I'm looking for.

Implementation:
- Added alphabetical sorting with case-insensitive comparison
- Used localeCompare() for proper string comparison
- Sort happens immediately after fetching models from backend
- Currently selected model remains selected after sorting

Technical Details:
- Sort logic: models.sort((a, b) => a.toLowerCase().localeCompare(b.toLowerCase()))
- Frontend-only change, no backend modifications needed
- Sorting preserves model selection state

Acceptance Criteria Met:
- Models sorted alphabetically (case-insensitive)
- Selected model remains selected after sorting
- Works for all models from Ollama
- Updates correctly when models change

Files Changed:
- src/components/Chat.tsx: Added sorting logic to model fetch
- .living_spec/stories/23_alphabetize_llm_dropdown.md: Marked complete
2025-12-27 19:30:17 +00:00
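The sort expression quoted in the commit, wrapped as a standalone function. The wrapper (and the choice to sort a copy rather than in place) is illustrative:

```typescript
// Case-insensitive alphabetical order via localeCompare, as described
// in the commit's Technical Details.
function sortModels(models: string[]): string[] {
  return [...models].sort((a, b) =>
    a.toLowerCase().localeCompare(b.toLowerCase()),
  );
}
```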
Dave
f963c2e17e Archive Story 22 to stories/archive/ 2025-12-27 19:21:48 +00:00
Dave
57826dc5ee Story 22: Implement smart auto-scroll that respects user scrolling
User Story:
As a user, I want to be able to scroll up to review previous messages
while the AI is streaming or adding new content, without being
constantly dragged back to the bottom.

Implementation:
- Replaced position-based threshold detection with user-intent tracking
- Detects when user scrolls UP and disables auto-scroll completely
- Auto-scroll only re-enables when user manually returns to bottom (<5px)
- Uses refs to track scroll position and direction for smooth operation
- Works seamlessly during rapid token streaming and tool execution

Technical Details:
- lastScrollTopRef: Tracks previous scroll position to detect direction
- userScrolledUpRef: Flag set when upward scrolling is detected
- Direct scrollTop manipulation for instant, non-fighting scroll behavior
- Threshold of 5px from absolute bottom to re-enable auto-scroll

Spec Updates:
- Added comprehensive Smart Auto-Scroll section to UI_UX.md
- Documented the problem, solution, requirements, and implementation
- Includes code examples and edge case handling

Acceptance Criteria Met:
- Auto-scroll disabled when scrolling up
- Auto-scroll resumes when returning to bottom
- Works normally when already at bottom
- Smooth detection without flickering
- Works during streaming and tool execution

Files Changed:
- src/components/Chat.tsx: Implemented user-intent tracking
- .living_spec/specs/functional/UI_UX.md: Added Smart Auto-Scroll spec
- .living_spec/stories/22_smart_autoscroll.md: Marked complete
2025-12-27 19:21:34 +00:00
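The user-intent tracking described above can be sketched without React: the two refs become explicit state, updated on every scroll event. This is a hedged reconstruction from the commit message; names and the exact bottom-detection arithmetic are illustrative.

```typescript
interface ScrollState {
  lastScrollTop: number;   // previous position, used to detect direction
  userScrolledUp: boolean; // set once an upward scroll is seen
}

const BOTTOM_THRESHOLD_PX = 5; // re-enable within 5px of the absolute bottom

function onScroll(
  state: ScrollState,
  scrollTop: number,
  clientHeight: number,
  scrollHeight: number,
): ScrollState {
  const scrolledUp = scrollTop < state.lastScrollTop;
  const atBottom =
    scrollHeight - (scrollTop + clientHeight) < BOTTOM_THRESHOLD_PX;
  return {
    lastScrollTop: scrollTop,
    // Scrolling up disables auto-scroll; only a manual return to the
    // very bottom re-enables it.
    userScrolledUp: atBottom ? false : scrolledUp || state.userScrolledUp,
  };
}

// During token streaming, only scroll the container when the user has
// not opted out by scrolling up.
function shouldAutoScroll(state: ScrollState): boolean {
  return !state.userScrolledUp;
}
```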
Dave
1baf3fa728 Clean up duplicate Story 18 files (already archived) 2025-12-27 18:51:11 +00:00
Dave
50e2c2cd70 Remove Story 16 placeholder (work completed in previous commit) 2025-12-27 18:50:05 +00:00
Dave
02d3c05f34 UI: Move submit/stop button outside input field
- Restructure input area to use flexbox layout
- Button now sits beside input instead of overlapping
- Prevents text from being obscured on longer prompts
- Input takes full available width with proper spacing
2025-12-27 18:47:48 +00:00
Dave
71ce87d836 Story 15: Implement New Session cancellation
- Call cancel_chat before clearing session state
- Prevents tools from executing silently in background
- Prevents streaming from leaking into new session
- Uses same cancellation infrastructure as Story 13
- Clean session start with no side effects

Closes Story 15
2025-12-27 18:39:28 +00:00
Dave
e1fb0e3d19 Story 13: Implement Stop button with backend cancellation
- Add tokio watch channel for cancellation signaling
- Implement cancel_chat command
- Add cancellation checks in streaming loop and before tool execution
- Stop button (■) replaces Send button (↑) during generation
- Preserve partial streaming content when cancelled
- Clean UX: no error messages on cancellation
- Backend properly stops streaming and prevents tool execution

Closes Story 13
2025-12-27 18:32:15 +00:00
Dave
846967ee99 Fix race condition: ignore streaming events from old sessions
- Added sessionIdRef to track current session
- When clearing session, generate new session ID
- Event listeners check if sessionId matches before updating state
- Prevents old streaming responses from appearing in new sessions
- All quality checks passing
2025-12-27 17:37:25 +00:00
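The session guard above boils down to tagging events with the session they belong to and dropping anything stale. A minimal sketch, assuming a simple counter as the session id (the commit does not say how ids are generated):

```typescript
let currentSessionId = 0;

// Clearing the session bumps the id, so any in-flight streaming events
// from the old session no longer match.
function startNewSession(): number {
  currentSessionId += 1;
  return currentSessionId;
}

// Event listeners call this before updating state.
function acceptEvent(eventSessionId: number): boolean {
  return eventSessionId === currentSessionId;
}
```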
Dave
68f35d4591 Fix New Session bug: use async Tauri dialog instead of window.confirm
- Tauri overrides window.confirm to return Promise, not boolean
- Changed clearSession to async function using Tauri's ask() dialog
- Now properly waits for user confirmation before clearing state
- Removed debug logging
- All quality checks passing
2025-12-27 17:33:03 +00:00
Dave
418fa86f7d Add debug logging to clearSession to diagnose Cancel button bug 2025-12-27 17:30:03 +00:00
Dave
bd8d838457 Story 17: Display Context Window Usage with emoji indicator
- Added real-time context window usage indicator in header
- Format: emoji + percentage (🟢 52%)
- Color-coded emoji: 🟢 <75%, 🟡 <90%, 🔴 >=90%
- Hover tooltip shows full details: 'Context: 4,300 / 8,192 tokens (52%)'
- Token estimation: 1 token ≈ 4 characters
- Model-aware context windows: llama3 (8K), qwen2.5 (32K), deepseek (16K)
- Includes system prompts, messages, tool calls, and streaming content
- Updates in real-time as conversation progresses
- All quality checks passing (TypeScript, Biome, Clippy, builds)

Tested and verified:
- Shows accurate percentage of context usage
- Emoji changes color at appropriate thresholds
- Different models show correct context window sizes
- Can exceed 100% when over limit (shows red)
- Tooltip provides exact token counts
2025-12-27 17:26:21 +00:00
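The indicator logic above can be sketched as three small functions: estimate tokens at roughly 4 characters each, look up a per-model context window, and pick the emoji by threshold. The window table takes the commit's 8K/32K/16K as powers of two, which is an assumption; all names are illustrative.

```typescript
// Model-aware context windows from the commit (exact values assumed).
const CONTEXT_WINDOWS: Record<string, number> = {
  llama3: 8_192,
  "qwen2.5": 32_768,
  deepseek: 16_384,
};

// 1 token ≈ 4 characters, per the commit's estimation rule.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// 🟢 below 75%, 🟡 below 90%, 🔴 at 90% and above (including >100%).
function usageEmoji(percent: number): string {
  if (percent < 75) return "🟢";
  if (percent < 90) return "🟡";
  return "🔴";
}

// Header string in the commit's "emoji + percentage" format.
function usageIndicator(model: string, conversation: string): string {
  const window = CONTEXT_WINDOWS[model] ?? 8_192;
  const percent = Math.round((estimateTokens(conversation) / window) * 100);
  return `${usageEmoji(percent)} ${percent}%`;
}
```

With these numbers, a 17,200-character conversation on llama3 estimates to 4,300 tokens of an 8,192-token window, matching the commit's "🟢 52%" example.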