fix: make llm provider async and add tool toggle

This commit is contained in:
Dave
2025-12-24 17:32:46 +00:00
parent d9cd16601b
commit b241c47fd9
8 changed files with 149 additions and 28 deletions


@@ -0,0 +1,17 @@
# Story: Ollama Model Detection
## User Story
**As a** User
**I want to** select my Ollama model from a dropdown list of installed models
**So that** I don't have to manually type (and potentially mistype) the model names.
## Acceptance Criteria
* [ ] Backend: Implement `get_ollama_models()` command.
* [ ] Call `GET /api/tags` on the Ollama instance.
* [ ] Parse the JSON response to extract the model names.
* [ ] Frontend: Replace the "Ollama Model" text input with a `<select>` dropdown.
* [ ] Frontend: Populate the dropdown on load.
* [ ] Frontend: Handle connection errors gracefully (if Ollama isn't running, show an empty list or an error message).
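The backend's JSON-parsing step could be sketched as follows. This is a minimal illustration in Go (the repository's actual backend language isn't shown here), assuming Ollama's documented `GET /api/tags` response shape, where each installed model appears under `models[].name`; in the real command the bytes would come from an HTTP request to the local Ollama instance rather than a hardcoded sample.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// tagsResponse mirrors the part of Ollama's GET /api/tags response we need;
// the real payload carries more fields (size, digest, modified_at) that we ignore.
type tagsResponse struct {
	Models []struct {
		Name string `json:"name"`
	} `json:"models"`
}

// parseOllamaModels extracts installed model names from a /api/tags JSON body.
func parseOllamaModels(body []byte) ([]string, error) {
	var resp tagsResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, err
	}
	names := make([]string, 0, len(resp.Models))
	for _, m := range resp.Models {
		names = append(names, m.Name)
	}
	return names, nil
}

func main() {
	// Sample payload in the shape Ollama returns; in the real command this
	// would be the body of an HTTP GET to http://localhost:11434/api/tags.
	sample := []byte(`{"models":[{"name":"llama3:latest"},{"name":"mistral:7b"}]}`)
	names, err := parseOllamaModels(sample)
	if err != nil {
		// Connection or decode failures surface here; the frontend criterion
		// above turns this into an empty dropdown or an error message.
		fmt.Println("error:", err)
		return
	}
	fmt.Println(names)
}
```

A decode error (or a failed HTTP request before it) is the case the last acceptance criterion covers: the command should return an error the frontend can render rather than panicking.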
## Out of Scope
* Downloading new models via the UI (pulling).