feat: agent brain (ollama) and chat ui
@@ -87,7 +87,8 @@ When the LLM context window fills up (or the chat gets slow/confused):
If a user hands you this document and says "Apply this process to my project":

1. **Analyze the Request:** Ask for the high-level goal ("What are we building?") and the tech preferences ("Rust or Python?").
2. **Git Check:** Check if the directory is a git repository (`git status`). If not, run `git init`.
3. **Scaffold:** Run commands to create the `specs/` and `stories/` folders.
4. **Draft Context:** Write `specs/00_CONTEXT.md` based on the user's answer.
5. **Draft Stack:** Write `specs/tech/STACK.md` based on best practices for that language.
6. **Wait:** Ask the user for "Story #1".
29
.living_spec/specs/functional/AI_INTEGRATION.md
Normal file
@@ -0,0 +1,29 @@
# Functional Spec: AI Integration

## 1. Provider Abstraction

The system uses a pluggable architecture for LLMs. The `ModelProvider` interface abstracts:

* **Generation:** Sending prompt + history + tools to the model.
* **Parsing:** Extracting text content vs. tool calls from the raw response.
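The abstraction above can be sketched as a Rust trait. This is a dependency-free illustration, not the project's actual API: `ModelResponse`, `DummyProvider`, and the `TOOL:` prefix convention are all hypothetical stand-ins.

```rust
/// What a provider turns a raw model response into: either plain text
/// or a request to invoke a tool. (Illustrative names.)
enum ModelResponse {
    Text(String),
    ToolCall { name: String, arguments: String },
}

/// The two responsibilities the spec assigns to a provider.
trait ModelProvider {
    /// Generation: send prompt + history (+ tool definitions) to the model.
    fn generate(&self, history: &[String]) -> Result<String, String>;
    /// Parsing: extract text content vs. tool calls from the raw response.
    fn parse(&self, raw: &str) -> ModelResponse;
}

/// A stub provider so the trait can be exercised without a real model.
struct DummyProvider;

impl ModelProvider for DummyProvider {
    fn generate(&self, history: &[String]) -> Result<String, String> {
        Ok(format!("echo: {}", history.last().cloned().unwrap_or_default()))
    }
    fn parse(&self, raw: &str) -> ModelResponse {
        // Hypothetical convention: a "TOOL:" prefix marks a tool call.
        match raw.strip_prefix("TOOL:") {
            Some(name) => ModelResponse::ToolCall {
                name: name.to_string(),
                arguments: "{}".to_string(),
            },
            None => ModelResponse::Text(raw.to_string()),
        }
    }
}

fn main() {
    let provider = DummyProvider;
    let raw = provider.generate(&["hello".to_string()]).unwrap();
    match provider.parse(&raw) {
        ModelResponse::Text(t) => println!("text: {t}"),
        ModelResponse::ToolCall { name, .. } => println!("tool call: {name}"),
    }
}
```

A concrete `OllamaProvider` would implement the same trait, keeping the chat loop independent of any one backend.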
## 2. Ollama Implementation

* **Endpoint:** `http://localhost:11434/api/chat`
* **JSON Protocol:**
  * Request: `{ model: "name", messages: [...], stream: false, tools: [...] }`
  * Response: Standard Ollama JSON with `message.tool_calls`.
* **Fallback:** If the specific local model doesn't support native tool calling, we may need a fallback system prompt approach, but for this story, we assume a tool-capable model (like `llama3.1` or `mistral-nemo`).
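A minimal sketch of assembling the request body from the protocol above. The real backend would build this with `serde_json` and send it via `reqwest`; plain string formatting is used here only to keep the example dependency-free, and `build_chat_request` is a hypothetical helper name.

```rust
/// Build the JSON body for POST /api/chat. `stream: false` asks Ollama
/// for one complete JSON response instead of NDJSON chunks.
fn build_chat_request(model: &str, messages_json: &str) -> String {
    format!(
        r#"{{"model":"{}","messages":{},"stream":false,"tools":[]}}"#,
        model, messages_json
    )
}

fn main() {
    let body = build_chat_request(
        "llama3.1",
        r#"[{"role":"user","content":"run git status"}]"#,
    );
    // In the real backend this body would be POSTed to
    // http://localhost:11434/api/chat (e.g. with reqwest).
    println!("{body}");
}
```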
## 3. Chat Loop (Backend)

The `chat` command acts as the **Agent Loop**:

1. Frontend sends: `User Message`.
2. Backend appends to `SessionState.history`.
3. Backend calls `OllamaProvider`.
4. **If Text Response:** Return text to Frontend.
5. **If Tool Call:**
   * Backend executes the Tool (using the Core Tools from Story #2).
   * Backend appends `ToolResult` to history.
   * Backend *re-prompts* Ollama with the new history (recursion).
   * Repeat until Text Response or Max Turns reached.
## 4. Frontend State

* **Settings:** Store `llm_provider` ("ollama"), `ollama_model` ("llama3.2"), `ollama_base_url`.
* **Chat:** Display the conversation. Tool calls should be visible as "System Events" (e.g., collapsed accordions).
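On the backend side, those settings might be received as a config struct like the following sketch. The field names follow the spec; the struct name and defaults are assumptions.

```rust
/// Provider settings as the backend might deserialize them from the
/// frontend (hypothetical shape; real code would derive serde traits).
#[derive(Debug, Clone, PartialEq)]
struct ProviderSettings {
    llm_provider: String,
    ollama_model: String,
    ollama_base_url: String,
}

impl Default for ProviderSettings {
    fn default() -> Self {
        Self {
            llm_provider: "ollama".to_string(),
            ollama_model: "llama3.2".to_string(),
            ollama_base_url: "http://localhost:11434".to_string(),
        }
    }
}

fn main() {
    let settings = ProviderSettings::default();
    println!(
        "{}: {} at {}/api/chat",
        settings.llm_provider, settings.ollama_model, settings.ollama_base_url
    );
}
```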
@@ -78,6 +78,8 @@ To support both Remote and Local models, the system implements a `ModelProvider`
* `walkdir`: Simple directory traversal.
* `tokio`: Async runtime.
* `reqwest`: For LLM API calls (if backend-initiated).
* `uuid`: For unique message IDs.
* `chrono`: For timestamps.
* `tauri-plugin-dialog`: Native system dialogs.
* **JavaScript:**
  * `@tauri-apps/api`: Tauri Bridge.
22
.living_spec/stories/archive/03_llm_ollama.md
Normal file
@@ -0,0 +1,22 @@
# Story: The Agent Brain (Ollama Integration)

## User Story

**As a** User
**I want to** connect the Assistant to a local Ollama instance
**So that** I can chat with the Agent and have it execute tools without sending data to the cloud.
## Acceptance Criteria

* [ ] Backend: Implement `ModelProvider` trait/interface.
* [ ] Backend: Implement `OllamaProvider` (POST `/api/chat`).
* [ ] Backend: Implement `chat(message, history, provider_config)` command.
* [ ] Must support passing Tool Definitions to Ollama (if the model supports it) or System Prompt instructions.
* [ ] Must parse Tool Calls from the response.
* [ ] Frontend: Settings Screen to toggle "Ollama" and set Model Name (default: `llama3`).
* [ ] Frontend: Chat Interface.
  * [ ] Message History (User/Assistant).
  * [ ] Tool Call visualization (e.g., "Running git status...").
## Out of Scope

* Remote Providers (Anthropic/OpenAI) - Future Story.
* Streaming responses (wait for full completion for MVP).
* Complex context window management (just send full history for now).