feat: event-driven ui updates
.living_spec/specs/functional/UI_UX.md | 24 (new file)
@@ -0,0 +1,24 @@
# Functional Spec: UI/UX Responsiveness
## Problem
Currently, the `chat` command in Rust is an async function that performs a long-running, blocking loop (waiting for LLM, executing tools). While Tauri executes this on a separate thread from the UI, the frontend awaits the *entire* result before re-rendering. This makes the app feel "frozen" because there is no feedback during the 10-60 seconds of generation.
## Solution: Event-Driven Feedback
Instead of waiting for the final array of messages, the Backend should emit **Events** to the Frontend in real-time.
### 1. Events
* `chat:token`: Emitted when a text token is generated (Streaming text).
* `chat:tool-start`: Emitted when a tool call begins (e.g., `{ tool: "git status" }`).
* `chat:tool-end`: Emitted when a tool call finishes (e.g., `{ output: "..." }`).
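The three events above can be sketched as a plain Rust enum. This is an illustrative model only (the type name `ChatEvent` and the `name` helper are assumptions, and serialization via e.g. serde is omitted); the payload fields mirror the examples given for each event:

```rust
// Hypothetical model of the three chat events; not the project's actual types.
#[derive(Debug, Clone, PartialEq)]
enum ChatEvent {
    Token { text: String },       // chat:token — a generated text token
    ToolStart { tool: String },   // chat:tool-start — e.g. { tool: "git status" }
    ToolEnd { output: String },   // chat:tool-end — e.g. { output: "..." }
}

impl ChatEvent {
    /// The event name the frontend would subscribe to.
    fn name(&self) -> &'static str {
        match self {
            ChatEvent::Token { .. } => "chat:token",
            ChatEvent::ToolStart { .. } => "chat:tool-start",
            ChatEvent::ToolEnd { .. } => "chat:tool-end",
        }
    }
}

fn main() {
    let ev = ChatEvent::ToolStart { tool: "git status".into() };
    println!("{}", ev.name()); // chat:tool-start
}
```

Keeping the name alongside the payload in one type makes it harder for the backend to emit a payload under the wrong event name.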
### 2. Implementation Strategy (MVP)
For this story, we won't fully implement token streaming (as `reqwest` blocking/async mixed with stream parsing is complex). We will focus on **State Updates**:
* **Refactor `chat` command:**
* Instead of returning `Vec<Message>` only at the very end, it accepts an `AppHandle`.
* Inside the loop, after every step (LLM response, Tool Execution), emit an event `chat:update` containing the *current partial history*.
* The Frontend listens to `chat:update` and re-renders immediately.
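A minimal sketch of this MVP loop, assuming a small `Emitter` trait standing in for Tauri's `AppHandle` (the names `Emitter`, `Recorder`, and `run_chat` are illustrative, not the actual command signature):

```rust
// Hypothetical message type; the real project presumably has its own.
#[derive(Debug, Clone, PartialEq)]
struct Message { role: String, content: String }

// Stand-in for AppHandle event emission, so the loop is testable in isolation.
trait Emitter {
    fn emit(&mut self, event: &str, payload: &[Message]);
}

/// After every step (LLM response, tool execution), emit the partial
/// history instead of returning only the final Vec<Message>.
fn run_chat(emitter: &mut impl Emitter, steps: Vec<Message>) -> Vec<Message> {
    let mut history = Vec::new();
    for msg in steps {
        history.push(msg);
        emitter.emit("chat:update", &history); // frontend re-renders immediately
    }
    history
}

// Records the history length seen at each emit, for demonstration.
struct Recorder { emitted: Vec<usize> }
impl Emitter for Recorder {
    fn emit(&mut self, _event: &str, payload: &[Message]) {
        self.emitted.push(payload.len());
    }
}

fn main() {
    let steps = vec![
        Message { role: "assistant".into(), content: "calling tool".into() },
        Message { role: "tool".into(), content: "output".into() },
        Message { role: "assistant".into(), content: "final answer".into() },
    ];
    let mut rec = Recorder { emitted: Vec::new() };
    let history = run_chat(&mut rec, steps);
    assert_eq!(rec.emitted, vec![1, 2, 3]); // one update per step
    assert_eq!(history.len(), 3);
}
```

The key point is that the frontend sees one `chat:update` per step rather than a single result after the whole loop finishes.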
### 3. Visuals
* **Loading State:** The "Send" button should show a spinner or "Stop" button.
* **Auto-Scroll:** The chat view should stick to the bottom as new events arrive.
@@ -22,6 +22,7 @@ The application follows a **Tool-Use (Function Calling)** architecture:
* Validates the request against the **Safety Policy**.
* Executes the native code (File I/O, Shell Process, Search).
* Returns the output (stdout/stderr/file content) to the LLM.
* **Event Loop:** The backend emits real-time events (`chat:update`) to the frontend to ensure UI responsiveness during long-running Agent tasks.
## LLM Provider Abstraction
To support both Remote and Local models, the system implements a `ModelProvider` abstraction layer.
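One way such an abstraction could look, as a hedged sketch (the trait and method names here are assumptions, not the project's actual API):

```rust
// Hypothetical ModelProvider abstraction over remote and local LLMs.
trait ModelProvider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> String;
}

struct RemoteProvider; // e.g. an HTTP-backed model
struct LocalProvider;  // e.g. an in-process model

impl ModelProvider for RemoteProvider {
    fn name(&self) -> &str { "remote" }
    fn complete(&self, prompt: &str) -> String {
        format!("[remote completion for: {prompt}]") // placeholder, no network call
    }
}

impl ModelProvider for LocalProvider {
    fn name(&self) -> &str { "local" }
    fn complete(&self, prompt: &str) -> String {
        format!("[local completion for: {prompt}]") // placeholder, no inference
    }
}

// Callers depend only on the trait, so switching backends is one branch.
fn pick(use_local: bool) -> Box<dyn ModelProvider> {
    if use_local { Box::new(LocalProvider) } else { Box::new(RemoteProvider) }
}

fn main() {
    let p = pick(true);
    println!("{} -> {}", p.name(), p.complete("hello"));
}
```

The rest of the agent loop then only ever sees `dyn ModelProvider`, keeping the remote/local distinction out of the chat logic.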