# Functional Spec: UI/UX Responsiveness

## Problem

Currently, the `chat` command in Rust is an async function that performs a long-running, blocking loop (waiting for LLM, executing tools). While Tauri executes this on a separate thread from the UI, the frontend awaits the *entire* result before re-rendering. This makes the app feel "frozen" because there is no feedback during the 10-60 seconds of generation.

## Solution: Event-Driven Feedback

Instead of waiting for the final array of messages, the Backend should emit **Events** to the Frontend in real-time.

### 1. Events

* `chat:token`: Emitted when a text token is generated (Streaming text).
* `chat:tool-start`: Emitted when a tool call begins (e.g., `{ tool: "git status" }`).
* `chat:tool-end`: Emitted when a tool call finishes (e.g., `{ output: "..." }`).
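As a sketch of what these events might carry, the three payloads can be modeled as one Rust enum. The field names and payload shapes here are assumptions for illustration, not a fixed API; in the real app the payloads would be serialized and emitted through Tauri's `AppHandle`.

```rust
// Hypothetical payload shapes for the three chat events. The field names
// (`text`, `tool`, `output`) are assumptions, not part of any existing API.
#[derive(Debug)]
enum ChatEvent {
    Token { text: String },
    ToolStart { tool: String },
    ToolEnd { output: String },
}

impl ChatEvent {
    // The event name the frontend would subscribe to for this variant.
    fn name(&self) -> &'static str {
        match self {
            ChatEvent::Token { .. } => "chat:token",
            ChatEvent::ToolStart { .. } => "chat:tool-start",
            ChatEvent::ToolEnd { .. } => "chat:tool-end",
        }
    }
}

fn main() {
    let ev = ChatEvent::ToolStart { tool: "git status".into() };
    println!("{}", ev.name()); // prints "chat:tool-start"
}
```

Keeping the event names in one place like this avoids the backend and frontend drifting apart on string constants.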

### 2. Implementation Strategy (MVP)

For this story, we won't fully implement token streaming (mixing `reqwest`'s blocking/async APIs with stream parsing is complex). We will focus on **State Updates**:

* **Refactor `chat` command:**
    * Instead of returning `Vec<Message>` only at the very end, the command accepts an `AppHandle` so it can emit events while it runs.
    * Inside the loop, after every step (LLM response, Tool Execution), emit a `chat:update` event containing the *current partial history*.
    * The Frontend listens to `chat:update` and re-renders immediately.
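The refactored loop can be sketched in self-contained Rust, with an `mpsc` channel standing in for Tauri's `AppHandle` event emission (in the real command this would be an emit call on the handle instead of `send`; the `Message` shape and function names here are assumptions):

```rust
use std::sync::mpsc;

// Minimal stand-in for the app's chat message type (shape is assumed).
#[derive(Clone, Debug)]
struct Message {
    role: String,
    content: String,
}

// Sketch of the MVP strategy: instead of returning the full history once at
// the end, push a snapshot of the partial history after every step. The
// channel sender plays the role of `AppHandle` emitting `chat:update`.
fn chat_loop(steps: Vec<Message>, emit: mpsc::Sender<Vec<Message>>) {
    let mut history: Vec<Message> = Vec::new();
    for msg in steps {
        history.push(msg);
        // Emit the *current partial history* so the frontend can re-render
        // immediately rather than waiting for the loop to finish.
        emit.send(history.clone()).unwrap();
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let steps = vec![
        Message { role: "assistant".into(), content: "Running git status...".into() },
        Message { role: "tool".into(), content: "clean working tree".into() },
    ];
    chat_loop(steps, tx);
    // The "frontend" receives two snapshots of growing length.
    let sizes: Vec<usize> = rx.iter().map(|h| h.len()).collect();
    println!("{:?}", sizes); // prints "[1, 2]"
}
```

The key property is that each snapshot is a complete, renderable history, so the frontend can stay dumb: replace state, re-render, done.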

### 3. Visuals

* **Loading State:** While a response is being generated, the "Send" button should show a spinner or turn into a "Stop" button.
* **Auto-Scroll:** The chat view should stick to the bottom as new events arrive.