feat: event-driven ui updates
.living_spec/specs/functional/UI_UX.md (new file, +24 lines)
@@ -0,0 +1,24 @@
+# Functional Spec: UI/UX Responsiveness
+
+## Problem
+
+Currently, the `chat` command in Rust is an async function that performs a long-running, blocking loop (waiting for the LLM, executing tools). While Tauri executes this on a separate thread from the UI, the frontend awaits the *entire* result before re-rendering. This makes the app feel "frozen" because there is no feedback during the 10-60 seconds of generation.
+
+## Solution: Event-Driven Feedback
+
+Instead of waiting for the final array of messages, the Backend should emit **Events** to the Frontend in real time.
+
+### 1. Events
+
+* `chat:token`: Emitted when a text token is generated (streaming text).
+* `chat:tool-start`: Emitted when a tool call begins (e.g., `{ tool: "git status" }`).
+* `chat:tool-end`: Emitted when a tool call finishes (e.g., `{ output: "..." }`).
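The event names above imply payload shapes. A minimal TypeScript sketch of what those payloads could look like; the spec only gives the two inline examples, so the type and field names here are assumptions for illustration:

```typescript
// Hypothetical payload types for the three events. Only the example
// shapes from the spec ({ tool: ... }, { output: ... }) are given there;
// everything else is an assumption.
type ChatTokenPayload = { token: string };
type ChatToolStartPayload = { tool: string };  // e.g. { tool: "git status" }
type ChatToolEndPayload = { output: string };  // e.g. { output: "..." }

// Mapping event name to payload type keeps listeners type-safe.
interface ChatEventMap {
  "chat:token": ChatTokenPayload;
  "chat:tool-start": ChatToolStartPayload;
  "chat:tool-end": ChatToolEndPayload;
}

// Illustrative helper: format an event for logging/debugging.
function describeEvent<K extends keyof ChatEventMap>(
  name: K,
  payload: ChatEventMap[K]
): string {
  return `${name}: ${JSON.stringify(payload)}`;
}
```

A map type like `ChatEventMap` lets a thin wrapper around Tauri's `listen` reject mismatched event/payload pairs at compile time.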
+### 2. Implementation Strategy (MVP)
+
+For this story, we won't fully implement token streaming (mixing `reqwest` blocking/async with stream parsing is complex). We will focus on **State Updates**:
+
+* **Refactor the `chat` command:**
+  * Instead of returning `Vec<Message>` at the very end, it accepts an `AppHandle`.
+  * Inside the loop, after every step (LLM response, tool execution), emit a `chat:update` event containing the *current partial history*.
+  * The Frontend listens to `chat:update` and re-renders immediately.
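The strategy above can be sketched without any framework: a mock emitter stands in for Tauri's event channel, and the agent loop emits the full partial history after each step instead of returning everything at the end. `MockEmitter` and `runAgentLoop` are illustrative names, not from the codebase:

```typescript
type Message = { role: string; content: string };

// Stand-in for Tauri's event channel: records every chat:update emission.
class MockEmitter {
  updates: Message[][] = [];
  emit(event: string, payload: Message[]): void {
    if (event === "chat:update") this.updates.push([...payload]);
  }
}

// Sketch of the refactored loop: after each step (LLM response, tool
// execution), emit the current partial history so the UI can re-render.
function runAgentLoop(emitter: MockEmitter, steps: Message[]): Message[] {
  const history: Message[] = [];
  for (const step of steps) {
    history.push(step);
    emitter.emit("chat:update", history); // UI updates immediately
  }
  return history;
}
```

Because each `chat:update` carries the whole partial history, the frontend can simply replace its state with the payload rather than merging deltas, which keeps the listener trivial.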
+### 3. Visuals
+
+* **Loading State:** The "Send" button should show a spinner or "Stop" button.
+* **Auto-Scroll:** The chat view should stick to the bottom as new events arrive.
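One common refinement of the auto-scroll requirement is to scroll only when the user is already near the bottom, so manual scrollback isn't hijacked by incoming events. A small sketch of that check; the actual component uses `scrollIntoView` on a sentinel element, and the threshold value here is an assumption:

```typescript
// Illustrative stick-to-bottom check over a scroll container's metrics.
interface ScrollMetrics {
  scrollTop: number;     // current scroll offset
  scrollHeight: number;  // total content height
  clientHeight: number;  // visible viewport height
}

const NEAR_BOTTOM_PX = 40; // assumed threshold, tune to taste

// True when the viewport bottom is within the threshold of the content end.
function shouldAutoScroll(m: ScrollMetrics): boolean {
  return m.scrollHeight - (m.scrollTop + m.clientHeight) <= NEAR_BOTTOM_PX;
}
```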
@@ -22,6 +22,7 @@ The application follows a **Tool-Use (Function Calling)** architecture:
   * Validates the request against the **Safety Policy**.
   * Executes the native code (File I/O, Shell Process, Search).
   * Returns the output (stdout/stderr/file content) to the LLM.
+* **Event Loop:** The backend emits real-time events (`chat:update`) to the frontend to ensure UI responsiveness during long-running Agent tasks.
 
 ## LLM Provider Abstraction
 
 To support both Remote and Local models, the system implements a `ModelProvider` abstraction layer.
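The real `ModelProvider` abstraction lives in Rust; purely to illustrate the shape such a layer implies, here is a TypeScript sketch where a trivial provider satisfies the interface (all names below are hypothetical):

```typescript
type Message = { role: string; content: string };

// Hypothetical shape of a provider abstraction: any backend (remote API,
// local Ollama, etc.) exposes the same chat entry point.
interface ModelProvider {
  name: string;
  chat(messages: Message[]): Promise<Message>;
}

// A dummy provider used only to demonstrate the interface contract.
class EchoProvider implements ModelProvider {
  name = "echo";
  async chat(messages: Message[]): Promise<Message> {
    const last = messages[messages.length - 1];
    return { role: "assistant", content: `echo: ${last.content}` };
  }
}
```

The point of the indirection is that the agent loop depends only on the `chat` signature, so swapping remote for local models requires no changes to the loop itself.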
@@ -6,7 +6,7 @@ use crate::llm::types::{
 use crate::state::SessionState;
 use serde::Deserialize;
 use serde_json::json;
-use tauri::State;
+use tauri::{AppHandle, Emitter, State};
 
 #[derive(Deserialize)]
 pub struct ProviderConfig {
@@ -26,6 +26,7 @@ pub async fn get_ollama_models(base_url: Option<String>) -> Result<Vec<String>,
 
 #[tauri::command]
 pub async fn chat(
+    app: AppHandle,
     messages: Vec<Message>,
     config: ProviderConfig,
     state: State<'_, SessionState>,
@@ -77,6 +78,8 @@ pub async fn chat(
 
         current_history.push(assistant_msg.clone());
         new_messages.push(assistant_msg);
+        app.emit("chat:update", &current_history)
+            .map_err(|e| e.to_string())?;
 
         // Execute Tools
         for call in tool_calls {
@@ -93,6 +96,8 @@ pub async fn chat(
 
             current_history.push(tool_msg.clone());
             new_messages.push(tool_msg);
+            app.emit("chat:update", &current_history)
+                .map_err(|e| e.to_string())?;
         }
     } else {
         // Final text response
@@ -104,7 +109,10 @@ pub async fn chat(
         };
 
-        // We don't push to current_history needed for next loop, because we are done.
-        new_messages.push(assistant_msg);
+        new_messages.push(assistant_msg.clone());
+        current_history.push(assistant_msg);
+        app.emit("chat:update", &current_history)
+            .map_err(|e| e.to_string())?;
         break;
     }
 }
 
@@ -1,5 +1,6 @@
 import { useState, useRef, useEffect } from "react";
 import { invoke } from "@tauri-apps/api/core";
+import { listen } from "@tauri-apps/api/event";
 import Markdown from "react-markdown";
 import { Message, ProviderConfig } from "../types";
 
@@ -27,6 +28,16 @@ export function Chat() {
     // eslint-disable-next-line react-hooks/exhaustive-deps
   }, []);
 
+  useEffect(() => {
+    const unlistenPromise = listen<Message[]>("chat:update", (event) => {
+      setMessages(event.payload);
+    });
+
+    return () => {
+      unlistenPromise.then((unlisten) => unlisten());
+    };
+  }, []);
+
   const scrollToBottom = () => {
     messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
   };
@@ -52,13 +63,11 @@ export function Chat() {
     };
 
-      // Invoke backend chat command
-      // The backend returns the *new* messages (assistant response + tool outputs)
-      const response = await invoke<Message[]>("chat", {
+      // We rely on 'chat:update' events to update the state in real-time
+      await invoke("chat", {
         messages: newHistory,
         config: config,
       });
 
-      setMessages((prev) => [...prev, ...response]);
     } catch (e) {
       console.error(e);
       setMessages((prev) => [
@@ -229,7 +238,6 @@ export function Chat() {
           borderRadius: "4px",
           border: "1px solid #ccc",
         }}
-        disabled={loading}
       />
       <button
         onClick={sendMessage}
|