
Tech Stack & Constraints

Overview

This project is a desktop application built with Tauri. It functions as an Agentic Code Assistant capable of safely executing tools on the host system.

Core Stack

  • Backend: Rust (Tauri Core)
    • MSRV: Latest stable Rust (no pinned minimum version)
    • Framework: Tauri v2
  • Frontend: TypeScript + React
    • Build Tool: Vite
    • Styling: CSS Modules or Tailwind (TBD - Defaulting to CSS Modules)
    • State Management: React Context / Hooks
    • Chat UI: Rendered Markdown with syntax highlighting.

Agent Architecture

The application follows a Tool-Use (Function Calling) architecture:

  1. Frontend: Collects user input and sends it to the LLM.
  2. LLM: Decides to generate text OR request a Tool Call (e.g., execute_shell, read_file).
  3. Tauri Backend (The "Hand"):
    • Intercepts Tool Calls.
    • Validates the request against the Safety Policy.
    • Executes the native code (File I/O, Shell Process, Search).
    • Returns the output (stdout/stderr/file content) to the LLM.
    • Event Loop: The backend emits real-time events (chat:update) to the frontend to ensure UI responsiveness during long-running Agent tasks.
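
The loop above can be sketched in dependency-free Rust. All names here (`LlmReply`, `query_llm`, `run_tool`, `emit_update`) are illustrative stand-ins, not the project's actual API, and the stubbed LLM is hard-coded so the sketch is runnable:

```rust
// Illustrative tool-use loop. `LlmReply`, `query_llm`, `run_tool`, and
// `emit_update` are stand-in names, not the project's actual API.
#[derive(Debug)]
enum LlmReply {
    Text(String),
    ToolCall { name: String, args: String },
}

// Stub LLM: asks for one tool call, then answers with text.
fn query_llm(history: &[String]) -> LlmReply {
    if history.iter().any(|m| m.starts_with("tool:")) {
        LlmReply::Text("Done: listed 3 files.".to_string())
    } else {
        LlmReply::ToolCall {
            name: "read_file".into(),
            args: "src/main.rs".into(),
        }
    }
}

// Stand-in for the validated native execution step.
fn run_tool(name: &str, args: &str) -> String {
    format!("output of {name}({args})")
}

// In the real backend this would emit a Tauri event to the frontend.
fn emit_update(payload: &str) {
    println!("chat:update: {payload}");
}

fn agent_turn(user_input: &str) -> String {
    let mut history = vec![format!("user: {user_input}")];
    loop {
        match query_llm(&history) {
            LlmReply::Text(text) => {
                emit_update(&text);
                return text;
            }
            LlmReply::ToolCall { name, args } => {
                // The Safety Policy check would gate execution here.
                let out = run_tool(&name, &args);
                emit_update(&format!("ran {name}"));
                history.push(format!("tool: {out}"));
            }
        }
    }
}
```

In the real backend, `emit_update` would call Tauri's event API so the frontend stays responsive while the loop runs.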

LLM Provider Abstraction

To support both Remote and Local models, the system implements a ModelProvider abstraction layer.

  • Strategy:
    • Abstract the differences between API formats (OpenAI-compatible vs Anthropic vs Gemini).
    • Normalize "Tool Use" definitions, as each provider handles function calling schemas differently.
  • Supported Providers:
    • Anthropic: Focus on Claude 3.5 Sonnet for coding tasks.
    • Google: Gemini 1.5 Pro for massive context windows.
    • Ollama: Local inference (e.g., Llama 3, DeepSeek Coder) for privacy and offline usage.
  • Configuration:
    • Provider selection is runtime-configurable by the user.
    • API Keys must be stored securely (using OS native keychain where possible).
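
A minimal sketch of the normalization layer, assuming hypothetical names `ModelProvider` and `ToolSpec`: each provider translates one normalized tool definition into its own wire format. Anthropic nests the schema under `input_schema`, while OpenAI-compatible APIs (the dialect Ollama also speaks) wrap it in a `function` object:

```rust
// Hypothetical normalized tool definition shared by all providers.
struct ToolSpec {
    name: String,
    json_schema: String, // parameters as a JSON Schema string
}

trait ModelProvider {
    /// Serialize normalized tool specs into this provider's wire format.
    fn format_tools(&self, tools: &[ToolSpec]) -> String;
}

struct AnthropicProvider;
struct OpenAiCompatProvider; // Ollama speaks this dialect

impl ModelProvider for AnthropicProvider {
    fn format_tools(&self, tools: &[ToolSpec]) -> String {
        // Anthropic expects the schema under "input_schema".
        tools
            .iter()
            .map(|t| format!(r#"{{"name":"{}","input_schema":{}}}"#, t.name, t.json_schema))
            .collect::<Vec<_>>()
            .join(",")
    }
}

impl ModelProvider for OpenAiCompatProvider {
    fn format_tools(&self, tools: &[ToolSpec]) -> String {
        // OpenAI-style APIs wrap the schema in a "function" object.
        tools
            .iter()
            .map(|t| {
                format!(
                    r#"{{"type":"function","function":{{"name":"{}","parameters":{}}}}}"#,
                    t.name, t.json_schema
                )
            })
            .collect::<Vec<_>>()
            .join(",")
    }
}
```

The rest of the app only ever sees `ToolSpec`, so adding a provider means writing one more `impl`, not touching the agent loop.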

Tooling Capabilities

1. Filesystem (Native)

  • Scope: Strictly limited to the user-selected project_root.
  • Operations: Read, Write, List, Delete.
  • Constraint: Modifications to .git/ are strictly forbidden via file APIs (use Git tools instead).
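
A sketch of how the scope constraint might be enforced, using only the standard library; `resolve_in_root` is a hypothetical helper, not the project's actual code. Canonicalization resolves `..` and symlinks, so traversal attempts like `../../etc` are rejected:

```rust
use std::path::{Path, PathBuf};

// Resolve a requested path inside project_root, or refuse.
fn resolve_in_root(root: &Path, requested: &str) -> Result<PathBuf, String> {
    // Policy: .git/ is never touchable through the file APIs.
    if requested.split(|c| c == '/' || c == '\\').any(|part| part == ".git") {
        return Err("access to .git/ via file APIs is forbidden".into());
    }
    // Canonicalize both sides so `..` and symlinks cannot escape the root.
    let canon = root.join(requested).canonicalize().map_err(|e| e.to_string())?;
    let root = root.canonicalize().map_err(|e| e.to_string())?;
    if canon.starts_with(&root) {
        Ok(canon)
    } else {
        Err(format!("{} escapes project_root", canon.display()))
    }
}
```

Every Read/Write/List/Delete command would funnel through a check like this before touching the filesystem.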

2. Shell Execution

  • Library: tokio::process for async execution.
  • Constraint: We do not run an interactive shell (REPL); we run discrete, stateless commands.
  • Allowlist: The agent may only execute specific binaries:
    • git
    • cargo, rustc, rustfmt, clippy
    • npm, node, yarn, pnpm, bun
    • ls, find, grep (if not using internal search)
    • mkdir, rm, touch, mv, cp
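
The allowlist check itself can be a few lines; `is_allowed` is an illustrative name, and a real policy would likely also inspect arguments and flags:

```rust
// The allow-listed binaries from the policy above.
const ALLOWED: &[&str] = &[
    "git", "cargo", "rustc", "rustfmt", "clippy",
    "npm", "node", "yarn", "pnpm", "bun",
    "ls", "find", "grep", "mkdir", "rm", "touch", "mv", "cp",
];

// Only the first token (the binary) is checked in this sketch.
fn is_allowed(command_line: &str) -> bool {
    command_line
        .split_whitespace()
        .next()
        .map(|bin| ALLOWED.contains(&bin))
        .unwrap_or(false)
}
```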

3. Search & Navigation

  • Library: ignore (by BurntSushi) + grep logic.
  • Behavior:
    • Must respect .gitignore files automatically.
    • Must be performant (parallel traversal).
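
In production this is the `ignore` crate's parallel walker; purely as a dependency-free illustration of the parallel-search idea, the sketch below fans in-memory file contents out across threads (it does not parse .gitignore):

```rust
use std::thread;

// Simplified parallel grep: one thread per "file", collecting
// (file name, 1-based line number) for every matching line.
fn parallel_grep(files: Vec<(String, String)>, needle: &str) -> Vec<(String, usize)> {
    let needle = needle.to_string();
    let handles: Vec<_> = files
        .into_iter()
        .map(|(name, body)| {
            let needle = needle.clone();
            thread::spawn(move || {
                body.lines()
                    .enumerate()
                    .filter(|(_, line)| line.contains(&needle))
                    .map(|(i, _)| (name.clone(), i + 1))
                    .collect::<Vec<_>>()
            })
        })
        .collect();
    handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
}
```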

Coding Standards

Rust

  • Style: rustfmt standard.
  • Linter: clippy - Must pass with 0 warnings before merging.
  • Error Handling: Custom AppError type deriving thiserror::Error. All Commands return Result<T, AppError>.
  • Concurrency: Heavy tools (Search, Shell) must run on tokio threads to avoid blocking the UI.
  • Quality Gates:
    • cargo clippy --all-targets --all-features must show 0 errors, 0 warnings
    • cargo check must succeed
    • cargo test must pass all tests
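
As a sketch of the error-handling rule, here is roughly what `#[derive(thiserror::Error)]` generates, written by hand with std only so the example stays dependency-free; `read_project_file` is a hypothetical command:

```rust
use std::fmt;

// Hand-written equivalent of what thiserror's derive produces.
#[derive(Debug)]
enum AppError {
    Io(std::io::Error),
    Policy(String),
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::Io(e) => write!(f, "io error: {e}"),
            AppError::Policy(msg) => write!(f, "policy violation: {msg}"),
        }
    }
}

impl std::error::Error for AppError {}

// `?` works on io::Error inside commands thanks to this conversion.
impl From<std::io::Error> for AppError {
    fn from(e: std::io::Error) -> Self {
        AppError::Io(e)
    }
}

// A command returning Result<T, AppError>, as the standard mandates.
fn read_project_file(path: &str) -> Result<String, AppError> {
    if path.contains(".git/") {
        return Err(AppError::Policy(".git is off limits".into()));
    }
    std::fs::read_to_string(path).map_err(AppError::from)
}
```

With thiserror the enum shrinks to a few attribute lines, but the generated `Display`, `Error`, and `From` impls are the same shape.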

TypeScript / React

  • Style: Biome formatter (replaces Prettier/ESLint).
  • Linter: Biome - Must pass with 0 errors, 0 warnings before merging.
  • Types: Shared types with Rust (via tauri-specta or manual interface matching) are preferred to ensure type safety across the bridge.
  • Quality Gates:
    • npx @biomejs/biome check src/ must show 0 errors, 0 warnings
    • npm run build must succeed
    • No any types allowed (use proper types or unknown)
    • React keys must use stable IDs, not array indices
    • All buttons must have explicit type attribute

Libraries (Approved)

  • Rust:
    • serde, serde_json: Serialization.
    • ignore: Fast recursive directory iteration respecting gitignore.
    • walkdir: Simple directory traversal.
    • tokio: Async runtime.
    • reqwest: For LLM API calls (if backend-initiated).
    • uuid: For unique message IDs.
    • chrono: For timestamps.
    • tauri-plugin-dialog: Native system dialogs.
    • tauri-plugin-store: Persistent key-value storage.
  • JavaScript:
    • @tauri-apps/api: Tauri Bridge.
    • @tauri-apps/plugin-dialog: Dialog API.
    • @tauri-apps/plugin-store: Store API.
    • react-markdown: For rendering chat responses.

Safety & Sandbox

  1. Project Scope: The application must strictly enforce that it does not read/write outside the project_root selected by the user.
  2. Human in the Loop:
    • Shell commands that modify state (non-readonly) should ideally require a UI confirmation (configurable).
    • File writes must be confirmed or revertible.
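
One way to drive the confirmation rule is to classify each allow-listed command as read-only or state-changing before execution; the classification below is illustrative only, and the real list should be configurable:

```rust
// Decide whether a command needs UI confirmation before running.
fn needs_confirmation(command_line: &str) -> bool {
    const READ_ONLY: &[&str] = &["ls", "find", "grep"];
    match command_line.split_whitespace().next() {
        Some("git") => {
            // git is read-only only for inspection subcommands.
            !matches!(
                command_line.split_whitespace().nth(1),
                Some("status") | Some("log") | Some("diff") | Some("show")
            )
        }
        Some(bin) => !READ_ONLY.contains(&bin),
        // An empty command line is refused rather than silently allowed.
        None => true,
    }
}
```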